Greenplum has taken massively parallel processing (MPP) of data to the next level with the introduction this week of its "MPP Scatter/Gather Streaming" (SG Streaming) technology, which manages the flow of data into all nodes of the database, eliminating the traditional bottlenecks with massive data loading.
The San Mateo, Calif. company, which provides large-scale analytics and data warehousing, says SG Streaming has allowed customers to achieve production-loading speeds of over four terabytes per hour with negligible impacts on concurrent database operations. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]
Under the "parallel everywhere" approach to loading, data flows from one or more source systems to every node of the database without any sequential choke points. This differs from traditional "bulk loading" technologies, used by most mainstream database and parallel-processing appliance vendors, which push data from a single source -- often over one or a small number of parallel channels -- resulting in fundamental bottlenecks and ever-increasing load times.
The new technology "scatters" data from all source systems across hundreds or thousands of parallel streams that simultaneously flow to all nodes of the database. Performance scales with the number of nodes, and the technology supports both large batch and continuous near-real-time loading patterns with negligible impact on concurrent database operations.
Data can be transformed and processed in-flight, utilizing all nodes of the database in parallel, for extremely high-performance extract-load-transform (ELT) and extract-transform-load-transform (ETLT) loading pipelines. Final 'gathering' and storage of data to disk takes place on all nodes simultaneously, with data automatically partitioned across nodes and optionally compressed.
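The "scatter" step can be pictured as hash-partitioning incoming rows so that every node receives its own independent stream. The sketch below is a toy model of that idea only; the function and partitioning scheme are illustrative assumptions, not Greenplum's actual internals.

```python
import hashlib

def scatter(rows, num_nodes, key_index=0):
    """Hash-partition rows across nodes so each node's stream can be
    loaded independently and in parallel (a toy model of the 'scatter'
    step; Greenplum's real distribution logic is internal)."""
    streams = {node: [] for node in range(num_nodes)}
    for row in rows:
        key = str(row[key_index]).encode("utf-8")
        node = int(hashlib.md5(key).hexdigest(), 16) % num_nodes
        streams[node].append(row)
    return streams

rows = [(i, f"record-{i}") for i in range(1000)]
streams = scatter(rows, num_nodes=4)

# Every row lands on exactly one node, so the final 'gather' (storage)
# step can run on all nodes simultaneously.
assert sum(len(s) for s in streams.values()) == len(rows)
```

Because the hash is deterministic, the same key always routes to the same node, which is what allows the final storage step to proceed on all nodes at once without coordination.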
It was just six months ago that Greenplum publicly unveiled how it wrapped MapReduce approaches into the newest version of its data solution. That advance allowed users to combine SQL queries and MapReduce programs into unified tasks executed in parallel across thousands of cores.
Wednesday, March 18, 2009
Active Endpoints aims at greater process design and implementation productivity with ActiveVOS enhancements
Active Endpoints, maker of the ActiveVOS visual orchestration system, has kicked things up a notch with the recent release of ActiveVOS 6.1, which incorporates new features and functions designed to make developers more productive.
The latest offering from the Waltham, Mass. company provides what amounts to shrink-wrapped service-oriented architecture (SOA) and provides business process management (BPM) automation, while adhering to business process execution language (BPEL) standards. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]
There's an Active Endpoints podcast on the solution, and a new white paper on SOA implications of the process efficiencies from Dave Linthicum. We also recently did an Analyst Insights podcast on recent BPEL4People work.
Following close on the heels of version 6.0, which debuted in September, and 6.0.2, which made its appearance in December, the newest ActiveVOS offering brings features aimed at smoothing the way for developers. For example, a new tool, the "participant's view," eliminates the need for developers to manually code complex programming constructs like BPEL partner links and BPEL partner link types that are needed to define how services are to be used in a BPM application.
Another major enhancement is "process rewind." At design time, no BPM application can anticipate all of the operational issues and error handling that will be required. Process rewind gives developers the ability to rewind a process to a specific activity and redo the work without having to invoke any of the built-in compensation logic. This allows certain steps of the process to be redone without impacting work already performed.
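The distinction here is that a rewind simply discards and replays completed steps, whereas compensation runs explicit undo handlers. The toy model below illustrates that distinction; the class and method names are hypothetical, not the ActiveVOS API.

```python
class Process:
    """Toy model of 'process rewind': replay from a chosen activity
    without invoking any compensation handlers (illustrative only,
    not the ActiveVOS API)."""

    def __init__(self, activities):
        # activities: list of (name, callable) pairs, run in order
        self.activities = activities
        self.completed = []  # (name, result) history

    def run(self, start=0):
        for name, fn in self.activities[start:]:
            self.completed.append((name, fn()))

    def rewind_to(self, name):
        """Discard work at and after `name`, then redo it.
        No compensation logic runs for the discarded steps."""
        keep = [i for i, (n, _) in enumerate(self.completed) if n == name][0]
        self.completed = self.completed[:keep]
        start = [i for i, (n, _) in enumerate(self.activities) if n == name][0]
        self.run(start=start)

calls = []
proc = Process([
    ("reserve", lambda: calls.append("reserve") or "ok"),
    ("charge",  lambda: calls.append("charge") or "ok"),
    ("ship",    lambda: calls.append("ship") or "ok"),
])
proc.run()
proc.rewind_to("charge")  # redo 'charge' and 'ship' only
# 'reserve' ran once; 'charge' and 'ship' each ran twice.
```

Work before the rewind point ("reserve") is untouched, which is exactly the "without impacting work already performed" property described above.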
Among the other improvements:
- Any-order development, which presents service details as graphical tables into which details can be entered at any time. This is in contrast to earlier systems in which developers needed to know the details in advance.
- Automatic development, which eases the tasks for developers new to SOA-based BPM. Version 6.1 automatically understands “private” versus “public” web services description language (WSDL) files and creates the required WSDLs in both a standards-compliant mode and a human-understandable format.
- Improved data handling, which allows developers to visually specify what data is needed in each activity and guides the developer through XPath and XQuery statement generation. The BPEL standard separates assignment of data to activities from the invocation of those activities. While the technical reasons for this are clear to experienced developers, for new developers this can be an impediment.
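The data-handling point above amounts to selecting just the fragment of a process variable that an activity needs, via a generated XPath expression. A minimal sketch of that pattern using Python's standard library (the document and function names are illustrative assumptions):

```python
import xml.etree.ElementTree as ET

# A hypothetical process variable, as a BPEL engine might hold it.
order = ET.fromstring("""
<order>
  <customer id="42"><name>Acme</name></customer>
  <items><item sku="A1" qty="2"/><item sku="B7" qty="1"/></items>
</order>""")

def activity_input(doc, xpath):
    """Select the data one activity needs, as a generated XPath
    assignment would (ElementTree supports a subset of XPath)."""
    return list(doc.findall(xpath))

# e.g. a 'check-inventory' activity only needs the item elements:
items = activity_input(order, ".//item")
skus = [item.get("sku") for item in items]  # ['A1', 'B7']
```

Separating this data assignment from the activity invocation itself mirrors the split the BPEL standard makes, which is the part that trips up developers new to the model.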
ActiveVOS is available as a perpetual license. In an internal development environment, the price is $5,000 per CPU socket. In a deployment environment, the price is $12,000 per CPU socket when the deployment environment licenses are ordered with a first-time purchase of internal development environment licenses. Annual support and maintenance is 20 percent of total license fees.
Panda Security strengthens SaaS-based PC virus protection solution for SMBs
As the whirlwind of economic pressures and heightened concerns for security push small and medium-sized businesses (SMBs) toward software-as-a-service (SaaS) solutions, Panda Security has delivered added functionality to the cause with Managed Office Protection (MOP) 5.03.
Panda, with North American operations in Glendale, Calif., allows individual companies as well as value-added resellers (VARs) to deploy and extend its hosted security services, which originally launched in May 2008. Panda says its solution can be more than 50 percent more efficient than traditional endpoint security software.
I expect that SMBs will be more likely to seek a full package of PC support services via third parties. Those third parties will want to deliver help desk, software management, patch management and -- now -- security as a full service, cloud-based offering.
By adding the Web-based Panda SaaS security benefits, branded under the third parties, the hassle and cost of managing each desktop on premises drops significantly. And it allows the SMBs to get closer to their goal of no IT department, or at least a majority of IT support gained as a service.
Enhancements to Panda's MOP include:
- Optimized management of end devices through a new Web-based management console that allows administrators to resolve deployment challenges from one centralized dashboard on any computer with an Internet connection.
- Increased reporting flexibility that allows administrators to select from an expanded set of security reports, including executive, activity and detection reports.
- Easier software deployment, which allows IT managers to leverage automatic uninstallers along with unique MAC addresses, facilitating personalized security settings for each end-device.
- Simplified computer management that allows offline handling of exported files.
- Improved client network status control, which allows VARs providing security services to SMB clients to have remote access via the service provider administration console, where they can centrally manage any update on every device in the client network.
The channel and PC support third parties gain a more complete package of services, while letting their partner, in this case Panda, pick up the security and on-going threats response requirements.
Another benefit comes from today's highly mobile workforce. Administrators are increasingly concerned with managing laptops belonging to traveling employees. A SaaS-based device support solution allows administrators to monitor and configure anti-malware software no matter what the employee's location.
In a recent study, Panda Security compared its SaaS product to three different traditional security products. The study found that using a SaaS product could be more than 50 percent less expensive over a two-year period than using the traditional products, when you consider staffing costs, capital expenditures, and deployment costs.
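The "more than 50 percent less expensive" claim is a cumulative cost comparison. The sketch below shows the shape of that arithmetic with made-up numbers; the figures are illustrative assumptions, not data from Panda's study.

```python
def two_year_tco(deployment, capex, annual_staffing):
    """Cumulative two-year total cost of ownership (illustrative
    model; all inputs below are invented, not Panda's figures)."""
    return deployment + capex + 2 * annual_staffing

# On-premises: deployment effort, server hardware, ongoing admin staff.
on_premises = two_year_tco(deployment=8000, capex=12000, annual_staffing=15000)
# SaaS: light deployment, no capital outlay, reduced admin time.
saas = two_year_tco(deployment=1000, capex=0, annual_staffing=9000)

savings = 1 - saas / on_premises
# With these assumed inputs, the SaaS option costs less than half as much.
```

The structure matters more than the numbers: staffing dominates the two-year total, so shifting administration to the provider is where most of the claimed savings would come from.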
Panda MOP is available immediately in licenses sold by the seat in one- to three-year subscription packages. More information is available from www.pandasecurity.com.
IBM buying Sun Microsystems makes no sense, it's a red herring
Someone has floated a trial balloon, through a leak to the Wall Street Journal, that IBM is in "talks" to buy Sun Microsystems for $6.5 billion. The only party that would leak this information is Sun itself, and it smacks of desperation -- an attempt to thwart an unwanted acquisition, or to strengthen Sun's weak hand in some other deal.
If IBM wanted to buy Sun it would have done so years ago, at least on the merits of synergy and technology. If IBM wanted to buy Sun simply to trash the company, plunder the spoils and do it on the cheap -- the time for that was last fall.
So more likely, given that Sun has reportedly been shopping itself around (nice severance packages for the top brass, no doubt), is that Sun has been too successful at selling itself -- just to the wrong party, at too low a price. This may even be in the form of a chop-shop takeover. The only thing holding up a hostile takeover of Sun to sell for spare parts over the past six months was the credit crunch, and the fact that private equity firms have had some distractions.
By buying Sun, IBM gains little other than some intellectual property and MySQL. IBM could have bought MySQL, or open sourced DB2 or a subset of DB2, any time it wanted to go that route. IBM has basically already played its open source hand, which it did masterfully at just the right time. Sun, on the other hand, played (or forced) its open source hand poorly, and at the wrong time. What's the value to Sun for having "gone open source"? Zip. Owning Java is not a business model, or not enough of one to help Sun meaningfully.
So, does IBM need chip architectures from Sun? Nope, has their own. Access to markets from Sun's long-underperforming sales force? Nope. Unix? IBM has one. Linux? IBM was there first. Engineering skills? Nope. Storage technology? Nope. Head-start on cloud implementations? Nope. Java license access or synergy? Nope, too late. Sun's deep and wide professional services presence worldwide? Nope. Ha!
Let's see ... hardware, software, technology, sales, cloud, labor, market reach ... none makes sense for IBM to buy Sun -- at any price. IBM does just fine by continuing to watch the sun set on Sun. Same for Oracle, SAP, Microsoft, HP.
With due respect to Larry Dignan on ZDNet, none of his reasons add up in dollars and cents. No way. Sun has fallen too far over the years for these rationales to stand up.
Only in playing some offense via data center product consolidation against HP and Dell would buying Sun help IBM. And the math doesn't add up there. The cost of getting Sun is more than the benefits of taking money from enterprise accounts from others. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
The cost of Sun is not cheap, or at least not cheap like a free puppy. Taking over Sun for technology and market spoils ignores the long-term losses to be absorbed, the decimated workforce, the fact that Cisco will now eat Sun's lunch as have the other server makers for more than five years.
So who might buy Sun on the cheap, before Sun's next financial report to Wall Street? Cisco, Dell, EMC, Red Hat. That's about it for vendors. And it would be a big risk for them, unless the price tag were cheap, cheap, cheap. Anything under $4 billion might make sense. Might.
The economic crisis has come at a worse time for Sun than for just about any other large IT vendor. Sun, no matter what happens, will go for a fire-sale deal -- not a deal of strength among healthy synergistic partners. No way.
Other buyers could come in the form of carriers, cloud providers or other infrastructure service provider types. This is a stretch, because even cheap Sun would come with a lot of baggage for their needs. Another scenario is a multi-party deal, of breaking up Sun among several different kinds of firms. This also is hugely risky.
So my theory -- and it's just a guess -- is that today's trial balloon on an IBM deal is a last-ditch effort by Sun to find, solidify, or up the price on some other acquisition or exit strategy. The risk of such market shenanigans only underscores the depths of Sun's malaise. The management at Sun probably sees its valuation sinking yet again to below tangible assets and cash value when it releases its next quarterly performance results. ... Soon.
Monday, March 16, 2009
Cisco seeks for data center what Apple created with iPhone -- a new market that stops the madness
Apple with the iPhone changed the game in mobile devices by pulling together previously disparate elements of architecture, convenience, and technology. Software and services were the keys to new levels of integration, better interfaces and a comprehensive user experience.
The result has led to a tectonic market shift that combines stunning customer adoption, whole new types of user productivity, a thriving third-party developer community -- and mobile and PC market boundaries that are swiftly blurring. Doing the advance work of pulling together elements of the full solution -- so that the users or channel players or consultants do not -- has worked well for Apple. It was bold, risky, and it worked.
Carriers could never pull off the iPhone integration value for users. Indeed, the way carriers go to market practically forbids it. It took an outsider and new entrant to the field to change the game, to remove the complexity and cost of integration -- and pass along both the savings and seductive leap in functionality to the buyers.
With today's announcement of the Cisco Unified Computing System -- along with a deep partnership with VMware on software and management -- Cisco Systems is attempting a similar solution-level value play as Apple with the iPhone. The solution may be at the other end of the IT spectrum -- but the potential leap in value, and therefore the disruption, may be as impactful.
We're seeing a whole new packaging of the modern data center in a way that may very well change the market. It's bold, and it's risky. Cisco -- as an entrant to the full data center solution field, but with a firm command of certain key elements (like the network) -- may be able to do what the incumbent data center providers, along with their ecology of support armies, have not. One-stop shopping for data centers has been only a goal, never fully realized. In fact, many enterprises probably don't want any one vendor to have such control, especially when standards are in short supply. But they need lower costs and lower complexity.
Cisco, therefore, is using the latest software and standards (to SOME degree at least) to integrate the major elements of "compute, network, storage access and virtualization into a cohesive system," according to Cisco. The company goes on to claim this leads to "IT as a service" when combined with VMware's upcoming vSphere generation of data center virtualization and management products. I'd like to see more open source software choices in the mix, too. Perhaps the market will demand this?
The concept remains appealing, though. Rather than have a systems integrator, or outsourcer, or major vendor, or your own IT department (or all of the above) cobble these complex data center elements together -- at monstrous initial and ongoing cost, ad infinitum -- the notion that "the integration is the data center" (as distinct from "the network is the computer") has a nice ring to it.
Cisco is proposing that the next-generation data center, then, is actually an appliance -- or a series of like appliances. Drop in, turn on, tune in and run your applications and services faster, better, cheaper. Works if it works. This may be too much for most seasoned IT professionals to stomach, but it's worth a try, I suppose.
And this will, of course, greatly appeal during a prolonged period of economic stress and uncertainty. Say hello to 2010. And the approach could be appealing to enterprises, carriers, hosting companies, and a variety of what are loosely called cloud providers. Indeed, the more common the data center architecture approaches across all of these players, the more likely for higher-order efficiencies and process-level integrations. Federating, sharing, tiering, cost-sharing -- all of these become more possible to the heightened productivity of the community of participants.
The cloud of clouds needs a common architecture to reach its potential. Remember Metcalfe's Law, which values a network based on the number of participants on it? Well, supplant "node" and "participant" with "data center," and the Law and the network gain entirely new levels of value if the interoperability is broad and deep.
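To make the Metcalfe's Law analogy concrete: under its usual formulation, value tracks the number of possible pairwise connections. A tiny sketch, treating each interoperable data center as a node:

```python
def metcalfe_value(nodes):
    """Metcalfe's Law: a network's value grows with the number of
    possible pairwise connections, n * (n - 1) / 2."""
    return nodes * (nodes - 1) // 2

# Doubling the number of interoperable data centers roughly
# quadruples the number of possible federation links:
growth = metcalfe_value(20) / metcalfe_value(10)
assert growth > 3.9
```

That quadratic growth is why a common architecture matters: federation links only count toward the total when the data centers on both ends can actually interoperate.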
Make no mistake, the next generation data center business is a very large, multi-tens-of-billions of dollars market, and the competition is global, well-positioned, cash-secure and tough. Selling these data center appliances and "IT as a service" into individual accounts will be a huge challenge, especially if they are perceived as replacements alone. The Cisco solution needs to work well inside, alongside and inclusive of the other stuff, and the integrators have deep claws into the very accounts Cisco must enter.
We'll need to see the Cisco Unified Computing System act as a data center of data centers first. Its appeal, then, must be breathtaking to supplant the frisky incumbents, all of which also understand the importance of virtualization and low-cost hardware.
IBM, HP, Oracle, EMC, Microsoft, Sun, and the global SIs -- all will see any market game changing by Cisco as disruptive in perhaps the wrong way. But the enterprise IT market is ripe for major better ways of doing things, just like the buyers of iPhone have been for the last two years. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
UPDATE: HP has a response.
At the very least, Cisco's salvo will accelerate the shifts already under way in the next generation data center market toward highly-efficient on-premises clouds, complete and integrated applications support solutions, a deep adoption of virtualization -- and probably to a lot less total cost, real estate use, and energy demand as a result. The move by Cisco could also spur the embrace of open source software, along with standards, standards, standards. It's hard to see the economics working without them.
Already, Red Hat and Cisco announced a global OEM partnership. Cisco will sell and support Red Hat Enterprise Linux as part of its Unified Computing System, and will also support the newly announced Red Hat Enterprise Virtualization portfolio when it ships.
"Combined, Red Hat and Cisco will offer customers next-generation computing beyond RISC, beyond UNIX, beyond yesterday's legacy solutions for both virtualized and non-virtualized systems," says the statement.
Cisco and VMware are leaders in their areas, for sure, but they will need a community of global partners like Red Hat to pull this off. How about the larger open source universe? Unlike with Apple, it's a lot harder to create a data center support ecology than an app store. So the risks here are pretty huge. The enemy of my enemy is my friend effect may well kick in ... or not.
Or even more weirdness may ensue. What if Microsoft wanted in in a big way, given where it needs to go? What if Windows became the default virtualized container in Cisco's shiny new data center appliance? Disruption can be, well, disruptive.
Cisco has been seeking a way for many years now to extend its networking successes into new businesses. It has bought, built, and partnered -- but not to great effect in the past. Could this be the big one? The one that works? Is this the new $20 billion business that Cisco so desperately needs?
The result has lead to a tectonic market shift that combines stunning customer adoption, whole new types of user productivity, a thriving third-party developer community -- and mobile and PC market boundaries that are swiftly blurring. Doing the advance work of pulling together elements of the full solution -- so that the users or channel players or consultants do not -- has worked well for Apple. It was bold, risky, and it worked.
Carriers could never pull off the iPhone integration value for users. Indeed, the way carriers go to market practically forbids it. It took an outsider and new entrant to the field to change the game, to remove the complexity and cost of integration -- and pass along both the savings and seductive leap in functionality to the buyers.
With today's announcement of the Cisco Unified Computing System -- along with a deep partnership with VMware on software and management -- Cisco Systems is attempting a similar solution-level value play as Apple with the iPhone. The solution may be at the other end of the IT spectrum -- but the potential leap in value, and therefore the disruption, may be as impactful.
We're seeing a whole new packaging of the modern data center in a way that may very well change the market. It's bold, and it's risky. Cisco -- as an entrant to the full data center solution field, but with a firm command of certain key elements (like the network) -- may be able to do what the incumbent data center providers -- along with the ecology of support armies -- have not. One-stop shopping for data centers is been only a goal, never fully realized. In fact, many enterprises probably don't want any one vendor to have such control, especially when standards are in short supply. But they need lower costs and lower complexity.
Cisco, therefore, is using the latest software and standards (to SOME degree at least) to integrate the major elements of "compute, network, storage access and virtualization into a cohesive system," according to Cisco. They go on to claim this leads to "IT as a service" when combined with VMware's upcoming vSphere generation of data center virtualization and management products. I'd like to see more open source software choices in the mix, too. Perhaps the marker will demand this?
The concept remains appealing, though. Rather than have a systems integrator, or outsourcer, or major vendor, or your own IT department (or all of the above) cobble these complex data center elements together -- at high initial and ongoing monstrous cost ad infinitum -- the "integration is the data center" (as distinct from the network is the computer) has a nice ring to it.
Cisco is proposing that the next-generation data center, then, is actually an appliance -- or a series of like appliances. Drop in, turn on, tune in and run your applications and services faster, better, cheaper. Works if it works. This may be too much for most seasoned IT professionals to stomach, but it's worth a try, I suppose.
And this will, of course, greatly appeal during a prolonged period of economic stress and uncertainty. Say hello to 2010. And the approach could be appealing to enterprises, carriers, hosting companies, and a variety of what are loosely called cloud providers. Indeed, the more common the data center architecture approaches across all of these players, the more likely for higher-order efficiencies and process-level integrations. Federating, sharing, tiering, cost-sharing -- all of these become more possible to the heightened productivity of the community of participants.
The cloud of clouds needs a common architecture to reach it's potential. Remember Metcalfe's Law on the network's value based on number of participants on it? Well, supplant "node" and participant with "data center" and the Law and the network gain entirely new levels of value if the interoperability is broad and deep.
Make no mistake, the next generation data center business is a very large, multi-tens-of-billions of dollars market, and the competition is global, well-positioned, cash-secure and tough. Selling these data center appliances and "IT as a service" into individual accounts will be a huge challenge, especially if they are perceived as replacements alone. The Cisco solution needs to work well inside, alongside and inclusive of the other stuff, and the integrators have deep claws into the very accounts Cisco must enter.
We'll need to see the Cisco Unified Computing System act as a data center of data centers first. It's appeal, then, must be breathtaking to supplant the frisky incumbents, all of which also understand the importance of virtualization and low-cost hardware.
IBM, HP, Oracle, EMC, Microsoft, Sun, and the global SIs -- all will see any market game changing by Cisco as disruptive in perhaps the wrong way. But the enterprise IT market is ripe for major better ways of doing things, just like the buyers of iPhone have been for the last two years. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
UPDATE: HP has a response.
At the very least, Cisco's salvo will accelerate the shifts already under way in the next generation data center market toward highly efficient on-premises clouds, complete and integrated applications support solutions, a deep adoption of virtualization -- and probably a lot less total cost, real estate use, and energy demand as a result. The move by Cisco could also spur the embrace of open source software, along with standards, standards, standards. It's hard to see the economics working without them.
Already, Red Hat and Cisco announced a global OEM partnership. Cisco will sell and support Red Hat Enterprise Linux as part of its Unified Computing System, and will also support the newly announced Red Hat Enterprise Virtualization portfolio when it ships.
"Combined, Red Hat and Cisco will offer customers next-generation computing beyond RISC, beyond UNIX, beyond yesterday's legacy solutions for both virtualized and non-virtualized systems," says the statement.
Cisco and VMware are leaders in their areas, for sure, but they will need a community of global partners like Red Hat to pull this off. How about the larger open source universe? Unlike with Apple, it's a lot harder to create a data center support ecology than an app store. So the risks here are pretty huge. The "enemy of my enemy is my friend" effect may well kick in ... or not.
Or even more weirdness may ensue. What if Microsoft wanted in, in a big way, given where it needs to go? What if Windows became the default virtualized container in Cisco's shiny new data center appliance? Disruption can be, well, disruptive.
Cisco has been seeking a way for many years now to extend its networking successes into new businesses. It has bought, built, and partnered -- but not to great effect in the past. Could this be the big one? The one that works? Is this the new $20 billion business that Cisco so desperately needs?
Sunday, March 15, 2009
Forrester Research: SaaS gains enterprise adoption, expands beyond 'vanilla' offerings
Software as a service (SaaS) is coming into its own, as interest and adoption continue to grow among enterprises and SaaS itself expands to meet the challenge.
This is the conclusion of a Forrester Research report, TechRadar For Sourcing & Vendor Management Professionals: Software as a Service. After talking to customers, vendors, and researchers, Forrester discovered that about 21 percent of enterprises were piloting or already using SaaS and another 26 percent are interested in it or considering it.
I expect SaaS use to grow under the dour economy, as companies look to increase applications productivity without any up-front capital spending, and as they shut off expensive standalone applications on older hardware. SaaS has an economic appeal well suited to the challenges facing IT managers.
At the same time, says Forrester, companies are taking a more strategic approach to SaaS, which until now often flew in under the radar. That means IT didn't bring SaaS apps in, workers and managers did. Part of the strategic interest now comes from IT too -- to rein in system redundancies and costs.
Any responsible IT department should now conduct the audits and due diligence to determine which old and new applications would be best delivered as SaaS from third parties. The ability to absorb these apps well also puts the IT department in a better position to leverage cloud-based services and infrastructure fabrics.
SaaS's march into enterprises is tempered, however, by real or perceived security risks that come from using off-premises systems. This may account for the fact that the number of people not interested in using SaaS has increased over the past year. Do we have a culture gap on SaaS use? I advise enterprises to think like start-ups these days -- and that means using SaaS aggressively.
Another key finding of the March 13 report: SaaS offerings have proliferated and moved beyond their traditional "vanilla" customer relationship management (CRM) and human capital management functions.
Forrester determined 13 areas where SaaS applications are making headway. These include:
- Archiving and eDiscovery
- Business Intelligence (BI)
- Collaboration
- CRM
- Digital asset management
- Enterprise content management
- Enterprise resource planning (ERP)
- Human resources
- Integration
- IT management
- Online backup
- Supply chain management
- Web content management
- Web conferencing
The bottom line for enterprises considering getting into the SaaS arena:
Sourcing and vendor management executives must keep ahead of the growing trend to understand where SaaS is most heavily used and where it lurks on the horizon, so that they can enable their business users to be more successful in business-led SaaS deployments as well as to consider SaaS as a viable alternative to IT-led vendor evaluations. Regardless of where the SaaS deployment originates, sourcing and vendor management executives have a key role to play in contracts and pricing, due diligence, and vendor governance and risk.

The full Forrester report is available from http://www.forrester.com/go?docid=46747
None of this is surprising news to regular readers of BriefingsDirect or those who listen regularly to the podcasts. Our analysts and guests talk about the growing reliance on SaaS applications, especially in view of the economic decline. In fact, our year-end predictions for 2009 focused quite intensely on the role of SaaS in helping companies weather the storm -- and even chart a new course for the enterprise.
One of our regular analyst-guests and fellow ZDNet blogger Phil Wainewright charted out most of the 2008 developments over a year ago in his 2008 predictions. His predictions were based on what he saw as an awakening among users and vendors as to the potential of SaaS.
Jeff Kaplan in his Think IT Strategies blog made many of the same arguments in his 2009 predictions, in which he predicted that the thinking among IT executives was beginning to shift from whether to do SaaS to how to do it.
These bullish predictions and observations stand in stark contrast to a crepe-hanging piece last July in BusinessWeek, in which Gene Marks of the Marks Group declared SaaS overhyped, overpriced, and in need to debunking. The Marks Group sells customer relationship, service, and financial management tools to small and midsize businesses.
Nothing like a recession to focus the mind on practicality over ideology.
Thursday, March 12, 2009
BriefingsDirect analysts discuss solutions for bringing human interactions into business process workflows
Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.
Read a full transcript of the discussion.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.
Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 37, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events with a panel of IT analysts.
In this episode, recorded Feb. 13, 2009, our guests examine the essential topic of bringing human activity into alignment with standards-based, IT-supported business processes. We revisit the topic of BPEL4People, an OASIS specification.
The need to automate and extend complex processes is obvious. What's less obvious is the need to join the physical world of people, their habits, needs, and perceptions with the artificial world of service-oriented architecture (SOA) and business process management (BPM).
This interaction or junction will become all the more important as cloud-based services become more common.
Our discussion, moderated by me, includes noted IT industry analysts and experts Michael Rowley, director of technology and strategy at Active Endpoints; Jim Kobielus, senior analyst at Forrester Research; and JP Morgenthal, independent analyst and IT consultant.
Here are some excerpts:
Rowley: [With BPEL4People] you can automate the way people work with their computers and interact with other people by pulling tasks off of a worklist and then having a central system, the BPM engine, keep track of who should do the next thing, look at the results of what they have done, and based on the data, send things for approval.
It basically captures the business process, the actual functioning of a business, in software in a way that you can change over time. It's flexible, but you can also track things, and that kind of thing is basic.
... One of the hardest questions is what you standardize and how you divvy up the standards. One thing that has slowed down this whole vision of automating business process is the adoption of standards. ... The reason [BPM] isn't at that level of adoption yet is because the standards are new and just being developed. People have to be quite comfortable that, if they're going to invest in a technology that's running their organization, this is not just some proprietary technology.
The big insight behind BPEL4People is that there's a different standard for WS-Human Task. It's basically keeping track of the worklist aspect of a business process versus the control flow that you get in the BPEL4People side of the standard. So, there's BPEL4People as one standard and the WS-Human Task as another closely related standard.
By having this dichotomy you can have your worklist system completely standards based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer that's decided to implement their own human task worklist system, rather than using the one that comes out of the box, and know that what they have created is standards compliant.
All of the companies involved -- Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints -- seem to be very interested in this. One interesting one is Microsoft. They are also putting in some special effort here.
One value of a BPM engine is that you should be able to have a software system, where the overall control flow, what's happening, how the business is being run can be at the very least read by a nontechnical user. They can see that and say, "You know, we're going through too many steps here. We really can skip this step. When the amount of money being dealt with is less than $500, we should take this shortcut."
That's something that at least can be described by a lay person, and it should be conveyed with very little effort to a technical person who will get it or who will make the change to get it so that the shortcut happens.
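Rowley's $500 shortcut can be sketched as a simple routing rule. This is an illustrative Python sketch, not ActiveVOS or BPEL4People syntax -- a real process would express the rule declaratively in the process model, where a nontechnical user could read and change it:

```python
def route_purchase(amount: float) -> list:
    """Return the sequence of human-task steps for a purchase request.

    Illustrates the kind of control-flow rule a lay person could read
    off a BPM process model: small purchases skip managerial approval.
    The step names and $500 cutoff are hypothetical.
    """
    steps = ["submit_request"]
    if amount >= 500:
        steps.append("manager_approval")  # full review path
    steps.append("finance_signoff")
    return steps

# The shortcut in action: the small purchase skips a step.
assert route_purchase(250) == ["submit_request", "finance_signoff"]
assert route_purchase(1200) == ["submit_request", "manager_approval", "finance_signoff"]
```

The value of putting such a rule in the BPM layer, per Rowley, is that changing it is a conversation between a business user and the process model, not a code rewrite.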
Kobielus: It's critically important that the leading BPM and workflow vendors get on board with this standard. ... This is critically important for SOA, where SOA applications for human workflows are at the very core of the application.
... BPEL4People, by providing an interoperability framework for worklisting capabilities of human workflow systems, offers the promise of allowing organizations to help users have a single view of all of their tasks and all the workflows in which they are participating. That will be a huge productivity gain for the average information worker, if that ever comes to pass.
... One thing that users are challenged with all the time in business is the fact that they are participating in so many workflows, so many business processes. They have to multi-task, and they have to have multiple worklists and to-do lists that they are checking all the time. It's just a bear to keep up with.
Morgenthal: Humans interact with humans, humans interact with machines, and data is changing everywhere. How do we keep everything on track, how do we keep everything coordinated, when you have a whole bunch of ad-hoc processes hitting this standardized process? That requires some unique features. It requires the ability to aggregate different content types together into a single place.
One key term that has been applied here industry wide I found only in the government. They call this "suspense tracking." That's a way of saying that something leaves the process and goes into "ad hoc land." We don't know what happens in there, but we control when it leaves and we control when it comes back.
I've actually extended this concept quite a bit and I am working on getting some papers and reports written around something I am terming "business activity coordination," which is a way to control what's in the black hole.
So, you have these ongoing ad hoc processes that occur in business everyday and are difficult to automate. I've been analyzing solutions to this, and business activity coordination is that overlap, the Venn diagram, if you will, of process-centric and collaborative actions. For a human to contribute back and for a machine to recognize that the dataset has changed, move forward, and take the appropriate actions from a process-centric standpoint, after a collaborative activity is taking place is possible today, but is very difficult.
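Morgenthal's "suspense tracking" idea can be sketched minimally: the engine records when work leaves the standardized process for ad-hoc handling and when it returns, without modeling what happens in between. This is a hypothetical illustration, not any government or vendor system; the class and task names are invented:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SuspenseTracker:
    """Track work items that leave the process for 'ad hoc land'.

    We don't know what happens out there, but we control -- and
    record -- when a task leaves and when it comes back.
    """
    open_items: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def suspend(self, task_id: str) -> None:
        self.open_items[task_id] = datetime.now()  # departure time
        self.log.append((task_id, "suspended"))

    def resume(self, task_id: str) -> None:
        self.open_items.pop(task_id)  # work has come back to the process
        self.log.append((task_id, "resumed"))

tracker = SuspenseTracker()
tracker.suspend("PO-1138")   # leaves the process for ad-hoc collaboration
tracker.resume("PO-1138")    # control returns to the BPM engine
```

Business activity coordination, as Morgenthal describes it, is what would fill in the black hole between those two log entries.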
One thing I'm looking at is how SharePoint, more specifically Windows SharePoint Services, acts as a solid foundation that allows humans and machines to interact nicely. It comes with a core portal that allows humans to visualize and change the data, but the behavioral connections to actually notify workflows that it's time to go to the next step, based on those human activities, are really critical functions. I don't see them widely available through today's workflow and BPM tools. In fact, those tools fall short, because of their inability to recognize these datasets.
... I don't necessarily agree with the statement earlier that we need to have tight control of this. A lot of this can be managed by the users themselves, using common tools. ... Neither WS-Human Task nor BPEL4People addresses how I control what's happening inside the black hole.
Rowley: Actually, it does. The WS-Human Task spec does talk about how you control what's in the black hole -- what happens to a task and what kinds of things can happen to a task while it's being handled by a user. One of the things about Microsoft's involvement in the standards committee is that they have been sharing a lot with us about SharePoint and we have been discussing it. This is all public. The nice thing about OASIS is that everything we do is in public, along with the meeting notes.
The Microsoft people are giving us demonstrations of SharePoint, and we can envision as an industry, as a bunch of vendors, the possibility of interoperability with a BPEL4People business process engine like the ActiveVOS server. Maybe somebody doesn't want to use our worklist system and wants to use SharePoint, and some future version of SharePoint will have an implementation of WS-Human Task, or possibly somebody else will do an implementation of WS-Human Task.
Until you get the standard, that vision that JP mentioned about having somebody use SharePoint and having some BPM engine be able to coordinate it, isn't possible. We need these standards to accomplish that.
A workflow system or a business process is essentially an event-based system. Complex Event Processing (CEP) is real-time business intelligence. You put those two together and you discover that the events that are in your business process are inherently valuable events.
You need to be able to discover over a wide variety of business processes, a wide variety of documents, or wide variety of sources, and be able to look for averages, aggregations and sums, and the joining over these various things to discover a situation where you need to automatically kickoff new work. New work is a task or a business process.
What you don't want to have is for somebody to have to go in and monitor or discover by hand that something needs to be reacted to. If you have something like what we have with ActiveVOS, which is a CEP engine embedded with your BPM, then the events that are naturally business relevant, that are in your BPM, can be fed into your CEP, and then you can have intelligent reaction to everyday business.
... Tying event processing to social networks makes sense, because what you need to have when you're in a social network is visibility, visibility into what's going on in the business and what's going on with other people. BPM is all about providing visibility. ... If humans are involved in discovering something, looking something up, or watching something, I think of it more as either monitoring or reporting, but that's just a terminology. Either way, events and visibility are really critical.
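The CEP-inside-BPM idea Rowley describes -- business-relevant process events feeding an engine that reacts automatically instead of waiting for a human to notice -- can be sketched as a toy windowed rule. This is illustrative Python, not the ActiveVOS CEP engine; the rule, window size, and threshold are all invented:

```python
from collections import deque
from typing import Optional

class EscalationRule:
    """Toy CEP rule over a stream of process events.

    If the average handling time across the last `window` events
    exceeds `threshold` seconds, automatically kick off new work
    (a task or business process) rather than relying on someone
    to monitor dashboards by hand.
    """

    def __init__(self, window: int, threshold: float):
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def on_event(self, handling_seconds: float) -> Optional[str]:
        self.events.append(handling_seconds)
        if len(self.events) == self.events.maxlen:
            avg = sum(self.events) / len(self.events)
            if avg > self.threshold:
                return "start_escalation_process"  # new work item
        return None  # not enough data yet, or all is well

rule = EscalationRule(window=3, threshold=60.0)
results = [rule.on_event(t) for t in [50, 70, 80, 90]]
# -> [None, None, "start_escalation_process", "start_escalation_process"]
```

The aggregation here (a rolling average) stands in for the "averages, aggregations and sums" Rowley mentions; the key point is that the trigger is computed from events the BPM engine already emits.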
Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.
Monday, March 9, 2009
Survey says: Cloud computing proving to be a two-edged sword in a down economy
Cloud computing seems to be trapped between the rock of great expectations and the hard place of low confidence. While most enterprise and IT decision makers view cloud as a way to lower capital and operational costs, the way to more aggressive cloud adoption is blocked by concerns about security and control.
This is the finding of a recent survey commissioned by IT consultancy Avanade, Inc., Seattle, Wash., and conducted by Kelton Research, Culver City, CA.
The good news is that 54 percent of people surveyed used technology to cut costs, a boon for IT providers in these turbulent economic times. According to the survey, for every two companies that cut back on technology to save money, five will adopt new technology as a way of reducing expenses.
Also encouraging is the fact that most people, 9 out of 10 C-level executives, know what cloud computing is and what it can do. More than 60 percent know that it can reduce costs, make the company more flexible, help the company concentrate on its core business as well as react more quickly to market conditions.
The bad news is that 61 percent of those surveyed aren't using cloud technologies at this time, and of those who now rely solely on internal systems, 84 percent say they have no plans to switch to cloud in the next 12 months.
Something like Garrison Keillor's mythical hometown of Lake Wobegon, where "all the children are above average," nearly two thirds of US companies surveyed consider themselves "early adopters," which raises the question of how you can be an early adopter when almost everyone else is doing it. Whether early adopter or not, the fact remains that most people are shying away from cloud, though it's a hot topic at the Chitchat Cafe.
The main concern? Fears of security threats and loss of control over systems. Ironically, these were the same concerns we heard when email, the Internet, web services, and instant messaging appeared on the scene. None of those concerns were without merit, but enterprises seem to have adjusted and benefited.
The companies surveyed that had overcome their resistance reported business benefits and are accelerating their use of cloud technologies. Here is how those that have adopted cloud use it for business applications:
- Customer relationship management (CRM) -- 50 percent
- Data storage -- 46 percent
- Human resources -- 44 percent
I expect that trend to continue and accelerate, especially for new companies born in the recession, where survival is the mother of invention (and the father of low or no up-front capital costs).