You have to give Oracle credit for persistence. The software giant has been trying to build out its groupware business for nearly 10 years, so far with only modest success.
Now, with Beehive, the next generation of its collaboration suite, Oracle may be sniffing some fresh and meaningful blood in the enterprise messaging waters.
The investment Oracle is making in Beehive, announced this week at the massive Oracle OpenWorld conference in San Francisco, signals an opportunity born more of the shifting sands beneath Microsoft Exchange and Outlook than of any newfound performance breakthroughs from Oracle's developers.
Here's why: Economics and technology improvements, particularly around virtualization, are bringing more IT functionality back to the servers and off the client PCs. As a result, the client-server relationship between Microsoft Exchange Server and the Outlook client -- and all those massive, costly, and risky .pst files on each PC -- is being broken.
The new relationship is server to browser, or server to thin-client ICA-fed receiver. Here's what the CIO of Bechtel told a group of analysts recently: "Spend your [IT] money on the back end, not on the front end."
The cost, security risks, and lack of extensibility of the data inside Exchange, and on all those end-device hard drives, add up to an unsustainable IT millstone. Messaging times, they are a-changin'. Sure, some will just keep Exchange and deliver the client as Outlook Web Access, or via terminal services.
But what I hear from CIOs now leveraging virtualization and evaluating VDI is that Microsoft's Exchange-Outlook-SharePoint trifecta is near the top of their list of first strikes: slash costs and move this messaging beast onto the server resources pool, where it can be wrestled to the ground and re-architected in an SOA. They have similar thoughts about client-side spreadsheets like Excel, too, but that's another blog.
Yep, Exchange and its coterie are widely acknowledged as coming with an agility deficit and a premium TCO -- but with commodity-priced features and functions. For all intents and purposes, email, calendar, file foldering, and even unified messaging functions are free, or at least low-cost, features of larger application function sets or suites.
Enterprises are paying gold for copper, when it comes to messaging and groupware. And then they have to integrate it.
Oracle recognizes that as enterprises move from high-cost, low-flexibility client-server Exchange to services-based, server-side messaging -- increasingly extending messaging services in the context of SOA, network services like Cisco's SONA, web services, and cloud services -- they will be looking beyond Exchange.
Enterprises over the next several years will be undertaking a rethinking of messaging, from a paradigm, cost, and feature-set perspective. A big, honking, expensive client-server approach will give way to something cheaper, more flexible, better able to integrate, and more likely to play well in an on-premises cloud, where the data files are not messaging-system specific. Exchange is a Model T in a Thunderbird world.
Oracle, IBM, Google, Yahoo ... they all have their sights set on poaching and chipping away at the massive and vulnerable global Exchange franchise (just like MSFT did to Lotus Notes and GroupWise). And that pulls out yet another tumbler from Microsoft's enterprise lock-in.
It won't happen overnight, but it will happen. Oracle is betting on it.
Tuesday, September 23, 2008
Sybase moves to spur process modeling agility with latest PowerDesigner
Sybase today announced a new version of its PowerDesigner tools, a model-driven approach to crafting and implementing business processes.
PowerDesigner 15 provides modeling and metadata management through a Link and Synch technology, helping to improve impact analysis and provide greater visibility for business analysts.
The main goal, according to Sybase, is to create greater agility by breaking down the silos that currently wall off the various IT elements from each other and from the business goals. See my thoughts on how CEP is stepping up to the plate on similar values. And we've seen a lot of action on improving business process modeling lately.
PowerDesigner 15 is currently scheduled to be available on Oct. 31 and ranges in price from $7,495 to $11,495 per developer seat. More information is available at the PowerDesigner Web site.
Key features of PowerDesigner 15 include:
- The Link and Synch technology, which captures the intersections between all architectural layers and perspectives of the enterprise.
- An architecture model that allows users to formally capture all metadata relevant to traditional enterprise architecture (EA) analysis.
- An impact analysis diagram that allows visualization of the cascading impact of change and the management of time and costs associated with changes (a minimal sketch of this idea follows the list).
- Customizable support for home-grown or industry standards.
- A repository Web viewer that allows sharing EA metadata with all stakeholders.
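To make the impact-analysis bullet a bit more concrete: Sybase hasn't published how Link and Synch is implemented, so the following is only a minimal, hypothetical sketch of the underlying idea -- model the enterprise architecture as a dependency graph and walk it to find everything downstream of a change. The artifact names and graph are invented for illustration.

```python
from collections import deque

# Hypothetical dependency graph: each artifact maps to the artifacts
# built on top of it (conceptual model -> physical model -> ETL job, etc.).
DEPENDENTS = {
    "conceptual_model.Customer": ["logical_model.Customer"],
    "logical_model.Customer": ["physical_model.CUSTOMER", "xsd.CustomerSchema"],
    "physical_model.CUSTOMER": ["etl.LoadCustomer", "report.CustomerChurn"],
    "xsd.CustomerSchema": ["service.CustomerLookup"],
}

def impact_of_change(changed_artifact):
    """Return every artifact reachable from the changed one, i.e. the
    cascading impact an analyst would want to see before committing."""
    impacted, queue = set(), deque([changed_artifact])
    while queue:
        current = queue.popleft()
        for dependent in DEPENDENTS.get(current, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return sorted(impacted)

if __name__ == "__main__":
    for artifact in impact_of_change("logical_model.Customer"):
        print("would be affected:", artifact)
```

An impact-analysis diagram is, in essence, a visualization of the set this kind of traversal returns, laid over the metadata captured across the architectural layers.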
Monday, September 22, 2008
Complex Event Processing goes mainstream with a boost from TIBCO's latest solution
We often hear a lot about how IT helps business with their "outcomes," and then we're shown a flow chart diagram with a lot of arrows and boxes ... that ultimately points to business "agility" in flashing lights.
Sometimes the dots connect, and sometimes there's a required leap of faith that IT spending X will translate into business benefits Y.
But a new box on the flow chart these days, Complex Event Processing (CEP), really does close the loop between what IT does and what businesses want to do. CEP actually builds on what business intelligence (BI), service-oriented architecture (SOA), cloud computing, business process modeling (BPM), and a few other assorted acronyms provide.
CEP is a great way for all the myriad old and new investments in IT to be more fully leveraged to accommodate the business needs of automating processes, managing complexity, reducing risk, and capturing excellence for repeated use.
Based on its proven heritage in financial services, CEP has a lot of value to offer many other kinds of companies as they seek to extract "business outcomes" from the IT departments' raft of services. That's why I think CEP's value should be directed at CEOs, line of business managers, COOs, CSOs, and CMOs -- not just the database administrators and other mandarins of IT.
That's because modern IT has elevated many aspects of data resources into services that support "events." So the place to mine for patterns of efficiency or waste -- to uncover excellence or risk -- is in the interactions of the complex events. And once you've done that, not only can you capture those good and bad events, you can execute on them to reduce the risks or to capture the excellence and instantiate it as repeatable processes.
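For readers who haven't seen CEP up close, here is a deliberately tiny sketch of what "capturing a pattern across events" can look like; the event format, rule, and thresholds are invented, and a commercial CEP engine expresses this declaratively and runs it against live streams at scale.

```python
from collections import defaultdict, deque

# Invented event shape: (timestamp_seconds, account_id, event_type)
EVENTS = [
    (0,  "acct-17", "payment_failed"),
    (12, "acct-17", "payment_failed"),
    (20, "acct-42", "order_placed"),
    (45, "acct-17", "payment_failed"),   # third failure inside 60s -> complex event
    (90, "acct-17", "payment_failed"),
]

WINDOW_SECONDS = 60
THRESHOLD = 3

def detect_fraud_pattern(events):
    """Emit a higher-level 'complex event' when THRESHOLD failures from the
    same account fall inside a sliding WINDOW_SECONDS window."""
    recent = defaultdict(deque)
    for ts, account, kind in events:
        if kind != "payment_failed":
            continue
        window = recent[account]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= THRESHOLD:
            yield {"complex_event": "possible_fraud", "account": account, "at": ts}

if __name__ == "__main__":
    for alert in detect_fraud_pattern(EVENTS):
        print(alert)
```

The point is the shift in level: the inputs are individual, low-value events, but the output is a higher-order business event -- possible fraud -- that someone can act on.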
And it's in this ability to execute within the domain of CEP that TIBCO Software today introduced TIBCO BusinessEvents 3.0. The latest version of this CEP harness solution builds on the esoteric CEP capabilities that program traders have used and makes them more mainstream, said TIBCO. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]
Making CEP mainstream through BusinessEvents 3.0 has required some enhancements, including:
- Decision Manager, a new business-user interface that helps business users write rules and queries that tap into the power of CEP in their domain of expertise.
- Events Stream Processing, a BusinessEvents query language that allows SQL-like queries to target event streams in real time, so that immediate action can be taken on patterns of interest (a rough illustration follows this list).
- Distributed BusinessEvents, a distributed cache and rules engine that provides massive scaling of event monitoring, as much as twice the magnitude previously possible.
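TIBCO hasn't published the Events Stream Processing syntax here, so as a rough illustration only, the sketch below mimics in plain Python what an SQL-like continuous query does: restrict a stream to events of interest, group them into time windows, and emit a result per window that can trigger immediate action. The stream contents and the query itself are invented.

```python
import itertools

# Invented stream of (timestamp_seconds, symbol, price) ticks.
TICKS = [
    (1, "ACME", 10.0), (2, "ACME", 10.4), (8, "ACME", 10.9),
    (9, "ZORG", 55.0), (31, "ACME", 12.1), (33, "ACME", 12.4),
]

def windowed_average(ticks, symbol, window_seconds=30):
    """Rough Python equivalent of a continuous query along the lines of
    'SELECT AVG(price) FROM ticks WHERE symbol = ? WITHIN 30 SECONDS':
    group matching events into tumbling windows, one result per window."""
    matching = (t for t in ticks if t[1] == symbol)
    grouped = itertools.groupby(matching, key=lambda t: t[0] // window_seconds)
    for window_id, group in grouped:
        prices = [price for _, _, price in group]
        yield window_id * window_seconds, sum(prices) / len(prices)

if __name__ == "__main__":
    for window_start, avg in windowed_average(TICKS, "ACME"):
        print(f"window starting at {window_start}s: avg ACME price {avg:.2f}")
```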
I think that CEP offers the ability to extract real and appreciated business value from a long history of IT improvements. If companies like BI, and they do, then CEP takes off where BI leaves off, and the combination of strong capabilities in BI and CEP is exactly what enterprises need now to provide innovation and efficiency in complex and distributed undertakings.
And TIBCO's products are pointing up how to take the insights of CEP into the realm of near-real-time responses and the ability to identify and repeat effective patterns of business behavior. Dare I say, "agility"?
Saturday, September 20, 2008
LogLogic updates search and analysis tools for conquering IT systems management complexity
Insight into operations has been a hallmark of modern business improvements, from integrated back-office applications to business intelligence (BI) to balanced scorecards and management portals.
But what do IT executives have to gain similar insight into the systems operations that support the business operations? Well, they have reams of disparate logs and systems analytics data pouring forth every second from all their network and infrastructure devices. Making sense of that data and leveraging the analytics to reduce the risk of failure therefore becomes the equivalent of BI for IT.
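At toy scale, that "BI for IT" exercise looks something like the sketch below, with made-up syslog-style lines standing in for a real device feed: parse the raw lines, then roll them up by source and severity so trends and outliers become visible. A log management appliance does this continuously, across terabytes, with indexing, alerting, and retention on top.

```python
import re
from collections import Counter

# Made-up syslog-style lines standing in for a real device feed.
RAW_LOGS = """\
2008-09-20T10:01:03 firewall-01 WARN connection flood from 10.1.4.22
2008-09-20T10:01:07 app-02 ERROR database pool exhausted
2008-09-20T10:01:09 firewall-01 WARN connection flood from 10.1.4.22
2008-09-20T10:02:41 app-02 ERROR database pool exhausted
"""

LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<source>\S+)\s+(?P<level>\S+)\s+(?P<msg>.*)$")

def summarize(raw):
    """Count events per (source, severity) -- the kind of roll-up a log
    management console turns into trend charts and alerts."""
    counts = Counter()
    for line in raw.splitlines():
        match = LINE.match(line)
        if match:
            counts[(match["source"], match["level"])] += 1
    return counts

if __name__ == "__main__":
    for (source, level), count in summarize(RAW_LOGS).most_common():
        print(f"{source:12s} {level:5s} {count}")
```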
Now a major BI for IT provider, LogLogic, has beefed up its flagship products with the announcement of LogLogic 4.6. Putting more data together in ways that can be quickly acted on helps companies gain critical visibility into their increasingly complex IT operations, while easing regulatory compliance and improving security. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]
The latest version of the log management tools from San Jose, Calif.-based LogLogic includes new features that help give enterprises a 360-degree view of how business operations are running, including dynamic range selection, graphical trending, and real-time reporting. The release provides improved search for IT intelligence, forensics workflow, and advanced secure remote access control. LogLogic 4.6 will be rolled out for the company's family of LX, ST, and MX products, helping large and mid-sized companies capture, search, and store their log data to improve business operations, monitor user activity, and meet industry standards for security and compliance. Among the improvements are:
- Index search user interface, including clustering by source, dynamic range selection, trending over time and graphical representation of search results
- Search history, which automatically saves search criteria for later reuse
- Forensics clipboard to annotate, organize, record and save up to 1000 messages per clipboard – up to 100 clipboards per user
- Active directory remote authentication with role-based access control (RBAC)
- Enhanced security via complex password creation
- Enhanced backup/restore and failover, including incremental backup support and "backup now" capability.
I have talked extensively to the folks at LogLogic about the log-centered approach to dealing with IT's growing complexity, as systems and services multiply and are spurred on by the virtualization wildfire. Last week I posted a podcast, in which LogLogic CEO Pat Sueltz explained how log-management aids in visibility and creates a favorable return on investment (ROI) for enterprises.
LogLogic 4.6 will be available later this month as a free upgrade to current customers under a support contract. For new customers, pricing will start at $14,995 for the LX appliance, $53,995 for the ST appliance and $37,500 for the MX appliance.
Genuitec expands Pulse provisioning system beyond tools to Eclipse distros, eyes larger software management role
Genuitec, one of the founders of the Eclipse Foundation, has expanded the reach of its Pulse software provisioning system with the announcement of the Pulse "Private Label," designed to give companies control over their internal and external software distributions.
Until now, Pulse was designed for managing and standardizing software development tools in the Eclipse environment. With Private Label, enterprises can manage full enterprise software delivery for any Eclipse-based product or application suite.
Plans call for subsequently expanding Private Label into a full lifecycle management system for software beyond Eclipse. [Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.]
Private Label, which can be tailored to customer specifications, can be hosted either by Genuitec or within a corporate firewall to integrate with existing infrastructure. Customers also control the number of software catalogs, as well as their content. Other features include full custom branding and messaging, reporting of software usage, and control over the ability for end-users to customize their software profiles, if desired.
Last month, I sat down for a podcast with Todd Williams, vice president of technology at Genuitec, and we discussed the role of Pulse as a simple, intuitive way to install, update, and share custom configurations with Eclipse-based tools.
Coinciding with the release of Pulse Private Label is the release of Pulse 2.3 for Community Edition and Freelance users. Upgrades include performance improvements and catalog expansion. Pulse 2.3 Community Edition is a free service. Pulse 2.3 Freelance is a value-add service priced at $6 per user per month or $60 per year. Pulse Private Label pricing is based on individual requirements.
More information is available at the Pulse site.
Wednesday, September 17, 2008
iTKO's SOA testing and validation role supports increasingly complex integration lifecycles
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: iTKO.
Read a full transcript of the discussion.
The real value of IT comes not from the systems themselves, but from managed and agile business processes in real-world use. Yet growing integration complexity, and the need to support process-level orchestrations of the old applications and new services, makes quality assurance at the SOA level challenging.
SOA, enterprise integration, virtualization and cloud computing place a premium on validating orchestrations at the process level before -- not after -- implementation and refinement. Process-level testing and validation also needs to help IT organizations reduce their labor and maintenance costs, while harmonizing the relationship between development and deployment functions.
iTKO, through its LISA product and solutions methods, has created a continuous validation framework for SOA and middleware integrations to address these issues. The goal is to make sure all of the expected outcomes in SOA-supported activities occur in a controlled test phase, not in a trial-and-error production phase that undercuts IT's credibility.
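iTKO hasn't detailed LISA's mechanics here, so the following is only a conceptual sketch of what continuous, process-level validation means in practice: exercise a service endpoint on a schedule and assert the business outcome it is supposed to produce, so a change anywhere in the chain shows up as a failed check rather than a production surprise. The endpoint URL and expected status are hypothetical placeholders, not LISA syntax.

```python
import json
import time
import urllib.request

# Hypothetical endpoint and expectation -- placeholders for illustration only.
ORDER_SERVICE_URL = "http://example.internal/orders/validate-sample"
EXPECTED_STATUS = "INVENTORY_RESERVED"

def validate_order_flow():
    """One process-level check: place a known test order and confirm the
    downstream business outcome, not just that the HTTP call returned 200."""
    with urllib.request.urlopen(ORDER_SERVICE_URL, timeout=10) as response:
        payload = json.loads(response.read())
    assert payload.get("status") == EXPECTED_STATUS, (
        f"expected {EXPECTED_STATUS}, got {payload.get('status')!r}"
    )

def run_continuously(interval_seconds=300):
    """Keep validating outside of formal test phases, as described above."""
    while True:
        try:
            validate_order_flow()
            print("order flow OK")
        except Exception as problem:  # report the failure, keep monitoring
            print("validation failed:", problem)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run_continuously()
```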
To learn more about performance and quality assurance issues around enterprise integration, middleware, and SOA, I recently interviewed John Michelsen, chief architect and founder of iTKO. [See additional background and solutions.]
Here are some excerpts from our discussion:
Folks who are using agile development principles and faster iterations of development are throwing services up fairly quickly -- and then changing them on a fairly regular basis. That also throws a monkey wrench into the rest of the services that are being integrated.
That’s right, and we’re doing that on purpose. We like the fact that we’re changing systems more frequently. We’re not doing that because we want chaos. We’re doing it because it’s helping the businesses get to market faster, achieving regulatory compliance faster, and all of those good things. We like the fact that we’re changing, and that we have more tightly componentized the architecture. We’re not changing huge applications, but we’re just changing pieces of applications -- all good things.
Yet if my application is dependent upon your application, and you change it out from under me, your lifecycle impacts mine, and we have a “testable event” -- even though I’m not in a test mode at the moment. What are we going to do about this? We have to rethink the way that we do services lifecycles. We have to rethink the way we do integration and deployment.
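One lightweight way to catch the kind of "testable event" Michelsen describes is a consumer-side contract check, sketched below with an invented contract and response: compare the fields the dependent application relies on against what the provider currently returns, and treat any drift as a failure to investigate before it reaches production.

```python
# Invented consumer contract: fields (and types) my application depends on.
CONTRACT = {"order_id": str, "total": float, "currency": str}

def contract_violations(response):
    """Describe every way the provider's current response breaks the
    consumer's expectations -- i.e., the 'testable event'."""
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(
                f"{field} is {type(response[field]).__name__}, "
                f"expected {expected_type.__name__}"
            )
    return problems

if __name__ == "__main__":
    # Simulated response after the provider changed 'total' to a string.
    current_response = {"order_id": "A-1001", "total": "149.95", "currency": "USD"}
    for problem in contract_violations(current_response):
        print("contract broken:", problem)
```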
If the world were as simple as we wanted it to be, we could have one vendor produce that system that is completely self-contained, self-managed, very visible or very "monitorable," if you will. That’s great, but that becomes one box of the dozens on the white board. The challenge is that not every box comes from that same vendor.
So we end up in this challenge where we’ve got to get that same kind of visibility and monitoring management across all of the boxes. Yet that’s not something that you just buy and that you get out of the box.
In a nutshell, we’ve got to be able to touch, from the testing point of view, all these different technologies. We have to be able to create some collaboration across all these teams, and then we have to do continuous validation of these business processes over time, even when we are not in lifecycles.
I can’t tell you how many times I’ve seen a customer who has said, “Well, we've run out and bought this ESB and now we’re trying to figure out how to use it.” I've said, “Whoa! You first should have figured out you needed it, and in what ways you would use it that would cause you to then buy it.”
We can’t build stuff, throw it over the wall into the production system to see if it works, and then have a BAM-type tool tell us -- once it gets into the statistics -- "By the way, they’re not actually catching orders. You’re not actually updating inventory or your account. Your customer accounts aren’t actually seeing an increase in their credit balance when orders are being placed."
That’s why we’ll start with the best practices, even though we’re not a large services firm. Then, we’ll come in with product, as we see the approach get defined. ... When you’re going down this kind of path, you’re going down a path to interconnect your systems in this same kind of ways. Call it service orientation or call it a large integration effort, either way, the outcome from a system’s point of view is the same.
What they’re doing is adopting these best practices on a team level so that each of these individual components is getting their own tests and validation. That helps them establish some visibility and predictability. It’s just good, old-fashioned automated test coverage at the component level. ... So this is why, as a part of lifecycles, we have to do this kind of activity. In doing so, we drive into value, we get something for having done our work.
Read a full transcript of the discussion.
Monday, September 15, 2008
Desktone, Wyse bring Flash content to desktop virtualization delivery
Desktone hopes to overcome two major roadblocks to the adoption of virtual desktop infrastructure (VDI) with today's announcement of a partnership that will bring rich media to virtual desktops and a try-before-you-buy program.
In a bid to bring multimedia support to thin clients, Desktone of Chelmsford, Mass., and Wyse Technology of San Jose, Calif., announced at VMworld in Las Vegas that they are integrating Desktone dtFlash with Wyse TCX Multimedia, allowing companies to deliver Flash in a virtual desktop environment to thin-client devices.
Adobe's Flash technology is becoming more widespread for enterprises and consumers today, for video and rich Internet application interfaces alike. A lack of Flash support on thin clients, and for application and desktop delivery via VDI, has likely delayed adoption of desktop virtualization.
Word has it that Citrix will also offer Flash support for its VDI offerings before the end of the year. It's essential that VDI providers knock down each and every excuse not to use virtual desktops -- they need to do everything that full PCs do, only from the servers. Flash is a big item to fix.
Introduced last year, Wyse TCX Multimedia delivers rich PC-quality multimedia to virtual desktop users. It works with the RDP and ICA protocols that connect the virtual machines on the server to the client, accelerating and balancing workload to display rich multimedia on the client, often offloading the task from the server entirely.
Desktone dtFlash, introduced today, resides in the host virtual machine and acts as the interface between the Flash player and Wyse TCX. Together they allow users to run wide-ranging multimedia applications, including Flash, on their virtual desktops.
Another roadblock to virtualization is that many companies are hesitant to move to VDI because it requires a substantial commitment of resources, and they are unsure of the benefits. To overcome this hesitancy, Desktone also announced a Desktop as a Service (DaaS) Pilot that will allow companies to explore the benefits of virtualization without having to build the environment themselves.
With pricing for the pilot starting at $7,500, enterprises use their own images and applications in a proof-of-concept that includes up to 50 virtual desktops. Desktone uses its proven DaaS best practices to jump-start the pilot, enabling customers to quickly ramp up. The physical infrastructure for this 30-day pilot is hosted by HP Flexible Computing Services, one of Desktone’s service provider partners.
This news joins last week's moves by SIMtone to bring more virtualization services to cloud providers. Citrix today also has some big news on moving its solutions toward cloud providers.
SIMtone races to provide cloud-deployed offerings, including wireless device support
SIMtone advanced its cloud-computing offerings last week with a three-pronged approach that includes a universal cloud-computing platform, a virtual service platform (VSP), and a cloud-computing wireless-ready terminal.
The combined offerings from privately held SIMtone, of Durham, N.C., help pave the way for multiple cloud services to be created, managed and hosted centrally and securely in any data center, while allowing end users to access the services on the fly, through virtually any connected device, with a single user ID, the company said.
The SIMtone Universal Cloud Computing Platform enables network operators and customers to build and deliver multiple cloud services -- virtual desktops, desktop as a service (DaaS), software as a service (SaaS), or Web services.
The SIMtone VSP lets service providers of many stripes transform existing application and desktop infrastructure into cloud-computing infrastructures. SIMtone VSP supports any combination of VMware Server and ESX, Windows XP, Vista and Terminal Server hosts, multi-zone network security, and offers automated, user-activity driven, peak capacity-based guest machine management, load balancing, and failure recovery, radically reducing virtual data center real-estate and power requirements.
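SIMtone hasn't described its algorithms here, so this is only a hedged sketch of the general idea behind user-activity-driven, capacity-based guest machine management: compare active sessions against the pool of powered-on desktops and decide how many guests to start or stop. The thresholds and numbers are illustrative, not SIMtone's.

```python
# Hypothetical pool parameters; values are illustrative only.
SPARE_TARGET = 5          # keep this many idle desktops ready for new logins
MAX_GUESTS = 200          # physical capacity of the virtual data center

def rebalance(active_sessions, powered_on_guests):
    """Return how many guest VMs to start (+) or stop (-) so the pool keeps
    SPARE_TARGET idle desktops without exceeding MAX_GUESTS."""
    desired = min(active_sessions + SPARE_TARGET, MAX_GUESTS)
    return desired - powered_on_guests

if __name__ == "__main__":
    # Morning login surge: 120 users active, only 118 guests powered on.
    print("change guest count by", rebalance(active_sessions=120, powered_on_guests=118))
    # Evening lull: 30 users active, 125 guests still running -> power some down.
    print("change guest count by", rebalance(active_sessions=30, powered_on_guests=125))
```

The payoff of even this crude policy is the one the paragraph above describes: fewer idle guest machines running at night, which is where the data center real-estate and power savings come from.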
Pulling the effort together is the SNAPbook, a wireless-ready portable terminal that can access any services powered by the SIMtone platform. Based on Asus Eee PC solid state hardware, the SNAPbook operates without any local operating system or processing, with all computing tasks performed 100 percent in the cloud.
What was a virtualization value by these vendors -- at multiple levels, including desktop and apps virtualization -- has now struck the chord of "picks and shovels" for cloud providers. Citrix is this week extending the reach of its virtualization Delivery Center solutions to cloud providers as well.
This marks a shift in the market. Until now, most if not all "cloud providers" like Google, Amazon, Yahoo, et al, have built their own infrastructures and worked out virtualization on their own, often based on the open source Xen hypervisor. They keep these formulas for data center and cloud development and deployment as closely guarded secrets.
But SIMtone and Citrix -- and we should expect others like Desktone, Red Hat, HP and VMware to move fast too -- are creating what they hope will become de facto standards for cloud delivery of virtualized services. Google may not remake its cloud based on third-party vendors, but carriers, service providers, and enterprises just might.
The winner of the "picks and shovels" for cloud infrastructure may well end up the next billion-dollar company in the software space. It should be an intense next few years for these players, especially as other larger software vendors (like Microsoft) also build, buy or partner their way in.
Indeed, just as Microsoft is bringing its Hyper-V hypervisor to market, the value has moved up a notch to the management and desktop delivery level. The company that manages virtualization best and fastest for the nascent cloud infrastructure market may well get snatched up before long.