Wednesday, October 7, 2009
Survey says slow, kludgy business processes hamper competitiveness
So says a new independent survey conducted by Vanson Bourne for Progress Software.
The survey had a single goal: to determine the tools and processes large companies have put in place to support operational responsiveness and the ability to make "real-time" decisions. Vanson Bourne surveyed 400 large companies in the United States and Western Europe to develop its findings.
The bottom line: An overwhelming majority of businesses still feel they have a ways to go before they are equipped to respond to market or customer changes quickly enough to compete well in a global marketplace.
“The quest for faster operational responsiveness is becoming more urgent now that external factors such as social networking have boosted speed of response,” says Dr. Giles Nelson, senior director of strategy at the Apama division of Progress Software. “If organizations can’t keep up with the pace of customer feedback, they will find themselves exposed to competitive threats.”
I recently reached a similar conclusion in a podcast discussion with IT analyst Howard Dresner, with an emphasis on business intelligence (BI) in the stew of real-time requirements. Other firms I've worked with, such as Active Endpoints and BP Logix, call the value "nimble" -- the ability to quickly orchestrate and adapt processes.
[UPDATE: TIBCO today delivered its iProcess Spotfire product for real-time BI aligned to business process management.]
Sure is a lot of emphasis on real-time data, analysis and process reactivity nowadays! No process like the present, I always say. [Disclosure: TIBCO and Progress are sponsors of BriefingsDirect podcasts.]
Some 22 percent of the U.S. companies surveyed by Vanson Bourne admitted that, by the time they noticed a change or trend affecting one of their processes, they had already missed the opportunity to react competitively. A lack of information seems to be fueling the problem: More than half of the companies identified information gaps in decision-making as a cause.
The good news is that surveyed companies have solutions to the information gap in mind, namely access to real-time data. Ninety-four percent of companies cited the importance of real-time data – and the majority of those companies are making moves to gather it. Some 82 percent are planning to invest in real-time technology by mid-2010 in an effort to speed up internal processes, they said.
As Nelson at Apama sees it, bad news now travels very quickly – and companies need to make sure they’re not stuck in the slow lane when it comes to responding to customer issues.
“The overwhelming majority of people we spoke to recognize the importance of responding quickly to customers and to be much more responsive to changes in market conditions. Unfortunately, in most cases at present the process and information reporting infrastructure can’t match that vision,” Nelson says. “Business Event Processing is becoming the way of dealing with this decision-making lag.”
I'd add a bit more. What we're actually seeing is that corporations now see that they must be able to analyze and act in Internet time. Many of us webby and social-media types have known that for some time, but the urgency has now hit the mainstream bricks (not just the clicks).
Furthermore, the payoffs from becoming a real-time-oriented organization will go far beyond knowing what's being said about you on Twitter. As the economy has shown in the last year, those who can move fast and move well will survive and thrive. The others will find themselves in a downward spiral.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.
Monday, October 5, 2009
HP roadmap dramatically reduces energy consumption across data centers
Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.
Producing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.
The goal is to seek out long-term gains from prudent, short-term investments, whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns, when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.
The latest BriefingsDirect podcast discussion therefore targets significantly reducing energy consumption across data centers strategically. In it we examine four major areas that result in the most energy policy bang for the buck -- virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.
By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings -- and productivity gains -- are in the offing.
To help learn more about significantly reducing energy consumption across data centers, we welcome two experts from HP: John Bennett, worldwide director, Data Center Transformation Solutions , and Ian Jagger, worldwide marketing manager for Data Center Services. The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Bennett: We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to -- things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.
The mistake that customers make is that they have this laundry list and, without any further insight into what will matter the most to them, they start implementing these things.
The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What's the technology case and what's the business case for them? That's an area that people seem to really struggle with.
... We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago.
... If we look at the total energy picture and the infrastructure itself -- in particular, the server and storage environment -- one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.
With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization from an energy efficiency point of view, we're basically eliminating the need for a lot of server units by making much better use of a smaller number of units.
So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward affecting energy consumption from an infrastructure point of view.
That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.
We're talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You're going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.
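Bennett's consolidation arithmetic is easy to sanity-check. Here's a rough back-of-the-envelope sketch in Python; the utilization and speedup figures are my own illustrative assumptions, not HP numbers.

```python
# Illustrative consolidation arithmetic (the 10% legacy utilization, 60%
# target utilization, and 2x modern-hardware speedup are assumptions for
# this sketch): estimate how many virtualized hosts replace a legacy fleet.

def consolidated_hosts(legacy_servers, legacy_util=0.10, target_util=0.60,
                       modern_speedup=2.0):
    """Servers needed after consolidation: the total useful work done by
    the legacy fleet, repacked onto faster hosts run at higher utilization."""
    useful_work = legacy_servers * legacy_util          # in legacy-server units
    capacity_per_host = target_util * modern_speedup    # each new host does more
    return max(1, round(useful_work / capacity_per_host))

legacy = 20
new = consolidated_hosts(legacy)
print(f"{legacy} legacy servers -> {new} virtualized host(s), "
      f"roughly a {legacy / new:.0f}:1 reduction")
```

With those assumed inputs, 20 old servers collapse to a couple of hosts -- the same "factors of 5, 6, or 10" range Bennett cites.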
These savings are very significant from a server point of view. On the storage side, you're eliminating the need for sparsely used, dedicated storage and moving to a shared, or virtualized, storage environment, with the same kind of cost-saving ratios at play. So, it's a profound impact on the infrastructure environment.
Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two.
... Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer's individual situation.
... If we're looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together -- application modernization, virtualization, and also data center design itself.
Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that and everybody looks at ways not to build a new data center. But, the point is that a data center is there to run applications that drive business value for the company itself.
What we don't do a good job of is understanding those applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in how the industry has grown up over the last 20-30 years or so. Availability is king. Well, energy has challenged that kingship, if you like, and so it is open to question.
Now, you could look at designing a facility where you have specific PODs (groups of compute resources) that are designed according to the application catalog's availability and priority requirements, tone down the cooling infrastructure that serves those particular areas, and retain specific PODs only for the applications that do require the highest levels of availability.
Just by doing that -- converging the facility design with application modernization -- you take millions and millions of dollars out of data center construction costs, and of course out of the ongoing operating costs that come from burning energy to cool the facility.
... One of the smartest things you can actually do as a business, as an IT manager, is to actually go and talk to your utility company and ask them what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.
That is a great starting point, where your energy use becomes measurable. Taking action to reduce your energy not only cuts your operating cost, but also lets you collect rebates from your energy company at the same time. It's a no-brainer.
Bennett: What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.
It's not just the applications and the portfolio. ... It's the data center facilities themselves and how they are optimized for this purpose -- both from a data center perspective and from the facility-as-a-building perspective.
By considering them comprehensively, and by working with the facilities teams as well as the IT teams, you can actually deliver a lot of incremental value -- and a lot of significant savings -- to the organization.
... For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It's a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking.
For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop.
Jagger: The premise here is to understand possible savings or the possible efficiency available to you through forensic analysis and modeling. That has got to be the starting point, and then understanding the costs of building that efficiency.
Then, you need a plan that lays out those costs and savings, the priorities in terms of structure and infrastructure, how that work converges with IT, and of course the payback on the investment that's required to build it in the first place.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.
Part 2 of 4: Web data services provide ease of data access and distribution from variety of sources, destinations
As enterprises seek to gain better insights into their markets, processes, and business development opportunities, they face a daunting challenge -- how to identify, gather, cleanse, and manage all of the relevant data and content being generated across the Web.
As the recession forces the need to identify and evaluate new revenue sources, businesses need to capture such web data services for business intelligence (BI) to work better and more fully. In Part 1 of our web data series we discussed how external data has grown in both volume and importance across the Internet, social networks, portals, and applications in recent years.
Enterprises need to know what's going on, and what's being said about them, across their markets. They need to share those web data service inferences quickly and easily with their internal users. The more relevant and useful content that enters into BI tools, the more powerful the BI outcomes -- especially as we look outside the enterprise for fast-shifting trends and business opportunities.
In this podcast, Part 2 of the series with Kapow Technologies, we identify how BI and web data services come together, and explore such additional subjects as text analytics and cloud computing. So, how do you get started, and how do you affordably manage web data services so that BI and business consumers get the intelligence and insights they need?
To find out, we brought together Jim Kobielus, senior analyst at Forrester Research, and Stefan Andreasen, co-founder and chief technology officer at Kapow Technologies. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Kobielus: The more relevant content you bring into your analytic environment the better, in terms of having a single view, or access in a unified fashion, to all the information that might be relevant to any possible decision you might make. But, clearly, there are lots of caveats, "gotchas," and trade-offs there.
One of these is that it becomes very expensive to discover, to capture, and to do all the relevant transformation, cleansing, storage, and delivery of all of that content. It becomes very expensive, especially as you bring more unstructured information from your content management system (CMS) or various applications from desktops and from social networks.
... Filtering the fire hose of this content is where this topic of web data services for BI comes in. Web data services describes that end-to-end analytic information pipelining process. It's really a fire hose that you filter at various points, so that when end users turn on their tap, they're not blown away by a massive stream. Rather, it's a stream of liquid intelligence that is palatable and consumable.
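Kobielus's image of a fire hose filtered at various points maps neatly onto a streaming pipeline. Here's a minimal Python sketch; the stage names and filtering rules are purely illustrative, not anything Kapow actually ships.

```python
# A minimal sketch of "filtering the fire hose": each stage is a generator,
# so items stream through without buffering the whole feed. All stages and
# rules here are illustrative assumptions.

def capture(raw_items):
    """Raw intake: normalize whitespace as items arrive."""
    for item in raw_items:
        yield item.strip()

def cleanse(items):
    """Drop empty records and normalize case."""
    for item in items:
        if item:
            yield item.lower()

def filter_relevant(items, keywords):
    """Final filter before the analyst's 'tap': keep only relevant items."""
    for item in items:
        if any(k in item for k in keywords):
            yield item

raw = ["  Acme launches new widget ", "", "Weather today: sunny",
       "ACME recall announced "]
tap = filter_relevant(cleanse(capture(raw)), keywords=["acme"])
print(list(tap))  # only items relevant to the analyst reach the tap
```

Because each stage is lazy, the "fire hose" is never held in memory at once -- items flow through one at a time, which is the point of pipelining the filtering.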
Andreasen: There is a fire hose of data out there. Some of that data is flowing easily, but some of it might only be dripping, and some might be inaccessible.
Think about it this way. The relevant data for your BI applications is located in various places. One is in your internal business applications. Another is your software-as-a-service (SaaS) business application, like Salesforce, etc. Others are at your business partners, your retailers, or your suppliers. Another is with government. The last one is on the World Wide Web, in those tens of millions of applications and data sources.
Accessible via browser
Today, all of the data that I just described is more or less accessible in a web browser. Web data services allow you to access all these data sources, using the interface that the web browser already uses. They deliver the results in a real-time, relevant way into SQL databases, directly into BI tools, or even as service-enabled and encapsulated data. The benefit is that IT can now better serve the analysts' need for new data -- a need that is almost always there.
What's even more important is the incremental daily improvement of existing reports. Analysts sit there, they find some new data source, and they say, "It would be really good if I could add this column of data to my report, maybe replace this data, or get this data in real-time rather than just once a week." It's those kinds of improvements that web data services can also really help with.
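The path Andreasen describes -- from data you can see in a browser to rows a BI tool can query -- can be sketched with nothing but the Python standard library. The HTML table and the schema below are made-up stand-ins for illustration, not Kapow's actual mechanism.

```python
# Sketch of the "web page to SQL database" path: parse a table out of HTML
# the way a browser sees it (the markup here is an invented stand-in for a
# real page) and land the rows in SQLite, where a BI tool could query them.

import sqlite3
from html.parser import HTMLParser

class TableRows(HTMLParser):
    """Collect the text of each <td> cell, grouped by <tr>."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr": self._row = []
        if tag == "td": self._in_td = True
    def handle_endtag(self, tag):
        if tag == "td": self._in_td = False
        if tag == "tr" and self._row: self.rows.append(tuple(self._row))
    def handle_data(self, data):
        if self._in_td: self._row.append(data.strip())

html = ("<table><tr><td>ACME</td><td>42.50</td></tr>"
        "<tr><td>Initech</td><td>13.10</td></tr></table>")
parser = TableRows()
parser.feed(html)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE quotes (company TEXT, price REAL)")
db.executemany("INSERT INTO quotes VALUES (?, ?)", parser.rows)
print(db.execute("SELECT company, price FROM quotes ORDER BY price").fetchall())
```

Once the rows are in a SQL table, the "custom feed" is just another query target for whatever reporting tool the analyst already uses.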
Kobielus: At Forrester, we see traditional BI as a basic analytics environment, with ad-hoc query, OLAP, and the like. That's traditional BI -- it's the core of pretty much every enterprise's environment.
Advanced analytics -- building on that initial investment and getting to this notion of an incremental add-on environment -- is really where a lot of established BI users are going. Advanced analytics means building on those core reporting, querying, and those other features with such tools as data mining and text analytics, but also complex event processing (CEP) with a front-end interactive visualization layer that often enables mashups of their own views by the end users.
... We see a strong push in the industry toward smashing those silos and bringing them all together. A big driver of that trend is that users, the enterprises, are demanding unified access to market intelligence and customer intelligence that's bubbling up from this massive Web 2.0 infrastructure, social networks, blogs, Twitter and the like.
Andreasen: Traditionally, for BI, we've been trying to gather all the data into one unified, centralized repository, and accessing the data from there. But, the world is getting more diverse and the data is spread in more and different silos. What companies realize today is that we need to get service-level access to the data, where they reside, rather than trying to assemble them all.
...Web data services can encapsulate or wrap the data silos that were residing with their business partners into services -- SOAP services, REST services, etc. -- and thereby get automated access to the data directly into the BI tool.
... So, tomorrow's data stores for BI, and today's as well, are really a combination of accessing data in your central data repositories and accessing it where it resides. ... Think about it. I'm an analyst and I work with the data. I feel I own the data. I type the data in. Then, when I need it in my report, I cannot get it there. It's like owning the house, but not having the key to the house. So, breaking down this barrier and giving analysts the key to the house, or actually giving IT a way to deliver the key to the house, is critical for the agility of BI going forward.
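Andreasen's idea of wrapping a partner's data silo as a REST service can be sketched as a tiny WSGI app. The endpoint shape, the dataset, and the JSON layout below are all my own assumptions for illustration.

```python
# Minimal sketch of "wrapping a data silo as a REST service": a WSGI app
# that exposes a partner's dataset (faked here as a dict) as JSON over HTTP.

import json

PARTNER_SILO = {  # stand-in for data living in a partner's application
    "sku-100": {"stock": 12, "price": 9.99},
    "sku-200": {"stock": 0,  "price": 4.50},
}

def silo_service(environ, start_response):
    """GET /<sku> -> JSON record, or a 404 for unknown SKUs."""
    sku = environ.get("PATH_INFO", "/").lstrip("/")
    record = PARTNER_SILO.get(sku)
    if record is None:
        start_response("404 Not Found", [("Content-Type", "application/json")])
        return [b'{"error": "unknown sku"}']
    start_response("200 OK", [("Content-Type", "application/json")])
    return [json.dumps(record).encode()]

# Any WSGI server can host this; calling the app directly shows the contract:
body = silo_service({"PATH_INFO": "/sku-100"}, lambda status, headers: None)
print(b"".join(body).decode())
```

A BI tool that can consume JSON or REST feeds could then pull from the silo where it resides, rather than waiting for the data to be assembled into a central repository.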
Tools are lacking
Today, the IT department often lacks the tools to deliver the custom feeds that the line of business is asking for. But, with web data services, you can actually deliver these feeds. The data being asked for is almost always data the business users already know, see, and work with in their business applications, with their business partners, and so on. They work with the data. They see it in their browsers, but they cannot get the custom feeds. With a web data services product, IT can deliver those custom feeds in a very short time.
Kobielus: The user feels frustration, because they go on the Web and into Google and can see the whole universe of information that's out there. So, for a mashup vision to be reality, organizations have got to go the next step.
... It's good to have these pre-configured connections through extract, transform and load (ETL) and the like into their data warehouse from various sources. But, there should also be ideally feeds in from various data aggregators. There are many commercial data aggregators out there who can provide discovery of a much broader range of data types -- financial, regulatory, and what not.
Also, within this ideal environment there should be user-driven source discovery through search, through pub-sub, and a variety of means. If all these source-discovery capabilities are provided in a unified environment with common tooling and interfaces, and are all feeding information and allowing users to dynamically update the information sets available to them in real-time, then that's the nirvana.
Andreasen: This is where Kapow and web data services come in, as a disruptive new way of solving a problem of delivering the data -- the real-time relevant data that the analyst needs.
The way it works is that, when you work with the data in a browser, you see it visually, you click on it, and you navigate tables and so on. The way our product works is that it allows you to instruct our system how to interact with a web application, just the same way as the line of business user.
...The beauty with web data services is that it's really accessing the data through the application front end, using credentials and encryptions that are already in place and approved. You're using the existing security mechanism to access the data, rather than opening up new security holes, with all the risk that that includes.
... This means that you access and work with the data in the world in which the end users see the data. It's all with no coding. It's all visual, all point and click. Any IT person can, with our product, turn data that you see in a browser into a real feed, a custom feed, virtually in minutes or in a few hours for something that would typically take days, weeks, or months -- or may even be impossible.
Thursday, October 1, 2009
Cloud computing by industry: Novel ways to collaborate via extended business processes
Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.
Welcome to a podcast discussion on how to make the most of cloud computing for innovative solving of industry-level problems. As enterprises seek to exploit cloud computing, business leaders are focused on new productivity benefits. Yet, the IT folks need to focus on the technology in order to propel those business solutions forward.
As enterprises confront cloud computing, they want to know what's going to enable new and potentially revolutionary business outcomes. How will business process innovation -- necessitated by the reset economy -- gain from using cloud-based services, models, and solutions?
Early examples of applying cloud to industry challenges, such as the recent GS1 Canada Food Recall Initiative, show that doing things in new ways can have huge payoffs.
We'll learn about the HP Cloud Product Recall Platform that provides the underlying infrastructure for the GS1 Canada food recall solution, and we will dig deeper into what cloud computing means for companies in the manufacturing and distribution industries and the "new era" of Moore's Law.
Here to help explain the benefits of cloud computing and vertical business transformation, we're joined by Mick Keyes, senior architect in the HP Chief Technology Office; Rebecca Lawson, director of Worldwide Cloud Marketing at HP, and Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Lawson: Everyone knows that "cloud" is a word that tends to get hugely overused. We try to think about what kinds of problems our customers are trying to solve, and what are some new technologies that are here now, or that are coming down the pike, to help them solve problems that currently can't be solved with traditional business processing approaches.
Rather than the cloud being about just reducing costs, by moving workloads to somebody else's virtual machine, we take a customer point of view -- in this case, manufacturing -- to say, "What are the problems that manufacturers have that can't be solved by traditional supply chain or business processing the way that we know it today, with all the implicated integrations and such?"
As we move forward, we see that different vertical markets -- for example, manufacturing or pharmaceuticals -- will start to have ecosystems evolve around them. These ecosystems will be a place, or a dynamic, that has technology-enabled cloud services that are accessible and sharable and that help collaboration and sharing across different constituents in that vertical market.
We think that, just as social networks have helped us all connect on a personal level with friends from the past and such, vertical ecosystems will serve business interests across large bodies of companies, organizations, or constituents, so that they can start to share, collaborate, and solve different kinds of issues that are germane to that industry.
A great example of that is what we're doing with the manufacturing industry around our collaboration with GS1, where we are solving problems related to traceability and recall.
Keyes: If you look at supply chains, food is a good example. It's one of the more complicated ones, actually. You can have anywhere up to 15-20 different entities involved in a supply chain.
In reality, you've got a farmer out there growing some food. When he harvests that food, he's got to move it to different manufacturers, processors, wholesalers, transportation, and to retail, before it finally gets to the actual consumer itself. There is a lot of data being gathered at each stage of that supply chain.
Coughlan: As a consumer, it gives you a lot more confidence that the health and safety issues are being dealt with, because, in some cases, this is a life-and-death situation. The sooner you solve the problem, the sooner everybody knows about it. You have a better opportunity of potentially saving lives.
As well as that, you're looking at brand protection and you're also looking at removing from the supply chain things that could have further knock-on effects as well.
Keyes: In the traditional way we looked at how that supply chain has traceability, they would have the infamous -- as I would call it -- "one step up, one step down" exchange of data, which really meant that each entity in the supply chain exchanged information only with the next one in line.
That's fine, but it's costly. Also, it doesn't allow for good visibility into the total supply chain, which is what the end goal actually is.
What we are saying to industry at the moment -- and this is the thesis we are developing here -- is that HP, with a cloud platform, will provide the hub, where people can either send data or allow us to access data. What the cloud will do is aggregate different pieces of information to provide value to all elements of the supply chain and give greater visibility into the supply chain itself.
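The difference between "one step up, one step down" and the hub model Keyes describes is easy to see in miniature: with a shared hub, the full chain of custody is one query. The event format below is an invented illustration, not the GS1 or HP schema.

```python
# Sketch of hub-style traceability: every entity reports its hand-off events
# to a shared hub, so the whole chain for a product can be reconstructed in
# one query -- instead of each entity knowing only its immediate neighbors.

hub = []  # the shared cloud hub: a flat log of hand-off events

def record_handoff(product, sender, receiver):
    hub.append({"product": product, "from": sender, "to": receiver})

def trace(product):
    """Full chain of custody for a product, farm to consumer."""
    hops = [e for e in hub if e["product"] == product]
    return [hops[0]["from"]] + [e["to"] for e in hops]

record_handoff("lot-42", "farm", "processor")
record_handoff("lot-42", "processor", "wholesaler")
record_handoff("lot-42", "wholesaler", "retailer")
print(" -> ".join(trace("lot-42")))
```

In a recall scenario, that single query is what lets everyone affected find out at once, rather than waiting for news to propagate one link at a time.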
... We have SaaS now, not just to any individual entity in the supply chain, but anybody who subscribes to our hub. We can aggregate all the information, and we're able to give them back very valuable information on how their product is used further up the supply chain. So we really look at it from a positive view also, about how this is creating benefits from a business point of view.
So, depending on what type of industry you're in, we're looking at this platform as being almost a repeatable type of offering, and you can start to lay out individual or specific industry services around this.
We're also looking at how you integrate this into the whole social-networking arena, because that's information and data out there. People are looking to consume information, or get involved in information sharing to a certain degree. We see that as a cool component also that we can perhaps do some BI around and be able to offer information to industry, consumers, and the regulatory bodies fairly quickly.
Coughlan: The point there is that cloud is enabling a convergence between enterprises. It's enabling enterprise collaboration, first of all, and then it's going one step further, where it's enabling the convergence of that enterprise collaboration with Web 2.0.
You can overlay a whole pile of things -- carbon footprints, dietary information, and ethical food. Not only is it going to be in the food area, as we said. It's going to be along every manufacturing supply chain -- pharmaceuticals, the motor industry, or whatever.
Lawson: The key to this is that this technology is not causing the manufacturers to do a lot of work. ... It's not a lot of effort on my part to participate in the benefits of being in that traceability and recall ecosystem, because I and all the other people along that supply chain are all contributing the relevant data that we already have. That's going to serve a greater whole, and we can all tap into that data as well.
Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.
Kapow and StrikeIron team-up to offer web data services capabilities to SMBs
Kapow's Web Data Services 7.0.0 will allow SMBs to wrap any Web site or Web application into RSS feeds or REST Web services. [Disclosure: Kapow is a sponsor of BriefingsDirect podcasts.]
Under Kapow's strategic partnership with StrikeIron, Web Data Services 7.0.0, which is available immediately, will be offered on StrikeIron's Web Services Catalog. The software-as-a-service (SaaS) distribution engine allows developers and business users to integrate live data from private and public Web applications and Web sites.
By using Kapow's latest offering, SMBs that need enterprise-class Web data services access and quality will have automated and structured access without resorting, as they did previously, to cutting and pasting the data from a Web browser. [Learn more about Web data services and business intelligence.]
Kapow's “no coding” technology enables companies to rapidly build, test and deploy standard RSS data feeds and REST web services delivery of real-time web data directly into common business applications such as Microsoft Excel, NetSuite or Salesforce as well as any RSS feed reader.
Kapow can also deliver feeds and services directly to any application builder that can access data in standard RSS, JSON and XML format, including IBM Mashup Center, IBM Rational EGL, JackBe and WaveMaker.
The feeds and services are constructed by a visual point-and-click desktop tool that enables users to create “robots” that automate the navigation and interaction with any Web application or Web site, providing secure and reliable access to the underlying data and business logic. This enables the collection of web intelligence and market data in real-time.
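For a sense of how such a feed gets consumed downstream, here's a standard-library sketch that parses a made-up RSS 2.0 feed into rows an application could use; the feed content is invented for illustration, not Kapow output.

```python
# Sketch of consuming an RSS 2.0 feed downstream, using only the standard
# library. The feed XML below is an invented example.

import xml.etree.ElementTree as ET

rss = """<rss version="2.0"><channel><title>Price watch</title>
<item><title>ACME widget</title><description>19.99</description></item>
<item><title>Initech widget</title><description>24.99</description></item>
</channel></rss>"""

root = ET.fromstring(rss)
items = [(i.findtext("title"), float(i.findtext("description")))
         for i in root.iter("item")]
for name, price in items:
    print(f"{name}: {price}")
```

The same rows could just as easily be poured into a spreadsheet, a mashup builder, or any feed reader -- which is the portability argument for standard RSS/JSON/XML delivery.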
Under the terms of the agreement, Kapow will maintain full technical and operational responsibility for Kapow Web Data Services, including enhancements and upgrades. StrikeIron will provide the commercialization capabilities, handling all customer relationship management functions, including sales, billing, and account support.
Private clouds: A valuable concept or buzzword bingo?
Take the BriefingsDirect middleware/ESB survey now.
By Ronald Schmelzer
Every once in a while, the machinery of marketing goes haywire and starts labeling all manner of things with inappropriate terminology. The general rationale of most marketers is that if there’s a bandwagon rolling along somewhere and gaining some traction in the marketplace, it’s best to jump on it while it’s rolling.
After all, much of the challenge of marketing products is getting the attention of your target customers in order to get an opportunity to pitch products or services to them. Of course, if it doesn’t work with one bandwagon, as the old adage goes, try, try again. This is why we often see the same products marketed under different labels and categories. Sure, the vendors will insist that they have indeed developed some new add-on or tweaked a user interface to put the new concept front and center, but at the core, the products remain fundamentally unchanged.
Now, I don’t want to sound overly pessimistic about product marketing and the state of IT research and development, since the industry couldn’t exist without innovations that are truly new and disruptive and change the very face of the market. However, this sort of innovation often comes not from the established vendors in the market (who have customer bases to grow and defend), but rather from small upstarts that have nothing to lose. It is in this context that we need to evaluate some of the marketing terminology currently coming to the fore around the cloud computing concept.
ZapThink has had many positive things to say about cloud computing, and we do believe that as a business model, technological approach, and service-oriented domain it will have significant impact on the way companies large and small procure, develop, deploy, and scale their applications. Indeed, we’re starting to see hundreds of companies that develop whole products and services without procuring a penny of internal IT hardware or software resources. This is the bonanza that is cloud computing.
Yet, we’re now starting to see the emergence of a more perplexing concept called “private clouds.” If the benefit of the cloud is primarily loosely coupled, location-independent virtualized services (implemented in a service-oriented manner, of course), and we’re doing this with the intent of reducing IT expenditures, then is there any value in a new concept called private clouds? How does the addition of this word “private” add any value to the sort of service-oriented cloud computing that we’ve been now talking about for a handful of years? Is this a valuable term, or mere marketing spin?
To attempt to gain some clarity around this issue, ZapThink reached out to a number of pundits and opinion-leaders in the space to get their thoughts and definitions on private cloud, and to no surprise, the definitions all varied significantly. Let’s explore these definitions and see what additional value (if any) they contribute to the cloud computing discussion.
Private cloud concept #1: Company-owned and operated, location-independent, virtualized (homogeneous) service infrastructure
My colleague, Jason Bloomberg, is of the opinion that a private cloud consists of infrastructure owned by a company to deploy services in a virtualized, location-independent manner. What differentiates a private cloud from simply implementing clustered applications or servers is that the cloud is not built with a specific service or application in mind.
Rather, it is an abstracted, virtualized environment that allows for deployment of a wide range of disparate services. It is important to note that in practical terms, companies will most likely not implement this vision of private clouds using a diversity of heterogeneous infrastructure. Indeed, it is in their best interests to control costs and complexity of support, training, and administration by implementing their private clouds using a single vendor stack.
So, this vision of private clouds is often a single-vendor (homogeneous) cluster of virtualized infrastructure that enables location-independent service consumption. Of course, implementing any sort of homogeneous stack reduces the need for loosely-coupled services, and thus weakens the service-oriented cloud computing value proposition as a whole for that company.
Private cloud concept #2: Virtualization plus dynamic provisioning (elasticity)
In a response to a Facebook post, Jean-Jacques Dubray comments that the above definition doesn’t go far enough. Rather, in order for the company-owned and implemented infrastructure to be considered a private cloud, it must include the concept of “elasticity.” Specifically, this means that the hardware and software resources must be provisioned in a dynamic manner, scaling up and down to meet changes in demand, thus enabling a more responsive and cost-sensitive approach to IT provisioning.
This idea of private clouds sounds a lot like the utility computing concept sold as part of IBM’s decade-old vision of on-demand computing. From this perspective, a private cloud is company-owned on-demand utility computing implemented with services instead of tightly coupled applications.
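The "elasticity" in concept #2 boils down to a feedback rule: measure demand, then recompute how many instances it takes to bring utilization back to a target. Here is a toy sketch of that rule (the sizing formula and the 70-percent target are invented for illustration, not any vendor's actual algorithm):

```python
import math

def desired_capacity(servers: int, avg_utilization: float,
                     target: float = 0.7) -> int:
    """Return the pool size that brings average utilization back to target.

    Toy model: total load is servers * avg_utilization; divide by the
    target utilization to get the capacity needed, and never let the
    pool drop below one server.
    """
    total_load = servers * avg_utilization
    return max(1, math.ceil(total_load / target))

# A demand spike makes 10 hot servers scale out...
scale_out = desired_capacity(10, 0.9)
# ...and a quiet period lets the pool shrink back.
scale_in = desired_capacity(10, 0.3)
```

Real provisioning systems layer cool-down timers and hysteresis on top of a rule like this so the pool doesn't thrash, but the economic argument -- pay for capacity that tracks demand -- is exactly this loop.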
Private cloud concept #3: Governed, virtualized, location-independent services
In a response to my tweet on the subject, David Chappell comments that the private cloud is really a response to some of the security and governance issues raised by the (public) cloud. Specifically, he states that a “private cloud (equals) more control over what and how.”
Reading between the 140-character lines, I can guess that his perspective is that a private cloud is a governed cloud: one that enables virtualized, location-independent services under the company's own control. For sure, there has been a lot of consternation over the fact that the most popular “public” clouds share infrastructure between customers and require that data and communications cross the company firewall.
This stresses out a lot of IT administrators and managers. In response, these folks insist that they want all the technological benefits of cloud computing without the governance risk of having it reside in someone else’s infrastructure. Basically, they want the virtualization, loose coupling, and location independence of cloud computing without the economic benefits of leveraging someone else’s costs and investments. They would rather own a version of Amazon EC2 than use it, solely for reasons of governance.
Many people are indeed concerned about these supposed governance and security drawbacks of cloud computing. However, rather than simply dismissing the economic benefits of the public clouds, why can’t we approach private clouds as a veneer that we place on top of the public clouds?
Couldn’t companies impose their governance and security requirements on third-party infrastructure, using company-owned governance tools and approaches to manage remote services? Couldn’t we simply demand that the public clouds provide greater governance and security control?
Basically, does the addition of the term private provide the same sort of value as it does in the context of the virtual private network (VPN)? We didn’t throw out the Internet because it was insecure and build a private Internet; we layered VPNs on top of it. So why should cloud computing be any different? Why create private clouds at all?
Private cloud concept #4: Internal business model for pay on demand consumption of location-independent, virtualized resources
JP Morgenthal takes an entirely different perspective on the private cloud concept and insists that the primary value of any cloud, whether implemented privately or acquired from a public vendor, is the business model of pay-as-you-go service consumption.
From this perspective, a private cloud is an internal business model that enables organizations to consume and procure internal, virtualized, loosely coupled services using a pay-on-demand model similar to a charge-back mechanism. Rather than an IT organization paying for and supporting the costs of business users in aggregate, it can provide those resources using the same business models employed by Amazon, Google, Salesforce.com and others in their public clouds.
In order to realize this vision of private clouds, companies need a means to enable transactional service purchases, auditing of service usage, and organizational methods for enabling such inter-departmental charges. At the most fundamental level, this vision of the private cloud treats IT as a business and a service provider to the rest of the organization.
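The metering piece of that vision is mechanically simple: record who consumed what, then price it against an internal rate card. A minimal sketch (the metric names and rates below are made up for illustration, not any real IT cost model):

```python
from collections import defaultdict

# Illustrative internal price list -- real rates would come from
# the IT organization's own cost model.
RATES = {"cpu_hour": 0.10, "gb_stored_month": 0.05}

def monthly_chargeback(usage_records):
    """Aggregate (department, metric, quantity) records into per-department bills."""
    bills = defaultdict(float)
    for dept, metric, qty in usage_records:
        bills[dept] += RATES[metric] * qty
    return dict(bills)

records = [
    ("marketing", "cpu_hour", 100),
    ("marketing", "gb_stored_month", 200),
    ("finance", "cpu_hour", 50),
]
bills = monthly_chargeback(records)
```

The hard part, as the paragraph above suggests, is not this arithmetic but the auditing and the organizational willingness to let inter-departmental charges actually flow.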
Private cloud concept #5: Marketing hype, pure and simple
TechTarget offers the most cynical view of the private cloud. In their words, a private cloud is a “marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall. Marketing media that uses the words ‘private cloud’ is designed to appeal to an organization that needs or wants more control over their data than they can get by using a third-party hosted service.” Basically, they opine that the term has marketing value only. Where does this leave IT practitioners? Reading between the lines, they encourage us to ignore the term altogether.
More fodder for pundits
Thomas Bittman from Gartner recently published a rather snarky blog post arguing that if we don’t get private clouds, we’re basically silly people who are missing the boat. In it, he states, “Can you find a better term? Go ahead.”
Yes, we can. "Service-oriented cloud computing" adequately defines an architectural and infrastructure approach to develop location-independent, loosely coupled services, in a manner that virtualizes and abstracts the implementation of these services. What additional value does the term “private” add to that? It’s not entirely clear, and as we can see from the discussion above, there’s no consensus.
Adding more fuel to the fire, a well-publicized video of Oracle’s Larry Ellison and follow-up audio post is now making the rounds where he (humorously or embarrassingly, depending on your perspective) pokes holes in the cloud computing concept as a whole and chastises IT marketing efforts.
Regardless of where you stand on the cloud computing discussion, the video sheds some light on Oracle’s perspective on this whole mess. While it would be hard to say whether Ellison speaks for all of Oracle (although you would think so), it indicates that even vendors are starting to chafe at the marketing hype that threatens to devalue billions of dollars of their own product investment over the prior decades.
The ZapThink take
The fact that there’s no single perspective on private cloud might indicate that none of the definitions really warrant separating the private cloud concept from that of cloud computing as a whole -- especially the service-oriented sort of clouds that ZapThink espouses.
One reasonable perspective is that the definitions discussed above are simply differing infrastructural and organizational approaches to implementing service-oriented cloud computing. However, those approaches should not warrant a whole new term and certainly not millions more in infrastructure expenditure.
Trying to create a new concept of private clouds from any of a number of perspectives -- architectural, infrastructural, organizational, governance, business model -- seems to introduce more confusion than clarification. After all, shouldn’t all clouds, private or not, have many of the benefits described above? Doesn’t the concept of a private, company-owned cloud in some ways weaken the cloud value proposition? Who really benefits from this private cloud discussion -- IT practitioners or vendors with products to sell?
The point of any new term should be to clarify and differentiate. If the term does neither, then it is part of the problem, not the solution. So, when vendors start pitching their warmed-over middleware stacks and now-dull enterprise service buses (ESB) as “private cloud” infrastructure stacks, ask yourself: Does this change what you are doing now, or is this just the beating of the bandwagon’s marketing drum?
The goal is not to buy more stuff – the goal is to provide the business increasing value from their existing IT investments. This is the purpose and goal of enterprise architecture and the reason why IT exists in the first place.
This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.
SOA and EA Training, Certification, and Networking Events
In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.
Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.
Wednesday, September 30, 2009
Open Mashup Alliance sets out to breed ease of apps and data access, portability
The mission of the OMA, the latest consortium on the technology scene, is to foster the successful use of web data services and enterprise mashup technologies and the adoption of an open language that promotes enterprise mashup interoperability and portability. This is a high priority for more and more enterprises, which is why the OMA could gain momentum.
In fact, it already has on one level. The founding members of the OMA are a diverse list of software vendors, consultants, tech service providers and other industry leaders that share a common interest: promoting the open, free-to-use Enterprise Mashup Markup Language (EMML) for the development, interoperability and compatibility of enterprise mashup offerings.
Charter members include Adobe, Bank of America, Capgemini, Hinchcliffe & Co., HP, Intel, JackBe, Kapow Technologies, ProgrammableWeb, Synteractive, and Xignite. Any organization that wants to advance EMML and enterprise mashup interoperability and compatibility can join the OMA. [Disclosure: HP and Kapow are sponsors of BriefingsDirect podcasts.]
Remove vendor lock-in
Michael Ogrinz, principal architect at Bank of America and author of the book Mashup Patterns‚ was right when he said the industry needs to remove vendor lock-in concerns raised by proprietary toolsets in order for enterprise mashups to take hold.
“We also need to inspire the innovative minds of the open-source community to start working in this space,” Ogrinz says. “By establishing an open standard for mashups, the OMA and EMML address both of these issues.”
Andy Mulholland, Global CTO at Capgemini and co-author of the book Mashup Corporations, has a different take. As he sees it, enterprises around the world are achieving excellent results with enterprise mashup solutions. But, he adds, these enterprises also realize they could reduce their risk and increase their value with solutions built on standardized vendor products. That’s a good observation and seems to be a driving force for the OMA.
But there is another driving force that resonates in a down economy: return on investment (ROI). Tim Hall, director of HP’s SOA Center, focused on the ROI aspects of enterprise mashup standards. He’s convinced enterprises can accelerate ROI, reduce the risks of mashup efforts and deliver real-time reporting of dynamic information to business users by adopting industry-wide open standards like EMML. “HP's collaboration with Open Mashup Alliance members to promote the standard design of mashups will help customers advance their SOA initiatives by allowing them to provide a rich user experience on top of their web services,” Hall says.
The EMML specification will be governed under the Creative Commons License and supported by a free-to-use EMML reference runtime engine. The Open Mashup Alliance will steward and enhance the EMML v1.0 specification for future contribution to a standards body.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
Doing nothing can be costliest IT course when legacy systems and applications are involved
This latest BriefingsDirect podcast discussion tackles the high -- and often under-appreciated -- cost for many enterprises of doing nothing about aging, monolithic applications. Not making a choice about legacy mainframe and poorly utilized applications is, in effect, making a choice not to transform and modernize the applications and their supporting systems.
Not doing anything about aging IT essentially embraces an ongoing cost structure that helps prevent new spending for efficiency-gaining IT innovations. It’s a choice to suspend applications on ossified platforms and to make their reuse and integration difficult, complex, and costly.
Doing nothing is a choice that, especially in a recession, hurts companies in multiple ways -- because successful transformation is the lifeblood of near and long-term productivity improvements.
Here to help us better understand the perils of continuing to do nothing about aging legacy and mainframe applications, we’re joined by four IT transformation experts from Hewlett-Packard (HP): Brad Hipps, product marketer for Application Lifecycle Management (ALM) and Applications Portfolio Software at HP; John Pickett from Enterprise Storage and Server marketing at HP; Paul Evans, worldwide marketing lead on Applications Transformation at HP, and Steve Woods, application transformation analyst and distinguished software engineer at HP Enterprise Services. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Evans: What we’re seeing is that the cost of legacy systems and the cost of supporting the mainframe hasn’t changed in 12 months. What has changed is the available cash that companies have to spend on IT, as, over time, that cash amount may have either been frozen or is being reduced. That puts even more pressure on the IT department and the CIO in how to spend that money, where to spend that money, and how to ensure alignment between what the business wants to do and where the technology needs to go.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.
Our concern is that there is a cost of doing nothing. People eventually end up spending their whole IT budgets on maintenance and upgrades and virtually nothing on innovation.
At a time when competitiveness is needed more than it was a year ago, there has to be a shift in the way we spend our IT dollars and where we spend our IT dollars. That means looking at the legacy software environments and the underpinning infrastructure. It’s absolutely a necessity.
Woods: For years, the biggest hurdle was that most customers would say they didn’t really have to make a decision, because the [replacement] performance wasn’t there. The performance-reliability wasn't there. That is there now. There is really no excuse not to move because of performance-reliability issues.
What's changing today is the ability to look at legacy source code. We have the tools now to look at the code and visualize it in ways that are very compelling.
What has also changed is the growth of architectural components, such as extract, transform and load (ETL) tools, data integration tools, and reporting tools. When we look at a large body of, say, 10 million lines of COBOL and we find that three million lines of that code are doing reporting, or maybe two million are doing ETL work, we typically suggest they move that asymmetrically to a new platform that does not use handwritten code.
That’s really risk aversion -- doing it very incrementally with low intrusion, and that’s also where the best return on investment (ROI) is. ... These tools have matured so that we have the performance and we also have the tools to help them understand their legacy systems today.
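Woods' point about carving the reporting and ETL portions out of a monolith starts with tallying what each line of code appears to do. The sketch below illustrates the idea with a crude keyword heuristic (this is a toy, not HP's actual analysis tooling, and real classifiers work on parsed program structure rather than substrings):

```python
def classify_lines(source_lines):
    """Tally source lines by apparent function using crude keyword matching."""
    counts = {"reporting": 0, "etl": 0, "other": 0}
    for line in source_lines:
        text = line.upper()
        if "REPORT" in text or "DISPLAY" in text:
            counts["reporting"] += 1
        elif "SORT" in text or "MERGE" in text:
            counts["etl"] += 1
        else:
            counts["other"] += 1
    return counts

sample = [
    "PERFORM PRINT-REPORT",
    "SORT WORK-FILE ON ASCENDING KEY CUST-ID",
    "ADD 1 TO COUNTER",
]
counts = classify_lines(sample)
```

Even a rough tally like this makes the asymmetric-migration argument concrete: if a third of the code base is reporting, that third can move to an off-the-shelf reporting tool without touching the core business logic.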
Pickett: Typically, when we take a look at the high end of applications that are going to be moving over from a legacy system, many times they’re sitting on a mainframe platform. One of the things that has changed over the last several years is the functionality gap that existed 5 or 10 years ago between the mainframe and open systems. That gap has not only been closed but, in some cases, open systems now exceed what’s available on the mainframe.
It’s not only a matter of cost, but it’s also factoring in the power and cooling as well. Certainly, what we’ve seen is that the cost savings that can be applied on the infrastructure side are then applied back into modernizing the application.
Hipps: This term "agility" gets used so often that people tend to forget what it means. The reality of today’s modern organization -- and this contrasts with even 5, certainly 10, years ago -- is that when we look at applications, they are everywhere. There has been an application explosion.
When we start talking about application transformation and we tie that trend to agility, what we’re acknowledging is that for the business to make any change today in the way it does business -- any new market initiative, any competitive threat it wants to respond to -- there is going to be an application involved, very likely “applications” plural.
The decisions you're going to make to transform your applications should all be pointed at, and informed by, shrinking the amount of time it takes you to turn around and realize some business initiative.
That's what we’re seeking with agility. Following pretty closely behind that, you can begin to see why there is a promise in cloud. It saves me a lot of infrastructural headaches. It’s supposed to obviate a lot of the challenges that I have around just standing up the application and getting it ready, let alone having to build the application itself.
So I think that is the view of transformation in terms of agility and why we’re seeing things like cloud. These other things really start to point the way toward greater agility.
... I tend to think that application transformation, in most ways, is about breaking up and distributing that which was previously self-contained and closed.
Whether you're looking at moving from mainframe processing to distributed processing, or from distributed processing to virtualization; whether you're talking about the application teams themselves, which are now some combination of in-house, near-shore, offshore, and outsourced -- a distribution of teams from a single building to all around the world; or the architectures themselves, which have gone from monolithic, fairly brittle things to services-driven things -- the pattern is the same.
You can look at any one of those trends and begin to speak about benefits, whether it’s leveraging a better global cost basis or, on the architectural side, the fundamental thing we’re trying to do, which is to say, "Let’s move away from a world in which everything is handcrafted."
Assembly-line model
Let’s get much closer to the assembly-line model, where I have a series of preexisting, trustworthy components; I know where they are, I know what they do, and my work now becomes really a matter of assembling them. They can take any variety of shapes based on my need, because of the components I have created.
We're getting back to this idea of lower cost and increased agility. We can only imagine how certain car manufacturers would be doing if they were handcrafting every car. We moved to the assembly line for a reason, and software has typically lagged other engineering disciplines. Here we’re finally going to catch up. We're finally going to recognize that we can take an assembly-line approach to the creation of applications as well, with all the intended benefits.
Evans: ... Once we have removed that handwritten code -- code that is bigger than it needs to be to get the job done -- it’s out, it’s finished with, and then we can start looking at economics that are totally different going forward, where we can actually flip this ratio.
Today, we may spend 80 percent or 90 percent of our IT budget on maintenance, and 10 percent on innovation. What we want to do is flip it. We're not going to flip it in a year or maybe even two, but we have got to take steps. If we don’t start taking steps, it will never go away.
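To make the arithmetic of "flipping the ratio" concrete, here is a toy projection of the maintenance share of an IT budget. The 10-percent annual reduction is an assumed figure for illustration, not one Evans cites; the point it demonstrates is his: even steady modernization takes well more than a year or two to invert an 85/15 split.

```python
def project_maintenance_share(start: float = 0.85,
                              annual_reduction: float = 0.10,
                              years: int = 5):
    """Project the maintenance share of the IT budget year by year,
    assuming modernization trims it by a fixed fraction annually;
    whatever is freed up becomes available for innovation."""
    shares = []
    share = start
    for _ in range(years):
        share *= 1 - annual_reduction
        shares.append(share)
    return shares

trajectory = project_maintenance_share()
```

After five years of 10-percent annual reductions, maintenance still consumes roughly half the budget -- which is why the excerpt stresses taking steps now rather than expecting the flip overnight.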