Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.
Read a full transcript of the discussion.
Building quality into applications early in development sounds nice, but actually making it happen delivers significant cost savings, repeatable quality assurance processes, higher user satisfaction, and shorter development cycles. The results reward developers, end users, and IT operators alike.
To better understand the journey to quality assurance for new applications -- and the processes that work best -- BriefingsDirect interviewed IT executives at FICO, Gevity and JetBlue in a podcast discussion moderated by me, Dana Gardner. It comes as part of a special BriefingsDirect podcast series from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas this week.
Listen as we hear from Matt Dixon, senior manager of tools and processes at FICO; Vito Melfi, vice president of IT operations at Gevity, a part of TriNet; and HP Award of Excellence winner Sagi Varghese, manager of quality assurance at JetBlue.
Read a full transcript of the discussion.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.
Friday, June 19, 2009
HP Software marketing head Anton Knolmar delves into creating new IT economies of performance
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.
Read a full transcript of the discussion.
IT departments nowadays have to do more with less, gaining additional productivity while spending less money. It sounds simple, but making it happen is very complex.
How do IT departments and companies approach this problem? How will cloud computing and "fluid sourcing" options help or hinder the process? And how can IT budgets slide while expectations rise that new architectural approaches can be adopted with low risk?
To probe deeper into how the harsh new IT economies of performance can be managed, BriefingsDirect sat down with Anton Knolmar, Vice President of Marketing for HP Software & Solutions, for a discussion moderated by me, Dana Gardner. It comes as part of a special BriefingsDirect podcast series from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas this week.
Here are some excerpts:
We've just come out of an executive track. We had about 70 people gathered for the discussion. What is at the top of their minds is all about linking IT with the business. This is a story that we've been telling now for more than 10 or 15 years, and the storyline is not over.
They’re still trying to bridge the gap and talk business language, instead of IT language. On the other hand, they're trying as well to look at the emerging trends. What the heck does this cloud mean for them? How can they do cloud computing here? Does it bring added value to them? What’s the business outcome they can drive out of those activities?
What companies are facing at the moment is that a lot of these activities that were going on in the past -- utility computing, Adaptive Enterprise, eServices -- failed because they couldn’t be managed, even though they were out there on the Web, on the Internet.
Our offerings around the cloud at the moment are governance tools to go along with the cloud. You can really manage the cloud. You can really secure the cloud. And, you can get the right performance out of the cloud. That’s our offering at the moment to our customers. They can take the first step, get this one right, and move into the cloud environment.
Mitigation of risk will never go away. At the moment, everyone is talking about reduction of costs, but there is always a risk factor attached to it. Hopefully, the outcome will be that a lot of companies can talk about their revenue growth again, moving from 2009 into 2010.
We are ready to drive those three angles. How can we help customers drive revenue growth? How can we help them mitigate risk? And, on the other side, how can we help them get their costs under control? These are the three angles that will be on the table for quite some time.
The developer community, as you said, has different concerns in terms of developing applications and developing things for the cloud as well. Our approach at this time is to enable them to have the appropriate development and testing tools in terms of quality, performance, and security. These are essential for those people who have to develop applications for the cloud. Those are plugged in immediately, are ready to go out there, and can be managed across the lifecycle.
Getting the right information to the right place and making the appropriate decisions are still at the top of the agenda for a lot of our customers at the moment. It’s been the number one issue for quite some time, and I think it will be the number one issue for quite some time.
We have offerings in these four lines of business in HP Software and Solutions. One of them gathers around the business intelligence piece. What we are investigating at the moment is really how we can bring those offerings more directly to our customers in terms of purchasing and licensing, and how we can bring those offerings into a kind of cloud offering.
But that still needs some further negotiation inside the company about development products. That’s definitely an interesting angle.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.
Who's Architecting the Cloud?
By Ron Schmelzer
This guest post comes courtesy of ZapThink. Ron Schmelzer is a senior analyst at ZapThink. You can reach him here.
As the hype cycle for cloud computing continues to gather steam, an increasing number of end users are starting to see the silver lining, while others are simply lost in the fog. It is clear that the debate over the definition, business model, and benefits of cloud will continue for some time, but it is also clear that the sluggish economic environment is increasing the appeal of having someone else pay for the robust infrastructure needed to run one’s applications. Yet, all this talk of leveraging cloud capabilities, or perhaps even building one’s own cloud, whether for public or private consumption, introduces thorny problems. How can we make sure that the cloud will bring us closer to the heavenly vision of IT we search for rather than a fog that hides a complex mess? Who will make sure that the cloud vision isn’t just another reinterpretation of the Software-as-a-Service (SaaS), Application Service Provider (ASP), grid and utility computing model that provided some technical answers but didn’t simplify anything for the internal organization? Who is architecting this mess?
Architecture and the Utility Services Cloud
Most of the time, when people point to practical, in-production examples of cloud computing efforts, they are talking about the sorts of utility services offered by Amazon.com, Google, Salesforce.com, and others. The Services offered in these clouds are not built with any particular application in mind, but rather whole categories of applications. For obvious reasons, these cloud providers seek to leverage economies of scale by serving the largest possible audience using a handful of highly reusable Services, where reuse is defined by usage in multiple contexts. For these cloud providers, the utility Services simultaneously provide a source of revenue as well as a platform their customers use to replace proprietary, in-house infrastructure and middleware.
Given that the emphasis of these Services is to meet the needs of a large and continuously growing audience with diverse requirements, the utility cloud provider’s primary focus is placed on infrastructural concerns. As a result, it’s the infrastructure technologists who are in charge of this cloud. When the “architecture team” meets at these cloud providers, what problems are they aiming to solve? Business problems? Certainly not. In most cases, the architecture teams for these providers (we’ve been privy to a number of these conversations) focus almost exclusively on technology and infrastructural concerns. Key conversations revolve around performance optimization, implementation change management, optimizing the balance between efficiency and cost, meeting reliability and uptime concerns, and addressing privacy, security, and governance issues.
Where’s the business in all this? The answer: nowhere. Where should the business be in all this? That’s a tough question to answer because without Service consumers, the cloud wouldn’t exist at all. However, it is not the goal of the cloud provider to meet any specific business requirements. Rather, the requirements are aggregated to create a business “persona” that is the focus of continual Service releases. In this manner, one could argue that there are no enterprise architects providing any value in this environment. The most pervasive form of architecture done in these environments is more akin to Information Technology Infrastructure Library (ITIL) approaches rather than any form of enterprise architecture (EA). Utility clouds are the domain of infrastructure experts, not business-IT gap bridgers or process modelers, and one could argue that this status quo will probably never change.
Architecture and the Application (Process) Cloud
However, the utility Service vision of the cloud is not the only one. Indeed, we’re starting to see the emergence of application and process clouds that provide the same infrastructural and economic benefits of clouds, but applied to process-specific concerns. These cloud providers enable the outsourcing of entire processes that run in a virtualized cloud environment as a way of handling variability in scale. For example, an insurance company can use a cloud provider's claims processing Service when its internal capacity is not sufficient to meet demand. As long as the process is Service-oriented, this approach works well and leverages the strength of the cloud's abstract infrastructure capability while staying focused on the process. This way, an organization can have its internal processes augmented by third-party cloud processes. For example, insurance clouds provide elastic capabilities for insurance applications as demand ebbs and flows. Likewise, banking, supply chain, retail, and other process-specific clouds provide cloud computing benefits for specific groups of business users.
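That bursting pattern is easy to picture in code. Below is a minimal sketch, assuming a hypothetical cloud claims endpoint and invented capacity figures (nothing here is a real provider API), of how an insurer might spill claims over to a process cloud only once in-house capacity is exhausted:

```python
import queue

IN_HOUSE_CAPACITY = 100  # assumed in-house claims capacity, invented for the example
CLOUD_ENDPOINT = "https://claims.example-cloud.net/process"  # hypothetical URL

local_queue: "queue.Queue[dict]" = queue.Queue(maxsize=IN_HOUSE_CAPACITY)

def send_to_cloud(claim: dict) -> None:
    # In practice this would be a call to the provider's claims Service;
    # stubbed out here to keep the sketch self-contained.
    print(f"bursting claim {claim['id']} to {CLOUD_ENDPOINT}")

def route_claim(claim: dict) -> str:
    """Prefer in-house processing; burst to the cloud Service on overflow."""
    try:
        local_queue.put_nowait(claim)  # in-house capacity still available
        return "in-house"
    except queue.Full:
        send_to_cloud(claim)           # demand exceeds capacity: burst out
        return "cloud"

# The first IN_HOUSE_CAPACITY claims stay local; the overflow goes to the cloud.
for i in range(IN_HOUSE_CAPACITY + 2):
    route_claim({"id": i})
```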
In this environment, the cloud provider needs to balance two different, but equal, concerns: the infrastructural issues of the sort described above, and the challenge of meeting continuously changing business requirements. When application-specific cloud provider architect groups meet, their conversations look very different from those of utility Service cloud providers. Rather than focusing on infrastructural issues as they try to meet the common denominator of needs (“speeds and feeds”), the conversation usually revolves around how the team will meet new business process requirements given the existing set of Services and infrastructure. In many ways, these teams have a true EA conversation: the continuously changing and diverse business requirements on the one hand, and the technical capabilities on the other. These EA conversations invoke aspects of Agile methodologies and EA frameworks more so than ITIL. Rather than trying to minimize the set of business processes handled by the cloud, they seek to continuously expand the universe of processes addressed.
As we often discuss in our Licensed ZapThink Architect (LZA) SOA training courses, the job of the enterprise architecture team is to optimize the conceptual equation of producing the smallest set of Services that meet the largest number of business processes. You don’t want to produce too many Services; otherwise there’s waste. Likewise, you don’t want to produce too few Services, as that constrains the number of business processes you can address. As new Services are introduced, the universe of business processes addressed likewise increases. Application- and process-specific cloud providers are businesses that must justify their existence by staying focused on the business without impacting existing operations. Sounds like something all enterprise architecture teams should do, no?
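That "smallest set of Services covering the most processes" equation is, at heart, a set-cover problem. As a rough illustration only, with entirely invented Service and process names, a greedy heuristic makes the trade-off concrete:

```python
# The EA trade-off as a set-cover sketch. Every name below is invented.
candidate_services = {
    "QuoteService":   {"new-policy", "renewal"},
    "ClaimsService":  {"claims-intake", "claims-review"},
    "BillingService": {"renewal", "payment"},
    "AuditService":   {"claims-review"},
}
required_processes = {"new-policy", "renewal", "claims-intake",
                      "claims-review", "payment"}

chosen, uncovered = [], set(required_processes)
while uncovered:
    # Greedy heuristic: pick the Service covering the most uncovered processes.
    best = max(candidate_services,
               key=lambda s: len(candidate_services[s] & uncovered))
    if not candidate_services[best] & uncovered:
        break  # remaining processes have no existing Service; new ones are needed
    chosen.append(best)
    uncovered -= candidate_services[best]

print(chosen)     # ['QuoteService', 'ClaimsService', 'BillingService']
print(uncovered)  # set() -- every process covered without AuditService
```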
The ZapThink Take
In many ways, the discussion of architecture has been given short shrift in cloud computing conversations. In much the same way that the Service-Oriented Architecture (SOA) conversation degenerated into a conversation about the (often unnecessary) Enterprise Services Bus (ESB), the cloud conversation is degenerating into one about the infrastructure needed to handle scalable Service provider volume. And where is the conversation about the business process? Unless you are planning to build a general-purpose Service provider cloud to compete with the likes of Amazon.com and others, you should be focused on where the opportunity is: in the process. And to focus on the process while keeping an eye on the technology requires an enterprise architecture perspective.
The mistake that many cloud-consuming companies are making is that the cloud is giving them an excuse not to think about enterprise architecture at all.
The thought going through the head of many a supposed architect is: “Whew, thank goodness we’re putting this in the cloud, so I don’t have to invest in architecture.” Wow, what a mistake. These companies will be in for a rude awakening when they realize that all they’ve done is shift their internal mess, over which they at least had some control and visibility, to an external mess over which they have less control. Enterprise architecture doesn’t go away simply because someone else is hosting or providing your Services. Organizations that want any chance of improving their agility, flexibility, reliability, and performance need to be in charge of their own architecture. There is no other option.
Given that too few cloud computing providers have your business in mind when they architect their solutions, and the ones that have a process-specific business model and approach aren’t concerned with your specific business, it lands in the laps of enterprise architects within the organization to plan, manage, and govern their own architecture. Once again, the refrain is that SOA is not something you buy, but something you do. Perhaps we can start hearing the same mantra with cloud computing? Or will the cloud succumb to the same short-sighted market pressure that doomed the ASP model and still plagues SaaS approaches? It’s not up to vendors to answer this question. It’s up to you … the enterprise architect. There are no shortcuts to EA.
This guest post comes courtesy of ZapThink. Ron Schmelzer, a senior analyst at ZapThink, can be reached here.
SOA and EA Training, Certification, and Networking Events
In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.
Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.
Tuesday, June 16, 2009
HP unveils financial planning and analysis solutions designed to both optimize and modernize IT operations
LAS VEGAS -- Hewlett-Packard (HP) today unveiled its new HP Financial Planning and Analysis (FP&A) solutions, aimed at recession-beleaguered IT executives who need to cut costs, prepare for a service-based future, and run their departments like a business -- all at the same time.
FP&A is part of HP’s expanding IT Financial Management (ITFM) portfolio designed to help chief information officers (CIOs) and IT managers create comprehensive financial transparency, optimize costs deeply but prudently, and newly demonstrate the business value of IT services.
In a related announcement here at the HP Software Universe conference this week, HP unveiled enhancements to its project and portfolio management (PPM) solution for planning and organizing IT investments.
HP also opened its related Tech Forum conference here this week. For the second year in a row, BriefingsDirect will cover the HP Software Universe 2009 conference through a series of podcasts, blogs, transcripts and Twitter entries. [Disclosure: HP is sponsor of BriefingsDirect podcasts.]
Follow the HP Software Universe 2009 conference on Twitter by searching on #HPSU09.
HP Project and Portfolio Management (PPM) Center 8.0 arrives as a key component in ITFM, providing integrated capabilities for IT portfolio investment management, global resource efficiencies and IT financial transparency.
“PPM popularity is on the rise as organizations align planned business investments with IT project portfolios,” said Daniel Stang, principal research analyst at Gartner, in a release.
Other analysts and I are hearing consistently from IT executives that cost-optimization, cost-containment, and cost-reduction initiatives are the top priorities being driven from the business side onto IT.
Business leaders are demanding a clear understanding of all IT costs and benefits as the global recession lingers, even if it is no longer steeply deepening. HP’s enhanced IT planning and analysis solutions are designed to help IT executives reduce costs without jeopardizing IT's ability to support future growth when it's called for.
The recession therefore accelerates the need to reduce total IT cost through identification and elimination of wasteful operations and practices. But at the same time, IT departments need to better define and implement streamlined processes for operations -- and to show the near and far business value of any new projects.
As part of the opening keynote address here today, Andy Isherwood, Vice President and General Manager of HP Software and Solutions, said the recession compels better management of IT. CIOs need to reduce costs, yes, but they should do so without jeopardizing future growth.
Consolidating IT cuts costs and saves energy by focusing on the operational inefficiencies up front. "It's about getting down and dirty, not pie in the sky solutions," said Isherwood.
Along with consolidation, IT leaders can increasingly automate and virtualize infrastructure and data centers. Combined with greater financial management, IT performance analytics, and IT resources optimization, enterprises can cut their IT operations bills while setting the stage for the new phases of advancement.
And those new benefits, said Isherwood, include using flexible sourcing, from on-premises data centers to outsourcers like HP's EDS, as well as clouds, either on-premises or via off-premises partners like Amazon Web Services. As Ann Livermore of HP said yesterday: Everything as a service.
HP is already preparing to better manage and govern cloud transitions with its Cloud Assure offering, which joins IT financial management, IT performance analytics, and resource management as the next major focuses for the HP Software and Solutions group.
To sum up, Isherwood said that HP's major solution drives are around IT Management Software, Information Management Software, BI Solutions, and Communications and Media Solutions.
HP expects that after a 12-month period of operational optimization initiatives, CIOs will also seek more transformative IT functional delivery improvements, including such next-generation data center bulwarks as consolidation, automation, and virtualization.
Today's pressing IT management and architecture decisions, then, need to gain from better financial management tools, IT performance analytics, and IT resource optimization techniques -- for both near- and long-term benefits.
These financial performance indicator insights and disciplines for IT will also place CIOs in a better position to look at and pursue future flexible and cost-reducing sourcing options. Those are sure to include modernizing in-house legacy deployments, outsourcing to providers such as HP's EDS, and exploring a variety of burgeoning third-party cloud offerings (on premises, off premises, or managed hybrids).
Knowing the true costs and benefits of complex and often sprawling IT portfolios quickly helps improve the financial performance, while setting up the ability to meaningfully compare and contrast current with future IT deployment scenarios. Who knows if cloud computing will save money if we don't know the true costs of all-on-premises approaches?
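Even a back-of-the-envelope model makes the point. The sketch below, with entirely invented cost figures, shows the kind of side-by-side comparison that is impossible without a true on-premises baseline:

```python
# All figures are invented; the point is the shape of the comparison,
# not the numbers.
on_prem = {
    "hardware_amortized": 120_000,  # per year
    "power_and_cooling":   30_000,
    "admin_staff":        180_000,
    "software_licenses":   70_000,
}
cloud = {
    "instance_hours":     250_000 * 0.40,  # assumed hours x assumed hourly rate
    "storage_and_egress":  45_000,
    "migration_amortized": 25_000,
}

on_prem_total = sum(on_prem.values())
cloud_total = sum(cloud.values())
print(f"on-premises: ${on_prem_total:,.0f}/yr  cloud: ${cloud_total:,.0f}/yr")
print(f"difference:  ${on_prem_total - cloud_total:,.0f}/yr")
```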
Gaining real-time visibility into dynamic IT cost structures provides a powerful tool for reducing cost, while also maintaining and improving overall performance. Holistic visibility across an entire IT portfolio also develops the visual analytics that can help better probe for cost improvements and uncover waste.
This is where the HP planning, analysis and financial management solution comes to the rescue in terms of value, optimization priorities, and future planning comparisons.
The HP Financial Planning and Analysis product announced here today is designed to help organizations understand costs from a service-based perspective. It provides a common extract transform load (ETL) capability that can pull information from data sources, including HP PPM and asset management products as well as non-HP data sources.
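To illustrate the idea only -- this is not HP's actual FP&A interface, and the source names and record fields below are invented -- an extract-transform-load pass that normalizes cost exports from several systems into one model might look like this:

```python
import csv
import io

# Invented stand-ins for exports from two cost systems.
ppm_export = "project,cost\nERP upgrade,125000\nPortal rewrite,80000\n"
asset_export = "asset,cost\nBlade chassis,60000\nSAN array,95000\n"

def extract(raw: str) -> list:
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list, source: str) -> list:
    # Normalize each source into a common (source, item, cost) shape.
    key = "project" if "project" in rows[0] else "asset"
    return [{"source": source, "item": r[key], "cost": float(r["cost"])}
            for r in rows]

def load(rows: list, warehouse: list) -> None:
    warehouse.extend(rows)

warehouse = []
load(transform(extract(ppm_export), "PPM"), warehouse)
load(transform(extract(asset_export), "AssetManager"), warehouse)
print(sum(r["cost"] for r in warehouse))  # 360000.0 -- total cost across sources
```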
Cost Explorer, a key component of FP&A, provides business intelligence (BI) capabilities for visualizing IT cost data. Users see color-coded data displays that help identify different dimensions and variances in costs.
HP FP&A can be run stand-alone or in conjunction with other HP software products such as HP Asset Manager and HP Configuration Management System, as well as the newly enhanced HP Project Portfolio Management (PPM) Center 8.0.
Along with the software products, HP is also offering consulting services based on best practices, including:
- Strategy and Advisory Services to help synthesize organizational requirements, data, process and technical gaps for developing detailed implementation roadmaps.
- Implementation Services to provide BI services for strategic decision making including forecasting budgetary needs, quantifying the value of IT services delivered to the business, improving cost efficiency, and aligning IT resources with business needs.
- Process Consulting and Solution Implementation Services based on the HP Service Management Reference Model help in deploying HP ITFM and HP PPM to get improved business results.
- Best practices for Configuration Management Systems help accelerate deployment and provide a use model for customers to identify IT assets and relate them to the costs of the services delivered to the business.
Enhancements in HP PPM Center 8.0 include:
- IT portfolio investment management for improved alignment between IT and business with cash flow analysis that supports business reviews with actionable, real-time information.
- HP PPM Center Mobility Access for governing IT expenditures through secure and automated checkpoints from mobile devices, which send email notifications and workflow actions to cell phones and PDAs.
- Global resource efficiencies for managing human resources with reports and notifications in the recipient’s language.
- Additional IT financial transparency and controls for decision support with a comprehensive financial summary that aggregates IT investment data and related analyses.
- HP Universal Configuration Management Database (UCMDB) integration with HP PPM Center 8.0 provides advanced search capabilities for business and technical users.
- HP Service Manager integration offers a single IT services access point, so users can access services by creating an HP PPM Center proposal from an HP Service Manager catalog item via Web services.
HP is also offering new Software Professional Services for HP PPM 8.0, including:
- Solution Consulting Services for PPM 8.0 providing design and implementation consulting to help customers reduce IT costs by automating enterprise-wide portfolio management via services.
- Fast Track Deployment and Upgrades to help speed deployment of the new software.
- Process Consulting Services to help customers make use of best-practice guides for industry standards. HP delivers standardized processes based on HP and industry best practices such as Information Technology Infrastructure Library (ITIL) v3, COBIT, and ISO.
'Everything' as a service future means transforming IT for efficiency and scale, says HP's Livermore
LAS VEGAS -- Hewlett-Packard opened its Tech Forum 2009 conference here Monday evening with a portrait of a future in which everything in IT is delivered -- and perhaps consumed -- as a service.
Ann Livermore, Executive Vice President for HP's Technology Solutions Group (TSG), said the recession and technology advances have combined to offer a new era in computing, one where a hybrid of sourcing and delivery means moves all IT assets to the level of a service.
Livermore identified three mega trends now buffeting the IT landscape: Information explosion, Everything as a Service, and Data Center Transformation.
HP expects that after a 12-month period of operational optimization initiatives, CIOs will also seek more transformative IT functional delivery improvements, including such next-generation data center bulwarks as consolidation, automation, and virtualization. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
But CIOs and IT managers will also see more infrastructure, application development, applications, data, business intelligence, and IT management delivered as services, whether from on-premises next-generation data centers, from services abstracted from legacy systems, via outsourced IT operations, or from a growing ecology of third-party cloud providers.
In addition, Livermore said that providing such IT services, via HP's acquisition of EDS, now accounts for the majority of HP's revenues. "Services is now HP's biggest business," she said.
The current goal then for IT is to manage IT operations for cost efficiency and performance optimization while preparing for a transformation to the "everything" services future.
In a hint of a building tussle with Cisco, Livermore said much more is to come from HP in networking "equipment and solutions." "We'll be more aggressive ... we're serious," she said. Cisco has entered HP's server business turf, and HP has been providing more networking equipment -- Cisco's core business -- to the market. A market clash is under way. Brocade, a Cisco competitor, is a major sponsor of this year's Tech Forum conference.
See more about what went on during the keynote in a live stream by doing a Twitter search on #HPTF.
Livermore's keynote address also emphasized energy conservation as an essential ingredient of today's IT operations. If you don't transform your data center, you'll find yourself running out of electricity in a few years, she told the attendees. I believe that.
Keynote speaker Paul Miller, HP Vice President of Enterprise Servers and Storage Marketing, sees strong growth for HP in virtualization, private cloud, and "Extreme ScaleOut" products.
So much so that he introduced a new product, the HP Extreme ScaleOut server, a powerful pooled-resource server that can be managed as a cloud and helps conserve energy, space, and costs. The device is based on ProLiant SL technology but is "skinless," meaning it fits into racks with much less weight, waste, and footprint. Mean and green was the message.
Furthermore, Miller said "storage as a service" is coming from HP: it works like a storage area network (SAN), but with far less complexity, and like a private cloud, with much lower total storage cost.
Lastly, Prith Banerjee, Senior Vice President and Research Director of HP Labs, provided a fascinating look at HP research efforts in eight areas:
--Digital commercial printing
--Intelligent infrastructure
--Content transformation
--Immersive interactions
--Information management
--Analytics
--Cloud
--Sustainability (i.e., green IT)
If you have a chance to watch Banerjee's presentation online, I highly recommend it.
My major take-away from the presentations was that HP, and much of the IT industry, now knows what needs to be done to move IT into its next era. It's all pretty clear. But getting there ... that's the rub. And to fail is probably to die as a competitive organization.
PostgreSQL delivers alternative for MySQL users wary of Oracle's Sun acquisition
Potential MySQL customers who are wary of the database's future under Oracle stewardship have a possible alternative in Postgres Plus, an open source offering from EnterpriseDB, says that company’s CEO, Ed Boyajian.
He sees reality biting the MySQL community amid a feeding frenzy in the software acquisition food chain: Sun Microsystems gobbled up MySQL last year, and now Oracle is likely to snap up Sun. “When MySQL got acquired by Sun, a lot of that community got fractured,” Boyajian told BriefingsDirect. “That fracturing started with Sun and continues with Oracle, so I think that will have an impact on adoption patterns.”
He says potential MySQL customers, wary of getting “sucked into Oracle’s sales machine,” are looking at EnterpriseDB’s Postgres Plus® Advanced Server, the company’s relational database management system (RDBMS) product, which is based on the PostgreSQL open source database.
Competing with Oracle is nothing new for EnterpriseDB, which has been playing David to Oracle’s Goliath in the database market for years. This David has its own Goliath watching its back, though: IBM is an investor in, and has a partnership with, the Westford, Mass., company, which was founded in 2004.
The latest version of Postgres Plus, being released today, is touted by EnterpriseDB as “the fifth-generation of Oracle compatibility technology,” which allows Oracle customers to move applications to the EnterpriseDB database.
This version of Postgres Plus is designed to require “minimal migration effort” for Oracle customers looking for a low-cost, open source-based RDBMS as an alternative to the giant vendor’s proprietary database products.
Oracle buying Sun and acquiring MySQL does have a positive side, Boyajian says. “When Oracle acquires Sun and gets a great asset like MySQL, it’s a great endorsement for open source software,” he said.
His company maintains a close relationship with the Postgres community, Boyajian said. Several EnterpriseDB employees are "key core members" of the Postgres project, he said.
One of the selling points for Postgres Plus is that it runs on commodity hardware and now it is being deployed in virtual and cloud environments.
“There are some customers that are using blade servers,” Jim Mlodgenski, EnterpriseDB's chief architect, told BriefingsDirect. “For the cache servers [used heavily in social networking apps], you don’t need much horsepower as far as the CPU goes.”
Social networking sites have greater requirements for maintaining a data cache in memory rather than for CPU power, he explained. Postgres Plus offers a feature called “Infinite Cache” to support those requirements.
Some customers take advantage of commodity prices for “one CPU and a lot of RAM,” Mlodgenski said. “Using commodity hardware at the caching layer, you’re able to leverage low-cost commodity hardware to cache everything and get the performance benefits of running everything in memory without investing a lot in high-end SAN [storage area network] boxes,” the architect explained.
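As a rough sketch of the pattern Mlodgenski describes -- and only the pattern; this is not EnterpriseDB's Infinite Cache implementation -- a read-through cache spread across RAM-heavy commodity nodes looks something like this:

```python
class CacheNode:
    """Stand-in for one RAM-heavy commodity cache server."""
    def __init__(self) -> None:
        self.store = {}  # everything held in memory

class ReadThroughCache:
    def __init__(self, node_count: int) -> None:
        self.nodes = [CacheNode() for _ in range(node_count)]

    def _node_for(self, key: str) -> CacheNode:
        # Spread keys across cheap nodes instead of one big SAN-backed box.
        return self.nodes[hash(key) % len(self.nodes)]

    def get(self, key: str, fetch_from_db):
        node = self._node_for(key)
        if key not in node.store:                 # miss: one trip to the database,
            node.store[key] = fetch_from_db(key)  # then served from RAM thereafter
        return node.store[key]

cache = ReadThroughCache(node_count=3)
print(cache.get("user:42", lambda k: f"row-for-{k}"))  # fetched, then cached
print(cache.get("user:42", lambda k: f"row-for-{k}"))  # served from memory
```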
The cloud is also on the horizon for Postgres Plus users. “We have other people who are deploying in more virtualized environments, cloud environments,” Mlodgenski said.
He said that when the product was designed several years ago, it wasn’t focused on the cloud, but because of its flexible architecture, Postgres Plus users have been able to move into cloud environments such as Amazon EC2.
BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.
Friday, June 12, 2009
Cloud grows globally: Russia, South Korea, and Malaysia join Open Cirrus Initiative
More evidence has emerged that cloud research and development is a growing worldwide phenomenon.
The Open Cirrus initiative spread across more borders this week with the addition of the Russian Academy of Sciences, South Korea’s Electronics and Telecommunications Research Institute, and the Ministry of Science, Technology and Innovation in Malaysia (MIMOS).
A global, multiple data center, open-source test bed for the advancement of cloud computing research, Open Cirrus was started last summer by HP, Intel Corp. and Yahoo! Inc. The goal is to “promote open collaboration among industry, academia and governments by removing the financial and logistical barriers to research in data-intensive, Internet-scale computing,” the founders say.
Prior to announcing the three newest members at this week’s Open Cirrus Summit in Palo Alto, Calif., the founders had already attracted researchers from the University of Illinois at Urbana-Champaign, the Karlsruhe Institute of Technology in Germany, and the Infocomm Development Authority in Singapore.
With IDC predicting that cloud computing will become a $42 billion market by 2012, rival IBM has announced its own Blue Cloud initiative and in April opened its first cloud computing laboratory in Hong Kong. IBM is also ramping up its platform-as-a-service (PaaS) offerings.
HP, for its part, has developed Cloud Assure to help ensure that moves to cloud models can meet mission-critical requirements. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Not to be left out, Oracle Corp. is refining its grid middleware into cloud software products and is partnering with Amazon Web Services, one of the early cloud pioneers.
And, of course, the other 900-pound gorilla, Microsoft, has its Azure initiative, although it is not entirely clear what shape that cloud will take.
So as vaudeville comic Jimmy Durante used to say when the stage got crowded: “Everybody wants to get into the act.”
The HP, Intel, Yahoo! initiative is impressive not only for the membership it is attracting but for the seriousness and scope of the Open Cirrus approach.
With a growing membership list, the Open Cirrus community offers researchers worldwide “access to new approaches and skill sets that will enable them to more quickly realize the full potential of cloud computing,” according to this week’s announcement. The new members plan to host additional test bed research sites, expanding Open Cirrus to nine locations, “creating the most geographically diverse cloud computing test bed currently available to researchers.”
This expands the cloud test bed to an “unprecedented scale,” according to Prith Banerjee, senior vice president of research at HP and director of HP Labs. He sees the Open Cirrus collaboration with academia, government and industry as “vital in charting the course for the future of cloud computing in which everything will be delivered as a service.”
The new members bring impressive resources to Open Cirrus.
The Russian Academy of Sciences, the first Eastern European institution to join Open Cirrus, provides R&D from three of its own organizations:
- Institute for System Programming (ISP), which will conduct fundamental scientific research and applications in the field of system programming.
- Joint SuperComputer Center (JSCC), which will engage in the processing of large arrays of biological data, nanotechnology, 3D modeling and other applications, and port them to cloud infrastructure.
- Russian Research Center Kurchatov Institute, which will explore how cloud computing is different from other technologies, and apply its techniques for large-scale data processing.
MIMOS in Malaysia plans to develop a national cloud computing platform to deploy services throughout Malaysia, focusing on enabling services through software, security frameworks and mobile interactivity, as well as testing new cloud tools and methodologies.
Andrew Chien, vice president and director of Intel Research, sees these added resources and projects creating a critical mass for “our vision of an open-source cloud stack as a strong, large-scale platform for research and development.”
BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.
Thursday, June 11, 2009
Analysts define growing requirements for how governance needs to support corporate adoption of cloud computing
Read a full transcript of the discussion. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.
Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.
Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 42. Our latest topic centers on governance as a requirement and an enabler for cloud computing.
Our panel of IT analysts discusses the emerging requirements for a new and larger definition of governance. It's more than IT governance, or service-oriented architecture (SOA) governance. The goal is really more about extended enterprise processes, resource consumption, and resource-allocation governance.
In other words, "total services governance." Any meaningful move to cloud-computing adoption, certainly that which aligns and coexists with existing enterprise IT, will need to have such total governance in place. Already, we see a lot of evidence that the IT vendor community and the cloud providers themselves recognize this pending market need and requirement for additional governance.
So listen then as we go round-robin with our IT analyst panelists on their top five reasons why service governance is critical and mandatory for enterprises to properly and safely modernize and prosper vis-à-vis cloud computing: David A. Kelly, president of Upside Research; Joe McKendrick, independent analyst and ZDNet blogger; and Ron Schmelzer, senior analyst at ZapThink. Our discussion is hosted and moderated by me, BriefingsDirect's Dana Gardner.
Here are some excerpts ...
Schmelzer's top four governance rationales:
At ZapThink we just did a survey of the various topics that people are interested in for education, training, and stuff like that. The number one thing that people came back with was governance.
- Control. So the first reason to use governance is to prevent chaos. ... You want the benefit of loose coupling. That is, you want the benefit of being able to take any service and compose it with any other service without necessarily having to get the service provider involved. ... But the problem is how to prevent people from combining these services in ways that produce unpredictable or undesirable results. A lot of the governance effort at runtime prevents that unpredictability.
- Design Parameters. Two, then there is the design-time side. How do you make sure services are provided in a reliable, predictable way? People want to create services, but just because you can build a service doesn't mean that your service looks like somebody else's service. How do you prevent issues of incompatibility? How do you prevent issues of different levels of compliance?
- Policy Adherence. Of course, the third one is around policy. How do you make sure that the various services comply with the various corporate policies, runtime policies, IT policies, whatever those policies are?
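As a loose illustration of the policy-adherence point, a governance layer can act as a publish-time gate that checks a service descriptor against corporate policies before the service goes live. The sketch below is a toy: the policy set and descriptor fields are invented, and real governance suites express policies in much richer forms.

    # Toy design-time policy gate: every rule must pass before publication.
    POLICIES = {
        "require_tls": lambda svc: svc.get("endpoint", "").startswith("https://"),
        "require_auth": lambda svc: svc.get("auth") in ("basic", "oauth", "saml"),
        "approved_data_class": lambda svc: svc.get("data_class") in ("public", "internal"),
    }

    def check_policies(svc):
        # Return the names of every policy the descriptor violates.
        return [name for name, rule in POLICIES.items() if not rule(svc)]

    service = {
        "endpoint": "https://api.example.com/orders",  # hypothetical service
        "auth": "oauth",
        "data_class": "internal",
    }
    violations = check_policies(service)
    print("compliant" if not violations else "violations: %s" % violations)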
Kelly's top five governance rationales:
- Reliability. To add a fourth [to Ron's list], people are starting to think more and more about governance, because we see the penalty for what happens when IT fails. People don't want to be consuming stuff from the cloud, or putting stuff into a cloud, and risking that the cloud may not be available or that a service on the cloud may not be available. They need to have contingency plans, and IT contingency plans are a form of governance.
At one level, what we're going to see in cloud computing and governance is a pretty straightforward extension of what you've seen in terms of SOA governance and the bottom-up from the services governance area. As you said, it gets interesting when you start to up-level it from individual services into the business processes and start talking about how those are going to be deployed in the cloud.
- Focus on Business Goals. My first point is that one of the key areas where governance is critical for the cloud is ensuring that you're connecting the business goals with those cloud services. As services move out to the cloud, there's a larger perspective and, with it, the potential for greater disruption.
- Ensuring Compliance. [Governance] is going to be the initial driver that you're going to see in the cloud in terms of compliance for data security, privacy, and those types of things. Can the consumers trust the services that they're interacting with, and can the providers provide some kind of assurance in terms of governance for the data, the processes, and an overall compliance of the services they're delivering?
- Consistent Change Management. With cloud, you have a very different environment than most IT organizations are used to. You've got a completely different set of change-management issues, although they are consistent to some extent with what we've seen in SOA. You need to both maintain the services and make sure they don't cause problems when you're doing change management.
- Service Level Agreements (SLAs). The fourth point is making sure that governance can improve, or help monitor, the quality of services, in both design quality and runtime quality. That could also include performance. ... What we've seen so far is a very limited approach to governance. ... We're going to have to see a much broader expansion over the next four or five years.
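Of these, the SLA rationale is the most directly automatable. As a rough sketch, a runtime probe can time a service call and flag anything over an agreed latency budget; the endpoint URL and the half-second threshold below are hypothetical placeholders.

    # Runtime SLA probe: time one service call and flag budget overruns.
    import time
    import urllib.request

    SLA_SECONDS = 0.5  # hypothetical agreed response-time ceiling

    def probe(url):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()
                ok = 200 <= resp.status < 300
        except OSError:  # also covers timeouts and connection failures
            ok = False
        return ok, time.perf_counter() - start

    ok, elapsed = probe("https://api.example.com/health")  # placeholder URL
    if ok and elapsed <= SLA_SECONDS:
        print("within SLA: %.3fs" % elapsed)
    else:
        print("SLA breach: ok=%s, latency=%.3fs" % (ok, elapsed))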
McKendrick's top five governance rationales:
- Managing Service Lifecycles. Looking at this from a macro perspective, we need to manage the cloud-computing life cycle. From the definition of the services, through the deployment of the services, to the management of the services, to the performance of the services, to the retirement of the services, it's everything that's going on in the cloud. As those services get aggregated into larger business processes, that's going to require a different set of governance characteristics.
There is an issue that's looming that hasn't really been discussed or addressed yet. That is the role of governance for companies that are consuming the services versus the role of governance for companies that are providing the services. On some level, companies are going to be both consumers and providers of cloud services.
- Provisioning Management. Companies and their IT departments will be the cloud providers internally, and there is a level of ... design-time governance issues that we've been wrestling with in SOA all these years that comes into play for providers. They will want to manage how much of their resources are devoted to the delivery of services, and to manage the costs of supplying those services.
- SLA Management. Companies will have to tackle SLA management, which is assuring the availability of the applications they're receiving from some outside third party. So, the whole topic of governance splits in two here, because there is going to be all this activity going on outside the firewall that needs to be discussed.
- Service Ecology Management. A lot of companies are taking on the role of a broker or brokerage. They're picking up services from partners, distributors, and aggregators, and providing those services to specific markets. They need the ability to know what services are available in order to be able to discover and identify the assets to build the application or complete a business process. How will we go about knowing what's out there and knowing what's been embedded and tested for the organization?
- Return on Investment (ROI). ROI is another hot button, and we need to be able to determine what services and processes are delivering the best ROI. How do we measure that? How do we capture those metrics?
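The ROI question ultimately comes down to capturing per-call metrics and aggregating them by service. A minimal sketch of that bookkeeping follows; the service names, per-call costs, and attributed revenue figures are all invented.

    # Per-service metering: accumulate cost and revenue, then rank by return.
    from collections import defaultdict

    usage = defaultdict(lambda: {"cost": 0.0, "revenue": 0.0, "calls": 0})

    def record(service, cost, revenue):
        u = usage[service]
        u["cost"] += cost
        u["revenue"] += revenue
        u["calls"] += 1

    record("quote-engine", cost=0.002, revenue=0.05)  # hypothetical figures
    record("quote-engine", cost=0.002, revenue=0.04)
    record("pdf-render", cost=0.010, revenue=0.01)

    ranked = sorted(usage.items(),
                    key=lambda kv: kv[1]["revenue"] - kv[1]["cost"],
                    reverse=True)
    for svc, u in ranked:
        roi = (u["revenue"] - u["cost"]) / u["cost"]
        print("%s: %d calls, ROI %.1fx" % (svc, u["calls"], roi))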
Gardner's top six governance rationales:
- Business Involvement. How do we get the business involved [in shaping and refining the use of services in the context of business processes]? How do we move it beyond something that IT is implementing and move it to the business domain? How do we ensure that business people are intimately involved with the process and are identifying their needs? Ultimately, it's all about governing services.
The road to cloud computing is increasingly paved with governance, or perhaps will be held up by a lack of it.
- Managing Scale. We're going to need to scale beyond what we do with business-to-employee (B2E) services. For cloud computing, we're going to need to see greater scale for business-to-business (B2B) cloud ecologies, and then ultimately business-to-consumer (B2C), with potentially very massive scale. New business models will demand high scale and low margins, so scale becomes important. In order to manage scale, you need to have governance in place. ... We're going to need to see governance on API usage, but also on what you're willing to let your APIs be used for and at what scale. (A sketch of one such mechanism follows after this list.)
- Federated Cloud Ecologies. We need to make this work within a large cloud ecology. With people coming and going in and out of an ecology of process delivered via cloud services, we need federation, and that means open and shared governance mechanisms of some type. Standards and neutrality at some level are going to be essential for this to happen at that scale across a larger group of participants and consumers.
- Keep IT Happy. My third reason is that IT is going to need to buy into this. We've heard some talk recently about doing away with IT, going around IT, or doing all of these cloud mechanisms vis-à-vis the line-of-business folks. I think there is a role for that, and I think it's exploratory at that level. Ultimately, for an enterprise to be successful with cloud models as a business, they're going to have to take advantage of what they already have in place in IT. They need to make it IT-ready and acceptable, and that means compliance. IT should have a checklist of what needs to take place in order for their resources and assets to be used vis-à-vis outside resources, or even within the organization across a shared-services environment.
- Collect the Money. The business models that we're just starting to see well up in the marketplace around cloud are also going to require governance in order to do billing, to verify that a transaction has occurred, and to provision people on and off based on whether they've paid properly or are using the service properly under the conditions of a license or an SLA of some kind. This needs to be done at a very granular level. Governance is going to be essential for making money at cloud types of activities.
- Data Access Management. Lastly, cloud-based data is going to be important. We talk about transactions, services, APIs, and applications, but data needs to be shared, not just at a batch level, but at a granular level across multiple partners. The security, provisioning, and protection of data at that granular level fall back once again to governance. So, I come down on the side that governance is monumental and important to advancing cloud, and that we are still quite a ways away from [controlling access] around data.
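To make the Managing Scale point above concrete, a token bucket is one common way to govern API usage per consumer: each caller accrues allowance at an agreed rate, and requests beyond it are refused rather than allowed to swamp the service. A minimal sketch, with hypothetical rates:

    # Token-bucket rate governance: admit a request only if allowance remains.
    import time

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate = rate              # tokens replenished per second
            self.capacity = burst         # maximum stored allowance
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    bucket = TokenBucket(rate=5, burst=10)  # 5 requests/sec, bursts of 10
    granted = sum(bucket.allow() for _ in range(20))
    print("%d of 20 immediate requests admitted" % granted)

In a real governance layer, the rate and burst values would come from each consumer's contract, which ties this mechanism back to the SLA and billing rationales above.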
Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.