Tuesday, June 15, 2010

New HP products take aim at managing complexity in 'hybrid data center' era

WASHINGTON - The HP Software Universe 2010 conference is in full swing here this week, highlighted by a slew of HP products and services focused on giving IT leaders the means to view and control their rapidly escalating complexity.

Among the most significant announcements Tuesday is the next iteration of the HP Business Service Management (BSM) software suite. HP BSM 9.0 works on automating comprehensive management across application lifecycles, with new means to bring a common approach to hybrid-sourced app delivery models. Whether apps are supported from virtualized infrastructures, on-premises stacks, private clouds, public clouds, software as a service (SaaS) sources, or outsourced IT -- they need to be managed with commonality. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

"Our customers are dealing with some of the most significant combination of changes in IT technologies and paradigm that I have ever seen," said Robin Purohit, Vice President and General Manager of HP Software Products. "So whether it's a whole new way of developing applications like Agile, the real acceleration of virtualization, test environments moving into production workloads, and all of the evaluations of where the cloud and SaaS fits -- and then how they support all the enterprise applications."

[See an interview with Purohit on BSM 9.0, or listen to it as a podcast.]

HP’s core message around BSM 9.0 is clear: to help companies automate apps and services management amid complexity so they can reinvest in innovation. The economics of innovation -- of being able to do more in terms of results without the luxury of spending a lot more -- has to be factored into the management matrix.

“We did a study last October that showed our clients believe innovation is going to help them even through uncertain economic times—and they see technology as central to their ability to succeed in a changing environment,” said Paul Muller, vice president of Strategic Marketing, Software Products, HP Software & Solutions. “There is some skepticism within the business community that IT is ready to make those changes.”

HP’s view of the hybrid world

HP wants to help IT combat that skepticism by equipping them with HP BSM 9.0. The solutions not only address hybrid delivery models, but also what Muller calls the “consumerization of IT,” referring to people who use non-company-owned devices on a company network. As Muller sees it, employees expect to have the same dependable experience while working from home as they do at work.

“We believe organizations will struggle to deliver the quality outcomes expected of them unless they are able to deal with the increased rate of change that occurs when you deploy virtualization or cloud technologies,” Muller said. “It’s a change that allows for innovation, but it’s also a change that creates opportunity for something to go wrong.”

Indeed, Bill Veghte, HP Executive Vice President for HP Software and Solutions, said that three major trends -- all game changers on their own -- are converging around IT: virtualization, cloud, and mobile.

"We need to continuously simplify" management to head off rapid complexity acceleration aroud this confluence of trends. He pointed to the need for gaining a comprehensive view into hybrid IT operations, automation for management and remediation, and "simply expressing" the views on what is going on it IT, regardless of the location or types of services.

"Users want a unified view in a visually compelling way, and they want to be able to take action on it," said Veghte.

At the center of Tuesday’s announcement is what HP proposes as the solution to this challenge: HP BSM 9.0. The software offers several features that should cause companies working with hybrid IT environments to take a closer look. One of those features is automated operations that work to reduce troubleshooting costs and hasten repair time. BSM 9.0 also offers cloud-ready and virtualized operations that aim to reduce security risks with strategic management services.

"The active interest of our clients in cloud computing has just exploded," said Purohit. "I think last year was a curiosity for many senior IT executives something on the horizon, but this year it's really an active evaluation. I think most customers are looking initially at something a little safer, meaning a private cloud approach, where there is either new stack of infrastructure and applications are run for them by somebody else on their site, or at some off-site operation. So that seems to be the predominant new paradigm."

Purohit described BSM 9 as "a breakthrough" for coming to grips with the "hybrid data center."

Indeed, integration is a running theme with HP BSM 9.0. The solution offers a single, integrated view for IT to manage enterprise services in hybrid environments, while new collaborative operations promise to boost efficiency with an integrated view for service operations management. Every IT operations user receives contextual and role-based information through mobile devices and other access points for faster resolution.

"BSM 9 is our solution for end-to-end monitoring of services in the data center. It's been a great business for us, and we now have a break-through release that we reveled to our customers today, that’s anchored around what we call the Runtime Service Model," said Purohit.

"So a service model is basically a real-time map of everything from the business transactions of the businesses running, to all of the software that makes up that composite applications for the service, and all of the infrastructure whether it be physical or virtual or on-premise or off-premise that supports all of that application," said Purohit.

"So all of that together -- knowing how it's connected, what the health of it is, what's changing in it so, you can actually make sure it's all running exactly the way the business expects -- is really critical," he said.

The run-time service model works to save time by improving service impact analysis and speeding troubleshooting, HP said. HP Business Availability Center (BAC) 9.0 offers an integrated user experience as well as application monitoring and diagnostics built on HP’s take on the run-time service model.

“The rate of change in the way infrastructure elements relate to each other -- or even where they are from one minute to the next -- means we’ve moved from an environment where you could scan your infrastructure weekly and still be quite accurate to workloads shifting minute by minute,” Muller said. “It’s the great trap because if you don’t know what infrastructure your application is depending on from one minute to the next, you can’t troubleshoot it when something goes wrong.”
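To make the idea concrete, here is a minimal sketch of what a run-time service model can look like in code: a dependency map from business transactions down to physical or virtual infrastructure, with a simple service-impact check. The element names, layers, and functions are illustrative assumptions, not HP's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    """One managed element: a business transaction, an app component, or infrastructure."""
    name: str
    layer: str                      # e.g. "transaction", "application", "infrastructure"
    healthy: bool = True
    depends_on: List["Element"] = field(default_factory=list)

def dependencies(elem: Element) -> set:
    """Collect everything the element depends on, directly or transitively."""
    seen, stack = set(), list(elem.depends_on)
    while stack:
        dep = stack.pop()
        if dep.name not in seen:
            seen.add(dep.name)
            stack.extend(dep.depends_on)
    return seen

def at_risk(elem: Element, model: dict) -> bool:
    """Service impact check: is any upstream dependency currently unhealthy?"""
    return any(not model[name].healthy for name in dependencies(elem))

# Illustrative model: an order transaction on an app hosted by a virtual machine
vm  = Element("vm-042", "infrastructure")
app = Element("order-app", "application", depends_on=[vm])
txn = Element("place-order", "transaction", depends_on=[app])
model = {e.name: e for e in (vm, app, txn)}

vm.healthy = False              # the VM fails or is migrated...
print(at_risk(txn, model))      # ...and the map shows the transaction is at risk -> True
```

The point of keeping the map current in real time, as the quote above suggests, is that the lookup stays truthful even when the workload moves minute by minute.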

Other elements of BSM 9.0 include the BAC Anywhere service, which lets organizations monitor external web apps from anywhere, even outside the firewall, from a single integrated console. HP Operations Manager i 9.0 promises to improve IT service performance by way of “smart plug-ins” that automatically discover application changes and update them in the run-time service model. Finally, HP Network Management Center 9.0 aims to give companies better network visibility by connecting virtual services, physical networks, and public cloud services.

HP’s expanded universe

In other HP news, the company announced software that aims to accelerate application testing while reducing the risks associated with new delivery models. Dubbed HP Test Data Management, the new solution also promises to lower the costs associated with application testing and to ensure that sensitive data doesn’t violate compliance regulations.

The improvement helps simplify and accelerate testing data preparation, an important factor in making tests and quality more integral to applications development and deployment, again, across a variety of infrastructure models.

Along with HP Test Data Management, HP launched new integrations between HP Quality Center and CollabNet TeamForge with the goal of improving communication and collaboration among business analysts, project managers, developers, and quality assurance teams.

The integration with CollabNet, built largely on Apache Subversion, will help further bind the "devops" process, said Purohit. "The result is better apps," he said.

And as part of its work to help clients maximize software investments, HP also rolled out HP Solution Management Services, a converged portfolio of software support and consulting services that offers a single point of accountability for managing enterprise software investments.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

HP Data Protector, a case study on scale and completeness for total enterprise data backup and recovery

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Washington, DC. We're here the week of June 14, 2010 to explore some major enterprise software and solutions trends and innovations making news across HP's ecosystem of customers, partners, and developers.

Our topic for this live conversation focuses on the challenges and progress in conducting massive and comprehensive backups of enterprise live data, applications, and systems. We'll take a look at how HP Data Protector is managing and safeguarding petabytes of storage per week across HP's next-generation data centers.

The case-study on HP's ongoing experiences sheds light on how enterprises can consolidate their storage and backup efforts to improve response and recovery times, while also reducing total costs.

To learn more about high-performance enterprise scale storage and reliable backup, please welcome Lowell Dale, a technical architect in HP's IT organization. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Dale: One of the things that everyone is dealing with these days is the growth of data. Although we have a lot of technologies out there that are evolving -- virtualization and the globalization effect -- what we're dealing with on the backup and recovery side is an aggregate amount of data that's just growing year after year.

Some of the things that we're running into are the effects of consolidation. For example, we end up trying to back up databases that are getting larger and larger. Some of the applications and servers that consolidate end up being more of a challenge for services such as backup and recovery. It's pretty common across the industry.

In our environment at HP, we're running about 93,000-95,000 backups per week with an aggregate data volume of about 4 petabytes of backup data and 53,000 run-time hours. That's about 17,000 servers' worth of backup across 14 petabytes of storage.
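For rough context, here is a quick back-of-the-envelope calculation on those figures. The averages below are derived, purely illustrative, and assume decimal units; the real distribution of backup sizes and run times is certainly skewed.

```python
# Averages derived from the figures quoted above (illustrative only).
backups_per_week = 94_000          # midpoint of "93,000-95,000"
data_pb_per_week = 4               # aggregate backup volume, petabytes
runtime_hours    = 53_000
servers          = 17_000

avg_gb_per_backup   = data_pb_per_week * 1_000_000 / backups_per_week   # ~43 GB
avg_minutes_per_job = runtime_hours * 60 / backups_per_week             # ~34 min
backups_per_server  = backups_per_week / servers                        # ~5.5 per week

print(f"{avg_gb_per_backup:.0f} GB per backup, "
      f"{avg_minutes_per_job:.0f} minutes per job, "
      f"{backups_per_server:.1f} backups per server per week")
```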

It's pretty much every application that HP's business is run upon. It doesn’t matter if it's enterprise warehousing or data warehousing or if it's internal things like payroll or web-facing front-ends like hp.com. It's the whole slew of applications that we have to manage.

The storage technologies are managed across two different teams. We have a storage-focused team that manages the storage technologies. They're currently using HP Surestore XP Disk Array and EVA as well. We have our Fibre Channel networks in front of those. In the team that I work on, we're responsible for the backup and recovery of the data on that storage infrastructure.

We're using the Virtual Library Systems that HP manufactures as well as the Enterprise System Libraries (ESL). Those are two predominant storage technologies for getting data to the data protection pool.

One of the first things we had to do was simplify, so that we could scale to the size and scope that we have to manage. You have to simplify configuration and architecture as much as possible, so that you can continue to grow at scale.

We had to take a step-wise approach on how we adopted virtual tape library and what we used it for. Virtual tape libraries were one of the things that we had to figure out. What was the use-case scenario for virtual tape? It's not easy to switch from old technology to something new and go 100 percent at it.

We first started with a minimal number of use cases and, little by little, we started learning what it was really good for. We’ve evolved the use case even more, so that it will carry forward into our next-generation design. That’s just one example.

We're still using physical tape for certain scenarios where we need the data mobility to move applications or enable the migration of applications and/or data between disparate geographies.



HP Data Protector 6.11 is the current release that we are running and deploying in our next generation. Some of the features with that release that are very helpful to us have to do with checkpoint recoveries.

For example, if the backup of a resource should fail, we have the ability with automation to go out and have it pick up where it left off. This has helped us in multiple ways. If you have a bunch of data that you need to get backed up, you don’t want to start over, because that’s going to impact the next minute or the next hour of demand.

Not only that, but it’s also helped us keep our backup success rates up and our ticket counts down. Instead of raising a ticket for somebody to go look at, it will attempt a checkpoint recovery a few times. Only after so many attempts do we bring the issue to light so that someone has to look at it.
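As a rough illustration of that retry-then-ticket pattern, here is a minimal sketch. The function names, retry limit, and back-off are hypothetical stand-ins, not Data Protector's actual interfaces or defaults.

```python
import time

MAX_CHECKPOINT_RETRIES = 3   # assumption: the real threshold is site policy

def run_backup_with_checkpoints(session, run_from_checkpoint, open_ticket):
    """Sketch of the retry-then-ticket pattern described above.

    `session` identifies the backup job; `run_from_checkpoint` resumes the job
    from its last checkpoint and returns True on success; `open_ticket` raises
    an incident for a human. All three are hypothetical stand-ins, not
    Data Protector APIs.
    """
    for attempt in range(1, MAX_CHECKPOINT_RETRIES + 1):
        if run_from_checkpoint(session):
            return True                      # finished without starting over
        time.sleep(60 * attempt)             # back off before resuming again
    open_ticket(session)                     # only now bring the failure to light
    return False
```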

Data Protector also has a very powerful feature called object copy. That allowed us to maintain our retention of data across two different products or technologies. So, object copy was another one that was very powerful.
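Conceptually, an object copy pass looks something like the sketch below: one backup object replicated to a second device, each copy carrying its own retention. The device names and fields are illustrative, not the Data Protector object-copy API.

```python
from datetime import date, timedelta

def object_copy(backup_object, targets):
    """Illustrative object-copy pass: replicate one backup object to each
    target device with that target's own retention period (names are
    hypothetical, not Data Protector's interface)."""
    copies = []
    for device, retention_days in targets:
        copies.append({
            "object": backup_object,
            "device": device,
            "expires": date.today() + timedelta(days=retention_days),
        })
    return copies

# e.g. shorter retention on the virtual library, longer retention on physical tape
print(object_copy("oracle-prod-full-0615",
                  [("VLS", 30), ("ESL-tape", 365)]))
```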

There are also a couple of things around the ability to do integration backups. In the past, we were using technology that was very expensive in terms of disk space on our XPs, using split-mirror backups. Now, we're using the online integrations for Oracle and SQL, and we're also getting ready to add SharePoint and Microsoft Exchange.

Now, we're able to do online backups of these databases. Some of them are upwards of 23 terabytes. We're able to do that without any additional disk space and we're able to back that up without taking down the environment or having any downtime. That’s another thing that’s been very helpful with Data Protector.

Scheduling overhead

With VMs increasing and the use case for virtualization increasing, one of the challenges is trying to work with scheduling overhead tasks. It could be anywhere from a backup to indexing to virus scanning and whatnot, and trying to find out what the limitations and the bottlenecks are across the entire ecosystem to find out when to run certain overhead and not impact production.

That’s one of the things that’s evolving. We are not there yet, but obviously we have to figure out how to get the data to the data protection pool. With virtualization, it just makes it a little bit more interesting.
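One simple way to picture that scheduling problem is as packing overhead tasks into forecast low-utilization windows. The toy scheduler below, with made-up thresholds and task costs, only illustrates the idea of finding safe windows for backup, indexing, and virus scanning.

```python
def pick_window(hourly_load, tasks, threshold=0.6):
    """Toy scheduler: assign each overhead task (backup, indexing, virus scan)
    to the least-loaded hour that stays under a utilization threshold.
    `hourly_load` is a 24-entry list of forecast utilization (0.0-1.0)."""
    schedule = {}
    load = list(hourly_load)
    for task, cost in tasks:                       # cost = extra utilization the task adds
        hour = min(range(24), key=lambda h: load[h])
        if load[hour] + cost > threshold:
            schedule[task] = None                  # no safe window; defer or flag it
        else:
            schedule[task] = hour
            load[hour] += cost                     # account for the task we just placed
    return schedule

forecast = [0.3] * 6 + [0.8] * 14 + [0.5] * 4      # quiet overnight, busy in the day
print(pick_window(forecast, [("backup", 0.25), ("indexing", 0.10), ("virus-scan", 0.10)]))
```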

Nowadays, we can bring up a fairly large-scale environment, like an entire data center, within a matter of months -- if not weeks. The process from there moves toward how we facilitate setting up backup policies and schedules, and even that’s evolving.

Right now, we're looking at ideas and ways to automate that, so that when a server plugs in, it’ll basically configure itself. We're not there yet, but we are looking at that. Some of the things that we’ve improved upon are how we build out quickly and then turn around and set up the configurations, as business demand is converted into backup demand, storage demand, and network demand. We’ve improved quite a bit on that front.
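The "server plugs in and configures itself" idea can be sketched as a simple mapping from server attributes to a default backup policy. Everything below -- the field names, tiers, and targets -- is an assumption for illustration, not HP's provisioning logic.

```python
def default_backup_policy(server):
    """Toy auto-enrollment: derive a backup policy from a new server's attributes
    the moment it registers, instead of waiting for manual configuration.
    Field names and policy tiers are illustrative assumptions."""
    if server.get("database"):
        return {"schedule": "daily-online", "retention_days": 90, "target": "VLS"}
    if server.get("tier") == "production":
        return {"schedule": "daily", "retention_days": 30, "target": "VLS"}
    return {"schedule": "weekly", "retention_days": 14, "target": "ESL-tape"}

# A newly provisioned server registers itself and gets a policy automatically
new_server = {"hostname": "app-web-117", "tier": "production", "database": False}
print(default_backup_policy(new_server))
```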

Being able to bring that backup success rate up is key. Some of the things that we’ve done with the architecture and the product -- just different ways of doing the process -- have helped with that backup success rate.

The other thing that it's helped us do is that we’ve got a team now, which we didn’t have before, that’s just focused on analytics, looking at events before they become incidents.

With some of the evolving technologies and some of the things around cloud computing, at the end of the day, we'll still need to mitigate downtime, data loss, logical corruption, or anything that would jeopardize that business asset.

With cloud computing, if we're using the current technology today with peak base backup, we have to get the data copied over to a data protection pool. There still would be the same approach of trying to get that data. If there is anything to keep up with these emerging technologies, for example, maybe we approach data protection a little bit differently and spread the load out, so that it’s somewhat transparent.

Some of the things we need to see, and may start seeing in the industry, are load management and how loads from different types of technologies talk to each other. I mentioned virtualization earlier. Some of the tools with content-awareness and indexing have overhead associated with them.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

You may also be interested in:

Delta Air Lines improves customer self-service apps quickly using automated quality assurance tools strategically

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

Our next customer case study focuses on Delta Air Lines and the use of quality assurance tools for requirements management as well as mapping the test cases and moving into full production quickly.

We're joined by David Moses, Manager of Quality Assurance for Delta.com and its self-service apps efforts, and John Bell, a Senior Test Engineer at Delta. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Moses: Generally, the airline industry, along with a lot of other industries I'm sure, is highly competitive. We have a very quick, fast-to-market type of environment, where we've got to get products out to our customers. We have a lot of innovation being worked on in the industry and a lot of competing channels outside the airline industry that would also like to get at the same customer set. So, it's very important to be able to deliver the best products you can as quickly as possible. "Speed Wins" is our motto.

It goes back to speed to market with new functionality and making the customer's experience better. In all of our self-service products, it's very important that we test from the customers’ point of view.

We deliver those products that make it easier for them to use our services. That's one of the things that always sticks in my mind, when I'm at an airport, and I'm watching people use the kiosk. That's one of the things we do. We bring our people out to the airports and we watch our customers use our products, so we get that inside view of what's going on with them.

A lot on the line

I'll see people hesitantly reaching out to hit a button. Their hand may be shaking. It could be an elderly person. It could be a person with a lot on the line. Say it’s somebody taking their family on vacation. It's the only vacation they can afford to go on, and they’ve got a lot of investment into that flight to get there and also to get back home. Really there's a lot on the line for them.

A lot of people don’t know a lot about the airline industry and they don’t realize that it's okay if they hit the wrong button. It's really easy to start over. But, sometimes they would be literally shaking, when they reach out to hit the button. We want to make sure that they have a good comfort level. We want to make sure they have the best experience they could possibly have. And, the faster we can deliver products to them, that make that experience real for them, the better.

By offering these types of products to the customers, you give them the best of both worlds. You give them a fast pass to check in. You give them a fast pass to book. But you can also give the less-experienced customer an easy-to-understand path to do what they need as well.

Bell: One thing that we've found to be very beneficial with HP Quality Center is that it shows the development organization that this isn't just a QA tool that a QA team uses. What we've been able to do, by bringing the requirements piece into it and by bringing the defects and other parts of it together, is bring the whole team on board to using a common tool.

In the past, a lot of people have always thought of Quality Center as a tool that the QA people use in the corner and nobody else needs to be aware of. Now, we have our business analysts, project managers, and developers, as well as the QA team and even managers on it, because each person can get a different view of different information.

From the Dashboard, your managers can look at trends and at what’s coming through the overall development lifecycle. Your project managers can be very involved in pulling the number of defects and seeing which ones are still outstanding and how critical they are. The developers can be involved by entering information on defects when those issues have been resolved.

We've found that Quality Center is actually a tool that has drawn together all of the teams. They're all using a common interface, and they all start to recognize the importance of tying all of this together, so that everyone can get a view as to what's going on throughout the whole lifecycle.

Moses: We've realized the importance of automating, and we've realized the importance of having multiple groups using the same tool.

It's not just a tool. There are people there too. There are processes. There are concepts you're going to have to get in your head to get this to work, but you have to be willing to buy in by having the people resources dedicated to building the test scripts. Then, you're not done. You've got to maintain them. That's where most people fall short, and that's where we fell short for quite some time.

Once we were able to finally dedicate the people to the maintenance of these scripts, to keep them active and running, that's where we got a win. If you look at a web site these days, it's following one of two models. You either have a release schedule, which is a more static site, or you have a highly dynamic site that's always changing and always rolling out improvements.

We fit into that "Speed Wins," when we get the product out for the customers’ trading, and improve the experience as often as possible. So, we’re a highly dynamic site. We'll break up to 20 percent of all of our test scripts, all of our automated test scripts, every week. That's a lot of maintenance, even though we're using a lot of reusable code. You have to have those resources dedicated to keep that going.

Bell: One thing that we've been able to do with HP Quality Center is connect it with Quick Test Pro, and we do have Quality Center 10, as well as Quick Test Pro 10. We've been able to build our automation and store those in the Test Plan tab of Quality Center.

It's very nice that Quality Center has it all tied into one unit. So, as we go through our processes, we're able to go from tab to tab and we know that all of that information is interconnected. We can ultimately trace a defect back to a specific cycle or a specific test case, all the way back to our requirement. So, the tool is very helpful in keeping all of the information in one area, while still maintaining the consistent process.

This has really been beneficial for us when we go into our test labs and build our test sets. We're able to take all of these automated pieces and combine them into a single test set. What this has allowed us to do is run all of our automation as one test set. We've been able to run those on a remote box. It's taken our regression test time from one person for five days down to zero people and approximately an hour and 45 minutes.

So, that's a unique way that we've used Quality Center to help manage that and to reduce our testing times by over 50 percent.
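Taking the figures Bell quotes at face value, and assuming an eight-hour working day for the manual baseline, the arithmetic looks roughly like this:

```python
# Quick calculation of the regression-run figures quoted above,
# assuming an 8-hour working day for the manual baseline.
manual_hours    = 5 * 8                 # one person, five days
automated_hours = 1 + 45 / 60           # unattended run, about 1h45m

speedup   = manual_hours / automated_hours
reduction = 100 * (1 - automated_hours / manual_hours)
print(f"{speedup:.0f}x faster, {reduction:.1f}% less elapsed regression time")   # ~23x, ~95.6%
```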

I look back to metrics we pulled for 2008. We were doing fewer than 70 projects. By 2009, after we had fully integrated Quality Center, we did over 129 projects. That also included a lot of extra work, which you may have heard about us doing related to a merger.

Moses: The one thing I really like about the HP Quality Center suite especially is that your entire software development cycle can live within that tool. Whenever you're using different tools to do different things, it becomes a little bit more difficult to get the data from one point to another. It becomes a little bit more difficult to pull reports and figure out where you can improve.

Data in one place

What you really want to do is get all your data in one place, and Quality Center allows you to do that. We put our requirements in at the beginning. By having those in the system, we can then map our test cases to them after we build those in the testing phase.

Not only do we have the QA engineers working on it in Quality Center, we also have the business analysts working on it, whenever they're doing the requirements. That also helps the two groups work together a bit more closely.
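The requirements-to-test-case mapping Moses describes can be pictured as a simple traceability chain. The sketch below uses plain dictionaries and made-up IDs, not the Quality Center data model or API.

```python
# Generic traceability sketch: every defect traces to a test case, and every
# test case traces to a requirement, so coverage and impact questions become
# simple lookups. IDs and field names are invented for illustration.
requirements = {"REQ-12": "Passenger can check in from the kiosk"}
test_cases   = {"TC-301": {"covers": "REQ-12", "name": "Kiosk check-in happy path"}}
defects      = {"DEF-88": {"found_by": "TC-301", "status": "Open"}}

def requirement_for_defect(defect_id):
    """Trace a defect back through its test case to the originating requirement."""
    tc = defects[defect_id]["found_by"]
    return test_cases[tc]["covers"]

def uncovered(requirements, test_cases):
    """Requirements with no test case mapped to them yet."""
    covered = {tc["covers"] for tc in test_cases.values()}
    return [r for r in requirements if r not in covered]

print(requirement_for_defect("DEF-88"))        # -> REQ-12
print(uncovered(requirements, test_cases))     # -> []
```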
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

You may also be interested in:

McKesson shows bringing testing tools on the road improves speed to market and customer satisfaction

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This customer case-study focuses on McKesson Corp., a provider of certified healthcare information technology, including electronic health records, medical billing, and claims management software. McKesson is a user of HP’s project-based performance testing products used to make sure that applications perform in the field as intended throughout their lifecycle.

To learn more about McKesson’s innovative use of quality assurance software, we interview Todd Eaton, Director of Application Lifecycle Management Tools in the CTO’s office at McKesson. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Eaton: It's one thing to test within McKesson. It's another thing to test out at the customer site, and that's the main driver of this new innovation we're partnering with HP on.

When we build an application and sell that to our customers, they can take that application, bring it into their own ecosystem, into their own data center and install it onto their own hardware.

Controlled testing

The testing that we do in our labs is a little more controlled. We have access to HP and other vendors with their state-of-the-art equipment. We come up with our own set of standards, but when the applications go out to the site and get put into those hospitals, we want to ensure that they run at the same speed and with the same performance at their site that we experience in our controlled environment.

We want to make sure that our solutions get out there as fast as possible, so that we can help those providers and those healthcare entities in giving the best patient care that they can. So, being able to test on their equipment is very important for us.

Just knowing how many different healthcare providers there are out there, you could imagine all the different hardware platforms, different infrastructures, and the needs or infrastructure items that they may have in their data centers.

After further investigation, it became apparent to us that we weren’t able to replicate all those different environments in our data center. It’s just too big of a task.

The next logical thing to do was to take the testing capabilities that we had and bring it all out on the road. We have these different services teams that go out to install software. We could go along with them and bring the powerful tools that we use with HP into those data centers and do the exact same testing that we did, and make sure that our applications were running as expected on their environments.

Another very important thing is using their data. The hospitals themselves will have copies of their production data sets that they keep control of. There are strict regulations. That kind of data cannot leave their premises. Being able to test using the large amount of data or the large volume of data that they will have onsite is very crucial to testing our applications.

The tool that we use primarily within McKesson is Performance Center, and Performance Center is an enterprise-based application. It’s usually kept where we have multiple controllers, and we have multiple groups using those, but it resides within our network.

On the road

So, the biggest hurdle was how to take that powerful tool and bring it out to these sites. We went back to HP and said, "Here’s our challenge. This is what we’ve got. We don’t really see anything where you have an offering in that space. What can you do for us?"

Currently, we have two engagements going on simultaneously with two different hospitals, testing two different groups of applications. One site is using it for 26 different applications, and another is using it for five. We’ve got two teams out there, one from my group and one from one of the internal R&D groups, assisting the customer and testing the applications on their equipment.

We have been able to reduce performance defects dramatically -- something like 40-50 percent right off the bat. Some of the timing that we had experienced internally seemed to be fine, well within SLAs, but as soon as we got out to a site and onto different hardware configurations, it took some application tuning to get it back down. We were finding 90 percent improvements with the help of continual testing and performance tweaks.

Items like that are just so powerful when you bring them out to the various customers and can say, "If you engage us, and we can do this testing for you, we can make sure that those applications will run the way that you want them to."
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

You may also be interested in:

Monday, June 14, 2010

Top reasons and paybacks for adopting cloud computing sooner rather than later

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a podcast panel discussion on identifying the top reasons and paybacks for adopting cloud computing.

Like any other big change affecting business and IT, if cloud, in its many forms, gains traction, then adopters will require a lot of rationales, incentives, and measurable returns to keep progressing successfully. But, just as the definition of cloud computing itself can elicit myriad responses, the same is true for why an organization should encourage cloud computing.

The major paybacks are not clearly agreed upon, for sure. Are the paybacks purely in economic terms? Is cloud a route to IT efficiency primarily? Are the business agility benefits paramount? Or, does cloud transform business and markets in ways not yet fully understood?

We'll seek a list of the top reasons why exploiting cloud computing models makes sense, and why at least experimenting with cloud should be done sooner rather than later. We have assembled a panel of cloud experts to put some serious wood behind the arrow leading to the cloud.

Please join me now in welcoming Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management and a new book, The Concise Guide to Cloud Computing; Jim Reavis, executive director of the Cloud Security Alliance (CSA) and president of Reavis Consulting Group; and Dave Linthicum, Chief Technology Officer of Bick Group and a prolific cloud blogger and author. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Reed: When we go into all of this discussion around what is the benefit [to cloud], we need to do our standard risk analysis. There’s nothing too much that's new here, but what we do see is that when you get to the cloud and you're doing that assessment, the [payoffs] come down to agility.

Agility, in this sense, has the dimensions of speed at scale. For businesses, that can be quite compelling in terms of economic return and business agility, which is another variation on the theme. But, we gain this through the attributes we ascribe to cloud -- things like instant on/off, huge scale, per-use billing, all the things we tried to achieve previously but finally seem to be able to get with a cloud-computing architectural model.

If we're going to do the cost-benefit analysis, it does come down to the fact that, through that per-use billing, we're able to do this in a much more fine-grain manner and then compare to the risks that we are going to encounter as a result of using this type of environment. Again, that's regardless of whether it’s public or private. The risks may go down, if it’s a private environment.

Factoring all those things in together, there's not too much of a new model in how we try to achieve this justification and gain those benefits.

Linthicum: This notion of business agility is really where the money is. It's the ability to scale up and scale down, the ability to allocate compute resources around business opportunities, and the ability to align the business to new markets quickly and efficiently, without doing waves and waves of software acquisitions, setups, installs, and all the risks around doing that. That's really where the core benefit is.

If you look at that and you look at the strategic value of agility within your enterprise, it’s always different. In other words, your value of agility is going to vary greatly between a high tech company, a finance company, and a manufacturing company. You can come up with the business benefit and the reason for moving into cloud computing, and people have a tendency not to think that way.

Innate risks

But you have to weigh that benefit against the innate risks in moving to these platforms. Whether you are moving from on-premises to off-premises, on-premises to cloud, or traditional on-premises to private cloud computing, there’s always risk involved in terms of how you handle security, governance, latency, and those things.

Once you factor those things in and you understand what the value drivers are in both OPEX and CAPEX cost and the trade-offs there, as well as business agility, and weigh in the risk, then you have your cost-benefit analysis equation, and it comes down to a business decision. Nine times out of ten, the cloud computing provider is going to provide a more strategic IT value than traditional computing platforms.
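A stripped-down version of that cost-benefit equation might look like the sketch below, where risk is folded in as a simple premium. Every number is invented for illustration; in practice the agility value Linthicum describes also has to be priced into the comparison.

```python
def three_year_cost(capex, opex_per_month, risk_premium=0.0, months=36):
    """Toy cost-benefit comparison in the spirit of the discussion above:
    total cost over a fixed horizon, with risk expressed as a flat percentage
    premium. All figures below are made up for illustration."""
    return (capex + opex_per_month * months) * (1 + risk_premium)

on_prem = three_year_cost(capex=500_000, opex_per_month=10_000, risk_premium=0.05)
cloud   = three_year_cost(capex=0,       opex_per_month=22_000, risk_premium=0.15)

print(f"on-prem: ${on_prem:,.0f}  cloud: ${cloud:,.0f}")
# The raw totals are rarely the deciding factor: the agility upside
# (faster scale-up, faster time to market) still has to be weighed in.
```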

Reavis: When you think about the economics, what’s the core of economics? It's supply and demand. Cloud gives you that ability to more efficiently serve your customers. It becomes a customer-service issue, where you can provide a supply of whatever your service is that really fits with their demand.

Ten years ago, in the Internet dot-com days, I started a minor success called Securityportal.com. You all remember the "Slashdot effect," where a story would get posted on Slashdot and it would basically take your business out. You would have an outage, because so much traffic would come your way.

We would, on the one hand, love those sorts of things, and we would live in fear of when that would happen, when we would get recognition, because we didn’t have cloud-based models for servicing our customers. So, when good things would happen, it would sometimes be a bad thing for us.

I had a chance to spend a lot of time with an online gaming company, and the way they've been able to scale up would only be possible in the cloud. Their business would not have been able to exist in the earlier era of the Internet. It’s just not possible.

So, yeah, it provides us this whole new platform. I've maintained all along that we're not just going to migrate IT into the cloud, but we're going to reinvent new businesses, new business processes, and new ways of having an intermediary relationship with other suppliers and our customers as well. So it’s going to be very, very transformational.

Reed: At HP, when we talk to customers, and even when we evaluate internally, we talk about business outcomes being core to how IT and business align. Whether they're small companies or large companies, it's about providing services that support the business outcomes and understanding what you ultimately want to deliver.

In business terms, that might be processing more loan requests and financial transactions. If that's the measure people are using for what the business outcomes need to be, then IT can align with it and become the service provider for that capability.

We've talked to a lot of customers, particularly in the financial industry, for example, where IT wasn’t measured in how they cut costs or how much staff they had. They were measured in incremental improvements on how many advances could be made in delivering more business capability.

In that example, one particular business metric was, "We can process more loans in a day, when necessary." The way they achieved that was by re-architecting things in a more cloud or service-centric way, wherein they could essentially ramp up, on what they called a private cloud, the ability to process things much more quickly.

Now, many in IT realize -- perhaps not enough, but we're seeing the change -- that they need to move toward a service-oriented architecture (SOA) approach to delivery, such that they become experts in brokering the right solution to deliver the most significant business outcomes.

That becomes the latency that drives the lateness of the business process changes that need to occur within the enterprise.



The source of those services is less about how much hardware and software you need to buy and integrate and all that sort of thing, and more about the most economical and secure way that they can deliver the majority of desired outcomes. You don’t just want to build one service to provide a capability. You want to build an environment and an architecture that achieves the bulk of the desired outcomes.

Linthicum: Cloud computing will provide us with some additional capabilities. It's not necessarily nirvana, but you can get at compute and you can get at even some pretty big services. For example, the Prediction API that Google just announced at Google I/O is an amazing piece of data-mining stuff that you can get for free, for now.

The ability to tie that into your existing processes and perhaps make some predictions in terms of inventory control things, means you could save potentially a million dollars a month, supporting just-in-time inventory processes within your enterprise. Those sorts of things really need to come into the mix in order to provide the additional value.

Sometimes we can drive processes out of the cloud, but I think processes are really going to be driven on-premises and they are going to include cloud resources. The ability to on-board those cloud resources is needed to support the changes in the processes and is really going to be the value of cloud computing.

That's the area that's probably the most exciting. I just came back from Gluecon in Denver, which is, in a sense, a cloud developers’ conference, and they're all talking about application programming interfaces (APIs) and building the next infrastructure.

When those things come online and become available, and we don’t have to build them in-house, we can actually leverage them on a "pay per drink" basis through some kind of provider, bringing them into our processes. We'll perhaps have thousands of APIs that exist all over the place, and perhaps not even local data within these APIs.

That’s where the value of cloud computing is going to appear, and we haven’t seen anything yet. There are huge amounts of value being built right now.



They just produce behavior, and we bring them together to form these core business processes. More importantly, we bring them together to recreate these core business processes around new needs of the business.

Reed: I think the incentives, the risks, and all those things with cloud computing change, dependent on the type of business we're looking at.

Certainly, when we talk to smaller organizations and mid-sized organizations as well, they're looking for the edge that they can gain in terms of cost and support and, in most cases, more security. In this case, they look for broader back-office solutions than perhaps some of the larger organizations, things such as email, account management, HR, and so forth, as well as front-end stuff, basic web hosting and more advanced versions of that.

We've implemented things like Microsoft Business Productivity Online Suite (BPOS) for many customers, especially in the mid-range. They do find better support, better uptime, better cost controls, and, to Jim’s point, more security than they are able to provide for themselves.

When we talk to larger organizations, some are looking for this too. Even in the financial industry, which you might consider one of the most security-paranoid environments there is outside of the three-letter agencies, they find that kind of thing appealing as well. Some of those have actually gone to use Salesforce.com for some of their services.

But, they're generally more concerned with the security side, and they often find specific capabilities more appealing in a service model, such as data processing, data analysis, data retrieval, and functional analysis. Mashups, and the service-oriented model in general, are definitely more popular with the larger organizations we talk to.

Linthicum: Moving into cloud is going to put people in a very healthy, paranoid state. In other words, they're going to think twice about what information goes out there, how that information is secured and modeled, what APIs they are leveraging, and what the service level agreements (SLAs) are. They're going to consider encryption and identity management systems in ways they haven’t in the past.

In most of the instances that I am seeing deploying cloud computing systems, they are as secure, if not more secure, than the existing on-premise systems. I would trust those cloud computing systems more than I would the existing on-premise systems.

That comes with some work, some discipline, some governance, some security, and a lot of things that we just haven’t thought about a lot, or haven’t thought about enough with the traditional on-premise systems. So, that’s going to be a side benefit. In two years, we're going to have better security and better understanding of security because of cloud.

Reed: There will be businesses that are willing and able and can manage cloud-type environments to their benefit. But, eventually, the gaps become so small and the availability of these services online becomes so ubiquitous that I'm not sure how long this window goes for.

I don’t want to say that, in a few years, everybody will be able to deliver the same thing just as quickly. But for the moment, I think there’s a few forward thinking organizations that will be able to achieve that to great success.

Reavis: The organizations that are developing what they think is state of the art -- but it’s not cloud -- are going to be struggling, because of all the neat, interesting new developments. It’s hard to even put your head around all of the implications of compute-as-a-utility and all the innovation we're going to see, but we know it’s going to be on that platform.

If you think of this as the new development platform, then yeah, it’s going to be a real competitive issue. There are going to be a lot of new capabilities that will only be accessible in this platform, and they're going to come a lot quicker.

Five years from now

So, in terms of the first movers, the environment is going to look very different. Anybody who has carved out some space and some lead in the cloud market right now shouldn't feel too comfortable about their position, because there are companies we don't even know about at this point that are going to be fairly pervasive and have a lot to say about IT five years from now.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Thursday, June 10, 2010

HP BTO executive on how cloud service automation aids visibility and control over total management lifecycles

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

The latest BriefingsDirect executive interview centers on gaining visibility and control into the IT services management lifecycle while progressing toward cloud computing. We dig into the Cloud Service Automation (CSA) and lifecycle management market and offerings with Mark Shoemaker, Executive Program Manager, BTO Software for Cloud at HP.

As cloud computing in its many forms gains traction, higher levels of management complexity are inevitable for large enterprises, managed service providers (MSPs), and small-to-medium sized businesses (SMBs). Gaining and keeping control becomes even more critical for all these organizations, as applications are virtualized and as services and data sourcing options proliferate, both inside and outside of enterprise boundaries.

More than just retaining visibility, however, IT departments and business leaders need the means to fine-tune and govern services use, business processes, and the participants accessing them across the entire services ecosystem. The problem is how to move beyond traditional manual management methods, while being inclusive of legacy systems to automate, standardize, and control the way services are used.

We're here with HP's Shoemaker to examine an expanding set of CSA products, services, and methods designed to help enterprises exploit cloud and services values, while reducing risks and working toward total management of all systems and services. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Shoemaker: When we talk about management, it starts with visibility and control. You have to be able to see everything. Whether it’s physical or virtual or in a cloud, you have to be able to see it and, at some point, you have to be able to control its behavior to really benefit.

Once you marry that with standards and automation, you start reaping the benefits of what cloud and virtualization promise us. To get to the new levels of management, we’ve got to do a better job.

Up until a few years ago, everything in the data center and infrastructure had a physical home, for the most part. Then virtualization came along. While we still have all the physical elements, we now have virtual and cloud strata that require the same level of diligence in management and monitoring, but that move around.

Where we're used to having things connected to physical switches, servers, and storage, those things are actually virtualized and moved into the cloud or virtualization layer, which makes the services more critical to manage and monitor.

All the physical things

Cloud doesn’t get rid of all the physical things that still sit in data centers and are plugged in and run. It actually runs on top of that. It actually adds a layer, and companies want to be able to manage the public and private side of that, as well as the physical and virtual. It just improves productivity and gets better utilization out of the whole infrastructure footprint.

I don’t know many IT shops that have added people and resources to keep up with the amount of technology they have deployed over the last few years. Now, we're making that more complex.

They aren't going to get more heads. There has to be a system to manage it. The businesses are going to be more productive, the people are going to be happier, and the services are going to run better.

We're looking at a more holistic and integrated approach in the way we manage. A lot of the things we're bringing to bear -- CSA, for example -- are built on years of expertise around managing infrastructures, because it’s the same task and functions.

Ensuring the service level

We’ve expanded these [products and services] to take into account the public cloud ... . We've been able to point these same tools back into a public cloud to see what’s going on and make sure you're getting what you're paying for, and getting what the business expects.

CSA products and services are the product of several years of actually delivering cloud. Some of the largest cloud installations out there run on HP software right now. We listened to what our customers told us, took a hard look at the reference architecture we created over those years -- which encompasses all the different elements you could bring to bear in a cloud -- and started looking at how to bring that to market, and to a point where the customer can gain benefit from it more quickly.

We want to be able to come in, understand the need, plug in the solution, and get the customer up and running and managing the cloud or virtualization inside that cloud as quickly as possible, so they can focus on the business value of the application.

The great thing is that we’ve got the experience. We’ve got the expertise. We’ve got the portfolio. And, we’ve got the ability to manage all kinds of clouds, whether, as I said, it’s infrastructure as a service (IaaS) or platform as a service (PaaS) that your software's developed on, or even a hybrid solution, where you are using a private cloud along with a public cloud that actually bursts up, if you don’t want to outlay capital to buy new hardware.

We have the ability, at this point, to tap into Amazon’s cloud and actually let you extend your data center to provide additional capacity and then pull it back in on a per-use basis, connected with the rest of your infrastructure that we manage today.
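In the spirit of that per-use bursting model, a minimal decision rule might look like the following sketch. The thresholds and single-node increments are assumptions for illustration, not how HP's CSA or the Amazon integration actually works.

```python
def burst_decision(local_utilization, burst_nodes, high=0.85, low=0.50):
    """Toy cloud-bursting rule in the spirit described above: extend into
    public-cloud capacity when the private side runs hot, and release it when
    demand falls so you only pay per use. Thresholds are illustrative."""
    if local_utilization > high:
        return burst_nodes + 1          # rent one more public-cloud node
    if local_utilization < low and burst_nodes > 0:
        return burst_nodes - 1          # hand a rented node back
    return burst_nodes

nodes = 0
for util in (0.70, 0.90, 0.92, 0.60, 0.40, 0.35):
    nodes = burst_decision(util, nodes)
    print(f"utilization {util:.0%} -> {nodes} burst node(s)")
```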

A lot of customers that we talk to today are already engaged in a virtualization play, bringing virtualization into their data centers and putting it on top of the physical infrastructure.



We announced CSA on May 11, and we're really excited about what it brings to our customers ..., industry-leading products together with solutions that allow you to control, build, and manage a cloud.

We’ve taken the core elements. If you think about a cloud and all the different pieces, there is that engine in the middle, resource management, system management, and provisioning. All those things that make up the central pieces are what we're starting with in CSA.

Then, depending on what the customer needs, we bolt on everything around that. We can even use the customers’ investments in their own third-party applications, if necessary and if desired.

As the landscape changes, we're looking at how to change our applications as well. We have a very large footprint in the software-as-a-service (SaaS) arena right now where we actually provide a lot of our applications for management, monitoring, development, and test as SaaS. So, this becomes more prevalent as public cloud takes off.

Also, we're looking at what’s going to be important next: What are the technologies and services that our customers are going to need to be successful in this new paradigm?
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

You may also be interested in:

HP service aims to lower cost and risk by tackling vulnerabilities early in 'devops' cycle

Security breaches and the cost of repairing and patching enterprise applications hang like a cloud over every company doing business today. HP is taking direct aim at that problem with the release of a security service that aims to prevent vulnerabilities and to bake security and reliability in at the earliest stages of application design and architecture.

Part of HP's Secure Advantage, the Comprehensive Applications Threat Analysis (CATA) service provides architectural and design guidance alongside recommendations for security controls and best practices. By addressing and eliminating application vulnerabilities as early in the lifecycle as possible, companies stand to gain significant returns on investment (ROI) and drastically lower total cost of ownership (TCO) across the "devops" process, according to HP. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

"Customers are under increasing pressure from threats that exploit security weaknesses that were either missed or insufficiently addressed during the early lifecycle phases," said Chris Whitener, chief security strategist of Secure Advantage. Whitener added that he believes HP is the first company to come to market with such a service.

HP has been using this service internally for more than six years and, according to Whitener, has seen a return of 5 to 20 times the cost of implementation. And this, he says, is just on things that can be measured. The service has freed up a lot of schedule time formerly spent finding and fixing application vulnerabilities.

Two problems

Many other risk-analysis programs come later in the development process, meaning that developers often miss vulnerabilities at the earliest stages of design. That brings up two problems, according to John Diamant, HP's Secure Product Development strategist: the risks associated with the vulnerabilities and the cost of patching the software.

"By addressing these vulnerabilities early in the process," Diamant said, "we're able to reduce the risk and eliminate the cost of repair."

The new service offers two main thrusts for increased security:
  • A gap analysis to examine applications and identify often-missed technical security requirements imposed by laws, regulations, or best practices.
  • An architectural threat analysis, which identifies changes in application architecture to reduce the risk of latent security defects. This also eliminates or lowers costs from security scans, penetration tests, and other vulnerability investigations.
While lowering development costs, using a security service early in the lifecycle can also lower the threat of security breaches, which can cost millions of dollars in fines and penalties, as well as the fallout from a loss of customer confidence.

Security and proper applications development, of course, come into particular focus when cloud computing models and virtualization are employed, and where an application is expected to scale dramatically and dynamically.

Although HP plans to develop a training program sometime in the future, right now, this is offered as a service using HP personnel who have been schooled in the processes and who have been using it inside HP for years. For more information, go to http://h10134.www1.hp.com/services/applications-security-analysis/.

You may also be interested in: