Wednesday, July 15, 2009

Panda's SaaS-based PC security manages client risks, adds efficiency for SMBs and providers

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

PC security has proven a thorny and expensive problem for users, small businesses, enterprises and managed services providers alike for many years.

But PC security can be markedly improved -- with a cloud-based trouble discovery-and-remediation lifecycle approach -- and delivered as a service. This reduces the strain on the PC itself and improves the ability to stanch malware problems quickly, before they spread.

As a result, new offerings around cloud-based anti-virus and security protection services are on the rise.

Furthermore, Internet-delivered security -- from the low-touch client agent to fuller managed services -- provides a strong business opportunity for resellers and channel providers. Such a solution allows small and large businesses alike to protect all of their PCs, regardless of location, at decreasing -- rather than increasing -- total cost.

To help delve more deeply into the benefits of security as a service, and explore the cloud strengths of managing malware protection more centrally from the Web, I recently moderated a discussion with independent IT analyst Phil Wainewright, director of Procullux Ventures and a ZDNet SaaS blogger, as well as Josu Franco, director of the Business Customer Unit at Panda Security.

Here are some excerpts:
Franco: There are two basic problems that we're trying to solve here, problems which have increased lately. One is the level of cyber crime. There are lots and lots of new attacks coming out every day. We're seeing more and more malware come into our labs. On any given day, we're seeing approximately 30,000 new malware samples that we didn't know about the day before. That's one of the problems.

The second problem that we're trying to solve for companies is the complexity of managing the security. You have vectors for attack -- in other words, ways in which a system can be infected. If you combine that with the usage of more and more devices in the networks, that combination makes it very difficult for administrators to really be on top of the security.

In order to address the first problem ... we need to take an approach that is sustainable over time. ... We found the best approach is to move processing power into the cloud, ... to process more and more malware automatically in our labs. That's the part of cloud computing that we're doing.

In order to address the second problem, we believe that the best approach for most companies is via management solutions that are easier to administer, more convenient, and less costly for the administrators and for the companies.

We don't see the agents disappearing any time soon to protect the [PC] endpoints. [But by] rebuilding the endpoint agent from scratch, ... we get a much lighter agent, much faster than previous agents. And, very importantly, an agent that is able to leverage the cloud computing capacity that we have, which we call "Collective Intelligence," to process malware automatically.
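To make that division of labor concrete, here's a minimal, purely hypothetical sketch of how a lightweight endpoint agent might defer file verdicts to a cloud reputation service, in the spirit of the Collective Intelligence approach Franco describes. The endpoint URL, request format, and caching policy are illustrative assumptions, not Panda's actual protocol.

```python
# Illustrative sketch only -- not Panda's actual agent or API.
# Assumes a hypothetical cloud reputation endpoint that accepts a file hash
# and returns a verdict such as "clean", "malware", or "unknown".
import hashlib
import json
import urllib.request

CLOUD_LOOKUP_URL = "https://cloud.example.com/v1/verdict"  # hypothetical
_local_cache = {}  # hash -> verdict, so repeat scans stay light on the PC

def file_sha256(path):
    """Hash the file locally; only the digest leaves the machine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def cloud_verdict(digest):
    """Ask the (hypothetical) cloud service what it knows about this hash."""
    req = urllib.request.Request(
        CLOUD_LOOKUP_URL,
        data=json.dumps({"sha256": digest}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp).get("verdict", "unknown")

def scan(path):
    digest = file_sha256(path)
    if digest not in _local_cache:  # the heavy lifting stays in the cloud
        _local_cache[digest] = cloud_verdict(digest)
    return _local_cache[digest]
```

The point of the pattern is that signature knowledge lives centrally and is updated continuously, while the agent only hashes, caches, and asks.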

We've just released this very first version of the Cloud Antivirus agent. We're distributing it for free with the idea that first we want people to know about it. We want people to use it, but very importantly, the more people that are using it, the better protected they're all going to be.

Special offer: Download the free protection.

Once you've downloaded this agent, which works transparently for the end user, all the management takes place via SaaS. ... We believe that the more intelligence that we can pack into the agent, the better, but always respecting the needs of consumers -- that is to be very fast, to be very light, to be very transparent to them.

[Next we provide] ... a management console [Panda Managed Office Protection] that's hosted from our infrastructure, in which any admin, regardless of where they are, can manage any number of computers, regardless of where they are located.

This works by having every agent talk to this infrastructure via the Internet, and also talk to other agents that might be installed on the same network, distributing updates or other types of policies.

Wainewright: To be honest, I've never really understood why people wanted to tackle Web-based malware in an on-premise model, because it just doesn't make any sense at all. The attacks are coming from the Web. The intelligence about the attacks obviously needs to be centralized in the Web. It needs to be gathering information about what's happening to clients and to instances all around the Web, and across the globe these days.

Really making sure that the protection is up-to-date with the latest intelligence and is able to react quickly to new threats as they appear means that you've got to have that managed in the center, and the central management has got to be able to update the PCs and other devices around the edge, as soon as they've got new information.

... The malware providers are already using network scale to great effect, particularly in the use of these zombie elements of malware that effectively lurk on devices around the Web, and are called into action to coordinate attacks.

You've got these malware providers using the collective intelligence of the Web, and if the good guys don't use the same arsenal, then they're just going to be left behind.

... More and more, in large enterprises, but also in smaller businesses, we're seeing people turning to outside providers for expertise and remote management, because that's the most cost effective way to get at the most up-to-date and the most proficient knowledge and capabilities that are out there.

Franco: In the current economic times, more and more resellers are looking to add more value to what they are offering. For them, margins on hardware or software licenses are getting tougher to come by and are being reduced. So, the way for them to really see the opportunity here is that they can now offer remote management services without having to invest in infrastructure or in any other type of license they may need.

It's really all based on the SaaS concept. [Managed service providers] can now say to the customers, "Okay, from now on, you'll forget about having to install all this management infrastructure in-house. I'm going to remotely manage all the endpoint security for you. I'm going to give you this service-level agreement (SLA), whereby I'm going to check the status of your network twice or three times a week or once a day, and if there is any problem, I can configure it remotely, or I can just spot where the problems are and I can fix them remotely."

This means that for the end user it's going to reduce the operating cost, and for the reseller it's going to increase the margins for the services they're offering. We believe that there is a clear alignment among the interests of end users and partners, and, most importantly, also from our side with the partners. We don't want to replace the channel here. What we want is to become the platform of choice for these resellers to provide these value-added services.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

Tuesday, July 14, 2009

Rethinking virtualization: Why enterprises need a sustainable virtualization strategy over hodge-podge approaches

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion. Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Enterprises today need a better way to prevent server sprawl and complexity that can impact the cost of virtualization projects. Three important considerations are instrumental for effective enterprise virtualization adoption, and they often amount to a rethinking of virtualization.

For example, one important question is, How do enterprises manage and control how network interconnections are impacted by widespread virtualization? Second, how can configuration management databases (CMDBs) help in deploying virtualized servers? And third, how can outsourcing help organizations get the most bang for their virtualization buck?

Rethinking virtualization becomes necessary to attain a sustainable enterprise virtualization strategy because virtual machines (VMs) present unique challenges.

To get to the bottom of the larger, proactive means of virtualization planning, I recently interviewed three executives from HP: Michael Kendall, worldwide Virtual Connect marketing lead; Shay Mowlem, strategic marketing lead for HP Software and Solutions; and Ryan Reed, a product manager for EDS Server Management Services.

Here are some excerpts:
Mowlem: Certainly, many companies today have recognized that consolidating their infrastructure through virtualization can reduce power consumption and space utilization, and can really maximize the value of the infrastructure that they’ve already purchased.

Just about everybody has jumped on the virtualization bandwagon, and many companies have seen tremendous gains in their development and lab environments, and in managing what I would consider to be non-mission-critical production systems. But, as companies have tried to apply virtualization to their Tier 2 and Tier 1 mission-critical systems, they're discovering a whole new set of issues that, without effective management, really run counter to the cost benefits.

... For IT to realize the large-scale cost benefits of virtualization in their production environments, they need to prove to the business that the service performance and the quality are not going to be lost. ... The ideal approach should include a central vantage point from which to detect, isolate, and prevent service problems across all infrastructure elements -- heterogeneous servers, physical and virtual, network and storage -- and all the subcomponents of a service.

We provide tools today that offer native discovery and dependency mapping of all infrastructure, physical and virtual, and then store that information in our central universal configuration management database (UCMDB), where we then track the make-up of a business service, all of the infrastructure that supports that service, and the interdependencies that exist between the infrastructure elements, and then manage and monitor that on an ongoing basis. ... Essentially, a configuration database tracks all of the core interdependencies of the infrastructure and their configuration settings over time.
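As a rough illustration of the dependency tracking Mowlem describes, the sketch below models a business service and its physical and virtual supporting elements as a small graph, then answers the basic question a CMDB helps with: which services are affected when one element degrades? The model, names, and code are invented for illustration and say nothing about how HP's UCMDB is actually implemented.

```python
# Toy dependency map in the spirit of a CMDB -- illustrative only.
from collections import defaultdict

# service/element -> the elements it depends on (physical or virtual)
depends_on = {
    "order-entry-service": ["app-vm-1", "app-vm-2", "orders-db"],
    "app-vm-1": ["blade-host-A", "san-lun-7"],
    "app-vm-2": ["blade-host-B", "san-lun-7"],
    "orders-db": ["db-host-C", "san-lun-9"],
}

# Invert the map so we can walk from a failed element up to impacted services.
required_by = defaultdict(set)
for item, deps in depends_on.items():
    for dep in deps:
        required_by[dep].add(item)

def impacted(element):
    """Return everything that transitively depends on a degraded element."""
    hit, stack = set(), [element]
    while stack:
        for parent in required_by[stack.pop()]:
            if parent not in hit:
                hit.add(parent)
                stack.append(parent)
    return hit

print(impacted("san-lun-7"))
# e.g. {'app-vm-1', 'app-vm-2', 'order-entry-service'}
```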

Kendall: When you consolidate a lot of different application instances that are normally on multiple servers, and each one of those servers has a certain number of I/O connections for data and storage, and you put them all on one server, that does consolidate the number of servers you have.

[It also] has the tendency to expand the number of network interface controllers (NICs) that you need, the number of connections you need, the number of cables you need, and the number of upstream switch ports that you need. ... Even though you can set up a new virtual machine or migrate virtual machines in a matter of minutes, it isn’t as easy in the connection space. Either you have to add additional capacity for networks and for storage, add additional host bus adapters (HBAs), or add additional NICs.

We did some basic rethinking around how to remove some of these interconnect bottlenecks. HP Virtual Connect actually can virtualize the physical connections between the server, the data network, and the storage network. Virtualizing these connections allows IT managers to set up, move, replace, or upgrade blade servers and the workloads that are on them, without having to involve the network or storage folks, and without impacting the network or storage topologies.

Reed: Business services today demand higher levels of uptime and availability. Those data centers, if they were to fail due to a power outage or some other source of failure, are no longer able to provide the uptime requirements for those types of business services. So, it’s one of the first questions that a virtual infrastructure program raises to the program manager.

Does the company or the organization have the skill set necessary in-house to do large-scale virtualization in data center modernization projects? Oftentimes, they don’t, and if they don’t, then what is their action? What is their remedy? How are they going to resolve that skill gap?

... [And there's] a hybrid model, in which virtual and non-virtual infrastructures can be managed from either a client- or organization-owned data center -- or from the services provider's data center. There are various models to consider. A lot of the questions that lead into how to plan for this type of virtual infrastructure also lead into a conversation about where an outsourcer can add the most value.

Outsourcers nowadays are very skilled at providing infrastructure services to virtual server environments. That would include things like profiling, analysis and planning, mapping of targets to source servers, and creating a business case for understanding how it’s going to impact the business in terms of ROI and total cost of ownership.

Choose the right partner, and they can grow with you. As your business grows and as you expand your market presence, choosing the services provider that has the capability and capacity to deliver in the areas that you want to grow makes the most sense.

The traditional outsourcing model is one where enterprises realize that the data center itself is not a strategic asset to the business anymore. So they move the infrastructure to an outsourcer data center where the services provider, the outsourcing company, can provide the best services with virtual infrastructures during the design and plan phase. ... We’ve been doing this for 45 years, and it’s really the critical piece of what we do.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion. Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Software AG seeks IDS Scheer in webMethods acquisition follow-up act

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Who says there are no second acts in life?

After having caught its breath with the webMethods acquisition almost exactly two years ago, Software AG has struck again with an offer to buy roughly half the shares of IDS Scheer from the company’s founders. The offer, worth roughly $320 million, is still subject to regulatory review.

Both deals are similar in that they are major, but their impacts will be different. webMethods expanded the Software AG business horizontally, adding critical mass to a new SOA middleware business that it was only beginning to build. Additionally, webMethods was a less mature business with more headroom for growth.

By contrast, IDS Scheer simply deepens one of Software AG’s existing businesses: webMethods Business Process Management (BPM). It adds the ARIS process modeling language, which would provide yet another onramp for webMethods BPM customers. And IDS Scheer is a pretty mature business, with the brunt of its installed base being large SAP customers who have used the ARIS language to model their SAP applications. There obviously aren’t a lot of new SAP installations going in these days.

But in other ways, webMethods could give IDS Scheer the jolt that the ARIS business could use. While Software AG’s numbers continued to grow in spite of the recession, IDS Scheer’s business has flattened out, with what little growth there is attributable to maintenance streams.

For Software AG, IDS Scheer’s maintenance streams resemble those of its legacy ETS data management business, which has provided the company the annuity revenue flow to fund its acquisitions. But that’s where the similarity ends. The webMethods BPM business, which is much earlier in its growth curve, represents a potential greenfield base for ARIS. Better yet for Software AG, it provides a foothold into the SAP customer base where the company has not been heavily present. And, although SAP is also a player in the middleware space with NetWeaver, it has not been terribly active with BPM.

More interestingly, it throws down a gauntlet to Oracle, which currently OEMs the ARIS language as one of the options for its Fusion BPM middleware stack. Although Oracle promotes Fusion’s “hot pluggable” best of breed strategy, probably the last place Oracle wants best of breed is in the BPM stack. With ARIS providing a direct onramp to webMethods BPM, and in turn the Software AG SOA stack, continuation of the OEM deal provides Software AG the opportunity for a wedge strategy.

As for IBM, making ARIS native to the webMethods BPM suite provides a line of defense against WebSphere incursion into the SAP installed base. Although hardly a show stopper, it provides Software AG yet another tool in its arsenal to compete with IBM WebSphere.

Just about the only thing that surprised us in this announcement was that SAP didn’t act first. Their customers only happen to form the majority of the ARIS base.

Postscript: Here’s hoping that maybe we’ll have a chance to hear Professor Scheer’s mean baritone sax at Software AG events.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Rackspace takes open source approach with release of Cloud Servers API

Positioning its cloud hosting services as an alternative to Amazon’s Elastic Compute Cloud (EC2), Rackspace announced today the public availability of its Cloud Servers API, based on representational state transfer (REST).

Taking an open-source approach, Rackspace gave its 43,000 cloud-computing customers a major role in the API specifications, explained Emil Sayegh, general manager for The Rackspace Cloud, formerly branded as Mosso cloud hosting. They overwhelmingly preferred the newer, lighter-weight REST approach to the older, heavy-duty SOAP standard that Amazon uses, he said.

“With the number of companies that provided input into this API, the way I see it this is their design,” he told BriefingsDirect. “This API is based on their input.”

This open community approach is a major differentiator between Amazon and the Rackspace alternative.

It may very well also be a difference with Microsoft and its Windows Azure offerings, the initial pricing of which was also unveiled today. See Mary-Jo Foley's take.

The next step in Rackspace’s strategy is to open source the API, which according to Sayegh will be announced soon. He notes that Amazon has no announced plans to go to open source.

“What we’re seeing is customers are really clamoring for an alternative to Amazon,” Sayegh said, acknowledging that Amazon is the market leader while positioning Rackspace as the number two that is trying harder.

“We have the largest platform as a service (PaaS) in cloud sites,” Sayegh said. “We are definitely in terms of size second to Amazon.” He sees today’s release of the API strengthening the Rackspace Cloud position in the market.

I recently talked with Mosso co-founder Jonathan Bryce, and a group of analysts, on the subject of PaaS and its role in propelling cloud computing forward. Read a transcript.

Prior to today’s API release, customers used a Web-based control panel to manage their Rackspace cloud usage. This meant they had to manually scale up or down as their business demands fluctuated.

The API allows developers to programmatically interact with the Rackspace cloud servers so scalability can be made automatic, Sayegh explained. The control panel option is still available but the API offers greater choice and flexibility.
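To make "programmatic" concrete, here is a rough sketch of an auto-scaling call against a REST cloud API: when measured load crosses a threshold, the script provisions another server over HTTP. The endpoint path, JSON body, and authentication header are illustrative assumptions patterned on typical REST cloud APIs, not a verbatim rendering of the Rackspace Cloud Servers API.

```python
# Illustrative auto-scaling call against a hypothetical REST cloud API.
import json
import urllib.request

API_BASE = "https://servers.api.example.com/v1.0/123456"  # hypothetical account URL
AUTH_TOKEN = "replace-with-auth-token"                     # hypothetical token

def create_server(name, image_id, flavor_id):
    """POST a new server definition; returns the API's JSON response."""
    body = json.dumps({"server": {"name": name,
                                  "imageId": image_id,
                                  "flavorId": flavor_id}}).encode("utf-8")
    req = urllib.request.Request(
        API_BASE + "/servers",
        data=body,
        headers={"X-Auth-Token": AUTH_TOKEN,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def scale_if_needed(current_load, threshold=0.75):
    """The 'automatic' part: add capacity when measured load gets too high."""
    if current_load > threshold:
        return create_server("web-extra", image_id=112, flavor_id=2)
    return None
```

Run the same check in reverse -- deleting servers when load falls -- and you have replaced the manual scale-up and scale-down that the control panel required.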

“People are raving about how easy it is to use,” he said. As an example, he pointed to Michael Mayo, a developer working alone who was able to create an iPhone remote cloud server management app based on the new API in just three days. Sayegh said even he was surprised that a lone coder could use the API to build an application that quickly.

Rackspace Cloud currently offers three cloud hosting products:
  • Cloud Sites, which provides pools of servers for customer Websites.

  • Cloud Servers, which provides server capacity that can be scaled up and down as the customer requirements change.

  • Cloud Files, which provides “unlimited storage” for images, large files, and backups.
BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

Thursday, July 9, 2009

Paglo SaaS offering provides means to harness untamed collection of log and IT resources data

Paglo, the IT management software-as-a-service (SaaS) company, recently announced a new low-cost service that allows companies to tackle the Herculean task of trying to winnow out a rapidly growing mountain of log data.

With log data piling up in terabyte leaps and increasing regulatory pressure to maintain that data for several years, companies now find themselves in danger of being swamped with information about operational events and the daunting challenge of making sense of it. [Disclosure: Paglo is a sponsor of BriefingsDirect podcasts.]

Paglo, Menlo Park, Calif., has upgraded its SaaS log management application, Paglo Logs, for IT professionals to automatically capture and store their logs and instantly search and analyze them. The expanded service provides a powerful Google-like search capability to enable rapid discovery of key operational events, a platform for meeting compliance requirements, and a way to accelerate the investigation of security incidents.

I was impressed with Paglo when they first came out, and the additional services -- now extending to capture and search of expansive sets of IT assets and other metadata on their performance -- make it a powerful tool for the cloud era.

How can you be responsible for performance on systems that cross company or provider boundaries? With SaaS offerings like Paglo, you can set up log gathering and search across all the systems that support a business process, regardless of their sourcing. Very cool.

Delivered on-demand with a "zero footprint" architecture, the Paglo Logs service collects rich systems data from all networked devices and requires no additional software or appliances to use. Paglo Logs allows users to:
  • Accelerate problem resolution by going directly from the logged events to the underlying infrastructure, to view health and performance data or to access a particular machine.

  • Meet the Payment Card Industry (PCI) Data Security Standard (DSS) by tracking all devices, software and configurations, monitoring wireless access, and securing central log collection.

  • Provide both developers and operations the ability to troubleshoot application issues and understand user behavior without logging into the production servers.

  • Improve their security profile and incident response by immediately receiving alerts and using saved searches and dashboards.
To maintain security, each business using Paglo has its own search index that keeps the log and network information separate and private from other subscribers. Setting up requires no appliances or on-site dedicated servers.
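To give a feel for what a per-subscriber search index means in practice, here is a toy inverted index over log lines; each business would get its own structure like this, so one customer's queries can never touch another's data. This is purely a concept sketch and implies nothing about how Paglo actually builds its index.

```python
# Toy per-subscriber inverted index over log lines -- concept illustration only.
from collections import defaultdict

class LogIndex:
    """One of these per subscriber keeps log data separate by construction."""
    def __init__(self):
        self.lines = []                  # raw log lines, in arrival order
        self.index = defaultdict(set)    # term -> set of line numbers

    def ingest(self, line):
        line_no = len(self.lines)
        self.lines.append(line)
        for term in line.lower().split():
            self.index[term].add(line_no)

    def search(self, *terms):
        """Return lines containing every query term (AND semantics)."""
        if not terms:
            return []
        hits = set.intersection(*(self.index[t.lower()] for t in terms))
        return [self.lines[i] for i in sorted(hits)]

acme = LogIndex()   # hypothetical subscriber
acme.ingest("Jul 9 12:01:03 fw01 DROP tcp 10.0.0.7:445")
acme.ingest("Jul 9 12:01:09 web02 sshd failed password for root")
print(acme.search("failed", "sshd"))
```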

As I said in the Paglo release on the news, "Companies need to harness and analyze the information explosion coming from all of their computer, server, network and log data. It's a very productive way to improve operating efficiencies, gain a clear understanding of true IT costs, and to meet compliance requirements. As an on-demand service, Paglo helps drop the complexity barriers to quick and effective log search and analytics."

The services come in three flavors: Paglo IT, a more complete offering; Paglo MSP, targeted at managed services providers; and Paglo Logs, for the full search and visualization services (and with a free introductory offer). The services are designed to appeal to security professionals, IT administrators, and developers of on-demand applications and services.

The new Log Management service is available immediately and accounts can be created directly online. A free trial is available at https://app.paglo.com/signup?product=logs. Paid plans start at an aggressive $99 per month.

Wednesday, July 8, 2009

Don’t use an ESB unless you absolutely, positively need one, Mule CTO warns

“To ESB or not to ESB,” that is the question Ross Mason, MuleSource CTO, raises in his blog this week.

It would be heresy among marketers at many vendors, but the MuleSource CTO is actively discouraging architects and developers from using an enterprise service bus (ESB), including his company’s open-source version, unless they are sure they really need one.

Misuse of ESBs leads to overly complex architectures that can be more difficult to remedy than a straightforward Web services-based architecture that omits the ESB in early versions of an enterprise application, Mason argued in a phone conversation about his blog.

“There are two main mistakes I see most of the time,” he told BriefingsDirect. “There’s not enough of an integration requirement or there’s not enough use of the ESB features to warrant it.”

You don’t need an ESB if your project involves two applications, or if you are only using one type of protocol, he explains.

“If I’m only using HTTP or Web services, I’m not going to get a lot of value from an ESB as opposed to using a simpler Web services framework,” Mason said. “Web services frameworks are very good at handling HTTP and SOAP. By putting in an ESB, you’re adding an extra layer of complexity that’s not required for that job.”
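For contrast, here is roughly what the simpler, no-ESB end of that spectrum looks like: a single HTTP/JSON endpoint served straight from the standard library, with no bus, broker, or routing layer in between. This is a generic illustration, not Mule code and not tied to any particular Web services framework.

```python
# A bare HTTP/JSON service -- no ESB, no message broker, just a handler.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One protocol (HTTP), one consumer-facing contract -- nothing to route.
        payload = json.dumps({"symbol": "XYZ", "price": 42.17}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), QuoteHandler).serve_forever()
```

An ESB earns its keep only when requests like this have to be transformed, routed, or bridged onto other protocols -- the threshold Mason describes later in the conversation.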

Architects and developers using an ESB in these cases are probably engaging in "resume-driven development (RDD)." If anybody asks you if you’ve deployed an ESB in an application you’ve worked on you can say, yes. And then you can hope the hiring manager doesn’t ask if the application really required the technology.

Another mistake Mason cites is using an ESB thinking that you are future-proofing an application that doesn’t need it now, but might someday.

“You’ll Never Need It (YNNI), that acronym has been around awhile for a reason,” Mason says. “That’s another killer problem. If you select an ESB because you think you might need it, you definitely don’t have an architecture that lays out how you’re going to use an ESB because you haven’t given it that much thought. That’s a red flag. You could be bringing in technology just for the sake of it.”

Adding his two cents to the “Is service-oriented architecture (SOA) dead” debate, the MuleSource CTO says such over-architecting is one of the things that has contributed to the SOA problems IT encounters, and that has given the acronym a bad name. “Architecture is hard enough without adding unnecessary complexity,” he said. “You need to keep it as simple as possible.”

Ironically, adding an ESB because you might need it someday can lead to future problems that might be avoided if you left it out to begin with and then added it in later, Mason said.

“The price of architecting today and re-architecting later is going to be a lot less than architecting badly the first time,” he explained. “If you have a stable architecture, you can augment it later with an ESB, which is going to be easier than trying to plug in an ESB where it’s not going to be needed at that time.”

While the conversation focused on the pitfalls of using an ESB where you don’t need one, the MuleSource CTO naturally believes there are architectures where the ESB makes sense. To begin with, you need to be working on a project where you have three or more applications that need to talk to each other, he explained.

“If you’ve got three applications that have to talk to each other, you’ve actually got six integration points, one for each service, and then it goes up exponentially,” Mason said.
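A quick back-of-the-envelope check on that count: with point-to-point integration, every pair of applications needs its own link, and each link has two endpoints to build and maintain, so the endpoint count grows quadratically (strictly speaking, not exponentially) while a bus needs only one connection per application. The small sketch below just does the arithmetic.

```python
# Point-to-point integration endpoints versus one-connection-per-app to a bus.
def point_to_point_endpoints(n):
    """n(n-1)/2 links between n applications, each link with two ends to maintain."""
    return n * (n - 1)

def bus_connections(n):
    """With a bus (ESB) in the middle, each application connects once."""
    return n

for n in (3, 6, 10):
    print(f"{n} apps: {point_to_point_endpoints(n)} point-to-point endpoints "
          f"vs {bus_connections(n)} bus connections")
# 3 apps -> 6 vs 3 (Mason's example); 6 apps -> 30 vs 6; 10 apps -> 90 vs 10
```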

The ESB technology is also needed where the protocols go beyond HTTP. “You should consider an ESB when you start using Java Message Service (JMS), representational state transfer (REST), or any of the other protocols out there,” Mason said. “When communications start getting more complicated is when an ESB shows its true value.”

BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

Monday, July 6, 2009

Consolidation, modernization, and virtualization: A triple-play for long-term enterprise IT cost reduction

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

As the global economic downturn accelerates the need to reduce total technology costs, IT consolidation, application modernization, and server virtualization play self-supporting roles alone -- and in combination.

Taken separately, these initiatives offer greater efficiency and reduced IT energy demands. But combined, they produce much greater cost control by slashing labor and maintenance costs, producing far better server utilization rates, and removing unneeded or unused applications and data.

These initiatives, when done in coordination, can do more than cut costs; they improve how IT delivers services to the business. A better IT infrastructure enables market agility, supports flexible business processes, and places the enterprise architect in a position to better leverage flexible sourcing options such as cloud computing and SaaS.

To dig into the relationship between a modern and consolidated approach to IT data centers and total cost, I recently interviewed John Bennett, worldwide solution manager for Data Center Transformation Solutions at Hewlett-Packard (HP).

Here are some excerpts:
Bennett: It’s easy to say, "reduce costs." It’s very difficult to understand what types of costs I can reduce and what kind of savings I get from them.

In my mind, the themes of consolidation, which people have been doing forever; modernization, very consciously making decisions to replace existing infrastructure with newer infrastructure for gains other than performance; and virtualization, which has a lot of promise in terms of driving cost out of the organization, can also increase aspects like flexibility and agility. ... [These allow companies] to grow quickly, to respond to competitive opportunities or threats very quickly, and offer the ability for IT to enable the business to be more aggressive, rather than becoming a limiting factor in the roll-out of new products or services.

By combining these initiatives, and taking an integrated approach to them, ... you can use them to address a broad set of issues, and realize aspects of a data center transformation by approaching these things in an orderly and planned way.

When you move to a shared infrastructure environment, the value of that environment is enhanced the more you have standardized that environment. That makes it much easier not only to manage the environment with a smaller number of sys-admins, but gives you a much greater opportunity to automate the processes and procedures.

... I no longer have the infrastructure and the assets tied to specific business services and applications. If I have unexpected growth, I can support it by using resources that are not being used quite as much in the environment. It’s like having a reserve line of troops that you can throw into the fray.

If you have an opportunity and you can deploy servers and assets in the matter of hours instead of a matter of days or months, IT becomes an enabler for the business to be more responsive. You can respond to competitive threats, respond to competitive opportunities, roll out new business services much more quickly, because the processes are much quicker and much more efficient. Now, IT becomes a partner in helping the business take advantage of opportunities, rather than delaying the availability of new products and services.

We’ve seen some other issues pop up in the last several years as well. One of them is an increasing focus on green, which means a business perspective on being green as an organization. For many IT organizations, it means really looking to reduce energy consumption and energy-related costs.

In some of the generations of servers that we’ve released, we see 15 to 25 percent improvements from a cost perspective and an energy consumption perspective, just based on modernizing the infrastructure. So, there are cost savings that can be had by replacing older devices with newer ones.

We’ve also seen in many organizations, as they move to a bladed infrastructure and move to denser environments, that the data center capacity and energy constraint -- that is, the amount of energy available to a data center -- is also an inhibiting factor. It’s one of the reasons that we really advise customers to take a look at doing consolidation, modernization, and virtualization together.

[These efficiencies] have been enhanced by a lot of the improvements in the IT products themselves. They are now instrumented for increasing manageability and automation. The products are integrated to provide management support not just for availability and for performance, but also for energy. They're instrumented to support the automation of the environment, including the ability to turn off servers that you don’t know or care about. They’re further enhanced by the enhancements in virtualization.

With virtualization ... it becomes a shared environment, and your shared environment is just more productive and more flexible if it’s one shared environment instead of 3, 4, 5 or 10 shared environments. That increases the density and it goes back to these other factors that we talked about. That’s clearly one of the more recent trends of the last few years in many data centers.

A lot of people are doing virtualization. What we’re doing as a company is focusing on the management and the automation of that environment, because we see virtualization really stressing data center and infrastructure management environments pretty substantively. In many cases, it's impacting governance of the data center. ... So, you really have full control, insight, and governance over everything taking place in the data center.

Our recommendations to many customers would be, first of all, if you identify assets that aren’t being used at all, just get rid of them. The cost savings are immediate. ... Identify all of the assets in the environment, the applications, software they're running, and the interdependencies between them. In effect, you build up a map of the infrastructure and know what everything is doing. You can very quickly see if there are servers, for example, not doing anything.

If I’ve got 10 servers doing this particular application and I can support the environment by using 3 of those servers, I get rid of 7. I can modernize the environment, so that if I had 10 servers doing this work before, and the consolidation gives me the opportunity to go to only 6 or 7, if I modernize, I might be able to reduce it to 2 or 3.

On top of that, I can explore virtualization. Typically, in environments not using virtualization, server utilization rates, especially for industry standard servers, are under 10 percent. That can be driven up to 70 or 80 percent or even higher by virtualizing the workloads. Now, you can go from 10 to 3 to perhaps just 1 server doing the work. Ten to 3 to 1 is an example. In many environments, you may have hundreds of servers supporting web-based applications or email. The number of servers that can be reduced out from that can be pretty phenomenal.
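A rough way to sanity-check that 10-to-3-to-1 arithmetic: if 10 older servers each run at under 10 percent utilization, the aggregate demand is roughly one server's worth of work, so the number of virtualized hosts needed is that demand divided by the capacity you are willing to use on the new hardware. The numbers below are illustrative only, not HP sizing guidance.

```python
# Back-of-the-envelope consolidation math -- illustrative numbers only.
import math

def hosts_needed(old_servers, old_utilization, target_utilization,
                 modernization_speedup=1.0):
    """How many (possibly faster) virtualized hosts cover the same demand?"""
    demand = old_servers * old_utilization          # in 'old-server' units of work
    capacity_per_host = target_utilization * modernization_speedup
    return math.ceil(demand / capacity_per_host)

# 10 legacy servers at ~10% load, consolidated onto hosts run at ~75% load,
# where each new host is roughly 2x as capable as an old one:
print(hosts_needed(10, 0.10, 0.75, modernization_speedup=2.0))   # -> 1
```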
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Friday, July 3, 2009

Oracle Fusion 11g Middleware: Executed according to plan

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

This week's announcement by Oracle of the rollouts of Fusion Middleware 11g is a bit anticlimactic in that the details are pretty much according to the plan that came out exactly a year ago today. Although the Fusion stack is comprised of multiple parts, internally developed and acquired, the highlight is that it represents the fruition of the BEA acquisition. Oracle had Fusion middleware prior to acquiring BEA, but there’s little question that BEA was the main event. WebLogic filled the donut hole in the middle of the Fusion stack with a server that was far more popular than Oracle Containers for Java EE (OC4J). Singlehandedly, BEA catapulted Oracle Fusion into becoming a major player in middleware.

Oracle largely stuck to the previously announced roadmap for convergence of BEA products, with the only major surprises being in the details. As planned, Oracle incorporated WebLogic as the strategic Java platform, JDeveloper as the primary development environment, dual business process modeling paths, with master data management, data integration, and identity management driven largely by Oracle offerings with some added BEA content.

Although the Oracle Fusion product portfolio came from far more diverse sources than BEA (as Oracle was obviously a more aggressive acquirer), the result is far more unified than anything that BEA ever fielded. Before getting swallowed by Oracle, BEA had multiple portal, development, and integration technologies lacking a common framework. By comparison, Oracle has emphasized a common framework for mashing the pieces together.

That’s rooted in Oracle’s heritage for developing native tools and utilities, dating back to the Oracle Forms 4GL and the various utilities for managing the Oracle database; the tools were sufficiently native that they typically were confined to Oracle shops. But that approach to native tooling morphed with development of a broader framework that is optimized for Oracle platforms. It’s an outgrowth of the mentality at Oracle that good is the enemy of best, and that what Oracle is building is a platform rather than discrete products.

It’s an approach that also makes Oracle’s tagline of Fusion being standards-based more nuanced. Yes, the Fusion products are designed to support Oracle’s “hot pluggable” best of breed strategy of working with other vendors’ products, but for designing and managing the Fusion environment, Oracle has you surrounded with native tooling if you want it. Call it a subtle pull for encouraging customers to add more Oracle content.

That explains how, six or seven years ago, Oracle began developing what has become the Application Development Framework (ADF) as its own model-view-controller alternative to the Apache Struts framework that it previously used in early versions of the JDeveloper Java tool. That approach has carried through to this day with JDeveloper, which provides a higher level, declarative approach to development that would not fit with traditional Eclipse IDEs. And that approach applies to Oracle Enterprise Manager (EM), which does not necessarily compete with BMC, CA, HP, or IBM Tivoli in application management, but provides the last mile of declarative deployment, monitoring, and performance testing capabilities for the Fusion platform.

Bringing together the Oracle and BEA technologies resulted in some synergies where the value was greater than the sum of its parts. A good example is the pairing of BEA’s quasi-real-time JRockit JVM with the Oracle Coherence data grid, a distributed caching layer for Java objects. In essence, JRockit juices up performance of Coherence, which is used whenever you need higher performance with frequently used objects; conversely, Coherence provides a high-end enterprise clustered platform that provides an excellent use case for JRockit.

As noted, while the broad outlines of Fusion 11g are hardly any mystery, there are some interesting departures that occurred along the way. One of the more notable was in BPM where Oracle added another option to its runtime strategy for Oracle BPM Suite.

Originally, Oracle BPEL Process Manager was to be the runtime, requiring BPM users to map their process models to BPEL, essentially an XML-based sequential programming language that lacks process semantics. A year later, OMG is putting the finishing touches on BPMN 2.0, a process modeling notation that has added support for executable models. And so, with the release of 11g, Oracle BPM Suite users will gain the option of bypassing BPEL, as long as their processes are not that transactionally complex.

Make no mistake about it, the Fusion 11g migration was a huge reengineering project, involving nearly 2000 development projects and over 5000 product enhancements. So it’s a shame that Oracle did not take the opportunity of re-architecting its middleware stack by migrating it to a microkernel architecture, with OSGi being the most prominent example. Oracle WebLogic Server is OSGi-based, but the BPM/SOA stack is not. Oracle remains mum as to whether it plans to adopt a microkernel architecture throughout the rest of the Fusion stack.

So why are we all hot and bothered about this? OSGi, or the principle of dynamic, modular microkernels in general, offers the potential to vastly reduce Java’s footprint through deployment of highly compact servers that contain only the Java modules that are necessary to run. The good news is that this is potentially a highly economic, energy-efficient, space-efficient green strategy. The bad news is that it’s not enough for the vendor to adopt a microkernel, as the user has to learn how to selectively and dynamically deploy them.

But OSGi seems to have lost its momentum of late. In our Ovum research last year, we believed that OSGi was going to become the de facto standard for Java platforms, as IBM and SpringSource fully migrated their stacks and rivals provided at least tacit support. A year later, Oracle’s silence is deafening.

As we noted last week, Oracle’s pending acquisition of Sun adds some interesting dynamics to the plot, as Sun has continued to talk out of both sides of its mouth on the topic: supporting OSGi for its open source Glassfish Java platform, while putting its weight behind Project Jigsaw, which aims to redefine Java modularity as JSR 294. Unfortunately, the announcement of Fusion 11g has not cleared up matters.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.