Monday, August 17, 2009

Understanding the value of reference architectures in the SOA story

This guest post comes courtesy of ZapThink. Ron Schmelzer is a senior analyst at ZapThink. You can reach him here.

Take the BriefingsDirect middleware/ESB survey now.

By Ron Schmelzer

There's nothing more that architects love to do than argue about definitions. If you ever find yourself with idle time in a room of architects, try asking for a definition of "service" or "architecture" and see what sort of creative melee you can start.

That being said, definitions are indeed very important so that we can have a common language to communicate the intent and benefit of the very things we are trying to convince business to invest in. From that perspective, a number of concepts have emerged in the past decade or so that have become top of mind for self-styled enterprise architects: architecture frameworks and reference architectures.

In previous ZapFlashes, we discussed architecture frameworks, which leaves the topic of reference architectures untouched by ZapThink. Since we can't leave a good argument behind, we're going to use this ZapFlash to explore what reference architectures are all about and what value they add to the Service-Oriented Architecture (SOA) story.

What is a reference architecture?

One commonly accepted definition is that a reference architecture provides a methodology and/or set of practices and templates, based on the generalization of a set of successful solutions, for a particular category of problems. Reference architectures provide guidance on how to apply specific patterns and/or practices to solve particular classes of problems. In this way, a reference architecture serves as a "reference" for the specific architectures that companies will implement to solve their own problems. A reference architecture is never intended to be implemented as-is, but rather to be used either as a point of comparison or as a starting point for an individual company's architectural efforts.

Others refine the definition of reference architecture as a description of how to build a class of artifacts. These artifacts can be embodied in many forms, including design patterns, methodologies, standards, metadata, and documents of all sorts. Long story short, if you need guidance on how to develop a specific architecture based on best practices or authoritative sets of potential artifacts, you should look to a reference architecture that covers the scope of the architecture you're looking to build.

One of the most popular examples of reference architectures in IT is the Java Platform Enterprise Edition (Java EE) architecture, which provides a layered reference architecture and templates addressing a range of technology and business issues that have guided many Java-based enterprise systems.

Reference architectures vs. architecture frameworks

While the above definition(s) may seem fairly cut and dried, there is a lot in common between the concepts of reference architectures and architecture frameworks. For some, this is where things get dicey and definitions get blurry. Architecture frameworks, such as the Zachman Framework, The Open Group Architecture Framework (TOGAF), and the Department of Defense Architecture Framework (DoDAF), provide approaches to describe and identify necessary inputs to a particular architecture as well as means to describe that architecture. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

If a particular architecture is a cookbook that provides guidance on how to go about solving a particular set of problems with a particular approach, an architecture framework is a book about how to write cookbooks. So, architecture frameworks give enterprise architects the tools they need to adequately describe and collect requirements, without mandating any specific architecture type. More specifically, architecture frameworks describe an example taxonomy of the kinds of architectural "views" that an architect might consider developing, and why, and provide guidelines for choosing which views to develop.

This differs from the above concept of a reference architecture in that a reference architecture goes one step further by accelerating the process for a particular architecture type, helping to identify which architectural approaches will satisfy particular requirements, and figuring out what minimally acceptable set of architectural artifacts is needed to meet the "best practices" requirements for a particular architecture. To continue our analogy with cookbooks, if an architecture framework is a book on how to write cookbooks, then a reference architecture is a book that provides guidance and best practices on how to write cookbooks focused on weight loss, for example. The particular architecture you develop for your organization would then be a specific cookbook that provides weight-loss recipes targeted to your organization. Indeed, if you get puzzled by the definitions, replacing the term "architecture" with "cookbook" is helpful: cookbook frameworks, reference cookbooks, and your particular cookbook.
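
For the developers in the audience, the layering can be made concrete with a rough Java sketch of the cookbook analogy. This is purely our illustration -- the types and names below are invented for the metaphor, and no standards body defines them. The framework says what any cookbook must describe, the reference architecture fills in best-practice defaults for one class of problem, and your particular architecture is the concrete specialization.

// Illustrative analogy only: mapping the cookbook metaphor onto Java types.

// The architecture framework: prescribes what any architecture description
// must cover, without mandating any specific architecture type.
interface CookbookFramework {
    String audience();        // who the architecture serves
    String[] requiredViews(); // which architectural "views" must be developed
}

// A reference architecture: best-practice defaults for one class of problem
// (here, "weight-loss cookbooks"), still not tied to any one organization.
abstract class WeightLossReferenceCookbook implements CookbookFramework {
    public String[] requiredViews() {
        return new String[] {"ingredients", "portions", "nutrition"};
    }
    public abstract String audience(); // each adopter supplies the specifics
}

// Your particular architecture: the reference architecture specialized
// for one organization's actual needs.
class AcmeCorpCookbook extends WeightLossReferenceCookbook {
    public String audience() { return "Acme Corp employees"; }
}

Note that, just as you never implement a reference architecture as-is, you never instantiate the abstract layers directly; only the concrete specialization runs.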

Furthermore, most reference architectures emphasize the "template" part of the definition of a reference architecture. Both frameworks and RAs provide best practices, and while it might be argued that RAs provide more of a methodology than a framework does, RAs are still not really characterized by their methodology component. Most can be characterized by their template component, however. In this context, patterns are instances of templates. In fact, multiple reference architectures for the same domain are allowable and quite useful. Reference architectures can be complementary, providing guidance for a single architecture, such as SOA, from multiple viewpoints.

The value of a SOA reference architecture

In many ways, SOA projects are in desperate need of well-thought-out reference architectures. ZapThink sees a high degree of variability in SOA projects. Some flourish and succeed, while others flounder and fail. Many times the reason for failure can be traced to bad architectural practices, premature infrastructure purchasing, and inadequate governance and management. Other times the failure is primarily organizational. What is common to most successes, however, is well-documented and well-communicated architectural practices, a systematic method for learning from one's mistakes, and a low cost of failure.

Furthermore, we find that many architects spend a significant amount of their time researching, investigating, (re-)defining, contemplating, and arguing architectural decisions. In many cases, these architects are reinventing the wheel as their peers in other companies, or even the same company, have already spent that time and effort defining their own architectural practices. This extra effort is not only inefficient, but also prevents the company from learning from its own experiences and applying that knowledge for increased effectiveness.

From this perspective, SOA reference architectures can provide some help to those struggling with their SOA efforts or thinking about launching a new one. SOA reference architectures allow organizations to learn from other architects' successes and failures and to inherit proven best practices. Reference architectures can supply missing architectural information to project team members in advance, enabling consistent architectural best practices. In this way, the SOA reference architecture provides a base of assets that SOA efforts can draw from throughout the project lifecycle.

Indeed, in order to gain the promised SOA benefits of reuse, reduced redundancy, reduced cost of integration, and increased visibility and governance, companies need to apply their SOA efforts in a consistent manner. This means more than buying and establishing some vendor's infrastructure as a corporate standard or adhering to the latest WS-* standards stack. SOA reference architectures can serve as the basis for disparate SOA efforts throughout the organization, even if they use different tools and technologies. Good SOA reference architectures provide SOA best practices and approaches in a vendor-, technology-, and standards-independent way. Therefore, don't go hunting for one from your vendor of choice. In fact, if you got your SOA reference architecture from that vendor, you might want to consider dropping it in favor of something more vendor-neutral.

In particular, OASIS offers a SOA Reference Architecture (RA) that "models the abstract architectural elements for a SOA independent of the technologies, protocols, and products that are used to implement a SOA. Some sections of the RA will use common abstracted elements derived from several standards." Their approach uses the concept of "patterns" to identify different methods and approaches for implementing different parts of the architectural picture. While the OASIS SOA Reference Architecture is certainly not the only valid one on the block, it certainly makes a good starting point for those looking for a vendor-neutral SOA reference architecture on which to base their own architectural efforts.

The ZapThink take

Enterprise architects need all the help they can get to make sure that they deliver reliable, agile, resilient, vendor-neutral architectures to their organizations that meet the continuously changing requirements of the business. While the art and practice of enterprise architecture certainly continues to mature, companies should borrow as many best practices as they can and learn from others who have already gone down the EA and SOA path. If you plan to learn SOA, or any form of EA for that matter, as you go along, or even worse, from a vendor, then you risk the entire success of your SOA efforts. Rather, leverage (for free) SOA reference architectures so that you can advance at a faster pace and lower risk.

Bernard of Chartres put it best in the well-known saying: "We are like dwarfs on the shoulders of giants, so that we can see more than they, and things at a greater distance, not by virtue of any sharpness of sight on our part, or any physical distinction, but because we are carried high and raised up by their giant size." Stand on the shoulders of other enterprise architecture giants and let them increase your vision and success.

This guest post comes courtesy of ZapThink. Ron Schmelzer, a senior analyst at ZapThink, can be reached here.

Take the BriefingsDirect middleware/ESB survey now.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification, and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Friday, August 14, 2009

HP partners with iTKO on LISA services testing suite for SOA, BPM

Take the BriefingsDirect middleware/ESB survey now.

When HP inks a deal to resell your testing software, you know you must be doing something right.

HP is reselling iTKO’s LISA Virtualize product, a suite of test, validation and virtualization solutions optimized for distributed, multi-tier applications that leverage SOA, BPM, cloud computing, integration suites and ESBs. HP’s aim is to help customers reduce testing costs and speed the time to market for modern applications. [Disclosure: HP and iTKO are sponsors of BriefingsDirect podcasts.]

How does LISA help HP’s Quality and Performance Management solutions suite? By eliminating common system infrastructure dependencies during application testing. The idea is to trim both the cost and risk of modern Quality Assurance – a major issue for today’s enterprise.

Here’s how it works: LISA Virtualize does away with system dependency constraints by simulating the dynamic behavior and performance conditions of downstream system dependencies. In other words, you can see how systems react and respond as if they were running live – but they aren’t running live. That saves time and money.
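
To make the concept tangible, here is a minimal sketch of a "virtualized" downstream dependency in Java. This is our own illustration of the service virtualization idea, not iTKO code; the endpoint, payload, and latency figure are invented. The stub returns a canned response after an artificial delay, so tests can exercise the system under test without the real dependency being live.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

// A stand-in for a slow downstream service: it mimics the response and
// latency of the real dependency so tests need not touch the live system.
public class VirtualizedDependency {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/inventory", exchange -> {
            try {
                Thread.sleep(250); // simulate the observed downstream latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "{\"sku\":\"ABC-1\",\"inStock\":42}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start(); // tests now point at localhost:8080 instead of production
    }
}

Point the application under test at the stub's address, and the downstream system can be down, rate-limited, or still under construction without stalling the test run.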

Jonathan Rende, vice president and general manager of the Business Technology Optimization Applications, Software and Solutions group at HP, said: "Customers can reduce costs and speed up their ability to respond to business needs by modernizing their applications.”

By bringing together HP Quality Center and HP Performance Center solutions with iTKO's LISA Virtualize software, Rende said customers can remove delay-causing system dependencies during testing processes. The result: saving time and lowering the cost of delivering complex applications.

To be sure, putting quality top of mind earlier in the development process is a key to reducing defects and speeding time to market. And Shridhar Mittal, iTKO's CEO, claims the company’s virtualization capabilities lower test lab costs by up to 65 percent and shorten software release cycles by up to 38 percent.

If those claims hold true, it’s easy to see why HP is partnering with this young company. The running theme with this announcement is saving time and money – both critical selling points in a down economy.

Take the BriefingsDirect middleware/ESB survey now.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached here and here.

Thursday, August 13, 2009

Got middleware? Got ESBs? Take this survey, please.

Take the brief online survey.

I keep hearing about how powerful social media is for gathering insights from the IT communities and users. Yet I rarely see actual market research conducted via the social media milieu.

So now's the time to fully test the process. I'm hoping that you users and specifiers of enterprise software middleware, SOA infrastructure, integration middleware, and enterprise service buses (ESBs) will take 5 minutes and fill out my BriefingsDirect survey. We'll share the results via this blog in a few weeks.

We're seeking to uncover the latest trends in actual usage and perceptions around these technologies -- both open source and commercial.

How middleware products -- like ESBs -- are used is not supposed to change rapidly. Enterprises typically choose and deploy integration software infrastructure slowly and deliberately, and they don't often change course without good reason.

But the last few years have proven an exception. Middleware products and brands have shifted more rapidly than ever before. Vendors have consolidated, product lines have merged. Users have had to grapple with new and dynamic requirements.

Open source offerings have swiftly matured, and in many cases advanced capabilities beyond the commercial space. Interest in SOA is now shared with anticipation of cloud computing approaches and needs.

So how do enterprise IT leaders and planners view the middleware and SOA landscape after a period of adjustment -- including the roughest global recession in more than 60 years?

This brief survey, distributed by BriefingsDirect for Interarbor Solutions, is designed to gauge the latest perceptions and patterns of use and updated requirements for middleware products and capabilities. Please take a few moments and share your preferences on enterprise middleware software. Thank you.

Take the brief online survey.

Wednesday, August 12, 2009

Cloud Security Panel: Is cloud computing more or less secure than on-premises IT?

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Sponsor: The Open Group.

Welcome to a special sponsored podcast discussion coming from The Open Group’s 23rd Enterprise Architecture Practitioners Conference in Toronto. This podcast, part of a series from the July 2009 event, centers on cloud computing security.

Much of the cloud security debate revolves around perceptions. ... For some, cloud security is a matter of seeing the risk glass as half-full or half-empty. Yet security in general takes on a different emphasis as services are mixed and matched from a variety of internal and external sources.

So will applying conventional security approaches and best practices be enough for low-risk, high-reward cloud computing adoption? Most importantly, how do companies know when they are prepared to begin adopting cloud practices without undue security risks?

Here to help better understand the perils and promises of adopting cloud approaches securely, we welcome our panel: Glenn Brunette, distinguished engineer and chief security architect at Sun Microsystems and founding member of the Cloud Security Alliance (CSA); Doug Howard, chief strategy officer of Perimeter eSecurity and president of USA.NET; Chris Hoff, technical adviser at CSA and director of Cloud and Virtualization Solutions at Cisco Systems; Dr. Richard Reiner, CEO of Enomaly; and Tim Grance, program manager for cyber and network security at the National Institute of Standards and Technology (NIST).

The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Reiner: There are security concerns with cloud computing. Relative to the security concerns in the ideal enterprise mode of operation, there is some good systematic risk analysis to model the threats that might impinge upon a particular application and the data it processes, and then to assess the suitability of different environments for potential deployment of that stuff.

There are a lot more question marks around today's generation of public-cloud services, generally speaking, than there are around the internal computing platforms that enterprises can use. So it's easier to answer those questions. It's not to say the answers are necessarily better or different, but the questions are easier to answer with respect to the internal systems, just because there are more decades of operating experience, there is more established audit practice, and there is a pretty good sense of what's going to be acceptable in one regulatory framework or another.

Howard: The first thing that you need to know is, "Am I going to be able to deliver a service the same way I deliver it today at minimum? Is the user experience going to be, at minimum, the same that I am delivering today?"

Because if I can't deliver, and it's a degradation of where my starting point is, then that will be a negative experience for the customers. Then, the next question is, obviously, is it secure? What about business continuity? Are all those things, and where that actual application resides, completely transparent to the end user?

Brunette: Is cloud computing more or less secure than client-server? I don't think so. I don't think it is either more or less secured. Ultimately, it comes down to the applications you want to run and the severity or criticality of these applications, whether you want to expose them in a shared virtualized infrastructure.

... When you start looking at the cloud usage patterns and the different models, you're going to see that governance does not end at your organization's border. You're going to need to understand the policies, the processes, and the governance model of the cloud providers.

It's going to be important that we have a degree of transparency and compliance out in the cloud in a way that can be easily consumed and integrated back into an organization.

Hoff: One of the interesting notions of how cloud computing alters the business case and use models really comes down to a lot of pressure combined with the economics today. Somebody, a CIO or a CEO, goes home and is able to fire up their Web browser, connect to a service we all know and love, get their email, enjoy a robust Internet experience that is pretty much seamless, and just works.

Then, they show up on Monday morning and they get the traditional, "That particular component is down. That doesn't work. This is intrusive. I've got 47,000 security controls that I don't understand. You keep asking for more money."

Grance: Cloud has a vast potential to cause a disintermediation, just like in power and other kinds of industries. I think it may run eventually through some of these consulting companies, because you won't be able to get as rich off of consulting for that.

In the meantime, I think you're going to have ... people simply just roll their own [security]. Here's my magic set of controls. It may not be all of them. It may just be a few of them. I think people will shop around for those answers, but I think the marketplace will punish them.

Howard: ... If you look at a lot of the cloud providers, we tend, in many cases, to fight some standards, because, in reality, we want to have competitive differentiators in the marketplace. Sometimes, standards and interoperability are key ones; sometimes, standards limit our ability to differentiate ourselves in the marketplace.

However, on the security side, I think that's one of the key areas where you definitely can get the cloud providers behind standards, because, if we have 10,000 clients, the last thing we want is to have to keep enough people sitting around taking the individual requests of all the audits that are coming in from those customers.

... So, to put standards behind those types of efforts is an absolute requirement in the industry to make it scalable, not just beyond the infrastructure, performance, availability, and all those things, but actually from a cost perspective of people supporting and delivering these services in the marketplace.

Brunette: ... One of the other things I'd point out is that, it's not just about the cloud providers and the cloud consumers, but there are also other opportunities for other vendors to get into the fray here.

One of the things that I've been a strong proponent of is, for example, OS vendors producing better, more secure, hardened versions of their operating systems that can be deployed and that are measurable against some standard, whether a benchmark from the Center for Internet Security, or FDCC in the commercial or in the federal space.

You may also have the opportunity for third parties to develop security-hardened stacks. So, you'd be able to have a LAMP stack, a Drupal stack, an Oracle stack, or whatever you might want to deploy, which has been really vetted by the vendor for supportability, security, performance, and all of these things. Then, everyone benefits, because you don't all have to go out there and develop your own.

Howard: ... At the end of the day, if you develop and you deliver a service ... and the user experience is positive, they're going to stay with the service.

On the flip side, if somebody tries to go the cheap way and ultimately delivers a service that has not got that high availability, has got problems, is not secure, and they have breaches, and they have outages, eventually that company is going to go out of business. Therefore, it's your task right now to figure out who are the real players, and does it matter if it's an Oracle database, SQL database, or MySQL database underneath, as long as it's meeting the performance requirements that you have.

Unfortunately, right now, because everything is relatively new, you will have to ask all the questions and be comfortable that those answers are going to deliver the quality of service that you want. Over time, on the flip side, it will play out and the real players will be the real players at the end of the day.

Hoff: ... It [also] depends on what you pay for it, and I think that's a very interesting demarcation point. There is a service provider today who doesn’t charge me anything for getting things like mail and uploading my documents, and they have a favorite tag line, “Hey, it’s always in beta.” So the changes that you might get could be that the service is no longer available. Even with enterprise versions of them, what you expect could also change.

... In the construct of SaaS, can that provider do a better job than you can, Mr. Enterprise, in running that particular application?

This comes down to an issue of scale. More specifically, what I mean by that is, if you take a typical large enterprise with thousands of applications, which they have to defend, safeguard, and govern, and you compare them to a provider that manages what, in essence, equates to one application, comparing apples to elephants is a pretty unreasonable thing, but it’s done daily.

What’s funny about that is that, if you take a one-to-one comparison with that enterprise that is just running that one application with the supporting infrastructure, my argument would be that you may be able to get just as good, perhaps even better, performance than the SaaS provider. It’s when you get to the point where you define scale, whether on the consumer side or by the number of apps you provide, that the question gets interesting.

... What happens then when I end up having 50 or 60 cloud providers, each running a specific instance of these applications? Now, I've squeezed the balloon. Instead of managing my infrastructure, I'm managing a bunch of other guys who I hope are doing a good job managing theirs. We are transferring responsibility, but not accountability, and they are two very different things.

Brunette: ... In almost every case, the cloud providers can hide all of that complexity, but it gives them a lot more flexibility in terms of which technology is right for their underlying application. But, I do believe that over time they will have a very strong value proposition. It will be more on the services that they expose and provide than the underlying technology.

Hoff: ... The reality is, portability and interoperability are really going to hinge on first defining the workload, expressing the security requirements attached to that workload, and then being able to have providers attest to them, in the long term, in a marketplace.

I think we called it "the Intercloud," a way where you go through service brokers or do direct interchange with these types of standards and protocols to say, “Look, I need this stuff. Can you supply these resources that meet these requirements? No? Well, then I go somewhere else.”

Some of that is autonomic, some of it’s automated, and some of it will be manual. But, that's all predicated, in my opinion, upon building standards that let us exchange that information between parties.

Reiner: I don't think anyone would disagree that learning how to apply audit standards to the cloud environment is something that takes time and will happen over time. We probably are not in a situation where we need yet another audit standard. What we need is a community of audit practices to evolve and to mature to the point where there is a good consensus of opinion about what constitutes an appropriate control in a cloud environment.

Brunette: As Chris said, it comes down to open standards. It's important that you are able to get your data out of a cloud provider. It's just as important that you have a standard representation of that data, something that can be read by your own applications, if you want to bring it back in house, and something that you can use with another provider, if you decide to go that route.

Grance: I'm going to go out on a limb and say that NIST is in favor of open, voluntary consensus, but data representation and APIs are early places where people can start. I do want to say important things about open standards. I want to be cautious about how much we specify too early, because there is a real ability to over-specify early and do things really badly.

So it's finding that magic spot, but I think it begins with data representation and APIs. Some of these areas will start with best practices and then evolve into standards, but again the marketplace will ultimately speak to this. Convey your requirements in a clear and pristine fashion, put the procurement forces behind them, and you will begin to get the standards that you need.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Sponsor: The Open Group.

Cloud computing proves a natural for offloading time-consuming test and development processes

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Learn more. Sponsor: Electric Cloud.

Our latest podcast discussion centers on using cloud computing technologies and models to improve the test and development stages of application creation and refinement. One area of cloud computing that has really taken off and generated a lot of interest is the development, test, and performance proofing of applications -- all from an elastic cloud services fabric.

The build and test phases of development have traditionally proven complex, expensive, and inefficient. Periodic bursts of demand on runtime and build resources are the norm. By using a cloud approach, the demand bursts can be accommodated better through dynamic resources, pooling, and provisioning.
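
A back-of-the-envelope sizing exercise shows why these bursts fit the rental model. The Java sketch below is our own illustration with invented numbers; the arithmetic, not the figures, is the point: you buy for the average load and rent for the peak.

// Hypothetical sizing for a nightly test "burst." The numbers are invented,
// but the shape of the math is why elastic capacity suits dev/test demand.
public class TestBurstSizing {
    public static void main(String[] args) {
        int tests = 5000;           // regression suite size (assumed)
        int secondsPerTest = 30;    // average test duration (assumed)
        int deadlineSeconds = 3600; // results wanted within one hour

        long totalWork = (long) tests * secondsPerTest; // 150,000 CPU-seconds
        long machines = (totalWork + deadlineSeconds - 1) / deadlineSeconds;

        // Roughly 42 machines for one hour, then released -- versus owning
        // 42 machines that sit mostly idle between nightly runs.
        System.out.println("Machines to rent for the burst: " + machines);
    }
}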

We've seen this done internally for development projects and now we're starting to see it applied increasingly to external cloud resource providers like Amazon Web Services. And Microsoft is getting into the act too.

To help explain the benefits of cloud models for development services and how to begin experimenting and leveraging external and internal clouds -- perhaps in combination -- for test resource demand and efficiency, I recently interviewed Martin Van Ryswyk, vice president of engineering at Electric Cloud, and Mike Maciag, CEO at Electric Cloud.

Here are some excerpts:
Van Ryswyk: Folks have always wanted their builds to be fast and organized and to be done with as little hardware as possible. We've always struggled to get enough resources applied to the build process.

One of the big changes is that folks like Amazon have come along and really made this accessible to a much wider set of build teams. The dev and test problem really lends itself to what's been provided by these new cloud players.

Maciag: The traditional approaches of the overnight build, or even to the point of what people refer to as continuous integration, have fallen short, because they find problems too late. The best world is where engineers or developers find problems before they even check in their code and go to a preflight model, where they can run builds and tests on production-class systems before checking code into the source code control system.

Van Ryswyk: At a certain point, you just want it to happen like a factory. You want to be able to have builds run automatically. That's what ElectricCommander does. It orchestrates that whole process, tying in all the different tools, the software configuration management (SCM) tools, defect tracking tools, reporting tools, and artifact management -- all of that -- to make it happen automatically.

And that's really where the cloud part comes in. ... Then, you're bringing it all back together for a cohesive end report, which says, "Yes, the build worked." ElectricCommander was already allowing customers to manage the heterogeneity on physical machines and virtual machines (VMs). With some integrations we've added you can now extend that into the cloud.

There will be times when you need a physical machine, there will be times when your virtual environment is right, and there will be times when the cloud environment is right. ... We may not want to put our source code out in the cloud, but we can use 500 machines for a few hours to do some load, performance, or user interface testing. That's a perfect model for us.

... When you have these short duration storms of activity that sometimes require hundreds and hundreds of computers to do the kind of testing you want to do, you can rent it, and just use what you need. Then, as soon as you're done with your test storm, it goes away and you're back to the baseline of what you use on average.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Learn more. Sponsor: Electric Cloud.

VMware fleshes out its cloud computing support model with SpringSource grab

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

VMware’s proposed $362 million acquisition of SpringSource is all about getting serious in competing with Salesforce.com and Google App Engine as the Platform-as-a-Service (PaaS) cloud with the technology that everybody already uses.

This acquisition was a means to an end, pairing two companies that could not be less alike. VMware is a household name, sells software through traditional commercial licenses, and markets to IT operations. SpringSource is a grassroots, open source, developer-oriented firm whose business is a cottage industry by comparison. The cloud brought together two companies that each faced complementary limitations on their growth. VMware needed to grow out beyond its hardware virtualization niche if it was to regain its groove, while SpringSource needed to grow up and find deeper pockets to become anything more than a popular niche player.

The fact is that providing a virtualization engine, even if you pad it with management utilities that act like an operating system, is still a raw cloud with little pull unless you go higher up in the stack. Raw clouds appeal only to vendors that resell capacity or to large enterprises with the deep benches of infrastructure expertise to run their own virtual environments. The rest of us need a player that provides a deployment environment, handles the plumbing, and is married to a development environment. That is what Salesforce’s Force.com and Google’s App Engine are all about. VMware’s gambit is in a way very similar to Microsoft’s Software + Services strategy: use the software and platforms that you are already used to, rather than some new environment, in a cloud setting. There’s nothing more familiar to large IT environments than VMware’s ESX virtualization engine, and in the Java community, there’s nothing more familiar than the Spring framework, which, according to the company, accounts for roughly half of all Java installations.

With roughly $60 million in stock options for SpringSource’s 150-person staff, VMware is intent on keeping the people, as it knows nothing about the Java virtualization business. Normally, we’d question a deal like this because the companies are so dissimilar. But the fact that they are complementary pieces of a PaaS offering gives the combination stickiness.

For instance, VMware’s vSphere cloud management environment (in a fit of bravado, VMware calls it a cloud OS) can understand the resource consumption of VM containers; with SpringSource, it gets to peer inside the black box and understand why those containers are hogging resources. That provides more flexibility and smarts for optimizing virtualization strategies, and can help cloud customers answer the question: do we need to spin out more VMs, perform some load balancing, or re-apportion all those Spring TC (Tomcat) servlet containers?
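
A toy decision rule makes the value of that visibility concrete. The sketch below is our own illustration, not vSphere or SpringSource logic, and the metric names and thresholds are invented; the point is that application-level telemetry lets the platform distinguish a saturated host from a saturated servlet container and pick a different remedy for each.

// Illustrative only: with metrics from inside the container, "running hot"
// can map to different remedies. All thresholds here are invented, and the
// advisor is assumed to run only once load pressure has been detected.
enum Remedy { SPIN_UP_VM, REBALANCE_LOAD, ADD_SERVLET_CONTAINER }

class CapacityAdvisor {
    static Remedy advise(double hostCpu, double jvmHeapUse, double requestSkew) {
        if (hostCpu > 0.85 && jvmHeapUse > 0.85) {
            return Remedy.SPIN_UP_VM;        // host and application both saturated
        }
        if (requestSkew > 0.5) {
            return Remedy.REBALANCE_LOAD;    // traffic unevenly spread across nodes
        }
        return Remedy.ADD_SERVLET_CONTAINER; // application-level pressure only
    }
}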

The addition of SpringSource also complements VMware’s cloud portfolio in other ways. In his blog post about the deal, SpringSource CEO Rod Johnson noted that the idea of pairing Spring with VMware’s Lab Manager (that’s the test lab automation piece that VMware picked up through the Akimbi acquisition) proved highly popular with Spring framework customers. In actuality, if you extend Lab Manager from simply spinning out images of testbeds to spinning out runtime containers, you would have VMware’s answer to IBM’s recently introduced WebSphere CloudBurst appliance.

VMware isn’t finished, however. The most glaring omission is the need for distributed Java object caching to provide yet another path to scalability. If you rely only on spinning out more VMs, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to provide. So we wouldn’t be surprised if GigaSpaces or Terracotta were next in VMware’s acquisition plans.
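
For readers who haven’t met distributed caching, the pattern at stake is cache-aside: serve repeat reads from a shared in-memory store so that scaling reads doesn’t mean spinning out more VMs. The Java sketch below is our own illustration with an invented example; a plain ConcurrentHashMap stands in for the JVM-spanning cache that a product like Terracotta or GigaSpaces would actually supply.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: repeat reads hit the cache instead of the database.
// In a real deployment the map would be a distributed cache shared across
// JVMs (the role Terracotta or GigaSpaces plays), not a per-process map.
class ProductCatalog {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String lookup(String sku) {
        // Load from the expensive backing store only on a cache miss.
        return cache.computeIfAbsent(sku, this::loadFromDatabase);
    }

    private String loadFromDatabase(String sku) {
        // Stand-in for the slow call the cache exists to avoid repeating.
        return "details-for-" + sku;
    }
}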

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Monday, August 10, 2009

BriefingsDirect analysts debate the 'imminent death' of enterprise IT as cloud models ascend

Download or view the transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 43. Our topic centers on the pending purported death of corporate IT. You may recall that in the early 1990s, IT pundits also glibly predicted that the plug would be pulled on the last mainframe in 1996. It didn't happen.

The mainframe continues to support many significant portions of corporate IT functions. But these sentiments are newly rekindled and expanded these days through the mounting expectations that cloud computing and software-as-a-service (SaaS) will hasten the death of on-premises enterprise IT.

Some of the analyst reports these days indicate that hundreds of billions of dollars in IT spending will soon pass through the doors of corporate IT and into the arms of various cloud-service providers. We might conclude that IT is indeed about to expire.

Not all of us, however, subscribe to this view of the pace of the demise of on-premises systems and their ongoing upkeep, maintenance, and support. To help us better understand the actual future role of IT on the actual floors inside of actual companies, we're joined by our guests and analysts this week: Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Ron Schmelzer, senior analyst, ZapThink; Sandy Rogers, former program director at IDC, and now independent IT analyst and consultant; and, as our guest this week, Alex Neihaus, vice president of marketing at Active Endpoints.

Here are some excerpts:
Kobielus: I can predict right now, based on my conversations with Forrester customers, and specifically my career in data warehousing and business intelligence (BI), that this notion of the death of IT is way too premature, along the lines of the famous Mark Twain quote.

... There aren't a substantial number of enterprises that have outsourced their data warehouse or their marts. [But] I think 2011 will see a substantial number of data warehouses deployed into the cloud.

The component of your data-warehousing environment that will be outsourced to public cloud, initially, in many cases, will not be your whole data warehouse. Rather it will be a staging layer, where you're staging a lot of data that's structured and unstructured and that you're pulling from both internal systems, blogs, RSS feeds, and the whole social networking world -- clickstream data and the like.

Baer: Actually, I just completed a similar study in application lifecycle management (ALM), and I did find that the cloud certainly is transforming the market. It's still at the very early stages, but ... two areas really stuck out. One is anything collaborative in nature, where you need to communicate -- especially as development teams go more global and more distributed -- ... [and] planning, budgeting, asset management, project portfolio management, and all those collaborative functions did very well [in the cloud].

Another side that did very well ... is anything that had very dynamic resource needs, where today you need a lot of resource and tomorrow you don't. A good example of that is testing -- if you are a corporate IT department, which has periodic releases, you have peaks and valleys in terms of when you need to test and do regression tests.

[But] I got a lot of reluctance out there to do anything regarding coding in the cloud. ... So, in terms of IT being dead, well, at least with regard to cloud and on-premise, that's hardly the case in ALM.

Shimmin: Because I follow the collaboration area, I see [cloud adoption] happening much, much more quickly. ... Those are the functions that IT would love to get rid of. It's like a diseased appendix. I would just love to get rid of having to manage Exchange Servers. Any of us who have touched any of those beasts can attest to that.

So, even though I'm a recovering cynic and I kind of bounce between "the cloud is just all hype" and "yes, the cloud is going to be our savior," for some things like collaboration, where it already had a lot of acceptance, it's going to drive a lot of costs [out].

Schmelzer: It's really interesting. If you look at when most of the major IT shifts happen, it's almost always during periods of economic recession. ... Companies are like, "I hate the systems I have. I'm trying to deal with inefficiency. There must be something wrong we're doing. Let's find some other way to do it." Then, we go ahead and find some new way to do it. Of course, it doesn't really solve all of our problems.

The cost-saving benefit of cloud is clearly there. That's part of the reason there is so much attention on it. People don't want to be investing their own money in their own infrastructure. They want to be leveraging economies of scale, and one of the great things that clouds do is provide that economy of scale. ... On the whole question of IT, the investments, and what's going to happen with corporate enterprise IT, I think we're going to see much bigger changes on the organizational side than the technological side. It’s hard for companies to get rid of stuff they have invested billions of dollars in.

... IT organizations will become a lot smaller. I don't really believe in a 4,000-person IT organization, whose primary job is to keep the machines running. That's very industrial revolution, isn't it?

Rogers: I see enterprises all the time that are caught between a rock and a hard place, where they have specialized technologies that were built out in the client-server era. They haven't been able to find any replacements.

Taking a legacy system that may be very specialized and far-reaching, with a lot of integrations and dependencies with other systems, and changing it is very difficult. ... When we're talking about cloud and SaaS, it's going to impact different layers. ... We may want to think about leveraging other systems and infrastructure, more of the server, more of the data center layer, but there is going to be a huge number of implications as you move up the stack, especially in the middleware and integration space.

We're still at the very beginning stages of leveraging services and SOA, when you look at the mass market. ... There's a lot of work that needs to be done to just think about turning something off, turning something on, and thinking that you are going to be able to rely on it the same way that you've relied on the systems that have been developed internally. It's not to say it won't be done, but it certainly has a big learning curve that the whole industry will be engaging in.

Neihaus: What we find more interesting is not the question of whether the cloud will subsume IT, or IT will subsume the cloud, but who should be creating applications? ... There is a larger question today of whether end users can use these technologies to completely go around IT and create their own applications themselves.

For us, that seems to be the ultimate disingenuousness, the ultimate inability, for all the reasons that everyone discussed. ... The question really is whether the combination of these technologies can be made to foster a new level of collaboration in enterprises where, frankly, IT isn't going to go away. The most rapid adoption of these technologies, we think, is in improving the way IT responds in new ways, and in more clever ways, with a lot more end-user input, into creating and deploying applications.

For us, the cosmic question is whether we are really at the point where end users can take elements that exist in the cloud and their own data centers and create processes and applications that run their business themselves. And our response is that that's probably not the case, and it's probably not going to be the case anytime soon. If, in fact, it were the case, it would still be the wrong thing to do in enterprises, because I am not sure many CEOs want their business end users being IT.

Kobielus: You need strong governance to keep this massive cloud sandbox from just becoming absolute chaos.

So, it's the IT group, of course, doing what they do best, or what they prefer to be doing, which is architecture, planning, best practices, templates, governance control, oversight support, and the whole nine yards to make sure that, as you deal in new platforms for process and data, such as the cloud, those platforms are wrapped with strong governance.

Baer: You can't provide users the ability to mash-up assets and start creating reports without putting some sort of boundary around it.

This is process-related, which is basically instituting strong governance and having policies that say, "Okay, you can use these types of assets or data under these scenarios, and these roles can access this and share this."

Rogers: The sophistication of the solution interfaces and of the management and administrative capabilities to enable governance is very nascent in the cloud offerings. That's an opportunity for vendors to address. There's an increasing need to compose and integrate silos within organizations. That has a huge implication on governance activities.

Gardner: I'd like to go around the table. On a scale of 1 to 10, where do you think we're going to see the IT department's role in three years -- with 1 being IT is dead, and 10 being IT is alive, robust, and growing vibrantly?

Kobielus: I'll be really wishy-washy and give it a 5. ... IT will be alive, kicking, robust, and migrating toward more of a pure planning, architecture, and best practices function.

Much of the actual guts of IT within an organization will migrate to hosted environments, and much of the development will be done by end users and power users. I think that's the writing on the wall.

Baer: I am going to give it an 8. ... I don't see IT's role diminishing. There may be a lower headcount, but that can just as much be attributed to a new technology that provides certain capabilities to end users and also using some external services. But, that's independent of whether there's a role for IT, and I think it pretty much still has a role.

Shimmin: I'm giving it a 7 for similar reasons. I think that it's going to scale back in size a little bit, but it's not going to diminish in value. IT is just a continuously changing thing. ... I think it's going to be very much alive, but the value is going to be more of a managerial role working with partners. Also, the role is changing to be more of business analysts, if you will, working with their end users too. Those end users are both customers and developers, in some ways, rather than these guys just running around, rebooting Exchange servers to keep the green lights blinking.

Schmelzer: I think it's 10. IT is not going to go away. I don't think IT is going to be suffering. ... I guarantee that whatever it looks like, it will be still as important as an IT organization.

Rogers: Probably in the 7 to 8 range. ... In some enterprises, IT is in deep trouble if they do not embrace new technologies and new opportunities and become an adviser to the business. So it comes down to the transition of IT in understanding all the tools and capabilities that they have at their disposal to get accomplished what they need to.

Some enterprises will be in rough shape. The biggest changeover is the vendor community. They are in the midst of changing over from being technology purveyors to solution and service purveyors. That's where the big shift is going to happen in three years.

Neihaus: Our self-interest is in a thriving segment of IT, because that's who we serve. So, I rate it as a 10 for all of the reasons that the much-more-distinguished-than-I panel has articulated. The role of IT is always changing and impacted by the technologies around it, but I don't think that can be used as an argument that its importance or its capabilities inside organizations will diminish.

Gardner: Well, I'll go last and I'll of course cheat, because I'm going to break it into two questions. I think their importance will be as high or higher, so 8 to 10, but their budget, the percent of spend that they're able to derive from the total revenues of the organization, will be going down. The pressure will be on from a price and monetary budgeting perspective, so the role of IT will probably be down around 4.
Download or view the transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Friday, August 7, 2009

Cloud pushes enterprise architects' role beyond IT into business process optimization czar

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Sponsor: The Open Group.

Welcome to a special sponsored podcast discussion coming from The Open Group’s 23rd Enterprise Architecture Practitioners Conference in Toronto. This podcast, part of a series from the July 2009 event, centers on the fast-changing role and expanding impact of enterprise architecture (EA).

The enterprise architect role is in flux, especially as we consider the heightening interest in cloud computing. The down economy has also focused IT spending on faster, better, and cheaper means to acquire and manage IT functions and business processes.

As service components shift in their origins and delivery models, the task of meeting or exceeding business requirements based on these services becomes all the more complicated. Business outcomes and business processes become the focus, yet they may span many aspects of IT, service providers, and the business units and partners/suppliers involved.

The new services era calls for powerful architects who can define, govern, and adjust all of the necessary ingredients. This new process czar role must creatively support and improve a business process lifecycle over many years.

Yet who or what will step into this gulf between the traditional means of IT and the new cloud ecology of services? The architect's role, still a work in progress at many enterprises, may well become the key office where the buck stops in this new era.

What then should be the role, and therefore what is the new opportunity for enterprise architects? Here to lead the way in understanding the evolving EA issue, we're joined by our panel, Tim Westbrock, managing director of EAdirections; Sandy Kemsley, an independent IT analyst and architect; and John Gotze, international president for the Association of Enterprise Architects. The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Kemsley: I work a lot with companies to help them implement business process management (BPM) solutions, so I get involved in architecture things, because you're touching all parts of the organization. ... A lot of very tactical solution architects are working on a particular project, but they're not thinking about the bigger picture.

... In many organizations, architecture is not done all that well. It's done on an ad hoc basis. It's done at more of the deep technical level. I can understand why the anti-architecture people get frustrated with that type of architecture, because it's not really EA.

Westbrock: The more strategic enterprise architects depend on the strategic nature of the executives of the organization. If we're going to bring it into layers of abstraction, they don't go more than a layer or two down from strategy. ... One of the good transformations, or evolutionary steps that I have seen in enterprise architects is less of a technology-only focus. Enterprise architect used to be synonymous with some kind of a technology architect, a platform architect, or a network architect, and now you are seeing broader enterprise architects.

Gotze: [The down economy] is helping to change the focus in EA from the more tactical to the more strategic issues. I've seen this downturn in the economy before. It's reinforcing the changes in the discipline, and EA is becoming more and more of a strategic effort in the enterprise.

There are some who call us enterprise architects by profession, and this group at The Open Group conference is primarily people who are practitioners as enterprise architects. But, the role of EA is widening, and, by and large, I would say the chief executive is also an enterprise architect, especially with the downturn.

Westbrock: I still don't think business architecture is within the domain of most IT enterprise architects. ... There are some different drivers that are getting some organizations to think more holistically about how the business operates. ... Modeling means we need architects. We're getting involved in some of these more transformational elements, and because of that, need to look at the business. As that evolves more, you might see more business ownership of enterprise architects. I don't see it a lot right now.

Kemsley: In many of the companies that I work with ... there is this struggle between the IT architects and/or the enterprise architects, who are really IT architects, looking at how we need to bring things in from the cloud and how we need to make use of services outside.

They're vowing to have all of that come through IT, through the technology side. This puts a huge amount of overhead on it, both from a governance standpoint, but also from an operational standpoint. That's causing a lot of issues. If you don't get EA out of IT, you're going to have those issues as you start going outside the organization [for services].

... It's the ones who are starting to regenerate their architect community internally -- both with business architects and with architects on the IT side -- who can bring these ideas about cloud computing. [It's about] using business process modeling notation (BPMN) that can be done by the business architects and even business people, as opposed to having all of that type of work done in the IT area.

Gotze: The IT department will not disappear, of course. It's naive to say that IT doesn't matter. It's not the point that IT is irrelevant or anything, but it's the emphasis on the strategic benefits for the enterprise.

The whole notion of business-IT alignment ... is yesterday's concern. Now it's more about thinking about the coherent enterprise, that everything should fit together. It's not just alignment. You can have perfectly well aligned systems and processes, without having a coherent enterprise. So, the focus basically must be on coherency in the enterprise.

Westbrock: I don't think that this is a new problem. ... The difference between the '80s and '90s and now is that it's not a chain with seven big links. It's an intricate network with hundreds, if not thousands, of pieces. ... That adds complexity and an element of governance that we need to mature toward. ... Where is that expertise going to come from? How are we going to capture which vendors that popped up this week are still going to be around next week?

Kemsley: The ones that can handle this new world of complexity well are ones that can bring some of the older aspects of governance, because you still have to worry about the legacy systems and all of the things that you have internally. You're not going to throw that stuff away tomorrow and bring in some completely new architecture. But, you need to start bringing in these new ideas.

Gotze: There will be a standardization and certification [process for architects]. That will not go away. ... [But it's at] the strategic level of architecture where you must have an emphasis on innovation and diversity to make it work.

... It will be some kind of hybrid model. Look at how government is working with it. They are enterprises after all -- it's not just the private sector. There's much more emphasis in government on getting all the agencies and departments to work together and to understand each other.

Westbrock: We're still decades away from any kind of maturity in the business architecture space, whether that be method, process, or organization. But, we're now at the point where more standardization in the applications or solutions and the data or information layers is going to help us with this particular challenge that's facing enterprise architects.

... I don’t think that the expectations for most enterprise architects are to enable business transformation. In most organizations that I deal with, it’s to help with better solutions here and there. It’s to do some technology research and mash it up against business capabilities. It’s not this grand vision that I think most of us have as enterprise architects in the profession of what we can accomplish.

Kemsley: I don’t see the business leadership clamoring to take over architecture anytime soon. ... You're not going to get the CEO coming in and saying on day one, "Oh, I want to takeover that architecture stuff."

Gotze: That’s also because we in the profession have managed to create a vocabulary that's nearly impossible for people outside the profession to understand. I think the executive leadership will want to take over the work that the strategic EA is doing. They might not call it EA, but they will be the ultimate architect. The CEO is the ultimate chief architect for a forward-looking and innovative enterprise.

Kemsley: We have to learn to use EA power for good, rather than evil, though. In a lot of cases, it’s just about implementation. It’s sort of downward looking. Enterprise architects tend to look down into the layers rather than, as Tim was saying, feed it back up to the layers above that.

Westbrock: When we talk to folks about the kinds of capabilities, skills, and credentials that they're looking for in enterprise architects, deep technical ability is nowhere on the list. It's not because that deep technical ability is not useful. It's because, generally, people who are performing those deep technical tasks lack the breadth of experience that makes enterprise architects good.

They have that deep technical knowledge, because they've done that a long time. They've become experts in that silo. ... [But] the folks that are going to be called to function as enterprise architects are folks that need a much broader set of skills and experience.

Gotze: I agree. The deep technical skills will come way down the list. Communication is very high on the list -- understanding, contracting, and so on, because we have the cloud and similar stuff also very high on the list.

Westbrock: The folks that have been successful are the ones that take the time to do two things. They build artifacts and processes that work down, they build artifacts and processes that work up, and they realize that they're different. You don't build an artifact for a developer and take that to a member of the board. You don't build project design review processes and then say, "Okay, we're going to apply that same process at the portfolio level or at the department level."

We don't have communication strategies that are going to facilitate the broadcast of results to the people that use the standards, and then use the same strategy and modes of communication for attaining strategic understanding of business drivers. It's really been a separation, knowing that there's a whole different set of languages and models and artifacts that we need here and a whole different set here.

... There is a huge opportunity for enterprise architects relative to not just the cloud. The cloud is just one more of the enablers of service orientation, not SOA, but service orientation.

Somebody needs to own the services portfolio. Maybe we're going to call them the "Chief Services Architect." I don't know. But, what I see in so many organizations is service oriented infrastructure being controlled by one group, doing a good job of putting in place the kinds of foundational elements that we need to be able to do service orientation.

What's missing is somebody with this portfolio, meaning holistic, enterprise-wide view of what services we need, what services we have, where we can go get other services -- basically the services portfolio. Enterprise architects are uniquely positioned to do that justice.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Sponsor: The Open Group.