Tuesday, November 3, 2009

You'll be far better off in a future without enterprise software

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

The role and future of enterprise software is a continuous undercurrent in the service-oriented architecture (SOA) conversation. Indeed, ZapThink has been talking about the future of enterprise software in one way or another for years.

So, why bother bringing up this topic again, at this juncture? Has anything changed in the marketplace? Can we learn something new about where enterprise software is heading? The answer is decidedly "yes" to the latter two questions. And this might be the right time to seriously consider acting on the very things we’ve been talking about for a while.

The first major factor is significant consolidation in the marketplace for enterprise software. While a decade or so ago there were a few dozen large and established providers of different sorts of enterprise software packages, there are now just a handful of large providers, with a smattering more for industry-specific niches.

We can thank aggressive M&A activity combined with downward IT spending pressure for this reality. As a result of this consolidation, many large enterprise software packages -- such as enterprise resource planning (ERP), customer relationship management (CRM), and supply chain management (SCM) offerings -- have been eliminated, are in the process of being phased out, or are getting merged (or “fused”) with other solutions.

Many companies rationalized spending millions of dollars on enterprise software applications because the costs could be amortized over a decade or more of usage, and they could claim that these applications would be cheaper, in the long run, than building and managing their existing custom code. But we've now had a long enough track record to see that mass consolidation, the need for continuous spending, and inflexibility are causing many companies to reconsider that rationalization.

Furthermore, by virtue of their weight, significance in the enterprise environment, and astounding complexity, enterprise software solutions are much slower to adopt and adapt to new technologies that continuously change the face of IT.

We refer to this as the “enterprise digital divide.” You get one IT user experience when you are at home using the Web, personal computing, and mobile devices and applications, and a profoundly worse experience when you are at work. It’s as if the applications you use at work are a full decade behind the innovations that are now commonplace in the consumer environment. We can thank expensive, cumbersome, and tightly coupled customization, integration, and development for this lack of innovation in enterprise software.

In addition, no company can purchase and implement an enterprise software solution “out of the box.” Not only does a company need to spend significant money customizing and integrating its enterprise software solutions, but it often spends significant amounts on custom applications that tie into and depend on that software.

What might seem to be discrete enterprise software applications are really tangled masses of single-vendor functionality, tightly-coupled customizations and integrations, and custom code tied into this motley mess. In fact, when we ask people to describe their enterprise architecture (EA), they often point to the gnarly mess of enterprise software they purchased, customized, and maintain. That’s not EA. That’s an ugly baby only a mother could love.

Yet, companies constantly share with us their complete dependence on a handful of applications for their daily operation. Imagine what would happen at any large business if you were to shut down their single-vendor ERP, CRM, or SCM solutions. Business would grind to a halt.

While some would insist on the necessity of single-vendor, commercial enterprise software solutions as a result, we would instead assert how remarkably insane it is for companies to have such a single point of failure. Dependence on a single product, single vendor for the entirety of a company’s operations is absolutely ludicrous in an IT environment where there’s no technological reason to have such dependencies. The more you depend on one thing for your success, the less you are able to control your future. Innovation itself hangs in the balance when a company becomes so dependent on another company’s ability to innovate. And given the relentless pace of innovation, we see huge warning signs.

Services, clouds, and mashups: Why buy enterprise software?

In previous ZapFlashes, we talked about how three things will change the way companies conceive of, build, and manage applications: the emergence of services at a range of disparate levels; location- and platform-independent, on-demand, and variable provisioning enabled by clouds; and rich technologies that facilitate simple and rapid service composition.

Instead of an application as something that’s bought, customized, and integrated, the application itself is the instantaneous snapshot of how the various services are composed together to meet user needs. From this perspective, enterprise software is not what you buy, but what you do with what you have.

One outcome of this perspective on enterprise software is that companies can shift their spending from enterprise software licenses and maintenance (which eats up a significant chunk of IT budgets) to service development, consumption, and composition.

This is not just a philosophical difference. This is a real difference. While it is certainly true that services expose existing capabilities, and therefore you still need those existing capabilities when you build services, moving to SOA means that you are rewarded for exposing functionality you already have.

Whereas traditional enterprise software applications penalize legacy because of the inherent cost of integrating with it, moving to SOA inherently rewards legacy because you don’t need to build twice what you already have. In this vein, if you already have what you need because you bought it from a vendor, keep it – but don’t spend more money on that same functionality. Rather, spend money exposing and consuming it to meet new needs. This is the purview of good enterprise architecture, not good enterprise software.
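
To make "expose what you already have" concrete, here is a minimal sketch of wrapping an existing capability in a service interface using only the Python standard library; the function and endpoint are hypothetical stand-ins for a real legacy system:

```python
# A minimal sketch of exposing an existing capability as a service,
# using only the Python standard library. legacy_credit_check() is a
# hypothetical stand-in for functionality you already own.
import json
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def legacy_credit_check(customer_id):
    # Imagine this delegating to a system you already bought and paid for.
    return {"customer": customer_id, "approved": True}

def app(environ, start_response):
    # Expose the legacy capability at /credit-check?customer=...
    params = parse_qs(environ.get("QUERY_STRING", ""))
    customer = params.get("customer", ["unknown"])[0]
    body = json.dumps(legacy_credit_check(customer)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```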

The resultant combination of legacy service exposure, third-party service consumption, and the cloud (x-as-a-service) has motivated the thinking that if you don’t already have a single-vendor enterprise software suite, you probably don’t need one.

We’ve had first-hand experience with new companies that have started and grown operations to multiple millions of dollars without spending a penny on enterprise software. Likewise, we’ve seen billion-dollar companies dump existing enterprise software investments or start divisions and operations in new countries without extending their existing enterprise software licenses. When you ask these people to show you their enterprise software, they’ll simply point at their collection of services, cloud-based applications, and composition infrastructure.

Some might insist that cloud-based applications and so-called software-as-a-service (SaaS) applications are simply monolithic enterprise software applications deployed using someone else’s infrastructure. While that might have been the case for the application service provider (ASP) and SaaS applications of the past, that is not the case anymore. Whole ecosystems of loosely-coupled service offerings have evolved in the past decade to add value to these environments, which now look more like catalogs of service capabilities and less like monolithic applications.

Want to build a website and capture lead data? No problem -- just get the right service from Salesforce.com or your provider of choice and compose it using web services or REST or your standards-based approach of choice. And you won’t incur thousands or millions of dollars in costs to do it.
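
As a rough illustration of that kind of composition -- with an invented endpoint and field names standing in for whatever your provider actually documents -- the client side can be this small:

```python
# Hypothetical sketch of composing a hosted lead-capture service over
# REST. The endpoint URL and field names are invented for illustration;
# use your provider's actual documented API.
import json
from urllib import request

def capture_lead(name, email):
    payload = json.dumps({"name": name, "email": email}).encode("utf-8")
    req = request.Request(
        "https://crm.example.com/api/leads",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# capture_lead("Ada Lovelace", "ada@example.com")
```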

Open source vs. commercial vs. build your own

Another trend pointing to the stalling of enterprise software growth is the emergence of open source alternatives. Companies now are flocking to solutions such as WebERP, SugarCRM Community Edition, and other no-license and no-maintenance fee solutions that provide 80% of the required functionality of commercial suites.

While some might point at the cost of support for these offerings, we point to the difference in magnitude between support costs and license/maintenance costs. At the very least, you know what you’re paying for. It’s hard to justify spending millions of dollars in license fees when you’re using 10% or less of a product’s capabilities.

Enhancing this open source value proposition is the fact that others are building capabilities on top of those solutions and giving those capabilities away as well. The very nature of open source enables the creation of capabilities that add further value to a product suite. At some point, a given open source solution reaches a tipping point where the volume of enhancements far outweighs what any commercial vendor can offer. Simply put, when a community supports an open source effort, the result can out-innovate any commercial solution.

Beyond open source, commercial, and SaaS-cum-cloud offerings, companies have a credible choice in building their own enterprise software application. There are now a lot of pieces and parts available that are free, cheap, or low cost that companies can assemble into not only workable, but scalable offerings that can compete with many commercial offerings. In much the same way that companies leveraged Microsoft’s Visual Basic to build applications using the thousands of free or cheap widgets and controls built by the legions of developers, so too are we seeing a movement to free or cheap Service widgets that can enable remarkably complex and robust applications.

The future of commercial enterprise software applications

It is not clear where commercial enterprise software applications go from here. Surely, we don’t see companies tearing out their entrenched solutions any time soon, but likewise, we don’t see much reason for expansion in enterprise software sales either.

In some ways, enterprise software has become every bit the legacy it sought to replace: the mainframe applications that still exist in abundance in the enterprise. Smart enterprise software vendors realize that they have to get out of the application business altogether and focus on selling composable service widgets. These firms, however, don’t want to innovate their way out of business. As such, they don’t want to just provide the trains to get you from place to place; they want to own the tracks as well.

In many ways, this idea of enterprise software-as-a-platform is really just a shell game. Instead of spending millions on a specific application, you’re instead spending millions on an infrastructure that comes with some pre-configured widgets. The question is: Is the proprietary runtime infrastructure you are getting with those widgets worth the cost? Have you lost some measure of loose coupling in exchange for a “single throat to choke?”

Much of the enterprise software market is heading on a direct collision course with middleware vendors who never wanted to enter the application market. As enterprise software vendors start seeing their runtime platform as the defensible position, they will increasingly conflict with EA strategies that seek to remove single-vendor dependence.

We see this as the area of greatest tension in the next few years. Do you want to be in control of your infrastructure and have choice, or do you want to resign your infrastructure to the control of a single vendor, who might be one merger or stumble away from non-existence or irrelevance?

The ZapThink take

We hope to use this ZapFlash to call out the ridiculousness of multi-million dollar “applications” that cost millions more to customize to do a fraction of what you need. In an era of continued financial pressure, the last thing companies should do is invest more in technology conceived of in the 1970s, matured in the 1990s, and incrementally made worse since then.

The reliance on single-vendor mammoth enterprise software packages is not helping, but rather hurting, the movement to loosely coupled, agile, composition-centric, heterogeneous SOA. Now is the time for companies to pull up stakes and reconsider their huge enterprise software investments in favor of the sort of real enterprise architecture that cares little about buying things en masse and customizing them -- and instead focuses on building, composing, and reusing what you need, iteratively, to respond to continuous change.

As if to prove a point, SAP stock recently slid almost 10% on missed earnings. Some may blame the overall state of the economy, but we point to the writing on the wall: All the enterprise software that could be sold has been sold, and the reasons for buying or implementing new licenses are few and far between. Invest in enterprise architecture over enterprise software, services over customizations, and clouds over costly and unpredictable infrastructure -- and you’ll be better off.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Friday, October 30, 2009

Business and technical cases build for data center consolidation and modernization

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Akamai Technologies.

Data-center consolidation and modernization of IT systems helps enterprises reduce cost, cut labor, slash energy use, and become more agile.

Infrastructure advancements, standardization, performance density, and network services efficiencies are all allowing for bigger and fewer data centers and strategically architected and located facilities that can efficiently carry more of the total IT requirements load.

But to gain the benefits of these large and strategic infrastructure undertakings, the impact on the network beyond the firewall has to be considered. User expectations for performance and IT requirements for reliability need to be maintained, and even improved.

Fewer data centers mean longer distances between servers and users. Network services and Internet performance management therefore need to be considered to produce the desired effect of topnotch applications and data delivery to enterprises, consumers, partners, and employees at far lower cost.

Here to help us better understand how to get the best of all worlds -- that is, high performance and lower total cost from data center consolidation -- we're joined by James Staten, Principal Analyst at Forrester Research; Andy Rubinson, Senior Product Marketing Manager at Akamai Technologies; and Tom Winston, Vice President of Global Technical Operations at Phase Forward, a provider of integrated data management solutions for clinical trials and drug safety. The panel is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Staten: Oftentimes, the biggest reason to do [consolidation] is that you have sprawl in the data center. You're running out of power, you're running out of the ability to cool any more equipment, and you're running out of the ability to add new servers as your business demands them.

If there are new applications the business wants to roll out, and you can't bring them to market, that's a significant problem. This is something the organizations have been facing for quite some time.

As a result, if they can start consolidating, they can start moving some of these workloads onto fewer systems. This allows them to reduce the amount of equipment they have to manage and the number of software licenses they have to maintain and lower their support costs. In the data center overall, they can lower their energy costs, while reducing some of the cooling required.

... Most applications actually end up consuming on average only 15-20 percent of the server. If that's the case, you've got an awful lot of headroom to put other applications on there.

We were isolating applications on their own physical systems, so that they would be protected from any faults or problems with other applications that might be on the same system and take them down. Virtualization is the primary isolating technology that allows us to do that.

... More and more applications are being broken down into modules, and, much like the web services and web applications that we see today, they're broken into tiers. Individual logic runs on its own engine, and all of that can be spread across more commoditized, consistent infrastructure. We are learning these lessons from the dot-coms of the world and now the cloud-computing providers of the world, and applying them to the enterprise.

... On average, across all the enterprises we have spoken to, you can realistically expect to see about a 20 percent cost reduction from doing this. But, as you said, if you've got 5,000 servers, and they're all running at 5 percent utilization, there are big gains to be had.
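
Staten's numbers lend themselves to back-of-the-envelope math. Here is a quick sketch; the 60 percent target utilization is an assumed figure, not one from the discussion:

```python
# Back-of-the-envelope consolidation math for the 5,000-server example.
# The 60 percent post-consolidation utilization target is assumed.
import math

servers = 5000
current_util = 0.05   # 5 percent average utilization
target_util = 0.60    # assumed safe consolidated target

# Total useful work stays constant, so the required server count
# scales with the ratio of utilizations.
needed = math.ceil(servers * current_util / target_util)
print(f"Servers needed after consolidation: ~{needed}")      # ~417
print(f"Servers eliminated: ~{servers - needed}")            # ~4583
```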

Rubinson: I focus mainly on delivery over the Internet. There are definitely some challenges, if you're talking about using the Internet with your data center infrastructure -- things like performance latency, availability challenges from cable cuts, and things of that nature, as well as security threats on the Internet.

It's thinking about how you can do this -- how you can deliver to a global user base from your data center, without necessarily having to build out data centers internationally, and do it all from a consolidated standpoint.

... From the cost perspective, we're able to eliminate unnecessary hardware. We're able to take some of that load off of the servers and do the work in the cloud, which also helps reduce the number of servers needed.

... In terms of responsiveness, by using the Internet, you can deploy a lot more quickly. It allows us to give that same type of performance, availability, and security that you would get from having a private WAN, but doing it over the much less expensive Internet.

This is really important, as we have seen more and more users that are going outside of the corporate [networks]. People are connecting to suppliers, to partners, to customers, and to all sorts of things now.

... By optimizing the cloud, we're able to speed the delivery of information from the origin as well. That's where it benefits folks like Tom: he is able not only to cache information, but the dynamic information that needs to come back from the data center also arrives more quickly.

Winston: When I joined [Phase Forward], it had two different data centers -- one on the East Coast and one on the West Coast. We were facing the challenge of potentially having to expand into a European data center, and even potentially a Pacific Rim data center.

By continuing to expand our virtualization efforts, as well as to leverage some of the technologies that Andy just mentioned ... Internet acceleration via some of the Akamai technologies, we were able to forgo that data center expansion. In fact, we were able to consolidate our data center to one East Coast data center, which is now our primary hosting center for all of our applications.

So it had a very significant impact for us by being able to leverage both that WAN acceleration, as well as virtualization, within our own four walls of the data center.

We run electronic data capture (EDC) software, and pharmacovigilance software for the largest pharmaceutical and clinical device makers in the world. They are truly global organizations in nature. So, we have users throughout the world, with more and more heavy population coming out of the Asia Pacific area.

... We have a very large, diverse user base that is accessing our applications 24x7x365, and, as a result, we have performance needs all the time for all of our users.

... Our primary application, our flagship application, is a product called InForm, which is the main EDC product that our customers use across the Internet. It's accelerated using Akamai technology, and almost 100 percent of our content is dynamic. It has worked extremely well.

Staten: ... Users are all over the place. Whether they are an internal employee, a customer, or a business partner, they need to get access to those applications, and they have a performance expectation that's been set by the Internet. They expect whatever applications they are interacting with will have that sort of local feel.

That's what you have to be careful about in your planning of consolidation. You can consolidate branch offices. You can consolidate down to fewer data centers. In doing so, you gain a lot of operational efficiencies, but you can potentially sacrifice performance.

You have to take the lessons that have been learned by the people who set the performance bar, the providers of Internet-based services, and ask, "How can I optimize the WAN? How can I push out content? How can I leverage solutions and networks that have this kind of intelligence to allow me to deliver that same performance level?" That's really the key thing that you have to keep in mind. Consolidation is great, but it can't be at the sacrifice of the user experience.

... The right location [for data centers] has to be optimized for a variety of factors. It has to be optimized for where the appropriate skill sets are. It has to be optimized for the geographic constraints that you may be under.

You may be doing business in a country in which all of the citizen information of the people who live in that country must reside in that country. If that's the case, you don't necessarily have to own a data center there, but you absolutely have to have a presence there.

Winston: ... We had users in China who, due to the amount of traffic that had to traverse the globe, were not happy with the performance of the application. Specifically, we brought in Akamai to start with a very targeted group of users and accelerate the application for them in that region.

It literally cut the problem right out. It solved it almost immediately. At that point, we then began to spread the rest of that application acceleration product across the rest of our domains, and to continue to use that throughout the product set.

Rubinson: ... We recently commissioned a study with Forrester, looking at what is that tolerance threshold [for a page to load]. In the past it had been that people had tolerance for about four seconds. As of this latest study, it's down to two seconds. That's for business to consumer (B2C) users. What we have seen is that the business-to-business (B2B) users are even more intolerant of waiting for things.
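
That two-second threshold is straightforward to test against. Here is a rough sketch that times a page fetch; note it measures time-to-last-byte only, which understates what users actually experience in a browser:

```python
# Rough sketch: time a page fetch against the two-second tolerance
# threshold cited above. This measures time-to-last-byte only, not
# full browser render time, so it understates the user experience.
import time
from urllib import request

THRESHOLD_SECONDS = 2.0

def check_page(url):
    start = time.monotonic()
    with request.urlopen(url) as resp:
        resp.read()
    elapsed = time.monotonic() - start
    verdict = "OK" if elapsed <= THRESHOLD_SECONDS else "TOO SLOW"
    print(f"{url}: {elapsed:.2f}s ({verdict})")

# check_page("https://www.example.com/")
```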

It really has gotten to a point where you need that immediate delivery in order to drive the usage of the tools that are out there.

... Just putting yourself in the cloud doesn't mean that you're not going to have the same type of latency issues, delivering over the Internet. It's the same thing with availability in trying to reach folks who are far away from that hosted data center. So, the cloud isn't necessarily the answer. It's not a pill that you can take to fix that issue.

... For Akamai, it's really about how we're able to accelerate -- how we are able to optimize the routing and the other protocols on the Internet to move content from wherever it's hosted to a global set of end users.

We don't care about where they are. They don't have to be on the corporate, private WANs. It's really about that global reach and giving the levels of performance to actually provide an SLA. Tell me who else out there provides an SLA for delivery over the Internet? Akamai does.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Akamai Technologies.

Thursday, October 29, 2009

Separating core from context brings high returns in legacy application transformation

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


This podcast is the second in a series of three to examine Application Transformation: Getting to the Bottom Line. Through panel discussions we examine the rationale and likely returns of assessing the true role and character of legacy applications, and then further determine the paybacks from modernization.

To gain the most return on modernization projects, many enterprises are separating core from context when it comes to legacy enterprise applications and their modernization processes. As enterprises seek to cut their total IT costs, they need to identify what legacy assets are working for them and carrying their own weight, and which ones are merely hitching a high cost -- but largely unnecessary -- ride.

A widening cost and productivity divide exists between older, hand-coded software assets and replacement technologies on newer, more efficient standards-based systems. Somewhere in the mix, there are also core legacy assets distinct from so-called contextual assets. There are peripheral legacy processes and tools that are costly vestiges of bygone architectures. There is legacy wheat and legacy chaff.

With us to delve deeper into the high rewards of transforming legacy enterprise applications is Steve Woods, distinguished software engineer at HP, and Paul Evans, worldwide marketing lead on Applications Transformation at HP. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: This podcast is about two types of IT assets: core and context. That whole approach to classifying business processes and their associated applications was invented by Geoffrey Moore, who wrote Crossing the Chasm, Inside the Tornado, etc.

In Dealing with Darwin: How Great Companies Innovate at Every Phase of their Evolution, he came up with this notion of core and context applications. Core applications are those that provide the true innovation and differentiation for an organization. Those are the ones that keep your customers. Those are the ones that improve the service levels. Those are the ones that generate your money. They are really important, which is why they're called "core."

When these applications were invented to provide the core capabilities, it was 5, 10, 15, or 20 years ago. What we have to understand is that what was core 10 years ago may not be core anymore. There are ways of effectively doing it at a much different price point.

As Moore points out, organizations should be looking to build "core," because that is the unique intellectual property of the organization, and to then buy "context." They need to understand, how do I get the lowest-cost provision of something that doesn't make a huge difference to my product or service, but I need it anyway.

The "context" applications are not less important, but ... you should be looking to understand how that could be done in terms of lower-cost provisioning [of them].

Woods: [A lot of the interest in separating core and context in legacy IT applications] has to do with the pain users are going through. We have had customers who had assessments with us before, as much as a year ago, and now they're coming back and saying they want to get started and actually do something. So, a good deal of the interest is caused by the need to drive down costs.

Also, there's the realization that a lot of these tools -- extract, transform, and load (ETL) tools, enterprise application integration (EAI) tools, reporting, and business process management (BPM) -- have proven themselves now. We can't say anymore that there is a risk in going to these tools. Customers realize that the strength of these tools is that they bring a lot of agility, solve skill-set issues, and make you much more responsive to the business needs of the organization.

... What I created at HP is a tool, an algorithm, that can go into any language legacy code and find the duplicate code, and not only find it, but visualize it in very compelling ways. That helps us drill down to identify what I call the unintended design. When we find these unintended designs, they lead us to ask very critical questions that are paramount to understanding how to design the transformation strategy.
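
The article doesn't publish HP's algorithm, but the general clone-detection technique -- hashing normalized windows of source lines and flagging collisions -- can be sketched briefly; the window size here is an assumed tuning knob:

```python
# Language-agnostic sketch of duplicate-code detection: normalize each
# line, hash sliding windows of WINDOW lines, and report any window
# that appears more than once. This illustrates the general
# clone-detection idea, not HP's proprietary algorithm.
import hashlib
from collections import defaultdict

WINDOW = 6  # minimum clone size in lines; an assumed tuning knob

def normalize(line):
    # Collapse whitespace and case so formatting differences
    # don't hide duplicated logic.
    return " ".join(line.split()).lower()

def find_clones(lines):
    norm = [normalize(l) for l in lines]
    seen = defaultdict(list)
    for i in range(len(norm) - WINDOW + 1):
        digest = hashlib.sha1(
            "\n".join(norm[i:i + WINDOW]).encode("utf-8")).hexdigest()
        seen[digest].append(i + 1)  # record 1-based start line
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

# clones = find_clones(open("legacy.cbl").read().splitlines())
```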

... When you identify the IT elements that are not core and that could be moved out of handwritten code, you're transferring power from the developers -- say, of COBOL -- to the users of the more modern tools, like the BPM tools.

So there is always a political issue. What we try to do, when we present our findings, is to be very objective. You can't argue with a finding that 65 percent of the application is not doing core work. You can then focus the conversation on something more productive: What do we do with this? The worst thing you could possibly do is take a million lines of COBOL that's generating reports and rewrite it as hand-written Java or C# code.

We take the concept of core versus context not just to the level of a possible off-the-shelf application, but to the architectural component level. In many cases, we find that this helps them identify legacy code that could be moved very incrementally to these new architectures.

... A typical COBOL application -- this is true of all legacy code, but particularly mainframe legacy code -- can be as much as 5, 10, or 15 million lines of code. I think the sheer idea of the size of the application is an impediment. There is some sort of inertia there. An object at rest tends to stay at rest, and it's been at rest for years, sometimes 30 years.

So, the biggest impediment is the belief that it's just too big and complex to move, and even too big and complex to understand. Our approach is a very lightweight process, where we go in and answer a lot of questions, remove a lot of uncertainty, and give them some very powerful visualizations and understanding of the source code and what their options are.

... When you go to the legacy side of the house, you start finding that 65 percent of this application is just doing ETL. It's just parsing files and putting them into databases. Why don't you replace that with a tool? The big resistance there is that, if we replace it with a tool, then the people who are maintaining the application right now are either going to have to learn that tool or they're not going to have a job.
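
The workload Woods describes -- parsing files and loading them into databases -- is the essence of what an off-the-shelf ETL tool replaces. A sketch of that essence, with an illustrative file name and schema:

```python
# The essence of the hand-written ETL described above: parse a flat
# file and load it into a database. The file name and schema are
# illustrative only.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT, amount REAL)")

with open("orders.csv", newline="") as f:
    rows = [(r["id"], float(r["amount"])) for r in csv.DictReader(f)]

conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
conn.commit()
conn.close()
```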

If we get the facts on the table, particularly visually, then we find that we get a lot of consensus. It may be partial consensus, but it's consensus nonetheless, and we open up the possibilities and different options, rather than just continuing to move through with hand-written code.

Evans: If you look at this whole core-context thing, at the moment, organizations are still in survival mode. Money is still tight in terms of consumer spending. Money is still tight in terms of company spending. Therefore, you're in this position where keeping your customers or trying to get new customers is absolutely fundamental for staying alive. And, you do that by improving service levels, improving your services, and improving your product.

... The line-of-business people are now pushing on technology and saying, "You can't back off. You can't not give us what we want. We have to have this ability to innovate and differentiate, because that way we will keep our customers and we will keep this organization alive."

That applies equally to the public and private sectors. The public sector organizations have this mandate of improving service, whether it's in healthcare, insurance, tax, or whatever. So all of these commitments are being made and people have to deliver on them, albeit that the money, the IT budget behind it, is shrinking or has shrunk.

The leaders must understand what drives their company. Understand the values, the differentiation, and the innovations that you want and put your money on those and then find a way of dramatically reducing the amount of money you spend on the contextual stuff, which is pure productivity.

Woods: ... Decentralizing the architecture improves your efficiency and your redundancy. There is much more opportunity for building a solid, maintainable architecture than there would be if you kept a sort of monolithic approach that's typical on the mainframe.

... The problem is sometimes not nearly as big as it seems. Between the clone code that we find and all the different areas where we can look at the code and say it may not be as relevant to the transformation process as you think it is, the real scope shrinks.

I do this presentation called "Honey, I Shrunk the Mainframe." If you start looking at these different aspects -- the clone code, and what I call the asymmetrical transformation from handwritten code to model-driven architecture -- you start really seeing it.

We see this, when we go in to do the workshops. The subject matter experts and the stakeholders very slowly start to understand that this is actually possible. It's not as big as we thought. There are ways to transform it that we didn't realize, and we can do this incrementally. We don't have to do it all at once.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.

Monday, October 26, 2009

Linthicum's latest book: How SOA and cloud intersect for enterprise productivity benefits

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download a transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Take the BriefingsDirect middleware/ESB survey now.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 45. This periodic discussion and dissection of IT infrastructure-related news and events with industry analysts and guests looks at a new book on cloud computing, a step-by-step guide to figuring out the right path to combined cloud and SOA benefits.

Dave Linthicum's new book, Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide, has just arrived and digs into the conflation of SOA and cloud computing. Our discussion with Linthicum on his findings is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Linthicum: SOA is the way to do cloud. I saw early on that SOA, if you get beyond the hype that's been around for the last two years, is really an architectural pattern that predates the SOA buzzword, or the SOA TLA.

It's really about breaking down your architecture into a primitive state of several components, including services, data, and processes. Then, it's figuring out how to assemble those in such a way that you can not only solve your existing problems, but use those components to resolve new problems as your business changes over time or your mission changes or expands.

Cloud computing is a nice enhancement to that. Cloud doesn't replace SOA, as some people say. Cloud computing is basically architectural options or ways in which you can host your services, in this case, in the cloud.

As we go through reinventing your architecture around the concept of SOA, we can figure out which components, services, processes, or data are good candidates for cloud computing, and we can look at the performance, security and governance aspects of it.

Architectural advantages

We find that some of our services can exist out on the platform in the cloud, which provides us with some additional architectural advantages, such as self-provisioning, the ability to get on the cloud very quickly without buying hardware and software or expanding our data centers, and the ability to expand rapidly, basically on demand.

If we need to go from 10 users to 1,000 users, we can do so in a matter of weeks, without having to buy data-center space, waves and waves of servers, software, hardware licenses, and all those sorts of things. Cloud computing provides you with some flexibility, but it doesn't get away from the core need for architecture. So, really, the book is about how to use SOA in the context of cloud computing, and that's the message I'm really trying to get across.

... As we move toward cloud computing, there are more economical and cost-effective architectural options. There is also the ability to play around with SOA in the cloud, which I think is driving a lot of the SOA. In fact, I find that a lot of people build their first initial SOA as cloud-delivered systems, be it Amazon, IBM, Azure from Microsoft, and some of the other platforms that are out there.

Then, once they figure out the benefits of that, they start putting pieces of it on-premise, as it makes sense, and pieces of it in the cloud. It tends to drive prototyping on the cheap, letting people leverage architecture and play around with different technologies without the investment we had to make in the past.

... We've got to stop the insanity. We've got to control IT spending. We've got to be much more effective and efficient in the way we spend and leverage IT resources. Cloud computing is only a mechanism; it's not a savior for doing that. We need to start marching in new directions and being aggressively innovative around the efficiency, the expandability, and ultimately the agility of IT.

... When you're doing SOA and considering SOA within your enterprise or agency, you should always consider cloud as an architectural option. In other words, there are servers we're looking to deploy, middleware we're looking to leverage, and databases we're looking to leverage in terms of SOA -- plus governance systems, security systems, and identity management.

Cloud computing is really another set of things that you need to consider in the context of SOA, and you need to start playing around with the stuff now, because it's so cheap. There's no reason that anybody who's working on an SOA shouldn't be playing around with cloud, given the amount of investment that's needed. It's almost nothing, especially with some of the initial forays, some of the prototypes, and some of the pilot projects that need to be done around cloud.

... Software as a service (SaaS) is probably the easiest way to get into the cloud. It also has the most potential to save you the greatest amount of money. Instead of buying a million-dollar, or a two-million-dollar, customer relationship management (CRM) system, you can leverage Salesforce.com for $50-60 a month.

After that, I would progress into infrastructure as a service (IaaS), and that's basically data center on demand. So, it's databases, application servers, WebSphere, and all those sorts of things that you are able to leverage from the data center -- but, instead of a data center, you leverage them from the cloud.

Guys like Amazon obviously are in that game. Microsoft, with the Azure platform, is in that game. Any number of players out there are going to be able to provide you with core infrastructure or primitive infrastructure. In other words, it's just available to you over the 'Net with some kind of metering system. I would start playing around with that technology after you get through with SaaS.

Then, I would take a look at the platform-as-a-service (PaaS) technology, if you are doing any kind of application development. That's very cool stuff. Those are guys like Force, Google App Engine, and Bungee Labs. They provide you with a complete application development and deployment platform as a service. Then, I would progress into the more detailed stuff -- database, storage, and some of the other more sophisticated services on top of the primitive services that we just mentioned.
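
For a sense of how little code those platforms demanded, here is roughly what a complete application looked like with the Google App Engine Python SDK of that era; the deployment configuration (app.yaml) is omitted:

```python
# A minimal Google App Engine application, sketched in the style of
# the Python SDK of that era. A real deployment also needs an
# app.yaml configuration file, omitted here.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from the cloud')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
```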

... PaaS with that Google App Engine is driving a lot of innovation right now. People are building applications out there, because they don't have to bother existing IT to get servers and databases brought online, and that will spur innovation.

So, today, we could figure out we want to go off and build this great application and do this great thing to automate a business and, instead of having to buy infrastructure and buy a server and set it up and use it, we could go get Google App Engine accounts or Azure accounts.

Huge potential

Then, we can start building, deploying, defining the database, do the testing, get it up and running, and have it immediately. It's web based and accessible to millions of users who are able to leverage the application in a scalable way. It's an amazing kind of infrastructure when you think about it. The potential is there to build huge, innovative things with very few resources.

... Ten years ago, it was very difficult to do a startup. You'd need a million dollars in investment funds just to get your infrastructure up and running. Now, startups can basically operate with a minimal amount of resources, typically a laptop, pointing at any number of cloud resources.

They can build their applications out there. They can build their intellectual capital. They can build their software. They can deploy it. They can test it. Then, they can provision the customers out there and meter their customers. So, it's a great time to be in this business.

... There needs to be a lot of education about the opportunities and the advantages of using cloud computing, as well as what the limitations are and what things we have to watch out for. Not all applications and all pieces of data are going to be right for the cloud. However, we need to educate people in terms of what the opportunities are.

The fact of the matter is that it's not going to be a dysfunctional and risky thing to move pieces of our architecture out into cloud computing. Get them to run a pilot. Get them to go out there and try it. Get them to experiment with the technology. Figure out what the capabilities are, and that will ultimately change the culture.

... We're going to get to a point where the data is going to be a ubiquitous thing. It doesn't really matter where it resides and where we can access it, as long as we access it from a particular model. It's not going to make any difference to the users either. I just blogged about that in InfoWorld.

In fact, we're getting into this notion of what I call the "invisible cloud." In other words, we're not doing application as a service or SaaS, where people get new interfaces that are web-driven. We're putting pieces of the back-end architectural components -- processes, services, and, in this case, data -- out on the platform of the cloud. It really doesn't matter to them where that data resides, as long as they can get at it when they need it.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download a transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Take the BriefingsDirect middleware/ESB survey now.

Sunday, October 25, 2009

Application transformation case study targets enterprise bottom line with eye-popping ROI

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


This podcast is the first in a series of three to examine Application Transformation: Getting to the Bottom Line. Through a case study, we'll discuss the rationale and likely returns of assessing the true role and character of legacy applications, and then assess the paybacks from modernization.

The ongoing impact of the reset economy is putting more emphasis on lean IT -- of identifying and eliminating waste across the data-center landscape. The top candidates, on several levels, are the silo-architected legacy applications and the aging IT systems that support them.

Using our case study, we'll also uncover a number of proven strategies on how to innovatively architect legacy applications for transformation and for improved technical, economic, and productivity outcomes. The podcasts coincidentally run in support of the HP virtual conferences on the same subjects listed above.
Here to start us off on our series on the how and why of transforming legacy enterprise applications are Paul Evans, worldwide marketing lead on Applications Transformation at HP, and Luc Vogeleer, CTO for Application Modernization Practice in HP Enterprise Services. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: When the economic situation hit really hard, we definitely saw customers retreat, and basically say, "We don't know what to do now. Some of us have never been in this position before in a recessionary environment, seeing IT budgets reduce considerably."

That wasn't surprising. ... It was obvious that people would retrench and then scratch their heads and say, "Now what do we do?"

Now we're seeing a different dynamic, ... something like a two-fold increase in what you might call "customer interest" [in applications transformation]. The number of opportunities we're seeing as a company has doubled over the last six or nine months.

If you ask any CIO or IT head, "Is application transformation something you want to do," the answer is, "No, not really." It's like tidying your garage at home. You know you should do it, but you don't really want to do it. You know that you benefit, but you still don't want to do it.

This has moved from being something that maybe I should do to something that I have to do, because there are two real forces here. One is the force that says, "If I don't continue to innovate and differentiate, I go out of business, because my competitors are doing that." If I believe the economy doesn't allow me to stand still, then I've got it wrong. So, I have to continue to move forward.

Secondly, I have to reduce the amount of money I spend on my innovation, but at the same time I need a bigger payback. I've got to reduce the cost of IT. Now, with 80 percent of my budget being dedicated to maintenance, that doesn't move my business forward. So, the strategic goal is, I want to flip the ratio.

... Today, we'll hear about a case study -- with the Italian Ministry of Instruction, University and Research (MIUR). This customer received an ROI in 18 months: the savings they made -- and this runs into millions of dollars -- covered the cost of the project. Their new system, in under 18 months, paid for itself. After that, it was pure money to the bottom line.

... Our job is to minimize that risk by exposing them to customers who have done it before. They can view those best-case scenarios and understand what to do and what not to do.

Vogeleer: We take a very holistic approach and look at the entire portfolio of applications from a customer. Then, from that application portfolio -- depending on the usage of the application, the business criticality of the application, as well as the frequency of changes that this application requires -- we deploy different strategies for each application.

We don't focus on just one approach -- completely rewriting the application, re-platforming it, or replacing it with a package -- but go for a combination of all those elements. By doing a complete portfolio assessment, as a first step into the customer's legacy application landscape, we're able to bring out a complete road map for conducting this transformation.

We first tackle applications that bring a quick ROI, and the benefits from those quick wins are immediately reinvested to continue the transformation. So, transformation is not just one project. It's not just one shot. It's a continuous program over time, where all the legacy applications are progressively migrated onto a more agile and cost-effective platform.

The Italian Ministry of Instruction, University and Research (MIUR), the customer we're going to cover in this case, is a large governmental organization with an overall budget of €55 billion.

The Italian public education sector serves 8 million students in 40,000 schools, and the schools are located across the country in more than 10,000 locations, with each of those locations connected to the information system provided by the ministry.

Very large employer

The ministry is, in fact, one of the largest employers in the world, with over one million employees. Its system manages both permanent and temporary employees, like teachers and substitutes, and the administrative employees. It also supports the ministry users, about 7,000 or 8,000 school employees. It's a very large employer with a large number of users connected across the country.

Why did they need to modernize their environment? Their system was written in the early 1980s on IBM mainframe architecture. In the early 2000s, there was a substantial change in Italian legislation, the so-called Devolution Law. The Devolution Law decentralized processes to the school level and moved administrative processes from the central ministry into the regions -- there are 20 different regions in Italy.

This change implied a completely different process workflow within their information systems. The legacy approach to fulfilling the changes was very time-consuming and inappropriate. A number of applications were developed incrementally to fulfill those new organizational requirements, but very quickly this became completely unmanageable and inflexible. The aging legacy systems needed to change quickly.

In addition to the agility needed to change applications to meet the new legislative requirements, costs in that context went completely out of control. So, the single most important objective of the modernization was to design and implement a new architecture that could reduce cost and provide a more flexible and agile infrastructure.

The first step we took was to develop a modernization road map that took into account the organizational change requirements, using our service offering, which is the application portfolio assessment.

Through this standard engagement, we did an analysis of the complete set of applications and associated data assets from multiple perspectives. We looked at each application from a financial perspective, a business perspective, a functional perspective, and a technical perspective.

From those different dimensions, we could make the right decision on each application. The application portfolio assessment ensured that the client's business context and strategic drivers were understood, before commencing a modernization strategy for a given application in the portfolio.

A business case was developed for modernizing each application, an approach that was personalized for each group of applications and was appropriate to the current situation.

... This assessment phase took about three months with seven people. From there, we did a first transformation pilot with a small team over three months.

After the pilot, we went into the complete transformation and user-acceptance testing, and after an additional year, 90 percent of the transformation was completed. That transformation covered about 3,500 batch processes, the re-architecting of 7,500 programs, and the conversion of all the screens. It was a larger effort, with a team of about 50 people over one year.

... We tried to use automated conversion, especially for non-critical programs that are not frequently changed. Those represented 60 percent of the code. This code could then be immediately transferred by removing only the barriers in the code that prevented it from compiling.
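
As a toy sketch of what such an automated conversion pass can look like -- scanning source and rewriting the constructs that block compilation on the target -- consider the following; the patterns are invented for illustration and do not reflect HP's actual tooling:

```python
# Toy sketch of an automated conversion pass: scan legacy source and
# rewrite constructs that block compilation on the target platform.
# The patterns below are invented for illustration only.
import re

BARRIER_RULES = [
    # (pattern matching a blocking construct, replacement line)
    (re.compile(r"^\s*EXEC\s+CICS\b"), "      * REMOVED: CICS call"),
    (re.compile(r"^\s*GOBACK\b"), "           STOP RUN."),
]

def convert(lines):
    out = []
    for line in lines:
        for pattern, replacement in BARRIER_RULES:
            if pattern.match(line):
                line = replacement
                break
        out.append(line)
    return out

# converted = convert(open("legacy.cbl").read().splitlines())
```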

All barriers removed

We also had frequently updated programs, where all barriers were removed and the code was completely cleaned in the conversion. Then there were the critical programs, where the conversion effort was bigger than the rewrite effort: 30 percent of the programs were completely rewritten.

The applications are now accessed through a more efficient web-based user interface, which replaces the green screen and provides improved navigation and better overall system performance, including improved user productivity.

End-user productivity has doubled in the daily operation of some business processes. Also, the overall application portfolio has been greatly simplified by this approach. The number of function points being managed has decreased by 33 percent.

From a financial perspective, there are also very significant results. Hardware and software license and maintenance cost savings were about €400,000 in the first year, €2 million in the second year, and are projected to be €3.4 million this year. This represents a savings of 36 percent of the overall project.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.

Wednesday, October 21, 2009

Global study: Hybrid model rules as cloud heats up, SaaS adoption blazing

“Cloud” is the game and “hybrid” is the name. A recent global study has encouraging news for cloud-computing enthusiasts, revealing a sharp uptick in both the adoption and consideration of cloud computing. The same study also indicates that those who are adopting cloud aren’t going whole hog, but are taking a hybrid approach -- mixing external and internal clouds.

The study, commissioned by global IT consultancy Avanade, showed a surprising increase in interest in cloud computing, even compared with a similar study conducted in January of this year. In January, 54 percent of respondents said they had no plans to adopt cloud computing. By September, that percentage had shrunk to 37 percent.

At the same time, the percentage of companies planning or testing cloud computing more than tripled, from 3 percent of respondents to 10 percent.

What’s significant in the report is that fewer than 5 percent of companies are using an all-cloud model. The rest are relying on a hybrid approach, and cite security concerns as the chief reason for caution.

Nine months ago, 61 percent of respondents indicated that they were using only internal IT systems and today, that number has dropped to 41 percent. At the same time, those using a combined approach on a global level have increased to 54 percent from 33 percent nine months earlier.

The report says it is not clear whether the hybrid model will lead to pure-play adoption at some point.

SaaS is taking off

One aspect of cloud computing that’s finding wide adoption is software as a service (SaaS), with more than half of the respondents worldwide -- and 68 percent in the US -- reporting that they have adopted SaaS at some level. Despite extremely high satisfaction -- more than 90 percent -- reliability is still an issue. About 30 percent of respondents said they had lost more than a day of business due to a service outage.

Still, the reliability concerns haven’t dampened users’ enthusiasm for SaaS, and 62 percent of respondents reported plans to move to more SaaS within the next year. However, similar to their experience with cloud, users tend to deliver SaaS applications internally rather than from a third-party provider.

On a global basis, those who deliver SaaS applications internally outnumber those who use a third party by a ratio of 2 to 1. In the US, the ratio increases to 4 to 1. Also, those who do use SaaS often rely on multiple providers, with one-third using three or more. This leads the report to conclude that there is opportunity in the SaaS market.

Other conclusions from the report:
  • Cloud will continue to make significant inroads for the next year, although there won’t be a migration to a full cloud environment.

  • The gap is closing between companies with plans to adopt and those without. Avanade sees those curves intersecting in 2011 or 2012.

  • Despite the widespread adoption of cloud, there will be some applications that should remain on-premises.

  • SaaS adoption will continue to spread and is spreading faster than other technologies have in the past.
The study was conducted by Kelton Research and surveyed 500 C-level and IT executives worldwide.

BriefingsDirect contributor Carlton Vogt provided editorial assistance and research on this post.

Here's why Apple is doing so well -- it's the top half, stupid

I've been ruminating the past few days on why Apple is doing so well with its pricey, high-end products and services during a recession. The answer came as I was reading today's New York Times column by Thomas Friedman, whom I deeply admire; I read anything and everything he puts out.

Friedman points out that the winners in today's fast-shifting U.S. job market are the ones demonstrating "entrepreneurship, innovation and creativity." He says, "They are the new untouchables," in contrast to other still highly educated but less creative types.

Friedman cites Harvard University labor expert Lawrence Katz, who explains in the column that the now disadvantaged are "those engineers and programmers working on more routine tasks and not actively engaged in developing new ideas or recombining existing technologies or thinking about what new customers want. ... They’ve been much more exposed to global competitors that make them easily substitutable.”

They are also more likely to be using personal computers with nine-year-old operating systems, with little choice but to take what their companies provide in terms of personal productivity IT. They are the 90 percent for whom "good enough" IT has made them interchangeable with anyone anywhere.

In contrast, it's the "top half" of the labor pool -- and more specifically the 10 percent among them focused on "entrepreneurship, innovation and creativity" -- who know that to succeed and win they need the very best computers and associated services, even if they cost $500 more. Nowadays there's no better way to gain an advantage in business and life than to have the best technology.

The people who are succeeding are buying Macs, iPhones, iPod Touches, and Apple's services and applications. A flight to quality is usually spurred by disruption and uncertainty. It's not about brand religion or pretty graphics. It's about survival and success when the going gets tough. It works for me; it has to.

A chef doesn't buy the cheapest knives. A painter doesn't buy the cheapest brushes. A carpenter doesn't buy the cheapest hammer. And all the winners in today's economy -- those who have a say in what they use to do all the digital things so critical now to almost any knowledge- and services-based job -- need the best tools. And they will upgrade those tools just as fast as they can (hence the rapid adoption of Apple's Snow Leopard OS X upgrade in recent months).

So for all those millions of newly laid-off workers who know that "entrepreneurship, innovation and creativity" is their only ticket to a fresh start -- those who no longer have an IT department to tell them what to do (at lowest cost) -- many seem to be making the move to a Mac. I expect they won't soon go back, once they taste the fruits of heightened knowledge productivity.

Because when failure is not an option, you have to have the best tools, especially when the going gets tough. The sad part is that Apple does so well while so many others are struggling.

Tuesday, October 20, 2009

SOA user survey defines latest ESB trends, middleware use patterns

Take the BriefingsDirect middleware/ESB survey now.

Forgive my harping on this, but I keep hearing about how powerful social media is for gathering insights from IT communities and users. Yet I rarely see actual market research conducted via the social media milieu.

So now's the time to fully test the process. I'm hoping that you users and specifiers of enterprise software middleware, SOA infrastructure, integration middleware, and enterprise service buses (ESBs) will take 5 minutes and fill out my BriefingsDirect survey. We'll share the results via this blog in a few weeks.

We're seeking to uncover the latest trends in actual usage and perceptions around these SOA technologies -- both open source and commercial.

How middleware products -- like ESBs -- are used is not supposed to change rapidly. Enterprises typically choose and deploy integration software infrastructure slowly and deliberately, and they don't often change course without good reason.

But the last few years have proven an exception. Middleware products and brands have shifted more rapidly than ever before. Vendors have consolidated, product lines have merged. Users have had to grapple with new and dynamic requirements.

Open source offerings have swiftly matured and, in many cases, have advanced capabilities beyond those of the commercial space. Interest in SOA is now shared with anticipation of cloud computing approaches and needs.

So how do enterprise IT leaders and planners view the middleware and SOA landscape after a period of adjustment -- including the roughest global recession in more than 60 years?

This brief survey, distributed by BriefingsDirect for Interarbor Solutions, is designed to gauge the latest perceptions and patterns of use and updated requirements for middleware products and capabilities. Please take a few moments and share your preferences on enterprise middleware software. Thank you.

Take the BriefingsDirect middleware/ESB survey now.