Thursday, October 1, 2009

Kapow and StrikeIron team-up to offer web data services capabilities to SMBs

Kapow Technologies has joined forces with StrikeIron to give small and medium-sized businesses (SMBs) a leg up in accessing, using, and sharing Web-based data.

Kapow's Web Data Services 7.0.0 will allow SMBs to wrap any Web site or Web application into RSS feeds or REST Web services. [Disclosure: Kapow is a sponsor of BriefingsDirect podcasts.]

Under Kapow's strategic partnership with StrikeIron, Web Data Services 7.0.0, which is available immediately, will be offered on StrikeIron's Web Services Catalog. The software-as-a-service (SaaS) distribution engine allows developers and business users to integrate live data from private and public Web applications and Web sites.

By using Kapow's latest offering, SMBs that need enterprise-class Web data services access and quality will have automated and structured access without resorting, as they did previously, to cutting and pasting the data from a Web browser. [Learn more about Web data services and business intelligence.]

Kapow's “no coding” technology enables companies to rapidly build, test and deploy standard RSS data feeds and REST web services delivery of real-time web data directly into common business applications such as Microsoft Excel, NetSuite or Salesforce as well as any RSS feed reader.

Kapow can also deliver feeds and services directly to any application builder that can access data in standard RSS, JSON and XML format, including IBM Mashup Center, IBM Rational EGL, JackBe and WaveMaker.

The feeds and services are constructed with a visual point-and-click desktop tool that enables users to create “robots” that automate the navigation of, and interaction with, any Web application or Web site, providing secure and reliable access to the underlying data and business logic. This enables the collection of web intelligence and market data in real time.
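To make the idea concrete, here is a minimal sketch, in Python, of what consuming one of these generated feeds might look like on the receiving end. The endpoint URL and the JSON field names are hypothetical placeholders, not part of Kapow's documented output.

```python
# Minimal sketch: consuming a hypothetical REST endpoint produced by a
# web-data-services "robot". The URL and JSON field names are invented
# for illustration only.
import json
import urllib.request

FEED_URL = "https://example.com/webdataservices/competitor-prices.json"  # hypothetical

def fetch_feed(url):
    """Download the feed and parse it as JSON."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def to_rows(feed):
    """Flatten feed items into (product, price) tuples for a spreadsheet or BI tool."""
    return [(item["product"], float(item["price"])) for item in feed.get("items", [])]

if __name__ == "__main__":
    rows = to_rows(fetch_feed(FEED_URL))
    for product, price in rows:
        print(f"{product}\t{price:.2f}")
```

The same rows could just as easily land in Excel or a reporting tool; the point is that the data arrives as structured records rather than as copy-and-paste text.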

Under the terms of the agreement, Kapow will maintain full technical and operational responsibility for Kapow Web Data Services, including enhancements and upgrades. StrikeIron will provide the commercialization capabilities, handling all customer relationship management functions, including sales, billing, and account support.

Private clouds: A valuable concept or buzzword bingo?

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

Take the BriefingsDirect middleware/ESB survey now.

By Ronald Schmelzer

Every once in a while, the machinery of marketing goes haywire and starts labeling all manner of things with inappropriate terminology. The general rationale of most marketers is that if there’s a bandwagon rolling along somewhere and gaining some traction in the marketplace, it’s best to jump on it while it’s rolling.

After all, much of the challenge of marketing products is getting the attention of your target customer in order to get an opportunity to pitch products or services to them. Of course, if it doesn’t work with one bandwagon, as the old adage goes, try, try again. This is why we often see the same products marketed with different labels and categories applied to them. Sure, the vendors will insist that they have indeed developed some new add-on or tweaked a user interface to include the new concept front and center, but at the very core of it, the products remain fundamentally unchanged.

Now, I don’t want to sound overly pessimistic about product marketing and the state of IT research and development, since the industry couldn’t exist without innovations that are truly new and disruptive and change the very face of the market. However, this sort of innovation often comes not from the established vendors in the market (who have customer bases to grow and defend), but rather from small upstarts that have nothing to lose. It is in this context that we need to evaluate some of the marketing terminology currently coming to the fore around the cloud computing concept.

ZapThink has had many positive things to say about cloud computing, and we do believe that as a business model, technological approach, and service-oriented domain it will have significant impact on the way companies large and small procure, develop, deploy, and scale their applications. Indeed, we’re starting to see hundreds of companies that develop whole products and services without procuring a penny of internal IT hardware or software resources. This is the bonanza that is cloud computing.

Yet, we’re now starting to see the emergence of a more perplexing concept called “private clouds.” If the benefit of the cloud is primarily loosely coupled, location-independent virtualized services (implemented in a service-oriented manner, of course), and we’re doing this with the intent of reducing IT expenditures, then is there any value in a new concept called private clouds? How does the addition of this word “private” add any value to the sort of service-oriented cloud computing that we’ve been now talking about for a handful of years? Is this a valuable term, or mere marketing spin?

To attempt to gain some clarity around this issue, ZapThink reached out to a number of pundits and opinion-leaders in the space to get their thoughts and definitions on private cloud, and to no surprise, the definitions all varied significantly. Let’s explore these definitions and see what additional value (if any) they contribute to the cloud computing discussion.

Private cloud concept #1: Company-owned and operated, location-independent, virtualized (homogeneous) service infrastructure

My colleague, Jason Bloomberg, is of the opinion that a private cloud consists of infrastructure owned by a company to deploy services in a virtualized, location-independent manner. What differentiates private clouds from simply implementing clustered applications or servers is that the cloud is not built with a specific service or application in mind.

Rather, it is an abstracted, virtualized environment that allows for deployment of a wide range of disparate services. It is important to note that in practical terms, companies will most likely not implement this vision of private clouds using a diversity of heterogeneous infrastructure. Indeed, it is in their best interests to control costs and complexity of support, training, and administration by implementing their private clouds using a single vendor stack.

So, this vision of private clouds is often a single-vendor (homogeneous) cluster of virtualized infrastructure that enables location-independent service consumption. Of course, implementing any sort of homogeneous stack reduces the need for loosely-coupled services, and thus weakens the service-oriented cloud computing value proposition as a whole for that company.

Private cloud concept #2: Virtualization plus dynamic provisioning (elasticity)

In a response to a Facebook post, Jean-Jacques Dubray comments that the above definition doesn’t go far enough. Rather, in order for the company-owned and implemented infrastructure to be considered a private cloud, it must include the concept of “elasticity.” Specifically, this means that the hardware and software resources must be provisioned in a dynamic manner, scaling up and down to meet changes in demand, thus enabling a more responsive and cost-sensitive approach to IT provisioning.

This idea of private clouds sounds a lot like the utility computing concept sold as part of IBM’s decade-old vision of on-demand computing. From this perspective, a private cloud is company-owned on-demand utility computing implemented with services instead of tightly coupled applications.
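As a rough illustration of what "elasticity" means in practice, the sketch below grows and shrinks a pool of virtual servers against a utilization target. The thresholds and the simulated monitoring are invented for illustration; a real private cloud would call its own virtualization manager's provisioning interface at the marked points.

```python
# Illustrative sketch of elastic provisioning: grow or shrink a pool of
# virtual servers to track demand. The monitoring is simulated and the
# provisioning step is a stub; a real environment would call its own
# virtualization manager here.
import random
import time

TARGET_UTILIZATION = 0.70   # aim to keep the pool roughly 70 percent busy
MIN_SERVERS, MAX_SERVERS = 2, 20

def current_utilization(pool_size):
    """Stub: in reality this would come from monitoring, not random numbers."""
    demand = random.uniform(1.0, 12.0)      # simulated load, in 'server-equivalents'
    return demand / pool_size

def scale(pool_size, utilization):
    """Return the new pool size given the observed utilization."""
    if utilization > TARGET_UTILIZATION * 1.2 and pool_size < MAX_SERVERS:
        return pool_size + 1                # demand outruns capacity: add a server
    if utilization < TARGET_UTILIZATION * 0.6 and pool_size > MIN_SERVERS:
        return pool_size - 1                # capacity sits idle: release a server
    return pool_size

if __name__ == "__main__":
    pool = MIN_SERVERS
    for _ in range(10):
        util = current_utilization(pool)
        pool = scale(pool, util)
        print(f"utilization={util:.2f} pool={pool}")
        time.sleep(0.1)
```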

Private cloud concept #3: Governed, virtualized, location-independent services

In response to my tweet on the subject, David Chappell comments that the private cloud is really a response to some of the security and governance issues raised by the (public) cloud. Specifically, he states that a “private cloud (equals) more control over what and how.”

Reading between the 140-character lines, I can guess that his perspective is that a private cloud is a governed cloud that enables virtualized, governed, location-independent services. For sure, there has been a lot of consternation over the fact that the most popular “public” clouds share infrastructure between customers and require that data and communications cross the company firewall.

This stresses out a lot of IT administrators and managers. So in response, these folks insist that they want all the technological benefits of cloud computing, but without the governance risk of having it reside in someone else’s infrastructure. Basically, they want the virtualization, loose coupling, and location-independent benefits of cloud computing without the economic benefits of leveraging someone else’s costs and investments. Basically, they would rather own a version of the Amazon EC2 than use it, solely for reasons of governance.

Many people are indeed concerned about those supposed governance and security drawbacks of cloud computing. However, rather than simply dismissing the economic benefits of the public clouds, why can’t we simply approach private clouds as a veneer that we place on top of the public clouds?

Couldn’t companies impose their governance and security requirements on third-party infrastructure, using company-owned governance tools and approaches to manage remote services? Couldn’t we simply demand that the public clouds provide greater governance and security control?

Basically, does the addition of the term private provide the same sort of value as it does in the context of the virtual private network (VPN)? We didn’t throw out the Internet because it was insecure and create a private Internet. So, why should we do the same with cloud computing and create private clouds?

Private cloud concept #4: Internal business model for pay on demand consumption of location-independent, virtualized resources

JP Morgenthal takes an entirely different perspective on the private cloud concept and insists that the primary value of any cloud, whether implemented privately or acquired from a public vendor, is the business model of pay-as-you-go service consumption.

From this perspective, a private cloud is an internal business model that enables organizations to consume and procure internal, virtualized, loosely coupled services using a pay-on-demand model similar to a charge-back mechanism. Rather than an IT organization paying for and supporting the costs of the business users in an aggregate fashion, it can provide those resources using the same business models employed by Amazon, Google, Salesforce.com and others in their public clouds.

In order to realize this vision of private clouds, companies need a means to enable transactional service purchases, auditing of service usage, and organizational methods for enabling such inter-departmental charges. At the most fundamental level, this vision of the private cloud treats IT as a business and a service provider to the rest of the organization.
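A minimal sketch of the arithmetic behind such an internal pay-on-demand model follows; the per-unit rates, metrics, and usage records are invented for illustration only.

```python
# Illustrative chargeback calculation: meter each department's consumption of
# virtualized resources and price it per unit, rather than allocating IT cost
# in aggregate. Rates and usage records are invented for illustration.
from collections import defaultdict

RATES = {"cpu_hour": 0.08, "gb_month": 0.12, "service_call": 0.0004}  # hypothetical

usage_records = [
    {"dept": "marketing", "metric": "cpu_hour", "quantity": 1200},
    {"dept": "marketing", "metric": "service_call", "quantity": 250000},
    {"dept": "finance", "metric": "gb_month", "quantity": 900},
    {"dept": "finance", "metric": "cpu_hour", "quantity": 300},
]

def chargeback(records, rates):
    """Aggregate metered usage into a per-department bill."""
    bills = defaultdict(float)
    for rec in records:
        bills[rec["dept"]] += rec["quantity"] * rates[rec["metric"]]
    return dict(bills)

if __name__ == "__main__":
    for dept, amount in sorted(chargeback(usage_records, RATES).items()):
        print(f"{dept}: ${amount:,.2f}")
```

In practice, the usage records would come from metering the virtualized environment itself, which is exactly the auditing capability this vision of the private cloud requires.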

Private cloud concept #5: Marketing hype, pure and simple

TechTarget offers the most cynical view of the private cloud. In their words, a private cloud is a “marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall."

"Marketing media that uses the words "private cloud" is designed to appeal to an organization that needs or wants more control over their data than they can get by using a third-party hosted service. …” Basically, they opine that the term has marketing value only. Where does this place IT practitioners? Reading between the lines, they encourage us to ignore the usage of the term.

More fodder for pundits

Thomas Bittman from Gartner recently posted a rather snarky blog post that says that if we don’t get private clouds, we’re basically silly people who are missing the boat. In that article, he states, “Can you find a better term? Go ahead.”

Yes, we can. "Service-oriented cloud computing" adequately defines an architectural and infrastructure approach to develop location-independent, loosely coupled services, in a manner that virtualizes and abstracts the implementation of these services. What additional value does the term “private” add to that? It’s not entirely clear, and as we can see from the discussion above, there’s no consensus.

Adding more fuel to the fire, a well-publicized video of Oracle’s Larry Ellison and a follow-up audio post are now making the rounds, in which he (humorously or embarrassingly, depending on your perspective) pokes holes in the cloud computing concept as a whole and chastises IT marketing efforts.

Regardless of where you stand on the cloud computing discussion, the video sheds some light on Oracle’s perspective on this whole mess. While it would be hard to say whether Ellison speaks for all of Oracle (although you would think so), it indicates that even vendors are starting to chafe at the marketing hype that threatens to devalue billions of dollars of their own product investment made over the prior decades.

The ZapThink take

The fact that there’s no single perspective on private cloud might indicate that none of the definitions really warrant separating the private cloud concept from that of cloud computing as a whole -- especially the service-oriented sort of clouds that ZapThink espouses.

One reasonable perspective is that the definitions discussed above are simply differing infrastructural and organizational approaches to implementing service-oriented cloud computing. However, those approaches should not warrant a whole new term and certainly not millions more in infrastructure expenditure.

Trying to create a new concept of private clouds from any of a number of perspectives -- architectural, infrastructural, organizational, governance, business model -- seems to introduce more confusion than clarification. After all, shouldn’t all clouds, private or not, have many of the benefits described above? Doesn’t the concept of a private, company-owned cloud in some ways weaken the cloud value proposition? Who really benefits from this private cloud discussion -- IT practitioners or vendors with products to sell?

The point of any new term should be to clarify and differentiate. If the term does neither, then it is part of the problem, not the solution. However, when vendors start pitching their warmed-over middleware stacks and now-dull enterprise service buses (ESB) as “private cloud” infrastructure stacks – ask yourself: Does this change what you are doing now, or is this the beating of the bandwagon’s marketing drum?

The goal is not to buy more stuff – the goal is to provide the business increasing value from their existing IT investments. This is the purpose and goal of enterprise architecture and the reason why IT exists in the first place.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

Take the BriefingsDirect middleware/ESB survey now.



SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Wednesday, September 30, 2009

Open Mashup Alliance sets out to breed ease of apps and data access, portability

Industry consortia often set out with lofty goals, but don’t always reach them in the face of conflicts among major players. The newly formed Open Mashup Alliance (OMA) could be quite different.

The mission of the OMA, the latest consortium on the technology scene, is to foster the successful use of web data services and enterprise mashup technologies and the adoption of an open language that promotes enterprise mashup interoperability and portability. This is a high priority for more and more enterprises, which is why the OMA could gain momentum.

In fact, it already has on one level. The founding members of the OMA are a diverse group of software vendors, consultants, tech service providers, and other industry leaders that share a common interest: promoting the open, free-to-use Enterprise Mashup Markup Language (EMML) for the development, interoperability and compatibility of enterprise mashup offerings.

Charter members include Adobe, Bank of America, Capgemini, Hinchcliffe & Co., HP, Intel, JackBe, Kapow Technologies, ProgrammableWeb, Synteractive, and Xignite. Any organization that wants to advance EMML and enterprise mashup interoperability and compatibility can join the OMA. [Disclosure: HP and Kapow are sponsors of BriefingsDirect podcasts.]

Remove vendor lock-in

Michael Ogrinz, principal architect at Bank of America and author of the book Mashup Patterns, was right when he said the industry needs to remove vendor lock-in concerns raised by proprietary toolsets in order for enterprise mashups to take hold.

“We also need to inspire the innovative minds of the open-source community to start working in this space,” Ogrinz says. “By establishing an open standard for mashups, the OMA and EMML addresses both of these issues.”

Andy Mulholland, Global CTO at Capgemini and co-author of the book Mashup Corporations, has a different take. As he sees it, enterprises around the world are achieving excellent results with enterprise mashup solutions. But, he adds, these enterprises also realize they could reduce their risk and increase their value with solutions built on standardized vendor products. That’s a good observation and seems to be a driving force for the OMA.

But there is another driving force that resonates in a down economy: return on investment (ROI). Tim Hall, director of HP’s SOA Center, focused on the ROI aspects of enterprise mashup standards. He’s convinced enterprises can accelerate ROI, reduce the risks of mashup efforts and deliver real-time reporting of dynamic information to business users by adopting industry-wide open standards like EMML.

“HP's collaboration with Open Mashup Alliance members to promote the standard design of mashups will help customers advance their SOA initiatives by allowing them to provide a rich user experience on top of their web services,” Hall says.

The EMML specification will be governed under the Creative Commons License and supported by a free-to-use EMML reference runtime engine. The Open Mashup Alliance will steward and enhance the EMML v1.0 specification for future contribution to a standards body.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Doing nothing can be costliest IT course when legacy systems and applications are involved

Listen to podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett Packard.

This latest BriefingsDirect podcast discussion tackles the high -- and often under-appreciated -- cost for many enterprises of doing nothing about aging, monolithic applications. Not making a choice about legacy mainframe and poorly utilized applications is, in effect, making a choice not to transform and modernize the applications and their supporting systems.

Not doing anything about aging IT essentially embraces an ongoing cost structure that helps prevent new spending for efficiency-gaining IT innovations. It’s a choice to suspend applications on ossified platforms and to make their reuse and integration difficult, complex, and costly.

Doing nothing is a choice that, especially in a recession, hurts companies in multiple ways -- because successful transformation is the lifeblood of near and long-term productivity improvements.

Here to help us better understand the perils of continuing to do nothing about aging legacy and mainframe applications, we’re joined by four IT transformation experts from Hewlett-Packard (HP): Brad Hipps, product marketer for Application Lifecycle Management (ALM) and Applications Portfolio Software at HP; John Pickett from Enterprise Storage and Server marketing at HP; Paul Evans, worldwide marketing lead on Applications Transformation at HP, and Steve Woods, application transformation analyst and distinguished software engineer at HP Enterprise Services. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: What we’re seeing is that the cost of legacy systems and the cost of supporting the mainframe hasn’t changed in 12 months. What has changed is the available cash that companies have to spend on IT, as, over time, that cash amount may have either been frozen or is being reduced. That puts even more pressure on the IT department and the CIO in how to spend that money, where to spend that money, and how to ensure alignment between what the business wants to do and where the technology needs to go.

Our concern is that there is a cost of doing nothing. People eventually end up spending their whole IT budgets on maintenance and upgrades and virtually nothing on innovation.

At a time when competitiveness is needed more than it was a year ago, there has to be a shift in the way we spend our IT dollars and where we spend our IT dollars. That means looking at the legacy software environments and the underpinning infrastructure. It’s absolutely a necessity.

Woods: For years, the biggest hurdle was that most customers would say they didn’t really have to make a decision, because the [replacement] performance wasn’t there. The performance-reliability wasn't there. That is there now. There is really no excuse not to move because of performance-reliability issues.

What's changing today is the ability to look at legacy source code. We have the tools now to look at the code and visualize it in ways that are very compelling.

What has also changed is the growth of architectural components, such as extract, transform and load (ETL) tools, data integration tools, and reporting tools. When we look at a large body of, say, 10 million lines of COBOL and we find that three million lines of that code is doing reporting, or maybe two million is doing ETL work, we typically suggest they move that asymmetrically to a new platform that does not use handwritten code.

That’s really risk aversion -- doing it very incrementally with low intrusion, and that’s also where the best return on investment (ROI) is. ... These tools have matured so that we have the performance and we also have the tools to help them understand their legacy systems today.
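To illustrate the kind of portfolio sizing Woods describes -- how much of a legacy code base is reporting versus data movement -- here is a deliberately crude sketch that buckets lines of COBOL by keyword. The keyword lists and directory path are hypothetical; real migration tools rely on far richer static analysis.

```python
# Very rough sketch: bucket lines of legacy source by apparent purpose
# (reporting, data movement, everything else) to size candidates for
# asymmetric migration. The keyword heuristics are illustrative only.
import os

REPORTING_HINTS = ("REPORT", "WRITE", "DISPLAY")
ETL_HINTS = ("READ", "SORT", "MERGE", "UNSTRING")

def classify_file(path):
    """Count lines in one source file by crude category."""
    counts = {"reporting": 0, "etl": 0, "other": 0}
    with open(path, errors="ignore") as src:
        for line in src:
            upper = line.upper()
            if any(hint in upper for hint in REPORTING_HINTS):
                counts["reporting"] += 1
            elif any(hint in upper for hint in ETL_HINTS):
                counts["etl"] += 1
            else:
                counts["other"] += 1
    return counts

def classify_tree(root):
    """Walk a source tree and total the per-file counts."""
    totals = {"reporting": 0, "etl": 0, "other": 0}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.lower().endswith((".cbl", ".cob")):
                for key, value in classify_file(os.path.join(dirpath, name)).items():
                    totals[key] += value
    return totals

if __name__ == "__main__":
    print(classify_tree("./legacy-src"))   # hypothetical source tree location
```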

Pickett: Typically, when we take a look at the high end of applications that are going to be moving over and sitting on a legacy system, many times they’re sitting on a mainframe platform. With that, one of the things that has changed over the last several years is the functionality gap between what the mainframe offered 5 or 10 years ago and what open systems offer today. That gap has not only been closed but, in some cases, open systems exceed what’s available on the mainframe.

It’s not only a matter of cost, but it’s also factoring in the power and cooling as well. Certainly, what we’ve seen is that the cost savings that can be applied on the infrastructure side are then applied back into modernizing the application.

Hipps: This term "agility" gets used so often that people tend to forget what it means. The reality of today’s modern organization -- and this is contrasted even from 5, certainly 10 years ago -- is that when we look at applications, they are everywhere. There has been an application explosion.

When we start talking about application transformation and we assign that trend to agility, what we’re acknowledging is that for the business to make any change today in the way it does business -- in any new market initiative, in any competitive threat it wants to respond to -- there is going to be an application, very likely "applications" plural.

The decisions that you're going to make to transform your applications should all be pointed at and informed by shrinking the amount of time that takes you to turn around and realize some business initiative.

That's what we’re seeking with agility. Following pretty closely behind that, you can begin to see why there is a promise in cloud. It saves me a lot of infrastructural headaches. It’s supposed to obviate a lot of the challenges that I have around just standing up the application and getting it ready, let alone having to build the application itself.

So I think that is the view of transformation in terms of agility and why we’re seeing things like cloud. These other things really start to point the direction to greater agility.

... I tend to think that application transformation is, in most ways, about breaking up and distributing that which was previously self-contained and closed.

Whether you're looking at moving from mainframe processing to distributed processing, or from distributed processing to virtualization; whether you're talking about the application teams themselves, which are now some combination of in-house, near-shore, offshore, and outsourced, distributed from a single building to all around the world; or the architectures themselves, which have gone from monolithic, fairly brittle things to services-driven things.

You can look at any one of those trends and begin to speak about benefits, whether it’s leveraging a better global cost basis or, on the architectural side, where the fundamental thing we’re trying to do is say, "Let’s move away from a world in which everything is handcrafted."

Assembly-line model

Let’s get much closer to the assembly-line model, where I have a series of preexisting trustworthy components and I know where they are, I know what they do, and my work now becomes really a matter of assembling those. They can take any variety of shapes based on my need, because of the components I have created.

We're getting back to this idea of lower cost and increased agility. We can only imagine how certain car manufacturers would be doing if they were handcrafting every car. We moved to the assembly line for a reason, and software typically has lagged what we see in other engineering disciplines. Here we’re finally going to catch up. We're finally going to recognize that we can take an assembly-line approach in the creation of applications as well, with all the intended benefits.

Evans: ... Once we have done it, once we have removed that handwritten code -- code that is bigger than it needs to be to get the job done -- it’s out and it’s finished with, and then we can start looking at economics that are totally different going forward, where we can actually flip this ratio.

Today, we may spend 80 percent or 90 percent of our IT budget on maintenance, and 10 percent on innovation. What we want to do is flip it. We're not going to flip it in a year or maybe even two, but we have got to take steps. If we don’t start taking steps, it will never go away.
Listen to podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett Packard.

Tuesday, September 29, 2009

Akamai joins industry push for rich and fast desktop virtualization services

Call it a trend – and not just a virtual one. Akamai Technologies is the latest tech firm to join the effort to push desktop virtualization into the mainstream with the salient message of swift return on investment (ROI) and lower total costs for PC desktop delivery.

Akamai joins HP, Microsoft, VMware, as well as Citrix, Desktone and a host of others in the quest to advance the cause of desktop virtualization (aka VDI) in a sour economy. Better known for optimizing delivery of web content, video, dynamic transactions and enterprise applications online, Akamai just introduced a managed Internet service that optimizes the delivery of virtualized client applications and PC desktops.

Akamai isn’t starting from scratch. The company is leveraging core technology from its IP Application Accelerator solution to offer a new service that promises cost-efficiency, scalability and the global reach to deliver applications over virtual desktop infrastructure products offered by Citrix, Microsoft and VMware. [Disclosure: Akamai is a sponsor of BriefingsDirect podcasts.]

“We see the desktop virtualization market poised for significant growth and believe that our unique managed services model allows us to work with enterprises on large, global deployments of their virtual desktop infrastructure,” says Willie Tejada, vice president of Akamai’s Application and Site Acceleration group, in a release.

Since Akamai launched its IP Application Accelerator, Tejada reports good traction beyond browser-based applications. Now, he’s betting Akamai’s new customized offering will make room for the company to focus even more on virtualization. He’s also betting enterprise customers will appreciate the new pricing model. With IP Application Accelerator targeted for VDI, Akamai is rolling out concurrent user-based pricing and customized integrations through professional services to virtual desktops.

Significant growth

Tejada is right about one thing: the expected and significant growth of virtual desktop connected devices. Gartner predicts this sector will grow to about 66 million devices by the end of 2014. That translates to 15 percent of all traditional professional desktop PCs. With these numbers on hand, it’s clear that enterprises are rapidly adopting virtualization as a key component of cost-containment efforts.

I think we're facing an inflection point for desktop virtualization, fueled by the pending Windows 7 release, pent-up refresh demand on PCs generally, and the need for better security and compliance on desktops. Add to that economic drivers of reducing client support labor costs, energy use, and the need to upgrade hardware, and Gartner's numbers look conservative.

Device makers are hastening the move to VDI with thin clients (both PCs and notebooks) that add all the experience of the full PC but in the size of a ham sandwich and for only a few hundred dollars. Hold the mayo!

But there are still challenges to guaranteeing the performance and scale of VDI across wide area networks. Akamai points out three in particular. First is the user’s proximity to the centralized virtualization environment, which has a direct impact on performance and availability. Second, virtual desktop protocols consume large amounts of bandwidth. Third, there is traditionally a high cost, as well as uptime issues, associated with private-WAN connections in emerging territories where outsourcing and off-shoring are commonplace.
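To put a number on the first of those challenges, the small sketch below samples round-trip connection time from a user's location to a virtual desktop gateway. The host name and port are placeholders, not a real Akamai or VDI endpoint.

```python
# Minimal latency probe: time repeated TCP connections to a remote virtual
# desktop gateway to gauge how far a user sits from the centralized
# environment. The host and port are placeholders, not a real service.
import socket
import statistics
import time

GATEWAY = ("vdi-gateway.example.com", 443)   # hypothetical endpoint

def sample_rtt(host_port, attempts=5, timeout=3.0):
    """Return round-trip connection times in milliseconds."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection(host_port, timeout=timeout):
                pass
        except OSError:
            continue                          # unreachable samples are skipped
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

if __name__ == "__main__":
    rtts = sample_rtt(GATEWAY)
    if rtts:
        print(f"median RTT: {statistics.median(rtts):.1f} ms over {len(rtts)} samples")
    else:
        print("gateway unreachable")
```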

Akamai is not only promising its service will overcome all those challenges, it’s also suggesting that working with its solution on the virtualization front may eliminate the need to build out or upgrade costly private networks limited by a preset reach and scale. How does Akamai do this? By allowing for highly scalable and secure virtual desktop deployments to anyone, anywhere, across an Internet-based platform spanning 70 countries.

According to Akamai, its technology is designed to eliminate latency introduced by Internet routing, packet loss, and constrained throughput. The company also says that performance improvements can be realized through several techniques, including dynamic mapping, route optimization, packet redundancy algorithms, and transport protocol optimization.

That’s the story for Akamai’s IP Application Accelerator targeted for VDI. We’ll have to wait and see the case studies of customers relying on the new solution, but the promises are, well, promising. If you have a lot of PCs in call centers or manage a lot of remote locations, give VDI a look. Its time has come from a technology, network performance, cost, and long-term economics perspective.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Monday, September 21, 2009

Part 1 of 4: Web data services extend business intelligence depth and breadth across social, mobile, web domains

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Kapow Technologies.

See popular event speaker Howard Dresner's latest book, Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change, or visit his website.

The latest BriefingsDirect podcast discussion focuses on the future of business intelligence (BI) -- and on bringing more information from more sources into an analytic process, and thereby getting more actionable intelligence out.

The explosion of information -- from across the Web, from mobile devices, from inside social networks, and from the extended business processes that organizations are now employing -- provides an opportunity, but it also poses a challenge.

This information can play a critical role in allowing organizations to gather and refine analytics into new market strategies, better buying decisions, and to be the first into new business development opportunities. The challenge is in getting at these Web data services and bringing them into play with existing BI tools and traditional data sets.

This is the first in a series of podcasts looking at the future of BI and how Web data services can be brought to bear on better business outcomes.

So, what are Web data services and how can they be acquired? Furthermore, what is the future of BI when these extended data sources are made into strong components of the forecasts and analytics that enterprises need to survive the recession and also to best exploit the growth that follows?

Here to help us explain the benefits of Web data services and BI is Howard Dresner, president and founder of Dresner Advisory Services, and Ron Yu, vice president of marketing at Kapow Technologies. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Dresner: BI is really about empowering end users, as well as their respective organizations, with insight, the ability to develop perspective. In a downturn, what better time is there to have some understanding of some of the forces that are driving the business?

Of course, it's always useful to have the benefit of insight and perspective, even in good times. But, it tends to go from being more outward-focused during good times, focused on markets and acquiring customers and so forth, to being more introspective or internally focused during the bad times, understanding efficiencies and how one can be more productive.

So, BI always has merit and in a downturn it's even more relevant, because we are really less tolerant of being able to make mistakes. We have to execute with even greater precision, and that's really what BI helps us do.

... The future is about focusing on the information and those insights that can empower the individuals, their respective departments, and the enterprise to stay aligned with the mission of that organization.

... If you're trying to develop [such] perspective, bringing as much relevant data or information to bear is a valuable thing to do. A lot of organizations focus just on lots of information. I think that you need to focus on the right information to help the organization and individuals carry out the mission of that organization.

There are lots of information sources. When I first started covering this beat 20 years ago, the available information was largely just internal stores, corporate stores, or databases of information. Now, a lot of the information that ought to be used, and in many cases, is being used, is not just internal information, but is external as well.

There are syndicated sources, but also the entire World Wide Web, where we can learn about our customers and our competitors, as well as a whole host of sources that ought to be considered, if we want to be effective in pursuing new markets or even serving our existing customers.

Yu: I fully agree with Howard. It's all about the right data and, given the current global and market conditions, enterprises have cut really deep -- from the line of business, but also into the IT organizations. However, they're still challenged with ways to drive more efficiencies, while also trying to innovate.

The challenges being presented are monumental. Traditional BI methods and tools provide really powerful analytical capabilities but, at the same time, they're increasingly constrained by limited access to relevant data and by how to get timely access to that data.

What we see are pockets of departmental use cases, where marketing departments and product managers are starting to look outside in public data sources to bring in valuable information, so they can find out how the products and services are doing in the market.

... Inclusive BI essentially includes new and external data sources for departmental applications, but that's only the beginning. Inclusive BI is a completely new mindset. For every application that IT or line of business develops, it just creates another data silo and another information silo. You have another place that information is disconnected from others.

... There is effectively a new class of BI applications as we have been discussing, that depends on a completely different set of data sources. Web data services is about this agile access and delivery of the right data at the right time.

With different business pressures that are surfacing everyday, this leads to a continuous need for more and more data sources.

... Critical decision-making requires, as Howard was saying earlier, that all business information is easily leveraged whenever it's needed. But today, each application is separate and not joined. This makes line-of-business decision-making very difficult, and it's not in real time.

An easier way

As this dynamic business environment continues to grow, it’s completely infeasible for IT to update their existing data warehouses or to build a new data mart. That can't be the solution. There has to be an easier way to access and extract data exactly where it resides, without having to move data back and forth from databases, data marts, and data warehouses, which effectively become snapshots.

... Web data services provides immediate access to and delivery of this critical data into the business user's BI environment, so that the right, timely decisions can be made. It effectively takes these dashboards, reporting, and analytics to the next level for critical decision-making. So, when we look deeper into how this is actually playing out, it's all about early and precise predictions.

Dresner: ... Some IT organizations have become pretty inflexible. They are focused myopically on some internal sources and are not being responsive to the end user.

To the extent that they can find new tools like Web data services to help them be more effective and more efficient, they are totally open to giving line of business self-service capabilities.



You need to be careful not to suffer from what I call BI myopia, where we are focused just on our internal corporate systems or our financial systems. We need to be responsive. We need to be inclusive of information that can respond to the user's needs as quickly as possible, and sometimes the competency center is the right approach.

I have instances where the users do wrest control and, in my latest book, I have four very interesting case studies. Some are focused on organizations, where it was more IT driven. In other instances, it was business operations or finance driven.

Yu: ... For example, in leading financial services companies, what they're looking for is on this theme of early and precise predictions. How can you leverage information sources that are publicly available, like weather information, to be able to assess the precipitation and rainfall and even the water levels of lakes that directly contribute to hydroelectricity?

If we can gather all that information, and develop a BI system that can aggregate all this information and provide the analytical capabilities, then you can make very important decisions about trading on energy commodities and investment decisions.
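As a toy version of that scenario, the sketch below pulls a precipitation reading from a public weather feed and turns it into a rough hydro-supply signal that could sit alongside internal trading data in a dashboard. The feed URL, JSON fields, and thresholds are all invented for illustration.

```python
# Toy illustration of the hydroelectricity example: combine a public weather
# observation with an internal view to produce a simple trading signal.
# The feed URL, JSON fields, and thresholds are hypothetical.
import json
import urllib.request

WEATHER_FEED = "https://example.com/weather/reservoir-region.json"   # hypothetical

def fetch_precipitation(url):
    """Return recent precipitation (mm) from a public weather feed."""
    with urllib.request.urlopen(url) as response:
        observation = json.load(response)
    return float(observation["precipitation_mm"])

def hydro_signal(precip_mm, low=20.0, high=80.0):
    """Map precipitation into a rough supply outlook for hydro generation."""
    if precip_mm < low:
        return "supply tight: expect upward pressure on power prices"
    if precip_mm > high:
        return "supply ample: expect downward pressure on power prices"
    return "neutral"

if __name__ == "__main__":
    precip = fetch_precipitation(WEATHER_FEED)
    print(f"precipitation={precip:.1f} mm -> {hydro_signal(precip)}")
```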

Web data services effectively automates this access and extraction of the data and metadata and things of that nature, so that IT doesn't have to go and build a brand new separate BI system every time line of business comes up with a new business scenario.

... It's about the preciseness of the data source that the line of business already understands. They want to access it, because they're working with that data, they're viewing that data, and they're seeing it through their own applications every single day.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Kapow Technologies.

See popular event speaker Howard Dresner's latest book, Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change, or visit his website.

Process isomorphism: The critical link between SOA and BPM

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

Take the BriefingsDirect middleware/ESB survey now.

By Jason Bloomberg

ZapThink has long championed the close relationship between business process management (BPM) projects and service-oriented architecture (SOA) initiatives. As anyone who has been through our Licensed ZapThink Architect Bootcamp can attest, we have a process-centric view of SOA, where the point to building loosely coupled business services is to support metadata-driven compositions that implement business processes, what we call service-oriented business applications, or SOBAs, for want of a better term.

Nevertheless, there is still confusion on this point, among enterprise practitioners who see BPM as a business effort and SOA as technology-centric, among vendors who see them as separate products in separate markets, and even among pundits who see Services as supporting business functions but not business processes.

On the other hand, there are plenty of enterprise architects who do see the connection between these two initiatives, and who have pulled them together into "BPM enabled by SOA" efforts. This synergy, however, is not automatic, and requires some hard work both among the people focusing on optimizing business processes to better meet changing business needs as well as the team looking to build composable business services that support the business agility and business empowerment drivers for their SOA initiatives.

ZapThink has worked with many such organizations, and over time a distinct best practice pattern has emerged, one that is both fundamental as well as subtle, and as a result, has fallen through the cracks of compendia of SOA patterns: the Process Isomorphism pattern. Understanding this pattern and how to apply it can help organizations pull their BPM and SOA efforts together, and even more importantly, improve the alignment of their SOA initiatives with core business drivers.

What is Process Isomorphism?


An isomorphism is a mathematical concept that expresses a relationship between two structures that are identical in form but may differ in their respective implementations. A very simple example would be two tic-tac-toe games, one with the traditional X's and O's, and the other with, say, red dots and blue dots. The game board and the rules are the same, in spite of the difference in symbols the players use to play the games. If two particular games follow the same sequence of moves, they would be isomorphic.

The term process isomorphism usually refers to two processes that are structurally identical, typically between two companies in the same industry. For example, if the order-to-cash process is structurally identical between companies A and B, that is, the same steps in the same order with the same process logic, those processes would be isomorphic, even if the two companies had differences in their underlying technical implementations of the respective processes.

We're using the term differently here, however. Process isomorphism in the SOA context is an isomorphism between a process on the one hand, and the SOBA that implements it on the other. In other words, if you were to model a business process, and as a separate exercise, model the composition of services that implements that process, where those two models have the same structure, then they would be isomorphic.

One conversation that helped crystallize this notion was with John Zachman, who was explaining some of the changes he has recently made to his seminal Zachman Framework. He has renamed Row 3, which had been the System Model row, to the System Logic row. People were confusing the System Model with the physical representation of the system, which resides one row down. Process isomorphism, as we discuss it here, is essentially a design practice that relates these two rows of Column 2, the How column. In essence, the process logic model is one level above the service composition model that implements the process logic. The process isomorphism pattern states that these two models should be isomorphic.

Process Isomorphism in Practice

We've been using a wonderful example of Process Isomorphism on our LZA Course for a few years now, courtesy of British Petroleum (BP), who presented at our Practical SOA event in February, 2008 (more about our upcoming Practical SOA event). The presentation focused on how process decomposition is the common language between business and IT efforts, and one of the examples focused on the Well Work Performance process, one of thousands of processes in their oil drilling line of business:



BP's Well Work Performance Process

The Description column in the chart above reflects the four main subprocesses that make up this process. The Sub-Task columns represent individual sub-tasks, or steps in the process. Finally, the Supporting Service Name column indicates the Business Service that implements the corresponding sub-task. The fact that there is a one-to-one correspondence between sub-tasks and supporting Services, combined with the implied correspondence between the process logic and the composition logic, illustrates Process Isomorphism. In this simple example, the process logic is a simple linear sequence, but if the logic were more complex, say with branching and error conditions, then the process would exhibit isomorphism if the composition logic continued to reflect the process logic.

It is important to point out that the one-to-one correspondence between process sub-tasks and supporting Services is by no means a sure thing, and in practice, many organizations fail to design their compositions with such a correspondence. Frequently, the issue is that the SOA effort is excessively bottom-up, where architects specify services based upon existing capabilities. Such bottom-up approaches typically yield services that don't match up with process requirements. Equally common are BPM efforts that are excessively top-down, in that they seek to optimize processes without considering the right level of detail for those processes to enable services to implement steps in the processes. Only by taking an iterative approach where each iteration combines top-down and bottom-up design is an organization likely to achieve Process Isomorphism.
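One way to make the pattern operational is to treat the process model and the composition model as simple data structures and test them for structural equivalence. The sketch below does this for the linear case; the sub-task and service names are invented, loosely echoing the BP-style table above, and are not BP's actual model.

```python
# Sketch: check that a (linear) process model and the service composition that
# implements it are structurally identical -- one supporting service per
# sub-task, in the same order. Names are illustrative, not BP's actual model.

process_model = [
    "Identify well work candidate",
    "Plan well work activity",
    "Execute well work activity",
    "Review well work performance",
]

composition_model = [
    "CandidateIdentificationService",
    "WorkPlanningService",
    "WorkExecutionService",
    "PerformanceReviewService",
]

# Mapping from process sub-task to the service intended to implement it.
subtask_to_service = dict(zip(process_model, composition_model))

def is_isomorphic(process, composition, mapping):
    """True if every sub-task maps to exactly one service and order is preserved."""
    if len(process) != len(composition):
        return False
    mapped = [mapping.get(step) for step in process]
    one_to_one = None not in mapped and len(set(mapped)) == len(mapped)
    return one_to_one and mapped == composition

if __name__ == "__main__":
    print("isomorphic:", is_isomorphic(process_model, composition_model, subtask_to_service))
```

A check like this, run whenever either model changes, is one informal way to keep the BPM and SOA teams honest about preserving the one-to-one correspondence.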

The Process Isomorphism Value Proposition


The essential benefit of Process Isomorphism is being able to use the process representation to represent the composition and vice-versa. While these concepts are fundamentally different, in that they live on different rows of the Zachman Framework, the isomorphism relationship allows us the luxury of considering them to be the same thing. In other words, we can discuss the composition as though it were the process, and the process as though it were the composition.

This informal equivalence gives us a variety of benefits. For example, if process steps correspond directly to services, then service reuse is more straightforward to achieve than when the correspondence between steps and services is less clear. Service reuse discussions can be cast in the context of process overlaps. If two processes share a sub-task, then the SOBAs that implement those processes will share the supporting service. In addition, the metadata representation of the composition logic, for example, a BPEL file, will represent the process logic itself. Without process isomorphism, the process logic the BPM team comes up with won't correspond directly to the BPEL logic for the supporting composition. This disconnect can lead directly to misalignment between IT capabilities and business requirements, and also limits business agility, because a lack of clarity into the relationship between process and supporting composition can lead to unintentional tight coupling between the two.

The ZapThink Take


Perhaps the greatest benefit of Process Isomorphism, however, is that it helps to establish a common language between business and IT. The business folks can be talking about processes, and the IT folks can be talking about SOBAs, and at a certain level, they're talking about the same thing. The architect knows they're different concepts, of course, but conversations across the business/IT aisle no longer have to dwell on the differences.

The end result should be a better understanding of the synergies between BPM and SOA. If process specialists want to think of business services as process sub-tasks, then they can go right ahead. Similarly, if technical implementers prefer to think of business processes as being compositions of services, that's fine too. And best of all, when the BPM team draws the process specification on one white board and the SOA team draws the composition specification on another, the two diagrams will look exactly alike. If that's not business/IT alignment, then what is?

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

Take the BriefingsDirect middleware/ESB survey now.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Friday, September 18, 2009

Caught between peak and valley -- How CIOs survive today, while positioning for tomorrow

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Download the slides. Sponsor: Hewlett-Packard.

Are CIOs making the right decisions and adjustments in both strategy and execution as we face a new era in IT priorities? The down economy, the resetting of IT investment patterns, the need for agile business processes, and the arrival of some new technologies are all combining to force CIOs to reevaluate their plans.

What should CIOs make as priorities in the short, medium, and long terms? How can they reduce total cost, while modernizing and transforming IT? What can they do to better support their business requirements? In a nutshell, how can they best prepare for the new economy?

Here to help address the pressing questions during a challenging time -- and yet also a time in which opportunity and differentiation for CIOs beckons -- is Lee Bonham, marketing director for CIO Agenda Programs in HP’s Technology and Solutions Group. The interview is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bonham: We all recognize that we’re in a tough time right now. In a sense, the challenge has become even more difficult over the past six months for CIOs and other decision-makers. Many people are having to make tough decisions about where to spend their scarce investment dollars. The demand for technology to deliver business value is still strong, and it perhaps has even increased, but the supply of funding resources for many organizations has stayed flat or even gone down.

To cope with that, CIOs have to work smarter, not harder, and have to restructure their IT spending. Looking forward, we see, again, a change in the landscape. So, people who have worked through the past six months may need to readjust now.

What that means for CIOs is they need to think about how to position themselves and how to position their organizations to be ready when growth and new opportunity starts to kick in. At the same time, there are some new technologies that CIOs and IT organizations need to think about, position, understand, and start to exploit -- if they’re to gain advantage.

Organizations need to take stock of where they are and implement three strategies:
  • Standardize, optimize, and automate their technology infrastructure -- to make the best use of the systems that they have installed and have available at the moment. Optimizing infrastructure can lead to some rapid financial savings and improved utilization, giving a good return on investment (ROI).
  • Prioritize -- to stop doing some of the projects and programs that they’ve had on their plate and focus their resources in areas that give the best return.
  • Look at new, flexible sourcing options and new ways of financing and funding existing programs to make sure that they are not a drain on capital resources.

We’ve been putting forward strategies to help in these three areas to allow our customers to remain competitive and efficient through the downturn. As I said, those needs will carry on, but there are some other challenges that will emerge in the next few months.

Growth may come in emerging markets, in new industry segments, and so on. CIOs need to look at innovation opportunities. Matching the short term and the long term is a really difficult question. There needs to be a standard way of measuring the financial benefit of IT investment that helps bridge that gap.

There are tools and techniques that leading CIOs have been putting in place around project prioritization and portfolio management to make sure that they are making the right choices for their investments. We’re seeing quite a difference for those organizations that are using those tools and techniques. They’re getting very significant benefits and savings.

The financial community is looking for fast return -- projects that are going to deliver quick benefits. CIOs need to make sure that they represent their programs and projects in a clear financial way, much more than they have been before this period. Tools like Project and Portfolio Management (PPM) software can help define and outline those financial benefits in a way that financial analysts and CFOs can recognize.
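As a back-of-the-envelope illustration of the kind of financial framing Bonham describes (and that PPM tools formalize), the sketch below ranks candidate projects by a simple payback-period measure. The projects and figures are invented.

```python
# Illustrative project prioritization: rank candidate IT projects by a crude
# payback-period measure so the fastest-returning work rises to the top.
# All figures are invented for illustration.

projects = [
    {"name": "Legacy app modernization", "cost": 400_000, "annual_benefit": 250_000},
    {"name": "Desktop virtualization rollout", "cost": 150_000, "annual_benefit": 120_000},
    {"name": "BI dashboard refresh", "cost": 60_000, "annual_benefit": 90_000},
]

def payback_years(project):
    """Years to recover the up-front cost from the annual benefit."""
    return project["cost"] / project["annual_benefit"]

if __name__ == "__main__":
    for p in sorted(projects, key=payback_years):
        print(f"{p['name']}: payback {payback_years(p):.1f} years")
```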
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Download the slides. Sponsor: Hewlett-Packard.