Thursday, November 5, 2009
Role of governance plumbed in Nov. 10 webinar on managing hybrid and cloud computing environments
The free, live webinar begins at 2 p.m. ET. Register at https://www2.gotomeeting.com/register/695643130. [Disclosure: WebLayers is a sponsor of BriefingsDirect podcasts.]
Titled "How Governance Gets You More Mileage from Your Hybrid Computing Environment,” the webinar targets enterprise IT managers, architects and developers interested in governance for infrastructures that include hybrids of cloud computing, software as a service (saaS) and service-oriented architectures (SOA). There will be plenty of opportunity to ask questions and join the discussion.
Organizations are looking for more consistency across IT-enabled enterprise activities, and are finding competitive differentiation in being able to manage their processes more effectively. That benefit, however, requires the ability to govern across different types of systems, infrastructure, and applications delivery models. Enforcing policies and implementing comprehensive governance enhance business modeling, deeper services orientation, process refinement, and general business innovation.
Increasingly, governance of hybrid computing environments establishes the ground rules under which business activities and processes -- supported by multiple and increasingly diverse infrastructure models -- operate.
Developing and maintaining governance also fosters collaboration among architects, those building processes and solutions for companies, and those operating the infrastructure -- whether it is supported within the enterprise or outside. It also sets up multi-party business processes that span company boundaries with coordinated partners.
Cambridge, Mass.-based WebLayers provides a design-time governance platform that helps centralize policy management across multiple IT domains -- from SOA through mainframe and cloud implementations. Such governance clearly works to reduce the costs of managing and scaling such environments, individually and in combination.
In the webinar we'll look at how structured policies, including extensions across industry standards, speed governance implementation and enforcement -- from design time through ongoing deployment and growth.
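To make design-time governance concrete, here is a minimal sketch in Python of what automated policy enforcement against service artifacts can look like. The rules, the service-descriptor format, and the function names are all hypothetical illustrations for this post -- this is not WebLayers' product API.

```python
# Minimal sketch of automated design-time policy enforcement.
# The policy rules and service-descriptor format are hypothetical,
# not WebLayers' actual API.

from dataclasses import dataclass

@dataclass
class ServiceDescriptor:
    name: str         # e.g. "billing.invoice"
    transport: str    # e.g. "https" or "http"
    has_wsdl: bool    # is the interface formally described?
    owner: str        # team accountable for the service

# Each policy is a (description, predicate) pair; predicates return True on compliance.
POLICIES = [
    ("Service names must be lowercase and dot-delimited",
     lambda s: s.name == s.name.lower() and "." in s.name),
    ("All service traffic must use HTTPS",
     lambda s: s.transport == "https"),
    ("Every service must publish a formal interface (WSDL)",
     lambda s: s.has_wsdl),
    ("Every service must have a named owner",
     lambda s: bool(s.owner)),
]

def enforce(service: ServiceDescriptor) -> list[str]:
    """Return the list of policy violations for one service."""
    return [desc for desc, check in POLICIES if not check(service)]

if __name__ == "__main__":
    svc = ServiceDescriptor(name="billing.invoice", transport="http",
                            has_wsdl=True, owner="payments-team")
    for violation in enforce(svc):
        print(f"POLICY VIOLATION [{svc.name}]: {violation}")
```

The point of centralizing rules this way is that the same policy set can be checked at design time, in builds, and at deployment -- rather than re-encoded ad hoc in each IT domain.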
So join Favazza and me at 2 p.m. ET on Nov. 10 by registering at https://www2.gotomeeting.com/register/695643130.
Wednesday, November 4, 2009
HP takes converged infrastructure a notch higher with new data warehouse appliance
HP Neoview Advantage, HP Converged Infrastructure Architecture, and HP Converged Infrastructure Consulting Services are designed to help organizations drive business and technology innovations at lower total cost via lower total hassle. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
HP’s measured focus
HP isn’t just betting on a market whim. Recent market research it supported reveals that more than 90 percent of senior business decision makers believe business cycles will continue to be unpredictable for the next few years — and 80 percent recognize they need to be far more flexible in how they leverage technology for business.
The same old IT song and dance doesn't seem to be what these businesses are seeking. Nearly 85 percent of those surveyed cited innovation as critical to success, and 71 percent said they would sanction more technology investments -- if they could see how those investments met their organization’s time-to-market and business opportunity needs.
Cost nowadays is about a lot more than the rack and license. The fuller picture -- labor, customization, integration, shared services support, data-use tweaking, and the inevitable unforeseen gotchas -- needs to be managed in unison, if that desired agility is to be affordable (and sanctioned by the bean-counters).
HP said its new offerings deliver three key advantages:
- Improved competitiveness and risk mitigation through business data management, information governance, and business analytics
- Faster time to revenue for new goods and services
- The ability to return to peak form after being compressed or stretched -- that is, to scale elastically with demand
First up is HP Neoview Advantage, the new release of the HP Neoview enterprise data warehouse platform, which aims to help organizations respond to business events more quickly by supporting real-time insight and decision-making.
HP calls the performance, capacity, footprint, and manageability improvements dramatic, and says the software also reduces total cost of ownership (TCO) through industry-standard components and pre-built, pre-tested configurations optimized for warehousing.
HP Neoview Advantage and last year's Exadata product (produced in partnership with Oracle) seem to be aimed at different segments. Currently, HP Neoview Advantage is a "very high end database," whereas Exadata is designed for "medium to large enterprises" and does not scale to the Neoview level, said Deb Nelson, senior vice president of Marketing, HP Enterprise Business.
A converged infrastructure
Next up is the HP Converged Infrastructure architecture. As HP describes it, the architecture adjusts to meet changing business needs and specifically targets what HP calls "IT sprawl," which it points to as the key culprit in diverting technology spending into maintenance that could otherwise go to innovation.
HP touts key benefits of this new architecture: first, the ability to deploy application environments on the fly through shared service management, followed closely by lower network costs and less complexity. The new architecture is optimized through virtual resource pools, and it improves energy efficiency across the data center by tapping into data center smart grid technology.
Finally, HP is offering Converged Infrastructure Consulting Services that aim to help customers transition from isolated product-centric technologies to a more flexible converged infrastructure. The new services leverage HP’s experience in shared services, cloud computing, and data center transformation projects to let customers design, test and implement scalable infrastructures.
Overall, typical savings of 30 percent in total costs can be achieved by implementing Data Center Smart Grid technologies and solutions, said HP.
With these moves to converged infrastructure, HP is filling out where others are newly treading. Cisco and EMC this week announced packaging partnerships that seek to deliver similar convergence benefits to the market.
"It's about experience, not an experiment," said Nelson.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.
Tuesday, November 3, 2009
Aster Data architects application logic with data for speeded-up analytics processing en masse
Aster Data, which provides massively parallel processing (MPP) data management, has tackled the data-locality problem head-on with this week's announcement of Aster Data Version 4.0 (along with Aster nCluster System 4.0), a massively parallel application-data server that allows companies to embed applications inside an MPP data warehouse. This is designed to speed the processing of terabytes to petabytes of data.
The latest offering from the San Carlos, Calif., company fully parallelizes both data and a wide variety of analytics applications in one system. This provides faster analysis for such data-heavy applications as real-time fraud detection, customer behavior modeling, merchandising optimization, affinity marketing, trending and simulations, trading surveillance, and customer calling patterns.
Data and applications reside in the same system but remain independent of one another; each executes as a "first-class citizen" with its respective data and application management services.
Resource sharing
The Aster Data Application Server is responsible for managing and coordinating activities and resource sharing in the cluster. It also acts as a host for the application processing and data inside the cluster, and in that host role it manages incremental scaling, fault tolerance, and heterogeneous hardware for application processing.
Aster Data Version 4.0 provides application portability, which allows companies to take their existing Java, C, C++, C#, .NET, Perl and Python applications, MapReduce-enable them and push them down into the data.
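To illustrate the pattern -- and only the pattern; the partitions and function names below are invented for this sketch, not Aster's SQL/MapReduce API -- here is roughly what "pushing logic down into the data" means. The application function runs in parallel where each data partition lives, and only small partial results travel back to be merged:

```python
# Illustrative sketch of "pushing application logic down into the data":
# the same aggregation function runs in parallel against each data partition,
# and only the small per-partition results travel back to be merged.
# This mimics the MapReduce pattern; it is not Aster Data's actual API.

from collections import Counter
from multiprocessing import Pool

# Pretend each list is a partition living on a separate worker node.
PARTITIONS = [
    [("alice", 120.00), ("bob", 35.50), ("alice", 9.99)],
    [("carol", 500.00), ("bob", 12.00)],
    [("alice", 42.00), ("carol", 7.77)],
]

def map_partition(rows):
    """Runs next to the data: aggregate spend per customer locally."""
    local = Counter()
    for customer, amount in rows:
        local[customer] += amount
    return local

def reduce_results(partials):
    """Runs once: merge the small per-partition aggregates."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    with Pool(processes=len(PARTITIONS)) as pool:
        partials = pool.map(map_partition, PARTITIONS)
    # Downstream analytics (fraud scoring, behavior modeling) would
    # consume these merged totals rather than the raw rows.
    print(reduce_results(partials))
```

The win is in what doesn't move: at terabyte-to-petabyte scale, shipping a few kilobytes of partial aggregates beats shipping the raw rows to a separate application tier.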
Dynamic Workload Management (WLM) helps support hundreds of concurrent mixed workloads that span interactive and batch data queries, as well as application execution. It includes granular, rule-based prioritization of workloads and dynamic allocation and re-allocation of resources.
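As a rough sketch of what rule-based workload prioritization means in practice -- the rules and queue mechanics here are illustrative assumptions, not Aster's WLM internals -- consider:

```python
# Illustrative rule-based workload prioritization -- not Aster's WLM internals.
# Rules map workload attributes to a priority; a priority queue dispatches
# the highest-priority work first, and the rule list can change at runtime.

import heapq
import itertools

# Ordered rules: first match wins. (predicate, priority; lower runs sooner)
RULES = [
    (lambda w: w["type"] == "interactive", 0),      # dashboards, ad hoc queries
    (lambda w: w["user"] == "fraud-detection", 1),  # latency-sensitive app
    (lambda w: w["type"] == "batch", 5),            # overnight ETL and reports
]
DEFAULT_PRIORITY = 9

def prioritize(workload: dict) -> int:
    for predicate, priority in RULES:
        if predicate(workload):
            return priority
    return DEFAULT_PRIORITY

tie_breaker = itertools.count()  # keeps heap ordering stable for equal priorities
queue = []

def submit(workload: dict) -> None:
    heapq.heappush(queue, (prioritize(workload), next(tie_breaker), workload))

submit({"type": "batch", "user": "etl", "sql": "INSERT ..."})
submit({"type": "interactive", "user": "analyst7", "sql": "SELECT ..."})
submit({"type": "batch", "user": "fraud-detection", "sql": "SELECT ..."})

while queue:
    priority, _, workload = heapq.heappop(queue)
    print(f"dispatch p{priority}: {workload['user']} ({workload['type']})")
```

A real workload manager also throttles and re-allocates resources mid-flight; the sketch only shows the admission-ordering half of the job.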
Other features include:
- Trickle feeds for granular data loading and interactive queries with millisecond response times
- New online partition splitting capabilities to allow infinite, cost-effective scaling (see the sketch after this list)
- Dual-stage query optimizer, which ensures peak performance across hundreds to thousands of CPU cores
- Integrations with leading business intelligence (BI) tools and Hadoop.
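Of those features, online partition splitting deserves the promised sketch. The toy bookkeeping below splits a hash-range partition at its midpoint and hands the upper half to a new node; everything here -- the hash space, the routing tables -- is an illustrative assumption, not Aster's implementation:

```python
# Toy illustration of splitting a hash-range partition in two -- the kind of
# bookkeeping behind "online partition splitting." Not Aster's implementation.

import bisect

# Partitions own contiguous hash ranges; boundaries[i] is the exclusive upper
# bound of partition i's range, and owners[i] is the node hosting that range.
HASH_SPACE = 2**32
boundaries = [HASH_SPACE // 2, HASH_SPACE]
owners = ["node-A", "node-B"]

def lookup(key: str) -> str:
    """Route a key to the node owning its hash range."""
    h = hash(key) % HASH_SPACE
    return owners[bisect.bisect_right(boundaries, h)]

def split(index: int, new_node: str) -> None:
    """Split partition `index` at its midpoint; the upper half moves to new_node."""
    lower = boundaries[index - 1] if index > 0 else 0
    mid = (lower + boundaries[index]) // 2
    boundaries.insert(index, mid)       # new boundary between the two halves
    owners.insert(index + 1, new_node)  # upper half gets the new owner
    # A real system would now copy the upper half's rows to new_node while
    # still serving reads, then atomically switch routing -- the "online" part.

print(lookup("customer:42"))
split(0, "node-C")  # node-A's range grew hot; give half of it to node-C
print(lookup("customer:42"))
```

Because only routing-table entries and the migrated half move, capacity can be added partition by partition instead of re-hashing the whole warehouse.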
Many of the core users of high-end analytics are also moving on, architecture-wise. The systems designed five or more years ago will not meet the needs of five -- or even a few -- years from now.
What's really cool about Aster Data's approach is that existing analytics apps -- and the languages and query semantics most familiar to users -- carry over to the new systems and architectures.
I suppose we should also expect more of these analytics engines to become available as services, aka cloud services. That would allow joins of more data sets, and then the massive analytics applications could open up even more BI cans of worms.
Survey: Virtualization and physical infrastructures need to be managed in tandem
Yet beneath the use of the newer infrastructure approaches lies a budding challenge. A recent Taneja Group survey of senior IT managers working on test/dev infrastructures at North American firms found that 72 percent of respondents said virtualization on its own doesn't address their most important test/dev infrastructure challenges. Some 55 percent rate managing both virtual and physical resources as having a high or medium impact on their success. The market is clearly looking for ways to bridge this gap.
Sharing physical and virtual infrastructures
Despite the confusion in the market about the economics of the various flavors of cloud computing, Dave Bartoletti, a senior analyst and consultant at Taneja Group, says one thing is clear: Enterprises are comfortable with, and actively sharing, both physical and virtual infrastructures internally.
“This survey reaffirms that shared infrastructure is common in test/dev environments and also reveals it’s increasingly being deployed for production workloads,” Bartoletti says. “Virtualization is seen as a key enabling technology. But on its own it does not address the most important operational and management challenges in a shared infrastructure.”
Noteworthy is the fact that 92 percent of test/dev operations are using shared infrastructures, and companies are making significant investments in infrastructure-sharing initiatives to address the operational and budgetary challenges. Half the survey respondents are funding projects in 2009, and 66 percent will have funded a project by the end of 2010.
The survey reveals most firms are turning to private cloud infrastructures to support test/dev projects, and that shared infrastructures are beginning to bridge the gap between pre-production and production silos. A full 30 percent are sharing resource pools between both test/dev and production applications. This indicates a rising comfort level with sharing infrastructure within IT departments.
Virtualization’s cost and control issues
Although 89 percent of respondents use virtualization for test/dev, more than half have virtualized less than 25 percent of their servers. That's because virtualization adds several layers of control and cost issues that need to be addressed by sharing, process, workflow, and other management capabilities in order to fully integrate both virtual and physical infrastructures and get the most from them.
“Test/dev environments are one of the most logical places for organizations to begin implementing private clouds and prove the benefits of a more elastic, self-service, pay-per-use service delivery model,” says Martin Harris, director of Product Management at Platform Computing. “We’ve certainly seen this trend among our own customers and have found that additional management tools enabling private clouds are required to effectively improve business service levels and address cost cutting initiatives.” [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]
Despite the heavy internal investments, however, 82 percent of respondents are not using hosted environments outside their own firewalls. The top barriers to adoption: Lack of control and immature technology.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.
You'll be far better off in a future without enterprise software
By Ronald Schmelzer
The conversation about the role and future of enterprise software is a continuous undercurrent in the service-oriented architecture (SOA) discussion. Indeed, ZapThink's been talking about the future of enterprise software in one way or another for years.
So, why bother bringing up this topic again, at this juncture? Has anything changed in the marketplace? Can we learn something new about where enterprise software is heading? The answer is decidedly "yes" to the latter two questions. And this might be the right time to seriously consider acting on the very things we’ve been talking about for a while.
The first major factor is significant consolidation in the marketplace for enterprise software. While a decade or so ago there were a few dozen large and established providers of different sorts of enterprise software packages, there are now just a handful of large providers, with a smattering more for industry-specific niches.
We can thank aggressive M&A activity combined with downward IT spending pressure for this reality. As a result of this consolidation, many large enterprise software packages (such as enterprise resource planning (ERP), customer relationship management (CRM), and supply chain management (SCM) offerings) have been eliminated, are in the process of being phased out, or are getting merged (or “fused”) with other solutions.
Many companies rationalized spending millions of dollars on enterprise software applications because the costs could be amortized over a decade or more of usage, and they could claim that these enterprise software applications would be cheaper, in the long run, than building and managing their existing custom code. But we've now had a long enough track record to see that the combination of mass consolidation, the need for continuous spending, and inflexibility is causing many companies to reconsider that rationalization.
Furthermore, by virtue of their weight, significance in the enterprise environment, and astounding complexity, enterprise software solutions are much slower to adopt and adapt to new technologies that continuously change the face of IT. We refer to this as the “enterprise digital divide.” You get one IT user experience when you are at home and use the Web, personal computing, and mobile devices and applications, and a profoundly worse experience when you are at work. It's as if the applications you use at work are a full decade behind the innovations that are now commonplace in the consumer environment. We can thank expensive, cumbersome, and tightly coupled customization, integration, and development for this lack of innovation in enterprise software.
In addition, no company can purchase and implement an enterprise software solution “out of the box.” Not only does a company need to spend significant money customizing and integrating its enterprise software solutions, but it often spends significant amounts on custom applications that tie into and depend on that software.
What might seem to be discrete enterprise software applications are really tangled masses of single-vendor functionality, tightly-coupled customizations and integrations, and custom code tied into this motley mess. In fact, when we ask people to describe their enterprise architecture (EA), they often point to the gnarly mess of enterprise software they purchased, customized, and maintain. That’s not EA. That’s an ugly baby only a mother could love.
Yet, companies constantly share with us their complete dependence on a handful of applications for their daily operation. Imagine what would happen at any large business if you were to shut down their single-vendor ERP, CRM, or SCM solutions. Business would grind to a halt.
While some would insist on the necessity of single-vendor, commercial enterprise software solutions as a result, we would instead assert how remarkably insane it is for companies to have such a single point of failure. Dependence on a single product and a single vendor for the entirety of a company's operations is absolutely ludicrous in an IT environment where there's no technological reason to have such dependencies. The more you depend on one thing for your success, the less you are able to control your future. Innovation itself hangs in the balance when a company becomes so dependent on another company's ability to innovate. And given the relentless pace of innovation, we see huge warning signs.
Services, clouds, and mashups: Why buy enterprise software?
In previous ZapFlashes, we talked about how the way companies conceive of, build, and manage applications will change under three combined forces: the emergence of services at a range of disparate levels; the location- and platform-independent, on-demand, and variable provisioning enabled by clouds; and rich technologies that facilitate simple and rapid service composition.
Instead of an application as something that’s bought, customized, and integrated, the application itself is the instantaneous snapshot of how the various services are composed together to meet user needs. From this perspective, enterprise software is not what you buy, but what you do with what you have.
One outcome of this perspective on enterprise software is that companies can shift their spending from enterprise software licenses and maintenance (which eats up a significant chunk of IT budgets) to service development, consumption, and composition.
This is not just a philosophical difference. This is a real difference. While it is certainly true that services expose existing capabilities, and therefore you still need those existing capabilities when you build services, moving to SOA means that you are rewarded for exposing functionality you already have.
Whereas traditional enterprise software applications penalize legacy because of the inherent cost of integrating with it, moving to SOA inherently rewards legacy because you don't need to build twice what you already have. In this vein, if you already have what you need because you bought it from a vendor, keep it -- but don't spend more money on that same functionality. Rather, spend money exposing and consuming it to meet new needs. This is the purview of good enterprise architecture, not good enterprise software.
The resultant combination of legacy service exposure, third-party service consumption, and the cloud (x-as-a-service) has motivated the thinking that if you don’t already have a single-vendor enterprise software suite, you probably don’t need one.
We've had first-hand experience with new companies that have started and grown operations to multiple millions of dollars without spending a penny on enterprise software. Likewise, we've seen billion-dollar companies dump existing enterprise software investments or start divisions and operations in new countries without extending their existing enterprise software licenses. When you ask these people to show you their enterprise software, they'll simply point at their collection of services, cloud-based applications, and composition infrastructure.
Some might insist that cloud-based applications and so-called software-as-a-service (SaaS) applications are simply monolithic enterprise software applications deployed using someone else’s infrastructure. While that might have been the case for the application service provider (ASP) and SaaS applications of the past, that is not the case anymore. Whole ecosystems of loosely-coupled service offerings have evolved in the past decade to value-add these environments, which look more like catalogs of service capabilities and less like monolithic applications.
Want to build a website and capture lead data? No problem -- just get the right service from Salesforce.com or your provider of choice and compose it using web services, REST, or your standards-based approach of choice. And you won't incur thousands or millions of dollars to do it.
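As a simplified sketch of that composition, capturing a lead can be as small as one HTTP POST to whatever hosted CRM service you've composed in. The endpoint URL, field names, and token below are hypothetical placeholders, not Salesforce.com's actual API -- substitute your provider's documented interface:

```python
# Minimal sketch of composing a hosted lead-capture service via REST.
# The endpoint and field names are hypothetical placeholders, not any
# vendor's actual API.

import json
import urllib.request

CRM_LEAD_ENDPOINT = "https://crm.example.com/api/leads"  # hypothetical
API_TOKEN = "your-api-token-here"                        # hypothetical

def capture_lead(name: str, email: str, source: str) -> int:
    """POST a lead to the hosted CRM; return the HTTP status code."""
    payload = json.dumps({"name": name, "email": email, "source": source})
    request = urllib.request.Request(
        CRM_LEAD_ENDPOINT,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    print(capture_lead("Ada Lovelace", "ada@example.com", "webinar-signup"))
```

The application here is nothing more than the composition: the website owns the form, the hosted CRM owns the data, and a standards-based call stitches them together.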
Open source vs. commercial vs. build your own
Another trend pointing to the stalling of enterprise software growth is the emergence of open source alternatives. Companies now are flocking to solutions such as WebERP, SugarCRM Community Edition, and other no-license-fee and no-maintenance-fee solutions that provide 80 percent of the required functionality of commercial suites.
While some might point at the cost of support for these offerings, we point out that support costs differ from license and maintenance costs by a wide factor. At the very least, you know what you're paying for. It's hard to justify spending millions of dollars in license fees when you're using 10 percent or less of a product's capabilities.
Enhancing this open source value proposition, others are building capabilities on top of those solutions and giving those away as well. The very nature of open source enables the creation of capabilities that add further value to a product suite. A given open source solution eventually reaches a tipping point where the volume of enhancements far outweighs what any commercial vendor can offer. Simply put, when a community supports an open source effort, the result can out-innovate any commercial solution.
Beyond open source, commercial, and SaaS-cum-cloud offerings, companies have a credible choice in building their own enterprise software applications. There are now plenty of free, cheap, or low-cost pieces and parts that companies can assemble into not only workable but scalable offerings that compete with many commercial products. In much the same way that companies leveraged Microsoft's Visual Basic to build applications from the thousands of free or cheap widgets and controls built by legions of developers, so too are we seeing a movement to free or cheap service widgets that can enable remarkably complex and robust applications.
The future of commercial enterprise software applications
It is not clear where commercial enterprise software applications go from here. Surely, we don’t see companies tearing out their entrenched solutions any time soon, but likewise, we don’t see much reason for expansion in enterprise software sales either.
In some ways, enterprise software has become every bit the legacy it sought to replace -- the mainframe applications that still exist in abundance in the enterprise. Smart enterprise software vendors realize that they have to get out of the application business altogether and focus on selling composable service widgets. These firms, however, don't want to innovate their way out of business. As such, they don't want to just provide the trains to get you from place to place; they want to own the tracks as well.
In many ways, this idea of enterprise software-as-a-platform is really just a shell game. Instead of spending millions on a specific application, you're instead spending millions on an infrastructure that comes with some pre-configured widgets. The question is: Is the proprietary runtime infrastructure you are getting with those widgets worth the cost? Have you lost some measure of loose coupling in exchange for a “single throat to choke”?
Much of the enterprise software market is heading on a direct collision course with middleware vendors who never wanted to enter the application market. As enterprise software vendors start seeing their runtime platform as the defensible position, they will increasingly conflict with EA strategies that seek to remove single-vendor dependence.
We see this as the area of greatest tension in the next few years. Do you want to be in control of your infrastructure and have choice, or do you want to resign your infrastructure to the control of a single vendor, who might be one merger or stumble away from non-existence or irrelevance?
The ZapThink take
We hope to use this ZapFlash to call out the ridiculousness of multi-million dollar “applications” that cost millions more to customize to do a fraction of what you need. In an era of continued financial pressure, the last thing companies should do is invest more in technology conceived of in the 1970s, matured in the 1990s, and incrementally made worse since then.
The reliance on single-vendor mammoth enterprise software packages is not helping, but rather hurting, the movement to loosely coupled, agile, composition-centric, heterogeneous SOA. Now is the time for companies to pull up stakes and reconsider their huge enterprise software investments in favor of the sort of real enterprise architecture that cares little about buying things en masse and customizing those solutions, and instead focuses on building, composing, and reusing what you need iteratively to respond to continuous change.
As if to prove a point, SAP stock recently slid almost 10 percent on missed earnings. Some may blame the overall state of the economy, but we point to the writing on the wall: All the enterprise software that could be sold has been sold, and the reasons for buying or implementing new licenses are few and far between. Invest in enterprise architecture over enterprise software, services over customizations, and clouds over costly and unpredictable infrastructure -- and you'll be better off.
This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.
SOA and EA Training, Certification, and Networking Events
In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.
Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.