Wednesday, July 8, 2009
Don’t use an ESB unless you absolutely, positively need one, Mule CTO warns
It would be heresy among marketers at many vendors, but MuleSource CTO Ross Mason is actively discouraging architects and developers from using an enterprise service bus (ESB), including his company’s open-source version, unless they are sure they really need one.
Misusing an ESB leads to overly complex architectures that are harder to remedy than a straightforward Web services-based architecture that simply omits the ESB in early versions of an enterprise application, Mason argued in a phone conversation about his blog.
“There are two main mistakes I see most of the time,” he told BriefingsDirect. “There’s not enough of an integration requirement or there’s not enough use of the ESB features to warrant it.”
You don’t need an ESB if your project involves two applications, or if you are only using one type of protocol, he explains.
“If I’m only using HTTP or Web services, I’m not going to get a lot of value from an ESB as opposed to using a simpler Web services framework,” Mason said. “Web services frameworks are very good at handling HTTP and SOAP. By putting in an ESB, you’re adding an extra layer of complexity that’s not required for that job.”
Architects and developers using an ESB in these cases are probably engaging in "resume-driven development (RDD)." If anybody asks whether you’ve deployed an ESB in an application you’ve worked on, you can say yes. And then you can hope the hiring manager doesn’t ask whether the application really required the technology.
Another mistake Mason cites is adding an ESB in the belief that you are future-proofing an application that doesn’t need it now but might someday.
“You’ll Never Need It (YNNI), that acronym has been around awhile for a reason,” Mason says. “That’s another killer problem. If you select an ESB because you think you might need it, you definitely don’t have an architecture that lays out how you’re going to use an ESB because you haven’t given it that much thought. That’s a red flag. You could be bringing in technology just for the sake of it.”
Adding his two cents to the “Is service-oriented architecture (SOA) dead?” debate, the MuleSource CTO says such over-architecting is one of the things contributing to the SOA problems that have given the acronym a bad name. “Architecture is hard enough without adding unnecessary complexity,” he said. “You need to keep it as simple as possible.”
Ironically, adding an ESB because you might need it someday can lead to future problems that could be avoided by leaving it out to begin with and adding it later, Mason said.
“The price of architecting today and re-architecting later is going to be a lot less than architecting badly the first time,” he explained. “If you have a stable architecture, you can augment it later with an ESB, which is going to be easier than trying to plug in an ESB where it’s not going to be needed at that time.”
While the conversation focused on the pitfalls of using an ESB where you don’t need one, the MuleSource CTO naturally believes there are architectures where the ESB makes sense. To begin with, you need to be working on a project where you have three or more applications that need to talk to each other, he explained.
“If you’ve got three applications that have to talk to each other, you’ve actually got six integration points, one for each service, and then it goes up exponentially,” Mason said.
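A quick note on the arithmetic behind that quote: with point-to-point integration, every pair of applications needs its own connection (two, if each direction is a separate adapter), so the number of links grows quadratically rather than linearly:

$$\text{links}(n) = n(n-1): \qquad 3 \text{ apps} \rightarrow 6,\quad 4 \rightarrow 12,\quad 6 \rightarrow 30.$$

Strictly speaking that is quadratic growth rather than exponential, but the practical point stands: the integration burden climbs much faster than the number of applications, which is where a hub such as an ESB starts to pay for itself.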
The ESB technology is also needed where the protocols go beyond HTTP. “You should consider an ESB when you start using Java Message Service (JMS), representational state transfer (REST), or any of the other protocols out there,” Mason said. “When communications start getting more complicated is when an ESB shows its true value.”
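To make the "more protocols" point concrete, here is a minimal, hand-rolled sketch of the kind of protocol bridging an ESB takes off your hands: reading an order message from a JMS queue and re-posting it to an HTTP service. This is not Mule configuration; the ActiveMQ broker URL, queue name, and inventory endpoint are all hypothetical, and real mediation would also need error handling, retries, and payload transformation.

```java
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsToHttpBridge {
    public static void main(String[] args) throws Exception {
        // Consume one order document from a JMS queue (hypothetical ActiveMQ broker).
        Connection jms = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        jms.start();
        Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("orders.inbound");
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage msg = (TextMessage) consumer.receive(5000);

        if (msg != null) {
            // Re-post the same payload to a (hypothetical) HTTP inventory service.
            HttpURLConnection http = (HttpURLConnection)
                    new URL("http://inventory.example.com/orders").openConnection();
            http.setRequestMethod("POST");
            http.setDoOutput(true);
            http.setRequestProperty("Content-Type", "text/xml");
            OutputStream out = http.getOutputStream();
            out.write(msg.getText().getBytes("UTF-8"));
            out.close();
            System.out.println("Inventory service replied: " + http.getResponseCode());
        }
        jms.close();
    }
}
```

With only HTTP in play, a Web services framework already does this job; it is when several such bridges, transformations, and routing rules pile up that a dedicated integration layer starts to earn its keep.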
BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.
Monday, July 6, 2009
Consolidation, modernization, and virtualization: A triple-play for long-term enterprise IT cost reduction
Read a full transcript of the discussion.
As the global economic downturn accelerates the need to reduce total technology costs, IT consolidation, application modernization, and server virtualization play mutually supporting roles -- both alone and in combination.
Taken separately, these initiatives offer greater efficiency and reduced IT energy demands. Combined, they produce much greater cost control by slashing labor and maintenance costs, driving far better server utilization rates, and removing unneeded or unused applications and data.
Done in coordination, these initiatives do more than cut costs; they improve how IT delivers services to the business. A better IT infrastructure enables market agility, supports flexible business processes, and places the enterprise architect in a position to better leverage flexible sourcing options such as cloud computing and SaaS.
To dig into the relationship between a modern and consolidated approach to IT data centers and total cost, I recently interviewed John Bennett, worldwide solution manager for Data Center Transformation Solutions at Hewlett-Packard (HP).
Here are some excerpts:
Bennett: It’s easy to say, "reduce costs." It’s very difficult to understand what types of costs I can reduce and what kind of savings I get from them.
In my mind, the themes of consolidation, which people have been doing forever; modernization, very consciously making decisions to replace existing infrastructure with newer infrastructure for gains other than performance; and virtualization, which has a lot of promise in terms of driving cost out of the organization, can also increase flexibility and agility. ... [These allow companies] to grow quickly, to respond to competitive opportunities or threats very quickly, and offer the ability for IT to enable the business to be more aggressive, rather than becoming a limiting factor in the roll-out of new products or services.
By combining these initiatives, and taking an integrated approach to them, ... you can use them to address a broad set of issues, and realize aspects of a data center transformation by approaching these things in an orderly and planned way.
When you move to a shared infrastructure environment, the value of that environment is enhanced the more you have standardized it. That makes it much easier not only to manage the environment with a smaller number of sys admins, but also gives you a much greater opportunity to automate the processes and procedures.
... I no longer have the infrastructure and the assets tied to specific business services and applications. If I have unexpected growth, I can support it by using resources that are not being used quite as much in the environment. It’s like having a reserve line of troops that you can throw into the fray.
If you have an opportunity and you can deploy servers and assets in a matter of hours instead of days or months, IT becomes an enabler for the business to be more responsive. You can respond to competitive threats, respond to competitive opportunities, and roll out new business services much more quickly, because the processes are much quicker and much more efficient. Now, IT becomes a partner in helping the business take advantage of opportunities, rather than delaying the availability of new products and services.
We’ve seen some other issues pop up in the last several years as well. One of them is an increasing focus on green, which means a business perspective on being green as an organization. For many IT organizations, it means really looking to reduce energy consumption and energy-related costs.
In some of the generations of servers that we’ve released, we see 15 to 25 percent improvements from a cost perspective and an energy consumption perspective, just based on modernizing the infrastructure. So, there are cost savings that can be had by replacing older devices with newer ones.
We’ve also seen in many organizations, as they move to a bladed infrastructure and move to denser environments, that the data center’s capacity and energy constraint -- the amount of energy available to the data center -- is also an inhibiting factor. It’s one of the reasons that we really advise customers to take a look at doing consolidation, modernization, and virtualization together.
[These efficiencies] have been enhanced by a lot of the improvements in the IT products themselves. They are now instrumented for increasing manageability and automation. The products are integrated to provide management support not just for availability and for performance, but also for energy. They're instrumented to support the automation of the environment, including the ability to turn off servers that you don’t know or care about. They’re further enhanced by the enhancements in virtualization.
With virtualization ... it becomes a shared environment, and your shared environment is just more productive and more flexible if it’s one shared environment instead of 3, 4, 5 or 10 shared environments. That increases the density and it goes back to these other factors that we talked about. That’s clearly one of the more recent trends of the last few years in many data centers.
A lot of people are doing virtualization. What we’re doing as a company is focusing on the management and the automation of that environment, because we see virtualization really stressing data center and infrastructure management environments pretty substantively. In many cases, it's impacting governance of the data center. ... So, you really have full control, insight, and governance over everything taking place in the data center.
Our recommendations to many customers would be, first of all, if you identify assets that aren’t being used at all, just get rid of them. The cost savings are immediate. ... Identify all of the assets in the environment, the applications, software they're running, and the interdependencies between them. In effect, you build up a map of the infrastructure and know what everything is doing. You can very quickly see if there are servers, for example, not doing anything.
If I've got 10 servers doing this particular application and I can have that support the environment by using 3 of those servers, get rid of 7. I can modernize the environment, so that if I had 10 servers doing this work before, and consolidation gives me the opportunity to go to only 6 or 7, then if I modernize, I might be able to reduce it to 2 or 3.
On top of that, I can explore virtualization. Typically, in environments not using virtualization, server utilization rates, especially for industry standard servers, are under 10 percent. That can be driven up to 70 or 80 percent or even higher by virtualizing the workloads. Now, you can go from 10 to 3 to perhaps just 1 server doing the work. Ten to 3 to 1 is an example. In many environments, you may have hundreds of servers supporting web-based applications or email. The number of servers that can be reduced out from that can be pretty phenomenal.
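A back-of-the-envelope way to see where figures like 10-to-3-to-1 come from, assuming the aggregate workload stays constant and ignoring headroom for failover and peaks:

$$N_{\text{after}} \approx \left\lceil N_{\text{before}} \times \frac{U_{\text{before}}}{U_{\text{target}}} \right\rceil = \left\lceil 10 \times \frac{0.10}{0.75} \right\rceil = 2.$$

Modernizing onto faster servers effectively shrinks the "before" count further, which is how the total can approach a single machine; in practice you would keep extra capacity for resilience and peak load.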
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.
Friday, July 3, 2009
Oracle Fusion 11g Middleware: Executed according to plan
This week's announcement by Oracle of the rollout of Fusion Middleware 11g is a bit anticlimactic, in that the details are pretty much according to the plan that came out exactly a year ago today. Although the Fusion stack comprises multiple parts, internally developed and acquired, the highlight is that it represents the fruition of the BEA acquisition. Oracle had Fusion middleware prior to acquiring BEA, but there’s little question that BEA was the main event. WebLogic filled the donut hole in the middle of the Fusion stack with a server that was far more popular than Oracle Containers for Java EE (OC4J). Singlehandedly, BEA catapulted Oracle Fusion into becoming a major player in middleware.
Oracle largely stuck to the previously announced roadmap for convergence of the BEA products, with the only major surprises being in the details. As planned, Oracle incorporated WebLogic as the strategic Java platform and JDeveloper as the primary development environment, kept dual business process modeling paths, and drove master data management, data integration, and identity management largely with Oracle offerings plus some added BEA content.
Although the Oracle Fusion product portfolio came from far more diverse sources than BEA (as Oracle was obviously a more aggressive acquirer), the result is far more unified than anything that BEA ever fielded. Before getting swallowed by Oracle, BEA had multiple portal, development, and integration technologies lacking a common framework. By comparison, Oracle has emphasized a common framework for mashing the pieces together.
That’s rooted in Oracle’s heritage of developing native tools and utilities, dating back to the Oracle Forms 4GL and the various utilities for managing the Oracle database; the tools were sufficiently native that they typically were confined to Oracle shops. That approach to native tooling has since morphed into a broader framework that is optimized for Oracle platforms. It’s an outgrowth of the mentality at Oracle that good is the enemy of best, and that what Oracle is building is a platform rather than discrete products.
That approach also makes Oracle’s tagline of Fusion being standards-based more nuanced. Yes, the Fusion products are designed to support Oracle’s "hot pluggable" best-of-breed strategy of working with other vendors' products, but for designing and managing the Fusion environment, Oracle has you surrounded with native tooling if you want it. Call it a subtle pull for encouraging customers to add more Oracle content.
That explains how, six or seven years ago, Oracle began developing what has become the Application Development Framework (ADF) as its own model-view-controller alternative to the Apache Struts framework that it previously used in early versions of the JDeveloper Java tool. That approach has carried through to this day with JDeveloper, which provides a higher-level, declarative approach to development that would not fit with traditional Eclipse IDEs. And that approach applies to Oracle Enterprise Manager (EM), which does not necessarily compete with BMC, CA, HP, or IBM Tivoli in application management, but provides the last mile of declarative deployment, monitoring, and performance-testing capabilities for the Fusion platform.
Bringing together the Oracle and BEA technologies resulted in some synergies where the value was greater than the sum of the parts. A good example is the pairing of BEA’s quasi-real-time JRockit JVM with the Oracle Coherence data grid, a distributed caching layer for Java objects. In essence, JRockit juices up the performance of Coherence, which is used whenever you need higher performance with frequently used objects; conversely, Coherence provides a high-end, enterprise-clustered platform that makes an excellent use case for JRockit.
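For readers who have not seen it, the Coherence programming model is deliberately simple: a named, partitioned cache that behaves like a java.util.Map spread across a cluster. A minimal sketch using the Coherence 3.x API as it stood at the time follows; the cache name and values are made up, and running it on JRockit is a JVM choice, not a code change.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceSketch {
    public static void main(String[] args) {
        // Joins (or starts) the cluster and returns a handle to a named, distributed cache.
        NamedCache quotes = CacheFactory.getCache("hot-quotes");

        // NamedCache extends java.util.Map, so frequently used objects are cached
        // with ordinary put/get calls and served from memory across the cluster.
        quotes.put("ORCL", Double.valueOf(21.47));
        Double last = (Double) quotes.get("ORCL");
        System.out.println("Last ORCL quote from the grid: " + last);

        CacheFactory.shutdown();
    }
}
```

Because every get and put hits JVM-managed memory, a lower-pause JVM directly lifts the grid's throughput and latency profile, which is the JRockit angle.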
As noted, while the broad outlines of Fusion 11g are hardly any mystery, there are some interesting departures that occurred along the way. One of the more notable was in BPM where Oracle added another option to its runtime strategy for Oracle BPM Suite.
Originally, Oracle BPEL Process Manager was to be the runtime, requiring BPM users to map their process models to BPEL, essentially an XML-based sequential programming language that lacks process semantics. A year later, the OMG is putting the finishing touches on BPMN 2.0, a process modeling notation that has added support for executable models. And so with the release of 11g, Oracle BPM Suite users will gain the option of bypassing BPEL as long as their processes are not that transactionally complex.
Make no mistake about it, the Fusion 11g migration was a huge reengineering project, involving nearly 2,000 development projects and over 5,000 product enhancements. So it’s a shame that Oracle did not take the opportunity to re-architect its middleware stack by migrating it to a microkernel architecture, with OSGi being the most prominent example. Oracle WebLogic Server is OSGi-based, but the BPM/SOA stack is not. Oracle remains mum as to whether it plans to adopt a microkernel architecture throughout the rest of the Fusion stack.
So why are we all hot and bothered about this? OSGi, or the principle of dynamic, modular microkernels in general, offers the potential to vastly reduce Java’s footprint through deployment of highly compact servers that contain only the Java modules necessary to run. The good news is that this is potentially a highly economic, energy-efficient, space-efficient green strategy. The bad news is that it’s not enough for the vendor to adopt a microkernel; the user has to learn how to selectively and dynamically deploy the modules.
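As a reminder of what that modularity looks like at the code level, an OSGi bundle is just a JAR with manifest metadata plus an optional activator that the container calls when the module is started or stopped, so a server can load only the modules a deployment actually needs. A minimal sketch against the standard org.osgi.framework API; the bundle and service names here are hypothetical.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Lifecycle hook for a hypothetical "pricing" bundle. The OSGi container invokes
// start()/stop() as the bundle is dynamically installed, started, or stopped,
// without restarting the rest of the server.
public class PricingActivator implements BundleActivator {

    public void start(BundleContext context) {
        // Typically you would register a service here, e.g.
        // context.registerService(PricingService.class.getName(), new PricingServiceImpl(), null);
        System.out.println("pricing bundle started");
    }

    public void stop(BundleContext context) {
        System.out.println("pricing bundle stopped");
    }
}
```

The bundle's MANIFEST.MF then declares which packages it imports and exports, which is what lets the runtime resolve and load only the modules a deployment actually requires.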
But OSGi seems to have lost its momentum of late. In our Ovum research last year, we argued that OSGi was going to become the de facto standard for Java platforms as IBM and SpringSource fully migrated their stacks, and as rivals provided at least tacit support. A year later, Oracle’s silence is deafening.
As we noted last week, Oracle’s pending acquisition of Sun adds some interesting dynamics to the plot, as Sun has continued to speak out of both sides of its mouth on the topic: supporting OSGi for its open-source GlassFish Java platform, while putting its weight behind Project Jigsaw, which aims to redefine Java modularity as JSR 294. Unfortunately, the announcement of Fusion 11g has not cleared up matters.
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.
Thursday, July 2, 2009
Aster targets mid-market with budget-conscious, massively parallel data warehousing appliance
Aster, Redwood City, Calif., this week rolled out the first-ever massively parallel (MPP) data warehousing appliance priced at the $50,000 mark. Finding opportunity in the global recession, Aster is aiming to fundamentally change the economics of data warehousing and business intelligence (BI) by providing a compute-rich appliance on a lower-cost architecture that, Aster says, is also cheaper to administer and operate.
That's a big promise and one that, if it pans out, may indeed ripple through the $20 billion-plus data warehousing industry, one of the few hot growth areas in IT. Only about 10 percent of data warehousing deployments are at the high end, leaving a potentially large mid-market for Aster and its supplier brethren to win over with value-oriented offerings.
Should Oracle Be Worried?
Should Netezza and Teradata be scrambling to roll out a lower-cost solution to compete with a scrappy Aster? Teradata has been seeing some wins lately -- the State of Ohio, Ruby Tuesday, Hunan Telecom and RealNetworks are some of its newest clients. Netezza has also picked up a few new clients, including WIND Telecom, Esselunga, and Telcel. Oracle, of course, is serving much of the Fortune 500. A recent Forrester report put Teradata, Oracle, IBM and Microsoft at the head of the market, with Netezza, Sybase and SAP noted for niche deployments.
Other warehouse solutions are also being driven into the market by such vendors as Greenplum. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts]. At the higher end of appliances, Oracle and HP teamed up last year on the Exadata appliance for Oracle warehouse workloads. [Disclosure: HP is a sponsor of BriefingsDirect podcasts]. If the Oracle buy of Sun goes through, we may see other appliance and warehouse packaging permutations from Oracle.
For now, Aster is coming out with its lower-cost competitive solution dubbed MapReduce Data Warehouse Appliance – Express Edition. Aster is seeking to level the playing field on the data warehousing entry front, and that message should resonate well with companies that need an entry-level solution that doesn’t compromise on power. Aster – and it won’t be alone -- clearly sees a sweet spot with companies that are value-conscious and growth-minded.
“The Aster MapReduce Data Warehouse Appliance changes the economics of MPP data warehousing appliances by enabling an entry point of $50K for the most compute-rich, analytically-expressive data warehouse appliance on the market,” says Mayank Bawa, CEO of Aster Data. “This contrasts directly with an entry price of $500K for appliances from Teradata, Netezza, and Oracle. With a huge number of data warehouses under one terabyte, this entry pricing now democratizes data warehousing and fast, rich analytics, and brings the power of data within the reach of departments and enterprises, big and small.”
The Big Data Trend
The “Big Data” trend is growing. Although most data warehouses are still under one terabyte, Aster is betting more companies are beginning to see the light on the need for a viable database platform to scale and provide high-speed analysis. MPP data warehouses are often regarded as the most scalable, best-performing, most available, and most flexible option in the data warehousing world. The problem is cost, along with the complexity and manpower needed to wring the value from such systems. Many organizations can’t afford a high-end MPP data warehouse or appliance, or can't find the staff to man them.
Data volumes and complexity continue to explode, and we can expect more as unstructured web data, mobile-device data and the need for better BI into dynamic markets continue to meld into a thorny data management problem. Appliances fit the bill well because the software can be tuned directly to the workload (and hardware platform), and can best exploit parallelism and MapReduce approaches.
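For readers new to the term, here is a toy, single-machine sketch of the map/reduce pattern these appliances exploit: each data partition is scanned independently (the map phase), and the partial results are merged (the reduce phase). Aster's nCluster expresses the same idea through SQL and MapReduce functions running across many nodes rather than hand-written Java, so treat this only as an illustration of the shape of the computation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MapReduceSketch {
    public static void main(String[] args) throws Exception {
        // Two "partitions" of clickstream rows, as they might be spread across worker nodes.
        List<List<String>> partitions = Arrays.asList(
                Arrays.asList("us,home", "us,cart", "de,home"),
                Arrays.asList("us,home", "fr,cart", "de,home"));

        ExecutorService workers = Executors.newFixedThreadPool(partitions.size());
        List<Future<Map<String, Integer>>> partials = new ArrayList<Future<Map<String, Integer>>>();

        // Map phase: each partition is scanned independently, producing partial counts by country.
        for (final List<String> partition : partitions) {
            partials.add(workers.submit(new Callable<Map<String, Integer>>() {
                public Map<String, Integer> call() {
                    Map<String, Integer> counts = new HashMap<String, Integer>();
                    for (String row : partition) {
                        String country = row.split(",")[0];
                        Integer n = counts.get(country);
                        counts.put(country, n == null ? 1 : n + 1);
                    }
                    return counts;
                }
            }));
        }

        // Reduce phase: merge the partial counts into one result set.
        Map<String, Integer> totals = new HashMap<String, Integer>();
        for (Future<Map<String, Integer>> partial : partials) {
            for (Map.Entry<String, Integer> e : partial.get().entrySet()) {
                Integer n = totals.get(e.getKey());
                totals.put(e.getKey(), n == null ? e.getValue() : n + e.getValue());
            }
        }
        workers.shutdown();
        System.out.println(totals); // e.g. {de=2, fr=1, us=3}
    }
}
```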
Throw another monkey wrench into the mix: I expect to see more “data warehouse as a service”-type entries, whereby the entry level moves to the cloud.
Remember batch outsourced processing? What’s the difference? Cloud-based warehousing also sets up the ability to mix and match data set joins in the cloud in novel BI extraction and analytics tag-teams. It’s not so far-fetched, and it could produce a whole new reason to get your data (or subsets or metadata instances) into the cloud.
This week, Aster is pushing the on-the-ground deployment envelope with the MapReduce Data Warehouse Appliance Express Edition on general warehouse productivity and applicability. Aster’s secret sauce is its approach to parallelism in the data warehouse, the company says. The way Aster goes at parallelism makes it possible for the data warehouse to run on commodity-grade hardware, albeit with the aforementioned appliance tuning.
The appliance pre-packages the Aster nCluster analytic database software on Dell hardware and gives companies the option to include MicroStrategy’s BI software for up to one terabyte of user data. That's an attractive bundle for the small- to mid-sized business. Aster promises significant improvement in analysis speeds by leveraging an MPP architecture -- even for smaller data warehouses.
Aster isn’t leaving large enterprises out of the cost-savings equation, of course. The company also launched Aster MapReduce Data Warehouse Appliance – Enterprise Edition in sizes ranging from one terabyte to one petabyte of data.
(BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.)
Wednesday, July 1, 2009
Oracle closes in on 'any'-ware with debut of middleware behemoth 11g suites family
Billed as a "complete, integrated, and hot-pluggable" set of middleware suites, the new software infrastructure offerings, which the Redwood Shores, Calif. computer giant previewed in November 2007, bolster functionality, integration, and business intelligence (BI) benefits across its vast product portfolio, including new capabilities for Oracle SOA Suite, WebLogic Suite, and WebCenter Suite, and a debut for Identity Management as a suite.
With the spoils of the BEA acquisition now fully baked into the mix -- and with anticipation for what the pending Sun Microsystems buy brings -- Oracle is well on its way to obviating the middleware moniker. Perhaps we should call it "anyware."
The glaring missing link now, however, is the cloud element of Oracle's destiny. With such a broad infrastructure, data lifecycle, and apps/services development portfolio -- not to mention deep hooks into Oracle's burgeoning business applications offerings -- the only outcome left to fulfill is the "any" in "anyware." That must include a fluid sourcing, hosting and business model future -- the nearly obvious Oracle Cloud.
Now that it's here, the 11g continental conglomeration must be the gateway for the enveloping 12c, as in "c" for cloud. You don't need to be an oracle to factor that clear and necessary path to the future.
Meanwhile, terrestrial Oracle also announced today that middleware remains the company's fastest-growing business, with 90,000 customers worldwide, including 29 of the Dow Jones top 30, 98 of the Fortune Global 100, and 10 of the top 10 companies in major industries.
Enhancements across the platform of platforms in the Fusion Middleware 11g include:
- SOA Suite, a unifying system of human and document-centric processes and an event-driven architecture (EDA) with a complete range of SOA capabilities from development to security and governance. Deployed on the Oracle application grid infrastructure, the SOA underpinnings are optimized for building and integrating services on private and public clouds.
- WebLogic Suite (including WebLogic Server) adds new features, including Fusion Middleware GridLink for Real Application Clusters and Fusion Middleware Enterprise Grid Messaging. Fusion Middleware ActiveCache also enables rapid scale-out to meet changing user demand and system load.
- WebCenter Suite provides a broad set of reusable, out-of-the-box WebCenter Services components that can be plugged into any type of portal – intranet, composite application, Web-based community – to enhance social networking and personal productivity.
- Composer, a declarative, browser-based tool, makes it easy for both end-users and developers to create, share, and personalize applications, portals and social sites.
- WebCenter Spaces, a new pre-built social networking solution, enables end-user driven, created and managed communities (Group Spaces and Personal Spaces) to increase productivity, communication, and efficiency.
- Identity Management delivers the first components of a fully integrated Identity Management suite and features deeper integration with other Fusion Middleware solutions, as well as new features such as Deployment Accelerators, Universal Federation Framework, and a modern unified user interface based on Oracle’s Application Development Framework (ADF) Faces.
One of the key take-aways from 11g is the infusion of BI and analytics across the portfolio. That will also be a key of any cloud-based offerings from Oracle. Comprehensive BI as a service may very well be the killer application of cloud approaches.
Of the still-standing middleware field -- IBM, Microsoft, Software AG, Red Hat/JBoss, Progress, TIBCO, SAP and Sybase -- only a few will be able to attain the "anyware" in terms of both product breadth and cloud delivery. [Disclosure: Progress and TIBCO are sponsors of BriefingsDirect podcasts.]
Oracle has sewn up its field brilliantly via its organic and acquisitions-fueled growth of the past decade. With Sun and its ID management, file system/directory, storage, Solaris community, and speedy silicon, the path to cloud seems inevitable and closer than most thought for Oracle. Incidentally, control of Java is more a strategic weapon than an enabler.
Oracle still needs more total governance (don't we all!), a PaaS play, and a whole lot of globally established, cutting-edge cloud-delivery data centers in place and humming along. Oh, and the transition from a licensed business model to subscription commodity services won't be any easier for Oracle than it is for Microsoft. It has to be done, however.
But, as usual, Oracle will stride like the Colossus of Rhodes across the build, buy, and partner spectrum of opportunity to attain a global cloud-delivery capability. Nothing but the best will do, of course. Oracle has just about everything else in place, that's abundantly clear.
Monday, June 29, 2009
Oracle adds zest to SQL Developer with standalone data modeling tool, stirs the SQL market pot
Oracle, Redwood Shores, Calif., had originally released a free version of the tool as an "early adopter" release. The full version is now available for $3,000 per named user. The new tool features multi-layered design and generation capabilities to produce conceptual entity relationship diagrams (ERDs) and transform them to relational models. Users can build, extend and modify a model as well as compare with existing designs.
The whole SQL databases and associated tools and modeling ecosystem is ripe for tumult. My best guess is that Oracle's pending Sun Microsystems purchase will provide offense via MySQL, and the associated community, to target the Microsoft SQL Server franchise.
Oracle can both keep tabs on the MySQL evolution while under-cutting Microsoft. Good work, if you can get it. Oh, and they can attract more middleware sales as they seduce the developers and deeply snare the operations folks.
On the other big future direction, to the cloud, modeling and managing data become the points of the arrow for attracting more sticky data into your cloud. We're already seeing this in business process modeling, as IBM is giving away such tools via BlueWorks. The enticement? To bring more process metadata and rules execution to Big Blue's cloud.
My expectation is that Oracle, HP, IBM, Red Hat, Amazon, Google, and Microsoft will begin to offer more "free" cloud-based enticements to enterprise developers and architects that 1) hurt their competition whenever possible, and 2) solidify their respective advantages to create long-term cloud customers. Then repeat, extend, and solidify. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Remember when free and open source software began to disrupt the status quo, and the large enterprise vendors could no longer ignore it? They played the same way. IBM, for example, embraced Linux (to hurt Microsoft and also sell more commodity hardware) and Apache web servers (ditto). But IBM did not open source DB2 or WebSphere.
We'll see the same picking and choosing -- tactical and strategic -- of what is "free" or not, cloud-based or not, rationalized on a similar pattern of combined offense and defense. The good news is that enterprise architects and developers will have more good choices and lower costs, and will be able to play the behemoths off one another -- just like with open source.
Perhaps we need to call the cloud thing ... Any Source.
Back to Oracle and its maneuvers in the SQL space ... The capabilities of the new data modeler include:
- Visual entity relationship modeling, which supports both Barker and Bachman notations so developers can switch between models to suit the audience’s needs or create and save different visual displays
- Forward engineering of ERDs to relational models, transforming all rules and decisions made at the conceptual level to the relational model, where details are further refined and updated (see the DDL sketch after this list)
- Separate relational and physical models that enable users to develop a single relational model for different database versions or different databases.
- A full spectrum of physical database definitions, supporting physical definitions such as partitions, roles, and tablespaces for specific database versions for multi-database, multi-vendor support
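To ground what forward engineering produces at the physical end, here is the kind of relational DDL a conceptual "customer places orders" relationship typically becomes. This is purely illustrative: the schema, JDBC URL, and credentials below are invented rather than output from the Oracle tool, and the Oracle JDBC driver would need to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ForwardEngineerSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; any JDBC-accessible Oracle schema would do.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
        Statement stmt = con.createStatement();

        // A CUSTOMER-places-ORDER entity relationship from the logical model becomes
        // two tables plus a foreign-key constraint in the relational model.
        stmt.executeUpdate("CREATE TABLE customer ("
                + " customer_id NUMBER PRIMARY KEY,"
                + " name VARCHAR2(100) NOT NULL)");
        stmt.executeUpdate("CREATE TABLE orders ("
                + " order_id NUMBER PRIMARY KEY,"
                + " customer_id NUMBER NOT NULL,"
                + " order_date DATE,"
                + " CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id)"
                + " REFERENCES customer (customer_id))");

        stmt.close();
        con.close();
    }
}
```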
Friday, June 26, 2009
IT Financial Management solutions provide visibility into total operations to reduce IT costs
Read a full transcript of the discussion.
The global economic downturn has accelerated the need to reduce total IT costs through better identification and elimination of wasteful operations and practices. At the same time, IT departments need to better create and implement streamlined processes for delivering new levels of productivity, along with reduced time to business value.
But you can't well fix what you can't well measure. And so the true cost -- and benefits -- of complex and often sprawling IT portfolios too often remain a mystery, shrouded by outdated and often manual IT tracking and inventory tasks.
New solutions have emerged, however, to quickly improve the financial performance of IT operations through automated measuring and monitoring of what goes on, and converting the information into standardized financial metrics. This helps IT organizations move toward an IT shared services approach, with more efficient charge-back and payment mechanisms.
Gaining real-time visibility into dynamic IT cost structures provides a powerful tool for reducing cost, while also maintaining and improving overall performance -- and the perception of worth. Holistic visibility across an entire IT portfolio also develops the visual analytics that help probe for cost improvements and uncover waste -- and then make it easy to share the analysis and decision rationale with business leaders.
To better understand how improved financial management capabilities can help enterprise IT departments, I recently interviewed two executives from Hewlett-Packard Software and Solutions: Ken Cheney, director of product marketing for IT Financial Management, and John Wills, practice leader for the Business Intelligence Solutions Group.
Here are some excerpts:
Cheney: The landscape has changed in such a way that IT executives are being asked to be much more accountable about how they’re operating their business to drive down the cost of IT significantly. As such, they're having to put in place new processes and tools in order to effectively make those types of decisions. ... We can automate processes. We can drive the data that they need for effective decision-making. Then, there is also the will there in terms of the pressure to better control cost. IT spend these days accounts for about 2 to 12 percent of most organizations’ total revenue, a sizable component.
Wills: If all of your information is scattered around the IT organization and IT functions, it’s difficult to get your arms around it, and you certainly can’t do a good job managing going forward. A lot of that has to do with being able to look back and to have historical data. Historical data is a prerequisite for knowing how to go forward and to look at a project’s cost and where you can optimize cost or take cost down and where you have risk in the organization. So, visibility is absolutely the key.
IT has spent probably the last 15 years taking tools and technologies out into the lines of business, helping people integrate their data, helping lines of business integrate their data, and answering business questions to help optimize, to capture more customers, reduce churn in certain industries, and to optimize cost. Now, it’s time for them to look inward and do that for themselves.
Cheney: IT operates in a very siloed manner, where the organization does not have a holistic view across all the activities. ... The reporting methods are growing up through these silos and, as such, the data tends to be worked within a manual process and tends to be error-prone. There's a tremendous amount of latency there.
The challenge for IT is how to develop a common set of processes that are driving data in a consistent manner that allows for effective control over the execution of the work going on in IT as well as the decision control, meaning the right kind of information that the executives can take action on.
Wills: When you look at any IT organization, you really see a lot of the cost is around people and around labor. But, then there is a set of physical assets -- servers, routers, all the physical assets involved in what IT does for the business. There is a financial component that cuts across both of those major areas of spend. ... You have a functional part of the organization that manages the physical assets, a functional part that manages the people, manages the projects, and manages the operation. Each one of those has been maturing its capability operationally in terms of capturing its data over time.
Industry standards like the Information Technology Infrastructure Library (ITIL) have been driving IT organizations to mature. They have an opportunity, as they mature, to take advantage and take it to the next level of extracting that information, and then synthesizing it to make it more useful to drive and manage IT on an ongoing basis.
Cheney: IT traditionally has done a very good job communicating with the business in the language of IT. It can tell the business how much a server costs or how much a particular desktop costs. But it has a very difficult time putting the cost of IT in the language of the business -- being able to explain to the business the cost of a particular service that the business unit is consuming. ... In order to effectively assess the value of a particular business initiative, it’s important to know the actual cost of that particular initiative or process that they are supporting. IT needs to step up in order to be able to provide that information, so that the business as a whole can make better investment decisions.
Wills: One of the things that business intelligence (BI) can help with at this point is to identify the gaps in the data that’s being captured at an operational level and then tie that to the business decision that you want to make. ... BI comes along and says, "Well, gee, maybe you’re not capturing enough detailed information about business justification on future projects, on future maintenance activity, or on asset acquisition or the depreciation of assets." BI is going to help you collect that and then aggregate that into the answers to the central question that a CIO or senior IT management may ask.
Cheney: By doing so, IT organizations will, in effect, cut through a lot of the silo mentality and the manual, error-prone processes, and they'll begin operating much more as a business that will get actionable cost information. They can directly look at how they can contribute better to driving better business outcomes. So, the end goal is to provide that capability to let IT partner better with the business.
... The HP Financial Planning and Analysis offering allows organizations to understand costs from a service-based perspective. We're providing a common extract, transform, and load (ETL) capability, so that we can pull information from data sources. We can pull from our project portfolio management (PPM) product and our asset management product, but we also understand that customers are going to have other data sources out there.
They may have other PPM products they’ve deployed. They may have ERP tools that they're using. They may have Excel spreadsheets that they need to pull information from. We'll use the ETL capabilities to pull that information into a common data warehouse where we can then go through this process of allocating cost and doing the analytics.
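To make "allocating cost" a bit more concrete, here is a toy proportional allocation of a shared infrastructure cost pool to business services based on consumption. It is only a sketch of the general idea: the figures are invented, and HP's Financial Planning and Analysis product applies its own, far richer cost models.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CostAllocationSketch {
    public static void main(String[] args) {
        // Hypothetical monthly cost pool for a shared server farm, pulled from asset data.
        double sharedInfrastructureCost = 120000.0;

        // CPU-hours consumed per business service, as an ETL job might aggregate them.
        Map<String, Double> usage = new LinkedHashMap<String, Double>();
        usage.put("Online ordering", 4200.0);
        usage.put("Email", 2800.0);
        usage.put("Payroll", 1000.0);

        double totalUsage = 0;
        for (double u : usage.values()) {
            totalUsage += u;
        }

        // Allocate the shared pool in proportion to consumption, yielding a per-service cost
        // that can be reported back to the business in its own terms.
        for (Map.Entry<String, Double> e : usage.entrySet()) {
            double allocated = sharedInfrastructureCost * e.getValue() / totalUsage;
            System.out.printf("%-16s $%,.2f%n", e.getKey(), allocated);
        }
    }
}
```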
Wills: We really want to formalize the way they're bringing cost data in from all of these Excel spreadsheets and Access databases that sit under somebody’s desk. Somebody keeps the monthly numbers in their own spreadsheets in a different department and they are spread around in all of these different systems. We really want to formalize that.
... Part of Financial Planning and Analysis is Cost Explorer, a very traditional BI capability in terms of visualizing data that’s applied to IT cost, letting you search through the data and look at it from many different dimensions, color coding, looking at variances, and having this information pop out at you.
Cheney: [Looking to the future], in many respects, cloud computing, software as a service (SaaS), and virtualization all present great opportunities to effectively leverage capital. IT organizations really need to look at it through the lens of what the intended business objectives are and how they can best leverage the capital that they have available to invest.
Wills: Virtual computing, cloud computing, and some of these trends that we see really point towards the time being now for IT organizations to get their hands around cost at a detailed level and to have a process in place for capturing those costs. The world, going forward, obviously doesn’t get simpler. It only gets more complex. IT organizations are really looked to for using capital wisely. They're really looked at as the decision makers for where to allocate that capital, and some of it’s going to be outside the four walls.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.
Wednesday, June 24, 2009
In 'Everything as a Service' era, quality of services and processes grows paramount, says HP's Purohit
Read a full transcript of the discussion.
As services pervade how and what IT delivers, quality assurance early and often becomes the gatekeeper of success -- or the point of failure.
IT's job is evolving to make sure all services really work deep inside of business processes -- regardless of their origins and sourcing. Quality of component services is therefore assurance of quality processes, and so the foundation of general business conduct and productivity.
Pervasive quality is no longer an option, especially as more uses of cloud-enabled services and so-called "fluid sourcing" approaches become the norm.
A large part of making quality endemic is organizational: asserting quality in everything IT does and enforcing quality in everything IT's internal and external partners do. Success even now means quality in how the IT department itself is run and managed.
To better learn how service-enabled testing and quality-enabling methods of running IT differently become critical mainstays of IT success, last week at HP Software Universe in Las Vegas I interviewed Robin Purohit, vice president of Software Products at HP Software and Solutions.
Here are some excerpts:
Purohit: Severe restrictions on IT budgets force you to rethink things. ... What are you really good at, and do you have the skills to do it? Where can you best leverage others outside, whether it’s for a particular service you want them to run for you or for help on doing a certain project for you? How do you make sure that you can do your job really well, and still support the needs of the business while you go and use those partners?
We believe flexible outsourcing is going to really take off, just like it did back in 2001, but this time you’ll have a variety of ways to procure those services over the wire on a rateable basis from whatever you want to call them -- cloud providers, software-as-a-service (SaaS) providers, whatever. IT's job will be to make sure all that stuff works inside the business process and services they’re responsible for.
If you think of it as a marketplace of services that you're running internally with maybe many outsource providers, making sure every one of those folks is doing their job well and that it all comes together some way means that you have to have quality in everything you do, quality in everything your partners do, and quality in the end process. Things like service-enabled testing, rather than service-oriented architecture (SOA), are going to become critical mainstream attributes of quality assurance.
... What IT governance or cloud governance is going to be about is to make sure that you have a clear view of what your expectations are on both sides. Then, you have an automatic way of measuring it and tracking against it, so you can course correct or make a decision to either bring it back internally or go to another cloud provider. That’s going to be the great thing about the cloud paradigm -- you’ll have a choice of moving from one outsource provider to another.
The most important things to get right are the organizational dynamics. As you put in governance and bring in outside parties -- maybe you’re doing things like cloud capabilities -- you're going to get resistance. You’ve got to train your team on how to embrace those things in the right way.
What we’re trying to do at HP is step up and bring advisory services to the table across everything that we do to help people think about how they should approach this in their organization, and where they can leverage potentially industry-best practices on the process side, to accelerate the ability for them to get the value out of some of these new initiatives that they are partaking in.
For the last 20 years, IT organizations have been building enterprise resource planning (ERP) systems and business intelligence (BI) systems that help you run the business. Now, wouldn’t it be great if there were a suite of software to run the business of IT?
It’s all about allowing the CIO and their staffs to plan and strategize, construct and deliver, and operate services for the business in a coordinated fashion, and link all the decisions to business needs and checkpoints. They make sure that what they do is actually what the business wanted them to do, and, by the way, that they are spending the right money on the right business priorities. We call that the service life cycle.
... There are things that we're doing with Accenture, for example, in helping on the strategy planning side, whether it’s for IT financial management or data-center transformation. We're doing things with VMware to provide the enabling glue for this data center of the future, where things are going to be very dynamically moving around to provide the best quality of service at the best cost.
... But, users want one plan. They don’t want seven plans. If there’s one thing they’re asking us -- and all of those ecosystem providers -- to do more of, faster and better, it is to show them how they can get from their current state to that ideal future state, and to do it in a coherent way.
There's no margin for error anymore.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.