A technology and business partnership between desktop solutions provider OpenSpan and TIBCO Software helps integrate TIBCO SOA solutions with desktop applications without requiring changes to the programs.
OpenSpan of Alpharetta, Ga., and TIBCO of Palo Alto, Calif., will partner on service-oriented architecture (SOA), business process management (BPM), and business optimization solutions. A number of products from both companies will be combined into broader solutions that deliver business-level productivity outcomes.
For example, TIBCO's Enterprise Message Service, a standards-based integration platform, brings together IT assets and communications technologies on a common enterprise backbone to manage the real-time flow of information.
The OpenSpan Platform extends the service by enabling a wide range of applications deployed within enterprise desktop environments to consume services and emit events.
TIBCO's ActiveMatrix, a service platform for heterogeneous SOA, delivers service-oriented applications by separating the applications from the technology details. This separation enables companies to incrementally add orchestration, integration, mediation, and Java and .NET services to a unified runtime platform. The OpenSpan Platform enables any application, including legacy Windows, client-server and host applications, running on users’ desktops to become service-enabled and participate in TIBCO SOA solutions. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]
Together the products cover SOA infrastructure requirements while ushering services out to the prevalent clients. The proper path for SOA workflows and processes out to the user has been a subject of much and varied discourse over the past few years. There is no single right answer; the more paths, the better. Even rich documents can be part of a SOA landscape.
The TIBCO iProcess Suite delivers BPM Plus, a unified approach to BPM that enables organizations to automate, optimize and improve any type of process – from routine tasks to mission-critical, long-lived processes that involve people, information and applications across organizational and geographical boundaries. OpenSpan extends TIBCO’s BPM capabilities to the desktop.
TIBCO BusinessEvents allows companies to identify and quantify the impact of events and notify people and systems about meaningful events, so processes can be adapted on the fly to capitalize on opportunities and remediate threats. OpenSpan enables applications deployed on corporate desktops to be rapidly instrumented to trigger events.
Solutions-based approaches that leverage multiple vendors' capabilities are a hallmark of SOA. It's good to see the vendors recognizing it.
Sunday, November 16, 2008
BriefingsDirect analysts review new SOA governance book, propose scope for U.S. tech czar
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.
Read a full transcript of the discussion.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.
Welcome to the latest BriefingsDirect Insights Edition, Vol. 33, a periodic discussion and dissection of software, services, services-oriented-architecture (SOA) and compute cloud-related news and events, with a panel of IT analysts and guests.
In this episode, recorded Nov. 7, our experts examine SOA governance, how to do it right, its scope, its future, and its impact. We interview Todd Biske, author of the new Packt Publishing book, SOA Governance. The panel also focuses on the IT policies that an Obama administration should pursue, and ruminates about what a cabinet-level IT director appointee might accomplish.
Please join noted IT industry analysts and experts Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; and Biske, an enterprise architect at Monsanto. Our discussion is hosted and moderated by yours truly, Dana Gardner.
Here are some excerpts:
On SOA governance ...
Biske: The reason that I decided to write a book on this is actually two-fold. First, in my work, both as a consultant, and now as a corporate practitioner, I'm trying to see SOA adoption be successful. The one key thing I always kept coming back to, which would influence the success of the effort the most, was governance. So, I definitely felt that this was a key part of adopting SOA, and if you don't do it right, your chances of success were greatly diminished.
The second part of it was when the publisher actually contacted me about it. I went out and looked and I was shocked to find that there weren't any books on SOA governance. For as long as the SOA trend has been going on now, you would have thought someone would have already written a book on it. I said, "Well, here's an opportunity, and given that it's not really a technology book, it's more of a technology process book, it actually might have some shelf life behind it." So I decided, why not give it a try.
The reason companies should be adopting SOA is that something has to change. There is something about the way IT is working with the rest of the business that isn't operating as efficiently and as productively as it could. And, if there is a change that has to go on, how do you manage that change and how do you make sure it happens? It's not just buying a tool, or applying some new technology. There has to be a more systematic process for how we manage that change, and to me that's all about governance.
If I just blindly say, "We're going to adopt SOA," and I tell all the masses, "Go adopt SOA," and everybody starts building services, I still haven't answered the question, "Why am I doing this, and what do I hope to achieve out of it?"
If I don't make that clear, I could easily wind up with a whole bunch of services and building a whole bunch of solutions. I'll have far more moving parts, which are far more difficult to maintain. As a result, I actually go in the opposite direction from where I needed to go. If you don't clearly articulate, "This is the desired behavior. This is why we're adopting SOA," and then let all of the policy decisions start to push that forward, you really are taking a big risk. It's an unknown risk. You're not managing it appropriately if you don't have an end state in mind.
If you look at traditional IT governance, it is more about what projects we execute, how do we fund them, and structuring them appropriately, and that has a relationship to SOA governance. It doesn't go into the deep levels of decisions that are made within those projects.
If you were to try to set up a relationship, I would put IT governance, and even corporate governance, over the SOA governance aspects, at least, the technical side of it. The other piece of that is, when we talk about runtime governance, IT governance probably is focused on the runtime aspects of it. That's really a key part of this, making sure that our systems stay operational and that the operational behavior of the organization is the way we want it to be. So there is a relationship between them.
Baer: My sense is that, given the current economic environment, you're going to see a lot more in the way of tactical projects. ... We need to look at some jump-starts in a sensible, sort of "lite," like, L-I-T-E governance. That's governance that basically federates, or is compatible with, the software-delivery lifecycle. And, when we get to runtime, it's compatible with whatever governance we have at runtime.
The objective of SOA is to achieve reuse, but it's really to achieve business agility. Therefore, whether we shoot for reuse, initially or not, it will not necessarily be the ultimate measure of success for a SOA initiative. SOA Governance Lite would not emphasize very heavily the reuse angle to start off with. You may get to that at Stage 2 in your maturity cycle.
Kobielus: The flip side right now is that you can look at it as a survivor-oriented architecture. You have a survival imperative in tough times. Do you know if your company is going to be around in a year's time? The issue right now in terms of SOA is, "You want to hold on and you want to batten down the hatches. You want to be as efficient as possible. You want to consolidate what you can consolidate in terms of hardware, software, licenses, competency centers, and so forth. And, you're probably going to hold the line on investment, further applications, and so forth."
For SOA, in this survival oriented climate that we're in right now, the issue is not so much reusing what you already have, but holding on to it, so that you are well positioned for the next growth spurt for your business and for the economy, assuming that you will survive long enough. Essentially, SOA Governance Lite uses governance as a throttle, throttling down investments right now to only those that are critical to survive, so that you can throttle up those investments in the future.
Biske: I'm not a believer in the term "lite" governance. I'm of the opinion that you have governance, whether you admit it or not. An alternative view of governance is that it is a decision-rights structure. Someone is always making decisions on projects.
The notion of Governance Lite is that we're saying, "Okay, keep those decisions local to the project as much as possible. Don't bubble them up to the big government up there and have all the decisions made in a more centralized fashion." But, no matter what, you always have governance on projects. Whether it's done more at the grassroots level on projects, or by some centralized organization through a more rigid process, it still comes back to having an understanding of what's the desired behavior that we are trying to achieve.
Where you run into problems is when you don't have agreement on what that desired behavior is. If you have that clearly stated, you can have an approach where the project teams are fully enabled to make those decisions on their own, because they put the emphasis on educating them on, "This is what we are trying to achieve, both from a project perspective, as well as from an enterprise perspective, and we expect you to meet both of those goals. And if you run into a problem where you are unsure on priorities, bubble that decision up, but we have given you all the power, all the information you need. So, you're empowered to make those decisions locally, and keep things executing quickly."
Another parallel we can draw to this is the current economic crisis. The risk you have in becoming too federated, and getting too many decisions made locally, is that you lose sight of the bigger picture. You can look at all of these financial institutions that got into the mortgage-backed securities and argue that their main focus was not the stability of the banking system, it was their bottom line and their stock price.
They lost sight of, "We have to keep the financial system stable." There was a risk in pushing too much down to the individual groups without keeping that higher vision and that balance between them. You can get yourself in a lot of trouble. The same thing holds true in [SOA] development.
On President-elect Obama's technology leader ...
Baer: Obviously, you need somebody who is going to ... think outside the box. Basically, the government has long been a series of lots of boxes or silos, where you have these various fiefdoms. Previous attempts to unify architectures at the agency levels have not always been terribly successful.
The chief priority for anybody who is ... in a CIO-type of role at the cabinet level is ... to look for getting more out of less. That's essential, because there are going to be so many competing needs for so many limited resources. We have to look for someone who can formulate strategic goals -- and I'm going to have to use the term reuse -- to reuse what is there now, and federate what is there now, and federate with as light a touch as possible.
Kobielus: It comes down to the fact that they're driving at many of the same overall objectives that also drive SOA initiatives. One initiative is to break down silos in terms of information sharing between the government and the citizenry, but also silos internally within the government, between the various agencies, to help them better exchange information, share expertise, and so forth. In fact, if we look at their position statement called "Bring government into the 21st century," it really seems that it's part of the overall modernization push for IT and the government. They're talking really about a federated SOA governance infrastructure or a set of best practices.
Tech modernization in the government is absolutely essential. Reuse and breaking down silos between agencies is critically important. Brokering best practices across the agencies' specific silo IT and CTO organizations is critically important. It sounds to me as if Obama will be an SOA President, although he doesn't realize it yet, if he puts in place the approach that he laid out about a year ago, considering that the IT infrastructure in the government is probably right now the least of his concerns.
Biske: [Obama] definitely has a challenge, and I am thinking from a governance perspective. He has taken step one, in the position statement that Jim just mentioned, of bringing government into the 21st century. He has articulated that this is the way that he wants our systems to interact and share information with the constituents.
The next step is the policies that are going to get us there, and obviously he's time-boxed by the terms of his presidency. He's got a big challenge ahead of him, or at least the CTO that gets appointed has a huge challenge. Somehow, you have to break it down into what goals are going to be achievable in that timeframe.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.
Friday, November 14, 2008
Interview: rPath’s Billy Marshall on how enterprises can virtualize applications as a precursor to cloud computing
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.
Read complete transcript of the discussion.
Many enterprises are weighing how to bring more applications into a virtual development and deployment environment to save on operating costs and to take advantage of service-oriented architectures (SOA) and cloud computing models.
Finding proven deployment methods and governance for managing virtualized applications across a lifecycle is an essential ingredient in making SOA and cloud-computing approaches as productive as possible while avoiding risk and complexity. The goal is to avoid having to rewrite code in order for applications to work across multiple clouds -- public, private, or hybrid.
The cloud forces the older notion of "write-once, run anywhere" into a new level of "deploy correctly so you can exploit the benefits of cloud choices and save a lot of money."
To learn more about how enterprises should begin moving to application-level virtualization that serves as an onramp to cloud benefits, I recently spoke with Billy Marshall, founder and chief strategy officer of rPath.
Here are some excerpts:
We're once again facing a similar situation now where enterprises are taking a very tough look at their data center expenditures and expansions that they're planning for the data center. ... The [economic downturn] is going to have folks looking very hard at large-scale outlays of capital for data centers.
I believe that will be a catalyst for folks to consider a variable-cost approach to using infrastructure as a service, or perhaps platform as a service (PaaS). All these things roll up under the notion of cloud.
Virtualization provides isolation for applications running in their own logical server, their own virtual server. ... Virtualization gives you -- from a business perspective -- an opportunity to decouple the definition of the application from the system that it runs on. ... Then, at run-time, you can decide where you have capacity that best meets the needs of the profile of an application.
I can begin sourcing infrastructure a little more dynamically, based upon the load that I see. Maybe I can spend less on the capital associated with my own data center, because with my application defined as this independent unit, separate from the physical infrastructure I'll be able to buy infrastructure on demand from Amazon, Rackspace, GoGrid, these folks who are now offering up these virtualized clouds of servers.
That's the architecture we're evolving toward. ... For legacy applications, there's not going to be much opportunity. [But] they may actually consider this for new applications that would get some level of benefit by being close to other services.
[If] I can define my application as a working unit, I may be able to choose between Amazon or my internal architecture that perhaps has a VMware basis, or a Rackspace, GoGrid, or BlueLock offering.
Another big consideration for these enterprises now is, do I have workloads that I'm comfortable running on Linux right now, and so can I take a step forward and bind Linux to the workload in order to take it to wherever I want it to go.
rPath brings a capability around defining applications as virtual machines (VMs), going through a process whereby you release those VMs to run on whichever cloud of your choosing, whether a hypervisor virtualized cloud of machines, such as what's provided by Amazon, or what you can build internally using Citrix XenSource or something like VMware's virtual infrastructure.
It then provides an infrastructure for managing those VMs through their lifecycle for things such as updates, backup, and configuration of certain services on the machines, in a way that's optimized to run a virtualized cloud of systems. We specialize in optimizing applications to run as VMs on a cloud or virtualized infrastructure.
With our technology, we enforce a set of policies that we learned were best practices during our days at Red Hat when constructing an operating system. We've got some 50 to 60 policies that get enforced at build time, when you are building the VM. They're things like not allowing any dangling symlinks and closing the dependency loop around all of the binary packages that get included. There could be other more corporate-specific policies that need to be included, and you would write those policies into the build system in order to build these VMs.
It's very similar to the way you put policies into your application lifecycle management (ALM) build system when you were building the application binary. You would enforce policy at build time to build the binary. We're simply suggesting that you extend that discipline of ALM to include policies associated with building VMs. There's a real opportunity here to close the gap between applications and operations by having much of what has typically been done in installing an application and taking it through Dev, QA, and Test become part of an automated build system for creating VMs.
People are still thinking about the operating system as something that they bind to the infrastructure. In the new case, they're binding the operating system to the hypervisor and then installing the application on top of it. If the hypervisor is now this bottom layer, and if it provides all the management utilities associated with managing the physical infrastructure, you now get an opportunity to rethink the operating system as something that you bind to the application.
When you bind an operating system to an application, you're able to eliminate anything that is not relevant to that application. Typically, we see a surface area shrinking to about 10 percent of what is typically deployed as a standard operating system. So, the first thing is to package the application in a way that is optimized to run in a VM. We offer a product called rBuilder that enables just that functionality.
If you prove to yourself that you can do this, that you can run [applications] in both places (cloud and on-premises), you've architected correctly. ... That puts you in a position where eventually you could run that application on your local cloud or virtualized environment and then, for those lumpy demand periods -- when you need that exterior scale and capacity -- you might just look to that cloud provider to support that application [at scale].
There's a trap here. If you become dependent on something associated with a particular infrastructure set or a particular hypervisor, you preclude any use in the future of things that don't have that hypervisor involved. ... The real opportunity here is to separate the application-virtualization approach from the actual virtualization technology to avoid the lock-in, the lack of choice.
If you do it right, and if you think about application virtualization as an approach that frees your application from the infrastructure, there is a ton of benefit in terms of dynamic business capability that is going to be available to your organization.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.
Wednesday, November 12, 2008
IDC research shows enterprise SOA adoption deepens based on certain critical practices
Listen to the podcast. Download the podcast. Access the webinar. Learn more. Sponsor: Hewlett-Packard.
Download the IDC report "A Study in Critical Success Factors for SOA." Read complete transcript of the discussion.
Fresh research from IDC on service-oriented architecture (SOA) adoption patterns shows what users of SOA identify as essential success factors. The perceptions are critical as more companies cross from experimentation to more holistic SOA use and its required governance, management, and lifecycle functions.
A recent webinar captures the IDC findings and shows how Hewlett-Packard (HP) is working to help companies adopt SOA successfully. That webinar is now captured as a podcast, transcript and blog.
Join me as I moderate a SOA market adoption trends presentation by Sandy Rogers, program director for SOA, Web services, and integration research at IDC. Sandy is followed by a presentation on SOA lifecycle approaches by Kelly Emo, SOA product marketing manager for HP Software.
Here are some excerpts:
Sandy Rogers: Organizations are looking for much more consistency across enterprise activities and views, and are really finding a lot of competitive differentiation in being able to manage their processes more effectively. That requires the ability to span different types of systems and to respond -- whether in a reactive mode or a proactive mode -- to opportunities.
What we’re finding is that, as we go to this generation, SOA, in and of itself, is spawning the ability to address new types of models, such as event-based processing, model-based processing, cloud computing, and appliances. We’re really, as a foundation, looking to make a strategic move.
The issue is not necessarily deciding if they should go toward SOA. What we're finding is that for most organizations this is the way that they are going to move, and the question is just navigating how to best do that for the best value and for better success.
According to the same poll ... Most interesting are the top challenges in implementing SOA. All of our past studies reinforced that skills, availability of skills, and training in SOA continue to be the number one challenge. What’s really noticeable now is that setting up an SOA governance structure is now the second most-indicated challenge.
We found in other studies that a lot of organizations did not have strong governance. SOA almost forces these companies to do what they should have been doing all along around incorporating the right procedures around governance, and making that a non-intrusive approach.
... What this is telling us is that we have reached another stage of maturity, and that in order to move forward organizations will need to think about SOA as an overall program, and how it impacts both technology and people dimensions within the organization. ... We are indeed moving from project- and application-level SOA to more of a system and enterprise scale.
We [also] wanted to look at how SOA's success is actually defined, ... and what factors and practices have the most impact in the organizations that are successful. ... While technologies are key enablers, most of the study participants focused on organizational and program dynamics as being key contributors to success. Through technology, they are able to influence the impact of the activities that they are introducing into the overall SOA program.
The pervasiveness of SOA adoption in the enterprise was a key determinant of how ... they were being successful. ... If you’re able to handle trust, you’re able to influence organizational change management effectiveness. If you’re able to address business alignment, then you’ll have much more success in understanding the impact on architecture and vice versa.
Domains of SOA success
When we gathered all of this information ... we created a framework of varying components, and elements that impacted success. Then, we aggregated these into seven key domains. ... The seven domains are: Business Alignment, Organizational Change Management, Communication, Trust, Scale and Sustainability, Architecture, and Governance. [See full transcript or listen to the podcast for more detail on each domain.]
We found that enforcing policies, not putting off governance until later on, was very important, [as well as] putting more effort into business modeling, which many of these organizations are doing now. They said that they wished they had done a little bit more when thinking about the services that were created, focusing on preparing the architecture for much more process and innovation.
Kelly Emo: You heard from IDC the seven critical SOA success factors that came from this in-depth analysis of customers. The point that I want to reiterate here that was so powerful in this discussion is the idea that the seven domains are linked. By putting energy and effort in any one of them, you are setting yourself up for more success across the board.
What we are going to do now is drill down into that domain of governance. ... We’ll talk a little bit about the value of using an automated SOA governance platform, to help automate those manual activities and get you there faster.
... We see many of our customers now crossing the enterprise scalability divide with their SOA, looking to incorporate SOA into their mainstream IT organizations, and they’re seeing the benefits of that initial investment in governance help them make that leap.
SOA governance is all about helping IT get to the expected business benefits of their SOA. You can think of SOA governance, in essence, as IT's navigation system to get to the end goal of SOA. What it's going to help IT do, as they look to scale SOA out, is to more broadly foster trust across those distributed domains. It's going to help become a catalyst for communication and collaboration, and it's going to help jump-start that non-expert staff.
The thing that's key about governance is that it helps integrate those silos of IT. It helps integrate the folks who are responsible for designing services with those who actually have to develop the back-end implementations and with those who are doing the testing of performance and functionality. Ultimately, it integrates them with the organizations that are responsible for both deploying the services and the policies and integration logic that will support accessing those services.
Keeping a perspective on lifecycle governance, your organization can be primed and ready to handle SOA, as it scales, as more and more services go into production, and more and more services are deemed to be ready for consumption and reuse in new composite applications. ... The key is to keep a service lifecycle governance perspective in mind, as you go about your governance program, and automation is key. ... Automating policy compliance can bring a huge payoff.
What we are finding more and more now is that organizations are actually investing in a role known as service manager, someone who oversees not only the delivery of a service over time, but also those who are consuming it. I see this as a best practice that can be supported by SOA governance, and which helps empower them by giving them a foundation to set up policies and have visibility in terms of how the service is meeting its objective and who is consuming the service.
You can actually get a dialog going between your enterprise architecture and planning teams, your development teams, and your testing teams, in terms of the expectations and requirements right upfront, as the concept of the service is being ferreted out.
So why invest in SOA governance now ... [when] we’re under a lot of economic pressure, budgets are tight, and there are fewer resources to do the same work? This sounds counter-intuitive, absolutely, but this is the right time to make that investment in SOA governance, because the benefits are going to pay off significantly.
Listen to the podcast. Download the podcast. Access the Webinar. Learn more. Sponsor: Hewlett-Packard.
Tuesday, November 11, 2008
Looking forward to webinar on applications modernization trends and techniques with Nexaweb
Application modernization as a precursor and accelerant to IT transformation is the topic of a webinar I'm on this Thursday at 1 p.m. ET.
The topic is a no-brainer. Old apps that waste money need to come out to the web services and RIA model and join the grand mashup.
Application modernization is one of those IT initiatives that packs the one-two wallop of cutting costs while improving agility and business outcomes. That combination of doing more for less makes so much sense these days, and it may be the new number one requirement for any IT budget.
Services and logic locked up in mainframes, COBOL, n-tier Java, and other 3-4GL client-server implementations can find a new life as rich Internet services on virtualized or standard hardware and platforms. The process recovers past investments, closes down wasteful operations spending, and extends value into the platforms that operate at peak efficiency and lower costs. Hard to argue.
Remember the wave of ROI studies back in 2003? Well, now you need ROI plus provable business improvements of the qualitative variety. Application modernization fits the bill because application sprawl drags down server utilization, leaves apps and data in silos that resist services orientation, and prevents the sun-setting of older, expensive platforms -- plus you can do all kinds of innovative things with the services you couldn't do before.
Oh, and getting these services into a SOA and on virtualized platforms opens the door to more exploitation of cloud and SaaS models, as they make more sense.
I'll be discussing the rationale for application modernization, how to target which apps and platforms, what processes need to be in place, and how to scale app modernization projects appropriately. Joining me on the webinar will be David McFarlane, COO at Nexaweb. [Disclosure: Nexaweb is a sponsor of BriefingsDirect podcasts.]
McFarlane, no doubt, will be explaining how the Nexaweb Reference Framework is engineered to reduce the time, costs, and architectural decisions associated with modernizing business applications and bringing them to the Web.
I like the idea of app modernization for mainframe and COBOL code, but Nexaweb goes further in terms of the webification trend: Sybase PowerBuilder, Microsoft Visual Basic, Oracle Forms and other 3GL/4GL-based applications are what it has in mind, with as much as 67 percent in total cost savings in early customer implementations, says Nexaweb.
Sign up to listen in and watch the slides go by. Q&A to follow. Should be fun.
Monday, November 10, 2008
Solving IT energy conservation issues requires holistic approach to management and planning, say HP experts
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Read complete transcript of the discussion.
The critical and global problem of energy management for IT operations and data centers has emerged as both a cost and capacity issue. The goal is to find innovative means to conserve electricity use so that existing data centers don't need to be expanded or replaced -- at huge cost.
In order to promote a needed close matching of tight energy supply with the lowest IT energy demand possible, the entire IT landscape needs to be considered. That means an enterprise-by-enterprise examination of the "many sins" of energy mismanagement. Wasted energy use, it turns out, has its origins all across IT and business practices.
To learn more about how enterprises should begin an energy-conservation mission, I recently spoke with Ian Jagger, Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group, and Andrew Fisher, manager of technology strategy in the Industry Standard Services group at HP.
Here are some excerpts:
Data centers typically were not designed for the computing loads that are available to us today ... (and so) enterprise customers are having to consider strategically what they need to do with respect to their facilities and their capability to bring enough power to be able to supply the future capacity needs coming from their IT infrastructure.
Typically, the cost of energy is now approaching 10 percent of IT budgets, and that's significant. It now becomes a common problem for both of these departments (IT and Facilities) to address. If they don't address it themselves, then I am sure a CEO or a CFO will help them along that path.
Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.
Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs. One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money.
You need to take a complete end-to-end solution that involves everything from analysis of your operational processes and behavioral issues, how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things, to trying to optimize the performance or the efficiency of the power delivery, making sure that you are getting the best performance per watt out of your IT equipment itself.
The best way of saving energy is, of course, to turn the computers off in the first place. Underutilized computing is not the greatest way to save energy. ... If you look at virtualizing the environment, then the facility design or the cooling design for that environment would be different. If you weren't in a virtualized environment, suddenly you are designing something around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria.
You’re using four to eight times the wattage in comparison. That, in turn, requires stricter floor management. ... But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well.
If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.
This is a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.
We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work in terms of increasing the efficiency of our customers' IT assets and making them more efficient.
To complicate it even further, there are lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve.
The principal problem is that they tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically we can get beyond that and look over annualized values of energy usage and then take measurements from that point.
So, there is rarely a single silver bullet to solve this complex problem. ... The approach that we at HP are now taking is to move toward a new model, which we called the Hybrid Tiered Strategy, with respect to the data center. In other words, it’s a modular design, and you mix tiers according to need.
One thing that was just announced is relevant to what Ian was just talking about. We announced recently the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.
This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.
... There’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.
... There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Thursday, November 6, 2008
ITIL requires better log management and analytics to gain IT operational efficiency, accountability
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Read complete transcript of the discussion.
Implementing best practices from the Information Technology Infrastructure Library (ITIL) has become increasingly popular in IT departments. As managers improve IT operations with an eye to process efficiency, however, they need to gain operational accountability through visibility and analytics into how systems and networks are behaving.
Innovative use of systems log management and analytics -- in the context of entire IT infrastructures -- produces an audit and performance data trail that both helps implement and refine such models as ITIL. Compliance is also a building requirement that can be solved through verification tools such as systems monitoring and analytics in the context of ITIL best practices.
To learn more about how systems log tools and analysis are aiding organizations as they adopt ITIL, I recently spoke with Sean McClean, principal at consultancy KatalystNow, and Sudha Iyer, director of product management at LogLogic.
Here are some excerpts:
IT, as a business, a practice, or an industry, is relatively new. The ITIL framework has been one that's always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT. ... We are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business.
Because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks of what we all agree upon is the best way to handle things. ... When people look at ITIL, organizations assume that it’s something you can simply purchase and plug into your organization. It doesn't quite work that way.
ITIL generally provides guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are these sets of policies with these guidelines. What organizations can do is set up their data retention policy, firewall access policy, or any other policy.
But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.
Our log-management platform ... allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information security professional can look deep down into what's actually going on, on their storage capacity or planning for the future, on how many more firewalls are required, or what's the usage pattern in the organization of a particular server.
All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in. ... Our log management solutions allow [enterprises] to create better control and visibility into what actually is going on in their network and their systems. From many angles, whether it's a security professional or an auditor, they’re all looking at whether you know what's going on.
You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, “Okay, here is so much disk space that is unutilized." Or, "it's the end of the quarter, we need to bring in more virtualization of these servers to get our accounting to close on time."
[As] the industry matures, I think we will see ... people looking and talking more about, “How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And, how do I quantify a business with regard to adhering to that framework?”
There has been a little bit of that and certainly we have ITIL certification processes in all of those, but I think we are going to see more drive to understand that and to formalize that in upcoming years.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Tuesday, November 4, 2008
Genuitec, Eclipse aim for developer kit to smooth rendering of RIAs on mobile devices
The explosion in mobile Web use, due partly to the prevalence of the iPhone and other smart-phone devices -- and a desire to make developers less grumpy -- have led Genuitec to propose a new open-source project at the Eclipse Foundation for an extensible mobile Web developer kit for creating and testing new mobile rich Internet applications (RIAs).
Coming as a sub-project under the Device Software Development Platform (DSDP), the FireFly DevKit project is still in the proposal phase, and the original committers are all from Genuitec, Flower Mound, Tex. [Disclosure: Both Genuitec and the Eclipse Foundation are sponsors of BriefingsDirect podcasts.]
Included in the developer kit will be a previewer and a debugger, a Web rendering kit, a device service access framework, a deployment framework, and educational resources.
The two tool frameworks will enable mobile web developers to visualize and debug mobile web applications from within an Eclipse-based integrated development environment (IDE). Beyond this, the FireFly project will develop next-generation technologies and frameworks to support the creation of mobile web applications that look and behave similarly to native applications and are able to interact with device services such as GPS, accelerometers and personal data.
The issue of developer grumpiness was raised in the project proposal:
When programming, most developers dislike switching between unintegrated tools and environments. Frequent change of focus interrupts their flow of concentration, reduces their efficiency and makes them generally grumpier :). For mobile web application development, web designers and programmers need to quickly and seamlessly perform incremental development and testing directly within an IDE environment rather than switching from an IDE to a device testing environment and back again.
One goal of the Web rendering toolkit is to make Web applications take on the look and feel of the host mobile device. Possibly, an application could run in the Safari browser on an iPhone, but appear similar to a native iPhone app.
Initially, example implementations of the project frameworks will be provided for the iPhone. As resources become available, examples for the G1-Android platform will also be developed. The project will actively recruit and accept contributions for other mobile platforms such as Symbian, Windows Mobile and others.
The current timeframe of the project calls for it to piggyback an incubation release on top of the Eclipse 3.5 platform release. The entire project proposal is available on the Eclipse site.