HP is an amalgamation of companies, products and technologies, and its user groups have had a similar legacy. Until today, that is.
Three major HP-focused user groups, with roots reaching back to Digital Equipment Corp. (DEC) and Tandem Computers days, have banded together to ride the power of social networking and provide a unified, more powerful voice for 50,000 global users managing and maintaining old and new HP products and systems.
The new group, called Connect, will allow its users to share knowledge and contacts while providing a strong customer advocacy voice to HP, said Nina Buik, president of the new non-profit Connect and a prolific blogger. She's also senior vice president at MindIQ, an Atlanta-based technology training company.
By officially banding together today, the former Encompass (once DECUS), HP-Interex EMEA and ITUG communities can gain more power and influence together while still remaining independent of HP.
"There's just more power in numbers, you can more done," said Buik.
Connect made a splash at the HP Technology Forum event, which began Monday in Las Vegas. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.] Users, members and observers toasted the advent of the group at a food and libations fest at the Mandalay Bay resort.
The Connect community reflects users of HP's entire portfolio, which covers a lot of ground -- from DEC PDP apps still running in emulation in surprising numbers, to VMS and OpenVMS of old, to the latest NonStop, BTO and SOA Center product suites. The unified community is strongest at the outset in the U.S. and EMEA, but will seek more presence in Asia/Pacific and Japan later this year, said Buik.
Connect will hold its next major user event Nov. 10-12 in Mannheim, Germany.
Hey, while we're at it integrating communities -- just as we're integrating products and technologies -- why not go for some broader user-community federation as well? The HP Software community Vivit, for example, or perhaps some open source communities, would make sense working in tandem with Connect. The large and growing VMware community also has obvious synergies with Connect.
Furthermore, Connect is leveraging the social media and networking trend by creating what amounts to a LinkedIn or Facebook for HP users on its site at . Users can create a profile that describes their HP product sets, which heightens their ability to reach out to similar users and build their own social groups and relationships. There are blogs and wikis, too. If it works for social activities, it works for business activities.
HP is hoping to tap the Connect community for its own market research -- a massive feedback loop and perpetual focus group on the wants and demands of HP users. The power of the pen, folks -- it's even more powerful when joined with social networking functions and viral community reach.
Tuesday, June 17, 2008
Monday, June 16, 2008
'Instant replay' helps software developers fast-forward to application problem areas
Fixing software bugs is often easier than finding them. Stepping up to the plate to address this problem is Replay Solutions, which today announced general availability of ReplayDIRECTOR for Java EE, a TiVo-like product that allows instant replays of applications and servers at any stage of the application lifecycle.
ReplayDIRECTOR, which was released in beta by the Redwood City, Calif. company in March, makes deep recordings of applications and servers -- notably non-deterministic inputs and events that affect the application. Engineers can then fast forward directly to the root cause of the problem.
The idea behind the technology is that it allows companies to drill down into source code quickly, eliminating unnecessary IT costs and time spent searching for issues that can't be replicated or easily detected. The software is designed to cut through the complexity that IT departments face with shorter release cycles, multi-tier applications, and dispersed development teams.
According to Replay Solutions, every line of code that an application executes while ReplayDIRECTOR is recording will be re-executed in precisely the same sequence during playback. No source code changes are required and recordings can be played anywhere, without requiring the original environment, inputs, databases, or other servers, all of which are virtualized during replay.
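To make that record-and-replay idea concrete, here is a minimal, hypothetical Java sketch -- not Replay Solutions' actual API -- showing how one non-deterministic input, the system clock, could be captured during a live run and fed back verbatim during playback:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration of record/replay for one non-deterministic input.
// During RECORD the real clock is consulted and each value is logged; during
// REPLAY the logged values are returned in the same order, so code that
// depends on the clock behaves exactly as it did in the original run.
public class ReplayClock {
    public enum Mode { RECORD, REPLAY }

    private final Mode mode;
    private final Deque<Long> log;

    public static ReplayClock recording() {
        return new ReplayClock(Mode.RECORD, new ArrayDeque<>());
    }

    public static ReplayClock replaying(Deque<Long> previousRun) {
        return new ReplayClock(Mode.REPLAY, previousRun);
    }

    private ReplayClock(Mode mode, Deque<Long> log) {
        this.mode = mode;
        this.log = log;
    }

    public long currentTimeMillis() {
        if (mode == Mode.RECORD) {
            long now = System.currentTimeMillis();
            log.addLast(now);       // capture into the recording
            return now;
        }
        return log.removeFirst();   // feed back the captured value verbatim
    }

    public Deque<Long> capturedLog() {
        return log;                 // a real product would persist this to disk
    }
}
```

The same interception idea would extend to network reads, database results, random numbers and scheduling points, which is roughly what lets a recording be replayed without the original environment.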
As virtualization becomes more common, these replay approaches may become necessary, as instances of apps and runtimes come and go based on automated demand-response provisioning. These left-over breadcrumbs of what once happened in a virtualization container will be quite valuable for preventing recurrences.
I'm sure innovative developers and testers will come up with other interesting uses, especially as apps and services become supported in more places, inside and outside of enterprises. Got compliance?
Designed to deploy in any environment with minimal impact on it, ReplayDIRECTOR allows applications to run at near full speed while recording and faster than full speed during re-execution. It can run in production environments as an "always on" solution.
ReplayDIRECTOR for Java EE is available now. You can find more information at the company's Web site.
Saturday, June 14, 2008
Kapow takes a jab at challenge of creating mashups from JavaScript and AJAX sites
Kapow Technologies, whose solutions help companies assemble mashups by harvesting and managing data from across the Web, has enhanced its approach to overcome the obstacle many businesses encounter when targeting sources built with dynamic JavaScript and AJAX.
The Palo Alto, Calif. company's Kapow Mashup Server 6.4, which it unveiled this week, features extended JavaScript handling, a response to the burgeoning number of AJAX-based Web sites. [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]
The Web 2.0 Edition, one of four editions of the new Mashup Server, now includes support for Web Application Description Language (WADL), making it easier for applications and mashup-building tools to discover and consume REST services. The WADL support also helps developers leverage the Kapow Excel Connector, an Excel plug-in provided by StrikeIron.
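As a rough illustration of what WADL support buys a tool, the hedged Java sketch below -- with a placeholder URL, and no relation to Kapow's actual API -- parses a WADL document and lists the REST resources it advertises, the kind of discovery step a mashup builder can then automate:

```java
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical example: read a WADL description and print the REST resources
// it advertises. A mashup tool would use the same information to generate
// request forms or service connectors automatically.
public class WadlDiscovery {
    public static void main(String[] args) throws Exception {
        // Placeholder URL -- substitute a real WADL location.
        URL wadl = new URL("http://example.com/service/application.wadl");

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(wadl.openStream());

        // WADL files commonly use an unprefixed default namespace; if the
        // elements are prefixed, switch to a namespace-aware lookup.
        NodeList resources = doc.getElementsByTagName("resource");

        for (int i = 0; i < resources.getLength(); i++) {
            Element resource = (Element) resources.item(i);
            System.out.println("Discovered resource path: "
                    + resource.getAttribute("path"));
        }
    }
}
```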
The Portal Content Edition, which enables companies to refurbish existing portal assets, adds several enhancements to the Web clipping technology for developing and deploying JSR-168 standards-based portlets. It now provides the ability to make on-the-fly changes to clipping portlets that enhance portal functionality, and it adds a portlet deployment mechanism for major portal platforms such as IBM WebSphere, Oracle Portal and BEA WebLogic.
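For readers who haven't built one, a JSR-168 clipping portlet is at bottom just a standard portlet whose view phase renders harvested markup. Here is a bare-bones, generic sketch of that contract (illustrative only, not Kapow's generated code):

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Minimal JSR-168 portlet. A web-clipping product would replace the
// hard-coded markup below with a fragment harvested from a source site,
// then deploy the class to WebSphere, Oracle Portal, WebLogic, etc.
public class ClippingPortlet extends GenericPortlet {
    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<div class=\"clipped-content\">");
        out.println("  <!-- harvested HTML fragment would be injected here -->");
        out.println("  <p>Hello from a JSR-168 portlet.</p>");
        out.println("</div>");
    }
}
```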
Last January, I did a podcast with Stefan Andreasen, founder and CTO of Kapow. Andreasen described the mashup landscape. You can listen to the podcast here or read the full transcript here. I also blogged last April about Kapow's Web-to-spreadsheet service. At that time, I said:
Despite a huge and growing amount of “webby” online data and content, capturing and defining that data and then making it available to users and processes has proven difficult, due to differing formats and data structures. The usual recourse is manual intervention, and oftentimes cut-and-paste chores. IT departments are not too keen on such chores.
But Kapow’s OnDemand approach provides access to the underlying data sources and services to be mashed up and uses a Robot Designer to construct custom Web harvesting feeds and services in a flexible role-based execution runtime. Additionally, associated tools allow for monitoring and managing a portfolio of services and feeds, all as a service.
In addition to the Web 2.0 Edition and the Portal Content Edition, the Kapow Mashup Server is also available in the Data Collection Edition and the OnDemand Edition.
All editions are available now. More information can be found on the Kapow Web site. Product pricing is based on a flexible subscription offering.
SOA Software, iTKO team up to offer SOA lifecycle management and QA
SOA Software and iTKO have teamed up to offer enterprises continuous management and quality assurance across the entire lifecycle of service-oriented architecture (SOA) applications.
The new offering incorporates the LISA Testing, Validation, and Virtualization Suite from Dallas, Tex.-based iTKO and Policy Manager and Service Manager from Los Angeles-based SOA Software. The two companies say the combined solution will provide protection across the entire design, development, and change lifecycle.
Among the benefits of the combined solution are:
- Continuous compliance and quality automation from concept to production support for SOA, with LISA validation natively executed as part of the workflows within SOA Software Policy Manager.
- Visibility into SOA policy compliance levels, with all tests, test results, endpoint data, and models viewed in a single repository.
- An increase in the types of SOA policy that can be modeled and validated, ensuring reliable service level outcomes.
- Service virtualization of endpoints, locations and binding properties from SOA Software combined with simulation of service behaviors and data from iTKO.
- Enhanced runtime validation of live SOA applications for both functional and performance purposes.
I took a briefing recently on LISA and was really impressed with the approach and value. It's worth a look if you're not familiar with iTKO.
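Service virtualization, mentioned in the benefits list above, boils down to standing in for a dependency you cannot always call. A purely illustrative Java sketch of the idea -- hypothetical service names, not iTKO's LISA API -- might look like this:

```java
import java.util.Map;

// Conceptual illustration of service virtualization: the test environment
// swaps a live dependency for a simulator that returns recorded or modeled
// responses, so a composite service can be validated before every real
// endpoint is available.
interface QuoteService {
    double premiumFor(String policyId);
}

class LiveQuoteService implements QuoteService {
    public double premiumFor(String policyId) {
        // Would make a real SOAP/REST call in production.
        throw new UnsupportedOperationException("backend not reachable in test");
    }
}

class VirtualQuoteService implements QuoteService {
    private final Map<String, Double> cannedResponses;

    VirtualQuoteService(Map<String, Double> cannedResponses) {
        this.cannedResponses = cannedResponses;
    }

    public double premiumFor(String policyId) {
        // Simulated behavior and data stand in for the real service.
        return cannedResponses.getOrDefault(policyId, 0.0);
    }
}

public class VirtualizationDemo {
    public static void main(String[] args) {
        QuoteService service = new VirtualQuoteService(Map.of("POL-1", 129.50));
        System.out.println("Quoted premium: " + service.premiumFor("POL-1"));
    }
}
```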
Etelos puts more 'sass' into SaaS with four additional hosted Web 2.0 offerings
Etelos, Inc. has beefed up its software-as-a-service (SaaS) offerings with the addition of four Web 2.0 stalwarts to its Etelos Marketplace. Users can now take advantage of WordPress, SugarCRM, MediaWiki, and phpBB as hosted solutions from the San Mateo, Calif. company.
The new additions are designed to help enterprises, small businesses, bloggers, and individual users connect with customers and other online communities on an on-demand basis. Users can set up a blog or a wiki with nothing more than a browser and Internet access. Technical details are handled by Etelos.
Founded in 1999, Etelos has been a go-to place for open-source developers eager to get their apps into the marketplace without having to go into the software distribution business. It also provides one-stop shopping for businesses looking for those apps, offering common user management, billing, support, and security.
6th Sense Analytics adds new features for collecting development productivity metrics
6th Sense Analytics, which collects and provides metrics on software development projects, this week announced several enhancements to its flagship product. These enhancements provide a more user-friendly interface and organize reports into workspaces that more closely align with the way each user works.
The Morrisville, N.C. company targets its products at companies that want to manage outsourced software development. It automatically collects and analyzes unbiased, activity-based data through the entire software development lifecycle. [Disclosure: 6th Sense has been a sponsor of BriefingsDirect podcasts.]
Among the enhancements to the product are:
- Reports can now be scheduled for daily, weekly or monthly delivery by email, reducing the number of steps required to access reports, providing easier integration into customer work routines.
- Users can now select specific reports providing the ability to see only the information pertinent to their needs.
- The registration process has been streamlined. After inviting a new user to a team, the user’s account is immediately activated and the user is sent a welcome email that provides details for getting started including instructions for desktop installation. The action of removing users has also been simplified.
- Reports are now relevant to any time zone for customers working with resources across a country and on multiple continents.
Last August, I reported on the first metrics that 6th Sense Analytics had released to the public. Those findings confirmed things that people already knew, and provided some unexpected insights. I saw a real value in the data:
And these are not survey results. They are usage data aggregated from some 500 active developers over the past several weeks, and therefore make a better reference point than “voluntary” surveys. These are actual observations of what the developers actually did — not what they said they did, or tried to remember doing (if they decided to participate at all). So the results are empirical for the sample, even if the sample itself may not yet offer general representation.
Friday, June 13, 2008
OpenSpan to ease client/server modernization by ushering apps from desktop to Web server
Promising lower costs and greater control, OpenSpan, Inc. this week unveiled its OpenSpan Platform Enterprise Edition 4.0, which will allow organizations to move both legacy and desktop applications off the desktop and onto the server. There they can be integrated with each other or with rich Internet applications (RIAs) and expressed as Web services.
Key to the new offering is the company's Virtual Broker technology to enable the movement of the applications, allowing companies to rapidly consume Web services within legacy applications or business process automations that span applications. Companies can also expose selective portions of applications over the Web.
According to OpenSpan, of Alpharetta, Ga., the benefits of its approach include lower costs, because companies will have to license fewer copies of software, and greater IT control over end-user computing through centralized application management on the server.
Moving applications off the desktop and onto the server means that companies no longer have to install and expensively maintain discrete copies of each application on every desktop. Users access only the application portion they need. This has the added benefit of reducing desktop complexity.
Yep, there are still plenty of companies and apps out there making their journey from the 1980s to the 1990s -- hey, better late than never. If you have any DOS apps still running, however, I have some land in Florida to sell you.
OpenSpan made a splash last year, when it announced its OpenSpan Studio, which allowed companies to integrate siloed apps. At the time, I explained that process:
How OpenSpan works is that it identifies the objects that interact with the operating system in any program — whether a Windows app, a Web page, a Java application, or a legacy green screen program — exposes those objects and normalizes them, effectively breaking down the walls between applications.
The OpenSpan Studio provides a graphical interface in which users can view programs, interrogate applications and expose the underlying objects. Once the objects are exposed, users can build automations between and among the various programs and apply logic to control the results.
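As a purely conceptual sketch of that normalization idea -- hypothetical interfaces, nothing from OpenSpan's actual SDK -- the effect is to put a Windows control and a Web form field behind one common object model so an automation can move data between them:

```java
// Conceptual sketch: once UI objects from different applications are
// normalized behind a common interface, an automation can read from one
// application and write into another without caring where each lives.
interface NormalizedField {
    String read();
    void write(String value);
}

// Stand-ins for objects exposed from a Windows app and a Web page.
class WindowsTextBox implements NormalizedField {
    private String value = "ACME-12345";   // e.g., an order number in a legacy screen
    public String read() { return value; }
    public void write(String v) { value = v; }
}

class WebFormInput implements NormalizedField {
    private String value = "";
    public String read() { return value; }
    public void write(String v) { value = v; }
}

public class CrossAppAutomation {
    public static void main(String[] args) {
        NormalizedField legacy = new WindowsTextBox();
        NormalizedField webForm = new WebFormInput();

        // The "automation": copy the value across the former application wall.
        webForm.write(legacy.read());
        System.out.println("Copied into web form: " + webForm.read());
    }
}
```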
OpenSpan Platform Enterprise Edition 4.0 will be available this year.
Wednesday, June 11, 2008
Live TIBCO panel examines role and impact of service performance management in enterprise SOA deployments
Listen to the podcast. Read a full transcript. Sponsor: TIBCO Software.
Myriad unpredictable demands are being placed on enterprise application services as Service Oriented Architecture (SOA) grows in use. How will the far-flung deployment infrastructure adapt and how will all the disparate components perform so that complex business services meet their expectations in the real world?
These are the questions put to a live panel of analysts and experts at the recent TIBCO User Conference (TUCON) 2008 in San Francisco. Users such as Allstate Insurance Co. are looking for SOA performance insurance, for thinking through how composite business services will perform and to ensure that these complex services will meet and exceed expected service level agreements.
At the TUCON event, TIBCO unveiled a series of products and services that target service performance management, and that leverage the insights that managed complex events processing (CEP) provides. To help understand how complex events processing and service performance management find common ground -- to help provide a new level of insurance against failure for SOA and for enterprise IT architects -- we asked the experts.
Listen to the podcast, recorded live at TUCON 2008, with Joe McKendrick, an independent analyst and SOA blogger; Sandy Rogers, the program director for SOA, Web services and integration at IDC; Anthony Abbattista, the vice president of enterprise technology strategy and planning for Allstate Insurance Co., and Rourke McNamara, director of product marketing for TIBCO Software. I was the producer and moderated.
Here are some excerpts:
We are describing what could be thought of as insurance. You’ve already gone on the journey of SOA. It’s like going on a plane ride. Are you going to spend the extra few dollars and get insurance? And wouldn't you want to do that before you get into the plane, rather than afterward? Is that how you look at this? Is service performance management insurance for SOA?
It’s interesting to think of [SOA service performance management] as insurance. I think it’s a necessary operational device, for lack of better words. ... I don’t think it’s an option, because what will hurt if you fall down has been proven over and over again. As the guy who has to run an SOA now -- it’s not an option not to do it.
I actually do look at service performance management as insurance -- but along the lines of medical insurance. Anthony said people fall down and people get hurt. You want to have medical insurance. It shouldn't be something that is optional. It shouldn't be something you consider optional.
It’s something that you need to have, and something that people should look at from the beginning when they go on this SOA journey. But it is insurance, Dana. That’s exactly what it does. It prevents you from running into problems. You could theoretically go down this SOA path, build out your services, deploy them, and just get lucky. Nothing will ever happen. But how many go through life without ever needing to see a doctor?
What we are seeing is that, as services are exposed externally to customers, partners, and other systems, it affects the ability to fail-over, to have redundant services deployed out, to be able to track the trends, and be able to plan, going forward, what needs to be supported in the infrastructure, and to even go back to issues of funding. How are you going to prove what's being used by whom to understand what's happening?
So, first, yes, it is visibility. But, from there, it has to be about receiving the information as it is happening, and being able to adjust the behavior of the services and the behavior of the infrastructure that is supporting them. It starts to become very important. There are levels of importance and criticality among the different services and the infrastructure that’s supporting them right now.
But, the way that we want to move to being able to deploy anywhere and leverage virtualization technologies is to break away from the static configuration of the hardware, to the databases, to where all this is being stored now, and to have more of that dynamic resourcing. To leverage services that are deployed external to an organization you need to have more real-time communication.
With something like a Tivoli or a BMC solution, something like a business service management technology, your operational administrators are monitoring your infrastructure.
They are monitoring the application at the application layer and they understand, based on those things, when something is wrong. The problem is that’s the wrong level of granularity to automatically fix problems. And it’s the wrong level of granularity to know where to point that finger, to know whom to call to resolve the problem.
It’s right, if what's wrong is a piece of your infrastructure or an entire application. But if it’s a service that’s causing the problem, you need to understand which service -- and those products and that sort of technology won’t do that for you. So, the level of granularity required is at the service level. That’s really where you need to look.
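A rough sketch of what service-level granularity means in practice -- hypothetical names, not TIBCO's product API -- is simply to time and attribute every service call so that an SLA breach names a specific service rather than a whole application:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-service SLA check: latencies are attributed to individual
// services, so when a composite application slows down the alert names the
// offending service instead of the application as a whole.
public class ServiceLevelMonitor {
    private final Map<String, Long> slaMillis = new ConcurrentHashMap<>();

    public void registerSla(String serviceName, long maxMillis) {
        slaMillis.put(serviceName, maxMillis);
    }

    public <T> T invoke(String serviceName, java.util.function.Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            Long limit = slaMillis.get(serviceName);
            if (limit != null && elapsedMs > limit) {
                // In a real platform this would raise an event for CEP
                // correlation; here it just names the slow service.
                System.err.println("SLA breach: " + serviceName
                        + " took " + elapsedMs + " ms (limit " + limit + " ms)");
            }
        }
    }
}
```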
A lot of the initiatives around ITIL Version 3.0 are starting to get some of those teams thinking in terms of how to associate the business requirements for how services are being supported by the infrastructure, and how they are supported by the utility of the team itself. But, we're a long way away from having everything all lined up, and then having it automatically amend itself. People are very nervous about relinquishing control to an automated system.
So, it is going to be step-by-step, and the first step is getting that familiarity, getting those integrations starting to happen and then starting to let loose.
We're dealing with organizations like Allstate, which have massive size and scale, with 750 services. What do people need to be considering, as we move into yet more complexity with virtualization, cloud computing, and utility grids?
You need to make sure that, as you move from the older ways of doing things -- from the siloed applications, the siloed business unit way of doing things -- to the SOA, services-based way of doing things, you don’t ignore the new complexities you are introducing.
Don’t ignore the new problems that you are introducing. Have a strategy in place to mitigate those issues. Make sure you address that, so that you really do get the advantage, the benefits of SOA.
What I mean by that is with SOA you are reusing services. You are making services available, so that that functionality, that code, doesn’t need to be rewritten time and time again. In doing so you reduce the amount of work, you reduce the cost of building new applications, of building new functionality for your business organization.
You increase agility, because you have reduced the amount of time it takes to build new functionality for your business organization. But, in so doing, you have taken what was one large application, or three large applications, and you have broken them down into dozens or tens of separate smaller units that all need to intercommunicate, play nice with each other, and talk the same language.
Even once you have that in production, you now have a greater possibility for finger-pointing, because, if the business functionality goes down, you can’t say that that application that we just put on is down.
The big question now is what part of that application is down? Whose service is it? Your service, or someone else’s service? Is it the actual servers that support that? Is it the infrastructure that supports that? If you are using virtualization technology, is it the hardware that’s down, or is it the virtualization layer? Is it the software that runs on top of that?
You have this added complexity, and you need to make sure that doesn’t prevent you from seeing the real benefit of doing SOA.