Etelos, Inc. has beefed up its software-as-a-service (SaaS) offerings with the addition of four Web 2.0 stalwarts to its Etelos Marketplace. Users can now take advantage of WordPress, SugarCRM, MediaWiki, and phpBB as hosted solutions from the San Mateo, Calif. company.
The new additions are designed to help enterprises, small businesses, bloggers, and individual users connect with customers and other online communities on an on-demand basis. Users can set up a blog or a wiki with nothing more than a browser and Internet access. Technical details are handled by Etelos.
Founded in 1999, Etelos has been a go-to place for open-source developers eager to get their apps into the marketplace without having to go into the software distribution business. It also provides one-stop shopping for businesses looking for those apps, offering common user management, billing, support, and security.
Saturday, June 14, 2008
6th Sense Analytics adds new features for collecting development productivity metrics
6th Sense Analytics, which collects and provides metrics on software development projects, this week announced several enhancements to its flagship product. These enhancements provide a more user-friendly interface and organize reports into workspaces that more closely align with the way each user works.
The Morrisville, N.C. company targets its products at companies that want to manage outsourced software development. It automatically collects and analyzes unbiased, activity-based data through the entire software development lifecycle. [Disclosure: 6th Sense has been a sponsor of BriefingsDirect podcasts.]
Among the enhancements to the product are:
- Reports can now be scheduled for daily, weekly, or monthly delivery by email, reducing the number of steps required to access reports and providing easier integration into customer work routines.
- Users can now select specific reports, giving them the ability to see only the information pertinent to their needs.
- The registration process has been streamlined. After inviting a new user to a team, the user’s account is immediately activated and the user is sent a welcome email that provides details for getting started, including instructions for desktop installation. The action of removing users has also been simplified.
- Reports are now relevant to any time zone for customers working with resources across a country and on multiple continents.
Last August, I reported on the first metrics that 6th Sense Analytics had released to the public. Those findings confirmed things that people already knew, and provided some unexpected insights. I saw a real value in the data:
And these are not survey results. They are usage data aggregated from some 500 active developers over the past several weeks, and therefore make a better reference point than “voluntary” surveys. These are actual observations of what the developers actually did — not what they said they did, or tried to remember doing (if they decided to participate at all). So, the results are empirical for the sample, even if the sample itself may not yet offer general representation.
Friday, June 13, 2008
OpenSpan to ease client/server modernization by ushering apps from desktop to Web server
Promising lower costs and greater control, OpenSpan, Inc., this week unveiled its OpenSpan Platform Enterprise Edition 4.0, which will allow organizations to move both legacy and desktop applications off the desktop and onto the server. This will allow them to be integrated with each other or rich Internet applications (RIAs) and expressed as Web services.
Key to the new offering is the company's Virtual Broker technology to enable the movement of the applications, allowing companies to rapidly consume Web services within legacy applications or business process automations that span applications. Companies can also expose selective portions of applications over the Web.
According to OpenSpan, of Alpharetta, Ga., the benefits of its approach include lower costs, because companies will have to license fewer copies of software, and greater IT control over end-user computing, thanks to centralized application management on the server.
Moving applications off the desktop and onto the server means that companies no longer have to install and expensively maintain discrete copies of each application on every desktop. Users access only the application portion they need. This has the added benefit of reducing desktop complexity.
Yep, there are still plenty of companies and apps out there making their journey from the 1980s to the 1990s -- hey, better late than never. If you have any DOS apps still running, however, I have some land in Florida to sell you.
OpenSpan made a splash last year, when it announced its OpenSpan Studio, which allowed companies to integrate siloed apps. At the time, I explained that process:
How OpenSpan works is that it identifies the objects that interact with the operating system in any program — whether a Windows app, a Web page, a Java application, or a legacy green screen program — exposes those objects and normalizes them, effectively breaking down the walls between applications.
The OpenSpan Studio provides a graphical interface in which users can view programs, interrogate applications and expose the underlying objects. Once the objects are exposed, users can build automations between and among the various programs and apply logic to control the results.
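To make that interrogate-and-expose idea more concrete, here is a minimal, hypothetical sketch of what a normalized application object and a cross-application automation might look like. Every name below is invented for illustration; this is not OpenSpan's actual API.

```java
// Hypothetical sketch -- the interfaces here are invented for illustration
// and are not OpenSpan's actual API.
import java.util.HashMap;
import java.util.Map;

// A normalized handle to a UI object, whatever technology it came from:
// a Windows control, a Web-page field, or a green-screen region.
interface ExposedObject {
    String read();               // read the object's current value
    void write(String value);    // push a new value into the object
}

public class AutomationSketch {
    public static void main(String[] args) {
        // Pretend these maps were interrogated out of two different applications.
        Map<String, ExposedObject> legacyApp = new HashMap<String, ExposedObject>();
        Map<String, ExposedObject> webApp = new HashMap<String, ExposedObject>();
        legacyApp.put("customerId", new StubField("C-1001"));
        webApp.put("customerId", new StubField(""));

        // The "automation": move a value across the application boundary
        // and apply a bit of logic to control the result.
        String id = legacyApp.get("customerId").read();
        if (!id.isEmpty()) {
            webApp.get("customerId").write(id);
        }
        System.out.println("Web app now sees: " + webApp.get("customerId").read());
    }

    // A stand-in for a real exposed control.
    static class StubField implements ExposedObject {
        private String value;
        StubField(String value) { this.value = value; }
        public String read() { return value; }
        public void write(String v) { this.value = v; }
    }
}
```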
OpenSpan Platform Enterprise Edition 4.0 will be available this year.
Wednesday, June 11, 2008
Live TIBCO panel examines role and impact of service performance management in enterprise SOA deployments
Listen to the podcast. Read a full transcript. Sponsor: TIBCO Software.
Myriad unpredictable demands are being placed on enterprise application services as Service Oriented Architecture (SOA) grows in use. How will the far-flung deployment infrastructure adapt and how will all the disparate components perform so that complex business services meet their expectations in the real world?
These are the questions put to a live panel of analysts and experts at the recent TIBCO User Conference (TUCON) 2008 in San Francisco. Users such as Allstate Insurance Co. are looking for SOA performance insurance: thinking through how composite business services will perform, and ensuring that these complex services meet or exceed expected service-level agreements.
At the TUCON event, TIBCO unveiled a series of products and services that target service performance management, and that leverage the insights that managed complex events processing (CEP) provides. To help understand how complex events processing and service performance management find common ground -- to help provide a new level of insurance against failure for SOA and for enterprise IT architects -- we asked the experts.
Listen to the podcast, recorded live at TUCON 2008, with Joe McKendrick, an independent analyst and SOA blogger; Sandy Rogers, the program director for SOA, Web services and integration at IDC; Anthony Abbattista, the vice president of enterprise technology strategy and planning for Allstate Insurance Co., and Rourke McNamara, director of product marketing for TIBCO Software. I was the producer and moderator.
Here are some excerpts:
We are describing what could be thought of as insurance. You’ve already gone on the journey of SOA. It’s like going on a plane ride. Are you going to spend the extra few dollars and get insurance? And wouldn't you want to do that before you get on the plane, rather than afterward? Is that how you look at this? Is service performance management insurance for SOA?
It’s interesting to think of [SOA service performance management] as insurance. I think it’s a necessary operational device, for lack of better words. ... I don’t think it’s an option, because what will hurt if you fall down has been proven over and over again. As the guy who has to run an SOA now -- it’s not an option not to do it.
I actually do look at service performance management as insurance -- but along the lines of medical insurance. Anthony said people fall down and people get hurt. You want to have medical insurance. It shouldn't be something that is optional. It shouldn't be something you consider optional.
It’s something that you need to have, and something that people should look at from the beginning when they go on this SOA journey. But it is insurance, Dana. That’s exactly what it does. It prevents you from running into problems. You could theoretically go down this SOA path, build out your services, deploy them, and just get lucky. Nothing will ever happen. But how many go through life without ever needing to see a doctor?
What we are seeing is that, as services are exposed externally to customers, partners, and other systems, it affects the ability to fail-over, to have redundant services deployed out, to be able to track the trends, and be able to plan, going forward, what needs to be supported in the infrastructure, and to even go back to issues of funding. How are you going to prove what's being used by whom to understand what's happening?
So, first, yes, it is visibility. But, from there, it has to be about receiving the information as it is happening, and being able to adjust the behavior of the services and the behavior of the infrastructure that is supporting them. It starts to become very important. There are levels of importance and criticality among the different services and the infrastructure that's supporting them right now.
But, the way that we want to move to being able to deploy anywhere and leverage virtualization technologies is to break away from the static configuration of the hardware, to the databases, to where all this is being stored now, and to have more of that dynamic resourcing. To leverage services that are deployed external to an organization you need to have more real-time communication.
With something like a Tivoli or a BMC solution, something like a business service management technology, your operational administrators are monitoring your infrastructure.
They are monitoring the application at the application layer and they understand, based on those things, when something is wrong. The problem is that’s the wrong level of granularity to automatically fix problems. And it’s the wrong level of granularity to know where to point that finger, to know whom to call to resolve the problem.
It’s right, if what's wrong is a piece of your infrastructure or an entire application. But if it’s a service that’s causing the problem, you need to understand which service -- and those products and that sort of technology won’t do that for you. So, the level of granularity required is at the service level. That’s really where you need to look.
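To illustrate why that service-level granularity matters, here is a generic sketch of per-service latency tracking — my own illustration, not any TIBCO product — that traces a slow composite back to the one service breaching its service-level agreement:

```java
import java.util.HashMap;
import java.util.Map;

// A generic sketch of service-level performance tracking; the names and
// thresholds are illustrative, not any vendor's actual API.
public class ServiceMonitor {
    private final Map<String, Long> totalMillis = new HashMap<String, Long>();
    private final Map<String, Long> callCounts = new HashMap<String, Long>();
    private final long slaMillis;

    public ServiceMonitor(long slaMillis) { this.slaMillis = slaMillis; }

    // Record one observed invocation of a named service.
    public void record(String service, long elapsedMillis) {
        totalMillis.put(service, get(totalMillis, service) + elapsedMillis);
        callCounts.put(service, get(callCounts, service) + 1);
    }

    // Report which individual service -- not which application -- is slow.
    public void report() {
        for (String service : callCounts.keySet()) {
            long avg = totalMillis.get(service) / callCounts.get(service);
            if (avg > slaMillis) {
                System.out.println(service + " averaging " + avg
                        + " ms, breaching the " + slaMillis + " ms SLA");
            }
        }
    }

    private long get(Map<String, Long> m, String k) {
        Long v = m.get(k);
        return v == null ? 0L : v;
    }

    public static void main(String[] args) {
        ServiceMonitor monitor = new ServiceMonitor(200);
        monitor.record("quoteService", 150);
        monitor.record("creditCheckService", 480);  // the real culprit
        monitor.record("quoteService", 170);
        monitor.report();
    }
}
```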
A lot of the initiatives around ITIL Version 3.0 are starting to get some of those teams thinking in terms of how to associate the business requirements for how services are being supported by the infrastructure, and how they are supported by the utility of the team itself. But, we're a long way away from having everything all lined up, and then having it automatically amend itself. People are very nervous about relinquishing control to an automated system.
So, it is going to be step-by-step, and the first step is getting that familiarity, getting those integrations starting to happen and then starting to let loose.
We're dealing with organizations like Allstate, which have massive size and scale, with 750 services. What do people need to be considering, as we move into yet more complexity with virtualization, cloud computing, and utility grids?
You need to make sure that, as you move from the older ways of doing things -- from the siloed applications, the siloed business unit way of doing things -- to the SOA, services-based way of doing things, you don’t ignore the new complexities you are introducing.
Don’t ignore the new problems that you are introducing. Have a strategy in place to mitigate those issues. Make sure you address that, so that you really do get the advantage, the benefits of SOA.
What I mean by that is with SOA you are reusing services. You are making services available, so that that functionality, that code, doesn’t need to be rewritten time and time again. In doing so you reduce the amount of work, you reduce the cost of building new applications, of building new functionality for your business organization.
You increase agility, because you have reduced the amount of time it takes to build new functionality for your business organization. But, in so doing, you have taken what was one large application, or three large applications, and you have broken them down into dozens or tens of separate smaller units that all need to intercommunicate, play nice with each other, and talk the same language.
Even once you have that in production, you now have a greater possibility for finger-pointing, because, if the business functionality goes down, you can’t say that that application that we just put on is down.
The big question now is what part of that application is down? Whose service is it? Your service, or someone else’s service? Is it the actual servers that support that? Is it the infrastructure that supports that? If you are using virtualization technology, is it the hardware that’s down, or is it the virtualization layer? Is it the software that runs on top of that?
You have this added complexity, and you need to make sure that doesn’t prevent you from seeing the real benefit of doing SOA.
Monday, June 9, 2008
Serena's Mashup Composer ushers content and widgets to on-demand business mashups
Acting as a mashup matchmaker, Serena Software is bringing together content -- widgets, RSS feeds, and Flash components -- with enterprise data for on-demand business mashups, giving non-technical users access to powerful customized applications without burdening IT departments.
On Tuesday, June 10, Serena will announce the upcoming major iteration of the Redwood City, Calif. company's Mashup Composer service, which allows users to drag and drop a wide variety of consumer information and combine it with data from internal applications -- such as in salesforce.com, Siebel, and Oracle -- to create rich Internet mashups (RIMs).
Users will be able to leverage any kind of widget or rich Internet application, including Adobe Flash, Amazon search, Flickr, Microsoft Silverlight, RSS feeds, YouTube, any of the 30,000 Google gadgets, LinkedIn or Facebook profiles, or external newsfeeds. That's a lot of stuff, and there will soon be even more, especially the fruits of fast-charging social networking.
Serena explains how this works:
"Imagine a scenario where a sales rep is preparing for a big meeting with a new customer. The rep might start with the customer’s record in salesforce.com, and have the mashup fetch related information like a photo and details from the customer’s Linked In or Facebook profile, external news feeds showing the company’s latest stock price, credit report information from a Dunn & Bradstreet Web service, and widgets showing local weather and traffic in the customer’s location. Soon the rep has all the information needed for the meeting. It’s as easy as personalizing a Yahoo! home page."
While some IT folks may worry about putting this functionality in the hands of non-technical people, Serena says it has that worry covered, providing a "proven governance framework that provides the reliability, security, and compliance that IT requires."
I wrote about this issue last August when I blogged on Serena and what was then its upcoming "Project Vail:"
"The trick is how to allow non-developers to mashup business services and processes, but also make such activities ultimately okay with IT. Can there be a rogue services development and deployment ecology inside enterprises that IT can live with? How can we ignite 'innovation without permission' but not burn the house down?
"Serena believes they can define and maintain such balances, and offer business process mashups via purely visual tools either on-premises or in the cloud."
"Serena believes they can define and maintain such balances, and offer business process mashups via purely visual tools either on-premises or in the cloud."
The new functionality in the Mashup Composer will be available free of charge as part of Serena's on-demand release in the third quarter. Word has it that the tool will be free, and that pricing will follow the cloud model, based on infrastructure use over time.
The Serena model augurs well for my earlier comments on the power and need for WOA. Again, I'm not locked into the WOA nomenclature, but the goal of spurring on SOA use and methods by energizing users with Web content remains.
Serena defines its Mashup Composer process as one that enables "business mashups." I like the imagery that connotes. I'd take it a step further and join it with my WOA value comments, such that business mashups become a catalyst to broader SOA use and adoption while also extending SOA value into the managed cloud.
Consider the power of combining and leveraging the best of SOA, the best of on-demand business mashups, and the powerful insights on users and their communities as defined by the social graph information now available from the social networks.
Effectively bringing together business assets, open web content and defined social relations will offer something quite new and very productive over the next few years. Those companies that jump on this early and master it will develop a broad advantage.
Thursday, June 5, 2008
Apache CXF: What the future holds for Web services frameworks and dynamic languages
Listen to the podcast. Read a full transcript. Sponsor: IONA Technologies.
More open source server components and frameworks continue to emerge from developer communities. One of the latest, Apache CXF, an open-source Web services framework, graduated from incubation recently to become a full Apache Foundation project.
The progeny of the previous merger of the ObjectWeb-managed Celtix project and the XFire project at Codehaus, CXF joins a growing pool of Apache and other open-source projects supporting service-oriented architecture (SOA) infrastructure. Many, like CXF, also enjoy commercial support and associated commercial products, such as IONA Technologies' FUSE.
CXF is on the cusp of broadening beyond conventional Web services, however, as users seek to align the framework with JavaScript, and perhaps other dynamic programming languages, such as Groovy and Ruby. Interoperability is the goal, with both backward and forward messaging compatibility across an expanding set of supported technologies. Community-based open-source development is adept at adding such breadth and depth to the benefit of all users, and CXF is no exception.
To learn more about CXF and the direction for SOA, middleware, and open-source development, I recently spoke with Dan Kulp, a principal engineer at IONA who has been deeply involved with CXF; Raven Zachary, the open-source research director at The 451 Group; and Benson Margulies, the CTO of Basis Technology.
Here are some excerpts:
If you are doing any type of SOA stuff, you really need some sort of Web-service stack. There are applications written for ServiceMix and JBI that don't do any type of SOAP calls or anything like that, but those are becoming fewer and farther between. Part of what our Web services bring is the ability to go outside of your little container and talk to other services that are available, or even within your company or maybe with a business partner or something like that.
The whole Apache model is mix and match. It's not only the licensing scheme -- the Apache license is a little easier for commercial vendors to digest, modify, and add to, compared to the GPL -- but also, I think, the inherent nature of the underlying infrastructure technologies.
When you deploy an application, especially using open source, it tends to be several dozen distinct components that are being deployed. This is especially true in Java apps, where you have a lot of components or frameworks that are bundled into an application. So, you would certainly see CXF being deployed alongside of other technologies to make that work. Things like ServiceMix or Camel, as you mentioned, ActiveMQ, Tomcat, certainly Apache Web Server, these sorts of technologies, are the instrument to which these services are exposed.
A lot of these projects, like Camel and ServiceMix, require some sort of Web-services stack, and they've basically come to CXF as a very easy-to-use and very embeddable stack to meet their Web-services needs. ... One of CXF's advantages is that you can deliver to some third party a stack, containing your stuff, that interacts with all of their existing stuff in a nice, lightweight fashion. CXF is non-intrusive in that regard.
CXF gets a lot of attention because it is a full open-source framework, which is completely committed to standards. It gives easy-to-use, relatively speaking, support for them and, as in many other areas, focuses on what the people in the outside world seem to want to use the kit for -- as opposed to some particular theoretical idea ... about what to use it for.
Apache CXF takes a fairly different approach, making the code-first aspect primary. ... So, a lot of these more junior-level developers can pick up and start working with Web services very quickly and very easily, without having to learn a lot of the more technical details.
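For a sense of what code-first means in practice, here is a minimal JAX-WS-style sketch of the sort CXF supports: annotate a plain class and publish it, and the service contract (WSDL) is generated from the code rather than authored up front. The service name and address are my own placeholders:

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Code-first: start from a plain annotated class; the contract (WSDL)
// is derived from the code instead of being authored first.
@WebService
public class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // With CXF on the classpath, this publishes a live SOAP endpoint;
        // the generated WSDL appears at the address with ?wsdl appended.
        Endpoint.publish("http://localhost:9000/greeting", new GreetingService());
        System.out.println("Service running at http://localhost:9000/greeting?wsdl");
    }
}
```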
It's actually kind of fascinating, and one of the neatest things about working in an open-source project is seeing where it pops up. ... One of the examples of that is Groovy Web service. Groovy is another dynamic language built in Java that allows you to do dynamic things. I'm not a big Groovy user, but they actually had some requirements to be able to use Groovy to talk to some Web services, and they immediately started working with CXF.
They liked what they saw, and they hit a few bugs, which was expected, but they contributed back to the CXF community. I kept getting bug reports from people and was wondering what they were doing. It turns out that Groovy's Web-services stack is now based on CXF. That type of thing is very fascinating from my standpoint, just to see that kind of stuff develop.
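That dynamic-language fit is easier to see with CXF's dynamic client, which reads a WSDL at runtime and invokes operations by name, with no precompiled stubs. A sketch, with a hypothetical WSDL address:

```java
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.jaxws.endpoint.dynamic.JaxWsDynamicClientFactory;

// A sketch of CXF's dynamic client: read the WSDL at runtime and invoke
// operations by name, with no generated stub classes. The WSDL URL below
// is a hypothetical placeholder.
public class DynamicClientSketch {
    public static void main(String[] args) throws Exception {
        JaxWsDynamicClientFactory factory = JaxWsDynamicClientFactory.newInstance();
        Client client = factory.createClient("http://localhost:9000/greeting?wsdl");

        // Invoke the "greet" operation by name; results come back as Object[].
        Object[] results = client.invoke("greet", "World");
        System.out.println(results[0]);
    }
}
```

This is roughly the hook a dynamic language layer needs: everything is resolved at runtime, so nothing has to be compiled against the service up front.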
Apache CXF 2.1 was released about a week after we graduated, and it brought forth a whole bunch of new functionality. The JavaScript support was one piece. Whole new tooling was another, along with a CORBA binding and some new REST-based APIs. So, 2.1 was a major step forward.
Now that we've graduated, there are a lot more people looking at it, which is good. We're getting a lot more input from users, and a lot of people are submitting other ideas. So, there is a certain track of people just trying to get some bug fixes and some support in place for those other people.
I like the fact that in CXF they are looking at a variety of protocols. It's not just one implementation of Web services. There's SOAP, REST, CORBA, other technologies, and then a number of transports, not just HTTP. The fact is that when you talk to enterprises, there's not a one-size-fits-all implementation for Web services. You need to really look at services, exposing them through a variety of technologies.
I like that approach. It really matches the needs of a larger variety of enterprise organizations, rather than just a specific technology implementation of Web services. That's the approach that you're going to see from open-source projects in this space. The ones that provide the greatest diversity of protocols and transports are going to do quite well.
There's a whole bunch of other ideas that we're working on and fleshing out. On the code-first stuff that I mentioned earlier, we have a bunch of ideas about how to make code-first even better.
There are certain tool kits where you kind of have to delve down into either configuration or WSDL documents to accomplish what you want. It would be nice if you could just embed some annotations in your code, or something like that, to accomplish some of that stuff. We're going to be moving some of those ideas forward.
There's also a whole bunch of Web-services standards such as WS-I and WS-SecureConversation that we don't support today, but we are going to be working on to make sure that they are supported.
When you look at growth opportunities, back in 2001, JBoss app server was a single-digit market share, compared to the leading technologies at the time, WebSphere from IBM and WebLogic from BEA. In the course of four years, that technology went from single-digit market share to actually being the number one deployed Java app server in the market. I think it doesn't take much time for a technology like CXF to capture the market opportunity.
So, watch this space. I think this technology and other technologies like it, have a very bright future.
Wednesday, June 4, 2008
JustSystems moves dynamic document management deeper into enterprise
Structured authoring -- it's not just for technical documents any more. JustSystems today announced XMetaL for Enterprise Content Management (ECM), which integrates with more than 20 commercial repositories and file systems.
This new offering provides seamless integration to all leading content management systems, including repositories from IBM FileNet, EMC Documentum, OpenText, Interwoven, and Microsoft. [Disclosure: JustSystems is a sponsor of BriefingsDirect podcasts.]
JustSystems has also announced an original equipment manufacturer (OEM) agreement with IBM, under which the company will embed and resell IBM WebSphere Information Integrator Content Edition (IICE) with the new XMetaL product. This is designed to allow companies to broaden XMetaL deployments and to leverage repositories they're currently using to store and manage content.
According to JustSystems, XMetaL for ECM will allow companies to start using structured authoring, no matter which repositories are already in place. Companies will also be able to deploy it across departments without disrupting current content management, as well as integrate and automate content creation and publishing across repositories.
Structured documents can be a valuable ally of service-oriented architecture (SOA) by providing data to workers in the document formats to which they are accustomed, and, at the same time, allowing them to focus on authoritative data and content, while eliminating the drudgery of validating and reconciling documents.
I recently wrote a white paper on the role of structured authoring, dynamic documents, and their connection to SOA. Read the whole paper here.
Back in April, I recorded a podcast with Jake Sorofman, senior vice president of marketing and business development for JustSystems North America. The sponsored podcast described the tactical benefits of recognizing the dynamic nature of documents, while identifying the strategic value of exposing documents and making them accessible through applications and composite services via SOA.
In the podcast, Sorofman explained the value of structured authoring in the enterprise:
"There are really a couple of different issues at work here. The first is the complexity of a document makes it very difficult to keep it up to date. It’s drawing from many different sources of record, both structured and unstructured, and the problem is that when one of the data elements changes, the whole document needs to be republished. You simply can’t keep it up-to-date.
"This notion of the dynamic documents ensures that what you’re presenting is always an authoritative reflection of the latest version of the truth within the enterprise. You never run the risk of introducing inaccurate, out of date, or stale information to field base personnel."
"This notion of the dynamic documents ensures that what you’re presenting is always an authoritative reflection of the latest version of the truth within the enterprise. You never run the risk of introducing inaccurate, out of date, or stale information to field base personnel."
You can listen to the podcast here or read a full transcript here.
Tuesday, June 3, 2008
Spike in enterprise 'events' spurs debut of Event Processing Technical Society
The recent growth -- and expected spike -- in business event data in enterprises has led a group of IT industry leaders to form the Event Processing Technical Society (EPTS), designed to encourage adoption and effective use of event processing methods and technology in applications.
Among the founding members are such heavy hitters as IBM, Oracle, TIBCO Software, Inc., Gartner Research, Coral8 Inc., Progress Software, and StreamBase.
Event processing pioneer Dr. David Luckham, a founding member of EPTS, explained in a press release:
“We've had decades of development of event processing technology for simulation systems, networking, and operations management. Now, the explosion in the amount of business event data being generated in modern enterprises demands a new event processing technology foundation for business intelligence and enterprise management applications.”
EPTS has five initial goals:
- Document usage scenarios where event processing brings business benefit
- Develop a common event-processing glossary for its members and the community-at-large to use when dealing with event processing
- Accelerate the development and dissemination of best practices for event processing
- Encourage academic research to help establish event processing as a research discipline and encourage the funding of applied research
- Work with existing standards development organizations, such as the Object Management Group (OMG), OASIS, and the W3C, to assist in developing standards in the areas of event formats, event processing interoperability, and event processing (meta) modeling and (meta) languages
Event processing was a hot topic at the recent TIBCO user conference, TUCON. (Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.)
Fellow ZDNet blogger Joe McKendrick has some thoughts on event processing, too.
The new consortium plans three additional work groups. The first will focus on developing information on event processing architecture. Another will identify requirements for the interoperability among event processing applications and platforms. The third will collaborate with the academic community to develop courses in this area.
The advance in the scale and complexity of streams of events will place a greater burden on infrastructure and architects. But the ability to manage and harvest analysis from these events could be extremely powerful, and provide a lasting differentiator for expert practitioners.
While the processing of such events has its roots in financial companies and transactions, the engine for dealing with such throughputs and variable paths will find uses in many places. The vaulting commerce expected as always-on mobile Web, GPS location and social graph data collide is a prime example.
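For a sense of what such an engine does at its simplest, consider that event processing detects patterns in data in motion rather than querying data at rest. This generic sliding-window sketch, my own illustration rather than any vendor's engine, flags a burst of events inside a time window:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A generic sketch of one core CEP primitive: a sliding time window over a
// stream of events, flagging a pattern (a burst) as it happens. This is
// illustrative only, not any vendor's engine.
public class BurstDetector {
    private final Deque<Long> timestamps = new ArrayDeque<Long>();
    private final long windowMillis;
    private final int threshold;

    public BurstDetector(long windowMillis, int threshold) {
        this.windowMillis = windowMillis;
        this.threshold = threshold;
    }

    // Feed one event; returns true when the window holds a burst.
    public boolean onEvent(long timestampMillis) {
        timestamps.addLast(timestampMillis);
        // Evict events that have slid out of the window.
        while (!timestamps.isEmpty()
                && timestamps.peekFirst() < timestampMillis - windowMillis) {
            timestamps.removeFirst();
        }
        return timestamps.size() >= threshold;
    }

    public static void main(String[] args) {
        BurstDetector detector = new BurstDetector(1000, 3); // 3 events within 1 second
        long[] stream = {0, 400, 700, 2500};
        for (long t : stream) {
            System.out.println("t=" + t + " burst=" + detector.onEvent(t));
        }
    }
}
```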
We hit on these types of transactions as the progeny of online advertising in a recent BriefingsDirect Analyst Insights roundtable podcast.
Consumers and end users should begin to enjoy what they may well perceive as "intelligent" services -- based on the fruits of complex events processing -- from their devices and providers. Harvesting and using more data from sensors and device meshes will also require the scale that event processing requires.
We should also chalk this up to yet another facet of the growing definition of cloud computing, as event processing as a service within a larger set of cloud-based services will also build out in the coming years. The whole trend of event processing bears close monitoring.
EPTS will hold its next meeting Sept. 17-19 in Stamford, Conn. More information on the consortium can be found at the EPTS Web site.