Saturday, March 8, 2008
JVM on iPhone adds to mobile computing's seduction of enterprise IT buyers
The iPhone may have hit the trifecta with Microsoft Exchange support (take that RIM!), the new SDK, and now the probable June arrival of a native JVM. These add up to an enterprise-ready mobile endpoint that ushers the iPhone from a smart phone/PDA/browser into the first (but not last) true mobile Internet device (MID) for fun and work.
Apple's SDK and targeted VC funding to spur on native iTunes apps will eventually pay huge dividends for consumers and media-hungry power users. But businesses looking for better mobile endpoints won't rush to another client platform.
The enterprise trend is away from supporting client-installed applications to embracing RIAs, web services, SOA-supported SaaS, and such client frameworks as Flex/Flash/AIR and Silverlight. The browser is king, more than ever, forever. Java can still play in this game quite well, however, and (performance willing) extend enterprise investments in Java to the edge.
Sun will need to make the JVM on the iPhone scream. The iPhone and its MID ilk could be what client-side Java has needed all along. There will need to be some compelling apps right away for this Java-iPhone mashup to gain traction. That is certainly doable, given the global stable of Java developers.
The iPhone won't have the MID field to itself for long, so time is of the essence. There will be JVMs elsewhere, and Android and the OHA could quickly bear fruit. This is a huge potential market. Apple needs to seduce developers, IT architects, and executives now. The Safari browser and "pinch" UI are Apple's competitive edge on the edge.
So what will immediately intrigue enterprise IT departments? Secure browser connections to mainstay enterprise applications, along with email and groupware. The Microsoft Exchange announcement this week takes care of that. And the OSGi-based Lotus Notes et al. from IBM should follow suit.
Incidentally, look for some compelling OSGi runtime announcements at this month's EclipseCon. OSGi, having come from the embedded world, makes total sense for the iPhone.
And there is more than one way to skin the enterprise iPhone cat. You may also recall that Sybase highlighted a way to bring enterprise email and PIM, as well as some apps, to the iPhone several months ago, at the additional expense of using its servers. For shops already using the iAnywhere approach, this may be the way to go.
But secure web browser connections to existing enterprise web applications are the real treat the iPhone can deliver to enterprises, the one that would encourage them to actually buy iPhones en masse for their workers. It may be an offer they can't refuse.
I hope that the Mozilla Foundation takes the iPhone SDK and develops a lightweight Firefox browser for the iPhone ASAP. Combine the web apps that the iPhone Firefox and Safari together support, toss in SSL via Java, and create the means to easily set up VPNs -- and that's when iPhone becomes the darling of the mobile enterprise.
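As a minimal sketch of what "SSL via Java" could look like in practice -- assuming the iPhone JVM exposes the standard javax.net.ssl classes, and using a purely hypothetical intranet URL -- opening a secure connection to an enterprise web application takes only a few lines:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class SecureEnterpriseClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical intranet web application exposed over HTTPS
            URL url = new URL("https://intranet.example.com/timesheets");

            // HttpsURLConnection negotiates the SSL/TLS handshake for us
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.connect();
            System.out.println("Negotiated cipher suite: " + conn.getCipherSuite());

            // Read the response body line by line
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
            conn.disconnect();
        }
    }

Nothing exotic -- which is exactly the point: the enterprise-grade security plumbing already exists in standard Java.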
Of course the critical mass of such adoption pushes the iPhone beyond the role of MID and begins to eat away at the definition of a personal computer. Use Bluetooth or USB to hook up the enterprise iPhone to a keyboard and mouse, and maybe a monitor, and get rid of those PCs altogether. All mobile, all the time. The iPhone becomes the ubiquitous enterprise thin client, at less than $500, and it's a phone too. And you can take it anywhere and work. One device. Nice.
But for now, I don't see the cost-benefits in writing native iPhone apps or porting existing enterprise apps to the iPhone. Maybe never. You don't need native computing and local data storage to make great use of the iPhone for business purposes. As the PC goes to mostly browser use, the MID takes over.
Yes, there will be oodles of interesting innovation -- native iPhone apps that can aid user productivity and make users better connected wherever they go.
The iPhone can become the MID for business, and start to replace the PC outright for a significant portion of workers. The only question is whether the users will buy the iPhone and have their IT departments set it up for enterprise use, or whether the IT departments will buy it for the workers first.
Thursday, March 6, 2008
It's good news, bad news: Microsoft gets its Internet act together
The Google fear on the business model disruption, the Apple fear on the client disruption, and the Amazon fear on the cloud disruption seem to be making Microsoft do what anti-trust regulators, Java, open source developers, Linux, Firefox, OpenDocument, IBM, Novell, and a chorus of Microsoft bashers like myself have been trying to do for many years. And that is ultimately to save Microsoft from itself.
At the PDC in LA a mere 2.5 years ago, it seemed like Redmond was slipping backwards in time, into a gradual descent, with its Connected Computing drive -- with us all connected to the Indigo bus using only MS file formats. This was, as I said at the time, an attempt to make the web a client/server affair, with Microsoft's fat clients (not its browser) as the client bits. Microsoft seemed to think it had whipped the web sufficiently to go back to its old tricks: integrated tools plus client monopoly plus closed packaged apps equals total domination.
Now we're seeing a much different approach, one of actually meeting the Internet on its terms and making the Microsoft way shift -- not the other way around. We'll see more open tools, plus less lock-in to the client monopoly, plus less closed and packaged services, with a differentiated subscription and ad-supported business model. Total domination, perhaps not; but a long slog to irrelevance and demise -- no way.
With Silverlight, we see RIA tools that bridge client environments -- even non-Microsoft mobile runtimes and Linux. We're seeing an IE 8 that supports (rather than subverts) de facto and official web standards. With Microsoft Online Services you can side-step the closed fat client apps. We're seeing low-cost commodity infrastructure in the cloud with SQL Server Data Services instead of server lock-in. [Message to Sun: Get MySQL Services on your cloud ASAP, and for free!]
Yes, all of you who have been surrounding Microsoft with 1,000 cuts for years, ganging up on them, picking on them, teasing them, disrupting their cash cows and taking the punch out of their arrogance -- you have done a great job. You mooned the giant, and the giant changed instead of charged. Jack did not get a chance to cut the beanstalk while the giant was still in descent. The giant went back to the lab in his castle, led by Ray Ozzie.
As a result, Google is not going to get away with chopping down the vine unmolested. Yahoo and Amazon are not going to combine to form the perfect web services/ecommerce cloud. Apple remains an elitist playground with a nice music business. Time Warner, AT&T, Motorola, Novell and Red Hat remain out to lunch. Microsoft will still generate enough gravity to hold IBM, SAP, HP, Dell, Intel, Nokia, and the global SIs in a tight orbit. And if Microsoft plays the advertising network card (with Yahoo) right, it will form a new center of gravity for media and entertainment (and perhaps business services) to provide the second source to Google.
Trouble is, this is a good news, bad news moment.
The good news is that Microsoft can change and adapt (at least in its intentions and early deliverables so far). The bad news is that Microsoft can change and adapt, even if it needs to hamstring its traditional cash cows to do it.
Microsoft used to try to head off the need for a web monopoly play (almost impossible by definition) by embracing and extending its way to keeping its existing monopoly as the gatekeeper to the business and commerce Web. Now it is making the bold move to convert its old monopoly into the new largest comprehensive web player. It may not be number one in all things web, but it might be in the top three for most everything web -- and that is also the bad news.
Microsoft, the violator of anti-trust laws and the consent decrees and EU rulings, is now poised to become the second source to Google in the ad-supported media world. Meet the new boss, same as the old boss.
And that raises the same old questions. Will the power increase to a point where the openness declines? Will the standards over time be increasingly set by the de facto market leader? Will the Internet and its efficiencies work best for consumers and users, or for those that can manipulate it best?
On the other hand, has Microsoft shot itself in the foot by going so open that it can never go back? Is lock-in on the web no longer possible -- is it no longer feasible for one vendor to create a choke-hold, with enough critical mass and influence, to reinstall the Church and shut down the bazaar?
These are the questions we'll need to revisit in three years. Seriously.
Wednesday, March 5, 2008
Cloud computing for enterprises, work it through your head
In combination, cluster computing and multi-core computers have the potential to provide unprecedented performance, scalability and reliability for enterprise software.

The new paper goes on to detail several enterprise computing use-case scenarios that show how cloud computing architectures and methodologies, if enterprise developers can exploit them, will rapidly advance cost-benefits.
Much of the significant benefit evident in the ideology of multicore and cluster computing -- lower costs, higher availability and scalability -- is effectively negated by the cost, time, risk and complexity involved in developing and deploying software that can run on these systems.
... What hinders businesses from taking advantage of multicore and clustered hardware is the lack of a simple means -- such as a Rapid Application Development (RAD) method -- so that software developers can quickly develop, test and deploy enterprise software on these systems.
By taking the engineering complexity away from multi-core and cluster computing, the Hiperware Platform makes it significantly easier for developers to write software that can be partitioned across multiple computers, CPU cores, or virtual machines.
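The underlying idea will be familiar to Java developers: split a job into independent units and hand them to a pool of workers sized to the hardware. Purely as a vendor-neutral illustration (this is not Hiperware's actual programming model, which I haven't seen), the standard java.util.concurrent APIs express it like this:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PartitionedJob {
        public static void main(String[] args) throws Exception {
            // Size the worker pool to the CPU cores actually available
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            // The work to partition: summing slices of a large array
            final long[] data = new long[10000000];
            for (int i = 0; i < data.length; i++) {
                data[i] = i;
            }

            int sliceSize = data.length / cores;
            List<Future<Long>> partials = new ArrayList<Future<Long>>();
            for (int p = 0; p < cores; p++) {
                final int start = p * sliceSize;
                final int end = (p == cores - 1) ? data.length : start + sliceSize;
                partials.add(pool.submit(new Callable<Long>() {
                    public Long call() {
                        long sum = 0;
                        for (int i = start; i < end; i++) {
                            sum += data[i];
                        }
                        return sum;
                    }
                }));
            }

            // Gather the partial results back into one answer
            long total = 0;
            for (Future<Long> f : partials) {
                total += f.get();
            }
            System.out.println("Total: " + total);
            pool.shutdown();
        }
    }

The hard parts -- distributing those units across machines, handling failures, moving the data -- are exactly where platforms like Hiperware's pitch their value.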
Cloud computing is not just for Google and Amazon, folks. In the coming years it will become synonymous first with high-performance computing, and then with good old mission-critical enterprise computing, in all its forms.
The new neat trick will be managing how the clouds and SOAs relate and interact. And that spells more integration as a service, and more federated policy management and enforcement as a service. It's a whole new abstraction for middleware.
Cloud computing could be the next big opportunity for middleware.
Tuesday, March 4, 2008
Splunk goes 'platform' to extend IT search benefits across more IT management functions
Splunk's approach to this problem has been to index and make searchable the flood of constantly generated log files emitted from IT systems, and then to align the time stamps in order to draw out business-intelligence inferences about actual IT performance.
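The time-stamp alignment piece is the clever bit. As a rough illustration of the concept (not Splunk's actual engine, obviously, and with made-up log lines), normalizing each entry's time stamp and bucketing events from different systems onto a common timeline is what lets an application error and a router alert from the same minute land next to each other:

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Map;
    import java.util.TreeMap;

    public class LogTimelineSketch {
        // Bucket events into one-minute windows on a shared clock
        private static final long WINDOW_MS = 60 * 1000;

        public static void main(String[] args) throws Exception {
            // Hypothetical lines emitted by two different systems
            String[] lines = {
                "2008-03-04 09:15:07 app01 ERROR order service timeout",
                "2008-03-04 09:15:42 rtr07 WARN packet loss on uplink"
            };

            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            Map<Long, StringBuilder> timeline = new TreeMap<Long, StringBuilder>();

            for (String line : lines) {
                // Parse the time stamp and truncate it to its window
                Date ts = fmt.parse(line.substring(0, 19));
                long bucket = (ts.getTime() / WINDOW_MS) * WINDOW_MS;
                if (!timeline.containsKey(bucket)) {
                    timeline.put(bucket, new StringBuilder());
                }
                timeline.get(bucket).append(line).append('\n');
            }

            // Events from separate systems now line up for correlation
            for (Map.Entry<Long, StringBuilder> entry : timeline.entrySet()) {
                System.out.println(new Date(entry.getKey()) + ":");
                System.out.print(entry.getValue());
            }
        }
    }

Do that across every log, configuration, message, trap, and alert in a datacenter, and the correlations start to look a lot like business intelligence for IT.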
The San Francisco company took the IT information assembly and digestion process a step further two years ago by creating Splunk Base, an open reservoir of knowledge about IT-searched systems for administrators to share and benefit from. [Disclosure: Splunk is a sponsor of BriefingsDirect podcasts, including this one on Splunk Base.]
Now, recognizing the power of mashed up services and Enterprise 2.0 tools for associating applications, services, and data, Splunk has gone "platform." Instead of only providing the fruits of IT search to sys admins and IT operators, Splunk has created the means to offer developers easy access to that data and the powerful inferences gleaned from comprehensive IT search. That means the data can go places no log file has gone before.
Through a common set of services and APIs, the Splunk Platform now allows developers and equipment makers to build and integrate applications that include IT-search generated data. Because Splunk collects and manages logs, configurations, messages, traps and alerts -- compiling statistics from nearly every IT component -- the makers of IT equipment can build better management and maintenance applications (not to mention billable services).
In trial use, the Splunk Platform has already been leveraged by OEMs and systems integrators, which have bundled and embedded Splunk with their own hardware, software and services. The opportunity for these OEMs and systems integrators is to build new business around ongoing maintenance and support offerings for their products and services.
What's more, the more applications that the various OEMs, service providers, hosting organizations, and service bureau outsourcers build on Splunk, the more those applications can be used in coordination, with the findings then integrated for faster problem solving, greater threat response, heightened compliance reporting, and business-intelligence insight into user activity and transactions.
I like this approach because gaining insight into total datacenter behavior in near real-time has been so difficult, yet its importance is growing with the advances in virtualization, mixed-hosting arrangements, co-location, and SOA-based systems and infrastructure. In effect, both the complexity and the heterogeneity of systems have kept growing, while the ability to gain common-denominator meta data about systems behaviors hasn't kept pace. We've long needed a way to make all systems "readable" in common ways.
With Splunk Platform and the applications it will spawn, IT information can now much better support and interact with distributed management applications. And we certainly need more innovative applications that can leverage this common meta data about systems to produce better management and quick feedback from systems and users.
Taking this all a step further, many of these applications and services can and should support an ecosystem. By easily distributing their applications and gaining the ability to download other applications created by anyone in the Splunk ecosystem, IT managers and the makers of IT equipment will benefit. To kick-start the effort, the first Splunk-built application on the platform was announced this week. Splunk for PCI Compliance is available for download from SplunkBase.
The application provides 125 searches, reports and alerts to help satisfy PCI requirements, including secure remote access, file integrity monitoring, secure log collection, daily log review, audit trail retention, and PCI control reporting, says Splunk. The goal is to make it simpler and faster for IT managers to comply, to answer auditor questions, and to control access to sensitive systems data. Splunk has taken pains to provide security and access control to the sensitive data, while opening up access to the non-sensitive information for better analysis.
Splunk's foray into the developer world and applications ecosystems coincides with the company's release of Splunk 3.2, which now includes a Splunk for Windows version (on the same single code base that runs on Linux, Mac OS X, Solaris, FreeBSD and AIX). New features in Splunk 3.2 include transaction search and interactive field extraction to create easier ways for end users to generate their own applications. The update also extends the platform's capabilities with filesystem change monitoring, flexible roles, data signing and audit trails. A new REST API and SDKs for .Net and Python further open the platform to more developers.
The Splunk Platform and associated ecosystem should quickly grow the means to bridge the need for transparency between runtime actualities and design-time requirements. When developers can easily know more about what applications and systems do in the real world in real time, they can make better decisions and choices in the design and test phases. This obviously has huge time- and money-saving implications.
The need for such transparency will quickly grow as virtualization and a services-based approach to applications gain steam and acceptance. We have seen some very powerful productivity improvements as general enterprise data has been mined for business intelligence. Now it's time to better mine systems data for better IT intelligence.
Monday, March 3, 2008
Nexaweb Advance takes RIA value to the enterprise application modernization imperative
Oh, and modernization allows you to gracefully get out of the costly fat PC client software support business and focus on the browser-only end points.
The building interest in virtualization is also a spur to getting out of the client/server business and making more applications Web-facing and services-based. These moves, in turn, allow for better organizing data into common warehouses and SANs, allowing for BI and other benefits, while reducing storage and backup costs. Business continuity also gets a boost, because everything is on the server side (often on low-cost x86 Linux).
In short, what enterprises are really up to these days is datacenter transformation, the whole ball of wax, of which application modernization is an early and essential ingredient for beginning to enjoy the larger holistic productivity and cost benefits.
The trick is to keep those same older (and often mission-critical) applications performing well, with the rich GUIs that users expect, while quickly gaining the back-end integration flexibility to make the legacy logic part of the enterprise's SOA patterns.
For those applications deemed no longer mission-critical, application modernization allows for proper sunsetting. It is often worthwhile to cull out the still-valued logic, transactional mappings, and data -- and apply them anew to other applications or processes -- before pulling the plug.
Yep, so many reasons to modernize, so few ways to do it without pain, confusion, and cost. And so into this gaping need Nexaweb today brings its rich Internet application (RIA) solution value with Nexaweb Advance. [Disclosure: Nexaweb is a sponsor of BriefingsDirect podcasts.]
For more on the whole rationale and business case for application modernization, check out a sponsored podcast I did with HP Services. ITIL v3 factors into this in a big way, so here's some background on that, too.
For Nexaweb, the end game for enterprises is flexible composite workflows, so the newest offerings are more than tools and platform; there's a professional services component to take the best practices and solutions knowledge to market as well. The process includes application asset capture and re-factoring (sort of like IT resources forensics), re-composition, deployment, and then proper maintenance. In the bargain, you can gain an enhanced platform, increased automation, and services orientation.
The goal is to harvest all those stored procedures but target them at newer architectures -- from Struts to Spring -- and to move from client/server to Enterprise 2.0 in a leap-frog of sorts. The re-use of logic then allows those assets to be applied to model-driven architectures and the larger datacenter transformation values.
Nexaweb Advance pairs Nexaweb’s Enterprise Web Suite with automated code generation tools and professional services to deliver a model-driven architecture approach to the transformation of legacy PowerBuilder, ColdFusion, C++, VisualBasic, and Oracle Forms applications, according to the Burlington, Mass. company.
We have seen quite a bit of associating RIA values with SOA in the past few years, so I'm happy to see RIAs also becoming essential to other mainstream enterprise imperatives, like datacenter transformation.
Microsoft opens Pandora's box on online services, betting convenience is the killer app
It's not that different from the choices developers have been making for years: Do you want the convenience of neat packaging (at the cost of flexibility and choice), or do you want to pick a la carte components that may best meet your needs and avoid lock-in?
Microsoft Online Services (MOS) is being launched for the U.S. today by Bill Gates at the annual Microsoft Office SharePoint Conference. The bevy of applications is designed to appeal to many kinds of users, and businesses of most sizes and character. A limited beta has been set up, with general availability during the second half of this year.
Core services will include Web-based e-mail, calendaring, contacts, shared workspaces, and webconferencing and videoconferencing over the Web. Microsoft is characterizing the services as part of its "software plus services" drive, so it's hard to tell how much of the "software" (that stuff installed on the PC or server) you'll need in order to use MOS.
Microsoft says these services will be "managed through a single Web-based interface," which sounds like a portal you'll need to log in to in order to add or manage users. "IT professionals can monitor the performance of the services, add and configure users, submit and track support requests, and manage users and licenses," says Microsoft.
As in development, some shops like a nice big package, with per-developer seat licenses. Others give their developers more choice on tools, utilities, desktop OS, and frameworks. They seem more interested in the work the developers do than in how they do it.
We could see a similar breakdown among more general computing users, given the MOS versus Google services offerings so far. This is more than a matter of style or taste: one model is born of and imbued with client/server, and the other is born of and imbued with the Web. You know which is which.
So, in effect, Microsoft is placing a Web shell on its old model, just like it put a GUI shell on DOS with DOS 5, and another shell on that with Windows 95.
Of course, on costs, the beauty and/or devil is in the details. This is a subscription service, designed for businesses, and those businesses will pay on a per-user subscription basis. Microsoft shops -- existing customers with Software Assurance on their Microsoft Client Access Licenses (CALs) -- will get a discount.
So there are two big issues here: total cost and convenience. And those will break down differently depending on whether you're a Microsoft "Assurance"-level user or a non-Microsoft user. We don't know the numbers yet, but pricing is going to be the real nut in this.
Microsoft will need to skate delicately on thin ice: the total cost must be close enough to what Assurance users already pay to keep them from moving too quickly, yet low enough that the Microsoft path to online SaaS remains at least marginally competitive against Google and other providers of online productivity applications and communications/groupware as services.
And the way this is set up, it's almost as if Microsoft has given up on competing for individuals, students, SOHOs, and perhaps businesses of fewer than 50 people. It's almost as if they don't think they can compete with Google there -- at least not for the foreseeable future.
This is, then, about maintaining the base of small businesses and department-level buyers of Microsoft products. In essence, this is defense. It is designed to make it confusing or economically difficult to calibrate total costs, given the complexity of factoring in installations, older apps, licenses, and the entire 20-year-old hairball.
And what Microsoft must do, in addition to making the true cost-benefit analysis murky, is absolutely win on packaging and convenience. This is where Google is vulnerable. Google has yet to show, aside from costs, how businesses of all sorts can adopt its services and approach in an easy-to-manage way -- one that packages things up neatly for the IT folks and makes the transition from the hairball easy, convenient, and well understood.
And so Google continues its march into businesses via organic, user-generated interest and convenience. Google takes the early lead among individuals and younger, greenfield companies.
And Microsoft places a bulwark around its empire. This could be a long slog.
Sunday, March 2, 2008
OpSource releases OpSource Connect for better integrating SaaS and Web services
OpSource, a software as a service (SaaS) delivery company, is making it easier for SaaS and Web companies to consume and publish multiple Web services with the announcement of OpSource Connect, which will be a core component of the OpSource On-Demand Summer 2008 release.
OpSource Connect, which is available immediately, provides a common platform -- the OpSource Service Bus (OSB) -- that will enable integrating SaaS applications in the cloud with legacy enterprise applications behind the firewall, freeing SaaS applications from silos.
OpSource, of
SaaS is where the growth is expected to be for the foreseeable future. Gartner, for example, sees SaaS growing at a 22.1 percent compound annual rate, which is roughly double the growth of enterprise software as a whole.
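(For perspective, 22.1 percent compounded means the SaaS market would roughly double in about three and a half years, since 1.221 raised to the 3.5 power comes out just over 2.)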
Rumor has it that Microsoft isn't waiting around for Gartner to be proven right or wrong and is ramping up its cloud-based applications to mimic its shrink-wrapped offerings.
OpSource Connect APIs provide integration capability for any application. Companies can also use Boomi for OpSource Connect, a visual drag-and-drop application integration environment from Boomi, Inc. This allows integrations with popular non-OSB applications including Salesforce and NetSuite.
Behind-the-firewall integrations use OpSource Sockets, which provide integration with legacy enterprise applications such as SAP.
OpSource Connect APIs, Boomi for OpSource Connect and OpSource Sockets are available immediately.
When OpSource On Demand Summer 2008 is released, OpSource Connect will add the ability to use the OSB to not only consume, but publish applications as Web services, allowing each application to become a platform in its own right.
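OpSource hasn't detailed how the OSB will expose an application, so purely as a generic illustration of the "publish your app as a Web service" idea -- with made-up names throughout -- here is what the plain Java SE 6 route looks like with JAX-WS:

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // A trivial business capability exposed as a SOAP Web service
    @WebService
    public class InvoiceLookupService {

        @WebMethod
        public String getInvoiceStatus(String invoiceId) {
            // A real application would query the billing system here
            return "Invoice " + invoiceId + " is paid";
        }

        public static void main(String[] args) {
            // Publish the endpoint; the WSDL appears at ...?wsdl
            Endpoint.publish("http://localhost:9090/invoices",
                    new InvoiceLookupService());
            System.out.println("Listening at http://localhost:9090/invoices?wsdl");
        }
    }

The value OpSource is promising sits around that last step -- hosting, metering, securing, and publishing the service so that others can actually find and consume it.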
OpSource is also creating a range of services to assist companies in integration and enabling applications. Among these are:
- Web Services Enablement Program: To assist with enabling applications as Web services.
- Certified Integrator Program: To provide assistance in integrating applications in the cloud or behind the firewall.
- Application Directory: To make it easier for companies to find Web services that use the OSB.