Such an all-embracing concept as utility or on-demand computing is not going to mature overnight. It depends on a great deal of underpinning technology, including much more progress on the relevant open standards (which, as we all know, take a long time to become accepted).
The advent of the Internet has forced IT suppliers to implement open standards in their otherwise proprietary systems, and these days they have to contribute to standards development as well. It was interesting to note that in a press release IBM referred to grid computing as a key component of on-demand computing. I can't believe that IBM intends to include everybody's home PC in their systems!
The grid computing implications are one of the more intriguing aspects. Personally I think the general concept is ridiculous. To exploit the enormous unused computing capacity represented by domestic PCs, we would have to take on an enormous management, security and network problem; it simply isn't worth it. Specialised centralised systems are a much cheaper and better idea. However, the same technology that can be applied to Internet-connected PCs can also be applied to a limited, controlled group of managed systems. That, I am sure, is what IBM is after with grid computing, not PCs. Thus while IBM, Sun and Microsoft can all talk of a common interest in grid computing, they have very different agendas!
Over and above the established techniques used for fault tolerance, IBM have made some big strides in technology for dynamic configuration, in order to continually satisfy service level agreements. IBM mainframes have long been designed to support partitioning, but with the release of the Intelligent Resource Director with z/OS this has been extended to a dynamic facility, something that Sun, H-P, etc. cannot yet match. This is the first major product to come from the research code-named Project e-Liza. The "e" is the usual marketing emphasis on Web applications, but in fact the technology is applicable to all applications.
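To make the idea concrete, the sketch below shows the kind of control loop such a dynamic facility implies: measure each partition against its service level agreement and shift processor weight from partitions that are beating their targets to those that are missing them. This is a minimal illustration only; the class, the weights and the rebalancing logic are all hypothetical and bear no relation to IBM's actual interfaces.

```python
# A minimal, purely illustrative sketch of SLA-driven rebalancing across
# logical partitions. All names and numbers are hypothetical; this is
# not IBM's Intelligent Resource Director interface.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    cpu_weight: int        # share of processor capacity
    response_ms: float     # observed response time
    sla_target_ms: float   # agreed service level

def rebalance(partitions: list[Partition], step: int = 5) -> None:
    """Shift CPU weight from partitions beating their SLA to those missing it."""
    missing = [p for p in partitions if p.response_ms > p.sla_target_ms]
    donors = [p for p in partitions
              if p.response_ms < p.sla_target_ms and p.cpu_weight > step]
    for needy in missing:
        if not donors:
            break
        donor = max(donors, key=lambda p: p.cpu_weight)
        donor.cpu_weight -= step   # take capacity from the best-off partition
        needy.cpu_weight += step   # give it to the partition missing its SLA

lpars = [Partition("ONLINE", 50, 120.0, sla_target_ms=100.0),
         Partition("BATCH", 50, 30.0, sla_target_ms=200.0)]
rebalance(lpars)
print([(p.name, p.cpu_weight) for p in lpars])  # ONLINE gains weight from BATCH
```

The point of the sketch is simply that the reallocation happens continuously, while the workloads are running, rather than requiring an operator to repartition the machine.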
Project e-Liza technology has a very ambitious agenda. It sets out to deliver self-optimising, self-configuring, self-protecting and self-healing capabilities across a wide range of computer systems. The initial application to mainframes, exploiting their established partitioning facilities, is the obvious primary target. However, IBM have stated that e-Liza technology is to be developed for all their numerous platforms: iSeries (AS/400), pSeries (RS/6000) and xSeries (Intel-based), including Linux systems. This is a typically ambitious statement, and it will clearly be a long time after the mainframes before PC systems have full on-demand capabilities. However, this is not simple market hype from IBM. They are well ahead of the competition in this sector because of long-standing mainframe experience, and this is probably the best way they have to stay ahead of the game.
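As a rough illustration of the "self-healing" idea, consider a control loop that monitors components, restarts those that fail and escalates repeat offenders. Everything in this sketch, from the component names to the check and restart hooks, is a hypothetical placeholder; it shows only the general shape of an autonomic loop, not any real e-Liza mechanism.

```python
import time

def check_health(component: str) -> bool:
    """Hypothetical probe; a real monitor would ping the service or parse logs."""
    return True  # stubbed out: assume healthy

def restart(component: str) -> None:
    """Hypothetical recovery action, e.g. respawning a failed process."""
    print(f"restarting {component}")

def autonomic_loop(components: list[str], rounds: int = 3,
                   interval_s: float = 1.0) -> None:
    """Monitor, diagnose and recover without operator intervention."""
    failures = {c: 0 for c in components}
    for _ in range(rounds):
        for c in components:
            if check_health(c):
                failures[c] = 0
            else:
                failures[c] += 1
                restart(c)              # self-healing: automatic recovery
                if failures[c] > 2:     # self-protecting: stop repeated thrashing
                    print(f"ALERT: {c} failing repeatedly; isolating it")
        time.sleep(interval_s)

autonomic_loop(["web-front-end", "order-db"])
```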
In the past it has only been large-scale data processing installations that have been involved with fault tolerance, partitioning, mixed workloads, etc. But with the advent of e-commerce, particularly business-to-business and supply chain applications, a wide variety of systems can be involved. Failure of a small but critical computer in a chain could be as problematic as failure of the main systems. Thus IBM is serious about the need to expand the on-demand technology beyond mainframes. They will probably have to consider at some stage applying their technology to "foreign" systems. They will obviously do this for Wintel and Linux systems, which in the long term could be a problem for Sun, H-P, etc. It would be ironic if such a major commitment from IBM turned out to give Microsoft an advantage over Sun!
Martin Healey, pioneer in the development of Intel-based computers and client/server architecture, is a director of a number of IT specialist companies and an Emeritus Professor at the University of Wales.