With all the differing requirements being placed on the centralised systems of tomorrow, more attention must be paid to appropriate support. The choice boils down to lots of networked machines, each dedicated to a specific function, versus very big machines with specific support for mixed workloads. A conventional NT or Unix operating system will cope with the networked solution but not with the centralised alternative. Yet the latter is very attractive, because it will be much easier to manage and probably a lot cheaper.
The primary requirement of the single physical machine is to create multiple "Virtual Machines" on the one platform. The physical machine is then controlled by a software layer, a hypervisor, rather than an operating system in the conventional sense. The hardware should be designed specifically to support the functionality of the hypervisor, and this is IBM's great strength: they have been following this path with mainframe systems for many years. They produced an earlier system, VM, which added time-sharing capabilities (CMS) on top of the hypervisor (CP), but Unix superseded this type of system. VM was then used to run multiple copies of a transaction operating system, VSE, with great success, but this was eventually replaced by an integrated system, MVS (later called OS/390 and now z/OS), the mainstay of IBM's product range. The key feature of MVS was to support multiple virtual machines by logically partitioning the machine, each partition independent of the others.
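The essential job of a hypervisor, carving one physical machine into independent virtual machines without over-committing the hardware, can be sketched in miniature. The following Python toy is purely illustrative; the `Hypervisor` and `VirtualMachine` classes are inventions for this sketch, not any IBM interface.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    """A guest with its own dedicated share of the physical resources."""
    name: str
    cpus: int
    memory_mb: int

@dataclass
class Hypervisor:
    """Toy hypervisor: it owns the hardware and carves it into guests,
    refusing to hand out more than physically exists."""
    total_cpus: int
    total_memory_mb: int
    guests: list = field(default_factory=list)

    def create_vm(self, name: str, cpus: int, memory_mb: int) -> VirtualMachine:
        used_cpus = sum(g.cpus for g in self.guests)
        used_mem = sum(g.memory_mb for g in self.guests)
        if used_cpus + cpus > self.total_cpus or used_mem + memory_mb > self.total_memory_mb:
            raise ValueError("insufficient physical resources")
        vm = VirtualMachine(name, cpus, memory_mb)
        self.guests.append(vm)
        return vm

# Carve a 16-CPU box into two independent guests.
hv = Hypervisor(total_cpus=16, total_memory_mb=65536)
hv.create_vm("CICS", cpus=4, memory_mb=16384)
hv.create_vm("DB2", cpus=8, memory_mb=32768)
```

Each guest then boots whatever software suits it, which is precisely the mixed-workload attraction described above.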
With MVS, the machine can be set up with multiple "Logical Partitions" (LPARs), in each of which an independent subsystem can be run. In addition, internal facilities are provided so that messages can be passed and calls made from one partition to another. Thus one partition can support CICS and another DB2, each optimised quite differently as appropriate, and yet the CICS applications can call the database. In practice multiple CICS, DB2, etc. partitions would be implemented, e.g. one for production and another for test and development. But for many years now the outstanding feature of this mainframe architecture has been to use a partition for running batch control systems, e.g. JES2. No other architecture supports batch processing as well. For some inexplicable reason batch processing has a low profile, which must now change as new e-commerce applications are integrated with legacy systems; passing messages to batch systems is a very effective integration approach.
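The cross-partition call described above, CICS asking DB2 in another partition for data, amounts to message passing with a private reply channel. A minimal Python sketch, with the partition names, queue layout and request text all purely illustrative of the pattern rather than of any real mainframe facility:

```python
import queue
import threading

# One inbound queue per partition; a cross-partition call is a message
# carrying the request plus a private queue for the reply.
inbox = {"CICS": queue.Queue(), "DB2": queue.Queue()}

def db2_partition():
    """Service loop for the (hypothetical) DB2 partition."""
    while True:
        request, reply_q = inbox["DB2"].get()
        if request is None:  # shutdown signal
            break
        reply_q.put(("ok", f"rows for {request}"))

def call(dst: str, request: str):
    """Synchronous call from one partition to another."""
    reply_q = queue.Queue()
    inbox[dst].put((request, reply_q))
    return reply_q.get()  # block until the serving partition replies

t = threading.Thread(target=db2_partition, daemon=True)
t.start()
status, rows = call("DB2", "SELECT * FROM ORDERS")
inbox["DB2"].put((None, None))  # stop the service loop
t.join()
```

The same pattern, a message dropped into another partition's queue, is what makes handing work to a batch system such a clean integration route.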
As multi-processor Intel computers running Windows 2000, or preferably Linux, become more popular, they pose more of a threat to the bigger Unix systems than to the mainframes. Interesting new architectures such as Sequent's Non-Uniform Memory Access (NUMA) and Unisys's Cellular Multiprocessing (CMP) products already support NT in large-scale systems. It follows that Sun Microsystems and Hewlett-Packard, not to forget IBM's own iSeries and pSeries (RS/6000) machines, must react to the threat, and that means moving upmarket into the realm of the IBM mainframe. The first move is obvious: keep pushing performance up by exploiting multi-processor and clustering technology developments. But that is simply not enough; they too must introduce logical partitioning. This is the key feature of Sun's E10000 and HP's Superdome products, but they have a long way to go yet to match the maturity of the hardware/software combination of IBM's zSeries and z/OS. Unisys can run multiple copies of NT, Sun multiple copies of Solaris and HP multiple copies of HP-UX, but there is no equivalent to the batch processing or CICS functionality; they depend upon third-party TP monitors, databases and schedulers. In comparison the mature IBM systems run both the specialised services and multiple copies of other operating systems. In particular IBM have taken a big step forward and introduced support for multiple copies of Linux alongside CICS, DB2, etc., in so doing giving their customers the option of standard Linux products such as Apache, Oracle, etc. A zSeries machine will reportedly run thousands of single-user copies of Linux! IBM have also taken advantage of their experience by introducing dynamic partitioning, in which an operator can reallocate processor and I/O resources to meet transient demands.
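At its simplest, dynamic partitioning is resource bookkeeping of this kind: shares move between running partitions without a restart. The following is a deliberately crude Python sketch; the partition names and CPU counts are invented, and real LPAR weights and I/O path assignments are far richer than a single number per partition.

```python
# CPUs currently assigned to each (hypothetical) logical partition.
partitions = {"PROD-CICS": 8, "TEST": 4, "LINUX": 4}

def reallocate(table: dict, src: str, dst: str, cpus: int) -> None:
    """Move processor shares from one running partition to another.
    The total never changes: resources are conserved, only redistributed."""
    if table[src] < cpus:
        raise ValueError("source partition has too few CPUs to give up")
    table[src] -= cpus
    table[dst] += cpus

# An operator meets a transient production peak at the expense of test.
reallocate(partitions, "TEST", "PROD-CICS", 2)
```

The invariant worth noting is that the operator redistributes a fixed pool; nothing is taken offline, which is what makes the facility attractive for transient demand.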
It is always interesting to see how a technology such as logical partitioning, taken for granted for years, can suddenly become high profile because of changes in the industry as a whole. In this case it gives IBM an advantage, and for once a well-deserved one.