Only trivial systems such as PC DOS or Windows 3.1 are limited to running one program at a time. Today’s PCs, with Windows 98 or 2000 (NT), can execute more than one program concurrently, but support only a single user. Most operating systems support both multiple programs and multiple users. Matters get more complicated when several of those users want to share the same program, and more complex still internally, because programs can be broken into smaller tasks, some of which are shared by different programs.
The client/server architecture adds another dimension, because a multi-user system can be built by networking multiple single-user systems with a multi-tasking, but not necessarily multi-user, server: 50 users can, for example, be serviced by 51 PCs, each running the single-user Windows 2000 operating system. The server can thus be either a dedicated machine running Windows 2000 or one of a number of programs on a multi-user system such as Unix.
Multi-user systems with dumb terminals fell out of vogue in the nineties as users demanded graphical interfaces (at any cost!), but the approach is back in favour as browsers slowly replace PCs. Note, however, that a single-user Windows 2000 system can exploit middleware, in this case a Web server, to provide the multi-user support missing from the operating system itself. And while dumb terminals are disappearing from new applications, Unix retains a basic advantage over NT in its support for legacy ASCII terminals and the applications written for them. This is not insignificant in enterprise systems based on Unix, but an even larger number of satisfactory (and very cheap) applications run on 3270 terminals attached to IBM mainframes, or on 5250 terminals attached to AS/400 installations.
From the above it should be clear that a lot of nonsense is talked about "one operating system fits all", particularly as Microsoft gangs up with PC hardware vendors to sell far more equipment than is needed. The requirements of a single-user client (graphics and simple I/O, with network support) are different from those of a server (simple operator interface, heavy load, high I/O demands, resilience, scalability, etc.). The needs of an enterprise server are higher still: support for legacy systems, plus greater reliability and scalability. Thus at a departmental level an AS/400 (iSeries) server with NT workstations is a good combination, better than NT all round.
One way to support a large mixture of programs is to use a specific machine for each identifiable function, a concept that will grow over the coming years. This, however, creates as many problems as it solves, because one function will inevitably need to interact with another, introducing cross-machine interfacing and, with it, a serious increase in management and network problems.
The older technique for supporting multiple functions is to use the operating system on a single computer to run multiple tasks. It is the same logical concept as a network of machines, but easier to manage; and since the interaction between functions is an internal service, with lower overheads than a network, it can be faster.
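The overhead argument is easy to demonstrate. The Python sketch below is a toy benchmark of our own devising, not part of the original discussion: it compares a plain in-process function call with a TCP round trip over the loopback interface. Even this understates the gap, since loopback traffic never touches a real network.

```python
import socket
import threading
import time

def service(x):
    # A trivial function standing in for an internal service call.
    return x + 1

# --- In-process calls ---
N = 100_000
start = time.perf_counter()
for i in range(N):
    service(i)
in_process = time.perf_counter() - start

# --- Loopback TCP round trips ---
def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
M = 1_000
start = time.perf_counter()
for _ in range(M):
    client.sendall(b"x")
    client.recv(64)
network = time.perf_counter() - start
client.close()

print(f"in-process:   {in_process / N * 1e6:8.2f} us/call")
print(f"loopback TCP: {network / M * 1e6:8.2f} us/round trip")
```

On typical hardware the in-process call is orders of magnitude cheaper per interaction, and crossing a physical network would widen the difference further.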
There is, however, still a big difference. With multiple dedicated machines, each can be optimised for the specific task it performs, whereas relying on the scheduler of a single operating system tends to average out performance. Modern operating systems can adjust the priority of each task, but that can never be as effective as dedicated tuning.
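To make the priority-adjustment point concrete, here is a minimal sketch, assuming a POSIX system (os.nice() does not exist on Windows): a batch worker voluntarily lowers its own scheduling priority so that interactive tasks on the same machine are served first. The mechanism is real, but as argued above it is a blunt instrument compared with a machine dedicated and tuned to the batch job.

```python
import os
import sys

# A minimal sketch, assuming a POSIX system; os.nice() is unavailable
# on Windows. Niceness runs from -20 (highest priority) to 19 (lowest).
if not hasattr(os, "nice"):
    sys.exit("this sketch requires a POSIX system")

os.nice(10)  # deprioritise this process in favour of interactive tasks
print("batch worker running at niceness", os.nice(0))

# ... the batch workload itself would run here ...
```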
Consider the basic requirements of a typical enterprise application: a high volume of interactive transactions, a high-performance database management system and batch processing of data. The interactive services must be optimised for the quickest response and must provide basic recovery and rollback functions; this is best achieved with a Transaction Processing Monitor (TPM) such as CICS or Tuxedo. The DBMS must support multi-threading of requests, so that a long query cannot hold up short transactions, but its performance is dominated by heavy I/O activity. Batch processing is not real-time critical, but must provide sophisticated event-driven workflow scheduling to minimise operator intervention, and it must be optimised for maximum throughput at the expense of response time, e.g. sorting data, as opposed to the direct access to individual data sets required by interactive transactions. Each function has significantly different characteristics.
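A minimal sketch of the multi-threading point, in Python rather than inside a real DBMS (the pool sizes and function names are invented for illustration): short transactions get their own worker pool, so a long-running query, confined to a separate pool, cannot hold them up.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Invented illustration of multi-threaded request handling: one pool
# for long analytical queries, another for short OLTP transactions.
query_pool = ThreadPoolExecutor(max_workers=2)
txn_pool = ThreadPoolExecutor(max_workers=8)

t0 = time.perf_counter()

def run_query(name):
    time.sleep(2.0)  # simulate a heavy, I/O-bound query
    print(f"{name} finished at {time.perf_counter() - t0:.2f}s")

def run_transaction(name):
    time.sleep(0.05)  # simulate a short interactive transaction
    print(f"{name} finished at {time.perf_counter() - t0:.2f}s")

query_pool.submit(run_query, "long query")
for i in range(10):
    txn_pool.submit(run_transaction, f"txn-{i}")

txn_pool.shutdown(wait=True)
query_pool.shutdown(wait=True)
```

All ten transactions complete within roughly a tenth of a second, long before the two-second query finishes; with a single small shared pool, the query would occupy a worker for its full two seconds and delay the transactions queued behind it.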
Today the interest in e-commerce adds further requirements to those above. The workload of Internet transactions is far less predictable than that of in-house applications, and Web server (application server) functionality must also be incorporated, although that has support requirements similar to a TPM's.