The simplest definition of a server is a combination of hardware, software and communications that services a multiplicity of users or clients. Most servers are interactive, with a mixture of terminals, PCs and now browsers as clients, but there are also requirements for batch and store-and-forward servers.
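The client-server model described above can be sketched minimally as a single process serving many clients over TCP. This is only an illustration of the idea, not any particular product; the port choice and function names are assumptions.

```python
# Minimal sketch of the client-server model: one server process serving
# a multiplicity of clients over TCP. Names and ports are illustrative.
import socket
import threading

def handle_client(conn):
    """Serve one client: echo each message back until it disconnects."""
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

def run_server(host="127.0.0.1", port=0):
    """Start the server; returns the listening socket (port chosen by OS)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()

    def accept_loop():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:          # listening socket closed: shut down
                return
            # One thread per client, so many clients are served at once.
            threading.Thread(target=handle_client, args=(conn,),
                             daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv

if __name__ == "__main__":
    srv = run_server()
    port = srv.getsockname()[1]
    # Two "clients" (terminals, PCs or browsers, in the text's terms):
    replies = []
    for msg in (b"hello", b"world"):
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(msg)
            replies.append(c.recv(1024))
    srv.close()
    print(replies)
```

A batch or store-and-forward server differs only in that requests are queued and processed later rather than answered interactively.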
Servers also vary greatly in scale, from supporting a few PC users to running mission-critical business systems. The load on a typical departmental office server is fairly consistent and predictable, but large e-commerce servers face highly variable workloads. This is in fact a major problem posed by the introduction of e-commerce, since most mission-critical systems have historically benefited from a predictable workload.
The need for resilience also varies considerably. All servers should be more reliable than their individual clients, but the more resilient a system is, the more it costs. Thus it is uneconomical to pay extra for resilience in an office server, while the cost to the business of down-time in a mission-critical system would exceed the extra cost of a resilient system many times over.
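The trade-off above can be made concrete with a back-of-envelope pay-back calculation. All the figures below are illustrative assumptions, not data from the text: the point is only that the same extra spend pays back in months for a mission-critical system and never sensibly for an office server.

```python
# Hedged pay-back model for the resilience trade-off; all figures assumed.
def breakeven_years(extra_cost, downtime_cost_per_hour, hours_saved_per_year):
    """Years for a resilient system's extra cost to pay for itself."""
    return extra_cost / (downtime_cost_per_hour * hours_saved_per_year)

# Office server: an hour of down-time is a minor inconvenience.
office = breakeven_years(extra_cost=20_000, downtime_cost_per_hour=100,
                         hours_saved_per_year=8)
# Mission-critical e-commerce server: each hour down is very expensive.
critical = breakeven_years(extra_cost=200_000, downtime_cost_per_hour=50_000,
                           hours_saved_per_year=8)
print(f"office: {office:.1f} years, mission-critical: {critical:.2f} years")
```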
It has been common to run more than one server function on a single machine. Even Netware supported both the file and print server functions. With NT, Unix, etc., it is common to run the database on the same system as the file server. However, there is also interest in exploiting multiple single-function servers where appropriate. Common examples of dedicated servers are Web, e-mail, print, fax and specialist information databases. In many cases, however, two or more server functions must inter-work, e.g. an application server, Web server and database server. If these are implemented on separate boxes, then messages must be passed across the LAN, which introduces management and performance problems.
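The performance side of that inter-working cost can be illustrated with a toy latency model: when the Web, application and database functions sit in separate boxes, each request pays an extra LAN round trip per tier boundary. The per-tier service times and LAN latency below are assumptions for illustration only.

```python
# Sketch of the inter-working cost: separate boxes add a LAN hop per
# tier boundary. All latency figures are illustrative assumptions.
LAN_HOP_MS = 0.5                                   # assumed LAN round trip
SERVICE_MS = {"web": 1.0, "app": 3.0, "db": 5.0}   # assumed per-tier work

def request_latency(tiers, separate_boxes):
    """Total latency (ms) of one request passing through the given tiers."""
    work = sum(SERVICE_MS[t] for t in tiers)
    hops = (len(tiers) - 1) if separate_boxes else 0
    return work + hops * LAN_HOP_MS

single_box = request_latency(["web", "app", "db"], separate_boxes=False)
three_box = request_latency(["web", "app", "db"], separate_boxes=True)
print(single_box, three_box)
```

The management problem is not modelled here, but scales similarly: three boxes mean three systems to configure, monitor and upgrade.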
The simplest servers can be implemented with ordinary PCs today, running Netware, NT or Unix (SCO or UnixWare). Netware can probably be displaced by Linux today, NT may be displaced in a few years, and PC Unix will migrate to Linux. All the PC hardware suppliers have produced PCs with special attention to quality, power supplies, multiple I/O slots, multi-processors, rack-mounting, etc., which make for very cost-effective departmental servers. With multi-gigabyte discs, hundreds of megabytes of memory and fast Intel processors, these machines are challenging the bigger, more expensive Unix boxes from H-P, Sun and IBM. These suppliers are therefore favouring the bigger servers, which offer not only more capacity but far more reliability and, above all, scalability. These servers must aim at 7x24x365 availability and "active" upgrading and servicing. In such large-scale systems, the complexity of managing multiple single-function servers is unacceptable, so they must support multiple functions. While this is nothing new, the new systems must cope with the huge increase in peak loading. Thus large-scale servers are being designed to provide partitioning to isolate server functions, but more features are also needed to support the varying workload.
The longest-established systems with logical partitioning are IBM mainframes, but H-P, Sun, Compaq and Unisys are all now marketing machines which can be partitioned. Thus as the simpler Intel servers push upmarket, the established Unix servers must in turn push up into the IBM mainframe market.
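Logical partitioning as discussed here amounts to carving one machine's processors and memory into isolated slices, one per server function, so that a runaway workload in one partition cannot starve the others. A minimal sketch of that resource-accounting idea, with all sizes and partition names assumed for illustration:

```python
# Toy model of logical partitioning: one machine's CPUs and memory are
# dedicated to isolated partitions. Resource sizes are illustrative.
class Machine:
    def __init__(self, cpus, memory_gb):
        self.free_cpus, self.free_mem = cpus, memory_gb
        self.partitions = {}

    def create_partition(self, name, cpus, memory_gb):
        """Dedicate resources to a partition; refuse to over-commit."""
        if cpus > self.free_cpus or memory_gb > self.free_mem:
            raise ValueError(f"not enough resources for partition {name!r}")
        self.free_cpus -= cpus
        self.free_mem -= memory_gb
        self.partitions[name] = (cpus, memory_gb)

m = Machine(cpus=32, memory_gb=64)
m.create_partition("web", cpus=8, memory_gb=16)
m.create_partition("database", cpus=16, memory_gb=32)
print(m.free_cpus, m.free_mem)   # resources left for further partitions
```

Real partitioning (IBM LPARs, for example) is enforced in hardware and firmware rather than by simple bookkeeping, but the isolation guarantee is the same in spirit.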
H-P, Sun, Compaq and Unisys are all favouring very advanced symmetric multi-processor (SMP) architectures, compared with the simpler PC SMP architectures. These new machines have three-level caches (L1 in the processor, L2 on the processor interface and L3 on the bus). The key difference, however, is the bus interconnecting processors, memory and I/O. To support logical partitioning these buses must be very fast. The Unisys CMP machines, for instance, use a multi-way "cross-bar" switch, and a number of machines are licensing the Compaq (Tandem) ServerNet high-speed interconnect technology. These advanced SMP architectures compete with the Non-Uniform Memory Architecture (NUMA), supported by SGI and IBM (Sequent), which uses a cache to map remote memory into a local address space.
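The effect of the three-level cache, and of NUMA's non-uniform memory cost, can be shown with a simple expected-latency model. Every hit rate and latency below is an assumed round number for illustration, not a vendor figure; the point is that average access time is dominated by the small fraction of accesses that miss all cache levels, and under NUMA that fraction gets more expensive still when it lands on a remote node.

```python
# Average-latency model of a three-level cache hierarchy, with an
# optional NUMA remote-memory penalty. All figures are assumptions.
def avg_latency_ns(hit_rates, latencies_ns, mem_ns,
                   remote_frac=0.0, remote_penalty_ns=0.0):
    """Expected access time given per-level (L1, L2, L3) hit rates."""
    total, reach = 0.0, 1.0     # 'reach' = fraction that missed so far
    for hit, lat in zip(hit_rates, latencies_ns):
        total += reach * hit * lat
        reach *= (1.0 - hit)
    # The remainder goes to memory; some of it may be on a remote node.
    total += reach * (mem_ns + remote_frac * remote_penalty_ns)
    return total

uniform = avg_latency_ns([0.9, 0.7, 0.5], [1, 5, 20], mem_ns=100)
numa = avg_latency_ns([0.9, 0.7, 0.5], [1, 5, 20], mem_ns=100,
                      remote_frac=0.5, remote_penalty_ns=200)
print(round(uniform, 2), round(numa, 2))
```

This is why NUMA machines cache remote memory locally: raising the effective hit rate in front of remote accesses is what keeps the "non-uniform" part of the architecture tolerable.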
IBM run their own MVS or z/OS operating system on the mainframe, with extensive partitioning and clustering support, while the others run proprietary versions of Unix. However, the Unisys CMP architecture exploits partitioning to run an emulation of the older Unisys operating systems alongside a version of Windows NT. This is the closest NT has come to an Enterprise-level system, effectively running as a "guest" in the CMP environment. Once again, though, it looks an ideal architecture in which to follow the IBM route and introduce a Linux option in preference to NT.