The original model for client/server systems that I formulated (before Gartner claimed the credit) differentiated between architectures which placed all the processing in the client (thick client) and those which placed only the presentation logic in the client, with the application logic and data in the server (thin client). Since those simple days it has become a lot more complex, under the influence first of relational databases and then of the Web. It doesn’t help either that there are no consistent definitions.
The most common thick client architecture is a bad one, pioneered by Novell, who were then mugged by Microsoft. NetWare and LAN Manager simply shared a common DOS file server, which is a multiple single-user concept, without the multi-user functionality commonly needed. The next move, to introduce a shared relational database accessed via SQL, was a step in the right direction, but it still left too much common functionality in the client. The use of shared procedures in the RDBMS is a better idea, but unfortunately there were no practical standards to work with, so applications were tied into specific RDBMS products. General-purpose remote procedure calls, an excellent idea, also suffered from the lack of standards. SOAP could help, but it’s a bit late in the day. The best client/server architecture for business applications was to use existing Transaction Processing Monitors, with the existing transaction applications, and to call these from a GUI client. However IBM, in a commanding position with CICS, missed the boat by failing to use common de facto standards (Ethernet and TCP/IP), so that by the time the appropriate CICS client was eventually released, everyone was tied into NetWare and, to a lesser extent, Unix. It is ironic that one of today’s leading Web application servers, BEA’s WebLogic, has its roots in the old Unix TP monitor, Tuxedo.
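To make that contrast concrete, here is a minimal sketch (the connection string, table and procedure names are all invented for illustration) of the difference between a thick client that embeds a business rule itself and one that calls a shared procedure held in the RDBMS. Note that the vendor-specific procedure language behind that call is precisely what tied applications to a particular product:

    import java.sql.*;

    public class OrderClient {
        public static void main(String[] args) throws SQLException {
            // Illustrative connection details only.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:orders", "app", "secret")) {

                // Thick-client style: the discount rule lives in the client,
                // so every client program must duplicate (and re-release) it.
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE orders SET total = total * 0.9 WHERE id = ?")) {
                    ps.setInt(1, 42);
                    ps.executeUpdate();
                }

                // Shared-procedure style: one call, with the rule held once
                // in the RDBMS. But the procedure itself is written in a
                // proprietary language (PL/SQL, Transact-SQL, etc.), tying
                // the application to that vendor.
                try (CallableStatement cs = con.prepareCall("{call apply_discount(?)}")) {
                    cs.setInt(1, 42);
                    cs.execute();
                }
            }
        }
    }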
While the Unix vendors did well out of client/server developments, particularly Sun and HP, they too were not as successful as they could have been. A Unix server, with Sybase, Oracle, etc., was a far better server (and still is) than Windows or NetWare. TCP/IP could have been the standard for LANs from the beginning, sparing us the cost and aggravation of migrating away from the Novell and Microsoft protocols. It is amazing in retrospect to realise how gullible the IT industry was in the early days of PCs and LANs.
The Web brought some focus and forced everybody, user and supplier alike, to recognise the superiority of the thin client model. If only we had all insisted on the Java-enhanced client model in the first place we would be well away today. However Microsoft pulled a rabbit out of the hat with ActiveX, which forced developers to create client routines that ran only on a Windows PC, thereby diminishing the attraction of dedicated Web browser terminals. IBM, Wyse and a few others had some success with these terminals, their functionality enhanced by the inclusion of a Citrix client, but they didn’t progress as they should have because of ActiveX and the consequent lack of applications. IBM made it worse by calling these terminals “thin clients”, causing confusion with a PC client which runs the presentation logic only.
So what is a “smart client” then? It is what we called a thin client in the beginning, but with a free hand to make the GUI logic very sophisticated, without losing the value of a robust, shared server which handles all the communications, business logic, data, etc. The term might at least help us to differentiate between a Web browser and a PC thin client.
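As a minimal sketch of that split (the server address, URL path and account number are invented, and I have assumed a plain HTTP interface to the shared server), the client below is free to be as clever as it likes about presentation, yet the business logic and the data never leave the server:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SmartClient {
        public static void main(String[] args) throws Exception {
            // The client side: input validation, formatting, caching and so
            // on can be as sophisticated as we like, locally.
            String accountId = "12345"; // hypothetical value entered by the user

            // The server side: one shared endpoint owns the business logic
            // and the data (the address is a placeholder).
            HttpClient http = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://server.example/accounts/" + accountId + "/balance"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());

            // The client merely renders what the server decided.
            System.out.println("Balance: " + response.body());
        }
    }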
Martin Healey, pioneer of the development of Intel-based computers and of client/server architecture. Director of a number of IT specialist companies and an Emeritus Professor of the University of Wales.