The concept of agent technology is not new in the IT world, but its applications are still rather limited. The appalling service provided by "automatic" telephone help systems should be incentive enough to change that and, in so doing, to expand the range of applications.
Agents are software sub-systems that work independently in the background on behalf of a requester. They can be event-driven or interactive. In the former case they search for something and, when a match is found, send an alert to a supervisor. In the latter case a user makes a request and in so doing activates the agent; the answer may come back interactively, but in many cases the response takes a long time and is handled in much the same way as an alert.
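To make the event-driven case concrete, here is a minimal sketch in plain Python. All of the names (event_driven_agent, inbox, the log-line example) are invented for illustration and are not from any real product: the agent polls a source in the background and, on a match, hands the event off to a supervisor.

```python
import time
from collections import deque

def event_driven_agent(poll, matches, alert, interval=1.0, rounds=10):
    """Poll a source in the background; alert the supervisor on a match."""
    for _ in range(rounds):
        item = poll()
        if item is not None and matches(item):
            alert(item)          # hand off; the supervisor deals with it
        time.sleep(interval)

# Example wiring: watch a stream of log lines for the word "ERROR".
inbox = deque(["boot ok", "ERROR: disk full", "idle"])
event_driven_agent(
    poll=lambda: inbox.popleft() if inbox else None,
    matches=lambda line: "ERROR" in line,
    alert=lambda line: print("ALERT to supervisor:", line),
    interval=0.0,
    rounds=3,
)
```

The interactive case differs only in what starts the loop: a user request activates the agent, and a slow answer comes back through the same alert path.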
The most common application to date has been in system management. The agents trace the activity of specific management tools and report results back to a higher-level management system. This works because each agent is mapped onto a known, static subsystem, and all of them present a common interface to the higher-level system. The simple SNMP protocol, for instance, has worked well in the management of communication networks, and is often extended to operating systems, databases and so on. Unfortunately few other applications can be represented so formally. The tremendous problems encountered with Internet searches are a clear pointer to the difficulties of handling unstructured systems.
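The narrowness of that interface is the whole point. As a sketch, a manager can retrieve a standard MIB variable (sysDescr, the system description) from any SNMP agent in a few lines; this example uses the third-party pysnmp library's high-level API, and the target address and community string are placeholders, not a real deployment.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public'),                  # placeholder community
        UdpTransportTarget(('192.0.2.1', 161)),   # placeholder address
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),  # sysDescr.0
    )
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name} = {value}')
```

Because every SNMP agent answers the same query the same way, the higher-level system needs no knowledge of what sits behind it; few domains outside system management offer so formal a contract.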
Agent technology has been applied to individual PCs (the Microsoft Office Assistant) and to "crawlers" on the Internet. Neither has won general acceptance, and Office Assistant must be a contender for the most turned-off feature of any software. The problem, though, is not the concept but the implementation. It is reasonable to assume that these systems will get better with experience. Office Assistant suffers because it is intrusive, but the basic problem is that its Wizards rely on a comprehensive help library, which Windows simply hasn't got.
As usual the root of the problem is the lack of standards. For instance, while Tivoli can exploit the SNMP standards, the higher levels are proprietary. If standards are to be developed, then the architecture of an agent system needs to be defined properly from the start. Using the Tivoli model, the following elements of an architecture can be recognised (a sketch of how they might fit together in code follows the list).
Base agents. These are the basic "mail" elements, which filter and sort messages and keep requesters and agents in touch.
Avatars. This is a term stolen from games machines. Avatars are responsible for attracting the attention of a user to an event (the dreaded Microsoft paper clip is an example). They can of course interact with other software, as in management products, rather than visually, and they can be intelligent. The UK Post Office and the University of East Anglia have been experimenting with direct translation into sign language for the deaf. This area is ripe for innovation.
Communication. Different systems have to inter-work unambiguously. This is relatively easy if all sub-systems come from the same supplier, but it is a real problem with, say, the Internet, which involves an undefinable group of providers. This is another area where XML will help.
Intelligence. The intelligence required of agents varies enormously. Some are trivial, but the future lies in applications of knowledge-based technology and Artificial Intelligence. There must also be a range of capability, such that simpler agents can run in lightweight clients while the higher functionality sits in big servers. It seems inevitable that AI should keep cropping up whenever there is a new concept in the IT world.
Dialogues. Not for the first time, the IT industry is faced with the need to interact effectively with the user, and if the bulk of the Web and help-desk systems are anything to go by, we still have a huge gulf to cross. We are only capable of creating structured dialogues, which are not what users want. Perhaps we will at last see some progress with natural language now.
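As promised above, here is a minimal sketch of how the first three elements might fit together. Every name in it (BaseAgent, Avatar, the <event> message format) is invented for illustration, not a standard: a base agent parses an XML-encoded event, filters it, and hands it to an avatar to attract the user's attention.

```python
import xml.etree.ElementTree as ET

class Avatar:
    """Attracts the user's attention to an event (a print stands in
    for a pop-up, a paper clip, or a signing animation)."""
    def notify(self, severity, text):
        print(f"[{severity.upper()}] {text}")

class BaseAgent:
    """The 'mail' element: parses incoming messages, filters them,
    and keeps requesters and avatars in touch."""
    def __init__(self, avatar, accepted=("warning", "alert")):
        self.avatar = avatar
        self.accepted = set(accepted)

    def deliver(self, xml_message):
        event = ET.fromstring(xml_message)       # communication: XML
        severity = event.get("severity", "info")
        if severity in self.accepted:            # filtering
            self.avatar.notify(severity, event.findtext("text", ""))

# An agent elsewhere on the network might send something like this:
message = '<event severity="alert"><text>Backup job failed</text></event>'
BaseAgent(Avatar()).deliver(message)
```

The design point is the separation: the base agent knows nothing about presentation, and the avatar knows nothing about transport, so either can be swapped (a signing avatar for a paper clip, say) without touching the rest.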
There are, however, plenty of subsidiary topics to ponder. At the head of the list are how to map the user's ideas into the correct question, and then how to assess the quality of the data being accessed. People will all too readily rely on the answers to an incorrectly formulated query run against inadequate data, with serious consequences. Beyond this minefield there are questions of ownership, tariffing and security to be addressed. There is a long way to go, but there are some encouraging signs that progress is being made.