This section discusses expected and announced developments in agent technology itself in the coming years.


Agents will have a great impact, as was seen in the previous chapter. Some, mostly researchers, say they will find their way into everyday products through an evolutionary process. Others, such as large companies, are convinced it will be a revolutionary one. The latter does not seem very likely, as many parties are not (yet) familiar with agents, least of all their future users. The most probable evolution will be that agents initially leverage the simpler technologies available in most applications (e.g. word processors, spreadsheets or knowledge-based systems), and after this stage gradually evolve into more complicated applications.

Developments that may be expected, and technical matters that will need to be given a lot of thought, are:
The chosen agent architecture / standards:
This is a very important issue. On a few important points consensus already seems to have been reached: ACL (Agent Communication Language) has been adopted by many parties as their agent communication language. ACL combines KQML (Knowledge Query and Manipulation Language), which packages queries and assertions for transmission, with KIF (Knowledge Interchange Format), which expresses their content. KQML and KIF are also used by many parties, for instance by the Matchmaker project we saw in chapter four, and are currently being further extended. In general, standards are slow to emerge, but examples such as HTML have shown that a major standard can emerge in two to three years when it is good enough and meets the needs of large numbers of people.
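
To make this concrete, the sketch below shows what composing such a message could look like, with KQML supplying the performative "envelope" and KIF supplying the content. The make_kqml() helper, the agent names and the 'trading' ontology are illustrative assumptions for this example, not part of any actual KQML toolkit.

```python
# A minimal sketch of composing a KQML message whose :content is a KIF
# expression. The make_kqml() helper is hypothetical; real KQML toolkits
# of the period differ in their details.

def make_kqml(performative, **fields):
    """Serialise a KQML performative as an s-expression string."""
    parts = [performative]
    for key, value in fields.items():
        parts.append(f":{key} {value}")
    return "(" + " ".join(parts) + ")"

# An agent asking another agent a query, the query itself stated in KIF:
msg = make_kqml(
    "ask-one",
    sender="buyer-agent",
    receiver="matchmaker",
    language="KIF",
    ontology="trading",                 # assumed ontology name
    content='"(price widget ?p)"',      # KIF: what is the price of widget?
)
print(msg)
# -> (ask-one :sender buyer-agent :receiver matchmaker :language KIF :ontology trading :content "(price widget ?p)")
```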

Another, related and equally important, issue is which agent architecture will be pursued and become the standard. No consensus has been reached on this yet.
There are two possible architectures that can be pursued, each of which strongly influences such aspects as the required investments and the complexity of the agent system [1]:
Homogeneous Architecture:
Here there is a single, all-encompassing system which handles all transactions [2] and functions [3]. Most of the current agent-enabled applications use this model, because the application can itself provide the entire agent system needed to make a complete, comprehensive system; [4]
Heterogeneous Architecture:
Here there is a community within which agents interact with other agents. This community model assumes that agents can have different users, skills, and costs.

There are various factors that influence which path the developments will follow, i.e. which of these two types of architectures will become predominant: [5]
1. The producer of the agent technology (i.e. the agent language used) chosen for a homogeneous model: this producer will have to be willing to give out its source code so that others are able to write applications with it and use it as the basis for further research.
If this producer is not willing to do so, other parties (such as universities) will experiment with, and start to develop, other languages. If the producer does share the source code with others, researchers, but also competitors, will be able to elaborate the technique further and develop applications of their own with it. It is because of this last consequence that most producers in this situation, at least all the commercial ones, will choose to keep the source code to themselves, as they would not want to destroy a very profitable monopoly.
In the end, this producer's 'protectionism', combined with the findings of (university) research and with market competition, will result in multiple alternative techniques being developed (i.e. lead to a heterogeneous architecture);
2. Interoperability requirements, i.e. the growing need to co-operate/interact with other parties in activities such as information searches (because doing everything by yourself will soon lead to unworkable situations). Here, a homogeneous architecture would clearly make things much easier than a heterogeneous one, as there would be no need to worry about which agent language or system others may be using.
However, multi-agent systems - especially those involved in information access, selection, and processing - will depend upon access to existing facilities (so-called legacy systems). Application developers will be disinclined to rewrite these just to meet some standard. A form of translation will have to be developed to allow these applications to participate (a wrapper sketch follows this list). In the final analysis it is clear that this can only be done with a heterogeneous agent model. [6]
Furthermore, agent systems will be developed in many places, at different times, and with differing needs or constraints. It is highly unlikely that a single design will work for all;
3. Ultimately, the most important factor will be "user demand created by user perceived or real value". People will use applications that they like for some reason(s). The architecture that is used by (or best supports) these applications will become the prevailing architecture, and will set the standard for future developments and applications.
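
The "form of translation" mentioned under point 2 is usually conceived as a wrapper (sometimes called a transducer): a small agent that stands in front of a legacy system and translates between the community's agent language and the system's native interface. The following is a minimal sketch under that assumption; the LegacyInventory class and the message format are hypothetical stand-ins for whatever legacy interface actually exists.

```python
# A minimal wrapper ("transducer") sketch: it lets a legacy system take
# part in a heterogeneous agent community without being rewritten.
# All names here are hypothetical placeholders.

class LegacyInventory:
    """Stands in for an existing, non-agent-aware system."""
    def stock_level(self, item: str) -> int:
        return {"widget": 12, "gadget": 0}.get(item, 0)

class InventoryWrapperAgent:
    """Translates agent-language queries into legacy calls and back."""
    def __init__(self, legacy: LegacyInventory):
        self.legacy = legacy

    def handle(self, message: dict) -> dict:
        # Incoming message in the community's (assumed) agent language.
        if message.get("performative") == "ask-one":
            item = message["content"]["item"]
            level = self.legacy.stock_level(item)   # native legacy call
            return {"performative": "tell",
                    "content": {"item": item, "stock": level}}
        return {"performative": "sorry"}            # unsupported request

agent = InventoryWrapperAgent(LegacyInventory())
reply = agent.handle({"performative": "ask-one",
                      "content": {"item": "widget"}})
print(reply)  # {'performative': 'tell', 'content': {'item': 'widget', 'stock': 12}}
```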

Although a homogeneous architecture has its advantages, it is very unlikely that all the problems linked to it can be solved. So although the agent architecture of the future may be expected to be a heterogeneous one, this will be not so much because of its own merits as because of the demerits of a homogeneous one.

Legal and ethical issues (related to the technical aspects of agents):
This relates to such issues as:
Authentication: how can it be ensured that an agent is who it says it is, and that it is representing who it claims to be representing? (a signing sketch follows this list);
Secrecy: how can it be ensured that an agent maintains a user's secrets? How do you ensure that third parties cannot read a user's agent or execute it for their own gain?
Privacy: how can it be ensured that agents maintain a user's much-needed privacy when acting on his behalf?
Responsibility which goes with relinquished authority: when a user relinquishes some of his responsibility to one or more software agents (as he implicitly would), he should be (explicitly) aware of the authority that is being transferred to it/them;
Ethical issues, such as tidiness (an agent should leave the world as it found it), thrift (an agent should limit its consumption of scarce resources) and vigilance (an agent should not allow client actions with unanticipated results).
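
To illustrate the authentication issue: one conceivable, deliberately simplified scheme is to have the agent sign each message with a secret shared with its principal, so a receiver can check that the message really comes from who it claims. The sketch below uses an HMAC for this; the shared key and the message format are assumptions, and a real system would need proper key distribution and very likely public-key signatures.

```python
# A minimal authentication sketch using an HMAC over the message body.
# The shared secret and the message format are illustrative assumptions.

import hmac, hashlib

SECRET = b"key-shared-between-user-and-agent-platform"

def sign(sender: str, body: str) -> str:
    return hmac.new(SECRET, f"{sender}|{body}".encode(),
                    hashlib.sha256).hexdigest()

def verify(sender: str, body: str, signature: str) -> bool:
    return hmac.compare_digest(sign(sender, body), signature)

# The agent attaches a signature proving it represents 'alice':
body = "buy 10 widgets at <= 4.00"
sig = sign("alice", body)

# The receiving party checks the claim before acting on it:
print(verify("alice", body, sig))                  # True
print(verify("mallory", body, sig))                # False: wrong principal
print(verify("alice", "buy 10000 widgets", sig))   # False: tampered body
```
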
Enabling, facilitating and managing agent collaboration/multi-agent systems:
A lot of research has to be done into the various aspects of collaborating agents, such as:
Interoperability/communication/brokering services: how can brokering/directory-type services for locating engines and/or specific services, such as those we saw in chapter four, be provided? (a toy registry sketch follows this list);
Inter-agent co-ordination: this is a major issue in the design of these systems. Co-ordination is essential if groups of agents are to solve problems effectively; it is also required because of the constraints of resource-boundedness and time;
Stability, scalability and performance issues: these issues have yet to be acknowledged, let alone tackled, in collaborative agent systems. Although these issues are non-functional, they are crucial nonetheless;
Evaluation of collaborative agent systems: this problem is still outstanding. Methods and tests need to be developed to verify and validate these systems, so that it can be ensured that they meet their functional specifications, and to check whether such things as unanticipated events are handled properly.
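
As an illustration of such brokering/directory services, the sketch below implements a toy matchmaker in the spirit of the Matchmaker project from chapter four: providers advertise their capabilities, and other agents ask who can supply a given one. The method names and the capability strings are assumptions made for this example.

```python
# A toy matchmaker/directory service sketch: providers advertise
# capabilities, requesters ask who can supply them. All names illustrative.

from collections import defaultdict

class Matchmaker:
    def __init__(self):
        self._providers = defaultdict(set)   # capability -> agent names

    def advertise(self, agent: str, capability: str) -> None:
        self._providers[capability].add(agent)

    def unadvertise(self, agent: str, capability: str) -> None:
        self._providers[capability].discard(agent)

    def recommend(self, capability: str) -> list:
        """Return all agents currently advertising the capability."""
        return sorted(self._providers[capability])

mm = Matchmaker()
mm.advertise("weather-agent", "weather-forecast")
mm.advertise("travel-agent", "flight-booking")
mm.advertise("backup-weather-agent", "weather-forecast")

print(mm.recommend("weather-forecast"))
# ['backup-weather-agent', 'weather-agent']
print(mm.recommend("stock-quotes"))   # [] -> no provider known
```
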
Issues related to the User Interface:
Major (research) issues here are: [7]
Determining which learning techniques are preferable for which domains, and why. This can be achieved by carrying out many experiments using various machine learning techniques over several domains (a toy learning sketch follows this list);
Extending the range of applications of interface agents into other innovative areas (such as entertainment);
Demonstrating that the knowledge learned by interface agents can truly be used to reduce users' workload, and that users indeed want them;
Extending interface agents to be able to negotiate with other peer agents.
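
As a small illustration of the first point, the sketch below shows one of the simplest conceivable learning techniques for an interface agent: memory-based frequency counting over observed (situation, action) pairs. It is a toy model with assumed situations and actions, meant only to show the kind of thing such experiments would compare against richer techniques.

```python
# A toy interface agent that learns by observing (situation, action) pairs
# and suggests the most frequent action for a situation. The situations
# and actions are hypothetical; real interface agents use richer features.

from collections import Counter, defaultdict

class InterfaceAgent:
    def __init__(self):
        self._memory = defaultdict(Counter)  # situation -> action counts

    def observe(self, situation: str, action: str) -> None:
        self._memory[situation][action] += 1

    def suggest(self, situation: str):
        """Return the most common action seen in this situation, if any."""
        actions = self._memory[situation]
        return actions.most_common(1)[0][0] if actions else None

agent = InterfaceAgent()
agent.observe("mail-from-boss", "read-immediately")
agent.observe("mail-from-boss", "read-immediately")
agent.observe("mail-from-list", "file-in-folder")

print(agent.suggest("mail-from-boss"))   # read-immediately
print(agent.suggest("mail-unknown"))     # None -> no experience yet
```
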
Miscellaneous technical issues:
There are many other technical issues which will need to be resolved, such as:
Legacy systems: techniques and methodologies need to be established for integrating agents and legacy systems;
Cash handling: how will the agent pay for services? How can a user ensure that it does not run amok and run up an outrageous bill on his behalf? (a budget-guard sketch follows this list);
Improving/extending agent intelligence: the intelligence of agents will continuously need to be improved/extended in all sorts of ways;
Improving and extending agent learning techniques: can agent learning lead to instability of the agent's system? How can it be ensured that an agent does not spend too much of its time learning instead of performing the tasks it was set up for?
Performance issues: what will be the effect of having hundreds, thousands or millions of agents on a network such as the Internet (or a large WAN)?
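
The cash-handling question can be made concrete with a simple spending guard: the user gives the agent a hard budget, and every payment is checked against it before being committed. The sketch below is one assumed design, not a real payment protocol; all names are illustrative.

```python
# A minimal spending-guard sketch for the cash-handling issue: the user
# sets a hard budget and the agent cannot exceed it.

class BudgetExceededError(Exception):
    pass

class SpendingGuard:
    def __init__(self, budget: float):
        self.budget = budget
        self.spent = 0.0

    def pay(self, amount: float, description: str) -> None:
        if self.spent + amount > self.budget:
            # Refuse instead of running up an outrageous bill on the
            # user's behalf; the agent could ask the user for approval here.
            raise BudgetExceededError(
                f"refusing '{description}': {amount:.2f} would exceed "
                f"budget of {self.budget:.2f} (spent {self.spent:.2f})")
        self.spent += amount

guard = SpendingGuard(budget=50.00)
guard.pay(30.00, "database search fee")    # fine: 30.00 of 50.00 spent
try:
    guard.pay(25.00, "premium news feed")  # would take the total to 55.00
except BudgetExceededError as err:
    print(err)
```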

[1] But also on such aspects as marketing, development and investments. See, for instance, [JANC95].
[2] i.e. correspondence between one or more agents (or users).
[3] i.e. tasks that are performed by an agent.
[4] General Magic's Telescript expands this premise into multi-agent systems. As long as all agents in the system use Telescript conventions, they are part of a single, all-encompassing system. Such a system can support multiple users, each (in theory) using a different application.
[5] See chapter five of [JANC95].
[6] Either that, or by means of a very complicated and extensive homogeneous architecture (as it would have to be able to accommodate every possible legacy system).
[7] See (also) section 5.2 of [NWAN96].


"Intelligent Software Agents on the Internet" - by Björn Hermans