Currently, when someone is looking for specific information on the Internet, there are many possible ways to find it. One of these possibilities, which we have seen earlier, is the use of search engines.
The problems with these are that:
They require a user to know how best to operate every individual search engine;
A user should know exactly what information he is looking for;
The user should be capable of expressing his information need clearly (with the right keywords).
However, many users neither know exactly what they are looking for, nor have a clear picture of which information can and which cannot be found on the Internet, nor do they know the best ways to find and retrieve it.
A supplier of services and/or information faces similar or even bigger problems. Technically speaking, millions of Internet users have access to his service and/or information. In the real world, however, things are a little more complicated. Services can be announced by posting messages on Usenet, but this is a 'tricky business', as most Usenet (but also Internet) users do not like receiving unwanted, unsolicited messages of this kind (especially if they announce or recommend commercial products or services). Another possibility to draw attention to a service is to buy advertising space on popular sites (or pages) on the World Wide Web. Even if thousands of users see such a message, it still remains to be seen whether or not these users will actually use the service or browse the information that is being offered. Even worse: many persons who would be genuinely interested in the services or information offered (and may even be searching for them) are reached insufficiently or not at all.

In the current Internet environment, the bulk of the processing associated with satisfying a particular need is embedded in software applications (such as WWW browsers). It would be much better if the whole process could be elevated to higher levels of sophistication and abstraction.
Several researchers have addressed this problem. One of the most promising proposals is a model in which activities on the Internet are split up into three layers, with each layer focusing on one part of the activity.


Figure 2 - Overview of the Three Layer Model

Within each individual layer the focus is on one specific part of the activity (in the case of this thesis and of figure 2: an information search activity), which is supported by matching types of software agents. These agents will relieve us of many tedious, administrative tasks, which in many cases can be taken over just as well, or even better, by a computer program (i.e. a software agent). What is more, the agents will enable a human user to perform complex tasks better and faster.

The three layers are:
1. The demand side (of information), i.e. the information searcher or user; here, agents' tasks are to find out exactly what users are looking for, what they want, if they have any preferences with regard to the information needed, etcetera;
2. The supply side (of information), i.e. the individual information sources and suppliers; here, an agent's tasks are to make an exact inventory of (the kinds of) services and information that are being offered by its supplier, to keep track of newly added information, etcetera;
3. Intermediaries; here agents mediate between agents (of the other two layers), i.e. act as (information) intermediaries between (human or electronic) users and suppliers.
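Purely as an illustration of how agents in these three layers might interact, the following Python sketch models the flow of a query from a demand-side agent, via an intermediary (broker) agent, to supply-side agents. All class, method and message-field names are hypothetical and are not taken from this thesis or from any agent standard.

```python
# Hypothetical sketch of the three layer model; names are illustrative only.

class SupplierAgent:
    """Supply side (layer 2): keeps an inventory of what its supplier offers."""
    def __init__(self, name, catalogue):
        self.name = name
        self.catalogue = catalogue            # e.g. {"weather reports": "..."}

    def advertise(self):
        # Tell an intermediary which kinds of information this supplier offers.
        return {"sender": self.name, "topics": list(self.catalogue)}

    def retrieve(self, topic):
        return self.catalogue.get(topic)


class IntermediaryAgent:
    """Middle layer (layer 3): mediates between user and supplier agents."""
    def __init__(self):
        self.directory = {}                   # topic -> supplier agents

    def register(self, supplier):
        for topic in supplier.advertise()["topics"]:
            self.directory.setdefault(topic, []).append(supplier)

    def answer(self, query):
        # Forward the query to every supplier that advertised the topic.
        suppliers = self.directory.get(query["topic"], [])
        return [s.retrieve(query["topic"]) for s in suppliers]


class UserAgent:
    """Demand side (layer 1): finds out what its user is looking for."""
    def __init__(self, intermediary):
        self.intermediary = intermediary

    def search(self, topic):
        # In a real agent this query would be built from the user's preferences.
        return self.intermediary.answer({"topic": topic})


# Example of the flow: a supplier registers, a user asks, the broker mediates.
broker = IntermediaryAgent()
broker.register(SupplierAgent("weather-srv", {"weather reports": "report for today"}))
print(UserAgent(broker).search("weather reports"))   # -> ['report for today']
```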

When constructing agents for use in this model, it is absolutely necessary to do this according to generally agreed-upon standards: it is unfeasible to make the model account for every possible type of agent. Therefore, all agents should respond and react in the same way (regardless of their internal structure) by using some standardised set of codes. To make this possible, the standards should be flexible enough to provide for the construction of agents for tasks that are unforeseen at the present time.
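As an illustration only: such a standardised set of codes could resemble the performative-plus-parameters structure used by agent communication languages such as KQML (mentioned below). The field names in this small sketch are chosen for the example and do not represent an actual standard.

```python
# Hypothetical standardised agent message: every compliant agent, whatever its
# internal structure, would send and accept records of this shape.
def make_message(performative, sender, receiver, content, language="hypothetical-acl"):
    """Build a message with a fixed, agreed-upon set of fields."""
    return {
        "performative": performative,   # e.g. "ask", "tell", "advertise"
        "sender": sender,
        "receiver": receiver,
        "content": content,
        "language": language,           # identifies how 'content' is expressed
    }

# A demand-side agent asking an intermediary for information:
query = make_message("ask", "user-agent-1", "broker-7",
                     {"topic": "second-hand cars"})
```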

The three layer model has several (major) plus points:
1. Each of the three layers only has to concern itself with doing what it is best at.
Parties (i.e. members of one of the layers) no longer have to act as some kind of "jack-of-all-trades";
2. The model itself (and the same goes for the agents that are used in it) does not enforce a specific type of software or hardware.
The only thing that has to be complied with is the set of standards mentioned earlier. This means that everybody is free to choose whatever underlying technique they want to use (such as the programming language) to create an agent: as long as it responds and behaves according to the specifications laid down in the standards, everything is okay. A first step in this direction has been made with the development of agent communication and programming languages such as KQML and Telescript.
Yet, a lot of work has to be done in this area, as most of the current agent systems do not yet comply with the latter demand: if you want to bring them into action at some Internet service, this service needs to have specific software running that is able to communicate and interact with that specific type of agent. And because many of the current agent systems are not compatible with other systems, this would lead to a situation where an Internet service would have to possess software for every possible type of agent that may be using the service: a most undesirable situation (the sketch after this list illustrates how a single standardised interface avoids this);
3. By using this model, the need for users to learn how each of the individual Internet services has to be operated disappears;
the Internet and all of its services will 'disappear' and become one cohesive whole;
4. It is easy to create new information structures or to modify existing ones without endangering the open (flexible) nature of the whole system.
The ways in which agents can be combined become seemingly endless;
5. Implementing the three layer model requires no interim period, nor does the fact that it needs to be backward-compatible with the current (two layer) structure of the Internet have any negative influence on it.
People (both users and suppliers) who choose not to use the newly added intermediary or middle layer are free to do so. However, they will soon discover that using the middle layer in many cases leads to quicker and better results with less effort. (More about this will follow in the next sections.)
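To illustrate the compatibility point made under plus point 2: with a shared message standard, a service needs only one dispatcher for all compliant agents, instead of dedicated software for every agent type. This is a hedged sketch that reuses the hypothetical make_message helper from the earlier example; none of the names come from an existing agent system.

```python
# Hypothetical service-side dispatcher: one handler per performative suffices,
# regardless of which vendor's agent sent the message, as long as the message
# complies with the shared standard.
def handle_message(message, catalogue):
    performative = message["performative"]
    if performative == "ask":
        topic = message["content"]["topic"]
        return make_message("tell", "service", message["sender"],
                            {"topic": topic, "result": catalogue.get(topic)})
    if performative == "advertise":
        # Record what the sending (supplier) agent offers.
        return make_message("acknowledge", "service", message["sender"], {})
    # Non-compliant or unknown messages can be rejected uniformly.
    return make_message("sorry", "service", message["sender"], {})
```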

The "only" current deficiency of this model is the lack of generally agreed-upon standards, such as one for the agent communication language to be used. Such standards are a major issue for the three layer model, as they ensure that (agents in) the individual layers can easily interface with (agents in) the other ones. Organisations such as the Internet Engineering Task Force (IETF) and its working groups have been, and still are, addressing this issue.
