In chapter one, two statements were formulated. Let us now see how these statements, a claim and a prediction, have turned out. [1]



The claim that was made with regard to the first part of this thesis consisted of two parts. The first part was:

"Intelligent Software Agents make up a promising solution for the current (threat of an) information overkill on the Internet."

Judging from the information presented in chapters two and three, and from published research reports, new product announcements and articles in the media, it seems safe to conclude that agents are starting to take off, and are judged by many as valuable, promising and useful. Numerous agent-like as well as truly agent-enabled applications are available on the Internet (albeit often as test or beta versions). These already offer a broad range of functions, which make it possible to perform all sorts of tasks on the Internet (some of which were not feasible in the past), or which support users while they perform these tasks themselves.

Only a few objections can be raised against the claim that agents "make up a promising solution" for the information overkill on the Internet. The objections that can be made concern the lack of standards with regard to vital agent aspects (such as the communication language and the architecture that will be used), and the vagueness of some of the agent's aspects (as seen in section 2.2). While these are valid objections, none of them is an insurmountable obstacle to the further development of agent technology as a whole, and of agent-enabled applications in particular.


The second part of the claim elaborated on the first part:
"The functionality of agents can be maximally utilised when they are employed in the (future) three layer structure of the Internet."

The current structure of the Internet seems to be missing something. Users complain that they are increasingly unable to find the information or services they are looking for. Suppliers complain that it is getting increasingly difficult to reach users, let alone the right ones. Both seem to find that "it's a jungle out there". This is a worrying development, also for governments and the many others who want the Internet (and all the information and services that are available through it) to be easily accessible and usable for all. What many seem to want, either implicitly (e.g. by stating that some sort of intermediary services are needed) or explicitly, is that a third party [2] or layer be added to the Internet. This layer or party will try to bring supply (i.e. suppliers) and demand (i.e. users) together in the best possible way. The three layer model, as seen in chapter four, is a way in which this can be accomplished.

So, adding a third layer or party to the Internet seems very promising, and a way of offering new and powerful services to everyone on the Internet. But does it lead to agents being "maximally utilised"? First and foremost: it does not mean that agents have little to offer when they are not employed in a three layer structure. Individual agents (or agent systems) are capable of doing many things, even outside such a structure. But some of the offered functionality can be provided more efficiently, and probably faster or at lower cost, when the three layer structure is used (as was shown in chapter four). Moreover, the structure will enable tasks that a single agent cannot do well, or cannot do at all, such as finding information within a foreseeable period of time on (ideally) the whole Internet.
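The efficiency argument can be illustrated with a small sketch (the class names and the simple topic-matching rule are my own illustration, not part of the thesis): instead of a user agent contacting every supplier in turn, supplier agents register their offerings with an intermediary in the middle layer, which can then answer a user's query with a single lookup.

```python
class SupplierAgent:
    """A supplier on the second layer, offering information on some topics."""
    def __init__(self, name, topics):
        self.name = name
        self.topics = set(topics)

class Broker:
    """An intermediary on the middle layer: keeps an index of supplier
    offerings, so a user query is answered with one lookup instead of
    one round trip per supplier."""
    def __init__(self):
        self.index = {}  # topic -> names of suppliers offering it

    def register(self, supplier):
        for topic in supplier.topics:
            self.index.setdefault(topic, []).append(supplier.name)

    def find(self, topic):
        return self.index.get(topic, [])

broker = Broker()
broker.register(SupplierAgent("news-site", ["news", "weather"]))
broker.register(SupplierAgent("library", ["books", "news"]))

# A user agent asks the broker instead of contacting each supplier:
print(broker.find("news"))   # → ['news-site', 'library']
```

The point of the sketch is only the shape of the interaction: the work of discovering who offers what is done once, by the middle layer, rather than repeatedly by every user agent.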


Taking the conclusions and remarks about the two sub-statements together, it can safely be concluded that agents, either individually or (preferably) employed in the three layer structure, have the potential to become a valuable tool in the (Internet's) information society.



With regard to the trends and developments of the second part of this thesis, the following prediction was stated:

"Agents will be a highly necessary tool in the process of information supply and demand. However, agents will not yet be able to replace skilled human information intermediaries. In the forthcoming years their role will be that of a valuable personal assistant that can support all kinds of people with their information activities."

In the previous section it has been shown that agents can contribute in many ways to improving "the process of information supply and demand" (e.g. as intermediary agents). The question now is: are they better at this than, say, a human information broker?
When I started writing this thesis, i.e. when I formulated this prediction, I assumed agents were not, and would not be, able to replace human intermediaries (at least not in the next three to five years). Now, lots of information, six chapters, and five months later, I would say that this assumption was more or less correct. "More or less" because it paints the future situation more dimly than necessary: agents will not (yet) be able to replace skilled human information intermediaries in all areas. There are tasks that are so complicated (in the broadest sense) that they cannot be done by agents (yet, or maybe not ever). But there are numerous other tasks that agents are very well capable of doing. What is more, there are tasks that agents will (soon) be better at than their human counterparts, such as performing massive information searches on the Internet, which agents can do faster and twenty-four hours a day.
So, agents will be 'nothing more' than "a valuable personal assistant" in some cases, but they will also be (or become) invaluable in others. And there will be cases where humans and agents are (more or less) equally good. When, for instance, a choice has to be made between a human and an electronic intermediary, the decision which of the two to approach (i.e. 'use') will depend on such factors as costs/prices and the additional services that can be delivered.
More generally, the choice will probably be between doing a task yourself (which leaves you in control, but may lead to the task being done inefficiently, incompletely or more expensively) and trusting agents to do it for you (with all the advantages and disadvantages we have seen in this thesis).

[1] about six months after they have been formulated.
[2] users and suppliers being the first and second parties.


"Intelligent Software Agents on the Internet" - by Björn Hermans