Intelligent software agents are a popular object of research these days in fields such as psychology, sociology and computer science, and are studied most intensively in the discipline of Artificial Intelligence (AI). Strangely enough, the question of what exactly an agent is has only very recently been addressed seriously.
Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to form a good estimate of what agent technology can offer. At this moment, there is every appearance that there are more definitions of the term than there are working examples of systems that could be called agent-based.
Agent producers who unjustly use the term "agent" to designate their products lead users to conclude that agent technology as a whole has little to offer. That is - obviously - a worrying development.
On the other hand, the description of agent capabilities should not be too rose-coloured either.
Not everybody is thrilled about agents, however. A point of criticism often heard, especially from the field of computer science, is that agents are not really a new technique, and that anything that can be done with agents "can just as well be done in C". According to these critics, agents are nothing but the latest hype.
Researchers, particularly those in the field of AI, refute these points of criticism with the following arguments:
The 'pros' and 'cons' regarding agents mentioned here are by no means complete, and should be seen merely as an illustration of the general discussion about agents. What this overview does show is why it is necessary (in several respects) to have a definition of the concept "intelligent software agent" that is as clear and as precise as possible. It also shows that there is probably a long way to go before we arrive at such a definition - if we can arrive at one at all.
 Unfortunately, that question opens up the old AI can of worms about definitions of intelligence: does an intelligent entity necessarily have to possess emotions, self-awareness, and so on, or is it sufficient that it performs tasks for which we currently possess no algorithmic solutions?
 The 'opposite' can be said as well: in many cases the individual agents of a system are not very intelligent at all, but their combination and co-operation gives rise to the intelligence of the agent system as a whole.
 These researchers see a paradigm shift from those who build intelligent systems, and consequently grapple with problems of knowledge representation and acquisition, to those who build distributed, not particularly intelligent, systems and hope that intelligence will emerge in some sort of Gestalt fashion. The knowledge acquisition problem is thereby solved by declaring it a 'non-problem'.