"The fall in the cost of gathering and transmitting information will boost productivity in the economy as a whole, pushing wages up and thus making people's time increasingly valuable. No one will be interested in browsing for a long while in the Net trying in whatever site whatever information! He wants just to access the appropriate sites for getting good information."
from "Linguistic-based IR tools for W3 users" by Basili and Pazienza


The main functions of the middle layer are:
1. Dynamically matching user demand and suppliers' supply in the best possible way.
Suppliers and users (i.e. their agents) can continuously issue and retract information needs and capabilities. Information does not become stale, and the flow of information is flexible and dynamic. This is particularly useful in situations where sources and information change rapidly, such as commerce, product development and crisis management. (A small illustrative sketch of this matching, and of the current awareness service described under the third function, follows this list.)
2. Unifying and possibly processing suppliers' responses to queries to produce an appropriate result.
The content of user requests and supplier 'advertisements' [1] may not align perfectly. So, satisfying a user's request may involve aggregating, joining [2] or abstracting information to produce an appropriate result. It should be noted, however, that intermediary agents should normally not process queries, unless this is explicitly requested in a query. [3]
Processing could also take place when the result of a query consists of a large number of items. Sending all these items over the network to a user (agent) would be an undesirable waste of bandwidth, as it is very unlikely that the user (agent) would want to receive that many items. The intermediary agent might then ask the user (agent) to refine the initial query or add some constraints to it.
3. Current awareness, i.e. actively notifying users of information changes.
Users will be able to request (agents in) the middle layer to notify them regularly, or perhaps even instantly, when new information about certain topics has become available, or when a supplier has sent an advertisement stating that it offers information or services matching certain keywords or topics.
There is considerable controversy about whether suppliers should be able to receive a similar service, i.e. whether a supplier could request to be notified when users have submitted queries, or have asked to receive notifications, that match the information or services this particular supplier provides. Although there may be users who find this convenient, as it puts them in touch with suppliers who can offer the information they are looking for, many other users would not be at all pleased with this invasion of their privacy. A lot of thought should therefore be given to this dilemma, and many things will need to be settled, before such a service is offered to suppliers as well.
4. Bringing users and suppliers together.
This activity is more or less an extension of the first function. It means that a user may ask an intermediary agent to recommend/name a supplier that is likely to satisfy some request without giving a specific query. The actual queries then take place directly between the supplier and the user.
Or a user might ask an intermediary agent to forward a request to a capable supplier with the stipulation that subsequent replies are to be sent directly to the user himself.
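
To make the first and third functions a bit more tangible, the sketch below shows, in deliberately simplified Python, how an intermediary agent might store supplier advertisements, match incoming queries against them, and notify subscribed users when matching supply is advertised. All names in it (Advertisement, Query, Matchmaker, subscribe, and so on) are invented for this illustration; it is a minimal sketch of the ideas described above, not an existing agent framework, and it leaves out the processing and refinement of results described under the second function.

# A minimal, illustrative matchmaker for the middle layer (all names invented).

from dataclasses import dataclass
from typing import Callable, Dict, List, Set


@dataclass
class Advertisement:
    """A supplier's description of the information/services it offers (see note [1])."""
    supplier: str        # e.g. an agent name or contact address
    topics: Set[str]     # keywords/topics the supplier claims to cover
    context: str = ""    # optional domain context, e.g. "commerce"


@dataclass
class Query:
    """A user agent's information need."""
    user: str
    topics: Set[str]
    context: str = ""


class Matchmaker:
    """Intermediary agent: matches demand and supply, and offers current awareness."""

    def __init__(self) -> None:
        self._ads: Dict[str, Advertisement] = {}
        # Current awareness subscriptions: topic -> callbacks to notify.
        self._subscriptions: Dict[str, List[Callable[[Advertisement], None]]] = {}

    # Function 1: dynamically matching demand and supply.
    def advertise(self, ad: Advertisement) -> None:
        """Suppliers can issue or update their advertisement at any time."""
        self._ads[ad.supplier] = ad
        self._notify_subscribers(ad)

    def retract(self, supplier: str) -> None:
        """Suppliers can also retract an advertisement, so information does not go stale."""
        self._ads.pop(supplier, None)

    def match(self, query: Query) -> List[Advertisement]:
        """Return suppliers whose advertised topics overlap with the query's topics."""
        hits = [ad for ad in self._ads.values() if ad.topics & query.topics]
        if query.context:
            # Prefer suppliers that advertised the same context, if any did.
            same_context = [ad for ad in hits if ad.context == query.context]
            return same_context or hits
        return hits

    # Function 3: current awareness.
    def subscribe(self, topic: str, callback: Callable[[Advertisement], None]) -> None:
        """Ask to be notified whenever new supply matching `topic` is advertised."""
        self._subscriptions.setdefault(topic, []).append(callback)

    def _notify_subscribers(self, ad: Advertisement) -> None:
        for topic, callbacks in self._subscriptions.items():
            if topic in ad.topics:
                for notify in callbacks:
                    notify(ad)


# Example use:
mm = Matchmaker()
mm.subscribe("second-hand cars", lambda ad: print("New supplier:", ad.supplier))
mm.advertise(Advertisement("cars-r-us", {"second-hand cars", "car parts"}, "commerce"))
print([ad.supplier for ad in mm.match(Query("some-user", {"second-hand cars"}))])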

These functions (with the exception of the second one) bring us to an important issue: the question of whether a user should be told where and from whom requested information has been retrieved. In the case of, say, product information, a user would certainly want to know this. With, say, a request for bibliographical information, the user would probably not be very interested in the specific, individual sources that have been used to satisfy the query.
Suppliers will probably want direct contact with the users that submit queries, and would like to bypass the middle layer (i.e. the intermediary agent). Unless a user specifically requests this (as is the case with the fourth function), it would probably not be a good idea to fulfil this wish. It would also undo one of the major advantages of using the middle layer: eliminating the need for users to interface with every individual supplier themselves.


At this moment, many users use search engines to fulfil their information needs. There are many search engines available, and quite a lot of them are tailored to finding specific kinds of information or services, or are aimed at a specific audience (e.g. academic researchers).
Suppliers use search engines as well. They can, for instance, "report" the information and/or services they offer by submitting their URL to such an engine. Or suppliers can start up a search engine (i.e. an information service) of their own, which will probably draw quite some attention to their organisation (and its products, services, etcetera), and may also enable them to test certain software or hardware techniques.

Yet, although search engines are a useful tool at this moment, their current deficiencies show that they are a mere precursor of true middle layer applications. In section 1.2.2 we saw a list of the general deficiencies of search engines (compared to software agents). But what are the specific advantages of the middle layer over search engines, and how does the former remove the latter's limitations, completely or partially?
Middle layer agents and applications will be capable of handling, and searching in, information in a domain dependent way.
Search engines treat information domain-independently (they do not store any meta-information about the context information has been taken from), whereas most supplier services, such as databases, offer (heavily) domain-dependent information. Advertisements that are sent to middle layer agents, as well as any other (meta-)information middle layer agents gather, will preserve the context of information (terms) and make it possible to use the appropriate context in such tasks as information searches (see next point).
Middle layer agents, like search engines, do not themselves contain domain-specific knowledge, but unlike search engines they obtain it from other agents or services and employ it in various ways.
Search engines do not contain domain-specific knowledge, nor do they use it in their searches. Middle layer agents will not possess any domain-specific knowledge either: they will delegate this task to specialised agents and services. If they receive a query containing a term that matches no advertisement (i.e. supplier description) in their knowledge base, but the query does mention in which context this term should be interpreted, they can farm out the request to a supplier that has indicated it offers information on this more general concept (as it is likely to have information about the narrower term as well) [4]. If a query term matches no advertisement and no context is given, specialised services (e.g. a thesaurus service offered by a library) can be employed to obtain related terms and/or possible contexts, or the user agent could be asked to give (more) related terms and/or a term's context.
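
As an illustration of the delegation just described, the sketch below (plain Python, with invented names such as ADVERTISEMENTS, thesaurus_broader_terms and find_suppliers, and a toy in-memory "thesaurus" standing in for a real specialised service) shows how an intermediary agent might fall back on a query's stated context, and then on a thesaurus service, when a term matches no advertisement. It is only a sketch under these assumptions, not a description of an actual system.

# Illustrative sketch (invented names) of the fallbacks described above.

from typing import Dict, List, Optional, Set

# Toy knowledge base of advertisements: supplier -> advertised terms.
ADVERTISEMENTS: Dict[str, Set[str]] = {
    "medline-db": {"medicine", "oncology"},
    "netlib": {"computer networks", "routing"},
}


def thesaurus_broader_terms(term: str) -> List[str]:
    """Stand-in for a specialised thesaurus service (e.g. offered by a library)."""
    toy_thesaurus = {
        "token ring": ["computer networks"],
        "melanoma": ["oncology", "medicine"],
    }
    return toy_thesaurus.get(term, [])


def find_suppliers(term: str, context: Optional[str] = None) -> List[str]:
    """Return suppliers for `term`, falling back to its stated context or a thesaurus."""
    # 1. Direct match against the advertisements in the knowledge base.
    direct = [s for s, terms in ADVERTISEMENTS.items() if term in terms]
    if direct:
        return direct
    # 2. No direct match, but the query states a context: try that broader concept,
    #    since a supplier of the general concept is likely to cover the narrow term too.
    if context:
        by_context = [s for s, terms in ADVERTISEMENTS.items() if context in terms]
        if by_context:
            return by_context
    # 3. Otherwise, ask the thesaurus service for related/broader terms.
    for broader in thesaurus_broader_terms(term):
        hits = [s for s, terms in ADVERTISEMENTS.items() if broader in terms]
        if hits:
            return hits
    # 4. As a last resort, the user agent could be asked for more context (not shown).
    return []


print(find_suppliers("token ring"))                    # found via the thesaurus service
print(find_suppliers("melanoma", context="medicine"))  # found via the stated context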
Middle layer agents and applications are better capable of dealing with the dynamic nature of the Internet, and the information and services that are offered on it.
Search engines hardly ever update the (meta-)information they have gathered about information and service suppliers and sources. The middle layer (and its agents), on the other hand, will be well capable of keeping information up to date. Suppliers can update their advertisements whenever and as often as they want. Intermediary agents can update their databases as well, for instance by removing entries that are no longer at their original location (it may be expected that future services will try to correct or update such entries, if possible). They may even send out special agents to find new suppliers/sources to add to the knowledge base. Furthermore, this information gathering process can be better co-ordinated (compared to the way search engines operate), in that a list is maintained of the domains/sites/servers information has been gathered about, which prevents duplicate work.
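
A rough sketch of how such a freshness check might look is given below, again in simplified Python with invented names (KNOWLEDGE_BASE, still_available, refresh); a real intermediary agent would of course have to be far more careful and polite about contacting suppliers' sites, and would try to correct entries before removing them.

# Sketch of a periodic "freshness" check an intermediary agent might run
# (invented structure; a real agent would be far more careful and polite).

import urllib.error
import urllib.request
from typing import Dict, Set

# Knowledge base entries: supplier -> URL of the advertised source.
KNOWLEDGE_BASE: Dict[str, str] = {
    "cars-r-us": "http://example.com/cars",
    "netlib": "http://example.org/networks",
}

# URLs already checked, so co-operating agents can avoid doing the same work twice.
ALREADY_CHECKED: Set[str] = set()


def still_available(url: str, timeout: float = 5.0) -> bool:
    """Return True if the advertised source still responds at its original location."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, OSError):
        return False


def refresh(knowledge_base: Dict[str, str]) -> None:
    """Drop entries that are no longer at their original location."""
    for supplier, url in list(knowledge_base.items()):
        if url in ALREADY_CHECKED:
            continue  # another (co-ordinated) agent has already checked this one
        ALREADY_CHECKED.add(url)
        if not still_available(url):
            # A more ambitious agent would first try to find the new location.
            del knowledge_base[supplier]


refresh(KNOWLEDGE_BASE)
print(KNOWLEDGE_BASE)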
Middle layer agents will be able to co-operate and co-ordinate efforts better than search engines do now.
The individual search engines do not co-operate. As a result, a lot of time, bandwidth and energy is wasted by search engines working in isolation. Middle layer agents will try to avoid this by co-operating with other agents (in both the middle and the supplier layer) and by sharing knowledge and gathered information (such as advertisements). One way to achieve this could be the construction of a few "master" middle layer agents, which receive all the queries and advertisements from all over the world and act as a single interface towards both users and suppliers. The information in advertisements and user queries is then distributed, or farmed out, to specialised middle layer agents. These "master" middle layer agents could also contact supporting agents/services (such as the earlier mentioned thesaurus service), and would themselves only handle those requests and advertisements that no specialised agent has (yet) been constructed for.
In fairness it should be remarked that expected market forces will make it hard to reach this goal. In section 4.4.2 we will come back to this.
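
The routing role of such "master" agents could, very schematically, look like the Python sketch below; the registry SPECIALISED_AGENTS, the handlers and the fallback are all invented for illustration only.

# A very small sketch of a "master" middle layer agent that routes queries
# to specialised agents (all names below are invented for illustration).

from typing import Callable, Dict

# Registry of specialised middle layer agents: domain -> handler.
SPECIALISED_AGENTS: Dict[str, Callable[[str], str]] = {
    "medicine": lambda query: "medical agent handles: " + query,
    "computer networks": lambda query: "network agent handles: " + query,
}


def fallback_handler(query: str) -> str:
    """The master agent itself handles queries no specialised agent exists for (yet),
    possibly calling supporting services such as a thesaurus service."""
    return "master agent handles: " + query


def route(query: str, domain: str) -> str:
    """Single interface towards users: farm the query out by domain."""
    handler = SPECIALISED_AGENTS.get(domain, fallback_handler)
    return handler(query)


print(route("token ring troubleshooting", "computer networks"))
print(route("medieval manuscripts", "history"))  # no specialised agent for this yet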
Middle layer agents are able to offer current awareness services.
Search engines do not offer current awareness services. Middle layer agents and applications will be able to inform users (and possibly suppliers) regularly about information changes regarding certain topics.
Middle layer agents are not impeded in their (gathering) activities by (suppliers') security barriers.
Many services do not give a search engine's gathering agents access to (certain parts of) their service, or, in the case of a total security barrier such as a firewall, do not give them access at all. As a result, a lot of potentially useful information is unknown to the search engine (i.e. no information about it is stored in its knowledge base), and thus will not appear in query results.
In the three layer model, suppliers can provide the middle layer with precise information about the services and/or information they offer. No gathering agent will need to enter their service at all, and thus no security problems will arise on this point.

[1] i.e. the list of offered services and information individual suppliers provide to the middle layer/middle layer agents.
[2] Responses are joined when individual sources come up with the same item or answer. Of course, somewhere in the query results it should be indicated that some items (or answers) have been joined.
[3] For instance, a request for information about second-hand cars could state that only the ten cheapest cars, or the ten cars that best fit the query, should be returned.
[4] This can be very handy in areas where a lot of very specific jargon is used, such as medicine or computer science. A query (of either a user or an intermediary agent) could then use common terms, such as "LAN" and "IBM", whereas the agent of a database about computer networks would automatically translate this into a term such as "Coaxial IBM Token-ring network with ring topology".


"Intelligent Software Agents on the Internet" - by Björn Hermans