3.3.2 Intermediaries and Information Brokers

The most prominent actors in the third 'stream' will be the so-called Intermediaries and Information Brokers [1]. This section gives an overview of the services they could provide to other parties on the Internet, and of the surplus value this would offer those parties.

First, let us have a look at how these brokers (will) fit into the process of information exchange.
The idea is that each supplier will provide information brokers with an 'advertisement' of all the information and services that this supplier has to offer. The content of such an advertisement will have to adhere to certain rules and conventions, and (possibly) to a certain type of knowledge representation language (e.g. KIF or KQML), and it will contain only meta-information that best describes the available information, documents, and so on. The purpose of the advertisement is to give the broker a concise, yet complete overview of all that a supplier has to offer.
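By way of illustration, such an advertisement could look something like the following sketch, written here as a Python data structure; the field names, identifiers and values are invented for this example, and a real system would follow whichever standard (e.g. KQML) the parties have agreed upon.

    # A hypothetical supplier advertisement, loosely modelled on a KQML
    # 'advertise' message. Note that it contains meta-information only:
    # the broker never stores the supplier's actual documents.
    advertisement = {
        "performative": "advertise",
        "sender": "supplier-042",            # hypothetical supplier id
        "receiver": "broker-01",             # hypothetical broker id
        "ontology": "consumer-electronics",  # vocabulary the topics come from
        "content": {
            "topics": ["laptops", "printers", "product reviews"],
            "document_types": ["review", "price list", "manual"],
            "update_frequency": "daily",
            "access": "fee-based",
        },
    }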
Now, when an information broker receives an information query, it should be able to determine, on the basis of all the advertisements it has collected, to which suppliers it should or could send this query; the intermediary will not store any of the actual information offered by suppliers. For this system to work, it is crucial that storing content never becomes the task of the information brokers, since this would burden them with a maintenance task that is virtually impossible to accomplish. Even software-driven search engines, capable of processing thousands of pages a second, find it increasingly difficult to maintain their information base: which (human or computer) information broker would dare to claim it could really get this job done? [2]
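A minimal sketch of this routing step is given below. The naive topic-overlap matching is an assumption made for the example; a real broker would reason over the full knowledge representation (and its context) instead of over bare keywords.

    def route_query(query_topics, advertisements):
        # Decide which suppliers a query should be sent to, based solely
        # on the meta-information in the collected advertisements.
        matches = []
        for ad in advertisements:
            overlap = set(query_topics) & set(ad["content"]["topics"])
            if overlap:
                matches.append((ad["sender"], len(overlap)))
        matches.sort(key=lambda m: m[1], reverse=True)  # best coverage first
        return [supplier for supplier, _ in matches]

    # Usage, reusing the advertisement sketched above:
    # route_query(["laptops", "product reviews"], [advertisement])
    # -> ["supplier-042"]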
When a received information query or one or more advertisements lack the required meta-information and/or the context in which they should be viewed, an information broker can delegate the task of deriving or collecting these to a third party (e.g. specialised agents, thesaurus services, etcetera).
After it has sent out the query to the appropriate sources, the information broker will collect the results from each individual source. Before sending these results to the party it received the query from, the broker will probably enhance them by ranking them, removing duplicate entries, etcetera.
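In its simplest form, such post-processing could look like the following sketch; the assumption that results arrive as (url, score) pairs, and the ranking heuristic itself, are both invented for the example.

    def merge_results(per_source_results):
        # Combine the result lists of several sources: keep the best
        # score for duplicate entries, then rank the rest by score.
        best = {}
        for results in per_source_results:
            for url, score in results:
                if url not in best or score > best[url]:
                    best[url] = score
        return sorted(best.items(), key=lambda item: item[1], reverse=True)

    # Usage: two sources returning partly overlapping results.
    ranked = merge_results([
        [("http://a.example/review", 0.9), ("http://b.example/spec", 0.4)],
        [("http://a.example/review", 0.7), ("http://c.example/faq", 0.6)],
    ])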

So, in short: an information broker takes input from information providers as well as information consumers (in the form of advertisements and queries); it may then enrich this input with additional information (about the appropriate or intended context, meta-information, thesaurus terms, etcetera) and will then, based on the meta-information it possesses, try to match the input with the best-fitting parties. [3]
This set-up has several advantages, the most important probably being that the update problem that services like search engines are struggling with is greatly diminished, as brokers only store meta-information about each source or service. This information goes stale at a much slower rate than the content itself does, and it is also several factors smaller in volume than the information as it is currently stored and maintained by search engines (and the like). Another advantage is that this way of working is more efficient than the current situation.


What will be the main activities of Intermediaries and Information Brokers? These activities include:
Dynamically matching Information Pull and Information Push in the best possible way.
Brokers will receive, gather and store information about information and service needs (i.e. about the consumer domain) as well as about information and service offerings (i.e. about the supplier domain). By adding value to the (meta-)information they receive - where and when needed - e.g. by enriching it with extra information and data such as the information's context, brokers will aim to make an optimal match between supply and demand.
An example of this added value could be the pre-processing and refining of consumer information queries as well as of the supplier responses that result from them; it may very well be that the content of a query and suppliers' 'advertisements' (i.e. the lists of offered services and information that individual suppliers provide to the broker) do not align perfectly. A broker could then try to find the most appropriate match by adding extra terms to the query (e.g. based on the context of the query), or by using other means, as sketched below.
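As an illustration of the thesaurus-based variant of this refining, consider the following sketch; the thesaurus content and the function name are invented for the example.

    # A hypothetical thesaurus, mapping query terms to related terms.
    THESAURUS = {
        "notebook": ["laptop", "portable computer"],
    }

    def expand_query(terms):
        # Broaden a query with related terms, so that it aligns better
        # with the vocabulary used in suppliers' advertisements.
        expanded = list(terms)
        for term in terms:
            expanded.extend(THESAURUS.get(term, []))
        return expanded

    # A query for "notebook" now also matches suppliers advertising
    # "laptop": expand_query(["notebook", "review"])
    # -> ["notebook", "review", "laptop", "portable computer"]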
Acting as a Trusted Third Party (TTP) to enable all kinds of safe (i.e. trusted) services and to ensure the privacy and integrity of the other parties involved.
To accomplish mundane and simple tasks, people probably won't have many objections to employing brokers and agents (e.g. intermediary agents, information agents). However, when using them leads to such activities as making financial transactions or verifying the credibility of sources and the authenticity of information, people will want to involve a trusted (human) third party in the process.
On the supply side, trusted third parties have several advantages to offer as well, most of which are similar to those at the client side. For example, parts of (or even entire) information sources quite often cannot be indexed by services such as search engines, because the information and/or the offered services have been put out of reach of the search engines' crawler programs. Usually, this does not mean that this information should not be available at all; it just should not be available freely and directly (for a number of reasons). Trusted third parties can offer suppliers a sure and reliable mechanism to make information available to the public without losing control over it (i.e. without the risk of leaving valuable information unprotected and without the risk of forgoing revenues).
Another example of a valuable TTP service that brokers and intermediaries could offer is ensuring the privacy of the parties involved in a transaction, query, or any other activity, for instance by relaying messages between these parties without revealing the exact identity of any of them.
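A minimal sketch of such identity-shielding relaying follows; the pseudonym scheme and the deliver helper are assumptions, the latter standing in for whatever transport (e-mail, an agent protocol, etcetera) is actually used.

    import uuid

    class RelayBroker:
        # Forward messages between parties while exposing only
        # broker-generated pseudonyms, never real addresses.

        def __init__(self):
            self._address_of = {}  # pseudonym -> real address

        def register(self, real_address):
            pseudonym = "party-" + uuid.uuid4().hex[:8]
            self._address_of[pseudonym] = real_address
            return pseudonym  # parties only ever see each other's pseudonym

        def relay(self, to_pseudonym, message):
            deliver(self._address_of[to_pseudonym], message)

    def deliver(address, message):
        # Hypothetical transport helper.
        print("delivering to", address, ":", message)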
Offering current awareness services; i.e. actively notifying users of information or service changes and/or the availability of new information or services that match their needs.
People will want to be able to request intermediary persons and agents to notify them regularly, or maybe even instantly, when a source sends in an advertisement stating that it offers information or services that match certain topics in their (information) profile (a sketch of this matching step follows after this item).
There is quite some controversy about the question whether parties on the supply side should be able to receive a similar service: i.e. whether they should be able to request to be notified when users have stated queries, or have asked to receive notifications, that match the information or services this particular supplier provides. Although some may find this convenient, as it puts them in touch with suppliers who can offer the information they are looking for, many others would not be pleased with this invasion of their privacy (they could very well consider it "spamming"). This issue should therefore be treated with great care and given a lot of thought.
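The consumer-side notification itself could, schematically, look like the sketch below; representing user profiles as plain topic lists is an assumption made for the example.

    def notify_matching_users(new_advertisement, profiles):
        # Compare a newly received advertisement against the stored user
        # profiles; notify every user whose profile topics it matches.
        advertised = set(new_advertisement["content"]["topics"])
        for user, profile_topics in profiles.items():
            if advertised & set(profile_topics):
                send_notification(user, new_advertisement["sender"])

    def send_notification(user, supplier):
        # Hypothetical delivery channel (e-mail, an agent message, ...).
        print(user, "<- new matching offer from", supplier)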
Bringing parties seeking and supplying services or information together.
This activity is more or less an extension of the first one. It means that users may ask an intermediary to recommend or name a supplier or source that is likely to satisfy requests of a certain kind (with no specific query given). The actual queries then take place directly between the supplying and the seeking parties.

Introducing intermediary services into the information market almost immediately raises a couple of questions, one of the most important being whether or not you should be told where and from whom requested information has been retrieved. In the case of, say, product reviews, you would certainly want to know this, whereas with information like a bibliography, you would probably not be that interested in the individual sources that were used to compile it.
Suppliers, on the other hand, will probably want to have direct contact with you, and would like to bypass the intermediaries. Unless this is specifically requested (as is the case with the fourth activity), it would probably not be a good idea to fulfil this wish. It would also undo an important advantage of using intermediaries: eliminating the need to interface with every individual supplier yourself.

    By using intermediaries, each party only has to concern itself with doing what it is best at, and responsibilities are placed where they belong. [4]
No longer will it be necessary to be a "jack-of-all-trades"; by letting parties (themselves or their agents) continuously issue and retract information needs and capabilities, information (e.g. meta-information about it) does not become stale, and the flow of information stays flexible and dynamic. This is particularly useful in situations where sources, and the information itself (e.g. stock data), change rapidly, as the sketch below illustrates.
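On the broker's side, this issuing and retracting amounts to little more than the following bookkeeping; the registry shape is an assumption, building on the advertisement structure sketched earlier.

    class AdvertisementRegistry:
        # Lets suppliers issue and retract their advertisements at any
        # moment, so the broker's meta-information never silently goes
        # stale.

        def __init__(self):
            self._ads = {}  # supplier id -> latest advertisement

        def advertise(self, ad):
            self._ads[ad["sender"]] = ad  # replaces any earlier version

        def unadvertise(self, sender):
            self._ads.pop(sender, None)   # supplier withdraws its offering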
    Using intermediaries (or not) is not something that is enforced, nor does it enforce the usage of certain proprietary techniques or software.
The choice whether or not to make use of intermediaries is not a choice between being 'compatible' or 'incompatible'; everyone is free to start or stop using them. The 'only' thing that will have to be enforced (or rather: complied with) are standards for stating and interchanging queries and 'advertisements' (e.g. the usage of KIF, KQML, and the like).
    By using this model, the need disappears to delve into the ways in which individual Internet services have to be operated and interfaced, and all energy can be focused on what is really important: the task at hand or the problem to be solved.
The network, with its complexities, can gradually recede into the background, and all of the services offered on it can become a cohesive whole. The whole online market place can be elevated to higher levels of sophistication and abstraction:

"Whenever people learn something sufficiently well, they cease to be aware of it. When you look at a street sign, for example, you absorb its information without consciously performing the act of reading.. Computer scientist, economist, and Nobelist Herb Simon calls this phenomenon "compiling"; philosopher Michael Polanyi calls it the "tacit dimension"; psychologist TK Gibson calls it "visual invariants"; philosophers Georg Gadamer and Martin Heidegger call it "the horizon" and the "ready-to-hand", John Seely Brown at PARC calls it the "periphery". All say, in essence, that only when things disappear in this way are we freed to use them without thinking and so to focus beyond them on new goals."
from [WEIS91]

    Intermediaries will (more easily) be able to include off-line resources in their database of information sources as well.
While online information sources are very valuable to information brokers, they are not the only sources of information available. Other important resources include print materials, CD-ROMs, various other electronic sources (such as large databases), and also human experts (e.g. scientists). Which of the available sources are best used depends on the type of information being sought, the thoroughness required, the time available, and the available budget for the search. ‘Traditional’, human information brokers already use this methodology when performing searches; for instance, researchers at Find/SVP have access to the hundreds of publications the firm receives each month, its more than 2000 online databases, tens of thousands of subject and company files, hundreds of reference books, an extensive microfiche collection, and computer disk sources. Using both online and off-line sources, Information Brokers will be able to offer information that is both more extensive and of a higher quality (compared to, say, a search engine). What’s more, the owners of off-line sources (e.g. publishers) will be able to extend their reach and services to the growing online market place.
    As intermediaries work with meta-information instead of the content itself, higher-level services can be offered, and higher-level information can be stored about both consuming and supplying parties, without excessive effort. The responsibility for maintaining and advertising the information and services themselves rests where it should: at the source.
Brokers work with higher-level (meta-)information, which relates to an entire information source instead of to each individual document that this source can supply (and which therefore occupies only a fraction of the space needed when working with summaries of every individual document, as many search engines do now). The time and energy saved by working with meta-information can be used to enrich it with all kinds of extra information and data, such as the context of information. This means that information queries can be executed more precisely and more accurately.
At the same time, suppliers of information and services do not have to deal with the traditional constraints of, say, search engines, where only part of the content and services they offer is stored and available for querying. Now they can supply rich and complete descriptions (and other data) about the products they have to offer. They can actively maintain, advertise and update this information at any moment: it is no longer necessary to sit and wait until a search engine (re)visits your service to (re)collect information about it. What's more: the source now controls the meta-information that is available on its service. And who could know better how to describe this than the source itself? [5]

"Analyses of a site's purpose, history and policies are beyond the capabilities of a crawler program. Another drawback of automated indexing is that most search engines recognize text only. The intense interest in the Web, though, has come about because of the medium's ability to display images, whether graphics or video clips. [...] No program can deduce the underlying meaning and cultural significance of an image (for example, that a group of men dining represents the Last Supper)."
from [LYNC97]

    Intermediaries are able to offer asynchronous and priority-based query services.
In the current situation, it is usually not possible to issue a query, disconnect, and come back later to collect the results (possibly after receiving a notification of their availability). It is also (usually) not possible to indicate the priority of a query: there are times when a (near) immediate response to a query is needed (e.g. when you are facing a deadline), whereas at other times you would not mind having to wait a while before the query gets processed (which could be rewarded with lower costs [6] for this query).
Intermediaries may be expected to possess personal information (e.g. an e-mail address) of the person or party that issued an information need, query or request. This enables brokers to offer asynchronous and/or priority-based services to these parties. They can also offer information and services at usage-based pricing.
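Schematically, such asynchronous, priority-based handling could be built around a standard priority queue, as in the sketch below; the priority levels, the ticket mechanism and the placeholder processing step are all invented for the example.

    import heapq
    import itertools

    class QueryBroker:
        URGENT, NORMAL, BATCH = 0, 1, 2  # lower value = served earlier

        def __init__(self):
            self._queue = []
            self._tickets = itertools.count()  # also breaks ties (FIFO)
            self.results = {}                  # ticket -> results

        def submit(self, query, priority=NORMAL):
            # The issuer receives a ticket, may disconnect, and can
            # collect the results later (or be notified of them).
            ticket = next(self._tickets)
            heapq.heappush(self._queue, (priority, ticket, query))
            return ticket

        def process_next(self):
            # Placeholder: a real broker would route the query to the
            # appropriate suppliers and merge their answers here.
            priority, ticket, query = heapq.heappop(self._queue)
            self.results[ticket] = "results for " + repr(query)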


The sudden potential for intermediaries, brokers and other go-between services seems quite ironic, as only a few years ago media and research reports told us that the Net would spell the end of such "middlemen" and intermediary services. When all the information and services you could possibly want are just a click away on the Internet, who needs brokers?
Following in the footsteps of this wave of "disintermediation" came loads of software packages, cleverly labelled "(intelligent) agents", which would supposedly give people the power to fulfil every information need all by themselves. Not very surprisingly, these "agent" applications were not able to meet the expectations users had built up about them and their capabilities.
At this moment, the time seems right to give both brokers and software agents a fair (second) chance. Current developments and research in this area show promising results, and may be expected to deliver applications that people will appreciate for the ways in which these empower them in their daily working routine. How exactly these applications will turn out is hard to say right now, but one thing is sure: they will not be technology-driven, but driven by the advantages and functionality they can offer to people (i.e. user-driven).
An example of an intermediary service that has grasped this idea well is the service the Amazon online bookstore offers through its Web site. The input for the service is the feedback customers offer to Amazon; they are rewarded for this with recommendations of books they might like, based on their preferences (a barter-like trade-off):

"Amazon [now] has a vast database of customers' preferences and buying patterns, tied to their e-mail and postal addresses. Publishers would kill for this stuff: they know practically nothing about their readers, and have no way of contacting them directly. That relationship has traditionally been monopolised by the bookshops, and even they rarely keep track of what individual customers like. Amazon offers publishers a more immediate link. "Ultimately, we're an information broker," says Mr Bezos. "On the left side we have lots of products, on the right side we have lots of customers. We're in the middle making the connections. The consequence is that we have two sets of customers: consumers looking for books and publishers looking for consumers. Readers find books or books find readers."
quote taken from "A Survey of Electronic Commerce" in The Economist

Amazon seems to have come up with the right concept at the right time: consumers greatly appreciate the recommendations they receive, and the contact they can have with other, like-minded people (Amazon offers newsgroup facilities as well). What's more, the service seems to have struck the perfect balance between customers giving up some of their privacy (information about the books they like and the books they buy) and receiving personalised services in return (such as book recommendations):

"New types of middlemen will arise, based on information about the consumer that they own. [That is] less about brokering based on controlling access to product, but more about being intimate with consumers. [This] is just the sort of intermediary that online media such as the Web will enable. The trick is to fashion services that offer consumers or businesses a shortcut.

Intermediaries exist in the real world for a reason, and those reasons don't go away on the Net."

quote taken from Inter@ctive Week


[1] In the remainder of this section, the term "information broker" will be used to denote both intermediaries and information brokers, to avoid endless repetition of both terms.
[2] And even if it were possible to do this, a lot of time and energy would be wasted checking sites that have not (or have hardly) changed since they were last visited. Who knows better when a site has changed, and what exactly has changed, than the supplier (or the supplier's agent) himself?
[3] If no direct matches can be found for a query on the basis of the available advertisements, specialised services (e.g. a thesaurus service) can be employed to get related terms and/or possible contexts. Another possible way to solve this problem is to contact the user (agent) with a request for more (related) terms and/or a (more specific) context.
[4] These two issues have been discussed previously in section 3.1.1.
[5] There will probably still be some checking on the intermediary side of the contents of such descriptions, to prevent sources from supplying misleading descriptions of their services; e.g. a description that mentions popular search terms (which are obviously not related to this specific source) so that it will be selected in a lot of queries. As brokers are concerned with meta-information only, it is quite feasible to let humans do (part of) this scanning process, as they can easily see through simple-minded tricks like using (lots of) popular search terms.
[6] The costs of issuing a query are not a real issue on the current Internet (as most content is free), but it may be expected that soon (micro-)amounts of money will need to be paid for such activities as issuing a query, consuming bandwidth, etcetera.
Chapter 3 - From Internet to Online Market Place, "Desperately Seeking: Helping Hands and Human Touch", by Björn Hermans