Information Retrieval Experiment
Edited by Karen Sparck Jones
Butterworth & Company

Chapter 6. Evaluation within the environment of an operating information service
F. Wilfrid Lancaster
Studies of user opinion have their value since it is important for the managers of information services
to know how users feel about these services. On the other hand, purely
subjective studies have obvious limitations. Generally speaking, the
satisfaction of a user with a service is a relative thing, based on what the user
knows and what he does not know. To take an obvious example, the recipient
of the results of a literature search may express satisfaction with these results
based upon what is known to him. Thus, if most of the references retrieved
are relevant to his interests, he may be quite happy with the results. But the
unknowns of the situation have not entered at all into this evaluation. The
requester might be much less satisfied with the results if he knew that a
substantial number of relevant references were missed by the search,
especially if some of these were, in some sense, 'more relevant' to his interest
than those retrieved. An objective evaluation would try to quantify the
results of the search: to determine how many relevant items were missed as
well as how many of the retrieved items were judged relevant.
An objective, quantitative approach is usually needed for diagnostic
evaluation purposes.
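In the standard terminology of retrieval evaluation, these two quantities are the recall and precision ratios. As a minimal sketch of the calculation in Python, assuming the set of relevant items and the set of retrieved items are both known (the document identifiers are invented):

    # Illustrative only: quantifying a search as recall and precision,
    # the two ratios described above. Document identifiers are invented.

    relevant = {"d1", "d2", "d3", "d4", "d5", "d6"}  # all items relevant to the request
    retrieved = {"d1", "d2", "d3", "d9", "d10"}      # all items returned by the search

    hits = relevant & retrieved  # relevant items actually retrieved

    recall = len(hits) / len(relevant)      # proportion of relevant items found
    precision = len(hits) / len(retrieved)  # proportion of the output that is relevant

    print(f"recall = {recall:.2f}")        # 0.50: half the relevant items were missed
    print(f"precision = {precision:.2f}")  # 0.60: three of the five retrieved are relevant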
6.1 Levels of evaluation
An information service can be studied at any of the following levels:
(1) cost;
(2) effectiveness;
(3) benefit;
(4) cost-effectiveness;
(5) cost-benefit;
(6) cost-performance-benefit,
a list which approximates to a sequence of increasing complexity.
The cost of an information service, obviously, refers to the resources
expended on that service. While, in theory, the cost analysis of a service may
seem quite straightforward, there is a danger of overlooking tangible
or intangible costs. The most common pitfall is neglecting the costs borne by
the user of the service. The fact that a user may not be required to pay out
money for the service does not mean that there is no user cost involved.
Clearly, the time and effort of the user is a cost that must be charged against
the service in any realistic cost analysis. The user of a literature searching
service may spend one hour of his time in making his information needs
known to the service and another hour in examining and evaluating the
results of the search. Allowing for overheads, this time could be worth, say,
$80 in an industrial organization. To ignore this user's time would lead to a
completely distorted picture of the cost or the cost-effectiveness of the
literature search within the complete institutional environment.
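As a sketch of that arithmetic (the hourly rate and overhead loading below are assumed figures, chosen only so that the total matches the $80 mentioned above):

    # Illustrative cost sketch: charging the user's time against the service.
    # The rate and overhead multiplier are assumptions, not figures from the text.

    hours_stating_need = 1.0  # hour spent making the information need known
    hours_examining = 1.0     # hour spent examining and evaluating the results
    hourly_rate = 25.0        # assumed base cost of the user's time, dollars per hour
    overhead_loading = 1.6    # assumed multiplier to allow for overheads

    user_cost = (hours_stating_need + hours_examining) * hourly_rate * overhead_loading
    print(f"user time charged against the search: ${user_cost:.2f}")  # $80.00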
The effectiveness of an information service is the extent to which the needs
of the users are satisfied by the service. An evaluation of effectiveness
attempts to determine satisfaction level. It should preferably be objective and
quantitative, expressed in such terms as '80 per cent of the document delivery
needs of users are satisfied' or '60 per cent of factual questions are answered
completely and correctly'.
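Such percentages might be tallied from evaluation records along the following lines (a sketch; all counts are invented):

    # Illustrative effectiveness tally, expressed in the terms quoted above.
    # All counts are invented for the example.

    delivery_needs = 200      # document delivery needs recorded
    delivery_satisfied = 160  # of those, the number satisfied
    factual_questions = 50    # factual questions received
    answered_correctly = 30   # of those, answered completely and correctly

    print(f"{100 * delivery_satisfied / delivery_needs:.0f} per cent "
          f"of document delivery needs satisfied")                        # 80 per cent
    print(f"{100 * answered_correctly / factual_questions:.0f} per cent "
          f"of factual questions answered completely and correctly")      # 60 per cent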