Readers may want to attempt to reproduce the graphs. If the graphs are impressionistic rather
than exact, and have no accompanying tables, that is not possible.
The traditional method of presenting experiments is not a chronological
narrative. It may seem to put the cart before the horse, in that the true
purpose of the experiment may not have been clearly defined until after some
initial 'messing around'. This can be indicated in the 'Background' section,
but a reader should not be subjected to long personal histories. The important
questions to the reader and to the discipline are:
What was the problem?
How was it solved?
What is the solution or conclusion?
In presenting results and conclusions, the experimenter must be careful to
avoid exaggerated claims. It is difficult not to have a personal interest in
confirming a particular hypothesis. However, this tendency must continually
be restrained and objectivity sought, particularly in evaluating results.
Nothing should be claimed that could not be verified by an independent
investigator. On the other hand, the investigator should not neglect to point
out results that are interesting or unusual, though not adequately tested.
These frequently provide the seeds for future investigations.
To summarize, the presentation of results must maintain a delicate balance
between completeness and conciseness. Previous reports which seem
particularly interesting and comprehensible should be studied as models for
the presentation of results.
This paper has given some guidelines and practical suggestions for
investigators embarking on an information retrieval experiment. Some of
the recommendations may be questioned by others in the field. Some are
based on the author's personal experiences or the experiences of her students.
The model has been the prevailing practice among those the author considers
to be serious investigators in the social, biological, and physical sciences.
Information retrieval experiments must meet the same criteria if information
science is to become a respectable area of scientific inquiry.