<DOC> 
<DOCNO> IRE </DOCNO>         
<TITLE> Information Retrieval Experiment </TITLE>         
<SUBTITLE> Opportunities for testing with online systems </SUBTITLE>         
<TYPE> chapter </TYPE>         
<PAGE CHAPTER="7" NUMBER="132">                   
<AUTHOR1> Elizabeth D. Barraclough </AUTHOR1>  
<PUBLISHER> Butterworth & Company </PUBLISHER> 
<EDITOR1> Karen Sparck Jones </EDITOR1> 
<COPYRIGHT MTH="" DAY="" YEAR="1981" BY="Butterworth & Company">   
All rights reserved.  No part of this publication may be reproduced 
or transmitted in any form or by any means, including photocopying 
and recording, without the written permission of the copyright holder, 
application for which should be addressed to the Publishers.  Such 
written permission must also be obtained before any part of this 
publication is stored in a retrieval system of any nature. 
</COPYRIGHT> 
<BODY> 

132  Opportunities for testing with online systems

considerable detail by collecting all the information from a terminal session,
both the input from the user and the responses from the system. Agreement
must, of course, be obtained from the user to take part in the experiment
before observing his behaviour in this way. Suppression of identifying
information should also take place. Having once understood that the
experiment is going on, the user ceases to be aware that his interaction with
the system is being investigated. It is thus possible to get a completely
unbiased user view of the system.
   Collection of logged data gives a very clear picture of what the user does
with the system; in some cases why he does it is also obvious, e.g. the
correction of mistakes. However, the same difficulty that exists in
determining the user's information need from the search statements applies to
the analysis of the logged information, and some extra information from the
user is necessary.
   In addition to the collection of complete sessions which can be subsequently
analysed, statistics of overall use of the system can be derived and
investigations made of the usage of commands: for example, which commands are
misused, which tend to be repeated, and which are hardly ever used.
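A minimal sketch of how such command-usage statistics might be derived from logged sessions. The log format, command names, and counting scheme here are assumptions for illustration, not taken from any system described in the chapter:

```python
from collections import Counter

# Hypothetical session log: one entry per user input, command keyword first.
session_log = [
    "FIND information retrieval",
    "FIND information AND retrieval",   # command repeated by the user
    "DISPLAY 1-5",
    "EXPND retrieval",                  # misused: misspelt EXPAND
    "EXPAND retrieval",
    "PRINT 1-3",
]

# Count how often each command keyword occurs across the session.
usage = Counter(line.split()[0] for line in session_log)

# Assumed command vocabulary of the hypothetical system.
known_commands = {"FIND", "DISPLAY", "EXPAND", "PRINT", "LOGOFF"}

# Keywords not in the system's vocabulary suggest misuse or typing errors.
misused = [cmd for cmd in usage if cmd not in known_commands]

# Commands the system offers but the user never invoked.
unused = known_commands - set(usage)

print(usage.most_common())  # which commands tend to be repeated
print(misused)              # which are misused
print(sorted(unused))       # which are hardly ever used
```

Real logged sessions would of course also carry timestamps and system responses, but even keyword counts of this kind answer the questions the text raises.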


7.7 System aspects that warrant testing
The overall testing of the performance of the system can be done by treating
it merely as a `black box' with input at one end and bibliographical references
at the other. However, if methods for improving the system are being sought,
then the separate functions provided within the system must be considered.
The majority of operational systems can be thought of as having three
functional parts.
(1) Search formulation and checking.
(2) Index search.
(3) Database search and print.
   The methods used in each section can vary from system to system with
perhaps the greatest differences being in the search formulation area. Briefly,
the functions are as follows.
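The three functional parts can be sketched schematically as follows. The index and database contents, function names, and the simple AND-intersection search are illustrative assumptions, not a description of any particular operational system:

```python
# Toy inverted index: term -> set of document numbers (assumed data).
index = {
    "retrieval": {1, 2, 4},
    "online": {2, 3},
    "testing": {1, 3},
}

# Toy database: document number -> bibliographical reference (placeholders).
database = {n: f"reference for document {n}" for n in range(1, 5)}

def formulate(terms):
    """(1) Search formulation and checking: verify each term exists in
    the dictionary/index and report its count of occurrences."""
    return {t: len(index[t]) for t in terms if t in index}

def index_search(terms):
    """(2) Index search: intersect the postings of the checked terms
    (a simple AND combination, assumed here for illustration)."""
    postings = [index[t] for t in terms if t in index]
    if not postings:
        return set()
    result = postings[0].copy()
    for p in postings[1:]:
        result &= p
    return result

def database_print(doc_numbers):
    """(3) Database search and print: fetch and output the
    bibliographical references for the retrieved documents."""
    return [database[n] for n in sorted(doc_numbers)]

# The 'black box' view is then just the composition of the three parts:
checked = formulate(["retrieval", "testing"])
refs = database_print(index_search(checked))
```

Treating the system as the composition `database_print(index_search(formulate(...)))` is the black-box view; testing each function separately is what the text recommends when improvements are sought.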

Search formulation and checking
The user with his intermediary approaches the system with an information
need which can be merely thoughts in the user's head or, more likely, a
written statement of the information need, or a detailed search request with
appropriate terms already selected. The function of the system obviously
varies depending upon the amount of work that has been done beforehand.
The minimum work that the system will do is to check that terms exist in the
dictionary or index and provide a count of the number of occurrences. At the
other extreme the user can try terms to see if they exist and look for related
terms where the relation can be alphabetical proximity or a subject
relationship; the user can then select terms suggested by the system rather
than having to think of them ab initio. Thus in this area the user can get
considerable assistance if he is prepared to pay for it. Some intelligence is
built into most systems in that suffixes and prefixes can be ignored if


</BODY>                  
</PAGE>                  
</DOC>