NIST Special Publication 500-207: The First Text REtrieval Conference (TREC-1)
Appendix C: System Features
National Institute of Standards and Technology
Donna K. Harman
2. ranking time (total CPU seconds to sort document list)
B. Which methods best describe your machine searching methods?
6. fuzzy logic (include your definition)
10. other (describe): Software uses a fuzzy AND and a proximity measure to rank documents.
C. What factors are included in your ranking?
1. term frequency
5. position in document
7. proximity of terms
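The appendix does not define how the fuzzy AND and proximity measure are computed, so the following is only a hypothetical sketch of how such a ranking could combine the factors listed above (term frequency and proximity of terms). All function names, weights, and formulas here are illustrative assumptions, not the AMR implementation.

```python
import itertools

def fuzzy_and(weights):
    """Fuzzy AND: a document scores only as well as its weakest query term."""
    return min(weights) if weights else 0.0

def proximity_bonus(positions_by_term, window=10):
    """Reward documents whose query terms occur close together.
    Finds the smallest span covering one occurrence of each term."""
    best_span = None
    for combo in itertools.product(*positions_by_term):
        span = max(combo) - min(combo)
        if best_span is None or span < best_span:
            best_span = span
    if best_span is None:
        return 0.0
    return 1.0 / (1.0 + best_span / window)

def rank(docs, query_terms):
    """Score each document by the fuzzy AND of per-term tf weights,
    scaled by a proximity factor, then sort descending."""
    scored = []
    for doc_id, text in docs.items():
        tokens = text.lower().split()
        weights, positions = [], []
        for term in query_terms:
            pos = [i for i, t in enumerate(tokens) if t == term]
            if not pos:                       # a missing term zeroes the fuzzy AND
                weights, positions = [], []
                break
            weights.append(len(pos) / len(tokens))  # simple tf weight (assumed)
            positions.append(pos)
        score = fuzzy_and(weights) * proximity_bonus(positions) if weights else 0.0
        scored.append((doc_id, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

Under this sketch, a document missing any query term scores zero (the AND semantics), and among documents containing all terms, those with the terms in a tighter window rank higher.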
IV. What machine did you conduct the TREC experiment on?
How much RAM did it have?
What was the clock rate of the CPU?
The experiments were run on an HP 486/33 with 8 Mbytes under SCO UNIX. The CD-ROM
drive was accessed via NFS.
V. Some systems are research prototypes and others are commercial.
To help compare these systems:
1. How much "software engineering" went into the development of your system?
2. Given appropriate resources, could your system be made to run faster? By how much
(estimate)?
3. What features is your system missing that it would benefit from having?
AMR is commercial-strength software developed by Computer Power Group. Its
commercialisation software engineering phase took some three person-years.