NIST Special Publication 500-215: The Second Text REtrieval Conference (TREC-2)
Appendix A: TREC-2 Results
National Institute of Standards and Technology
D. K. Harman

Results - New York University
Summary Statistics

Run Number: nyuir2 - category B, automatic
Num of Queries: 50
Total number of documents over all queries
  Retrieved: 50000
  Relevant:  2064
  Rel_ret:   1610
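The three totals are straight tallies over the 50 queries. As a consistency check, Rel_ret / Retrieved = 1610 / 50000 = 0.0322, which reappears below as the precision at 1000 documents. A minimal Python sketch of the tallies (not the official TREC evaluation code; `run` and `qrels` are hypothetical mappings from query id to the ranked document ids and to the set of relevant document ids, respectively):

    def summary_stats(run, qrels):
        # Total documents retrieved: at TREC-2, up to 1000 per query.
        retrieved = sum(len(ranking) for ranking in run.values())
        # Total documents judged relevant across all queries.
        relevant = sum(len(rels) for rels in qrels.values())
        # Relevant documents that were actually retrieved.
        rel_ret = sum(len(set(ranking) & qrels[qid])
                      for qid, ranking in run.items())
        return retrieved, relevant, rel_ret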
Recall Level Averages

Recall   Precision
0.00     0.6435
0.10     0.4610
0.20     0.3105
0.30     0.3031
0.40     0.2631
0.50     0.2282
0.60     0.1934
0.70     0.1542
0.80     0.1002
0.90     0.0456
1.00     0.0186

Average precision over all relevant docs
  non-interpolated   0.2331
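Each precision figure is the interpolated precision at that recall level, averaged over the 50 queries; the non-interpolated value is the mean of the per-query average precision over all relevant documents. A minimal per-query sketch in Python (assumed names, not the official trec_eval source):

    def eleven_point_precisions(ranking, rels):
        # Precision and recall after each retrieved document.
        points, hits = [], 0
        for rank, doc in enumerate(ranking, start=1):
            if doc in rels:
                hits += 1
            points.append((hits / len(rels), hits / rank))
        # Interpolated precision at recall r is the maximum precision
        # observed at any recall >= r.
        return [max((p for r, p in points if r >= level / 10), default=0.0)
                for level in range(11)]

    def average_precision(ranking, rels):
        # Mean of the precision values at the ranks of the relevant
        # documents; relevant documents never retrieved contribute zero.
        hits, total = 0, 0.0
        for rank, doc in enumerate(ranking, start=1):
            if doc in rels:
                hits += 1
                total += hits / rank
        return total / len(rels)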
Document Level Averages

              Precision
At    5 docs  0.4280
At   10 docs  0.4000
At   15 docs  0.3613
At   20 docs  0.3260
At   30 docs  0.2740
At  100 docs  0.1108
At  200 docs  0.1018
At  500 docs  0.0515
At 1000 docs  0.0322

R-Precision (precision after R docs retrieved, where R is the number of relevant documents)
  Exact   0.2513
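Each entry is the mean over the 50 queries of precision after a fixed number of retrieved documents; R-precision instead cuts off each query's ranking at that query's own number of relevant documents. A minimal per-query sketch in Python (assumed names, not the official evaluation code):

    def precision_at(ranking, rels, k):
        # Fraction of the first k retrieved documents that are relevant.
        return sum(doc in rels for doc in ranking[:k]) / k

    def r_precision(ranking, rels):
        # Precision after R documents, R = number of relevant documents.
        return precision_at(ranking, rels, len(rels))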
[Figure: Recall-Precision Curve. Precision (y-axis) vs. Recall (x-axis).]
Fallout Level Averages

Recall
0.2600
0.3484
0.3946
0.4336
0.4551
0.4858
0.5033
0.5126
0.5246
0.5355
0.5486
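Fallout measures the fraction of the collection's non-relevant documents that the system has retrieved; the fallout-recall table reads recall off at fixed fallout levels. A minimal per-query sketch in Python (assumed names; `coll_size` is the hypothetical collection size):

    def recall_at_fallout(ranking, rels, coll_size, level):
        # Recall achieved before fallout first exceeds `level`.
        nonrel_total = coll_size - len(rels)
        hits = nonrel = 0
        recall = 0.0
        for doc in ranking:
            if doc in rels:
                hits += 1
            else:
                nonrel += 1
                if nonrel / nonrel_total > level:
                    break
            recall = hits / len(rels)
        return recall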
[Figure: Fallout-Recall Curve, drawn on normal-deviate scales. Recall (y-axis) vs. Fallout x 100 (x-axis, 0 to 0.2).]