TIPSTER Text Program: A multi-agency, multi-contractor program
TABLE OF CONTENTS
TIPSTER Technology Overview
TIPSTER Related Research
Phase III Overview
Reinvention Laboratory Project
Generic Information Retrieval
Generic Text Extraction
12 Month Workshop Notes
Text Retrieval Conference
TREC-7 Participation Call
Multilingual Entity Task
Other Related Projects
Document Downloading
Request for Change (RFC)
Glossary of Terms
TIPSTER Source Information
Date created: Monday, 31-Jul-00
Message Understanding Conferences (MUC)
MUC performs evaluations of information extraction systems according to pre-established tasks. To date there have been six conferences.
The latest in a series of natural language processing system evaluations was concluded in October 1995 and was the topic of the Sixth Message Understanding Conference (MUC-6) in November, co-chaired by Ralph Grishman (NYU) and Beth Sundheim (SPAWAR). Participants were invited to enter their systems in as many as four different task-oriented evaluations. The Named Entity and Coreference tasks entailed Standard Generalized Markup Language (SGML) annotation of texts and were conducted for the first time. The other two tasks, Template Element and Scenario Template, were information extraction tasks that followed on from previous MUC evaluations. All except the Scenario Template task are defined independently of any particular domain. The evolution and design of the MUC-6 evaluation are described in the conference proceedings (in preparation).
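The SGML annotation style used for the Named Entity task can be illustrated with a small sketch. The ENAMEX tag and its TYPE attribute follow the MUC named-entity conventions; the sentence and entities below are invented for illustration:

```python
import re

# Hypothetical sentence annotated in the MUC SGML style, where named
# entities are wrapped in ENAMEX tags with a TYPE attribute.
annotated = ('<ENAMEX TYPE="PERSON">John Smith</ENAMEX> joined '
             '<ENAMEX TYPE="ORGANIZATION">Acme Corp.</ENAMEX> in '
             '<ENAMEX TYPE="LOCATION">New York</ENAMEX>.')

# Recover each annotated entity as a (type, text) pair.
ENAMEX = re.compile(r'<ENAMEX TYPE="([^"]+)">([^<]+)</ENAMEX>')
entities = ENAMEX.findall(annotated)
for etype, text in entities:
    print(etype, text)
```

A system participating in the task would be scored on how closely its own ENAMEX markup of raw text matched a hand-annotated answer key.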
Testing was conducted using Wall Street Journal texts provided by the Linguistic Data Consortium. The test set for the two information extraction tasks consisted of 100 articles. A subset of 30 articles was selected for use as the test set for the two SGML annotation tasks. The evaluation began with the distribution of the scenario definition and training data at the beginning of September 1995. The test data was distributed four weeks later, with results due by the end of the week. Sixteen sites participated in the evaluation; 15 systems were evaluated for the Named Entity task, 7 for Coreference, 11 for Template Element, and 9 for Scenario Template.
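System results in these evaluations were compared against human-prepared answer keys and summarized with recall and precision. A minimal sketch of that comparison, using toy entity sets (the real MUC scorer also handles alignment and partial credit, which this omits):

```python
# Toy answer key and toy system response, each a set of (type, text) entities.
key = {("PERSON", "John Smith"), ("ORGANIZATION", "Acme Corp."),
       ("LOCATION", "New York")}
response = {("PERSON", "John Smith"), ("ORGANIZATION", "Acme"),
            ("LOCATION", "New York")}

correct = len(key & response)        # entities the system got exactly right
recall = correct / len(key)          # fraction of key entities found
precision = correct / len(response)  # fraction of responses that are correct
f_measure = 2 * precision * recall / (precision + recall)

print(f"R={recall:.2f} P={precision:.2f} F={f_measure:.2f}")
```

Here the system misses one key entity ("Acme Corp.") and produces one spurious one ("Acme"), so recall and precision are both 2/3.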
The variety of tasks that were designed for MUC-6 reflects the interests of both participants and sponsors in assessing and furthering research that can satisfy some urgent text processing needs in the very near term and can lead to solutions to more challenging text understanding problems in the longer term. The hard work carried out by the planning committee over nearly two years led to extremely interesting and useful evaluation results. The tasks designed for MUC-6 were Named Entity, Coreference, Template Element, and Scenario Template.
MUC-7 will be held in 1998, with Government coordination led by Elaine Marsh of the Naval Research Laboratory.