TIPSTER Text Program: A multi-agency, multi-contractor program


Message Understanding Conferences (MUC)

MUC performs evaluations of information extraction systems according to pre-established tasks. To date there have been six conferences.

MUC-6

The latest in a series of natural language processing system evaluations was concluded in October 1995 and was the topic of the Sixth Message Understanding Conference (MUC-6) in November, co-chaired by Ralph Grishman (NYU) and Beth Sundheim (SPAWAR). Participants were invited to enter their systems in as many as four different task-oriented evaluations. The Named Entity and Coreference tasks entailed Standard Generalized Markup Language (SGML) annotation of texts and were conducted for the first time. The other two tasks, Template Element and Scenario Template, were information extraction tasks that followed on from previous MUC evaluations. All except the Scenario Template task are defined independently of any particular domain. The evolution and design of the MUC-6 evaluation are described in the conference proceedings (in preparation).
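
As an illustration of the SGML-style markup used in the Named Entity task, the sketch below shows a sentence tagged with ENAMEX, TIMEX, and NUMEX elements and a minimal Python routine that recovers the tagged spans. The example sentence and the regular expression are illustrative assumptions, not material from the MUC-6 guidelines or test data.

    import re

    # A sentence marked up in the SGML style used for the Named Entity task
    # (ENAMEX for names, TIMEX for time expressions, NUMEX for numeric
    # expressions); the sentence itself is invented for illustration.
    annotated = (
        '<ENAMEX TYPE="PERSON">John Smith</ENAMEX> joined '
        '<ENAMEX TYPE="ORGANIZATION">Acme Corp.</ENAMEX> in '
        '<ENAMEX TYPE="LOCATION">New York</ENAMEX> on '
        '<TIMEX TYPE="DATE">June 3, 1995</TIMEX> at a salary of '
        '<NUMEX TYPE="MONEY">$250,000</NUMEX>.'
    )

    # Pull out (tag, type, text) triples from the annotated string.
    pattern = re.compile(r'<(ENAMEX|TIMEX|NUMEX) TYPE="([^"]+)">(.*?)</\1>')
    for tag, entity_type, text in pattern.findall(annotated):
        print(f"{tag:6s} {entity_type:12s} {text}")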

Testing was conducted using Wall Street Journal texts provided by the Linguistic Data Consortium. The test set for the two information extraction tasks consisted of 100 articles; a subset of 30 articles was selected as the test set for the two SGML annotation tasks. The evaluation began with the distribution of the scenario definition and training data at the beginning of September 1995. The test data was distributed four weeks later, with results due by the end of that week. Sixteen sites participated in the evaluation; 15 systems were evaluated for the Named Entity task, 7 for Coreference, 11 for Template Element, and 9 for Scenario Template.

The variety of tasks that were designed for MUC-6 reflects the interests of both participants and sponsors in assessing and furthering research that can satisfy some urgent text processing needs in the very near term and can lead to solutions to more challenging text understanding problems in the longer term. The hard work carried out by the planning committee over nearly two years led to extremely interesting and useful evaluation results. The tasks designed for MUC-6 were as follows:

  • Identification of names, which constitutes a large portion of the Named Entity task and a critical portion of the Template Element task, has proven to be largely a solved problem. The majority of systems evaluated on Named Entity had recall and precision over 90%; the highest-scoring system had a recall of 96% and a precision of 97%, which was judged to be comparable to human performance on the task.
  • Recognition of alternative ways of identifying an entity constitutes a large portion of the Coreference task and another critical portion of the Template Element task; it has been shown to represent only a modest challenge when the referents are names or pronouns. All but two of the Template Element systems posted combined recall-precision (F-measure) scores in the 70-80% range; four of the systems were able to achieve recall in the 70-80% range while maintaining precision in the 80-90% range. The top-scoring system had 75% recall and 86% precision (a worked F-measure computation for this result appears after this list). Five of the seven Coreference systems were in the 51%-63% recall range and 62%-72% precision range.
  • The Scenario Template task concerned changes in corporate executive management personnel; the extracted information includes answers to the basic questions of "Who is creating or filling what vacancy at what organization?". The mix of challenges that the task represents -- extraction of domain-specific events and relations along with the pertinent entities (template elements) -- yielded levels of performance that are similar to those achieved in previous MUCs (40%-50% recall, 60%-70% precision), but with a much shorter time required for porting. The highest Scenario Template performance overall was 47% recall and 70% precision.
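
The combined scores quoted above use the F-measure, which folds recall and precision into a single number. As a worked example, and assuming the standard balanced form of the measure (equal weight on recall and precision), the short Python sketch below computes the score for the top Template Element result of 75% recall and 86% precision; the helper function is illustrative, not the official MUC scoring software.

    def f_measure(recall, precision, beta=1.0):
        """Combined recall-precision score; beta=1 weights the two equally."""
        if recall == 0.0 and precision == 0.0:
            return 0.0
        return (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)

    # Top-scoring Template Element system: 75% recall, 86% precision.
    print(f"F = {f_measure(0.75, 0.86):.1%}")  # prints roughly 80%

In the same spirit, the kind of record a Scenario Template system produces for the management succession scenario can be pictured as a small structured object answering the who / what vacancy / what organization questions. The field names below are invented for illustration and are not the official MUC-6 slot names.

    # A hypothetical extracted record for the management succession scenario;
    # the field names are illustrative only, not the official MUC-6 slots.
    succession_event = {
        "organization": "Acme Corp.",        # at what organization
        "post": "chief financial officer",   # what vacancy
        "person_in": "John Smith",           # who is filling it
        "person_out": "Jane Doe",            # who is leaving, creating the vacancy
    }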

MUC-7

MUC-7 will be held in 1998, with Government coordination led by Elaine Marsh of the Naval Research Laboratory.

MUC-6 Participants

  • BBN Systems and Technology Corporation
  • University of Durham (Durham, UK)
  • Knight-Ridder
  • Lockheed-Martin
  • University of Manitoba
  • University of Massachusetts, Amherst
  • The MITRE Corporation
  • New Mexico State University CRL, Las Cruces
  • New York University
  • University of Pennsylvania
  • SAIC
  • University of Sheffield
  • SRA
  • SRI
  • Sterling Software
  • Wayne State University

MUC-5 Participants

  • BBN Systems and Technologies, Inc.
  • Boston University
  • Brandeis University
  • Carnegie Mellon University
  • GE Research and Development
  • Hughes Research Laboratories
  • Institute for Defense Analyses
  • Language Systems, Inc.
  • Martin Marietta
  • NCCOSC
  • NEC Corporation
  • New Mexico State University / CRL
  • New York University
  • Northeastern University
  • PRC, Inc.
  • Science Applications International Corporation
  • Systems Research and Applications
  • SRI International
  • The MITRE Corp.
  • TRW Systems Development Division
  • U.S. Department of Defense
  • Unisys Corporation
  • University of Connecticut
  • University of Massachusetts
  • University of Manitoba
  • University of Michigan
  • University of Southern California
  • University of Sussex
