Since early 1990, the MUC evaluations have funded the development of metrics and statistical algorithms to support government evaluations of emerging information extraction technologies. In the mid-nineties, the MUC evaluations began to provide prepared data and task definitions, in addition to fully automated scoring software for measuring machine and human performance. The tasks grew from the production of a single database of events found in newswire articles from one source to the production of multiple databases of increasingly complex information extracted from multiple news sources in multiple languages. The databases now include named entities, multilingual named entities, attributes of those entities, facts about relationships between entities, and events in which the entities participated.
The results of these evaluations were reported at conferences during the 1990s, where developers and evaluators shared their findings and government specialists described their needs. These conferences were called "Message Understanding Conferences (MUC)" as a result of the use of such technology to process military messages. The multilingual portion was known as the "Multilingual Entity Task (MET)". The proceedings of these conferences have all been published, the last of which appears on this website. All previous proceedings were published in bound form by Morgan Kaufmann Publishers.