These MER results are preliminary, are provided to facilitate analysis prior to the TRECVID workshop, and are not to be released outside the TRECVID community.

 

Preliminary CMU Results of the MER 2014 Evaluation


Date: October 10, 2014

Caveat Emptor

The results below are for a single participant in the TRECVID MER Evaluation.  The results are provided to facilitate analysis of MER technologies prior to the TRECVID workshop.  NIST is not providing a cross-participant set of results at this time because results are not directly comparable across teams.

MER Annotation Questions

The tables below are preliminary and likely to change based on NIST's continued analysis.

The outputs of the MER systems were evaluated on two levels: the duration of the key evidence snippets, and questions posed to judges about the query used to generate the recounting, the extracted evidence, and how well the MER output convinced the judge that the video contained the event.

Nominally 5 judges per video per team were asked to answer the following questions, posed as Likert-style statements (a sketch of tallying such responses into percentage distributions follows the list).

Event Query Quality:
    Likert text: "This seems like a concise and logical query that would be created for the event."
    Scope: Answered for each judged event query

Evidence Quality:
    Likert text: "The evidence presented convinces me that the video contains the [Event name] event."
    [Event Name]: The name of the MED event
    Scope: Answered for each judged recounting

Tag Quality:
    Likert text: "The evidence presented convinces me that the video contains the [Event name] event."
    [Event Name]: The name of the MED event
    Scope: Answered for each judged recounting

Temporal Evidence Localization:
    Likert text: "The system chose the right window of time to present the evidence"
    Scope: Answered for snippets containing 2 or more frames.

Spatial Evidence Localization:
    Likert text: "The system chose the right bounding box(es) to isolate the evidence"
    Scope: Answered for snippets that include bounding boxes
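
To make the reporting concrete, the sketch below shows one way a single question's Likert responses could be tallied into the percentage distributions given in the Results section. The response strings, the flat list-of-responses layout, and the inclusion of "Not Available" (used only for the localization questions) are illustrative assumptions, not NIST's scoring code.

    from collections import Counter

    # Likert levels as they appear in the results tables below; "Not Available"
    # applies only to the temporal and spatial localization questions.
    LEVELS = ["Strongly Disagree", "Disagree", "Neutral", "Agree",
              "Strongly Agree", "Not Available"]

    def likert_distribution(responses):
        """Tally one question's responses into {level: percent of all responses}."""
        counts = Counter(responses)
        total = len(responses)
        return {level: 100.0 * counts[level] / total for level in LEVELS}

    # Example: five hypothetical judgments of the Event Query Quality statement.
    answers = ["Agree", "Agree", "Neutral", "Strongly Agree", "Disagree"]
    print(likert_distribution(answers))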

Results


Recounting Percent (as a Percent of Original Video Duration)

  CMU     8%
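
A minimal sketch of how this figure can be computed follows, assuming each recounting is a list of key evidence snippets with start and end times in seconds and that per-video percentages are then averaged; the snippet format and the averaging step are assumptions, not the official metric definition.

    # Total duration of a video's key evidence snippets expressed as a percent
    # of the original video duration. The (start_s, end_s) snippet format and
    # the per-video averaging are assumptions for illustration only.

    def recounting_percent(snippets, video_duration_s):
        """Percent of the original video covered by the selected snippets."""
        snippet_total = sum(end - start for start, end in snippets)
        return 100.0 * snippet_total / video_duration_s

    def mean_recounting_percent(per_video):
        """Average the per-video percentages over all judged videos."""
        return sum(recounting_percent(s, d) for s, d in per_video) / len(per_video)

    # Example: two 10-second snippets from a 4-minute (240 s) video -> ~8.3%.
    print(recounting_percent([(30.0, 40.0), (95.0, 105.0)], 240.0))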


Event Query Quality
                     CMU
Strongly Disagree     7%
Disagree             15%
Neutral              18%
Agree                48%
Strongly Agree       12%


Evidence Quality
                     CMU
Strongly Disagree    11%
Disagree             15%
Neutral              17%
Agree                34%
Strongly Agree       23%


Tag Quality
                     CMU
Strongly Disagree    34%
Disagree             32%
Neutral               9%
Agree                12%
Strongly Agree       13%


Temporal Evidence Localization
                     CMU
Strongly Disagree    13%
Disagree             12%
Neutral              16%
Agree                28%
Strongly Agree       30%
Not Available         0%


Spatial Evidence Localization
                     CMU
Strongly Disagree     0%
Disagree              0%
Neutral               0%
Agree                 0%
Strongly Agree        0%
Not Available       100%

* - Debugged MER submissions

History: