These MER results are preliminary results provided to facilitate analysis prior to the TRECVID workshop and not to be released outside the TRECVID community

 

Preliminary VIREO Results of the MER 2014 Evaluation


Date: October 10, 2014

Caveat Emptor

The results below are for a single participant in the TRECVID MER Evaluation.  The results are provided to facilitate analysis of MER technologies prior to the TRECVID workshop.  NIST is not providing a cross-participant set of results at this time because results are not directly comparable across teams.

MER Annotation Questions

The tables below are preliminary and likely to change based on NIST's continued analysis.

The outputs of each MER system were evaluated on two levels: the duration of the key evidence snippets, and questions posed to judges about the query used to generate the recounting, the extracted evidence, and how well the MER output convinced the judge that the video contained the event.

Nominally, five judges per video per team were asked to answer the following questions, posed as Likert-style statements.

Event Query Quality:
    Likert text: "This seems like a concise and logical query that would be created for the event."
    Scope: Answered for each judged event query

Evidence Quality:
    Likert text: "The evidence presented convinces me that the video contains the [Event name] event."
    [Event Name]: The name of the MED event
    Scope: Answered for each judged recounting

Tag Quality:
    Likert text: "The evidence presented convinces me that the video contains the [Event name] event."
    [Event Name]: The name of the MED event
    Scope: Answered for each judged recounting

Temporal Evidence Localization:
    Likert text: "The system chose the right window of time to present the evidence"
    Scope: Answered for snippets containing 2 or more frames.

Spatial Evidence Localization:
    Likert text: "The system chose the right bounding box(es) to isolate the evidence"
    Scope: Answered for snippets that include bounding boxes
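The per-question results below report the percentage of judge responses at each Likert level. A minimal sketch of that tabulation (the response data is hypothetical; this is not the official NIST scoring code):

```python
from collections import Counter

LEVELS = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

def likert_percentages(responses):
    """Tabulate judge responses into a percentage at each Likert level."""
    counts = Counter(responses)
    total = len(responses)
    return {level: round(100 * counts[level] / total) for level in LEVELS}

# Hypothetical judge responses for one question
responses = ["Agree", "Agree", "Strongly Agree", "Neutral", "Disagree"]
print(likert_percentages(responses))
```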

Results


Recounting Duration (as a Percent of Original Video Duration)

VIREO

8%
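The recounting duration figure is presumably the total length of the key evidence snippets relative to the full video length. A sketch of that computation under this assumed definition (snippet durations are hypothetical):

```python
def recounting_percent(snippet_durations, video_duration):
    """Total snippet time as a percent of the original video duration
    (assumed definition; not the official NIST measure)."""
    return 100 * sum(snippet_durations) / video_duration

# Hypothetical: three snippets totaling 9.6 s from a 120 s video
print(recounting_percent([3.2, 4.0, 2.4], 120.0))  # -> 8.0
```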


Event Query Quality
  VIREO
Strongly Disagree 1%
Disagree 15%
Neutral 18%
Agree 37%
Strongly Agree 28%


Evidence Quality
  VIREO
Strongly Disagree 10%
Disagree 14%
Neutral 16%
Agree 33%
Strongly Agree 27%


Tag Quality
  VIREO
Strongly Disagree 26%
Disagree 23%
Neutral 11%
Agree 17%
Strongly Agree 22%


Temporal Evidence Localization
  VIREO
Strongly Disagree 22%
Disagree 14%
Neutral 16%
Agree 23%
Strongly Agree 25%
Not Available 0%


Spatial Evidence Localization
  VIREO
Strongly Disagree 0%
Disagree 0%
Neutral 0%
Agree 0%
Strongly Agree 0%
Not Available 100%

* - Debugged MER submissions

History: