These MER results are preliminary and are provided to facilitate analysis prior to the TRECVID workshop; they are not to be released outside the TRECVID community

 

Preliminary BBNVISER Results of the MER 2014 Evaluation


Date: October 10, 2014

Caveat Emptor

The results below are for a single participant in the TRECVID MER Evaluation.  The results are provided to facilitate analysis of MER technologies prior to the TRECVID workshop.  NIST is not providing a cross-participant set of results at this time because results are not directly comparable across teams.

MER Annotation Questions

The tables below are preliminary and likely to change based on NIST's continued analysis.

The outputs of the MER systems were evaluated at two levels: the duration of the key evidence snippets, and a set of questions posed to judges about the query used to generate the recounting, the extracted evidence, and how well the MER output convinced the judge that the video contained the event.

Nominally, 5 judges per video per team were asked to answer the following questions, posed as Likert-style statements.

Event Query Quality:
    Likert text: "This seems like a concise and logical query that would be created for the event."
    Scope: Answered for each judged event query

Evidence Quality:
    Likert text: "The evidence presented convinces me that the video contains the [Event name] event."
    [Event Name]: The name of the MED event
    Scope: Answered for each judged recounting

Tag Quality:
    Likert text: "The evidence presented convinces me that the video contains the [Event name] event."
    [Event Name]: The name of the MED event
    Scope: Answered for each judged recounting

Temporal Evidence Localization:
    Likert text: "The system chose the right window of time to present the evidence"
    Scope: Answered for snippets containing 2 or more frames.

Spatial Evidence Localization:
    Likert text: "The system chose the right bounding box(es) to isolate the evidence"
    Scope: Answered for snippets that include bounding boxes
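
The percentage tables in the Results section below are simple tallies of these Likert responses over all judged items for the team. The short Python sketch below shows one way such a tally could be computed; the level names and the flat list-of-responses input are illustrative assumptions, not NIST's actual scoring code.

    from collections import Counter

    LIKERT_LEVELS = [
        "Strongly Disagree", "Disagree", "Neutral",
        "Agree", "Strongly Agree", "Not Available",
    ]

    def likert_distribution(responses):
        """Tally Likert responses into the percentage breakdown shown in the tables.

        responses: a flat list of level names, one per judged item
        (e.g., one per judged recounting for the Evidence Quality question).
        """
        counts = Counter(responses)
        total = len(responses)
        return {level: 100.0 * counts[level] / total for level in LIKERT_LEVELS}

    # Illustrative usage with made-up responses (not actual judgment data):
    example = ["Agree", "Agree", "Neutral", "Strongly Agree", "Disagree"]
    print(likert_distribution(example))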

Results


Recounting Percent (as a Percent of Original Video Duration)

  BBNVISER    43%
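
For context, the recounting percent figure above is nominally the total duration of the key evidence snippets a system returns, expressed as a percentage of the original video's duration. The sketch below illustrates that calculation under the assumption of non-overlapping snippets; how per-video percentages are aggregated across the test set (e.g., micro- vs. macro-averaged) is not specified here.

    def recounting_percent(snippet_durations_sec, video_duration_sec):
        """Total evidence-snippet duration as a percent of the original video duration.

        Assumes snippet durations do not overlap; overlap handling is not
        addressed in this sketch.
        """
        return 100.0 * sum(snippet_durations_sec) / video_duration_sec

    # Illustrative usage with made-up durations (not actual submission data):
    print(recounting_percent([4.0, 10.5, 6.0], 120.0))  # -> about 17.1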


Event Query Quality
                  BBNVISER
  Strongly Disagree     1%
  Disagree              7%
  Neutral              15%
  Agree                59%
  Strongly Agree       17%


Evidence Quality
                  BBNVISER
  Strongly Disagree     7%
  Disagree             13%
  Neutral              15%
  Agree                38%
  Strongly Agree       27%


Tag Quality
                  BBNVISER
  Strongly Disagree    22%
  Disagree             28%
  Neutral              15%
  Agree                17%
  Strongly Agree       18%


Temporal Evidence Localization
                  BBNVISER
  Strongly Disagree    18%
  Disagree             17%
  Neutral              22%
  Agree                24%
  Strongly Agree       19%
  Not Available         0%


Spatial Evidence Localization
                  BBNVISER
  Strongly Disagree  0.17%
  Disagree           0.26%
  Neutral            0.13%
  Agree              0.83%
  Strongly Agree     0.60%
  Not Available        98%

* - Debugged MER submissions

History: