
Ad-hoc Video Search (AVS)

Task Coordinators: Georges Quénot and George Awad

The Ad-hoc search task ended a 3-year cycle (2016-2018) whose goal was to model the end-user search use case: a user searching (using textual sentence queries) for segments of video containing persons, objects, activities, locations, etc., and combinations of these. While the Internet Archive (IACC.3) dataset was used from 2016 to 2018, starting in 2019 a new data collection based on Vimeo Creative Commons (V3C) will be adopted to support the task for at least 3 more years.

System Task

Given the test collection (V3C1), master shot boundary reference, and set of Ad-hoc queries (approx. 30 queries) released by NIST, return for each query a list of at most 1000 shot IDs from the test collection ranked according to their likelihood of containing the target query.

Starting in 2019 there will be two evaluation tasks:

1- Main Task:

Systems will be asked to return results for *New unseen* queries annually as follows:
30 New queries in 2019, where all 30 queries will be evaluated and scored
20 New queries in 2020, where all 20 queries will be evaluated and scored
20 New queries in 2021, where all 20 queries will be evaluated and scored

2- Progress Subtask:

Systems will be asked to return results for 20 Common (fixed) queries annually from 2019 to 2021. The evaluation schedule will be as follows:
Although systems will submit results for the 20 common queries, no evaluation of them will be conducted in 2019. NIST will just save the runs for subsequent years.
NIST will evaluate and score a subset of 10 of the 20 common queries submitted in 2019 AND 2020, to allow comparison of performance between the two years.
NIST will evaluate and score the other subset of 10 of the 20 common queries submitted in 2019, 2020 AND 2021, to allow comparison of performance across the three years.

Given the above schedule of query distribution from 2019 to 2021, in total systems should submit results for:
50 (30 New + 20 common) queries in 2019
40 (20 New + 20 common) queries in 2020
40 (20 New + 20 common) queries in 2021

Data Resources

  • Previous collaborative annotations:

    The results of past collaborative annotations on Sound and Vision as well as Internet Archive videos from 2007-2013 are available for use in system development. No new community annotation effort is planned in conjunction with TRECVID 2019.
  • Sharing of components:

    • A common organization, exchange formats, and associated tools will be proposed for sharing elements among the interested TRECVID Ad-hoc participants. More information will be made available on a dedicated wiki, accessible to TRECVID active participants.
    • The Centre for Research and Technology Hellas (ITI-CERTH) team shared their concept detection scores for the IACC.3 dataset.
    • Frame-level CNN features for the IACC.3 dataset are made available by the RUCMM team here.

Participation types

There will be 3 types of participation:
  1. Participation by submitting only automatic runs for TRECVID evaluation
  2. Participation by submitting automatic runs for TRECVID evaluation and by using an interactive system during the next VBS (Video Browser Showdown)
  3. Participation by using only an interactive system during the next VBS (Video Browser Showdown)
    Important for VBS participants:

    The same V3C1 data will be used by VBS participants. VBS supports two kinds of tasks, Known-item search and Ad-hoc search; participation in either task is optional, and teams may choose to join both. Interactive systems at VBS joining the Ad-hoc task will be tested in real time on a randomly selected subset of queries (from the 30 selected for TRECVID 2019).

    For questions about participation in the next VBS please contact the VBS organizers: Werner Bailer, Cathal Gurrin, or Klaus Schoeffmann.

Allowed training categories:

The task supports experiments using a "no annotation" condition. The idea is to promote the development of methods that permit the indexing of concepts in video shots using only data from the Web or archives, without the need for additional annotations. The training data could, for instance, consist of images or videos retrieved by a general-purpose search engine (e.g. Google) using only the query definition, with only automatic processing of the returned results.
By "no annotation", we mean that no annotation should be manually done on the retrieved samples (either images or videos). Any annotation done by somebody else prior to the general search does not count. Methods developed in this context could be used for building indexing tools for any concept, starting only from a simple query defining it. This will be implemented by using additional categories (E and F) for the training types, besides the A and D ones.

Please note these restrictions and this information on training types.

Run submission types:

Three main submission types will be accepted:
  1. Fully automatic (F) runs (no human input in the loop): the system takes the official query as input and produces results without any human intervention.
  2. Manually-assisted (M) runs: a human can formulate the initial query based on the topic and the query interface, not on knowledge of the collection or search results. The system then takes the formulated query as input and produces results without further human intervention.
  3. Relevance-feedback (R) runs: the system takes the official query as input and produces initial results; a human judge can then assess the top-5 results and feed this assessment back to the system to produce a final set of results. This feedback loop is permitted only once.

A new, extra Novelty run type (N) may be submitted within the main task. The goal of this run type is to encourage systems to submit novel and unique relevant shots not easily discovered by other runs. Each team may submit only 1 novelty run. Please note the new required XML field in the DTD indicating whether the run is of novelty or common type.

The submission types (automatic, manually-assisted, relevance feedback) are orthogonal to the training types (A, D, E, F).

Each team may submit a maximum of 4 prioritized runs per submission type and per task type (Main or Progress), with 2 additional runs allowed if they are of the "no annotation" training type (E or F) and the others are not. In addition, 1 novelty run is allowed in the Main task. The submission formats are described below.

Please note: Only submissions that are valid when checked against the supplied DTDs will be accepted. You must check your submission and correct it if needed before submitting it. NIST reserves the right to reject any submission that does not parse correctly against the provided DTD(s). Various checkers exist, e.g., Xerces-J: java sax.Counter -v YourSubmission.xml.
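
As an illustration only (not part of the official tooling), the following Python sketch checks a run file against the DTD named in its DOCTYPE using the lxml library; Xerces-J, as noted above, is an equally valid checker.

    # Hypothetical pre-submission DTD check using lxml (an alternative to the
    # Xerces-J command mentioned above); not part of the official tooling.
    from lxml import etree

    def validate_run(xml_path: str) -> bool:
        # dtd_validation + load_dtd make lxml read the DTD named in the DOCTYPE
        # and validate the document against it; no_network=False is needed here
        # because the DOCTYPE points at a remote NIST URL (alternatively, keep
        # a local copy of the DTD and reference it in the DOCTYPE).
        parser = etree.XMLParser(dtd_validation=True, load_dtd=True, no_network=False)
        try:
            etree.parse(xml_path, parser)
            return True
        except etree.XMLSyntaxError as err:
            print(f"{xml_path}: {err}")
            return False

    if __name__ == "__main__":
        import sys
        ok = all(validate_run(path) for path in sys.argv[1:])
        sys.exit(0 if ok else 1)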

Run submission format:

  • Participants will submit results against V3C1 data in each run for all and only the 30 main queries or the 20 progress queries released by NIST, with at most 1000 shot IDs per query.
  • The DTD includes an xml field to determine the task type (Main or Progress) as well as the run types. Each of the Main and Progress tasks will have their own set of Query IDs to use within run submissions.
  • Here for download (right-click and choose "display page source" to see the entire files) are a DTD for Ad-hoc search results of one main run, the container for one run, and a small example of what a site would send to NIST for evaluation. Please check all your submissions to see that they are well-formed.
  • Please submit each of your runs in a separate file, named to make clear which team has produced it. EACH file you submit should begin, as in the example submission, with the DOCTYPE statement:
    <!DOCTYPE videoAdhocSearchResults SYSTEM "https://www-nlpir.nist.gov/projects/tv2019/dtds/videoAdhocSearchResults.dtd">
    that refers to the DTD at NIST via a URL and with a videoAdhocSearchResults element even though there is only one run included. Each submitted file must be compressed using just one of the following: gzip, tar, zip, or bzip2.
  • Remember to use the correct shot IDs in your submissions. Shot IDs take the form "shotX_Y", where X is the original video ID and Y is the segmented shot ID. Please do NOT use any keyframe-associated file names in your submissions. Please consult the V3C1 readme file for more information about submitting the correct shot file names and the master shot reference for the V3C1 dataset. A small sketch illustrating these constraints appears after this list.
  • Please submit your compressed run files (1 run per file) to NIST through this password-protected webpage.
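
The sketch below (a hypothetical helper, not official tooling) illustrates the constraints listed above: at most 1000 shot IDs per query, shot IDs of the form "shotX_Y", and one compressed file per run (gzip is used here as one of the accepted formats).

    # Hypothetical pre-submission check: verify result counts and shot-ID
    # format, then gzip one run file for upload. Not part of the official tools.
    import gzip
    import re
    import shutil

    SHOT_ID = re.compile(r"^shot\d+_\d+$")  # shotX_Y: X = video ID, Y = shot ID

    def check_run(results):
        """results maps a query ID to its ranked list of shot IDs."""
        for query_id, shots in results.items():
            if len(shots) > 1000:
                raise ValueError(f"query {query_id}: {len(shots)} results (max is 1000)")
            for shot in shots:
                if not SHOT_ID.match(shot):
                    raise ValueError(f"query {query_id}: bad shot ID {shot!r}")

    def compress_run(xml_path):
        """Gzip a run file (1 run per compressed file, gzip being one accepted format)."""
        gz_path = xml_path + ".gz"
        with open(xml_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        return gz_path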

Evaluation:

  • All 30 main queries of 2019 will be evaluated by assessors at NIST after pooling and sampling.
  • In 2019, runs on the 20 progress queries will only be stored; 10 of the queries will be evaluated in 2020 (using 2019 and 2020 runs), while the other 10 will be evaluated in 2021 (using 2019 - 2021 runs).
  • Please note that NIST uses a number of rules in manual assessment of system output.

Measures:

  • Mean extended inferred average precision (mean xinfAP), which allows the sampling density to vary, e.g. so that it can be 100% in the top strata, which are most important for average precision.
  • As in past years, other detailed measures based on recall and precision will be provided by the sample_eval software.
  • Speed will also be measured: the clock time per query search, reported in seconds (to one decimal place), must be provided in each run (a minimal timing sketch follows this list).
  • A special metric will be developed to score Novelty runs such that more credit can be given to unique shots.
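
As a rough illustration of the speed requirement only, the sketch below (with an assumed, system-specific search_fn callable) records the per-query wall-clock time in seconds, rounded to one decimal place.

    # Minimal sketch, assuming a search_fn callable that returns a ranked list
    # of shot IDs for a query text; records the per-query elapsed wall-clock
    # time in seconds, rounded to one decimal place as required for each run.
    import time

    def timed_search(search_fn, query_text):
        start = time.perf_counter()
        ranked_shots = search_fn(query_text)             # system-specific search
        elapsed = round(time.perf_counter() - start, 1)  # seconds, one decimal
        return ranked_shots, elapsed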

Open Issues:

  • Participants are welcome to provide feedback about ideas to motivate novelty (unique-shot) retrieval and about how to incorporate it into the task evaluation in a way that encourages systems to retrieve such shots.
  • Participants are welcome to provide feedback about ideas for concept bank fusion and how teams can collaborate towards that goal.
