CALL FOR PARTICIPATION in the
2017 TREC VIDEO RETRIEVAL EVALUATION (TRECVID 2017)
January 2017 - November 2017
Conducted by the National Institute of Standards and Technology (NIST)
with additional funding from other US government agencies
I n t r o d u c t i o n:
The TREC Video Retrieval Evaluation series (trecvid.nist.gov) promotes
progress in content-based analysis of and retrieval from digital video
via open, metrics-based evaluation. TRECVID is a laboratory-style
evaluation that attempts to model real world situations or significant
component tasks involved in such situations. In its 17th annual evaluation
cycle TRECVID will evaluate participating systems on 6 different video
analysis and retrieval tasks using various types of real world datasets.
D a t a:
In TRECVID 2017 NIST will use at least the following data sets:
* IACC.3
A new dataset, introduced in 2016, of approximately 4,600 Internet Archive
videos (144 GB, 600 hours) with Creative Commons licenses in MPEG-4/H.264
format, with durations ranging from 6.5 to 9.5 minutes and a mean duration
of almost 7.8 minutes. Most videos will have some metadata provided by the
donor available, e.g., title, keywords, and description.
* IACC.2.A, IACC.2.B, and IACC.2.C
Three datasets - totaling approximately 7,300 Internet Archive
videos (144 GB, 600 hours) with Creative Commons licenses in
MPEG-4/H.264 format, with durations ranging from 10 seconds to 6.4
minutes and a mean duration of almost 5 minutes. Most videos will have
some metadata provided by the donor available, e.g., title, keywords,
and description.
* IACC.1.A, IACC.1.B, and IACC.1.C
Three datasets - totaling approximately 8,000 Internet Archive videos
(160 GB, 600 hours) with Creative Commons licenses in MPEG-4/H.264
format, with durations between 10 seconds and 3.5 minutes. Most videos
will have some metadata provided by the donor available, e.g., title,
keywords, and description.
* IACC.1.tv10.training
Approximately 3,200 Internet Archive videos (50 GB, 200 hours)
with Creative Commons licenses in MPEG-4/H.264 format, with durations
between 3.6 and 4.1 minutes. Most videos will have some metadata
provided by the donor available, e.g., title, keywords, and
description.
* BBC EastEnders
Approximately 244 video files (totaling 300 GB, 464 hours) with
associated metadata, each containing a week's worth of BBC EastEnders
programs in MPEG-4/H.264 format.
* Twitter Vine videos
URLs for approximately 50,000 video clips have been collected by NIST
from the public Twitter stream of Vine videos. Only a subset of at
least 2,000 videos will be used, with their human-annotated one-sentence
descriptions.
* Blip10000
Consists of 14,838 videos, for a total of 3,288 hours, from blip.tv.
The videos cover a broad range of topics and styles. Automatic speech
recognition transcripts are provided by LIMSI, user-contributed metadata
and shot boundaries by TU Berlin, and video concepts based on the
BVLC Reference CaffeNet model (bvlc_reference_caffenet on
http://caffe.berkeleyvision.org/model_zoo.html) by EURECOM (a minimal
sketch of applying this model appears after this data list).
* Gatwick and i-LIDS MCT airport surveillance video
The data consist of about 150 hours of airport
surveillance video (courtesy of the UK Home Office). The
Linguistic Data Consortium has provided event annotations for
the entire corpus. The corpus was divided into development and
evaluation subsets. Annotations for the 2008 development and test
sets are available.
* HAVIC
HAVIC is a large collection of Internet multimedia constructed
by the Linguistic Data Consortium and NIST. Participants will
receive training corpora, event training resources, and two
development test collections. Participants will also receive an
evaluation collection, the same as in 2014: either the
8,000-hour MED14 search collection or the 1,300-hour MED14 search
subset for participants with limited computing resources.
* YFCC100M
The Yahoo Flickr Creative Commons 100M dataset (YFCC100M) is a large
collection of images (99.3 million) and videos (0.7 million) available
on Yahoo! Flickr. All photos and videos listed in the collection are
licensed under one of the Creative Commons copyright licenses. The
TRECVID MED task will use a subset of about 100,000 video clips.
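
As an illustration of the Blip10000 concept annotations mentioned above,
here is a minimal sketch of applying the BVLC Reference CaffeNet model to
a single keyframe with Caffe's Python bindings. The file and keyframe
paths are placeholders, mean subtraction is omitted for brevity, and this
is not necessarily the pipeline EURECOM used:

    import numpy as np
    import caffe

    # Load the pretrained network; both files come from the Caffe model
    # zoo page cited above (paths here are placeholders).
    net = caffe.Net('deploy.prototxt',
                    'bvlc_reference_caffenet.caffemodel',
                    caffe.TEST)

    # Preprocess a keyframe the way CaffeNet expects: CHW layout, BGR
    # channel order, pixel values scaled to the 0-255 range.
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
    transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
    transformer.set_raw_scale('data', 255.0)

    image = caffe.io.load_image('keyframe.jpg')      # placeholder keyframe
    net.blobs['data'].data[...] = transformer.preprocess('data', image)
    probs = net.forward()['prob'][0]  # 1,000 ImageNet class probabilities

    top5 = np.argsort(probs)[::-1][:5]               # top-5 concept indices
    print(top5, probs[top5])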
T a s k s:
In TRECVID 2017 NIST will evaluate systems on the following tasks
using the [data] indicated:
* AVS: Ad-hoc Video Search [IACC.3]
The Ad-hoc search task, introduced in TRECVID 2016, will continue in 2017.
It models the end-user search use case: a user looking for segments of video
containing persons, objects, activities, locations, etc., and combinations
of the former. Given about 30 multimedia topics created at NIST, systems
return for each topic all the shots that meet the video need expressed by
the topic, ranked in order of confidence. Although all evaluated submissions
will be automatic runs, interactive systems will have the opportunity to
participate in the Video Browser Showdown (VBS) in January 2018 using
the same test data (IACC.3).
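
A purely illustrative sketch of the expected output shape for one topic
follows; score_shot() and the shot IDs are hypothetical stand-ins for a
real system, and this is not the official TRECVID run format:

    # Score every shot against a topic and return a ranked list with
    # confidences, as the task description above calls for.
    def score_shot(topic, shot_id):
        """Dummy relevance scorer; a real system would match the topic
        text against visual/audio features extracted from the shot."""
        return (hash((topic, shot_id)) % 1000) / 1000.0

    def rank_shots(topic, shot_ids):
        scored = [(sid, score_shot(topic, sid)) for sid in shot_ids]
        # Decreasing order of confidence, per the task definition.
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    shots = ['shot1_1', 'shot1_2', 'shot2_3']  # illustrative shot IDs
    for shot_id, confidence in rank_shots('a person riding a bicycle', shots):
        print(shot_id, confidence)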
* SED: Surveillance video event detection (interactive) [i-LIDS]
Detecting human behaviors efficiently in vast amounts of
surveillance video, both retrospectively and in real time, is
a fundamental technology for a variety of higher-level
applications of critical importance to public safety and
security. In this task participants will examine the performance
of interactive surveillance video search for a known set of
events. NIST will continue to work on adding new
annotations to additional Gatwick videos to create a new test
set - community involvement will likely be critical in this effort.
* INS: Instance search (interactive, automatic) [BBC EastEnders]
An important need in many situations involving video collections
(archive video search/reuse, personal video organization/search,
surveillance, law enforcement, protection of brand/logo use) is to
find more video segments of a certain specific person, object,
or place, given a visual example. In 2016 a new query type was tested,
asking systems to retrieve specific persons in specific locations.
The same query type will continue in 2017, but with new test topics.
A set of master locations with various video examples will be given, and
each topic will include a few examples (image and video) of a person and
ask systems to find that person in one of the known locations.
* MED: Multimedia event detection [HAVIC and YFCC100M]
Video is becoming a new means of documenting everything from
recipes to how to change a car tire. Ever-expanding
multimedia video content necessitates development of new
technologies for retrieving relevant videos based solely on the
audio and visual content of the video. Participating MED teams
will create a system that quickly finds events in a large
collection of search videos. Given an evaluation collection of
videos (files) and a set of event kits, the system will provide
a rank and a confidence score for each evaluation video as to
whether the video contains the event. Both the Pre-Specified
and Ad-Hoc event tasks will be supported. NIST will create up to
10 new Ad-Hoc event kits. The development data will be the same
as last year. The evaluation search collection will be a subset of
the HAVIC data, with the addition of a subset of videos from the YFCC100M data.
* LNK: Video Hyperlinking [Blip10000]
This task envisages a scenario where users are interested in
finding further information on some aspect of the topic of interest
contained within a video segment, and they do this by
navigating via a hyperlink to other parts of the video
collection. To facilitate this, searchers need to be provided with
the capability of jumping from one part of a video to another within
the archive. This requires the construction of a network of hyperlinks between
different parts of the videos based on a combination of visual and
audio content features, and potentially metadata annotations.
Given a set of test videos with metadata and a defined set of
anchors, each defined by a start time and end time in the video,
the system will return for each anchor a ranked list of hyperlinking
targets: video segments defined by a video ID, a start time,
and an end time (possibly of segmented media/video fragments).
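
As a toy illustration, the anchors and ranked target lists can be pictured
as the following data structures; the field names and the placeholder
link() function are assumptions for exposition, not the official
submission format:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Anchor:
        video_id: str
        start: float  # seconds from the beginning of the video
        end: float

    @dataclass
    class Target:
        video_id: str
        start: float
        end: float
        confidence: float  # used to rank targets for an anchor

    def link(anchor: Anchor, collection: List[str]) -> List[Target]:
        """Placeholder linker: a real system would combine visual and
        audio features (and possibly metadata) to locate segments
        related to the anchor."""
        targets = [Target(vid, 0.0, 30.0, 0.5) for vid in collection]
        return sorted(targets, key=lambda t: t.confidence, reverse=True)

    anchor = Anchor('v001', start=12.0, end=45.0)
    for target in link(anchor, ['v002', 'v003']):
        print(target)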
* VTT: Video to Text Description [Twitter Vine videos]
Automatic annotation of videos using natural language text descriptions
has been a long-standing goal of computer vision. The task involves
understanding many concepts, such as objects, actions, scenes, person-object
relations, the temporal order of events, and many others. In recent years
there have been major advances in computer vision techniques which have
enabled researchers to begin working practically on this problem. Many
application scenarios can benefit greatly from such technology, such as
video summarization, facilitating the search and browsing of video archives
using such descriptions, describing videos to the blind, etc.
Given a set of Vine video URLs and a number of reference sets of text
descriptions, systems are asked to submit results for two subtasks: a
"Matching and Ranking" subtask, returning for each video URL a ranked list
of the text descriptions from each reference set most likely to correspond
to (i.e., to have been annotated for) the video, and a "Description
Generation" subtask, automatically generating for each video URL a
one-sentence text description.
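
As a hedged illustration of the "Matching and Ranking" subtask, one common
approach embeds videos and sentences in a shared space and ranks candidate
sentences by cosine similarity; the random vectors below merely stand in
for learned video/text encoders:

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_descriptions(video_vec, sentence_vecs):
        """Return sentence indices ordered from best to worst match."""
        sims = [cosine(video_vec, s) for s in sentence_vecs]
        return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)

    # Toy usage: random vectors stand in for learned embeddings.
    rng = np.random.default_rng(0)
    video = rng.normal(size=256)
    sentences = [rng.normal(size=256) for _ in range(5)]
    print(rank_descriptions(video, sentences))  # ranked sentence indices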
In addition to the data, TRECVID will provide uniform scoring
procedures, and a forum for organizations interested in comparing
their approaches and results.
Participants will be encouraged to share resources and intermediate
system outputs to lower entry barriers and enable analysis of various
components' contributions and interactions.
* You are invited to participate in TRECVID 2017 *
The evaluation is defined by the Guidelines. A draft version is
available: http://www-nlpir.nist.gov/projects/tv2017/index.html and
details will be worked out starting in February based in part on input
from the participants.
You should read the guidelines carefully before applying to participate in one or more tasks.
P l e a s e n o t e:
1) Dissemination of TRECVID work and results other than in the
(publicly available) conference proceedings is welcomed, but the
conditions of participation specifically preclude any advertising claims based on TRECVID results.
2) All system output and results submitted to NIST are published in
the Proceedings or on the public portions of the TRECVID web site archive.
3) The workshop is open only to participating groups that submit
results for at least one task and to selected government personnel
from sponsoring agencies and data donors.
4) Each participating group is required to submit before the November
workshop a notebook paper describing their experiments and results.
This is true even for groups who may not be able to attend the workshop.
5) It is the responsibility of each team contact to make sure that
information distributed via the call for participation and the
tv17.list email discussion list is disseminated to all team members with
a need to know. This includes information about deadlines and
restrictions on use of data.
6) By applying to participate you indicate your acceptance of the
above conditions and obligations.
A tentative schedule for the tasks is included in the Guidelines.
W o r k s h o p f o r m a t
Plans are for a 2 1/2 day workshop at NIST
in Gaithersburg, Maryland - just outside Washington, DC. Confirmation
and details will be provided to participants as soon as available.
The TRECVID workshop is used as a forum both for presentation of
results (including failure analyses and system comparisons), and for
more lengthy system presentations describing retrieval techniques
used, experiments run using the data, and other issues of interest to
researchers in information retrieval. As there is a limited amount of
time for these presentations, the evaluation coordinators and NIST
will determine which groups are asked to speak and which groups will
present in a poster session. Groups that are interested in having a
speaking slot during the workshop will be asked to submit a short
abstract before the workshop describing the experiments they
performed. Speakers will be selected based on these abstracts.
H o w t o r e s p o n d t o t h i s c a l l
Organizations wishing to participate in TRECVID 2017 must respond
to this call for participation by submitting an on-line application by
3 April. Only ONE APPLICATION PER TEAM please, regardless of how
many organizations the team comprises.
*PLEASE* only apply if you are able and fully intend to complete the
work for at least one task. Taking the data but not submitting any
runs threatens the continued operation of the workshop and the
availability of data for the entire community.
Here is the application URL:
You will receive an immediate automatic response when your application
is received. NIST will respond with more detail to all applications
beginning just after the application deadline. At that point you'll be
given the active participant's userid and password, be subscribed to
the tv17.list email discussion list, and can participate in finalizing
the guidelines as well as sign up to get the data, which is controlled
by separate passwords.
T R E C V I D 2 0 1 7 e m a i l d i s c u s s i o n l i s t
The tv17.list email discussion list will serve as
the main forum for discussion and for disseminating information about
TRECVID 2017. It is each participant's responsibility to monitor the
tv17.list postings. It accepts postings only from the email addresses
used to subscribe to it. At the bottom of the guidelines there is a
link to an archive of past postings, available using the active
participant's userid and password.
Q u e s t i o n s
Any administrative questions about conference participation,
application format/content, subscriptions to the tv17.list,
etc. should be sent to george.awad at nist.gov.
For further information, see the TRECVID web site: trecvid.nist.gov.