CALL FOR PARTICIPATION in the
2011 TREC VIDEO RETRIEVAL EVALUATION (TRECVID 2011)

February 2011 - December 2011

Conducted by the National Institute of Standards and Technology (NIST)
With support from other US government agencies

Please note that due to the timing of the ACM Multimedia Conference
TRECVID will occur later than usual, namely December 5-7.


I n t r o d u c t i o n:

The TREC Video Retrieval Evaluation series (trecvid.nist.gov) promotes
progress in content-based analysis of and retrieval from digital video
via open, metrics-based evaluation. TRECVID is a laboratory-style
evaluation that attempts to model real world situations or significant
component tasks involved in such situations.


D a t a:

In TRECVID 2011 NIST will use the following data sets:


      * IACC.1.B

      Approximately 8000 Internet Archive videos (50GB, 200 hours)
      with Creative Commons licenses in MPEG-4/H.264, with durations
      between 10 seconds and 3.5 minutes. Most videos will have some
      donor-provided metadata available, e.g., title, keywords, and
      description.

      * IACC.1.A

      Approximately 8000 Internet Archive videos (50GB, 200 hours)
      with Creative Commons licenses in MPEG-4/H.264, with durations
      between 10 seconds and 3.5 minutes. Most videos will have some
      donor-provided metadata available, e.g., title, keywords, and
      description.

      * IACC.1.tv10.training

      Approximately 3200 Internet Archive videos (??GB, 200 hours)
      with Creative Commons licenses in MPEG-4/H.264, with durations
      between 3.6 and 4.1 minutes. Most videos will have some
      donor-provided metadata available, e.g., title, keywords, and
      description.

      * BBC rushes

      Approximately 460 hours divided into 30-second clips (files) of
      unedited video (MPEG-1) provided by the BBC Archive. Rushes are
      the raw material for programming and in the case of this data
      include mostly travel video and material from several dramatic
      series.

      * Gatwick surveillance video

      The data consist of about 150 hours of surveillance video from
      Gatwick Airport (courtesy of the UK Home Office). The
      Linguistic Data Consortium has provided event annotations for
      the entire corpus. The corpus was divided into development and
      evaluation subsets. Annotations for the 2008 development and
      test sets are available.

      * HAVIC

      HAVIC is designed to be a large new collection of Internet
      multimedia. Construction by the Linguistic Data Consortium and
      NIST began in 2010.


T a s k s:

In TRECVID 2011 NIST will evaluate systems on the following tasks
using the [data] indicated:

    * Known-item search task (interactive, manual, automatic) [IACC.1]

      The known-item search task models the situation in which someone
      knows of a video, has seen it before, believes it is contained
      in a collection, but doesn't know where to look. To begin the
      search process, the searcher formulates a text-only description,
      which captures what the searcher remembers about the target
      video. 300 topics are planned for automatic systems, with a
      subset of 24 for human-in-the-loop systems.
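      As an illustration only, and assuming nothing about the
      participating systems, a toy automatic known-item search could
      rank videos by word overlap between the searcher's text
      description and the donor-supplied metadata (title, keywords)
      that IACC.1 videos carry. All names and data here are invented
      for the sketch:

      ```python
      # Toy known-item search: rank videos by word overlap between
      # the searcher's description and donor-supplied metadata.
      # This is a hypothetical sketch, not an evaluated system.

      def tokenize(text):
          """Lowercase and split text into a set of words."""
          return set(text.lower().split())

      def rank_videos(query, videos):
          """Return video ids ordered by metadata word overlap with
          the query (highest overlap first)."""
          q = tokenize(query)
          scored = []
          for vid, meta in videos.items():
              words = tokenize(" ".join(meta.values()))
              scored.append((len(q & words), vid))
          scored.sort(reverse=True)
          return [vid for _, vid in scored]

      # Invented example collection with donor metadata:
      videos = {
          "v1": {"title": "surfing big waves",
                 "keywords": "ocean surf hawaii"},
          "v2": {"title": "cooking pasta",
                 "keywords": "kitchen recipe italian"},
      }
      ```

      Real systems would of course go far beyond word overlap, but
      the task's input (a text-only description) and output (a
      ranked list of candidate videos) have this shape.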

    * Semantic indexing [IACC.1]

      Automatic assignment of semantic tags to video segments can be
      fundamental technology for filtering, categorization, browsing,
      search, and other video exploitation. New technical issues to
      be addressed include the methods needed (and possible) as
      collection size and diversity increase, as the number of
      features increases, and as features are related by an ontology.

    * Content-based multimedia copy detection [IACC.1]

      As used here, a copy is a segment of video derived from another
      video, usually by means of various transformations such as
      addition, deletion, modification (of aspect, color, contrast,
      encoding, ...), camcording, etc. Detecting copies is important
      for copyright control, business intelligence and advertisement
      tracking, law enforcement investigations, etc. Content-based
      copy detection offers an alternative to watermarking.
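      As a toy illustration of the content-based (non-watermark)
      approach, frames can be reduced to compact fingerprints and
      compared by Hamming distance. The function names and the
      10-bit threshold below are invented for this sketch and are
      not part of the task definition:

      ```python
      # Minimal sketch of content-based copy detection using
      # average-hash fingerprints. Frames are modeled as 8x8
      # grayscale grids; real systems would decode video frames.

      def average_hash(frame):
          """64-bit fingerprint: 1 where a pixel exceeds the mean."""
          pixels = [p for row in frame for p in row]
          mean = sum(pixels) / len(pixels)
          bits = 0
          for p in pixels:
              bits = (bits << 1) | (1 if p > mean else 0)
          return bits

      def hamming(a, b):
          """Number of differing bits between two fingerprints."""
          return bin(a ^ b).count("1")

      def is_copy(query_frame, reference_frame, threshold=10):
          """Flag a likely copy when fingerprints differ in at most
          `threshold` of 64 bits (illustrative value)."""
          d = hamming(average_hash(query_frame),
                      average_hash(reference_frame))
          return d <= threshold

      # A reference frame, a brightness-shifted "copy", and an
      # unrelated checkerboard frame:
      ref = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
      copy = [[min(255, p + 30) for p in row] for row in ref]
      unrelated = [[255 if (r + c) % 2 else 0 for c in range(8)]
                   for r in range(8)]
      ```

      The fingerprint survives the brightness change because it
      depends only on each pixel's relation to the frame mean, which
      hints at why such features are robust to the transformations
      listed above.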

    * Surveillance video event detection [Gatwick]

      Detecting human behaviors efficiently in vast amounts of
      surveillance video, both retrospectively and in real time, is
      fundamental technology for a variety of higher-level
      applications of critical importance to public safety and
      security. In light of results for 2009/10, in 2011 we will rerun
      the 2009 task/data using the 2009 ground truth but on a subset
      of the 2009 events.

    * Instance search [BBC rushes] (interactive, automatic)

      An important need in many situations involving video collections
      (archive video search/reuse, personal video organization/search,
      surveillance, law enforcement, protection of brand/logo use) is
      to find more video segments of a certain specific person,
      object, or place, given a visual example.

      In 2011 this will still be a pilot task: evaluated by NIST but
      intended mainly to explore task definition and evaluation
      issues using data and an evaluation framework in hand. It will
      be a first approximation to the desired full task, using a
      smaller number of topics, a simpler identification of the
      target entity, and less accuracy in locating the instance than
      would be desirable in a full evaluation of the task.

    * Multimedia event detection [HAVIC]

      The explosion of multimedia content on the Internet
      necessitates the development of new technologies for content
      understanding and search for a wide variety of commerce,
      research, and government applications.


Much like TREC, TRECVID will provide, in addition to the data,
uniform scoring procedures and a forum for organizations interested
in comparing their approaches and results.

Participants will be encouraged to share resources and intermediate
system outputs to lower entry barriers and enable analysis of various
components' contributions and interactions.


**************************************************
* You are invited to participate in TRECVID 2011 *
**************************************************

The evaluation is defined by the Guidelines. A draft version is
available: http://www-nlpir.nist.gov/projects/tv2011/tv2011.html and
details will be worked out starting in February based in part on input
from the participants.

You should read the guidelines carefully before applying to
participate.

Organizations may choose to participate in one or more of the tasks.
TRECVID participants must submit results for at least one task in
order to attend the TRECVID workshop in Gaithersburg in December.

*PLEASE* only apply if you are able and fully intend to complete the
work for at least one task. Taking the data but not submitting any
runs threatens the continued operation of the workshop and the
availability of data for the entire community.


P l e a s e   n o t e:
 
1) Dissemination of TRECVID work and results other than in the
(publicly available) conference proceedings is welcomed, but the
conditions of participation specifically preclude any advertising
claims based on TRECVID results.

2) All system results submitted to NIST are published in the
Proceedings and on the public portions of TRECVID web site archive.

3) The workshop is open only to participating groups that submit
results for at least one task and to selected government personnel
from sponsoring agencies and data donors.

4) By applying to participate you indicate your acceptance of the
above restrictions.


T e n t a t i v e   s c h e d u l e

There is a tentative schedule for the tasks included in the guidelines
webpage, which may be changed as part of defining the final guidelines.

Please note that due to the timing of the ACM Multimedia Conference
TRECVID will occur later than usual, namely December 5-7.

Here is a snapshot of that schedule:


*************************************************
FILL IN SCHEDULE WHEN MED AND SED DATES ARE KNOWN
*************************************************


W o r k s h o p   f o r m a t

The 2 1/2 day workshop itself, December 5-7 at NIST in Gaithersburg,
Maryland, near Washington, DC, will be used as a forum both for
presentation of results (including failure analyses and system
comparisons), and for more lengthy system presentations describing
retrieval techniques used, experiments run using the data, and other
issues of interest to researchers in information retrieval. As there
is a limited amount of time for these presentations, the evaluation
coordinators and NIST will determine which groups are asked to speak
and which groups will present in a poster session. Groups that are
interested in having a speaking slot during the workshop will be asked
to submit a short abstract before the workshop describing the
experiments they performed. Speakers will be selected based on these
abstracts.

As some organizations may not wish to describe their proprietary
algorithms, TRECVID defines two categories of participation:

 *Category A: Full participation*
 Participants will be expected to present full details of system
 algorithms and various experiments run using the data, either in a talk
 or in a poster session.
 
 *Category C: Evaluation only*
 Participants in this category will be expected to submit results
 for common scoring and tabulation. They will not be expected to
 describe their systems in detail, but will be expected to provide a
 general description and report on time and effort statistics in a
 notebook paper.


H o w   t o   r e s p o n d   t o   t h i s   c a l l

Organizations wishing to participate in TRECVID 2011 should respond
to this call for participation by submitting an application by
February 18. Only ONE APPLICATION PER TEAM please, regardless of how
many organizations the team comprises.

An application consists of an email to Lori.Buckland at nist.gov
with the following parts. Please send the application as part of
the body of an email - in plain ASCII text.

1)  Name of the TRECVID 2011 main contact person

2)  Mailing address of main contact person (no post office box, please)

3)  Phone for main contact person

4)  Fax for main contact person

5)  Complete (unique) team name (if you know you are one of multiple
    groups from one organization, PLEASE consult with your colleagues
    to make your name unique)

6)  Short (unique) team name (20 chars or less) that you will use 
    to identify yourself in ALL email to NIST

7)  Optional - names and email addresses of additional team members you 
    would like added to the tv11.list mailing list.

8)  In what years, if any, has your team participated in TRECVID
    before?

9)  A one-paragraph description of your technical approaches

10) A list of tasks you plan to participate in:

    KIS Known-item search task
    SIN Semantic indexing
    CCD Content-based multimedia copy detection
    SED Surveillance video event detection
    INS Instance search
    MED Multimedia event detection

11) Participation category:

    Category A: Full participation - Participants will be expected to
      present full details of system algorithms and various experiments
      run using the data, either in a talk or in a poster session.
 
    Category C: Evaluation only - Participants in this category will
      be expected to submit results for common scoring and tabulation. 
      They will not be expected to describe their systems in detail, 
      but will be expected to provide a general description and report 
      on time and effort statistics in a notebook paper.

Once you have applied, you'll be subscribed to the tv11.list email
discussion list, can participate in finalizing the guidelines, and
can sign up to get the data. The tv11.list email discussion list
will serve as the main forum for such discussion and for
dissemination of other information about TRECVID 2011. It accepts
postings only from the email addresses used to subscribe to it.

All applications must be submitted by *February 18, 2011* to
Lori.Buckland at nist.gov. Any administrative questions about
conference participation, application format, content, etc. should
be sent to the same address.

If you would like to contribute to TRECVID in one or both of the
following ways, please contact Paul Over (info at bottom of page)
directly as soon as possible:

- agree to host 2011 test video data for download by other
  participants on a fast, password-protected site (Asian and
  European sites especially needed)

- agree to provide the output of your automatic speech recognition
  system run on the IACC test/development video (at least for the
  English speech)

Best regards,

Paul Over
Alan Smeaton
Wessel Kraaij
Georges Quenot


National Institute of Standards and Technology
Last updated: Tuesday, 01-Feb-2011 05:30:12 MST
Date created: Thursday, 28-Jan-11