CALL FOR PARTICIPATION in the
2010 TREC VIDEO RETRIEVAL EVALUATION (TRECVID 2010)

February 2010 - November 2010

Conducted by the National Institute of Standards and Technology (NIST)
With support from other US government agencies


I n t r o d u c t i o n:

The TREC Video Retrieval Evaluation series (trecvid.nist.gov)
promotes progress in content-based analysis of and retrieval from
digital video via open, metrics-based evaluation. TRECVID is a
laboratory-style evaluation that attempts to model real world
situations or significant component tasks involved in such situations.


D a t a:

In TRECVID 2010 NIST will use the following data sets:

new>  * IACC.1.A

      Approximately 8000 Internet Archive videos (50GB, 200 hours)
      with Creative Commons licenses in MPEG-4/H.264, with durations
      between 10 seconds and 3.5 minutes. Most videos will have some
      donor-supplied metadata available, e.g., title, keywords, and
      description.

new>  * IACC.1.tv10.training

      Approximately 3200 Internet Archive videos (??GB, 200 hours)
      with Creative Commons licenses in MPEG-4/H.264, with durations
      between 3.6 and 4.1 minutes. Most videos will have some
      donor-supplied metadata available, e.g., title, keywords, and
      description.

      * Sound and Vision TV9 Test

      tv9.sv.test (114.8 GB) in MPEG-1 is available from NIST by
      download.

      * Gatwick surveillance video

      The data consist of about 150 hours of Gatwick Airport
      surveillance video (courtesy of the UK Home Office). The
      Linguistic Data Consortium has provided event annotations for
      the entire corpus. The corpus was divided into development and
      evaluation subsets. Annotations for the 2008 development and
      test sets are available.

new>  * HAVIC

      HAVIC is designed to be a large new collection of Internet
      multimedia. Construction by the Linguistic Data Consortium and
      NIST will begin early in 2010.


T a s k s:

In TRECVID 2010 NIST will evaluate systems on the following tasks
using the [data] indicated:

    * Known-item search task (interactive, manual, automatic) [IACC.1]

      The known-item search task models the situation in which someone
      knows of a video, has seen it before, believes it is contained
      in a collection, but doesn't know where to look. To begin the
      search process, the searcher formulates a text-only description,
      which captures what the searcher remembers about the target
      video. 100-200 topics are planned for automatic systems, a
      much smaller subset for human-in-the-loop systems.
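      At its core, the task asks systems to match a remembered text
      description against whatever is known about each video. Purely
      as an illustration (not part of the evaluation protocol), such
      matching could be sketched as ranking videos by word overlap
      between the query and donor-supplied metadata; the metadata
      records and the bag-of-words cosine scoring below are entirely
      hypothetical:

```python
# Toy sketch: rank videos by cosine similarity between a known-item
# text query and hypothetical donor metadata (title + description).
import math
from collections import Counter

def tokenize(text):
    # Lowercase whitespace tokenization; keep alphanumeric tokens only.
    return [w for w in text.lower().split() if w.isalnum()]

def cosine(a, b):
    # Cosine similarity between two token lists treated as bags of words.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_known_item(query, videos):
    """Return video ids sorted by similarity of query to title+description."""
    scored = [(cosine(tokenize(query),
                      tokenize(v["title"] + " " + v["description"])), v["id"])
              for v in videos]
    return [vid for score, vid in sorted(scored, reverse=True)]

videos = [
    {"id": "v1", "title": "beach vacation", "description": "waves and sand"},
    {"id": "v2", "title": "cooking pasta", "description": "kitchen recipe demo"},
]
print(rank_known_item("a video of waves on a sandy beach", videos))
# -> ['v1', 'v2']
```

      Real systems would of course use stronger text models and could
      exploit visual content as well; this only illustrates the shape
      of the text-only starting point.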

    * Semantic indexing [IACC.1]

      Automatic assignment of semantic tags to video segments can be
      fundamental technology for filtering, categorization, browsing,
      search, and other video exploitation. New technical issues to be
      addressed include what methods are needed and possible as
      collection size and diversity increase, as the number of
      features increases, and as features are related by an ontology.
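      One concrete consequence of ontology-related features is that a
      tag detected on a segment can imply its ancestors. As a minimal
      sketch (the tiny is-a hierarchy below is hypothetical, not any
      TRECVID feature set), expanding detected tags under an is-a
      relation might look like:

```python
# Toy sketch: close a set of detected semantic tags under a
# hypothetical is-a ontology, so "dog" also yields "animal", "entity".
ONTOLOGY = {            # child -> parent (is-a relation)
    "dog": "animal",
    "cat": "animal",
    "animal": "entity",
    "bus": "vehicle",
    "vehicle": "entity",
}

def expand_tags(detected):
    """Return the detected tags plus all is-a ancestors."""
    tags = set(detected)
    frontier = list(detected)
    while frontier:
        parent = ONTOLOGY.get(frontier.pop())
        if parent and parent not in tags:
            tags.add(parent)
            frontier.append(parent)
    return tags

print(sorted(expand_tags({"dog"})))  # -> ['animal', 'dog', 'entity']
```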

    * Content-based multimedia copy detection [IACC.1]

      As used here, a copy is a segment of video derived from another
      video, usually by means of various transformations such as
      addition, deletion, modification (of aspect, color, contrast,
      encoding, ...), camcording, etc. Detecting copies is important
      for copyright control, business intelligence and advertisement
      tracking, law enforcement investigations, etc. Content-based
      copy detection offers an alternative to watermarking.
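      As a rough illustration of the content-based idea (not the
      evaluated systems, which use far more robust audio and video
      features), a clip can be reduced to per-frame fingerprints that
      survive small transformations; the tiny grayscale "frames" and
      thresholds below are hypothetical:

```python
# Toy sketch: per-frame "average hash" fingerprints for copy detection.
# A frame is a small grayscale grid; each pixel contributes one bit
# (brighter than the frame mean or not), so mild contrast shifts
# leave the hash unchanged.
def ahash(frame):
    """1 bit per pixel: is the pixel brighter than the frame mean?"""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(int(p > mean) for p in pixels)

def hamming(a, b):
    # Number of differing bits between two hashes.
    return sum(x != y for x, y in zip(a, b))

def is_copy(clip, reference, max_bit_diff=2):
    """Declare a copy if every aligned frame hash is within the bit budget."""
    return all(hamming(ahash(c), ahash(r)) <= max_bit_diff
               for c, r in zip(clip, reference))

ref   = [[[10, 200], [220, 30]]]    # one 2x2 reference frame
copy_ = [[[14, 196], [210, 35]]]    # same frame after a contrast shift
other = [[[200, 10], [30, 220]]]    # unrelated frame
print(is_copy(copy_, ref), is_copy(other, ref))  # -> True False
```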

    * Event detection in airport surveillance video [Gatwick]

      Detecting human behaviors efficiently in vast amounts of
      surveillance video, both retrospectively and in real time, is
      fundamental technology for a variety of higher-level
      applications of critical importance to public safety and
      security. In light of the results for 2009, in 2010 we will
      rerun the 2009 task/data using the 2009 ground truth but on a
      subset of the 2009 events.

    * Instance search [TV2009 S&V]

      An important need in many situations involving video collections
      (archive video search/reuse, personal video organization/search,
      surveillance, law enforcement, protection of brand/logo use) is
      to find more video segments of a certain specific person,
      object, or place, given a visual example.

      In 2010 this will be a pilot task - evaluated by NIST but
      intended mainly to explore task definition and evaluation issues
      using data and an evaluation framework already in hand. It will
      be a first approximation to the desired full task, using a
      smaller number of topics, a simpler identification of the target
      entity, and less accuracy in locating instances than a full
      evaluation of the task would require.
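      In the simplest terms the task is a nearest-neighbour problem
      over per-segment visual descriptors. The 3-D "feature vectors"
      below stand in for real descriptors and are purely hypothetical;
      this sketch only illustrates the retrieval shape, not any
      evaluated system:

```python
# Toy sketch: instance search as nearest-neighbour lookup over
# hypothetical per-segment feature vectors extracted from video.
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_instances(example, segments, top_k=2):
    """Return ids of the top_k segments closest to the example's vector."""
    ranked = sorted(segments, key=lambda s: euclidean(example, s["vec"]))
    return [s["id"] for s in ranked[:top_k]]

segments = [
    {"id": "s1", "vec": (0.9, 0.1, 0.0)},   # contains the target instance
    {"id": "s2", "vec": (0.1, 0.9, 0.2)},   # unrelated segment
    {"id": "s3", "vec": (0.8, 0.2, 0.1)},   # contains the target instance
]
print(find_instances((1.0, 0.0, 0.0), segments))  # -> ['s1', 's3']
```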

TRECVID 2010 will also offer the following exploratory task:

    * Event detection in Internet multimedia [HAVIC]

      The explosion of multimedia content on the Internet necessitates
      development of new technologies for content understanding and
      search for a wide variety of commerce, research, and government
      applications. In 2010 this task will be treated as exploratory,
      i.e., the emphasis will be on supporting initial exploration of
      the new video collection, task definition, evaluation framework,
      and a variety of technical approaches to the system task - not
      system rankings.

Much like TREC, TRECVID will provide, in addition to the data,
uniform scoring procedures and a forum for organizations interested
in comparing their approaches and results.

Participants will be encouraged to share resources and intermediate
system outputs to lower entry barriers and enable analysis of various
components' contributions and interactions.


**************************************************
* You are invited to participate in TRECVID 2010 *
**************************************************

The evaluation is defined by the Guidelines. A draft version is
available: http://www-nlpir.nist.gov/projects/tv2010/tv2010.html and
details will be worked out starting in February based in part on input
from the participants. 

You should read the guidelines carefully before applying to
participate.

Organizations may choose to participate in one or more of the tasks.
TRECVID participants must submit results for at least one task in
order to attend the TRECVID workshop in Gaithersburg in November.

*PLEASE* only apply if you are able and fully intend to complete the
work for at least one task. Taking the data but not submitting any
runs threatens the continued operation of the workshop and the
availability of data for the entire community.


P l e a s e   n o t e:
 
1) Dissemination of TRECVID work and results other than in the
(publicly available) conference proceedings is welcomed, but the
conditions of participation specifically preclude any advertising
claims based on TRECVID results.

2) All system results submitted to NIST are published in the
Proceedings and on the public portions of the TRECVID web site
archive.

3) The workshop is open only to participating groups that submit
results for at least one task and to selected government personnel
from sponsoring agencies and data donors.

4) By applying to participate you indicate your acceptance of the
above restrictions.


T e n t a t i v e   s c h e d u l e

There is a tentative schedule for the tasks included in the guidelines
webpage, which may be changed as part of defining the final guidelines.
Here is a snapshot of that schedule:

  1. Feb    NIST sends out Call for Participation in TRECVID 2010 
 19. Feb    Applications for participation in TRECVID 2010 due at NIST
  1. Mar    Final versions of TRECVID 2009 papers due at NIST
  1. Apr    Guidelines complete
            Development/test data download begins 
 30. Jun    Copy detection component queries available for download
            Audio+video copy detection query plans available for download 
     Jul    Surveillance event detection dry run (systems run on Dev data) 
  6. Aug    Known-item Search topics available from TRECVID website. 
  9. Aug    Semantic indexing task submissions due at NIST for evaluation. 
 16. Aug    Unevaluated semantic indexing submissions available for active participants 
 17. Aug - 8. Sep    Semantic indexing assessment at NIST
 27. Aug    Audio+video copy detection submissions due at NIST for evaluation 
     Sep    Surveillance event detection submissions due at NIST for formal evaluation 
  8. Sep    Known-item search task submissions due at NIST for evaluation
 16. Sep - 8. Oct    Instance search task assessment at NIST
 17. Sep    Results of semantic indexing evaluations returned to participants
  1. Oct    Surveillance event detection preliminary results returned to participants 
  8. Oct    Audio+video copy detection results returned to participants
            Results of known-item search evaluations returned to participants 
 13. Oct    Results of instance search task evaluations returned to participants 
 18. Oct    Speaker proposals due at NIST
 22. Oct    Notebook papers due at NIST
  1. Nov    Copyright forms due back at NIST (see Notebook papers for instructions)
  8. Nov    TRECVID 2010 Workshop registration closes
15,16,17 Nov    TRECVID Workshop (2.5 days) at NIST in Gaithersburg, MD


W o r k s h o p   f o r m a t

The 2 1/2 day workshop itself, November 15-17 at NIST in Gaithersburg,
Maryland near Washington, DC, will be used as a forum both for
presentation of results (including failure analyses and system
comparisons), and for more lengthy system presentations describing
retrieval techniques used, experiments run using the data, and other
issues of interest to researchers in information retrieval. As there
is a limited amount of time for these presentations, the evaluation
coordinators and NIST will determine which groups are asked to speak
and which groups will present in a poster session. Groups that are
interested in having a speaking slot during the workshop will be asked
to submit a short abstract before the workshop describing the
experiments they performed. Speakers will be selected based on these
abstracts.

As some organizations may not wish to describe their proprietary
algorithms, TRECVID defines two categories of participation:

 *Category A: Full participation*
 Participants will be expected to present full details of system
 algorithms and various experiments run using the data, either in a talk
 or in a poster session.
 
 *Category C: Evaluation only* Participants in this category will be
 expected to submit results for common scoring and tabulation. They
 will not be expected to describe their systems in detail, but will be
 expected to provide a general description and report on time and
 effort statistics in a notebook paper.


H o w   t o   r e s p o n d   t o   t h i s   c a l l

Organizations wishing to participate in TRECVID 2010 should respond
to this call for participation by submitting an application by
19. February. Only ONE APPLICATION PER TEAM please, regardless of how
many organizations the team comprises.

An application consists of an email to Lori.Buckland at nist.gov
with the following parts. Please send the application as part of
the body of an email - in plain ASCII text.

1)  Name of the TRECVID 2010 main contact person

2)  Mailing address of main contact person (no post office box, please)

3)  Phone for main contact person

4)  Fax for main contact person

5)  Complete (unique) team name (if you know you are one of multiple
    groups from one organization, PLEASE consult with your colleagues
    to make your name unique)

6)  Short (unique) team name (20 chars or less) that you will use 
    to identify yourself in ALL email to NIST

7)  Optional - names and email addresses of additional team members you 
    would like added to the tv10list mailing list.

8)  In what years, if any, has your team participated in TRECVID before?

9) A one paragraph description of your technical approaches

10) A list of tasks you plan to participate in:

    KIS Known-item search task
    SIN Semantic indexing
    CCD Content-based multimedia copy detection
    SED Event detection in airport surveillance video
    INS Instance search
    MED Event detection in Internet multimedia

11) Participation category:

    Category A: Full participation - Participants will be expected to
      present full details of system algorithms and various experiments
      run using the data, either in a talk or in a poster session.
 
    Category C: Evaluation only - Participants in this category will
      be expected to submit results for common scoring and tabulation. 
      They will not be expected to describe their systems in detail, 
      but will be expected to provide a general description and report 
      on time and effort statistics in a notebook paper.

Once you have applied, you'll be subscribed to the tv10list email
discussion list, can participate in finalizing the guidelines, and
can sign up to get the data. The tv10list email discussion list will
serve as the main forum for such discussion and for dissemination of
other information about TRECVID 2010. It accepts postings only from
the email addresses used to subscribe to it.

All applications must be submitted by *February 19, 2010* to
Lori.Buckland at nist.gov. Any administrative questions about
conference participation, application format, content, etc. should be
sent to the same address.

If you would like to contribute to TRECVID in one or both of the
following ways, please contact Paul Over (info at bottom of page)
directly as soon as possible:

- agree to host 2010 test video data for download by other
  participants on a fast, password-protected site (Asian and European
  sites especially needed)

- agree to provide the output of your automatic speech recognition
  system run on the IACC test/development video (at least for the
  English speech)

Best regards,

Paul Over
Alan Smeaton
Wessel Kraaij


National Institute of Standards and Technology
Last updated: Tuesday, 16-Feb-2010 07:33:18 MST
Date created: Thursday, 28-Jan-10