CALL FOR PARTICIPATION in the
2009 TREC VIDEO RETRIEVAL EVALUATION (TRECVID 2009)

February 2009 - November 2009

Conducted by the National Institute of Standards and Technology (NIST)
With support from other US government agencies


I n t r o d u c t i o n:

The TREC Video Retrieval Evaluation series (trecvid.nist.gov)
promotes progress in content-based analysis of and retrieval from
digital video via open, metrics-based evaluation. TRECVID is a
laboratory-style evaluation that attempts to model real world
situations or significant component tasks involved in such situations.

TRECVID 2009 will test systems on the following tasks:

 * high-level feature extraction
 * search (interactive, manually-assisted, and/or fully automatic)
 * content-based copy detection
 * surveillance event detection

Changes to be noted by 2008 participants:

  * For the feature and search tasks:
       o There will be an optional high-precision (at 10) condition
         in the search task.
       o The number of search topics will be 24.
       o Training category "B" has been merged into "A" and "b" into
         "a". The main distinction is whether training data specific
         to Sound and Vision video has been used or not.
       o Text-only baseline runs are no longer required. 
       o The test features will include 10 from 2008 and the test data
         will include the 2008 feature test data so that we can
         measure progress on those 10 features.

 * For the copy detection task:
       o Runs on video-only and video+audio queries will be required;
         runs on audio-only queries will be optional.
       o Runs for two application profiles will be required. One
         profile aims first to eliminate false alarms and then to
         reduce misses. The other weights the costs of false alarms
         and misses equally. Speed should be maximized for both
         profiles (see the sketch below).
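
The official definition of the two profiles and the evaluation measure
will be given in the guidelines. Purely as illustration, here is a
minimal Python sketch of the kind of linear cost such profiles imply;
the weights and counts below are hypothetical, not official values.

    # Illustrative sketch only: a linear cost that weights misses
    # against false alarms. The official TRECVID 2009 copy detection
    # measure and weights are defined in the guidelines.

    def detection_cost(misses, false_alarms, miss_weight, fa_weight):
        """Weighted linear cost: lower is better."""
        return miss_weight * misses + fa_weight * false_alarms

    # Profile 1: false alarms dominate the cost, so eliminate them
    # first, then reduce misses.
    no_fa = detection_cost(misses=3, false_alarms=1,
                           miss_weight=1.0, fa_weight=1000.0)

    # Profile 2: misses and false alarms cost the same.
    balanced = detection_cost(misses=3, false_alarms=1,
                              miss_weight=1.0, fa_weight=1.0)

    print(no_fa, balanced)  # -> 1003.0 4.0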


D a t a:

A number of datasets are available for use in TRECVID 2009. We describe
them here and then indicate below which data will be used for
development versus test for each task.


 Sound and Vision

  * The Netherlands Institute for Sound and Vision has generously
    provided news magazine, science news, news reports, documentaries,
    educational programming, and archival video in MPEG-1 for use
    within TRECVID.

    For the search and feature tasks:

        o In 2007 we used about 50 hours for development
          (tv7.sv.devel) and 50 hours for search and feature test
          (tv7.sv.test). These 100 hours will be available as
          development data for the search and feature tasks in 2009.

          The 100 hours used as test data for 2008 (tv8.sv.test)
          will be combined with 180 hours of new test data
          (tv9.sv.test) to create the 2009 test set for the search
          and feature tasks. 

    For the copy detection task:

        o The 180 hours of new data (tv9.sv.test) will also be used
          as test data for the copy detection task. The 200 hours
          used in 2007 (tv7.sv.devel, tv7.sv.test) and 2008
          (tv8.sv.test) will be available for development in 2009.

  * Distribution: by download from password-protected servers at
    NIST and elsewhere.

  * Training truth data for search and feature tasks: 

        o feature annotations of the search and feature task data in
          2008, 2007, 2005, and 2003 are available from the "Past
          data" section of the TRECVID website.

        o a community effort to annotate the 2009 development data
          for a set of features (yet to be determined) will likely
          be organized by Georges Quenot (LIG).

  * Master shot reference: Christian Petersohn at the Fraunhofer
    (Heinrich Hertz) Institute in Berlin has again provided the
    master shot reference for the search/feature tasks.

  * Automatic speech recognition (Dutch): The University of Twente
    has offered to provide the output of their automatic speech
    recognition system on the Sound and Vision data.

  * Machine translation (Dutch to English): Christof Monz of Queen
    Mary, University of London will again contribute machine
    translation (Dutch to English) for the Sound and Vision video
    (ASR output or speech).

  * Keyframes: NIST will *not* be supplying keyframes for the Sound
    and Vision video. This will require groups to look afresh at how
    best to train their systems, weighing tradeoffs among processing
    speed, effectiveness, and the amount of the video processed.


 BBC rushes

  * The BBC Archive has provided unedited material in MPEG-1 from
    about five dramatic series for use within TRECVID.

  * The BBC rushes used in 2008 for summarization system development
    (tv7.bbc.devel, tv7.bbc.test) and testing (tv8.bbc.test), a total
    of 53 hours, will be used for copy detection system development.
    30 hours of new data (tv9.bbc.test) will be used for testing in
    the 2009 copy detection task.

  * Distribution: by download from password-protected servers at
    NIST and elsewhere. 


 TRECVID 2009 surveillance video

  * The data will consist of at least 100 hours (10 days x 2 hours/day
    x 5 cameras), obtained from Gatwick Airport surveillance video
    data (courtesy of the UK Home Office). The Linguistic Data
    Consortium will provide event annotations for the entire corpus.
    The corpus will be divided into development and evaluation
    subsets. The exact partitioning of the data is to be determined.

  * Tasks: surveillance event detection.

  * Distribution: to be determined

  * Training truth data: to be determined



T a s k s:

1) High-level feature extraction

          Task: - Given a test collection of video and 20 feature
                  definitions (half chosen from those tested in 2008
                  and half new, suggested by participants), return for
                  each feature all the shots from the test collection
                  that contain it, ranked in order of confidence.
    Evaluation: - manual judgments at NIST of the 20 features
      Measures: - same as in 2008 (illustrated below)
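
Recent TRECVID feature evaluations have been scored with an inferred
variant of average precision estimated from sampled judgments. Purely
for intuition, here is a minimal Python sketch of plain average
precision over a ranked shot list; the shot IDs and judgments are made
up.

    # Illustrative sketch only: plain (non-inferred) average precision.
    def average_precision(ranked_shots, relevant):
        """ranked_shots: shot IDs in decreasing confidence order.
        relevant: set of shot IDs judged to contain the feature."""
        hits, precision_sum = 0, 0.0
        for rank, shot in enumerate(ranked_shots, start=1):
            if shot in relevant:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant) if relevant else 0.0

    # Hypothetical run: five returned shots, two of them relevant.
    print(average_precision(["s1", "s2", "s3", "s4", "s5"], {"s2", "s5"}))
    # (1/2 + 2/5) / 2 = 0.45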


2) Search

          Task: - Given a test collection of video and 24 multimedia
                  topics created at NIST, return for each topic all
                  the shots which meet the video need expressed by it,
                  ranked in order of confidence. An optional
                  high-precision (at 10) condition will be available
                  (see the sketch below).
    Evaluation: - manual judgments at NIST
      Measures: - same as in 2007 
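
For the optional high-precision condition, the natural quantity is
precision at depth 10. Purely as illustration (exact scoring will be
per the guidelines), a minimal Python sketch with made-up shot IDs and
judgments:

    # Illustrative sketch only: precision at depth k over a ranked list.
    def precision_at_k(ranked_shots, relevant, k=10):
        """Fraction of the top-k shots judged relevant to the topic."""
        return sum(1 for shot in ranked_shots[:k] if shot in relevant) / k

    ranked = ["s%d" % i for i in range(1, 25)]   # 24 hypothetical shots
    judged = {"s1", "s3", "s4", "s9", "s15"}
    print(precision_at_k(ranked, judged))        # 4 of top 10 -> 0.4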


3) Content-based copy detection

          Task: - Given 180 hours of video and a large number of
                  queries, most containing a segment of the test
                  collection that has been transformed along with the
                  rest of the query in one or more of a number of
                  ways, find any occurrence of the segment in the test
                  collection.
    Evaluation: - automatic comparison to automatically produced ground truth
      Measures: - same as in 2008


4) Event detection in airport surveillance video

          Task: - Detect a set of predefined events in a test collection of
                  airport surveillance video.
    Evaluation: - automatic comparison to manual annotations
      Measures: - same as in 2008


Much like TREC, TRECVID will provide, in addition to the data, uniform
scoring procedures and a forum for organizations interested in
comparing their approaches and results.

Participants will be encouraged to share resources and intermediate
system outputs to lower entry barriers and enable analysis of various
components' contributions and interactions.

*You are invited to participate in TRECVID 2009*.

The evaluation is defined by the Guidelines. A draft version is
available at http://www-nlpir.nist.gov/projects/tv2009/tv2009.html, and
details will be worked out starting in February based in part on input
from the participants. You should read the guidelines carefully before
applying to participate.

Organizations may choose to participate in one or more of the tasks.
TRECVID participants must submit results for at least one task in
order to attend the TRECVID workshop in Gaithersburg in November.

*PLEASE* only apply if you are able and fully intend to complete the
work for at least one task. Taking the data but not submitting any
runs threatens the continued operation of the workshop and the
availability of data for the entire community.


P l e a s e   n o t e:
 
1) Dissemination of TRECVID work and results other than in the
(publicly available) conference proceedings is welcomed, but the
conditions of participation specifically preclude any advertising
claims based on TRECVID results.

2) All system results submitted to NIST are published in the
Proceedings and on the public portions of the TRECVID web site archive.

3) The workshop is open only to participating groups that submit
results for at least one task and to selected government personnel
from sponsoring agencies and data donors.

4) By applying to participate you indicate your acceptance of the
above restrictions.


T e n t a t i v e   s c h e d u l e

Here is a tentative schedule for the tasks, which may be changed as
part of defining the final guidelines.

 2. Feb     NIST sends out Call for Participation in TRECVID 2009 
27. Feb     Applications for participation in TRECVID 2009 due at NIST
 1. Mar     Final versions of TRECVID 2008 papers due at NIST
 3. Apr     Guidelines complete 
    Apr     Download of feature/search development data 
    Jun     Download of feature/search test data 
30. Jun     Video-only copy detection queries available for download 
 3. Aug     Video-only copy detection submissions due at NIST
            Audio-only copy detection queries available for download 
 7. Aug     Search topics available from the TRECVID website
10. Aug     Feature extraction task submissions due at NIST
            Feature extraction donations due at NIST 
17. Aug     Feature extraction donations available for participants 
18. Aug - 9. Sep     Feature assessment at NIST
28. Aug     Audio-only copy detection submissions due at NIST
            Audio+video copy detection query plans available for download 
 9. Sep     Search task submissions due at NIST for evaluation
17. Sep     Results of feature extraction evaluations returned
17. Sep - 9. Oct     Search assessment at NIST
 1. Oct     Audio+video copy detection submissions due at NIST
            Video-only and audio-only copy detection results returned
 9. Oct     Audio+video copy detection results returned
            TRECVID workshop registration opens 
14. Oct     Results of search evaluations returned
19. Oct     Speaker proposals due at NIST
26. Oct     Notebook papers due at NIST
 1. Nov     Copyright forms due back at NIST
 9. Nov     TRECVID 2009 Workshop registration closes
16.-17. Nov TRECVID Workshop at NIST in Gaithersburg, MD
18. Dec     Workshop papers publicly available
---------------- 
 1. Mar 2010     Final versions of TRECVID 2009 papers due at NIST



W o r k s h o p   f o r m a t

The workshop itself, November 16-17 at NIST in Gaithersburg, Maryland,
near Washington, DC, will be used as a forum both for presentation of
results (including failure analyses and system comparisons), and for
more lengthy system presentations describing retrieval techniques
used, experiments run using the data, and other issues of interest to
researchers in information retrieval. As there is a limited amount of
time for these presentations, the evaluation coordinators and NIST
will determine which groups are asked to speak and which groups will
present in a poster session. Groups that are interested in having a
speaking slot during the workshop will be asked to submit a short
abstract before the workshop describing the experiments they
performed. Speakers will be selected based on these abstracts.

As some organizations may not wish to describe their proprietary
algorithms, TRECVID defines two categories of participation:

 *Category A: Full participation*
 Participants will be expected to present full details of system
 algorithms and various experiments run using the data, either in a talk
 or in a poster session.
 
 *Category C: Evaluation only*
 Participants in this category will be expected to submit results for
 common scoring and tabulation. They will not be expected to describe
 their systems in detail, but will be expected to provide a general
 description and report on time and effort statistics in a notebook
 paper.


H o w   t o   r e s p o n d   t o   t h i s   c a l l

Organizations wishing to participate in TRECVID 2009 should respond to
this call for participation by submitting an application.
An application consists of an email to Lori.Buckland at nist.gov with
the following parts:

 1) Name of the TRECVID 2009 main contact person
 2) Mailing address of main contact person (no post office box, please)
 3) Phone for main contact person
 4) Fax for main contact person
 5) Complete (unique) team name (if you know you are one of multiple
    groups from one organization, please consult with your colleagues)
 6) Short (unique) team name (20 chars or less) that you will use to
    identify yourself in ALL email to NIST
 7) Optional - names and email addresses of additional team members
    you would like added to the tv9list mailing list
 8) What years, if any, has your team participated in TRECVID before?
 9) A one-paragraph description of your technical approaches
10) A list of tasks you plan to participate in:
       ED - event detection in surveillance
       FE - high-level feature extraction
       SE - search
       CD - content-based copy detection
11) Participation category:
       Category A: Full participation - Participants will be expected
       to present full details of system algorithms and various
       experiments run using the data, either in a talk or in a
       poster session.
       Category C: Evaluation only - Participants in this category
       will be expected to submit results for common scoring and
       tabulation. They will not be expected to describe their
       systems in detail, but will be expected to provide a general
       description and report on time and effort statistics in a
       notebook paper.

Once you have applied, you will be subscribed to the tv9list email
discussion list, can participate in finalizing the guidelines, and can
sign up to get the data. The tv9list email discussion list will serve
as the main forum for such discussion and for dissemination of other
information about TRECVID 2009. It accepts postings only from the
email addresses used to subscribe to it.

All applications must be submitted by *February 27, 2009* to
Lori.Buckland at nist.gov. Any administrative questions about
conference participation, application format, content, etc. should be
sent to the same address.

If you would like to contribute to TRECVID in one or more of the
following ways, please contact Paul Over directly as soon as possible:

 - agree to host 2009 test video data for download by other
   participants on a fast, password-protected site (Asian and
   European sites especially needed)

Paul Over
Alan Smeaton
Wessel Kraaij

