CALL FOR PARTICIPATION in the
2008 TREC VIDEO RETRIEVAL EVALUATION (TRECVID)

February 2008 - November 2008

Conducted by the National Institute of Standards and Technology (NIST)
With support from NIST and IARPA


I n t r o d u c t i o n:

In 2007 the TREC Video Retrieval Workshop (TRECVID) series
(http://trecvid.nist.gov), which encourages and facilitates research
in analysis of and information retrieval from digital video, began
exploring new data (cultural, news magazine, documentary, and
education programming) and an additional, new task - summarization of
unedited BBC "rushes" video.

In 2008 TRECVID will continue the core feature and search tasks
against larger amounts of the new Sound and Vision data and complete a
second, final round of summarization of BBC rushes. TRECVID 2008 will
also explore two very different video analysis problems: event
detection in airport surveillance data and content-based copy
detection.


T a s k s:


1)  Event detection in airport surveillance video (PILOT)

          Task: - detect predefined events in 100 hours of airport
                  surveillance video
    Evaluation: - automatic comparison to manual annotations
      Measures: - under discussion - see Guidelines

2) High-level feature extraction on Sound and Vision data

          Task: - detect 20 LSCOM features (to be chosen as appropriate
                  to the Sound and Vision data)
    Evaluation: - manual judgments at NIST of 20 features 
      Measures: - estimated precision, recall, inferred average precision

3) Search on Sound and Vision data

          Task: - find shots meeting the need expressed by 24 multimedia
                  topics (50 for automatic systems) created at NIST
                - emphasis on events
    Evaluation: - manual judgments at NIST
      Measures: - estimated precision, recall, inferred average precision
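      As a purely illustrative aside on the ranked-retrieval measures named
      above: inferred average precision estimates, from incomplete judgments,
      the plain average precision sketched below. This is not NIST's scoring
      tool; the function name and the shot IDs in the example are invented.

      ```python
      def average_precision(ranked_shots, relevant):
          """Plain (non-inferred) average precision for one topic.

          ranked_shots: the system's ranked list of shot IDs
          relevant: set of shot IDs judged relevant for the topic
          """
          hits = 0
          precision_sum = 0.0
          for rank, shot in enumerate(ranked_shots, start=1):
              if shot in relevant:
                  hits += 1
                  precision_sum += hits / rank  # precision at this relevant shot
          return precision_sum / len(relevant) if relevant else 0.0

      # Hypothetical topic: 2 of its 3 relevant shots retrieved, at ranks 1 and 3.
      ap = average_precision(["s1", "s7", "s3", "s9"], {"s1", "s3", "s5"})
      ```

      The official measures are computed over the pooled, sampled judgments
      produced at NIST; see the guidelines for the exact definitions.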


4) Summarization of BBC rushes, to be proposed as a workshop at ACM MM '08
   in Vancouver, BC, Canada, 27-31 Oct., with results reported at TRECVID
   2008.

         Task: - automatically create an MPEG-1 summary clip no
                 longer than 2% of the full video that shows the main
                 objects and events in the video to be summarized, using
                 the minimal number of frames and presenting the
                 information in a way that maximizes usability and speed
                 of object/event recognition
 Ground truth: - lists of significant segments identified in terms of a 
                 major object/event created at NIST for each video to
                 be summarized 
   Evaluation: - by DCU with support from the European Commission under
                 contract FP6-027026 (K-Space), using ground truth
                 produced by NIST. A human will view the summary using
                 only the play and pause controls, will check off the
                 objects/events in the ground truth list that appear
                 in the video summary, and will answer questions about
                 summary quality. This evaluation process will be timed.
     Measures: - fraction of ground truth segments found in summary
               - time needed to check summary against ground truth
               - size of summary (# of frames)
               - elapsed system time to create summary
	       - usability/quality of summary

      * although playback at evaluation will be limited as stated
        in the Task above, summaries can contain picture-in-picture,
        split screens, and the results of other techniques for organizing
        the summary that raise questions of usability etc.
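      To make the first three measures concrete, here is a toy scoring
      sketch. It is illustrative only, not the DCU/NIST evaluation code;
      the function name, labels, and frame counts are invented.

      ```python
      def summary_scores(ground_truth, found, num_frames, total_frames):
          """Toy scores for one rushes summary.

          ground_truth: object/event labels listed for the full video
          found: labels the assessor checked off while viewing the summary
          num_frames / total_frames: summary size vs. full-video size
          """
          # fraction of ground truth segments found in the summary
          inclusion = len(set(found) & set(ground_truth)) / len(ground_truth)
          # relative size; the task caps the summary at 2% of the full video
          compression = num_frames / total_frames
          return inclusion, compression

      # Hypothetical video with 4 listed objects/events, 2 found in a
      # 900-frame summary of a 45000-frame video.
      inc, comp = summary_scores(["car", "door", "argument", "phone"],
                                 ["car", "argument"], 900, 45000)
      ```

      The remaining measures (checking time, system time, usability) come
      from timing the assessment process and from the assessor's answers.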

      If the summarization workshop is accepted by ACM MM '08 (we will
      know by 1 March), the schedule would have to be approximately as
      follows:

        1  Mar  development data available for download
        2  Mar  early bird workshop proposal decision
        1  Apr  test data available for download
        5  May  system output submitted to NIST for judging at DCU
        1  Jun  evaluation results distributed to participants
        7  Jul  papers (max 5 pages) due in ACM format
                The organizers will provide an intro paper with information
                about the data, task, groundtruthing, evaluation, measures, etc.
       25  Jul  acceptance notification  
        1  Aug  camera-ready papers due via ACM process
       31  Oct  video summarization workshop at ACM Multimedia '08, 
                 27-31 Oct, Vancouver, Canada


      If the ACM MM workshop proposal is not accepted, we may adjust
      the schedule and will include the summarization task like the 
      others at the TRECVID workshop in November in Gaithersburg.

5) Content-based copy detection (PILOT)

          Task: - given 200 hours of video and a number of queries, most
                  containing a segment of the test collection that has
                  been transformed (along with the rest of the query) in
                  one or more of a number of ways, find any occurrence
                  of the segment in the test collection
    Evaluation: - automatic comparison to automatically produced ground truth
      Measures: - under discussion - see Guidelines


D a t a:

The Netherlands Institute for Sound and Vision
(http://portal.beeldengeluid.nl/) has generously provided 400 hours of
news magazine, science news, news reports, documentaries, educational
programming, and archival video in MPEG-1 for use within TRECVID. We
may have an additional 200 hours of non-commercial news and news
magazine video in time to include it. This is enough for 2 or 3 years
of work. We will use ~200 hours of this data in 2008:

    ~ 100 hours for development of search and feature detection (from 2007)

    ~ 100 hours for test of search and feature detection (new)

    ~ 200 hours (combination of above) for copy detection test

  Additional related data:

    Master shot reference for search/feature development/test 

    Output of automatic speech recognizer (for Dutch) will
    be provided by the University of Twente.

    Output of a machine translation system (Dutch -> English)
    will be provided by Christof Monz of Queen Mary, University
    of London.

The BBC Archive has provided unedited material in MPEG-1 from about
five dramatic series. We will use about 18 hours (40 files) each for
development and test:

    18 hours (40 files) for development (2007 rushes test data)

    18 hours (40 files) for test (new)

  Additional related data

    groundtruth from 2007 summarization task
    summaries submitted in 2007


TRECVID 2008 surveillance video
The UK Home Office Scientific Development Branch has provided
surveillance video for use by TRECVID in 2008. It comprises about 100
hours - the output of 5 cameras covering the same 20-hour period (2
hours per day over 10 days). Details for this data, still under
discussion, are available in the guidelines.



MUSCLE-VCD-2007 data for use in development of copy detection systems
This is the data that was used for the copy detection evaluation at
CIVR 2007.



D i s t r i b u t i o n:


Search / feature / summarization / copy detection data: by download from NIST 
(or mirror servers)

         (VOLUNTEERS NEEDED IN US, EUROPE, ASIA,...TO HOST MIRROR SERVERS)

Airport surveillance data will be distributed on disk drives sent
to NIST by participants. Check the guidelines for final distribution
arrangements.




Much like TREC, TRECVID will provide, in addition to the data, uniform
scoring procedures and a forum for organizations interested in
comparing their approaches and results.

Participants will be encouraged to share resources and intermediate
system outputs to lower entry barriers and enable analysis of various
components' contributions and interactions.

Details about the predecessor TREC video track (2001/2002) and the
latest about TRECVID (2003-2008) can be found at the TRECVID web site:
trecvid.nist.gov. The evaluation is defined by the guidelines. A draft
version is there now and details will be worked out starting in
mid-February based in part on input from the participants.

*You are invited to participate in TRECVID 2008*.

Organizations may choose to participate in one or more of the tasks.
TRECVID participants must submit results for at least one task in
order to attend the TRECVID workshop in Gaithersburg in November. 
Participation in the ACM MM '08 TRECVID Summarization Workshop (if
accepted) will be based on normal ACM MM workshop attendance rules.

*PLEASE* only apply if you are able and fully intend to complete the
work for at least one task. Taking the data but not submitting any
runs threatens the continued operation of the workshop and the
availability of data for the entire community.


P l e a s e   n o t e:
 
1) Dissemination of TRECVID work and results other than in the
(publicly available) conference proceedings is welcomed, but the
conditions of participation specifically preclude any advertising
claims based on TRECVID results.

2) All retrieval results submitted to NIST are published in the
Proceedings and on the public portions of the TRECVID web site archive.

3) The workshop is open only to participating groups that submit
results for at least one task and to selected government personnel
from sponsoring agencies.

4) By applying to participate you indicate your acceptance of the
above restrictions.


T e n t a t i v e   s c h e d u l e

Here is a tentative schedule for the tasks - excluding the
summarization and event detection pilot. This tentative schedule will
be revised and made precise as part of defining the final guidelines.

 1. Feb  NIST sends out Call for Participation in TRECVID 2008
22. Feb  Applications for participation in TRECVID 2008 due at NIST
 1. Mar  Final versions of TRECVID 2007 papers due at NIST
 1. Apr  Guidelines complete
11. Apr  Extended participant decision deadline for event detection task
    Apr  Download of feature/search development data
    Jun  Download of feature/search test data
30. Jun  Copy detection queries available for download
 1. Aug  Copy detection submissions due at NIST
 8. Aug  Search topics available from the TRECVID website
15. Aug  Feature extraction task submissions due at NIST for evaluation
         Feature extraction donations due at NIST 
22. Aug  Feature extraction donations available for active participants 
25. Aug - 12. Sep Feature assessment at NIST
12. Sep  Search task submissions due at NIST for evaluation
19. Sep  Results of feature extraction evaluations returned to participants
22. Sep - 10. Oct Search assessment at NIST
15. Oct  Results of search evaluations returned to participants
20. Oct  Speaker proposals due at NIST
27. Oct  Notebook papers due at NIST
 1. Nov  Copyright forms due back at NIST (see Notebook papers for instructions)
10. Nov  TRECVID 2008 Workshop registration closes
17,18 Nov TRECVID Workshop at NIST in Gaithersburg, MD
15. Dec  Workshop papers publicly available (slides added as they arrive)
 1. Mar 2009 Final versions of TRECVID 2008 papers due at NIST


W o r k s h o p   f o r m a t

The workshop itself, November 17-18 at NIST in Gaithersburg, Maryland,
near Washington, DC, will be used as a forum both for presentation of
results (including failure analyses and system comparisons) and for
more lengthy system presentations describing retrieval techniques
used, experiments run using the data, and other issues of interest to
researchers in information retrieval. As there is a limited amount of
time for these presentations, the evaluation coordinators and NIST
will determine which groups are asked to speak and which groups will
present in a poster session. Groups that are interested in having a
speaking slot during the workshop will be asked to submit a short
abstract before the workshop describing the experiments they
performed. Speakers will be selected based on these abstracts.

As some organizations may not wish to describe their proprietary
algorithms, TRECVID defines two categories of participation. Almost
all groups participate in category A:

 *Category A: Full participation*
 Participants will be expected to present full details of system
 algorithms and various experiments run using the data, either in a talk
 or in a poster session.
 
 *Category C: Evaluation only* Participants in this category will be
 expected to submit results for common scoring and tabulation. They
 will not be expected to describe their systems in detail, but will be
 expected to provide a general description and report on time and
 effort statistics in a notebook paper.


H o w   t o   r e s p o n d   t o   t h i s   c a l l

Organizations wishing to participate in TRECVID 2008 should respond to
this call for participation by submitting an application.
An application consists of an email to Lori.Buckland at nist.gov with
these parts:

 1) Name of the TRECVID 2008 main contact person
 2) Mailing address of the main contact person (no post office box,
    please)
 3) Phone number for the main contact person
 4) Fax number for the main contact person
 5) Complete (unique) team name (if you know you are one of multiple
    groups from one organization, please consult with your colleagues)
 6) Short (unique) team name (20 chars or less) that you will use to
    identify yourself in ALL email to NIST
 7) Optional - names and email addresses of additional team members you
    would like added to the trecvid2008 mailing list
 8) What years, if any, has your team participated in TRECVID before?
 9) A one-paragraph description of your technical approaches
10) A list of tasks you plan to participate in:

       ED     - event detection in surveillance video
       FE     - high-level feature extraction
       SE     - search
       RU/ACM - rushes summarization at ACM MM**
       RU/TV  - rushes summarization at TRECVID (iff the ACM MM
                workshop does not happen)**
       CD     - content-based copy detection

    ** Please indicate one or BOTH rushes workshops depending on
       whether you could participate in October and/or November - even
       though only one will actually take place.

11) Participation category:

    *Category A: Full participation* Participants will be expected to
    present full details of system algorithms and various experiments
    run using the data, either in a talk or in a poster session.

    *Category C: Evaluation only* Participants in this category will be
    expected to submit results for common scoring and tabulation. They
    will not be expected to describe their systems in detail, but will
    be expected to provide a general description and report on time and
    effort statistics in a notebook paper.

Once you have applied, you will be subscribed to the trecvid2008 email
discussion list, can participate in finalizing the guidelines, and can
sign up to get the data. (If you plan to participate only in the event
detection pilot, we will wait to subscribe you until 11 April.) The
trecvid2008 email discussion list will serve as the main forum for such
discussion and for dissemination of other information about TRECVID
2008. It accepts postings only from the email addresses used to
subscribe to it.

All applications must be submitted by *February 22, 2008* to
Lori.Buckland at nist.gov. Any administrative questions about
conference participation, application format, content, etc. should be
sent to the same address.

If you would like to contribute to TRECVID in one or more of the
following ways, please contact Paul Over (email at bottom of page)
directly as soon as possible:

  - agree to host 2008 video data for download by other participants
    on a fast, password-protected site (Asian and European sites
    especially needed)

Paul Over
Alan Smeaton
Wessel Kraaij

Last updated: Thursday, 21-Feb-2008
Date created: Monday, 01-Feb-08