Subject: [tv19.list] Welcome to the VTT Task - TRECVID 2019
From: "'Butt, Asad (IntlAssoc)' via tv19.list" <[email protected]>
Date: 4/5/19, 3:10 PM
To: 'George Awad' via tv19.list <[email protected]>
Reply-to: "Butt, Asad (IntlAssoc)"

Dear VTT Participants,

 

We are very glad that you have decided to participate in this task. This task addresses the problem of automatically generating text descriptions for short videos. Video description (or captioning) is rapidly gaining popularity in the computer vision and multimedia research communities.

 

In this email, we will provide a quick start guide for the task.

For detailed instructions, please read the guidelines provided at: https://www-nlpir.nist.gov/projects/tv2019/vtt.html  (Strongly Recommended)

 

Subtasks:

The VTT task is divided into two subtasks:

  1. Description Generation Subtask (Core)
  2. Matching and Ranking Subtask (Optional)

 

Training Data:

The testing data from previous years can be used to train systems and is available for 2016, 2017, and 2018.

You can also use other image and video captioning datasets to train your systems for the VTT task.

 

Testing Data:

The test dataset will be available to participants on July 15. It will consist of two parts:

  1. Twitter Vine video links will be provided, and participants will download the videos directly from Vine.
  2. Creative Commons licensed videos (TFCC) will be available for download from NIST servers. You must sign and return this agreement to access the data. Please send the signed form to [email protected] as soon as possible to ensure that you can download the test data as it becomes available.

 

The test data will consist of approximately 2000 videos in total. Regardless of the source, the dataset must be treated as a whole, and each run file must be the output of a single system run over the entire dataset.

 

Run Submissions:

Please refer to the following link for the run submission formats:

https://www-nlpir.nist.gov/projects/tv2019/vtt.html#Runs

 

Evaluation:

The scoring is done as follows:

  1. For the Matching and Ranking Subtask, scoring uses the mean inverted rank metric (a minimal illustrative sketch follows this list).
  2. For the Description Generation Subtask, the automatic metrics are METEOR, BLEU, and CIDEr. Direct Assessment (DA), in which crowd workers score the descriptions, may also be used to score each team's primary runs.
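
For reference, here is a minimal, unofficial Python sketch of how mean inverted rank could be computed. The data layout (a dict mapping each video ID to its ranked caption IDs, plus one ground-truth caption ID per video) is an assumption made only for illustration; it is not the official run format or the NIST evaluation code.

    # Unofficial illustration only -- not the NIST evaluation code.
    # Assumes each test video has exactly one correct caption, and a run
    # supplies a ranked list of caption IDs for every video.

    def mean_inverted_rank(ranked_lists, ground_truth):
        """ranked_lists: {video_id: [caption_id, ...]} in ranked order.
        ground_truth:  {video_id: correct_caption_id}.
        Returns the mean of 1/rank over all videos (0 if not retrieved)."""
        total = 0.0
        for video_id, correct in ground_truth.items():
            ranking = ranked_lists.get(video_id, [])
            if correct in ranking:
                total += 1.0 / (ranking.index(correct) + 1)  # ranks are 1-based
        return total / len(ground_truth)

    # Example: two videos, with the correct caption ranked 1st and 3rd
    # respectively -> (1/1 + 1/3) / 2 = 0.667
    print(mean_inverted_rank(
        {"v1": ["c7", "c2"], "v2": ["c5", "c9", "c3"]},
        {"v1": "c7", "v2": "c3"},
    ))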

 

The evaluation packages used are available at: https://www-nlpir.nist.gov/projects/trecvid/trecvid.tools/video.to.text/
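
The packages at the link above are the reference implementations and should be used when reporting comparable scores. Purely as an illustration of what an n-gram overlap metric such as BLEU measures, the following unofficial Python snippet scores two made-up system captions against made-up references using NLTK; it is not the official tooling.

    # Unofficial illustration of a corpus-level BLEU score with NLTK -- use
    # the official evaluation packages linked above for reported results.
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    # One hypothesis (system caption) per video, each with one or more references.
    references = [
        [["a", "man", "rides", "a", "bicycle", "down", "a", "hill"]],
        [["two", "dogs", "play", "in", "the", "snow"]],
    ]
    hypotheses = [
        ["a", "man", "is", "riding", "a", "bike", "down", "a", "hill"],
        ["dogs", "playing", "in", "snow"],
    ]

    smoothing = SmoothingFunction().method1  # avoid zero scores on short captions
    print(corpus_bleu(references, hypotheses, smoothing_function=smoothing))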


Important Dates:

  1. July 15: Test dataset will be made available.
  2. Aug 15: Submissions are due at NIST.
  3. Aug 30: Evaluation results returned.
  4. Oct 1: Workshop speaker proposals due at NIST.
  5. Oct 29: Workshop notebook papers due.
  6. Nov 6: TRECVID 2019 Workshop registration closes.
  7. Nov 12-13: TRECVID Workshop at NIST in Gaithersburg, MD, USA.

 

A detailed schedule is available at: https://www-nlpir.nist.gov/projects/tv2019/schedule.html

 

Please don't hesitate to send any questions, feedback, suggestions, or concerns to the task coordinators or to the general tv19.list mailing list.

 

Best of Luck!

 

The TRECVID Team

 

--
Visit this group at https://groups.google.com/a/list.nist.gov/d/forum/tv19.list
View this message at https://groups.google.com/a/list.nist.gov/d/msg/tv19.list/topic-id/message-id
---
You received this message because you are subscribed to the Google Groups "tv19.list" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tv19.list+[email protected].