The video-to-text (VTT) task testing dataset consists of the following folders:

1- description.generation.subtask:
   - testing.URLs.video.description.subtask: a text file with 2 columns. The
     first column is the ID of the URL, and the second column is the URL itself.
   - You will have to download all URLs to work on the task and submit
     results using the URL IDs.

2- matching.ranking.subtask:
   - tv18.vtt.testing.URLs
   - tv18.vtt.descriptions.A
   - tv18.vtt.descriptions.B
   - tv18.vtt.descriptions.C
   - tv18.vtt.descriptions.D
   - tv18.vtt.descriptions.E

In each of the above files, the first column is the ID of a Vine URL or of a
text description. The task is to match the URLs to their descriptions in each
of the description files.

***You only need to download the URLs in the Description Generation
Subtask***, since the two lists are identical.

A list of removed video IDs is available in the file tv18.removed.video.list.
Results for these videos need not be reported, since the corresponding URLs
are no longer available.

To submit results for the subtask of "Matching and Ranking":
-----------------------------------------------------------
For each testing subset, return for each video URL a ranked list of the text
descriptions most likely to correspond to (i.e., to have been annotated for)
that video, from each of the sets A, B, C, etc.

Please use the following format in your submission files:

rank URL_ID TextDescription_ID

where:

rank - An integer representing the likelihood that the textual description
identified by TextDescription_ID (taken from tv18.vtt.descriptions.A, B,
etc.) describes the video URL. The lower the rank number, the higher the
confidence. Please be careful to use the correct corresponding descriptions
for each testing subset, independently of the other subsets.
URL_ID - The URL ID taken from the first column of the file
tv18.vtt.testing.URLs (for each testing subset).

TextDescription_ID - The textual description ID taken from the first column
of the files tv18.vtt.descriptions.A, B, etc.

Please submit separate run files for each of the textual description sets
A, B, C, etc.

Example of a snippet from a run file:

1 1 367
2 1 78
3 1 1289
.
.
1 2 278
2 2 902
.
.
1915 1915 10

To submit results for the subtask of "Description Generation":
-------------------------------------------------------------
Automatically generate a text description (1 sentence) for each video URL,
independently and without taking into consideration any of the descriptions
in the Matching and Ranking subtask.

Please use the following format in your submission files:

URL_ID TextDescription

where:

URL_ID - The URL ID taken from the first column of the file
testing.URLs.video.description.subtask.

TextDescription - The system-generated 1-sentence text description.

Example of a snippet from a run file:

10 a man and a woman riding in a car at night

Notes:

Run types: Systems are required to choose between two run types based on the
type of training data they used:

Run type 'V': Training using Vine videos (either TRECVID-provided or
non-TRECVID Vine data).
Run type 'N': Training using only non-Vine videos.

- Each run file has to start with a first line declaring the run type, as in
  the example below for V runs:

runType=V

- For the Description Generation subtask, please identify a single submission
  run as your team's primary run out of the 4 allowed runs.
- Systems are allowed to submit up to 4 runs for each description set (A, B,
  etc.) in the Matching and Ranking subtask and 4 runs in the Description
  Generation subtask.
- Please use the strings ".A.", ".B.", etc. as part of your run file names to
  differentiate between run files for the different description sets in the
  Matching and Ranking subtask.
- A run should include results for all the testing video URLs (no missing
  video URL_ID will be allowed, other than the removed videos).
- No duplicate result pairs of URL_ID and TextDescription_ID are allowed
  (please submit only 1 unique set of ranks per URL_ID).
- All automatic text descriptions should be in English.
- Please validate your run submissions for errors using the provided
  submission checkers.
- Submissions will be transmitted to NIST via a password-protected webpage.
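The official submission checkers are the authoritative validators. As an
illustration only, the sketch below shows the kinds of checks a Matching and
Ranking run file should pass: a runType header line, three-integer result
lines, no duplicate (URL_ID, TextDescription_ID) pairs, and coverage of every
non-removed URL_ID. The function name and the in-memory ID sets are
hypothetical, not part of the official tooling.

```python
def check_matching_run(lines, valid_url_ids, valid_desc_ids):
    """Sketch of a validator for 'rank URL_ID TextDescription_ID' run files.

    `lines` is the run file split into lines; `valid_url_ids` and
    `valid_desc_ids` would be loaded from tv18.vtt.testing.URLs and
    tv18.vtt.descriptions.* (with removed videos already excluded).
    Returns a list of error messages (empty means the file passed).
    """
    errors = []

    # The first line must declare the run type (V or N).
    if not lines or lines[0].strip() not in ("runType=V", "runType=N"):
        errors.append("first line must declare runType=V or runType=N")
        body = lines
    else:
        body = lines[1:]

    seen_pairs = set()
    urls_covered = set()
    for lineno, line in enumerate(body, start=2):
        parts = line.split()
        if len(parts) != 3 or not all(p.isdigit() for p in parts):
            errors.append(f"line {lineno}: expected 3 integers, got {line!r}")
            continue
        rank, url_id, desc_id = map(int, parts)
        if url_id not in valid_url_ids:
            errors.append(f"line {lineno}: unknown URL_ID {url_id}")
        if desc_id not in valid_desc_ids:
            errors.append(f"line {lineno}: unknown TextDescription_ID {desc_id}")
        # No duplicate (URL_ID, TextDescription_ID) pairs are allowed.
        pair = (url_id, desc_id)
        if pair in seen_pairs:
            errors.append(f"line {lineno}: duplicate pair {pair}")
        seen_pairs.add(pair)
        urls_covered.add(url_id)

    # Every non-removed testing URL must appear in the run.
    missing = valid_url_ids - urls_covered
    if missing:
        errors.append(f"missing results for URL_IDs: {sorted(missing)}")
    return errors


run_lines = ["runType=V", "1 1 367", "2 1 78", "1 2 278"]
print(check_matching_run(run_lines,
                         valid_url_ids={1, 2},
                         valid_desc_ids={78, 278, 367}))  # -> []
```

A checker for the Description Generation subtask would be analogous, except
that each body line is "URL_ID TextDescription" with exactly one description
per URL_ID.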