TRECVID Citations

Please use the following guidelines to cite TRECVID-related publications.

When referring to TRECVID 2023 in general, please cite the following paper, which will be posted on the TRECVID website on 1 March 2024, but not before:

    @inproceedings{2023trecvidawad,
    author = {George Awad and Keith Curtis and Asad A. Butt and Jonathan Fiscus
              and Afzal Godil and Yooyoung Lee and Andrew Delgado and Eliot Godard
              and Lukas Diduch and Deepak Gupta and Dina Demner-Fushman and Yvette Graham
              and Georges Quénot},
    title = {TRECVID 2023 - A series of evaluation tracks in video understanding},
    booktitle = {Proceedings of TRECVID 2023},
    keywords = {TRECVid, Video Retrieval, Multimedia Retrieval, Video Understanding, IR Evaluation},
    year = {2023},
    organization = {NIST, USA},
    pdf = {http://www-nlpir.nist.gov/projects/tvpubs/tv23.papers/tv23overview.pdf},
    }
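
For LaTeX users, a minimal sketch of how this entry can be used (the file name references.bib and the plain bibliography style are assumptions; any BibTeX-aware workflow works):

    % Save the entry above in references.bib, then cite it as usual:
    \documentclass{article}
    \begin{document}
    We follow the TRECVID 2023 evaluation protocol~\cite{2023trecvidawad}.
    \bibliographystyle{plain}
    \bibliography{references}
    \end{document}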

When referring to the V3C Vimeo collection, please cite the following publication:

BibTeX
@inproceedings{rossetto2019v3c,
  title={V3C--A Research Video Collection},
  author={Rossetto, Luca and Schuldt, Heiko and Awad, George and Butt, Asad A},
  booktitle={International Conference on Multimedia Modeling},
  pages={349--360},
  year={2019},
  organization={Springer}
}

When using BBC EastEnders videos or image snapshots, remember to add the following wording, as required by the data permission agreement:

"For non-commercial individual research and private study use only. BBC content included
 courtesy of the BBC"

In papers referring to TRECVID's work on deep video understanding movie annotations,
please cite the following publication:

  Erika Loc, Keith Curtis, George Awad, Shahzad Rajput, and Ian Soboroff. 2022. 
  Development of a MultiModal Annotation Framework and Dataset for Deep Video Understanding. 
  In Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind, 
  pages 12–16, Marseille, France. European Language Resources Association.
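
For convenience, the same reference as a BibTeX entry (the citation key is ours; field values are taken from the citation above):

@inproceedings{loc2022development,
  author = {Erika Loc and Keith Curtis and George Awad and Shahzad Rajput and Ian Soboroff},
  title = {Development of a MultiModal Annotation Framework and Dataset for Deep Video Understanding},
  booktitle = {Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind},
  pages = {12--16},
  year = {2022},
  address = {Marseille, France},
  publisher = {European Language Resources Association}
}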

For all work referring to the DVU (Deep Video Understanding) task in general, please cite the following publication:

    Keith Curtis, George Awad, Shahzad Rajput, and Ian Soboroff. 2020. HLVU: A New Challenge 
    to Test Deep Understanding of Movies the Way Humans do. In Proceedings of the 2020 
    International Conference on Multimedia Retrieval (ICMR '20). Association for Computing 
    Machinery, New York, NY, USA, 355–361. https://doi.org/10.1145/3372278.3390742
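
The same reference as a BibTeX entry (citation key ours; field values taken from the citation above):

@inproceedings{curtis2020hlvu,
  author = {Keith Curtis and George Awad and Shahzad Rajput and Ian Soboroff},
  title = {HLVU: A New Challenge to Test Deep Understanding of Movies the Way Humans do},
  booktitle = {Proceedings of the 2020 International Conference on Multimedia Retrieval (ICMR '20)},
  pages = {355--361},
  year = {2020},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  doi = {10.1145/3372278.3390742}
}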
  

In papers referring to TRECVID's work in automatic ad-hoc video search and interactive video retrieval at VBS, please cite the following publication:

Jakub Lokoč, Werner Bailer, Klaus Schoeffmann, Bernd Muenzer, and George Awad. 2018.
On Influential Trends in Interactive Video Retrieval: Video Browser Showdown 2015-2017.
IEEE Transactions on Multimedia (TMM), IEEE, New York, NY, 16 pages.
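
A BibTeX rendering of the above (citation key ours; volume and page range are omitted because the citation text does not give them):

@article{lokoc2018influential,
  author = {Jakub Lokoč and Werner Bailer and Klaus Schoeffmann and Bernd Muenzer and George Awad},
  title = {On Influential Trends in Interactive Video Retrieval: Video Browser Showdown 2015-2017},
  journal = {IEEE Transactions on Multimedia (TMM)},
  year = {2018},
  publisher = {IEEE},
  address = {New York, NY}
}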

When acknowledging the common annotation efforts coordinated by Georges Quénot and team, please cite the following publication:

    Stéphane Ayache and Georges Quénot, "Video Corpus Annotation using
    Active Learning", 30th European Conference on Information Retrieval
    (ECIR'08), Glasgow, Scotland, 30th March - 3rd April, 2008
    URL: http://mrim.imag.fr/georges.quenot/articles/ecir08.pdf
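
The same reference as a BibTeX entry (citation key ours; fields taken from the citation above):

@inproceedings{ayache2008video,
  author = {Stéphane Ayache and Georges Quénot},
  title = {Video Corpus Annotation using Active Learning},
  booktitle = {30th European Conference on Information Retrieval (ECIR'08)},
  address = {Glasgow, Scotland},
  year = {2008},
  url = {http://mrim.imag.fr/georges.quenot/articles/ecir08.pdf}
}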

When acknowledging the ASR output provided by LIMSI, please cite the following:

 J.-L. Gauvain. The Quaero Program: Multilingual and Multimedia
 Technologies. IWSLT 2010, Paris, Dec. 2010.

 L. Lamel. Multilingual Speech Processing Activities in Quaero:
 Application to Multimedia Search in Unstructured Data. The Fifth
 International Conference Human Language Technologies - The Baltic
 Perspective, Tartu, Estonia, October 4-5, 2012.
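
BibTeX renderings of the two references above (citation keys ours; fields taken from the citation text):

@inproceedings{gauvain2010quaero,
  author = {J.-L. Gauvain},
  title = {The Quaero Program: Multilingual and Multimedia Technologies},
  booktitle = {IWSLT 2010},
  address = {Paris, France},
  year = {2010}
}

@inproceedings{lamel2012multilingual,
  author = {L. Lamel},
  title = {Multilingual Speech Processing Activities in Quaero:
           Application to Multimedia Search in Unstructured Data},
  booktitle = {The Fifth International Conference Human Language Technologies -
               The Baltic Perspective},
  address = {Tartu, Estonia},
  year = {2012}
}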

In papers referring to TRECVID's work in Instance Search, please cite the following publication:

@article{awad2017instance,
  title={Instance search retrospective with focus on TRECVID},
  author={Awad, George and Kraaij, Wessel and Over, Paul and Satoh, Shin’ichi},
  journal={International Journal of Multimedia Information Retrieval},
  volume={6},
  number={1},
  pages={1--29},
  year={2017},
  publisher={Springer}
}

When referring to the TREC Video Retrieval Evaluation's (TRECVID's) general goals, guidelines, general results, etc., please cite the following publication:

BibTeX
    @inproceedings{1178722,
    author = {Alan F. Smeaton and Paul Over and Wessel Kraaij},
    title = {Evaluation campaigns and TRECVid},
    booktitle = {{MIR} '06: {P}roceedings of the 8th {ACM} {I}nternational {W}orkshop on
                 {M}ultimedia {I}nformation {R}etrieval},
    year = {2006},
    isbn = {1-59593-495-2},
    pages = {321--330},
    location = {Santa Barbara, California, USA},
    doi = {10.1145/1178677.1178722},
    publisher = {ACM Press},
    address = {New York, NY, USA},
    }
ACM Ref
    Smeaton, A. F., Over, P., and Kraaij, W. 2006. Evaluation campaigns
    and TRECVid. In Proceedings of the 8th ACM International Workshop on
    Multimedia Information Retrieval (Santa Barbara, California, USA,
    October 26 - 27, 2006). MIR '06. ACM Press, New York, NY,
    321-330. DOI= http://doi.acm.org/10.1145/1178677.1178722

In papers referring to TRECVID's work in high-level feature extraction, please cite the following publication:

BibTeX
    @incollection{trecvid.features,
    author = {Alan F. Smeaton and Paul Over and Wessel Kraaij},
    title = {High-{L}evel {F}eature {D}etection from {V}ideo in {TRECV}id:
    a 5-{Y}ear {R}etrospective of {A}chievements},
    booktitle = {Multimedia Content Analysis, Theory and Applications},
    pages = {151--174},
    editor = {Ajay Divakaran},
    year = {2009},
    isbn = {978-0-387-76567-9},
    publisher = {Springer Verlag},
    address = {Berlin}
    }

When referring to TRECVID's Content-Based Copy Detection task and associated research, please cite the following publication:

BibTeX
@article{Awad:2014:CVC:2647579.2629531,
 author = {Awad, George and Over, Paul and Kraaij, Wessel},
 title = {Content-Based Video Copy Detection Benchmarking at TRECVID},
 journal = {ACM Trans. Inf. Syst.},
 issue_date = {June 2014},
 volume = {32},
 number = {3},
 month = jul,
 year = {2014},
 issn = {1046-8188},
 pages = {14:1--14:40},
 articleno = {14},
 numpages = {40},
 url = {http://doi.acm.org/10.1145/2629531},
 doi = {10.1145/2629531},
 acmid = {2629531},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {TRECVID, Video copy detection, evaluation, multimedia},
}

When referring to the HAVIC collection, please cite the following publication:

BibTeX
@InProceedings{STRASSEL12.885,
  author = {Stephanie Strassel and Amanda Morris and Jonathan Fiscus and Christopher Caruso
            and Haejoong Lee and Paul Over and James Fiumara and Barbara Shaw and Brian
            Antonishek and Martial Michel},
  title = {Creating HAVIC: Heterogeneous Audio Visual Internet Collection},
  booktitle = {Proceedings of the Eighth International Conference on
               Language Resources and Evaluation (LREC'12)},
  year = {2012},
  month = {may},
  date = {23-25},
  address = {Istanbul, Turkey},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck
            and Mehmet Uğur Doğan and Bente Maegaard and Joseph Mariani and Jan Odijk and
            Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {978-2-9517408-7-7},
  language = {english}
 }

Digital Video Retrieval at NIST

[Site banner images: news magazine, science news, news reports, documentaries,
educational programming, and archival video; TV episodes; airport security cameras
& activity detection; video collections from News, Sound & Vision, Internet Archive,
social media, and BBC EastEnders]