Feedback on TRECVID 2003 and questions/suggestions for TRECVID 2004 and beyond

o Shift to analysis of natural video (surveillance, amateur movies, meetings, etc.)?
  -> Interested parties need to develop proposals during 2003/4 for presentation at TRECVID 2004; need to complete the 2-year cycle in 2004 using CNN/ABC news data. Plans are open after 2004. VACE is already investigating some sorts of natural video.

o Shot boundary determination
  - Replace/supplement with low-level event detection (flash, camera motion, content-free structural artifacts of the video production process)?
  - Increase variety of news data sources?
  - Report complexity/time of approaches?
  -> The task will be rerun as in 2003 to complete the 2-year cycle using the same data sources; no others are available in time. Participants can experiment with ways of reporting complexity/time and make proposals at the workshop.

o Story segmentation/classification
  - Look into evaluating some TDT runs for the sources used in TRECVID? (Remember that TDT runs are optimized for the TDT cost function; IBM included a TDT run, so maybe this is enough of a comparison.)
  -> Not needed. IBM submitted a run that used their TDT system output.
  - Continue the story detection/classification task or not? [5 groups want to continue] The classification task as measured may be too easy and therefore should be dropped.
  -> Yes, classification will be dropped.
  - Provide precision/recall curves (based on a confidence measure for each boundary)? Consider whether it is worth the effort if participation is going to be small.
  -> Waiting to see participation.

o High-level feature extraction
  - Add more dynamic event features?
  -> This has been done.
  - Do we really need so many runs?
  -> Consensus is yes. May reduce the number depending on lateness of data.
  - Increase the size/depth of judged pools?
  -> Yes, to the extent possible.
  - Problems with too-frequent features?
  -> NIST is looking into the consequences of this...
  - Even out types of features?
  -> This has been done.
  - Choose a subset of the annotated features for evaluation?
  -> Yes, a subset of the annotated features, or features based on earlier features or topics.
  - Let groups create some of the topics? (Little enthusiasm for this.)
  -> No; cons outweigh pros.
  - Attach training type to the feature result set, not the run?
  -> NIST will consider and decide before submissions are due.

o Search
  - No new development data? (Focus effort on use, not creation, of training data.)
  -> Right.
  - Search for stories, not (just) shots?
  -> For continuity, continue to use the master shot reference, but support participant experiments in additional scoring not requiring predefined shots. Use of stories is more problematic in agreeing on truth.
  - Judge more of the pooled submissions?
  -> Making adjustments to increase judging.
  - Incorporate requirements for camera motion in topics?
  -> Topics may include an example or two.
  - Restrict human involvement in manual runs (e.g., no enhancement of the text)?
  -> Same rules as in 2003 - still at the stage of looking for any way to get reasonable results.
  - More non-text examples?
  -> The current number seems reasonable, assuming a human has found them.
  - Agree on some simple characterization of the role/relationship of each example?
  -> Research issue... no changes for 2004.
  - Visual-only condition for search?
  -> Possible within the current guidelines; reluctant to require a visual-only run because various groups may want to test other hypotheses (text+visual+audio vs. text-only, last year's system versus this year's, etc.), and the number of runs is limited as they are expensive to run and evaluate.
  - Provide output of an NE tagger on the ASR?
  -> BBN is doing this on the ASR for the 2003 data.
  - Who used the suggested interactive design to block for main effects of topic and searcher?
  -> Good question - maybe require some mention/evaluation in papers/presentations for 2004.
  - Allow (in the DTD) an empty result set for interactive searches?
  -> Yes.

o Add a filtering task?
  - Complicated information need, known in advance (incorporate in the feature task?)

o Send out a set of questions about what worked (results to be included in the expanded overview)? Wait and see what information is missing from the papers, then ask as needed.
  -> No, but consider seriously requiring a section in papers/presentations that answers a standard set of questions.

o Final papers (no length limit) must be in by 1 March 2004.
  -> Done.

o No printed proceedings.
  -> Done.

o Look into attaching TRECVID to ACM MM 2004 in NYC in mid-October 2004; application due by end of January. Shortens development/assessment/analysis time.
  -> Done; consensus was for a later date.

o Some groups did not submit workshop papers. What to do?
  -> Situation improved by March 1.
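The story-segmentation item above asks for precision/recall curves based on a confidence measure for each boundary. A minimal sketch of how such a curve could be computed - the data layout, tolerance window, and function name are assumptions for illustration, not the official TRECVID scoring code:

```python
def pr_curve(detections, reference, tolerance=5.0):
    """Compute precision/recall points from confidence-scored boundaries.

    detections: list of (time_in_seconds, confidence) pairs (hypothetical layout)
    reference:  list of true boundary times in seconds
    A detection matches at most one unused reference boundary lying
    within `tolerance` seconds of it.
    """
    # Rank detections by decreasing confidence, then sweep down the list,
    # emitting one (precision, recall) point per cutoff.
    ranked = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = set()  # indices of reference boundaries already claimed
    tp = 0
    curve = []
    for i, (t, _conf) in enumerate(ranked, start=1):
        for j, ref in enumerate(reference):
            if j not in matched and abs(t - ref) <= tolerance:
                matched.add(j)
                tp += 1
                break
        curve.append((tp / i, tp / len(reference)))  # (precision, recall)
    return curve
```

Sweeping the confidence cutoff this way yields the full trade-off curve from one submitted run, rather than the single precision/recall operating point that a fixed-threshold submission provides.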