All these results are a team effort with co-workers from the University of Amsterdam, Qualcomm Research, and/or Euvision Technologies.
For the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), the following results were obtained:
| Year | Track | Rank | Score (metric) | Proceedings |
|------|-------|------|----------------|-------------|
| 2015 | Object detection (DET, provided data) | 2nd | 0.54 (Mean Average Precision) | ILSVRC2015 |
| 2015 | Object localization (CLS+LOC) | 3rd | 0.13 (Flat cost) | ILSVRC2015 |
| 2014 | Object detection (DET, provided data) | 3rd | 0.32 (Mean Average Precision) | ILSVRC2014 |
| 2014 | Object detection (DET, external data) | 4th | 0.35 (Mean Average Precision) | ILSVRC2014 |
| 2013 | Object detection (DET) | 1st | 0.23 (Mean Average Precision) | ILSVRC2013 |
| 2012 | Image categorization (CLS) | 5th | 0.29 (Flat cost) | ILSVRC2012 |
| 2011 | Object localization (CLS+LOC) | 1st | 0.43 (Flat cost) | ILSVRC2011 |
| 2011 | Image categorization (CLS) | 2nd | 0.31 (Flat cost) | ILSVRC2011 |
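Note that flat cost is an error measure, so lower is better. Below is a minimal Python sketch of the classification variant, assuming the standard ILSVRC protocol of up to five guesses per image, where an image counts as an error only if none of the guesses matches the ground-truth label (the CLS+LOC track additionally requires sufficient bounding-box overlap). The function name is illustrative, not part of any official toolkit.

```python
# Minimal sketch of the ILSVRC "flat cost" for the classification track.
# Assumption: up to five guesses per image; an image is an error only if
# the ground-truth label is absent from all of them.

def flat_cost(guesses_per_image, ground_truth):
    """guesses_per_image: list of per-image guess lists (at most 5 labels each).
    ground_truth: list of true labels, one per image."""
    errors = sum(1 for guesses, truth in zip(guesses_per_image, ground_truth)
                 if truth not in guesses)
    return errors / len(ground_truth)

# A flat cost of 0.13 thus means the true label was missing from all
# guesses for 13% of the test images.
print(flat_cost([[1, 2, 3], [7, 8, 9]], [2, 4]))  # 0.5
```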
TRECVID is organized by the US National Institute of Standards and Technology (NIST), with participation from leading academic and industrial research labs on all continents. The table below lists the results, the evaluation measure, and references to the full proceedings for the concept detection task (SIN) by the University of Amsterdam MediaMill/Qualcomm Research team:
| Year | Rank | Mean Average Precision | Proceedings |
|------|------|------------------------|-------------|
| 2015 | 1st | 0.36 | to be released |
| 2014 | 1st | 0.33 | TRECVID 2014 |
| 2013 | 1st | 0.32 | TRECVID 2013 |
| 2012 | 2nd | 0.30 | TRECVID 2012 |
| 2011 | 2nd | 0.17 | TRECVID 2011 |
| 2010 | 1st | 0.09 | TRECVID 2010 |
| 2009 | 1st | 0.23 | TRECVID 2009 |
| 2008 | 1st | 0.19 | TRECVID 2008 |
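Mean average precision (mAP) summarizes the quality of a ranked result list per concept, averaged over all concepts. As a rough illustration, here is a minimal, uninterpolated Python sketch; the exact interpolation rules differ per benchmark, and the function names are illustrative.

```python
# Minimal sketch of (mean) average precision for ranked retrieval.
# This is the plain, uninterpolated variant.

def average_precision(ranked_relevance, num_relevant):
    """ranked_relevance: 0/1 relevance flags, best-scoring item first.
    num_relevant: total number of relevant items in the collection."""
    hits = 0
    precision_sum = 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this recall point
    return precision_sum / num_relevant if num_relevant else 0.0

def mean_average_precision(per_concept_runs):
    """per_concept_runs: (ranked_relevance, num_relevant) pairs, one per
    concept; mAP is the unweighted mean of the per-concept APs."""
    aps = [average_precision(r, n) for r, n in per_concept_runs]
    return sum(aps) / len(aps)

# Concept 1: relevant items at ranks 1 and 3 -> AP = (1/1 + 2/3)/2 = 0.833
# Concept 2: relevant item at rank 2         -> AP = (1/2)/1       = 0.5
print(mean_average_precision([([1, 0, 1, 0], 2), ([0, 1], 1)]))  # 0.666...
```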
For the PASCAL Visual Object Classes (VOC) Challenge, the following results were obtained:
| Year | Track | Rank | Mean Average Precision | Proceedings |
|------|-------|------|------------------------|-------------|
| 2012 | Image categorization | 2nd | 0.74 | VOC2012 |
| 2012 | Object detection | 1st | 0.41 | VOC2012 |
| 2011 | Image categorization | 3rd | 0.73 | VOC2011 |
| 2011 | Object detection | 3rd | 0.36 | VOC2011 |
| 2010 | Image categorization | 4th | 0.69 | VOC2010 |
| 2010 | Object detection | 3rd | 0.33 | VOC2010 |
| 2009 | Image categorization | 3rd | 0.62 | VOC2009 |
| 2008 | Image categorization | 1st | 0.54 | VOC2008 |
For the photo annotation task at ImageCLEF (the image track of the Cross-Language Evaluation Forum), the University of Amsterdam team obtained the following results, with evaluation measures and proceedings references:
| Year | Rank | Score (metric) | Proceedings |
|------|------|----------------|-------------|
| 2011 | 4th | 0.43 (Mean Average Precision) | ImageCLEF 2011 |
| 2010 | 2nd | 0.41 (Mean Average Precision) | ImageCLEF 2010 |
| 2009 | 1st | 0.84 (Area Under Curve) | ImageCLEF 2009 |
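The 2009 entry reports the area under the ROC curve (AUC) rather than mean average precision. A minimal Python sketch of AUC via its rank-statistic formulation follows: AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with ties counted as half. The function name is illustrative.

```python
# Minimal sketch of the area under the ROC curve via the rank statistic.

def auc(scores, labels):
    """scores: classifier scores; labels: 1 for positive, 0 for negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0]))  # 0.75
```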