Tutorials on Benchmarking in Computer Vision


The PCCV Project:
Benchmarking Vision Systems

The PCCV project has organized a series of tutorials and workshops to promote performance characterization in computer vision:

  • People new to performance characterization might like to start with an introduction to the main ideas: 19pp, available as PDF (138 Kbytes), PostScript (2.4 Mbytes)
    ABSTRACT: This document provides a tutorial on performance characterization in computer vision. It explains why learning to characterize the performances of vision techniques is crucial to the discipline's development. It describes the usual procedure for evaluating vision algorithms and its statistical basis. The use of a software tool, a so-called test harness, for performing such evaluations is described. The approach is illustrated on an example technique.
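The evaluation procedure the introduction describes — running an algorithm over test data with known ground truth and summarising the resulting error distribution — can be sketched in a few lines. The following is an illustrative stand-in, not the project's actual test harness; the toy "algorithm" and test cases are hypothetical, chosen only to show the shape of the loop.

```python
import statistics

def harness(algorithm, test_cases):
    """Run `algorithm` over (input, ground_truth) pairs and
    summarise the error distribution."""
    errors = [abs(algorithm(x) - truth) for x, truth in test_cases]
    return {
        "n": len(errors),
        "mean_error": statistics.mean(errors),
        "std_error": statistics.stdev(errors),
        "worst_case": max(errors),
    }

# Toy stand-in for a vision algorithm: estimates a scalar parameter
# with a constant bias of 0.5, so every error in the report is 0.5.
cases = [(v, v) for v in range(10)]
report = harness(lambda x: x + 0.5, cases)
```

A real harness would add dataset handling, per-case logging, and metrics appropriate to the task, but the statistical core — a sample of errors summarised by location, spread, and worst case — is the same.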

  • Tutorial given to the British Machine Vision Conference, 10-13 September 2001: 105pp overheads, available as PDF (330 Kbytes), PostScript (5 Mbytes)
    ABSTRACT: The three hours of tutorial material presented here discuss the use of vision algorithms in the context of a larger computational system. We start by examining the many reasons for algorithmic testing. We draw an analogy with practices in engineering disciplines such as electrical engineering, where components are specified in terms of enough characteristics to be able to design larger systems with them. Several such characteristics are suggested for vision algorithms, based upon quantitative estimates of performance using conventional statistical methods. Examples are given for many algorithms, including simple image processing tasks as well as feature matching and recognition tasks. Software used in the demonstrations is now available from www.tina-vision.net.

  • Tutorial given to the EPSRC Summer School on Computer Vision, 17-21 June 2002, Surrey, UK: 23pp handout, available as PDF (255 Kbytes), PostScript (5 Mbytes); 60pp overheads, available as PDF (233 Kbytes), PostScript (800 Kbytes)
    ABSTRACT: We consider the relationship between the performance characteristics of vision algorithms and algorithm design. In the first part we discuss the issues involved in testing. A description of good practice is given covering test objectives, test data, test metrics and the test protocol. In the second part we discuss aspects of good algorithmic design, including understanding of the statistical properties of data and common algorithmic operations, and suggest how some common problems may be overcome.

    This tutorial also appeared as

    P. Courtney and N. A. Thacker
    Performance Characterisation in Computer Vision: The Role of Statistics in Testing and Design
    In "Imaging and Vision Systems: Theory, Assessment and Applications"
    Jacques Blanc-Talon and Dan Popescu (Eds.)
    NOVA Science Books, 2001
    and the camera-ready copy can be downloaded as PDF (400 Kbytes) and as PostScript (5.1 Mbytes).
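The kind of component characterisation these tutorials advocate — quantifying an estimator's bias and spread on synthetic data with known ground truth — can be illustrated with a toy simulation. The estimator and noise model below are hypothetical, chosen only to show the procedure:

```python
import random
import statistics

random.seed(0)

def estimate_offset(samples):
    # Estimator under test: the sample mean, used as an
    # estimate of a constant signal's offset.
    return sum(samples) / len(samples)

true_offset, sigma, n, trials = 10.0, 1.0, 25, 2000
estimates = []
for _ in range(trials):
    samples = [true_offset + random.gauss(0.0, sigma) for _ in range(n)]
    estimates.append(estimate_offset(samples))

# Characterisation: bias should be near zero, and the spread of the
# estimates should match the theoretical sigma / sqrt(n) = 0.2.
bias = statistics.mean(estimates) - true_offset
spread = statistics.stdev(estimates)
```

Reporting a vision component in these terms (bias, spread, and the conditions under which they hold) is what lets it be treated like a specified engineering part when building larger systems.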

  • Tutorial given to the European Conference on Computer Vision, 27 May-2 June 2002: 46pp overheads, available as PDF (161 Kbytes), PostScript (2 Mbytes)
    ABSTRACT: This two-hour tutorial explains how the use of quantitative statistical methods in the design of vision algorithms leads to the ability to make predictions regarding expected performance, which can then be tested using Monte-Carlo methods. We explain how the use of such a design methodology involves the explicit identification of assumptions, including constraints and error models. We explain the origins of likelihood from first principles and show how quantitative estimates of performance can be obtained using covariance estimation and error propagation. We discuss common error models and assumptions used in many algorithms and the role that these factors play in the success or failure of practical tasks. We illustrate these issues on a 3D stereo object location system.
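The pairing of error propagation with Monte-Carlo testing that the abstract mentions can be sketched minimally: first-order (linear) error propagation predicts the output variance of a function of noisy inputs, and a Monte-Carlo simulation checks that prediction. The function and noise levels below are hypothetical, chosen only to make the comparison concrete.

```python
import math
import random
import statistics

random.seed(1)

# Linear error propagation: for r = f(x, y) with small independent
# input errors sx, sy, var(r) ~= (df/dx)^2 * sx^2 + (df/dy)^2 * sy^2.
def f(x, y):
    return x * math.cos(y)  # e.g. one coordinate of a polar-to-Cartesian map

x0, y0, sx, sy = 2.0, 0.5, 0.01, 0.01
dfdx = math.cos(y0)
dfdy = -x0 * math.sin(y0)
var_pred = (dfdx * sx) ** 2 + (dfdy * sy) ** 2

# Monte-Carlo estimate of the same output variance.
samples = [f(random.gauss(x0, sx), random.gauss(y0, sy))
           for _ in range(20000)]
var_mc = statistics.variance(samples)
```

When the two variances agree, the error model and linearisation assumptions are supported; a large discrepancy flags a violated assumption (e.g. errors too large for the linear approximation), which is exactly the kind of prediction-and-test loop the tutorial describes.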

  • Keynote presentation given to Optoelectronics, Photonics and Imaging 2002, 5-6 September 2002, Ireland: 19pp paper, available as PDF (200 Kbytes), PostScript (900 Kbytes).
    ABSTRACT: This paper describes a design methodology for constructing machine vision systems. Central to this is the use of empirical design techniques and in particular quantitative statistics. The approach views both the construction and evaluation of systems as one and is based upon what could be regarded as a set of self-evident propositions:
    • Vision algorithms must deliver information allowing practical decisions regarding interpretation of an image.
    • Probability is the only self-consistent computational framework for data analysis, and so must form the basis of all algorithmic analysis processes.
    • The most effective and robust algorithms will be those that match most closely the statistical properties of the data.
    • A statistically based algorithm which takes correct account of all available data will yield an optimal result (where optimality is unambiguously defined by the statistical specification of the problem).

    Machine vision research has not emphasised the need for (or the necessary methods of) algorithm characterisation, which is unfortunate, as the subject cannot advance without a sound empirical base. In general this problem can be attributed to one of two factors: a poor understanding of the role of assumptions and statistics, and a lack of appreciation of what is to be done with the generated data. The methodology described here focuses on identifying the statistical characteristics of the data and matching these to the assumptions of the underlying techniques. The methodology has been developed from more than a decade of vision design and testing, which has culminated in the construction of the TINA open-source image analysis/machine vision system.

Last updated on 16-Jan-2004 by Adrian F. Clark.