Questions and Answers Concerning the Methodology for Designing Computer Vision Systems


Pilot European Image Processing Archive
The PCCV Project: Benchmarking Vision Systems


Here are some comments and questions concerning the methodology document that have accumulated following tutorials and discussions over the last two years.

  • Chris Taylor: Don't you think that your methodology is too prescriptive and might stifle originality in research?

    We hope not. We are not saying anything about how people design their algorithms, only about how to interpret them from a statistical viewpoint and what is needed to make them useful to others. Hopefully this will simply mean that whatever ideas researchers have will be better developed. We would like to eliminate the cycle of people believing they have invented yet another substitute for statistics and probability, only to find later that they have not. Understanding conventional approaches to begin with should help researchers to identify genuine novelty.

  • Chris Taylor and Tim Cootes: Wouldn't this tutorial document be easier to understand if you included examples?

    Yes, it would. But there are many possible design paths for statistical testing, and we would need many examples to demonstrate them all; the document would become very long and people probably would not find time to read it. We have cited relevant papers for all aspects of the methodology and intend to produce two detailed case studies: one on the use of "mutual information" for medical image coregistration, and the other on locating 3D objects using image features. Both take the approach of explicitly relating the theoretical construction to likelihood and estimating covariances, followed by quantitative testing. These look like they will be sizeable documents in themselves.
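As a flavour of the coregistration study mentioned above, here is a minimal sketch (not the project's actual implementation) of the mutual-information score that such work maximises over candidate transforms, estimated from a joint grey-level histogram; the image sizes and bin count are illustrative assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate the mutual information (in nats) between two equally
    sized images from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability
    px = pxy.sum(axis=1)               # marginal of image a
    py = pxy.sum(axis=0)               # marginal of image b
    nz = pxy > 0                       # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

# A registration algorithm would maximise this score over transforms;
# here we merely check it behaves sensibly on synthetic data.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
same = mutual_information(img, img)  # identical images: high score
unrelated = mutual_information(
    img, rng.permutation(img.ravel()).reshape(64, 64))  # scrambled copy
```

An image scores higher against itself than against a scrambled copy, which is exactly the property a coregistration search exploits.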

  • Adrian Clark: Is it right to say that all algorithms can be related to the limited number of theoretical statistical methods that you suggest? This wouldn't seem to be obvious from the literature.

    As far as we can see they can; it is just that people don't do it. You can take any algorithm that has a defined optimisation measure and relate its computational form to likelihood, and you can take any algorithm that applies a threshold and relate it to hypothesis tests or Bayes theory. In the process you will uncover the assumptions necessary to achieve this, and you are then faced with a harsh choice: do you accept that these are the assumptions you are making, or have you just invented a new form of data analysis? We have only ever seen one outcome to this.
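A hedged illustration of the first half of this point: under an assumed model of independent Gaussian noise with known sigma, minimising a sum-of-squares optimisation measure and maximising the Gaussian log-likelihood pick out the same parameters. The line-fitting problem, noise level, and grid search below are purely illustrative.

```python
import numpy as np

# Fit a line y = m*x + c two ways: (i) ordinary least squares and
# (ii) direct maximisation of the Gaussian log-likelihood.  Under
# independent Gaussian noise the two optimisation measures coincide.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, size=x.size)

# (i) least squares via the normal equations
A = np.column_stack([x, np.ones_like(x)])
m_ls, c_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# (ii) Gaussian log-likelihood of the residuals, maximised here by a
# crude grid search purely for illustration
def log_likelihood(m, c, sigma=0.1):
    r = y - (m * x + c)
    return -0.5 * np.sum(r ** 2) / sigma ** 2 - x.size * np.log(sigma)

grid = [(m, c) for m in np.linspace(1.5, 2.5, 101)
               for c in np.linspace(0.0, 1.0, 101)]
m_ml, c_ml = max(grid, key=lambda p: log_likelihood(*p))
# The two estimates agree to within the grid resolution.
```

Making the noise model explicit is precisely where the "harsh choice" appears: change the assumed noise distribution and the equivalent optimisation measure changes with it.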

  • Tim Cootes: You seem to have dismissed Bayesian approaches rather abruptly without much justification.

    There seems to be an attitude among those working in our area that, provided a paper has the word Bayes in the title, it is beyond reproach. We thought it was important to explain that our methodology does not follow from Bayesian methods. The use of likelihood and hypothesis tests for quantitative analysis of data is a proven, self-standing theoretical framework. Allowing people to think that using Bayes fixes everything might let those holding this view dismiss the methodology. The reasons why we say it is difficult to make Bayesian approaches quantitative are explained in detail, with worked examples, in a paper in the references, but we really did not want to get into all of that here.
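To make the hypothesis-test side of that framework concrete (and to echo the earlier point that any algorithm applying a threshold can be read as a hypothesis test), here is a small sketch under an assumed zero-mean Gaussian noise-only null hypothesis; the edge-detection setting and known sigma are illustrative assumptions, not part of the methodology document itself.

```python
import math

# A grey-level threshold read as a hypothesis test: declare a pixel an
# "edge" when its gradient magnitude exceeds t, i.e. when that value is
# unlikely under the noise-only null hypothesis.  Assuming zero-mean
# Gaussian noise with known sigma, the threshold t corresponds to a
# significance level alpha = P(G > t | no edge).

def significance_of_threshold(t, sigma):
    """One-sided tail probability of a zero-mean Gaussian beyond t."""
    return 0.5 * math.erfc(t / (sigma * math.sqrt(2.0)))

# The familiar "3-sigma" threshold is a false-alarm rate of about 0.13%.
alpha = significance_of_threshold(3.0, 1.0)
```

Reading a threshold this way forces the assumptions (noise distribution, known sigma) into the open, which is the point being made above.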

  • E. Roy Davies: You do not address the creative aspect of algorithm design at all in your methodology.

    That is correct. Researchers will still need to define the problem they wish to solve and decide how they may wish to extract salient information from images in order to solve the task. This methodology just provides guidelines for how to identify and test the assumptions being made in a specific approach.

  • Paul Whelan: The methodology doesn't address any of the broader aspects of vision system construction, such as hardware, lighting and mechanical handling. All of these are important in practical systems.

    Yes. We have concerned ourselves only with the academic problem of designing computational modules that extract useful information from images. Clearly a methodology document could also be written covering the practical aspects of physical systems; we hope someone takes the time to do this. We would suggest, however, that what we describe here should be the precursor to the process of hardware or system design: there is little point buying (or building) acceleration hardware only to find that it cannot perform the computations needed to solve the processing task. In the meantime we will be more careful to use "machine vision" and "computer vision" at the relevant places in the document to reflect your comment.

  • Henrik Christensen: Do you feel that academics will have any interest in this methodology? Isn't it really intended mainly for industrialists?

    Although the primary purpose of the methodology is to explain what is necessary to make vision systems that work, which may be seen as a leaning toward the practical side, the main thrust of the paper is a theoretical analysis of the approach we should take to our research, so we believe academics should be interested. An understanding of the ideas involved could improve the academic quality and value of work in the area. The current mantra of novelty and mathematical sophistication will, as far as we can see, never make consistent progress. The task of building computer vision systems is vast and will require that we can build on each other's work.

  • Jim Crowley: Isn't this just a statistics tutorial which could have been found in any statistical text book?

    Vision research is a forcing domain for statistics: many of the ideas presented here are not found easily in the statistical literature, and some of them were invented by us to solve specific problems. Also, you will not find a case for the quantitative use of statistics in vision research in any textbook. Most sciences already accept that this is the correct way to do research; only computer vision seems to take the attitude that it is acceptable to ignore established analysis techniques. This was another reason for writing the document.

  • Alberto Broggi: Could this methodology be automated?

    You may be able to proceduralise some of the tests and data transformations, but the problems still need to be formulated and structured to begin with. Also, we are not suggesting that this methodology is complete; far from it, it cannot be. There will be problems with data that we have not yet encountered and that will need to be addressed. However, we believe the appropriate route to developing solutions is quantitative statistics and testing. Our methodology cannot eliminate the role of the vision researcher, and we would not wish it to.


Last updated on 16-Oct-2003 by Adrian F. Clark.