Vision Online: Electronic Publishing for Computer Vision



Adrian F. Clark, Dept. ESE, University of Essex, Colchester, CO4 3SQ, UK
and
Patrick Courtney, ITMI-Aptor, 61 chemin de Vieux Chêne, 38240 Meylan Cedex, FRANCE

July 1995

Introduction

Pattern recognition, computer vision, and related research areas are among the most rapidly developing fields of research. And yet, despite the ubiquity of computer technology in the research work, almost all formal professional communication takes place via the printed page --- an astonishing state of affairs when one considers it dispassionately. This essay explores the possibilities of, and the potential difficulties posed by, online publication of research results in vision and related fields. It is principally intended to promote discussion of this topic, something that the authors believe is long overdue. However, the reader may care to note that much of this discussion is quite general and may be applied to any research field.

We shall proceed by examining the purposes of communicating the fruits of research, and then consider existing publication mechanisms and their drawbacks. Online publication will then be considered and its advantages and disadvantages explored. Some potential publication mechanisms are discussed, including ways for creating a complementary relationship with conventional publication routes. Finally, some outstanding issues are outlined.

The Reasons for Professional Communication

Before discussing the reasons for publishing, it is perhaps appropriate to state the reasons for doing research at all. The authors are of the opinion that any type of research is driven by the desire to extend the corpus of human knowledge. Starting from this point, it follows that professional communication is intended principally to inform other researchers of one's results, in the spirit of openness and free exchange of ideas that characterizes scientific enquiry. It thereby avoids unnecessary duplication of work and archives those results for future generations of researchers and developers. In other words, publication helps to make research efficient by allowing science to be built upon previous science. With the pressure on us to produce research results ever increasing --- and the funding available to do so generally decreasing --- it is vital that this notion of efficient communication of results continues and is developed.

Further reasons for publication have arisen over the years, largely for validation of research work. In many countries, the numbers and types of publication are taken as indicators of research activity and used as ``objective'' measures in the assessment of individuals, departments, and institutions. (Many people question the objectivity of such measures since they take no account of significance; nor are publications the only measure of one's research activities. But this is not the place for such a discussion.) There is also the wish to receive recognition for one's own work, and to advance the reputation of one's institution, both of which are commendable and perfectly natural.

Existing Publication Routes

Let us explore this idea of extending human knowledge further by considering the publication routes currently available to us. Journals exist principally to disseminate research methodology and results to other workers active in the research area. Books typically provide overviews of the research field in a more tutorial fashion than do papers: their coverage is normally wider but less detailed. (There are exceptions to this, of course: many monographs summarize research work in book form. Nevertheless, the general argument stands.) Conversely, conferences are generally used for exploring ideas and presenting preliminary results, their proceedings being principally a record of people's opinions before the event. They are usually regarded as being more ephemeral than journal or book publications, though the increasing tendency to publish conference proceedings in book form militates against this. Indeed, there are many very widely-referenced conference publications --- and equally many rarely-referenced journal ones.

The crucial factor of these publication routes, developed over the years, is that of peer review: an author must be able to convince other experts in the research area that his or her work is worthy of publication. The standard of refereeing for journal publications is typically quite rigorous (though, as we all know, of variable quality) and the golden rule is that the publication should make a contribution to the field --- this is the above-mentioned principle of extending human knowledge. Other types of paper are also of value, of course: reviews of particular techniques or applications, tutorials, and so on.

For books, peer review is also an intrinsic part of the writing process, though the overriding factor here --- at least from an author's point of view --- is the quality of the explanations. (Publishers are, quite rightly, concerned that the book should have a market.) Originality is less important in book production since the intended audience is usually non-expert.

Conference publications are also subject to peer review but this is typically less rigorous than for other types of publication. This is partly because conferences are intended to be a place where one may discuss ideas and results with other researchers, and so the ability to present work in progress is advantageous. There are other reasons too: the review panel for a conference is typically of limited size and the deadlines required to ensure rapid decisions are such that errors may be overlooked. The more cynical among us might even remark that conferences exist principally to make money and consequently even poor submissions may be accepted by some conference organizers to increase the number of delegates --- the recent VIDEA controversy [7] illustrates this type of problem.

There is another type of publication that must enter our consideration, and that is the technical report. These are, quite rightly, regarded with some caution as they have not been subject to the peer review process; hence, their accuracy and correctness are always suspect. Nevertheless, there are many excellent research reports. Indeed, in many cases where both research report and published paper exist, the research report is actually the better work of reference because there is more detail of the methodology and results.

The number of research reports made available is rapidly increasing; some of the reasons for this phenomenon are discussed later. Although this is generally a convenient state of affairs, since reports are available quickly after a piece of work has been completed, there are also dangers: the lack of peer review and the possibilities for bias are obvious. More worrying in the longer term however, is the potential for the loss of knowledge: technical reports can --- and often do --- disappear as quickly as they appear, and when that happens they are rapidly forgotten.

Drawbacks of Existing Publication Routes

Having considered these routes for dissemination of results and their success, let us turn our attention to the problems associated with them.

From the above discussion, it might appear that the journal publication route would be difficult to improve upon. This may be true in principle, but the practice is somewhat different. Firstly, the number of journals has increased dramatically in recent years, due to a number of factors:

  • the advent of computer typesetting has simplified and streamlined the production process, reducing the overheads involved in journal publishing and hence facilitating journals with smaller readerships;

  • with the pressure to publish upon researchers, and with the long-established journals being unwilling or unable to greatly increase the number of papers published, there is a market for other journals;

  • the increasing specialization of every research discipline means that there are more niche journals (usually termed `specialist' publications).

These developments, each of which seems beneficial individually, have some unfortunate consequences when considered in toto: although journal publication has become easier and arguably cheaper, the cost to institutional libraries has certainly not decreased; and, since the number of journals required to maintain a good coverage of a research area has typically increased, many libraries now find themselves having to cancel journal subscriptions. This does not benefit the researcher, who must either take out personal subscriptions (not feasible if he or she is working alone or as part of a small group) or find some other means of monitoring published papers, perhaps by travelling to a neighbouring institution or by making use of abstracting services (which also cost the institution money, of course). It is easy to say that many of the `upstart' journals are of indifferent quality, but this does not imply that all papers published in them should be disregarded.

A related issue, also consequent on the increasing specialization of researchers, is the duplication of work that is well-established in other fields. For example, few vision researchers read the computer graphics literature or vice versa even though the geometrical basis for the two is identical; people working in remote sensing and medical image processing who need to align images are not familiar with techniques developed in electron microscopy; and so on. There are probably two reasons for this: the relevant journals may not be available to researchers, and they may not have time to study them in any detail. It might be argued that this is a direct consequence of there being too many journals; however, the authors would argue that, with effective abstracting services, it should be easy to locate relevant papers. (We shall return to this topic later.)

The second major problem with journals is that their latency --- the delay between submission and publication --- is too long: up to two or three years for the more prestigious titles. While accepting that rigorous peer review takes time, it is difficult to argue that this constitutes efficient professional communication. Since many research groups work on closely-related problems at the same time, this lag in making new ideas and results known does us a disservice. Indeed, the increasing tendency to make internal research reports available is partially a response to this phenomenon.

The final point to make about journal publications has already been touched upon above: constraints are invariably placed upon authors regarding the lengths of papers. Squeezing the discussion of methodology and the presentation of experimental data into these limits results in papers that are often difficult to read and understand. The same constraints mean that there is rarely space for authors to describe why they took the approach being described, and so on. (Indeed, one of the authors has been asked, on more than one occasion, to remove such material from papers because it is `not relevant.') Yet this type of discussion offers real insight to the reader. There is a well-publicized case in which researchers involved in the early development of atomic energy are being interviewed by the current generation of researchers in order to capture just this type of information while it is still possible: they have found that reading research papers and reports is simply not enough.

The space constraints mentioned for journal publications normally apply with a vengeance to conference proceedings: three or four quarto-sized pages is a not uncommon limit. This is enough space to present a technique in a compact form, along with one or two (hopefully representative) results, but nothing more.

We have to ask ourselves whether these types of publication are serving us well. To give a topical example, one of the main problems facing vision at the moment is producing techniques that are robust. Do publications that show only one or two carefully-selected examples really further the discipline? The authors would argue that they do not. Of much more value are reports of work that present a technique, test it on a significant corpus of data, and characterize the technique's performance in some way, perhaps by comparison with other published techniques. It is not necessary that a new technique perform better than a well-established one, merely that it show promise. (This has long been accepted in the image coding field, where objective quality measures are in widespread use. Perversely, coding is one of the few fields where subjective quality measures are actually more important than objective ones! But even subjective responses may be assessed in an objective way.) The page limits of paper journals do not encourage this type of analysis.

Online Publication

Having expounded the basis for making others aware of research and identified the major drawbacks of existing publication routes, let us turn our attention to the dissemination of information over computer networks. Just as with conventional journals, there are several classes of online communication, which we shall consider in the following paragraphs.

The first class, familiar to almost every researcher, comprises direct electronic mail (email) and email lists. There are several email lists in the vision area (indeed, one of the authors moderates one). A culture has grown whereby these lists are used principally for distributing conference calls, announcing technical reports or software, and so on. There is almost no technical discussion in them --- which is something of a surprise, since that is precisely what most of them were set up for!

Closely related to email lists are newsgroups. Although there are many newsgroups that carry relevant and useful information, there is only one dedicated to vision --- and that simply mirrors a mailing list! In fact, with the opening up of the Internet, the signal-to-noise ratio of most newsgroups is rapidly approaching zero, and politeness and respect for other people's opinions have already vanished. None of the information dissemination routes in this email and newsgroup class provides for the more formal types of communication that are the subject of this essay.

The second class of communication route, more relevant to our discussion, comprises two separate phenomena:

  • Pre-print and technical report servers. There are several disciplines where rapid dissemination of research work has been achieved by instigating servers for pre-prints (i.e., papers also being submitted for conventional publication) and technical reports. These typically operate by distributing abstracts by email from a central server; interested readers then acquire the complete paper by anonymous FTP. The best-known examples of this currently lie in the particle physics field [2] (but see also [3]), where papers are typically written with TeX or LaTeX mark-up, figures being in POSTSCRIPT.

  • The first generation of ``electronic journals.'' There are over 800 such journals, covering all the natural and social sciences and the humanities. (Indeed, some of those in the humanities have been the most successful.) Some are, to be candid, little more than mailing lists, while others do involve peer review of varying degrees of rigour. They are typically text-only, unable to provide typeset-quality output, and have no real support for mathematics or graphics. Many of them are distributed only by email, or are retrievable from only one server, so there are potential access problems.

The third class --- and the current state of the art --- consists of journals on the World-Wide Web (WWW) which offer quality output via POSTSCRIPT and are able to accommodate mathematics, tabular and graphical material. They offer some advantages over printed journals too: they are searchable and are able to offer hypertext links within papers and to cited works. Many of these journals are organized analogously to conventional publications, having rigorous peer review, editorial boards, and so on. A good example is [5] (but see also [11]) and it is worth noting that several well-established conventional journals and professional societies are slowly moving towards online publication (e.g., [6, 1]).

This latter category is almost enough to meet the needs of the vision, pattern recognition, and related communities --- but not quite. The additional technology required, however, does exist, albeit in a rather experimental form in some cases.

Images, the staple diet of vision research, are supported by all the popular WWW browsers, both on workstations and on PC-class machines. Both uncompressed (PBMPLUS) and compressed (lossy via JPEG or lossless via GIF) formats are supported. Online viewing of imagery has four major advantages over printed imagery:

  1. Colour images are significantly more expensive to print than monochrome ones, and so colour images are at a premium in most journals. This constraint does not, of course, apply to online viewing: the additional bandwidth required to carry colour is not great and all serious researchers have access to colour displays.

  2. The printing process invariably degrades the appearance of images and can introduce visual artefacts [10]. The situation is even worse for colour images since the colour gamut available via printing ink differs significantly from monitor gamuts [9].

  3. Images with poor visual detail may be enhanced interactively by the online reader, perhaps yielding further insight into the operation of an algorithm (a small sketch of such enhancement follows this list).

  4. Imagery available online may be used by other researchers. This is a particularly significant point, one that we shall consider further below.
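
As promised in the third point above, the fragment below is a minimal sketch, in Python, of the sort of enhancement an online reader might apply to a fetched image. The function is our own illustration, and NumPy merely stands in for whatever image-handling facilities the reader's viewer provides; nothing here is part of any existing journal infrastructure.

    import numpy as np

    def stretch_contrast(image):
        """Linearly rescale an 8-bit grey-level image to span 0..255."""
        lo, hi = int(image.min()), int(image.max())
        if hi == lo:                     # a flat image cannot be stretched
            return image.copy()
        scaled = (image.astype(np.float64) - lo) * (255.0 / (hi - lo))
        return scaled.astype(np.uint8)

    # A low-contrast test image occupying only grey levels 100..140.
    demo = np.linspace(100, 140, 256).astype(np.uint8).reshape(16, 16)
    print(stretch_contrast(demo).min(), stretch_contrast(demo).max())  # 0 255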

A topic that is important in many areas is motion --- and this is something that no printed journal can accommodate. (Actually, there are image coding journals that have tried distributing videos with issues, but this is problematic: libraries are not well-equipped for storing or viewing them, and the degradations introduced by bulk copying onto even U-matic videos can mask fine detail.) In the online case, of course, replaying motion sequences comes down to having an appropriate ``helper application'' for a WWW browser. The current norm is to use MPEG-1 decoders; while this is not necessarily appropriate for all applications due to the nature of the compression, it is certainly acceptable for viewing. Indeed, with a careful choice of viewer, one may replay sequences in slow motion or frame-by-frame.

Although vision per se is concerned principally with imagery and information derived therefrom, the closely-related discipline of pattern recognition need not be. Research in areas such as speech recognition --- which has made important contributions to image analysis --- would also benefit from the ability to play back audio data. As with motion sequences, most of the popular WWW browsers support audio. It must be admitted that there are a number of formats for storing audio data, but it is easy to acquire software (e.g., sox) that performs format inter-conversion.
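
As a sketch of such inter-conversion, scripted in Python around sox (the file names are hypothetical; sox deduces the formats from the file extensions):

    import subprocess

    def convert_audio(src, dst):
        """Convert one audio file format to another by invoking sox."""
        subprocess.run(["sox", src, dst], check=True)

    convert_audio("utterance.au", "utterance.wav")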

There are other developments in terms of presenting data on the WWW that will be beneficial. The most significant of these is likely to be the Virtual Reality Modelling Language (VRML), a ``language'' for specifying 3D environments. Since one of the most important topics in vision is the retrieval of 3D structure (from stereo, motion, shading, etc.), a facility whereby an author may show his or her results by means of a 3D model, one that the reader may rotate using a VRML browser, would be invaluable. The current generation of VRML browsers display static scenes; however, adding ``behaviours'' to VRML is under active discussion, so it will soon be possible to produce animated 3D displays, with obvious applications in displaying results.

The features just mentioned can be supported by an online publication much more easily than by a conventional publication and without the degradations caused by using paper as an intermediate medium. Moreover, this can be extended even further if the final destination of the research is considered to be not just the human reader but also his or her computer. This raises the possibility of exchangeable datasets. Although databases of images exist, it is not clear how widely they are used and they seem to be somewhat decoupled from the publication aspect of research. By making an explicit link between the publication and the image data within it, the reuse of this data is encouraged, facilitating comparative evaluation of algorithms.

What applies to data applies equally to code. Authors are discouraged from presenting code in their papers even though this is often an easy way of explaining the operation of an algorithm. (This has not always been the case: consider, for example, Singleton's classic FFT paper [8].) If source code were available within a publication, it would encourage readers to try out their own experiments and build upon the work presented. Not all authors would wish to do this, of course, but at present the path is not open to those who do.
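
To illustrate how compact and explanatory published code can be, here is a minimal radix-2 FFT sketch in Python; Singleton's mixed-radix algorithm [8] is, of course, considerably more general than this power-of-two illustration.

    import cmath

    def fft(x):
        """Recursive decimation-in-time FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even, odd = fft(x[0::2]), fft(x[1::2])
        twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                    for k in range(n // 2)]
        return ([even[k] + twiddled[k] for k in range(n // 2)] +
                [even[k] - twiddled[k] for k in range(n // 2)])

    # Spectrum magnitudes of a simple box function.
    print([round(abs(c), 3) for c in fft([1, 1, 1, 1, 0, 0, 0, 0])])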

We feel that all these features --- especially quality machine-readable image datasets and code --- would be of substantial value to the vision community.

Possibilities for Online Publication

The obvious medium for disseminating online publications is via the WWW, as WWW browsers such as Netscape and Mosaic already support images, audio, and video on both PC-class machines and workstations. Since there are no `page' constraints on WWW documents, authors are free to write arbitrarily long papers if their work justifies it. (Of course, extraneous and over-long text will have to be shortened, but this is a common outcome of reviewing.) Moreover, many of the shortcomings of HTML, the mark-up scheme for WWW documents, are being eradicated: HTML3 will be SGML-compliant, support the concept of style sheets, and incorporate facilities for mathematics and tabular material.

A further advantage of the WWW for `publication' is that utilities exist to convert from the major text preparation tools used in science and engineering (e.g., LaTeX, MS Word, WordPerfect, FrameMaker) to HTML. This allows authors to prepare their submissions in an environment that is familiar, if they so wish, leaving the conversion to HTML until the point of submission. (Or perhaps even later: it might be easier for the editors to perform the conversion, as they may wish to use customized converters that impose the journal's style.) The viability of this process is illustrated by this document, which was written using LaTeX mark-up and converted to HTML via latex2HTML.
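
The conversion step itself amounts to little more than running the converter over the manuscript at the point of submission. A trivial sketch, assuming the latex2html utility is installed (the manuscript file name is hypothetical):

    import subprocess

    # Render the LaTeX manuscript as HTML pages alongside the source.
    subprocess.run(["latex2html", "paper.tex"], check=True)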

To be successful, an online journal should not be tied to a particular server: there should be servers in different geographical regions that mirror each other's papers and WWW pages. (There is, of course, software that can carry this out automatically.) As well as providing fault tolerance, so that if one server were down, a reader need only point his or her WWW browser to another, this scheme would ensure researchers obtain the best response for reading or submitting papers.

Having produced papers on the WWW, it is but a small step to contemplate recording them on CD-ROM. This might, for example, be an appropriate way to solve the problem of distributing online journals to libraries. (It is unlikely that this could be arranged free of charge, and so this has potential as a source of revenue for individual journals.) Paper copies are still required in order for the journal to be assigned an ISSN; but the number of copies that must be lodged is small and they are easily generated from the online version.

Let us consider how an online journal might appear, for both a reader and an author. The scenario described below is just one possibility, one that the authors consider most likely to be successful; alternative suggestions are, of course, welcomed.

For a reader.
Imagine, for the sake of argument, a researcher who is not working in the vision field but who wishes to make use of vision or pattern recognition techniques in his or her work. This person learns of the online journal and wishes to use it to learn about potentially relevant work. The first step is to connect to the main WWW page for the journal. That page offers the reader links to further information, including:
  • the journal's ``mission statement''
  • information on how to read papers online and how to print versions of papers
  • a list of the keywords used to classify papers
  • searching for a paper by keyword, by author or by institution
  • free-text search through papers
  • a complete list of published papers
  • a list of abstracts of papers under review
  • news and conference announcements
  • how to ``subscribe'' to the journal
  • how to submit a paper
The person could learn about relevant papers by identifying appropriate keywords from the list, then searching for papers classified under those keywords. If that is not successful, the person could perform a free-text search through papers using, for example, WAIS. Any papers found by the searching process will be presented to the reader as a WWW page of titles, authors, and abstracts, with links to their actual texts. This is essentially how WWW search engines, such as the popular Yahoo one, work. An implication of this is that it will be beneficial for a journal to have a wide remit, as it will still pick up papers that might be peripheral to most people's interests but highly relevant to a few readers. Alternatively, related online journals might share publications databases.
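
A sketch of the keyword lookup just described, with an entirely hypothetical record layout standing in for whatever database the journal actually keeps:

    papers = [
        {"title": "Robust edge detection", "authors": "A. N. Author",
         "keywords": {"edge detection", "performance evaluation"}},
        {"title": "Stereo from two views", "authors": "A. N. Other",
         "keywords": {"stereo", "3D structure"}},
    ]

    def search_by_keyword(keyword):
        """Return (title, authors) pairs for papers classified under keyword."""
        return [(p["title"], p["authors"]) for p in papers
                if keyword in p["keywords"]]

    print(search_by_keyword("stereo"))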

There is one item in the list that may appear strange for an online journal:

  • how to ``subscribe'' to the journal
The authors envisage that individuals would subscribe by filling in a WWW form that defines a ``profile'' of interests (probably using the same set of keywords). Then, when a paper is published, subscribers with profiles that match the keywords in the paper are automatically notified by email of its title and abstract, the message also containing the URL of its text. In an altruistic world, this type of subscription would be free; but individual journals might in principle charge for it, as a way of recouping the cost of the system that hosts the journal on the network.
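
In outline, the matching step is no more than a set intersection between each subscriber's profile and the new paper's keywords; the record layout and the mail stub below are our own hypothetical illustrations:

    subscribers = [
        {"email": "reader@example.ac.uk", "profile": {"stereo", "motion"}},
        {"email": "other@example.org", "profile": {"texture"}},
    ]

    def send_email(address, body):           # stand-in for a real mailer
        print("To:", address, "\n" + body)

    def notify_subscribers(paper):
        """Email title, abstract, and URL to subscribers whose profile matches."""
        for s in subscribers:
            if s["profile"] & paper["keywords"]:   # any keyword in common
                send_email(s["email"], "%s\n%s\n%s" % (
                    paper["title"], paper["abstract"], paper["url"]))

    notify_subscribers({"title": "Stereo from two views", "abstract": "...",
                        "url": "http://example.org/papers/1",
                        "keywords": {"stereo", "3D structure"}})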

For an author.
The ``home page'' by which readers access the online journal can also point potential authors to submission instructions. It is envisaged that the submission process will run as follows:
  1. The author completes a WWW fill-in form detailing the paper's title etc. and emails it to a submission address at the journal. The paper is assigned a unique identifier (probably containing the year and the site to which the paper has been submitted) which the author uses in all further communication.
  2. The author uploads his or her paper (perhaps in LaTeX format) by anonymous FTP to the journal's server as a compressed tar or zip file which also includes images and figures.
  3. The editor extracts the paper from the uploaded version, saves it in an appropriate directory, and makes it available from the WWW server with password protection, perhaps as POSTSCRIPT. The paper may be anonymized if double-blind refereeing is employed.
  4. The editor selects reviewers and notifies them of the paper's URL and the associated password, without which the paper cannot be viewed.
  5. The reviewers return their comments to the editor via email.
  6. The editor forwards the reviewers' comments (suitably anonymized) to the paper's authors by email.
  7. Assuming the paper was accepted, the author uploads the revised version by anonymous FTP.
  8. The editor `publishes' the paper by allowing public access to the directory, linking the paper in the journal's WWW pages, and emailing the paper's details to subscribers as appropriate.
It is worth noting that many of the individual steps in this sequence may be partially automated by means of appropriate software. Indeed, there are programs (e.g., edas) already in existence that perform several of the tasks. If confidentiality is regarded as problematic --- since email is clear text, anyone can in principle read it --- then public-key encryption packages such as PGP may be employed for most of the communication steps outlined above. Finally, authors are often concerned that they have little visibility of the status of their papers. With the scheme outlined above, all transactions for each paper being recorded, it is easy for a WWW CGI script to inform a requestor that, say, one review has been received but that two others are outstanding.
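
The status query, for instance, reduces to counting events in the paper's transaction log; this is the kind of logic the CGI script would wrap, with the log format and reviewer count below being hypothetical:

    def review_status(log_lines, reviewers_assigned=3):
        """Summarize refereeing progress from a paper's transaction log."""
        received = sum(1 for line in log_lines
                       if line.startswith("review-received"))
        return "%d review(s) received, %d outstanding" % (
            received, reviewers_assigned - received)

    log = ["submitted 1995-07-01", "review-received 1995-07-20"]
    print(review_status(log))   # 1 review(s) received, 2 outstanding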

The great advantage of this `publication' route is that the only delays built into the system are those involved in the peer review process. It is also entirely possible that editors may link together papers. This might, for example, be used to trace the development of a technique through a number of references; or it might be used to link to further papers that agree or disagree with an author's findings.

Outstanding Issues

The key to a high-quality professional journal is the peer review system. The present system works to an extent, but poorly selected, inexperienced and/or overworked referees mean that the process is often less than perfect. There are many other ways of applying peer review to maintain high quality which an online journal would facilitate. One option is that of ``open peer commentary'' as practised in Behavioral and Brain Sciences [4]. In this journal, papers are accompanied by as many as twenty or thirty 1000-word invited commentaries from experts, together with a response from the authors of the initial paper. This allows the papers to be placed in context and encourages fruitful discussion of the issues raised. A hyperlinked online journal without page limits would allow such a scheme to be applied. Another possibility is to include, within the online journal, a list of links to unrefereed papers or technical reports, to encourage readers to keep up to date with the latest work and to communicate with their authors.

Apart from the question of peer review, the major concern expressed by potential authors when the idea of online publishing is suggested is that of plagiarism, of someone copying sections from published material in order to construct a sensible-looking document. This is, of course, entirely possible --- but the technology that makes plagiarism simple also aids its detection: a wily referee or editor can easily perform searches through published online papers to locate such transgressions. And if a note to the effect that a paper contains plagiarized material is attached to a publication by the linking mechanism discussed elsewhere in this document, the repercussions to the plagiarist outweigh the potential advantages. In fact, plagiarism works best if the work of the original author is little known for reasons such as geography or the time lag in publication; but if the original work is diffused immediately and globally via the network, effective plagiarism becomes much harder. In any case, with the widespread availability of scanners and OCR software, the budding plagiarist already has all the required tools at his or her disposal in respect of conventional journals.
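
One simple form such a search might take is measuring the fraction of word n-grams a suspect passage shares with an already-published one; the tokenization and the n-gram length here are, of course, our own invention:

    def ngrams(text, n=5):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(suspect, published, n=5):
        """Fraction of the suspect text's n-grams found in the published text."""
        s = ngrams(suspect, n)
        return len(s & ngrams(published, n)) / float(len(s)) if s else 0.0

    print(overlap("the quick brown fox jumps over the lazy dog",
                  "a quick brown fox jumps over the lazy cat"))   # 0.6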

Another vexed area is that of copyright. Most existing journals require authors to transfer copyright to them before publication. Many authors are unhappy with this arrangement. We need to decide what rights need to be transferred from the author to a journal for publication; we can then decide upon appropriate mechanisms for so doing.

There are still a number of other unanswered questions concerning the precise procedures to follow and the tools to use. These issues may perhaps be addressed best by actually setting up a pilot online journal. The authors have some concrete ideas in this respect, which they will make public after some discussion has taken place: we do not wish to constrain discussion by imposing them at this juncture.

Final Words

We feel that there is a definite need for our existing publication mechanisms to evolve to accommodate the computer technology with which we are all familiar. We feel that doing so will bring substantial benefits to us all. It will break the space and time barriers imposed by the traditional system by removing page limits and reducing the time lags involved. We feel that viewing the computer as one of the consumers of research results and methods, rather than just as an intermediary, has the potential to make the field more collaborative and to enable us all to make greater progress.

References

[1] Peter J. Denning and Bernard R. Rous. The ACM electronic publishing plan. http://www.acm.org/pubs/epub_plan.txt, November 1994.

[2] Paul Ginsparg. High energy physics preprint server. http://xxx.lanl.gov/.

[3] Paul Ginsparg. First steps towards electronic research communication. Computers in Physics, 8(4):390, 1994. http://xxx.lanl.gov/blurb.

[4] Stevan Harnad. Post-Gutenberg galaxy: The fourth revolution in the means of production of knowledge. Public-Access Computer Systems Review, 2(1):39--53, 1991.

[5] Journal of Universal Computer Science. http://jucs.aifb.uni-karlsruhe.de:8000/5C05B0EC/Cjucs_root.

[6] Kurt Paulus. IOP sets the electronic pace. Physics World, 8(8):49, August 1995.

[7] Werner Purgathofer. Beware of VIDEA! http://www.cg.tuwien.ac.at/~wp/videa.html, 1995.

[8] Richard C. Singleton. An algorithm for computing the mixed-radix fast Fourier transform. IEEE Trans. Audio Electroacoust., AU-17:93--103, June 1969.

[9] Maureen C. Stone, William B. Cowan, and John C. Beatty. Color gamut mapping and the printing of digital color images. ACM Transactions on Graphics, 7(4):249--292, October 1988.

[10] Robert Ulichney. Digital Halftoning. MIT Press, Cambridge, Massachusetts, USA, 1987.

[11] Various authors. Special issue on digital libraries. Comm. ACM, 38(4), April 1995.
