Why am I doing this: Collecting Crackpot Referee Reports
The formal refereeing process lies at the heart of what makes a science: in the end, a collection of "good" conference or journal papers advances academic careers and research grants, i.e. it controls the future directions of a research field. The formal refereeing process of a scientific conference consists in giving a paper to three or four "referees" who are hopefully experts on it. The process is fundamentally un-democratic and un-wikipedia-like: 1) science is ideally based on truth, not on majorities, and 2) not everyone can contribute, and decisions cannot level out over a long time period. Instead, the process is founded on something exclusive and apparently vague like "authority" and "expertise".
Being a referee of a scientific conference is therefore a particular responsibility; a responsibility reflected in a number of written and unwritten rules, in evaluation criteria (like significance, relevance, technical correctness, ...), in a formal refereeing process technically enforced by web-based services like easychair, and in institutions such as program committees, their chairs, and scientific organizations.
Admittedly, refereeing is not easy: in my own practice as a referee (about 200 reports), I encountered a few papers which I did not understand at all. In most cases, it turned out that nobody understood them; such cases end in a search for formal arguments why the paper is actually incomprehensible (or "does not meet formal standards"). I encountered papers where my own expertise and background were indeed insufficient --- this happens in particular when the conference does not use a "bidding phase" (where members of the program committee indicate which submitted papers they are interested in refereeing), or when I simply misunderstood the abstract on which I based my bid. I remember a very painful discussion with two very respected colleagues who, through personal contacts, knew more about the impressive work behind a certain paper than could actually be inferred from it; they wanted to force it through although we all agreed that the presentation was poor.
In a well-organized conference (with a reasonably balanced program committee, a good bidding phase, and active chairs who stimulate the debate and criticize insulting, vague, or inconsistent referee reports), about 70 % of the reports are roughly uncontroversial and differ only gradually in necessarily subjective criteria such as "significance for a scientific community". Above all, this shows that science exists and is not just tribal behaviour, as philosophers like Kuhn or Feyerabend suggest. Unfortunately, there are certain temptations that may prevent a conference from turning into a "scientific event": being considered an expert may inflate egos, and being a program committee member may advance a career.
To stir a debate and to make referees and chairs a bit more aware of their responsibility, I put my most spectacular crap referee reports on this web page - for fairness, together with the originally submitted paper, the complete report, and sometimes the reports of the other referees (which are not necessarily criticized here and are not necessarily obvious factual nonsense). Note that what I attack on this list is strictly obvious factual nonsense, inconsistency, or a lack of common computer science background, but not (necessarily subjective) opinions on significance, relevance, or originality.