[e2e] double blind reviewing

Kostas Pentikousis kostas at cs.sunysb.edu
Mon May 10 21:32:08 PDT 2004

Dear David,

On Thu, 29 Apr 2004, David P. Reed wrote:

|Separate the two problems: conference integrity and constructive feedback.
|If an author needs honest and constructive feedback, that is best obtained
|by seeking out reviewers separate from a "judging" process such as the

Then they should be called "judges", not reviewers, right?
Although the outcome is binary (accept/reject), "judgements"
should be justified. I expect a review, i.e., a critical
evaluation. A score with empty comments means the "reviewer" is
simply a grader. Is this how the state of the art advances?

A reviewer for a flagship ComSoc conference back in 2001 made
it clear that ECN had been studied thoroughly and that nothing
more was left to investigate. I am happy to see that Partridge
et al. believe otherwise. More recently, a reviewer for a
measurement conference was clueless about a predominant traffic
trace record format. I wonder how many reviewers actually spend
3-5 hours on a paper.

|program committee.   The goal of the program committee and peer review
|judgment of quality is not well aligned to the process of improving
|unselected papers.   Selection comes first, and improving comes after.

Alexander Hars points out that it may not be long before
journals and conferences have only one value-added feature
left: enhancing and improving papers. Publishing is not a
scarce resource anymore. When was the last time you visited the
library to photocopy a journal article?

|If you want to eliminate the effect of cheap shots and logrolling, make the
|program committee process transparent and public - it's there (in my
|experience) that the integrity of a conference is lost.

That is, follow an open source model and make meritocracy the
rule again: novel ideas or approaches to examining a problem;
solid, reproducible results; sound discussion and commentary;
justified, verifiable conclusions; thoughtful, engaged reviews.
In this brave new world it is correctness, utility, and
availability that matter, not the number of papers.

Allman, in his reviewer plea (http://www.icir.org/mallman/plea.txt),
makes it clear: if [the reviewer] cannot understand the paper,
[s/he] will recommend rejecting it. Very typical of human
nature: destroy what you cannot comprehend. Is it possible that
uninitiated, untrained reviewers are in the loop (and, no, I do
not mean Mark ;)?

Maybe it's time for an author's plea.

Best regards,

Kostas Pentikousis                   www.cs.stonybrook.edu/~kostas
