Future publication models in scientific research

Or: Do we need to perform interdisciplinary research into a “science of publication”? (… and hence another dozen international conferences and journals?)

NIPS 2009 held a special session addressing the problem of publication models in future machine learning research. How serious the problem is can be gauged from the overwhelming response from the community, described as “simultaneous concerns as well as many simultaneous proposals” that nearly paralyzed the full conversation (John Langford’s ML blog).

Obviously the problem has been an itch to the community for ages. The massive response shouldn’t come as a surprise to the senior members of the community, because they had obviously sensed how serious the problem had become long before they were brave, and perhaps cynical, enough to collectively propose the special session. Actually, everybody watching NIPS and equivalent machine learning or computer vision conferences should be able to notice the obvious regression in the quality of NIPS papers. And the issue is not confined to NIPS: the proposal and justification put forward by Prof. Yann LeCun mention ICCV, CVPR, etc. as well. In fact, Prof. LeCun has described it as a serious problem facing the whole discipline of computer science (Prof. LeCun’s page).

What is the problem, exactly? Prof. LeCun identifies it mainly as a low rate of scientific progress, due to the obstacles put up by the current conference publication system, which favors incremental work and works against brave new ideas. This branches out into many problems within the publication system: overwhelmed reviewers with no incentives, a steering of submissions toward incremental work, and the abundant time spent assimilating the core of previous works. His replies to various comments reveal yet more unstated ones: biases due to language (native English authors vs. non-native authors) and biases due to social networks and connections with the leaders of the field (especially true for junior researchers). It’s also interesting to see some feminist arguments brought up in several comments there.

Obviously, for most, the answers point towards public review in an arXiv-like system: numerous reviewers submit their reviews in a loosely organized way, coordinated by the chairs, and the final acceptance or publication depends on the collection of public reviews and on the authors’ revisions and responses to the problems raised by the reviewers. Personally I classify this kind of proposal as “Darwinist”, as it prefers that “natural selection” be applied to scientific publication systems as well.

While the proposal somewhat addresses fairness in publication and equality in disseminating ideas, there is another side of the coin that is, I hope, not intentionally left unspoken. As highly principled scientific disciplines, mathematics and physics normally depend on cautious theoretical and empirical justification. This is not true most of the time for less principled disciplines such as computer science, where artificial intelligence, machine learning, and computer vision are loosely embedded and interacting. In these areas, where theoretical evidence is usually indirect, experimental verification becomes critically important. How much can a reviewer tell about a paper by reading off the experiments? 50%? I believe even less. So how does one proceed to judge the quality of the paper? Mostly from experience, based on the descriptions presented by the authors and an evaluation of how valid they could be. Can we imagine how arXiv-style publication would go for CS disciplines? I’m not totally optimistic about this.

So the issue of accountability of publication in CS cannot be singled out of the anticipated publication system, even if we can safely assume all reviewers are fully rational and dedicated. Papers that report results that are not reproducible (the divine nature of humanity refrains me from using the word “genuine” here) should be kicked out by the review process. The most effective way to do this, somewhat crude, is to require the authors to provide the source code and data for their experiments. This should be a critical factor in deciding the quality of a particular paper under public review. (It is pleasing to see that CVPR 2010 has tried to move in this direction, though the effect remains to be seen.)

Yet another hidden side, profoundly related to the publication issue, is the number of papers to be accepted. Under a public review system, what percentage of papers should be accepted at a time? What constitutes grounds to reject a paper? If this fundamental problem cannot be properly settled, changes in other aspects will eventually turn out to be immaterial. No doubt scientific conferences nowadays flourish not solely out of the need for exchange of ideas and scientific progress, but considerably out of commercial benefit. Besides the obvious contributions to tourism, shopping, transportation, etc., the organizers are making money from the authors. For CVPR and ICCV, for instance, the organizer deliberately allows two extra pages on top of a six-page paper, at a cost of US$100 per page. In a time when electronic proceedings prevail, what calculation can miraculously value one PDF page at US$100? And organizations such as the IEEE make money globally, in the form of subscription fees, by selling the electronic pages.

So would the IEEE be happy with a reduction in the number of publications per conference, or, more radically, with public accessibility of academic papers? I am completely pessimistic about this. And don’t forget the market need. How many lecturers around the globe are building their tenure cases on their publications? How many Ph.D. students are trying to strengthen their publication lists to get satisfactory jobs (in this time of economic crisis…)? And how many industrial researchers are securing their jobs by turning in their publications to the boss? So would everybody be happy with a reduction in conference acceptances?

Go democratic, go transparent, and make progress. Ensuring fairness and equality can promote the progress of science, with the premise of achieving a good balance in both the social and commercial contexts of scientific publication. Of course, there is still much more to see than expected. When was the last time you wrestled with a conference deadline?

