[** Edit 2015-03-16 I remain extremely skeptical of a system where we would rely on research papers finding skilled reviewers by accident rather than targeting the papers directly at skilled reviewers. 

http://www.vox.com/2015/3/14/8203595/pubpeer ]

[Edit 2014-07-11 My text below is a specific critique of comments made in the article linked at the end. It's an open review!]

The idea that modern web and social network technology, applied to scientific publication, would automatically improve it is an oversimplification, and it is simply not correct. I am against holding scientific research hostage to distribution channels that enforce rights protections and limit access, but useful science needs a strong, reliable, trusted peer review system. The essay (linked below) by +Richard Price seems to address the wrong issues. Peer review is the important part of the process, the part we need to keep, not throw away.

We have a powerful example of the strength and importance of traditional peer review, and of the danger of crowd- and social-sourced review, in the false debate over climate change taking place online and in the lay media. Crowd and social review may eventually converge on what the science and data reveal about the climate system, but it takes a long time for lay people to catch up to scientists, if they ever do. In the modern era, where technical specialization makes it difficult for anyone outside a speciality to knowledgeably critique science, the importance of a trusted system cannot be overstated. The issue of climate change is political, but the publication of data, analysis, and theory cannot, must not, be influenced by politics. This is true for any scientific exploration. Traditional peer review undertaken by trained scientists doesn't eliminate political influences and related problems, but it does move the focus of attention toward the science, where it belongs.

The idea that the modern web provides powerful discovery systems is true but irrelevant. Discovery systems like Google don't necessarily promote what's better or more correct. They promote what people themselves point to. There is no inherent intelligence (yet) in these tools. They are wonderful at what they do, but they don't do what's necessary for science. Link sharing and social networks, though driven more intimately by human intellect, don't do any better. Unless the shared content has some real, inherent scientific value, it doesn't matter how many times it's shared or how dense the network that promotes it is. The real value of a scientific paper is the science it contains, not its popularity, nor the opposite, the derision or scorn applied to it when it's not popular.

Traditional peer review is also not a simple two- (or few-) person review process. Missing from that analysis is the role of the editors of the journals. These people are not copy editors; they aren't there to correct errors of language. They are typically volunteer scientists, well respected in their fields, who play an important role in the process. Editors provide an additional level of oversight on paper quality, reviewer selection, and review quality. Editors may reject papers that demonstrate an obvious lack of background understanding, or that make errors in analysis, before they are even sent to reviewers. Conversely, an editor may promote a paper that has merit even in the face of reviews that are partisan, poorly done, or incorrect.

Another function of the editor is to act as a conduit for trust. Since the reviewers are (typically) anonymous, the only way our collective faith in their work can be confirmed is via the editor. We know the editor and can challenge them; they know the reviewers and can get a response from them when problems arise. Whether or not anonymity is worth keeping is, I suppose, a different but related question. I think it can add value, but only if there is someone who can take responsibility for the process. The scientific editor at a peer-reviewed journal is a vital part of that process.

Another aspect of the peer-review process neglected by Price is the fact that it is not simply gate-keeping. Peer review results not just in better papers, with correct grammar and few (hopefully no) mathematical errors, but in better science. That's because peer reviewers often point out problems and make suggestions that cause additional research to be done. That may come as a surprise to people who haven't participated in the process, and it is sometimes the reason papers take a long time to be published. It is impossible to imagine this occurring in any meaningful way in a crowd- or social-sourced promotion system. It is also a part of the process that is of enormous benefit to the authors of papers. Their research may actually improve because the community of peers hopes for, wants, demands better science.

The idea that papers might be published more quickly under a crowd- and social-sourced reviewing system seems obvious but is not true. In the current system, important and interesting papers are available immediately within the scientific community that needs them most. Scientists love the internet and were (obviously) among the earliest users of the technology. An important, puzzling, or interesting result is passed around the community before, or as soon as, it is submitted to a journal. No scientist who is reasonably well connected is surprised when an important result appears in print, having been submitted two years earlier. It's very likely that new research based on the work in a slow-to-appear paper has already been undertaken, with follow-up papers already underway or submitted. The important distinction is that no one will cite an unpublished result. Hot topics are often cited as "in press" once their status has been confirmed, and in any case their actual publication details, once accepted by a journal, are usually known long in advance of the paper's appearance in print.

Another missed point is the level of interest in the subject of a paper. There is absolutely no doubt in my mind that the level of attention for some esoteric aspect of a particular speciality in chemistry (say, or climate science, or particle physics; there are many, many fields that could be inserted here) is simply not significant enough for social and crowd-sourcing to be of any more value than a small number of anonymous, qualified peer reviewers. That's not to say that these esoteric specialities with few interested participants are unimportant. It's never certain where the next important breakthrough will come from in a discipline that can exert significant influence on health (to pick an example that would cause intense interest among non-scientists). It would astonish me if even a tiny fraction of important papers published now drew a significant amount of interest in the broader community of science geeks, let alone the general public. (The step from the publication of a peer-reviewed result to the dissemination and promotion of new research by the popular press and other lay publications is, I believe, a completely separate issue from the one under discussion here. The lay press seems almost universally to misinterpret scientific results, and I have no idea how to fix that.) I don't want to draw this out too far, but it occurs to me that one area where we certainly want to rely on highly trained, qualified experts to vet and promote research, even if it is a slow process, is medicine. I can't imagine that any meaningful progress could be made in medical research by allowing crowd-sourced reviews and promotion. I wouldn't want my surgeon or oncologist relying on that kind of research. And if it's not good enough for medicine, really, why would it be good enough in any other scientific field?

The question of the quality of the science in a published paper is an important one. In many scientific fields research is an expensive, time-consuming process. Grants must be applied for, laboratories planned and built, staff and students organized, budgets planned years in advance. It seems impossible to imagine that individuals, scientific organizations and institutions, and governments could even begin to take on this level of preparation, and commit to spending large amounts of (what is typically) money collected from taxpayers, based on the number of up-votes a paper receives on a social networking site, even one devoted to scientific matters. It would be a completely irrational, irresponsible way to organize the funding of research. I am certain there is waste in the current funding system but, based on my knowledge of the state of climate science and its disconnect from the public discussion of that state, I cannot believe such a system would be anything but many times more wasteful. Or nothing might be spent on research at all if it seemed politically difficult to allocate funds. Another example of a field where crowd-sourced reviews would (it seems to me) cause immense harm to the integrity and value of the science is the biological sciences, where there is intense interest from narrow interest groups whose motivations stray far from what is scientifically plausible. I'm thinking here of life sciences like evolution and reproductive health which, in certain parts of the world, cause enormous unease among some segments of the public. Can you imagine the papers in those fields that might get voted up under a social network review process? As a matter of fact, you don't have to imagine it; you can use the powerful search and discovery tools at your disposal on the internet to find them. They are often more popular than meaningful research is.

Finally, I believe it is true that the expense of the publication process, as well as the distribution control enforced by some organizations, is actually antithetical to the goals of improving science and providing for the wide dissemination of research. It is perhaps here that current and future technology can be used to best advantage. There are many worthy scientists at less well-provisioned research institutions all over the world who cannot publish often, or at all, simply because of the costs of the process. A more universal, free (in both senses of the word) publication and promotion system would benefit them and all of us. However, such a system must not introduce a new group of gatekeepers to replace the old, and it must be free from the influence of commercial and political entities as well. A new, international, free publication system must not be built by sacrificing the quality of the science being published. That's what's most important.

It is undeniable that the current peer review process is slow compared to what seems possible. However, simply speeding up the process may not improve what's important, namely the quality of the science being published (slow isn't always bad; sometimes fast is a problem; perhaps we'll need a scientific bureau of sabotage). I hope the examples and discussion here make it clear that simply moving to a faster system that throws away the merits of peer review will not improve the science. Let's continue the discussion and see if we can, collaboratively, as scientists should, come up with something that provides the benefits of peer review and faster dissemination.

Edit 2012-02-05: I added some comments about trust to the paragraph about the scientific editor for a journal. I also added some thoughts about reviewing medical research later.