I have been arguing that science would be better served by a publish-then-filter system (like blogs or arXiv) than by the filter-then-publish system that dominates science right now.

Whenever we want to reform the peer review process, the argument usually ends up with people saying things like this: 

"Without these venues and journals, how would we know what to read? How would I know what to trust?"

The answer: it is your freaking job as a scientist.

For all sorts of reasons, most work is just not that insightful or important in the greater scheme of things. This includes papers that receive awards at the best conferences. A lot of work, especially the work that sounds impressive, ends up being wrong or irrelevant, sometimes for complicated reasons. But it is your job to figure it out! To check things for yourself! To build a trust network... To take a chance and read a technical report from a guy you have never heard of... but also to stay on your guard for mistakes and misleading information...

If you are delegating the determination of truth to authorities, you are not doing science. Period.
Ya think? Look, one of the primary tasks of a scientist is to sift through the literature. To review data that has been presented by other scientists and to decide, for herself, where these data fit. ...
Too simplistic an answer. We cannot ignore the cost of evaluation to the average user/scientist. There will always be some filtering mechanism in some form or another, and it is good to take that into account when designing new peer evaluation proposals.
I agree.  While there are costs (time), taking things on faith from an "authority" is not science.
Some sort of distributed, peer-based reputation management system really might make sense as part of the filter process; in the general case, it seems to me that credibility is relative to some extent. The 'trust network' is critical to making a filter-based system work, in my opinion.

Doing something similar with bibliographies would also be tasty, especially if more automation could be used in the publication process.
What if the filtering mechanism is filtering out the good, groundbreaking papers? This happens all the time with other filtering mechanisms on the web.

The cost of evaluation is there anyway. A good researcher will always evaluate the paper and the comments/citations about it, no matter where it comes from.

The filtering mechanism of post-publication review is the web itself: blogs, Twitter, Facebook, Google+. We can get a much better filter by watching our peers talk about a paper than by accepting the hidden discussion of a standard peer review process.
"Some sort of distributed peer-based reputation management system" — We have that now. It's called citation.
+Jeff Erickson Kidding aside, I find the "number of times a paper was cited" a useful indicator when I do a lit. search in Google Scholar. It is not a measure of quality, but if an old paper has never been cited, I feel less compelled to check it out. If a somewhat recent paper has been widely cited, I am more likely to check it out. I'll also take a hint from the people I trust: if someone I know mentions a paper, I'm much more likely to at least read its abstract. I also care a great deal about who wrote the paper. For example, if I find a good paper on a topic by a researcher, I'll immediately look for other papers on the same topic by the same author. What I don't look for is "where" the paper was published, except maybe in a negative manner...
+Daniel Lemire - Thanks for fixing the "than" vs. "then" --- My mom (+P Swartzfager ) is an English teacher and I could hear her in my ear every time I saw this post update....  ;)

That aside, in general, it seems like there is a place for both systems, or a hybrid. For students, some sort of "tier" system that gives a general confidence in the material is nice. Until I've been working in a particular subject for a while, weeding out the irrelevant fluff is difficult -- yes, as a scientist, it's something I need to learn to do, but it's nice if the publishing system supports that learning instead of (a) hindering it, as it sometimes does, or (b) flat out abandoning students.
+Erika Mesh The problem with the tier system is that it forgoes critical thinking. The net result without critical thinking is that people blindly follow fads like sheep. This becomes dangerously close to cargo cult science.

I think my research papers don't confuse then and than because I read them over... ;-)
+Daniel Lemire - Agreed - the tier system shouldn't be to filter quality from garbage -- but something to filter higher quality from "useful, but not earth shattering" would be nice.

And yes, the stunning lack of critical thinking or even basic proofreading skills is mind boggling.... but that's yet another discussion. ;)
"I find the 'number of times a paper was cited' a useful indicator when I do a lit. search in Google Scholar. It is not a measure of quality" — Agreed. (That's why I didn't say "number of citations".) How a paper is cited is a reasonable indicator of the perceived quality of that paper.
+Jeff Erickson Yeah, what we have are citations. And they haven't really changed for a very, very long time. You would think that with full-text search/indexing, semantic analysis, a variety of clustering algorithms, and other automated, dynamic approaches, we might be able to do a bit better at augmenting traditional citations these days. Assuming, of course, that we could get past the "full text of most papers is behind paywalls" issue, among others, but those are another kettle of fish entirely... ;-)
I would love for Google Scholar to throw some NLP at figuring out which citations are "I'll add this here because of that stupid reviewer" versus "we will use the lemma introduced by this paper".
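The kind of citation-intent triage being wished for here can at least be sketched. The cue phrases and function below are purely illustrative assumptions (Google Scholar exposes no such API; a real system would train an NLP model on labeled citation contexts rather than match keywords):

```python
# Toy sketch: classify the sentence in which a citation appears as
# "substantive" (the citing work builds on the cited one) or
# "perfunctory" (background padding). The cue-phrase lists are
# invented for illustration, not a validated lexicon.

SUBSTANTIVE_CUES = ("we use", "we extend", "builds on",
                    "the lemma", "the theorem", "following the")
PERFUNCTORY_CUES = ("see also", "among others", "and references therein")

def classify_citation_context(sentence: str) -> str:
    """Return 'substantive', 'perfunctory', or 'unknown' for one citing sentence."""
    s = sentence.lower()
    if any(cue in s for cue in SUBSTANTIVE_CUES):
        return "substantive"
    if any(cue in s for cue in PERFUNCTORY_CUES):
        return "perfunctory"
    return "unknown"
```

Even a crude classifier like this, aggregated over all citing sentences, would distinguish a paper that is merely name-checked from one whose results are actually reused.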
I don't think it's accurate to say that we have a filter-then-publish system. It's more like filter-then-publish-then-filter. At least, I don't see how it's humanly possible to keep up with everything in my field, so I only read a small filtered fraction of what gets published, and I expect that's true for most others as well. What you seem to be saying is that you think the filtering would be more accurate if we got rid of the front-end filtering and left only the back end. It's not impossible, but I'm not convinced.

Another aspect of this issue is that the consumers of the scientific literature are not all scientists themselves. So when you say "If you are delegating the determination of truth to authorities, you are not doing science": yes, but that doesn't mean that we should make it impossible for nonscientists to do that delegation. The peer review system is far from perfect but it does keep a large fraction of the nonsense out, enough that e.g. Wikipedia can succeed reasonably well without requiring its editors to be subject experts.
I like to think of academic research and publication as Kenneth Burke describes:

Burke, Kenneth. The Philosophy of Literary Form. Berkeley: University of California Press, 1941. 110-11.

"Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, too long for them to pause and tell you exactly what it is about. In fact, the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for awhile until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you. . . . However, the discussion is interminable. The hour grows late, you must depart. And you do depart, with the discussion still in progress."

In my own writing, and in the writing/research classes I teach, I find that Burke's metaphor describes the conversation research allows and promotes: a discourse whose goal is to forward and extend knowledge. Perhaps this is why this thread has come full circle from filter to publish to filter. I think the process at its best is more organic than the words "filter" and "publish" seem to indicate.
+David Eppstein If you trace back the source, the issue comes down to this... some people say, "look, it is too hard to assess how good papers are, so we are just going to look at where they appear." DrugMonkey then says we should ridicule this. Others reply, "how are you going to assess the work then?" He replies: "it is your freaking job." He is not saying "don't use filters."

Ok. So you go see your doctor. He says you need this procedure. You ask why. He says it is because this paper says so. You ask why he trusts this paper. He says, "because it appeared in Nature." I argue that at this point, you would be best served by leaving his office.