It's nice that even the business magazine Forbes has noticed the Elsevier ban. But their business model will only go up in smoke if we develop a new model. It's easy to distribute information; the hard part is evaluating it. Bean-counters at universities and funding agencies need these evaluations to decide who to hire and promote, which departments to give more money to, and so on.

I like Andrew Stacey's idea of putting papers on free archives like the arXiv and then having independent "review boards" evaluate those papers. He writes:

"My proposal would be to have “boards” that produce a list of “important papers” each time period (monthly, quarterly, annually – there’d be a place for each). The characteristics that I would consider important would be:

1. The papers themselves reside on the arXiv. A board certifies a particular version, so the author can update their paper if they wish.

2. A paper can be “certified” by any number of boards. This would mean that boards can have different but overlapping scopes. For example, the Edinburgh Mathematical Society might wish to produce a list of significant papers with Scottish authors. Some of these will be in topology, whereupon a topological journal might also wish to include them on their list.

3. A paper can be recommended to a board in one of several ways: an author can submit their paper, the board can simply decide to list a particular paper (without the author’s permission), an “interested party” can recommend a particular paper by someone else.

4. Refereeing can be more finely grained. The “added value” from the listing can be the amount of refereeing that happened, and (as with our nJournal) the type of refereeing can be shown. In the case of a paper that the board has decided themselves to list, the letter to the author might say, “We’d like to list your paper in our yearly summary of advances in Topology. However, our referee has said that it needs the following polishing before we do that. Would you be willing to do this so that we can list it?”"

Someone needs to start one of these boards for math and/or physics. We can discuss it endlessly, but the time is ripe for action. If someone starts one, other people will start more, and natural selection will optimize the concept.
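
To make Stacey's proposal concrete, here is a minimal sketch (in Python) of the records such a board might keep. Nothing here comes from the post itself: all class, field, board names, and the arXiv id are hypothetical, chosen just to mirror his four points.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    """How a paper came to the board's attention (point 3)."""
    AUTHOR_SUBMITTED = "submitted by the author"
    BOARD_SELECTED = "listed by the board's own decision"
    THIRD_PARTY = "recommended by an interested party"

@dataclass
class Certification:
    """A board's endorsement of one specific arXiv version (points 1 and 2)."""
    board: str        # e.g. "Edinburgh Mathematical Society"
    arxiv_id: str     # the paper itself lives on the arXiv, not with the board
    version: int      # the certified version; later revisions aren't covered
    route: Route      # how the paper was recommended (point 3)
    refereeing: str   # how much refereeing happened (point 4)
    period: str       # which monthly/quarterly/annual list it appears on

# Point 2: one paper can be certified by any number of boards,
# with different but overlapping scopes.
listings = [
    Certification("Edinburgh Mathematical Society", "1201.0001", 2,
                  Route.BOARD_SELECTED, "light plausibility check", "2012"),
    Certification("Topology Review Board", "1201.0001", 2,
                  Route.THIRD_PARTY, "full anonymous refereeing", "2012-Q1"),
]
```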
 
This sounds like just what is needed. However, it may help if there is a smooth transition from the journal system to the review boards. I would suggest the following:
(1) The author can submit to a review board, i.e. request consideration of the paper (this cuts down on both chance and the editors' work).
(2) There is no need to restrict to 'serial submission', but where the work has been submitted should be public, so someone who submits everywhere will not be taken seriously.
(3) Things that look to the outside world like journals need to be built on these, with editors using the verdicts of the review boards plus potentially further (open or anonymous) refereeing. Again, journals can be created, experiment, and be subject to natural selection.
 
In many fields it is standard practice to do double blind reviewing as a way of promoting objectivity and fairness. It seems like this is no longer an option under this proposal.
 
+Gabriel Perren - I mentioned the good points of the Faculty of 1000 here:

http://johncarlosbaez.wordpress.com/2012/01/31/the-faculty-of-1000/

but a trackback to that blog article mentioned some bad points:

http://gasstationwithoutpumps.wordpress.com/2012/01/30/f1000-research-yet-another-open-access-publisher/

I'd need to learn more about them to say something truly intelligent. For now, I can just say mathematicians and physicists should learn about this experiment and try to do something similar but better!
 
They look good - especially papercritic.com (but does it link to the arXiv?). But this is the first I have heard of them. Start a movement to get active on these and I will be glad to join, and encourage graduate students to join the fray (I have already started getting them to update Wikipedia and formalize mathematics as course assignments).
 
Just to clarify: I don't think these are substitutes yet, for practical reasons; rather, they can play a complementary role and grow until, hopefully, they can become substitutes.
 
+Peter Krautzberger wrote: "Couldn't we try to skip new editorial structures?"

Review boards can take arbitrary forms, from things that resemble journals to crowd-sourced free-for-alls to "Ed Witten's favorite physics papers" to The Faculty of 1000 to arbitrary other things.

Until the bean-counters at universities consider one or more of these review boards to be a trustworthy way to make hiring and promotion decisions, faculty will feel compelled to publish in the existing journals. So, that's a crucial factor to consider. But an amusing and little-noticed fact is that nobody needs to subscribe to these journals for this system to work. All we need to know is who got papers accepted in which journal.

It turns out that in quantum information there was a very useful crowd-sourced system for rating papers on the arXiv... until hackers tried to attack the server it was hosted on and the guy running it got a job at Google. Now it's gone. :-(
 
Systems like papercritic seem ripe for a Google PageRank-style approach: each paper gets a score calculated from reviews, citations, etc. Reviews (positive or negative) from people with highly scored papers contribute more, and so on. Arguably you could even have a Slashdot-style metareview where people review the reviews to further refine how "useful" they are.

You could imagine the resulting score serving as a paper-specific impact factor, suitable for universities assessing the significance of papers.
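
As a toy illustration of that PageRank-style idea, here is a sketch in Python: each paper's score is iterated to a fixed point, with every review weighted by the current score of a paper written by the reviewer. The damping constant, the ±1 verdicts, and the update rule are invented for the sketch, not taken from PageRank verbatim.

```python
# Hypothetical data: reviews[p] lists (reviewer's paper, verdict) pairs,
# where verdict is +1 for a positive review and -1 for a negative one.
reviews = {
    "paperA": [("paperB", +1), ("paperC", +1)],
    "paperB": [("paperA", +1)],
    "paperC": [("paperB", -1)],
}

DAMPING = 0.85  # as in PageRank: a baseline score every paper gets for free

scores = {p: 1.0 for p in reviews}
for _ in range(50):  # iterate toward an approximate fixed point
    new = {}
    for paper, revs in reviews.items():
        # Reviews from authors of highly scored papers count for more.
        weighted = sum(verdict * scores[src] for src, verdict in revs)
        new[paper] = (1 - DAMPING) + DAMPING * max(weighted, 0.0)
    scores = new

print(scores)  # paperA, with two positive reviews, ends up highest
```

Real PageRank also normalizes the scores at each step, and any production system would need defenses against mutual-admiration rings of reviewers.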
 
I feel it is much better not to view reviewing as primarily rating (though a layer on top could rate). The main purposes should be summarizing, making useful comments, alerting others to the paper, and so on.
 
Agreed. Even in terms of "rating" a paper, there are many different metrics: the paper might be interesting but not important, or comprehensive but not of great impact (or very narrow).

It's hard to imagine, though, that someone like Google couldn't combine all of that data into a meaningful algorithm...
 
Again, from the viewpoint of an academic insider, the main role of refereeing, journals etc. is to rate papers so university administrators can decide who deserves to be hired, who deserves tenure and promotion, and which departments deserve more money. The old system can't go away until a new system of doing these jobs is found.
 
Indeed, what good is a boycott of Elsevier when their journals have high impact factors and hiring committees use impact factors in hiring decisions? Tenured academics can safely boycott the commercial journals, but is a young academic who pledges not to publish in an Elsevier journal committing career suicide? And what are the established scientists who join the boycott doing about it?
 
+Miguel Carrion Alvarez it is okay if young researchers do not join the boycott for these perfectly understandable reasons; what matters is that enough prominent, established scientists do, for that's what it takes to collapse the impact factor.
 
+Dmitry Roytenberg your mention of "keeping the bean counters happy" reminds me of the financial havoc wreaked by credit rating agencies and the way they have been written into financial regulation... to keep the bean counters happy about people's credit risk policies.

There is really no substitute for due diligence, in particular there is no substitute for judging the quality of research yourself. And if you can't judge the quality of research, what are you doing on a research committee?
 
+Miguel Carrion Alvarez wrote: "Indeed, what good is a boycott of Elsevier when their journals have high impact indices and hiring committees use impact factors in hiring decisions?"

Simple: the boycott draws attention to these deeper problems. You can see already that it's gotten a lot more journalists and scientists writing and talking about these problems, and planning the next steps. That's important.
 
+Miguel Carrion Alvarez - also, you shouldn't think someone decided to boycott Elsevier because that was an optimal strategy. Gowers got Tyler Neylon to start that boycott website because Gowers was already boycotting Elsevier, as were a lot of other mathematicians - and he thought it might be good to make this publicly visible. I don't think anyone expected it to catch on as much as it did! Now that it has, it's time to plan the next move.
 
+Dmitry Roytenberg - ideas like that sound good. What we need is for someone who likes programming to actually try one.
 
+Miguel Carrion Alvarez "research" is a very broad notion. These committees have to pass judgement on applications coming from disparate areas, so a simple metric like impact factor is often the only thing they have to go on. I agree that this system is very far from ideal, and there's a lot of randomness involved in eventual awarding of grants, positions and the like. I hope a better way can eventually be found, but let's take one step at a time. Committees want a simple numeric score they can trust, so let's give them that with the middle man excised.
 
Thanks for the blog posts about F1000 +John Baez. I was excited by the look and idea of F1000 but now I am not so sure. Could definitely do something better.
 
I like the idea of review boards attached to arXiv-type archives. You upload/submit a paper and the review board goes over it. After some to-ing and fro-ing, the review board decides that the paper passes peer review and is in an acceptable state. They then give it an electronic stamp like "published", with a citation. Of course, being online, the paper could continue to be edited and evolve, possibly even getting multiple "published" stamps if sufficient new work is done (with version control to keep track of the multiple versions).

We could be sneaky and give review boards names that sound like journals. That way the citations to stamped/published papers would look perfectly acceptable in your CV with your other papers and funding agencies and promotion bodies might not notice the difference.

This is obviously very similar to the status quo but would enable all the science 2.0 things we want to be incorporated.
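
In that sneaky spirit, here's a tiny sketch of what such a stamp could print; the board name and citation format are made up, with the arXiv version number doing the work of distinguishing multiple "published" stamps.

```python
def stamp_citation(board, arxiv_id, version, year):
    """Format a board's stamp so it reads like a journal citation on a CV.

    Hypothetical format: the 'journal' is the board's name, and the arXiv
    version number stands in for volume/issue, so a paper that earns a
    second stamp after substantial revision cites differently.
    """
    return f"{board} {year}, arXiv:{arxiv_id}v{version}"

print(stamp_citation("Annals of Certified Topology", "1201.0001", 2, 2012))
# -> Annals of Certified Topology 2012, arXiv:1201.0001v2
```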
 
The wiki model, combined with a social networking model, is the best option we've got. Offer edited (peer-reviewed), ever-changing, current "objective reviews" of each subject, as well as individual profiles that show personal theories/results. This way you can see an overview of what the majority think, and also go deeper into each individual theory/experience. Because, as always, the best information is never the mainstream "objective" view/experience; it's somewhere out on the fringes, quietly being ignored. But the mainstream view is still important, as it provides healthy critique for that fringe information.
 
.."and natural selection will optimize the concept." The genetic algorithm of journals! Let's publish that one +John Baez lol :)
 
I think it is time to award the "Gandhi noticeable fairness force condensation nucleus of general interest" badge to +John Baez (in German I could concatenate this to get one single word :-)
 
Agreed. Ratings are not important (but not completely useless either, just like MO reputation points or number of Wikipedia edits -- they tell us where to look for actual content).

The key, I think, is the tough cultural change of getting people to talk openly about each other's work.
 
Doing it is simple. I've done it already, with a specific focus on simple answers that anyone can use (especially school-age folks). It's called Binikou.org. If I can do it, so can anyone with even a bit of DIY spirit, or friends who have some. :-) And if you're at a school, you can get a free website too, so it doesn't cost you the $130 or so in hosting fees that I pay. I imagine an official non-profit would probably be the best home base for this project, so that it's easier for people to see the value of donating resources to it, and so that it's better protected from corporate interests.

The keys to success come in making sure that everyone who edits has to also provide their own page with background information about their information, in making sure that the main page on a given topic is inclusive while also allowing for popularity bias (what people call the "objective" view), and in promoting the project to a wide variety of interested individuals and groups (something my own project hasn't yet achieved).
 
+Turil Cronburg - It may be simple to set up an arXiv overlay website, but we still need someone to actually do it. I haven't seen anyone yet say they both can do it and want to do it.
 
John, if you read what I said, I'm not talking about "an arXiv overlay website", I'm talking about a wiki. A wiki that allows for sharing of information (both theoretical and observed) about solving problems in all the topics humans care about, using science - and aiming to have both a more objective overview on each topic (the main page for a topic) and a page for each individual editor where they can go into more detail about their theories and observations. A wiki which I have indeed already set up. You're welcome (in fact encouraged!) to use it. :-) If you really want one that is less understandable by everyone (more academic-speak, and less DIY focused), I'd be happy to set up a sister project on my website.
 
+Turil Cronburg - okay, sorry for the mixup: actually right now I'm looking for someone to create something that looks superficially like the arXiv, and is linked to the arXiv, but where people can write comments about each paper, and "rate" it according to a simple system, and see the ratings for each paper. So it's more like a blog linked to the arXiv than a wiki.
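
For anyone inclined to build that, here is a bare-bones sketch. The metadata fetch uses the arXiv's real public API (an Atom feed served at export.arxiv.org); everything else - the in-memory ratings and comments store and the function names - is invented for illustration.

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def fetch_title(arxiv_id):
    """Look up a paper's title via the arXiv API, which returns Atom XML."""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())
    entry = feed.find(f"{ATOM}entry")
    return entry.find(f"{ATOM}title").text.strip()

# Hypothetical in-memory store: comments and 1-5 star ratings per paper.
comments = {}  # arxiv_id -> list of comment strings
ratings = {}   # arxiv_id -> list of ints from 1 to 5

def rate(arxiv_id, stars, comment=None):
    """Record a rating (and optionally a comment) for a paper."""
    ratings.setdefault(arxiv_id, []).append(stars)
    if comment:
        comments.setdefault(arxiv_id, []).append(comment)

def summary(arxiv_id):
    """Return (title, average rating, comments) for one paper."""
    rs = ratings.get(arxiv_id, [])
    avg = sum(rs) / len(rs) if rs else None
    return fetch_title(arxiv_id), avg, comments.get(arxiv_id, [])

rate("1201.3611", 5, "Clear and useful; deserves a wide audience.")
print(summary("1201.3611"))
```

A real site would persist the ratings and authenticate raters, but the overlay idea itself needs nothing more from the arXiv than this read-only API.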
 
John, yeah, my point was that this isn't an effective model if your goal is to create a more effective form of science knowledge sharing system. I'm not sure what you were agreeing with initially, but I was initially saying that for us to have both the best information (novel/unpopular) as well as the average information (popular), we need a publicly editable wiki for the main topic, serving as the peer reviewed summary of all the collected theories and observations on a topic, and then profile pages for all the editors where they get to say their unique experiences related to the topics they edit. This is the most efficient way to provide the different kinds of information we need to learn about how things work.

The thing you're talking about is pretty much already in existence. It's called Reddit, where people can comment and vote on the usefulness of stuff on other websites. :-) It's sometimes useful, but not really efficient or effective, educationally, as there is no scientifically organized objective view (a publicly editable summary as the main page for a topic, organized by physical properties and relationship to human interests), and most of the subjective views are easily lost due to voting (the popularity contest approach). Any time you're voting, you're automatically losing the best stuff.