Open Access Journals: What’s Missing?
by Thomas Arildsen
I just came across this blog post by Nick Brown: Open Access journals: what’s not to like? This, maybe… That post is also what inspired the title of mine. His post really got me into writing mode, mostly because I don’t quite agree with him. I left this as a comment on his blog, but I felt it was worth repeating here.
This is a (slightly modified) comment to Nick’s post above.
Nick’s post is a great conversation starter. I share his worry that author-paid publishing charges could tempt journals to accept papers a bit too easily. That being said, I have to disagree on some other points.
This is really a minor point related to a comment about how authors on The Winnower can get a DOI assigned to their papers without any peer review:
Sadly, many seem to equate having a DOI with being citable. It is not the fact that a document has a DOI that makes it citable; as a publisher, you can assign DOIs to just about anything as long as you promise to preserve it reasonably well. A DOI does make a document a lot easier to reference unambiguously, though. Whether a document is citable is really up to the journal (or other venue) in which you are trying to cite it. There seems to be a growing perception in online discussions these days that a document having a DOI means we can safely throw source criticism out the window.
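As a small aside on what a DOI actually buys you: it is a persistent, machine-resolvable identifier, nothing more. The sketch below uses DOI.org content negotiation to resolve a DOI to citation metadata; the DOI string itself is a placeholder, not a reference to any particular paper, and says nothing about whether the document was reviewed.

```python
# Minimal sketch: resolve a DOI to citation metadata via DOI.org content
# negotiation. The DOI below is a placeholder -- substitute any registered DOI.
# Resolving successfully only shows the identifier is unambiguous; it is not
# a seal of approval.
import requests

doi = "10.1000/xyz123"  # placeholder DOI, not a real paper

resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=10,
)
resp.raise_for_status()
meta = resp.json()
print(meta.get("title"), meta.get("issued"))
```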
The fact that the author can decide when a paper is ready, and thereby have a DOI assigned to it, does not mean that everyone has to take that as a seal of approval. You can use the open reviews (or the lack of them) to help you judge whether it is worth citing (or even reading).
Now to the more important part:
I think the lines between “respectable” and “spam” journals will begin to blur, but that does not have to be a problem. The trouble is that the services we as authors are really subscribing to from journals are prestige and a seal of approval. We borrow the prestige of the journal in the hope that our paper will eventually live up to it – which it probably will not (Nine reasons why Impact Factors fail and using them may harm science – see point 1). As for the seal of approval, we simply have to take the journal’s word for it, because in most cases we cannot see what the reviewers said about the papers. All that is left for journals to provide is subject classification for easier discovery, and even that is changing (see for example the rise of services like Sparrho).
This is also why distinguishing good journals from bad ones does not have to be such a big problem. The solution is to judge papers on their individual merits instead of on where they are published. That is easier said than done, of course. It would also make it irrelevant where a paper is published, which would in turn destroy the traditional publishers’ business model, so they will of course fight it.
My proposal is to make peer reviews open. Of course this also entails problems, as Nick has pointed out, because it is hard to attract reviewers. In the end, this could be the value proposition of future journals: the journal that can attract the best and most relevant reviewers wins. If a journal takes care of the review process, the indicator of quality that the reviews provide will be available quickly, and you will not have to sit around for years waiting to get cited. This provides a qualitative article-level metric that is a lot more useful than wrongly judging a paper by its journal’s impact factor (JIF).
I replied to the G+ post, then realised I should rather have put it here. So: open peer review would be awesome; the problem seems to be social, since few people are willing to go open with their peer reviews. Maybe sometime in the future. But there is something that is better than peer review. Because, no matter how it is done, peer review is an opinion piece. Better than an opinion piece is reproducibility. If the work also contains the means that allow anybody (willing to try) to reproduce it, then that trumps the scientific purpose of peer review. (It does not trump the social purpose of peer review, which is to be a bottleneck on the path to public admission of being a “peer”.)
Reproducibility is also an important issue, but as I see it a somewhat different one. I absolutely think it is something we as authors should strive as hard as we can to attain. However, a paper could present entirely reproducible results yet draw completely ludicrous conclusions from them, or the results could be completely obsolete because the authors are unaware of the state of the art. These are good examples of where peer review is needed as well. By the way, reproducibility still has a lot of room for improvement in the literature I read.
Reproducibility is the only thing that matters from the point of view of the scientific method. A body of work is scientific if it can be tested; the rest is a matter of opinion. But this is hard unless one has all the data (including the algorithms used to process the raw data from experiments). Peer reviews are more a transfer of authority from a body of peers to the work; they can never be considered validation in the scientific sense. In the past, this sharing of all data and algorithms was an impossible demand, but now this is no longer true – at least in a relative sense. Instead of criticising reproducibility from an absolute standpoint (it can never be perfectly achieved, because there is always some detail that is not communicated to the one who wants to test the work), compare it with a review by a peer, done today or yesterday. How can the review be anything other than a subjective opinion if the reviewer has not validated the work by reproducing it? Which is better, in real circumstances: a review done by reading a PDF, or a re-run of a body of work made easy by present technical means?
I agree that reproducibility is very important, but that does not mean that review cannot be useful. Papers contain plenty of claims that may not hold, or experiments that the authors claim support some hypothesis but perhaps do not, because the authors overlooked something. This can often be weeded out in review without even trying to reproduce the results.
Further, in the engineering area I work in – signal processing – papers are usually not built around a hypothesis that can be unambiguously tested. More often, papers are about some new algorithm, and the idea is to investigate how much better we can do with it. Here, reproducible usually means that you can re-run the authors’ experiments and, sure enough, get the same results – typically some performance metric. But if the authors, for example, compared their new method to some hopelessly outdated baseline and left out other, more relevant methods to make their own look better by comparison, simply reproducing their results will not reveal this, whereas a reviewer familiar with the area can easily point it out.
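To make concrete what “reproducing the results” often amounts to in this setting, here is a purely hypothetical sketch: re-run the released experiment code and check that the reported metric comes out the same. Every name and number here (run_experiment, REPORTED_MSE) is invented for illustration; the point is that such a check can pass even when the comparison behind the numbers is unfair.

```python
# Hypothetical reproduction check: does a re-run match the reported metric?
import numpy as np

REPORTED_MSE = 0.042  # value claimed in the (hypothetical) paper

def run_experiment(seed=0):
    """Stand-in for the authors' released experiment code; returns a metric."""
    rng = np.random.default_rng(seed)
    return float(rng.normal(loc=0.042, scale=1e-4))

mse = run_experiment(seed=0)
print(f"re-run MSE = {mse:.4f}, reported MSE = {REPORTED_MSE:.4f}")
assert np.isclose(mse, REPORTED_MSE, atol=1e-3), "could not reproduce"

# Passing this check says nothing about whether the baselines the authors
# compared against were the relevant ones -- that is what a reviewer catches.
```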
Agreed, review and reproducibility are both needed.
There are disciplines, like the social sciences and humanities, where reproducibility is almost impossible and review is the only option. Recently in psychology, only 39% of classical experiments were found to be reproducible.