Open Access Journals: What’s Missing?

by Thomas Arildsen

I just came across this blog post by Nick Brown: Open Access journals: what’s not to like? This, maybe… That post also inspired the title of mine. It really got me into writing mode, mostly because I don’t quite agree with him. I left the following as a comment on his blog, but I felt it was worth repeating here.

What follows is a (slightly modified) version of that comment.

Nick’s post is a great conversation starter. I share his worry that author-paid publishing charges could tempt journals to accept papers a bit too easily. That being said, I have to disagree on some other points.

This is really a minor point related to a comment about how authors on The Winnower can get a DOI assigned to their papers without any peer review:
Sadly, many seem to equate having a DOI with being citable. It is not the fact that a document has a DOI that makes it citable; as a publisher, you can assign DOIs to just about anything as long as you promise to preserve it reasonably well. A DOI does make a document much easier to reference unambiguously, though. Whether something is citable, however, is really up to the journal (or other venue) in which you are trying to cite the document. There seems to be a growing perception in online discussions these days that a document having a DOI means we can safely throw source criticism out the window.
The fact that the author can decide when a paper is ready, and thereby cause it to get a DOI assigned, does not mean that everyone has to take the DOI as a seal of approval. You can use the open reviews (or the lack of them) to help you judge whether the paper is worth citing (or even reading).

Now to the more important part:
I think the lines between “respectable” and “spam” journals will begin to blur, but that does not have to be a problem. The real issue is that the services we as authors are actually buying from journals are prestige and a seal of approval. We borrow the prestige of the journal in the hope that our paper will eventually live up to it – which it probably will not (Nine reasons why Impact Factors fail and using them may harm science – see point 1). As for the seal of approval, we simply have to take the journal’s word for it, because in most cases we cannot see what the reviewers said about the papers. All that is left for journals to provide is subject classification for easier discovery, and even that is changing (see for example the rise of services like Sparrho).
This is also why distinguishing good journals from bad ones does not have to be such a big problem. The solution is to judge papers on their individual merits instead of on the journal they are published in. This is easier said than done, of course, because it would make it irrelevant where a paper is published. That would in turn destroy the traditional publishers’ business model, so they will of course fight it.

My proposal is to make peer reviews open. Of course this also has its problems, as Nick has pointed out, because it is hard to attract reviewers. In the end, this could be the value proposition of future journals: the journal that can attract the best and most relevant reviewers wins. If a journal takes care of the review process, the indicator of quality that the reviews provide becomes available quickly, and you do not have to sit around for years waiting for citations to accumulate. This provides a qualitative article-level metric that is far more useful than wrongly judging a paper by its journal impact factor (JIF).