Adventures in Signal Processing and Open Science

Tag: open science

An emerging consensus for open evaluation: 18 visions for the future of scientific publishing

I just found this treasure trove of papers on open evaluation in science thanks to this post by Curt Rice that sums it all up very well: Open Evaluation: 11 sure steps – and 2 maybes – towards a new approach to peer review

DAT versioned data

I just came across this presentation shared by Karthik Ram on Twitter (see also http://inundata.org/2013/02/28/version-control-for-science/). It describes a project that tries to create a sort of git for data. It still seems to be at a very early stage, but it looks very interesting.
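
The core idea behind a "git for data" is easy to illustrate, even though the dat tool itself is too young to demonstrate properly yet: every version of a dataset is addressed by a hash of its contents, so unchanged data is never stored twice and old versions remain retrievable. Here is a minimal Python sketch of that idea (the file name is hypothetical, and this illustrates the concept only, not dat's actual interface):

    import hashlib
    import json
    import shutil
    import time
    from pathlib import Path

    def snapshot(data_file, repo_dir=".data_versions"):
        """Store a copy of data_file addressed by the hash of its contents."""
        repo = Path(repo_dir)
        repo.mkdir(exist_ok=True)

        digest = hashlib.sha256(Path(data_file).read_bytes()).hexdigest()
        target = repo / digest
        if not target.exists():
            # Identical contents hash to the same name, so nothing is duplicated.
            shutil.copyfile(data_file, target)

        # Append a log entry so the version history can be listed later.
        entry = {"file": str(data_file), "hash": digest, "time": time.time()}
        with open(repo / "log.jsonl", "a") as log:
            log.write(json.dumps(entry) + "\n")
        return digest

    # Hypothetical usage: take a snapshot after each update to the data set.
    # snapshot("measurements.csv")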

Publishing Models With Open Review

All of the platforms I mentioned previously are in the third and fourth quadrants above, and they are all review platforms decoupled from the actual publication of papers. Whereas PubPeer and selectedpapers.net seem to aim at being independent post-publication peer review and discussion platforms, Publons seems to be pursuing both that approach and offering itself as a “review service provider” to journals.

One of the visions I have seen people describe is that of a publishing model where papers are published as “preprints” and reviewed openly in some forum for that purpose – for example the platforms I mentioned. Journals are then simply formed as collections of these openly accessible papers, based on their openly accessible reviews. This kind of journal does not own the papers (in terms of copyright and restricted access), and the value it adds is simply to help readers filter through the jungle of available papers and to put a “seal of quality” on the collected papers. This is probably at the more idealistic and radical end of the spectrum, but I like the idea.

A less radical but similar idea is that of portable peer review, where authors can submit papers for review to a “review service provider” and then take the review comments to a journal (presumably after revising the paper according to the comments) for consideration for publication. Should the journal not be interested, you can take the paper to another journal – still along with the same review comments. This could save some work and time in the review process. This seems to be what, for example, Peer Evaluation, Libre and Publons are doing.

I am particularly interested in how well Peer Evaluation and Libre will fare. These two platforms are created by organisations that appear to be non-profit and seem quite open and idealistic. Collective Developments is behind peerevaluation.org. From their homepage:

Peer Evaluation is an Open Access initiative allowing for the dissemination and evaluation of scholarly works. As a supplement to quantitative reputation metrics (H index, citation counts…), Peer Evaluation comprises a qualitative reputation system powered by peers alone. Finally, its business model is the one of a community interest project. (http://www.collectivedevelopments.org/)

The organisation behind Libre is Open Scholar C.I.C. Their homepage states:

Open Scholar C.I.C. is a not-for-profit organisation whose activities, assets and profits are dedicated to the purpose of providing benefit to the scientific community. Our mission is to develop ideas and tools that promote open and transparent scientific collaboration for a faster, more efficient and natural organisation, evaluation and dissemination of global knowledge.

Particularly Open Scholar sounds interesting to me. As they also write:

Our community is open to new members who wish to join efforts towards building a new culture of transparent academic collaboration for the benefit of global knowledge.

Well, I just might give it a shot and accept that invitation. I am eager to try to find some way to contribute to this open science publishing movement.

Inspiring Open Science Talk by Arvind Narayanan

I came across this talk by Arvind Narayanan a couple of days ago. He talks about how he has succeeded in publishing in unconventional ways and makes suggestions on how we might change the current publishing model – a topic that has become a bit of a hobby of mine lately:

http://33bits.org/2013/07/15/academic-publishing-as-ruinous-competition-is-there-a-way-out/

Openness And Anonymity in Peer Review

About a month ago, I attempted to give an overview of a few open review platforms. In relation to that, the question of anonymity of the reviewers came up. I have given it some more thought and would like to discuss some of these thoughts.

First of all, when I talk about open/closed and anonymous/identified review, I would like to point out that I consider these to be two independent “dimensions” of scientific peer review:

           Anonymous   Identified
Closed         1            2
Open           3            4

So, reviews can either be closed or open and at the same time, anonymous or identified.

Traditional journals have, to the best of my knowledge, more or less all been in the first “quadrant” above, i.e. both closed in the sense that the reviews are not available to the readers of the journal after publication, and anonymous in the sense that the authors cannot see who the reviewers are. I do not know of any examples of publishing in the second quadrant. UPDATE: Matt Hodgkinson and Magnus Rattray were kind enough to give me two examples of review in the second quadrant as well. Matt Hodgkinson:

…reviewers at the BMJ have to sign their comments http://www.bmj.com/about-bmj/resources-authors/peer-review-process and this is optional at PLOS ONE http://www.plosone.org/static/reviewerGuidelines#anonymity. Neither publishes the reviews.

Magnus Rattray:

In the NIPS conference reviewers and programme chairs are identified to each other but anonymous to the authors. This introduces some advantages of openness (improved review quality, more chance to identify conflicts) while maintaining some advantages of anonymity (fear of reprisals or ruined friendships).

Publishing models and experiments are starting to pop up in the third and fourth quadrants – open review, where the reviewers are anonymous in some cases and in other cases identified.

Let’s take a look at the anonymity issue. I am not sure there is a clear answer to what is best, but there are clearly both advantages and drawbacks of anonymous and identified reviews, respectively, that I think we need to discuss. Below I list a few advantages and drawbacks of anonymity, where I assume that a drawback of anonymous review is an advantage of identified review and vice versa.
Drawbacks

  • Reviewers do not get credit for their work. They cannot, for example, reference particular reviews in their CVs as they can with publications.
  • It is relatively “easy” for a reviewer to provide unnecessarily blunt or harsh critique.
  • It is difficult to guess if the reviewer has any conflict of interest with the authors by being, for example, a competing researcher interested in stalling the paper’s publication.

Advantages

  • Reviewers do not have to fear “payback” for an unfavourable review that is perceived as unfair by the authors of the work.
  • Some (perhaps especially “high-profile” senior faculty members) reviewers might find it difficult to find the time to provide as thorough a review as they would ideally like to, yet would still like to contribute and can perhaps provide valuable experienced insight. They can do so without putting their reputation on the line.

What else am I missing?

I have put a publicly editable version of this list on Google Drive – feel free to add points if you like, or comment below: https://docs.google.com/document/d/1RMEHerNSpnR9yg98qfHv1hlKjvWznVGpDUPUGC_cFkg/edit?usp=sharing

The Open Access Button

Recently, David Carroll and Joseph McArthur, a medical student and a pharmacology student from London, came up with a great little idea: why not develop an easy way for people looking to read scientific papers online to report when they encounter a paper they cannot access because they have to pay for it? That is, when they “hit a paywall”. David and Joseph set out to realise this idea by developing a browser button that users can click when that happens. The idea is to record the reported incidents in a database in order to quantify how big this problem is. The idea has been well received and lots of people seem to have joined the effort to help develop it. You can read more about it here:

http://blogs.plos.org/thestudentblog/2013/08/20/if-someone-hits-a-paywall-in-the-forest-does-it-make-a-sound-the-open-access-button/

and follow their progress here:

http://oabutton.wordpress.com/.
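
Just to make the mechanism concrete: each click of the button essentially sends a small report – the blocked article’s URL, perhaps its DOI, and a comment – to a central database, where the incidents can later be counted. Here is a minimal Python sketch of what such a report could look like; the endpoint and field names are purely hypothetical and not the actual Open Access Button API:

    import json
    import time
    from urllib.request import Request, urlopen

    # Hypothetical endpoint -- the real backend is still being developed,
    # so this only sketches the general idea of reporting an incident.
    REPORT_URL = "https://example.org/api/paywall-reports"

    def report_paywall(article_url, doi=None, comment=""):
        """Send a single 'I hit a paywall' incident so it can be counted later."""
        payload = {
            "url": article_url,
            "doi": doi,
            "comment": comment,
            "timestamp": time.time(),
        }
        req = Request(
            REPORT_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urlopen(req) as response:
            return response.status  # e.g. 201 if the incident was stored

    # Hypothetical usage (would fail against the placeholder endpoint above):
    # report_paywall("http://dx.doi.org/10.1234/example", doi="10.1234/example")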

Open Review of Scientific Literature

I recently became interested in open review as an ingredient in open science. There has been a lot of talk about open access in recent years. That is, in itself, a very important ingredient – for example for the sake of fairness: the outcome of research, which is often funded by taxpayers’ money, should also be open to the public. It is also important for advancing science in general, because open access helps ensure that more scientists have access to more of the existing knowledge that they can build upon to bring our collective knowledge forward. My interest in this area was in part spurred by this very inspiring discussion initiated by Pierre Vandergheynst.

Open access and open review are both parts of an ongoing movement that I believe is going to disrupt the traditional publishing model, but more about that later. Here, I want to focus on open review.

Traditionally, we have been used to reviews of papers submitted for publication in a journal being closed and typically blind or even double-blind. A closed review process means that, for a given published paper, readers simply have to trust that the reviews were done thoroughly and by competent reviewers, so that the contents of the paper can actually be relied upon. Luckily, I believe we usually can trust this, but the recent Reinhart & Rogoff episode, for example, shows that mistakes do slip through. For reasons like this, and for the sake of open science per se, I believe we need more transparency in the review process (we also need published open data, but that is another story). One way to do this is to make reviews open so that we as readers can see what comments the reviewers made on the paper.

Reviewing a paper and then publishing it if the reviews assessed it as good enough is pre-publication peer review. Publishing the reviews after publication improves transparency, and as far as I can see, PeerJ (IMO an admirable open access publisher, unfortunately not in my field) is currently practising this. You can also take the somewhat bolder step of publishing papers immediately and then conducting the review in the open afterwards (post-publication peer review). As far as I can see, F1000 Research is doing this. In my opinion, this is an even better approach, as it allows public insight into papers and their reviews even for papers that do not end up being approved by the reviewers and thus traditionally “published”.

There is also the question of whether reviews should be kept blind (or double-blind) or the reviewers’ identities should be open as well. I believe there are several arguments for and against this. One argument for fully non-blind reviews could be that a reviewer should be able to stand by what she or he says and not “cowardly” hide behind anonymity. On the other hand, especially junior reviewers may be reluctant to disclose their honest opinion about a paper out of fear that they will end up in “bad standing” with the authors. Then again, openly linking reviews to reviewers can also help them build a reputation by conducting reviews of good quality – read more about this in Pierre’s discussion. Another contribution to this debate was made by the founders of PubPeer (more about PubPeer at the end…).

Finally – and again this fits into the bigger picture of the ongoing disruption in scientific publishing – the open review approach can also (and should ultimately, IMO) be taken out of the hands of traditional publishers. Authors can choose to upload their papers, for example, to open pre-print archives (such as arXiv), to their institutional repositories, or even to their own homepages. Reviews can then be conducted based on these papers. Ultimately, publication could turn into a system where “publishers” collect such papers based on their reviews and “publish” the ones they find most attractive, but that is a longer story I will get back to some other day. This approach to open peer review is really what I wanted to get to today. The thing is, several services are starting to pop up that offer platforms for open review. The ones I know of so far are:

  • PubPeer
  • Publons
  • SelectedPapers
  • PaperCritic

I will try to tell you what I know so far about these:

PubPeer (pubpeer.com)
This is an open review platform where you can comment on any published paper with a DOI. It also works with PubMed IDs and arXiv IDs. Thus, in principle it covers both already (traditionally) published papers and pre-prints (at least if they are on arXiv). I guess this also includes research output that is not necessarily a paper, such as data, slide shows or posters from, for example, figshare, which assigns DOIs to uploaded content.
PubPeer received some attention recently when several flaws in a published stem cell paper were pointed out in a comment on PubPeer.
PubPeer has been criticised for being anonymous; both its founders and reviewers commenting on the platform are kept anonymous.
You can only sign up as a user on the platform if you are a first or last author of a paper they can find. So far, I have not been able to do this, as none of my papers seem to be in the areas they focus on. Furthermore, searching for yourself as an author in their database takes forever (seems like it is not that stable), so I have not actually gotten to the bottom of whether I can actually find my own name there.
[Update: I managed to sign up now, using a DOI of one of my papers. PubPeer also made me aware that anyone can comment on the platform without actually signing up as an author.]
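
As an aside, building on identifiers like these is what makes such platforms easy to bootstrap: given a DOI, a paper’s basic metadata can be fetched automatically. Here is a minimal Python sketch using DOI content negotiation – a general Crossref/DataCite facility, not PubPeer’s own API – where the example DOI is just a placeholder:

    import json
    from urllib.request import Request, urlopen

    def paper_metadata(doi):
        """Fetch basic metadata (title, authors, ...) for a DOI as CSL JSON."""
        req = Request(
            "https://doi.org/" + doi,
            headers={"Accept": "application/vnd.citationstyles.csl+json"},
        )
        with urlopen(req) as response:
            return json.loads(response.read().decode("utf-8"))

    # Placeholder DOI -- replace with a real one, e.g. from one of your own papers:
    # print(paper_metadata("10.1234/example-doi").get("title"))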

Publons (publons.com)
This is an open review platform somewhat similar to PubPeer. It supports papers from Nature, Science and a number of physics journals, as well as arXiv. Unlike PubPeer, Publons is not shrouded in anonymity. I guess it is a matter of opinion which approach you support. At least Publons does not seem to have the same sign-up problems as PubPeer.

SelectedPapers (selectedpapers.net)
This is a somewhat different platform from the two above. While it is an open review platform like PubPeer and Publons, it goes even further in the sense that it explicitly tries not to become a “walled garden” type of service and publishes review comments outside the platform itself, ideally on the reviewers’ platform of choice. SelectedPapers is still in its early stages of development and is currently in alpha release. So far, it only supports Google+ as the platform for publishing the reviews, but they promise that more are to come. I am watching this initiative closely as I find their concept extremely interesting. A lot of interesting reading about the philosophy behind the SelectedPapers network can be found here:

  1. http://thinking.bioinformatics.ucla.edu/2011/07/02/open-peer-review-by-a-selected-papers-network/
  2. http://johncarlosbaez.wordpress.com/2013/06/07/the-selected-papers-network-part-1/
  3. http://johncarlosbaez.wordpress.com/2013/06/14/the-selected-papers-network-part-2/
  4. http://johncarlosbaez.wordpress.com/2013/07/12/the-selected-papers-network-part-3/
  5. http://johncarlosbaez.wordpress.com/2013/07/29/the-selected-papers-network-part-4/

PaperCritic (papercritic.com)
I have not investigated this platform in detail. It is based on Mendeley and the whole platform seems to revolve around you being a Mendeley user. That is too much “walled garden” for me, so I am not very interested in this one. Furthermore, with its strict dependence on Mendeley it seems a likely candidate to be swallowed by Elsevier if it becomes successful (like Mendeley itself).

I hope this was a useful introduction to open review and some of the tools that currently exist to facilitate it. Perhaps some of you out there have experience with these or other platforms. Which one do you prefer? Feel free to answer the poll below and elaborate in the comments section if you like.

My humble attempt at practicing more open science

I have had a keen interest in reproducible research [1] since I started publishing my research back when I was a PhD student. I have lately been following the activities going on around open science and I think this is what we as scientists and researchers should be striving for. So, I decided I had better walk the talk and start a blog to show & tell a bit about what I am working on.

Let me introduce myself a bit: my name is Thomas Arildsen. I work at Aalborg University (AAU), Denmark, as an assistant professor at the Department of Electronic Systems. I am a signal processing engineer with a PhD (also from AAU) in the area of source coding – you can see more details on my LinkedIn Profile.

I currently do research applying compressed sensing in signal processing together with colleagues from our section Signal and Information Processing [2]; you can see some of the research I have been involved in here [3].

I will be blogging about my own research, others’ research in my area, as well as open science-related things I hear about here and there and find interesting.

[1] http://reproducibleresearch.net
[2] http://es.aau.dk/misp
[3] http://vbn.aau.dk/en/persons/thomas-arildsen%28334e2ddc-15ec-4123-82b7-de85132ae371%29/publications.html

(Image by Greg Emmerich, CC BY-SA 3.0)
