Adventures in Signal Processing and Open Science

How to attract reviewers for open / post-publication review?

In traditional journals with closed pre-publication peer review, reviewers are typically invited by the editor. Editors can, for example, draw on previous authors from the journal or (I guess) their professional network in general. With the recent appearance of several open peer review platforms – for example PubPeer, Publons etc. (more here) – there will be a need to attract reviewers to such platforms. Sufficiently flawed papers seem to attract enough attention to trigger reviews, but it is my impression that papers that are generally OK do not get a lot of post-publication review. This is perhaps not such a big deal for papers that have already been published in some journal with closed pre-publication review – the major downside is that the rest of us do not get to see the review comments. But if you are going to base an entire journal on post-publication peer review, you will want to ensure at least a few reviews of each published paper as a sort of stamp of approval.

The Winnower – the new journal based entirely on open post-publication peer review that I have previously written about here – is about to launch. First and foremost, they will of course need to attract some papers to publish. Their price of $100 – very fair, in my opinion, compared to other open access journals – should help, and the fact that they span a very broad range of scientific disciplines should also give them a lot of potential authors. They also want to attract reviewers to their papers. The platform is open to review by anyone, but actively engaging reviewers seems like a good idea in order to ensure a minimum number of reviews of each paper, with at least some experts on the topic among them. But how do you do this? As a new journal, there are no previous authors to draw on. It will probably be difficult to get in touch with sufficiently many qualified reviewers across all of the journal’s disciplines, and on top of that, reviewers may be more reluctant to accept since the reviews will be open with reviewers’ identities disclosed.

What can be done to attract sufficiently many reviewers? Should the journal gamble on being the cool new kid in class that everyone wants to be friends with, or simply try to buy friends? I have been discussing this with their founder, Josh Nicholson. One possibility is to pay reviewers a small amount for each review they complete. If we were talking about one of the traditional publishers, who I think in many cases exploit authors and reviewers shamelessly to stuff their own pockets, I think it would only be reasonable to actually start paying the reviewers. In the case of The Winnower, it may be different. The Winnower is a new journal trying to get authors and reviewers on board. With a very idealistic approach and pricing, I do not think people are likely to suspect that they are just trying to make money – being a “predatory publisher”. But on the other hand, paying reviewers might somehow make it look like they are trying to “buy friends”. Given the journal’s profile, potential reviewers might mainly be people who like to think of themselves as a bit idealistic and revolutionary too, and that might just not go well with being paid for reviews. Josh told me an anecdote he had been told recently:

To illustrate this point, imagine walking down the street and an able-bodied young man asks for your help loading a large box into a truck. If he were to politely ask for help, most people would be highly likely to assist him in his request. However, if he were to politely ask and also mention that, for your time, he will pay you $0.25, most people will actually turn him down. Despite being totally irrational – given that under the same circumstances they would do it without the promise of a quarter – the mention of money prompts the passerby to calculate what their time is worth to them. To many, the $0.25 isn’t going to be worth the effort. I think this may pose a similar issue.

Then again, paying reviewers could also send the message that the journal takes its reviewers and the work they do very seriously. Ideally, I think it might work better with an incentive structure and “review of reviews” / scoring of reviews, like the Stack Exchange network for example. I am just afraid that something like that will take considerable “critical mass” to be effective. Another option Josh mentioned could be to let reviewers earn free publications with the journal by completing a number of reviews. This sounds better to me: you do not risk offending potential reviewers with a “price on their head”, but there is still something to gain for reviewers.

I think this is a very interesting question and probably one that a lot of people have much more qualified answers to than I do. Let me know what you think!

Standalone peer review platforms

I have previously mentioned some platforms for open / post-publication peer review in Open Review of Scientific Literature and discussed the roles of such platforms in Third-party review platforms. I just wanted to mention the above document in Google Docs, which seems to have been started by Jason Priem(?). The document contains a list of peer review platforms, both standalone ones and ones that include manuscript publishing as well. Go have a look – there are probably some that you don’t know yet. Anyone can edit the document, so please add platforms if you know any additional ones.

Compressed sensing with linear correlation between signal and measurement noise

Torben Larsen and I have recently published a paper, “Compressed sensing with linear correlation between signal and measurement noise”, in EURASIP Signal Processing. This post is a sort of experiment: an attempt to provide a front page summarizing the paper’s contributions and providing an overview of available versions of the paper and its accompanying code.

We considered compressed sensing with measurement noise in the case where the measurement noise is linearly correlated with the signal of interest. So we have the typical compressed sensing model with measurement noise:

\mathbf y = \mathbf{Ax} + \mathbf n

where the noise \mathbf n is now correlated with \mathbf x. This can be modelled as a scaling by some factor \alpha of the measured signal in addition to additive random noise:

\mathbf y = \alpha \mathbf{Ax} + \mathbf w

The difference in the measurement between the original and scaled signals constitutes the part of the resulting measurement noise that is correlated with the input signal:

\mathbf n = \alpha \mathbf{Ax} + \mathbf w - \mathbf{Ax} = (\alpha - 1) \mathbf{Ax} + \mathbf w

We show that in the case of reconstruction of the measured signal by basis pursuit de-noising (BPDN), the correlation between the measurement noise and the measured signal can be compensated simply by scaling the BPDN solution by 1/\alpha.
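
To make this concrete, here is a minimal numerical sketch of the idea (my own illustration for this post, not the simulation code accompanying the paper). It solves the Lagrangian (LASSO) form of BPDN with a plain ISTA loop; the solver, problem sizes, noise level and regularisation weight are all arbitrary choices for the example:

    import numpy as np

    def ista_lasso(A, y, lam, n_iter=1000):
        # Solve min_x 0.5*||y - Ax||_2^2 + lam*||x||_1 with plain ISTA
        # (the Lagrangian form of basis pursuit de-noising).
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - A.T @ (A @ x - y) / L    # gradient step on the data term
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    N, M, K = 256, 128, 10                   # signal length, measurements, sparsity
    alpha = 0.7                              # scaling of the measured signal

    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    w = 0.01 * rng.standard_normal(M)
    y = alpha * A @ x + w                    # y = alpha*A*x + w

    x_bpdn = ista_lasso(A, y, lam=0.01)      # BPDN solution approximates alpha*x
    x_comp = x_bpdn / alpha                  # compensate the correlated noise

    print("MSE without compensation:", np.mean((x_bpdn - x) ** 2))
    print("MSE with 1/alpha scaling:", np.mean((x_comp - x) ** 2))

With alpha = 0.7 the uncompensated solution is biased towards alpha*x, and the simple rescaling removes most of that error.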

It turns out that this simple correlated noise model captures the error introduced by low-resolution quantisation quite well. We have tested the proposed reconstruction approach on compressed measurements quantised to 1, 3, and 5 bits, respectively. Especially in the extreme case of 1-bit quantisation we see substantial improvements in reconstruction error, reducing the error by up to around 7 dB. This simple modification of BPDN performs better than BIHT (which is specifically designed for 1-bit quantisation) in a large portion of the undersampling/sparsity phase space.
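
For some intuition about why quantisation fits the model, here is a rough continuation of the sketch above (again my own simplification, not the paper’s derivation – the bit depth, the mid-rise quantiser and the least-squares fit of the scaling factor are all assumptions for illustration):

    y0 = A @ x                                # noiseless measurements
    b = 3                                     # bits per measurement (example value)
    step = (y0.max() - y0.min()) / 2 ** b     # uniform quantiser step size
    yq = step * (np.floor(y0 / step) + 0.5)   # mid-rise uniform quantisation

    alpha_hat = (yq @ y0) / (y0 @ y0)         # least-squares fit of yq ~ alpha*y0
    w_hat = yq - alpha_hat * y0               # the residual plays the role of w
    x_q = ista_lasso(A, yq, lam=0.05) / alpha_hat  # reconstruct, then rescale
    print("fitted alpha:", alpha_hat, "MSE:", np.mean((x_q - x) ** 2))

The quantised measurements decompose into a scaled copy of the clean measurements plus a residual, which is exactly the y = alpha*A*x + w structure above.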

[Figure: Relative reconstruction MSE of the proposed approach. The fat contour line marks the region (above and left of it) where the error is below that of BIHT reconstruction.]

Below, you can find links to both the official published version of the paper, all versions from the review process on arXiv, and the code for running the numerical simulations.

Paper versions and simulation code

 

Rock Your Paper

I noticed a new web site some time ago: rockyourpaper.org. It is first and foremost a search engine for open access research papers: you can search for open access papers from lots of different publishers, and they aim to be the place to go for open access research.

Their initial motivation is to provide easy access to research to students and researchers from countries that typically cannot afford access to the expensive subscription journals. I talked to Rock Your Paper co-founder Neeraj Mehta about their platform to find out a bit more about it.

Rock Your Paper (RYP) started on October 18th, 2013. It is not the only place to search for open access papers. Other possibilities of course include the publishers’ sites themselves, but that is hard work considering the many different publishers you would have to visit. Another centralised place to search for papers is the Directory of Open Access Journals (DOAJ), where you can search among, at the time of writing, 1,573,847 open access papers. When I spoke to Neeraj in January, RYP was hoping to index 20 million research papers by January 31st. In addition, they provide another layer of service to their users: you can create an account with RYP and save both searches and individual papers, so that you can keep track of “what was it I searched for the other day when I found that paper…”

Rock Your Paper is a for-profit startup company that of course hopes to earn money from its services, but they promise that their basic search and access features will remain free for users. This seems very much in line with their initial purpose. They may extend their services along the way with additional features, such as formatting, editing and translation, which users will need to pay for.

They are aiming to establish themselves first and foremost as an open access search engine. Later on, they may also extend the platform to let users publish research. They have also approached publishers of subscription-based journals about the possibility of providing discounted access to these, but unfortunately they have not had any luck with this yet.

I think Rock Your Paper sounds like one of many interesting new players in the open access / open science area that will be exciting to follow.

Transforming Peer Review Bibliography

http://digital-scholarship.org/tpr/tpr.htm

I came across this treasure trove of papers on the transformation of peer review today. It seems to list lots of papers on open review, among other topics. The bibliography was compiled by Charles W. Bailey, who also seems to have made an effort to provide links to openly accessible versions of the listed papers – thanks!

There’s a new journal in town…

[Image: Jean-François Millet: Le Vanneur. Source: Wikipedia]

I have been writing a few posts lately about open peer review in scientific publishing (Open Review of Scientific Literature, Openness And Anonymity in Peer Review, Third-party review platforms). As I have mentioned, quite a few platforms experimenting with open post-publication peer review have been appearing around us recently.

Now it seems there is an actual journal on its way, embracing open review and open access from the very beginning to an extent I have not seen yet. It sounds like a very brave and exciting initiative. According to their own description it is going to be a journal for all disciplines of science. You can read more about the ideas behind the journal on their blog: The Winnower. It was also featured recently here.

Curious about this new journal as I am, I have been talking to its founder, Josh Nicholson, online on a few occasions lately to find out more about the journal. I have decided to publish this Q&A correspondence here in case others are interested.

Q&A with Josh Nicholson

2013/10/04 – on Google+:
As I understand it, you will publish manuscripts immediately and publish the accompanying reviews of them when ready. Will these manuscripts be open to review by anyone, will you find reviewers, or a combination thereof?
In principle, it would be “most open” to allow reviews by anyone, but specifically when some paper is not “popular” enough to attract reviewers spontaneously, I guess it might also be necessary to actively engage reviewers? If so, do you consider somehow paying reviewers (monetarily or otherwise)?

The papers will indeed be open to review by anyone. We want it to be completely transparent and open. We also wish to be completely clear that papers without reviews linked to them have not been reviewed and should be viewed accordingly. We would like to engage reviewers with different incentives in the future and will explore the best ways to do that as we move forward. Our system will in essence be quite similar to “pre-prints”, where authors are allowed to solicit reviews and anyone is allowed to review, but it will all occur in the open. We will charge $100 per publication so that we can sustain the site without relying on grants. We would love to hear more of your feedback should you have any!

I have been considering for some time how an open peer review system can attract reviewers and possibly encourage them to identify themselves to “earn” reputation.
The Stack Exchange network, among others, seems to be quite popular, and it seems to me that one of the things driving users to contribute is the reputation system, where a reputation score becomes the “currency” of the site. Users can vote other users’ questions and answers up or down. This lets other users quickly assess which questions and answers are “good”. Votes earn the poster of the question or answer reputation points, and this encourages posters to make an effort to write good questions and answers.
It seems to me that such a system could be used more or less directly on a peer review platform. It would both encourage users to write reviews and let other users assess and score reviews (review of reviews).

We agree with you 100%.  We would even like to offer the “best” reviewers, as judged by the community, free publishing rights.  Ultimately we would also like to make the reviews citeable.  Some of these features will not be present in the initial launch but will be expanded upon and rolled out over time.  We hope you will consider submitting in 2014 and reviewing!
We have a few other select features that will be present in the initial build to attract reviews.  Some of these will be discussed in future blog posts.

2013/10/06 – in blog comment:
Have you at The Winnower considered if you could make use of third-party reviewer platforms for your publishing?

We have briefly communicated with LIBRE and are indeed open to reviews from third-party platforms. We are happy to work with anyone towards the goal of making reviewing more transparent.

2013/10/11 – on Twitter:
Will you have any sort of editorial endorsement of papers you publish or will the open reviews be the only “stamp of approval”?

Open reviews will serve as the ‘stamp of approval’. We hope papers will accumulate many reviews.
Papers can be organized by content as well as by various metrics, including most reviewed etc.

2013/10/11 – in blog comment:
I am very excited about your new journal – that’s why I keep asking all sorts of questions about it here and there ;-)
In terms of archiving papers, what will you do to ensure that the papers you have published do not disappear in the event that The Winnower should go out of business? Do you have any mutual archival agreements with other journals or institutional repositories?

We are happy you are excited about The Winnower. Please keep the questions and comments coming!
We are currently looking at the best way to preserve papers published in The Winnower should The Winnower not survive. We are looking to participate in CLOCKSS but have not made any agreements as of yet.

Another one: under which terms are you going to license the published manuscripts? For example, I have heard authors express concern about third-party commercial reuse of papers without consent under CC-BY. I am not sure yet what to think about that.

Content published with The Winnower will be licensed under a CC BY license. Commercial reuse of work, as we understand it, must cite the original work. We want to open up the exchange of ideas and information.

2013/10/18 – in blog comment:
Here goes another one of my questions: will your platform employ versioning of manuscripts?
I imagine that authors of a paper may want to revise their paper in response to relevant review comments – just like what often happens in traditional pre-publication review; here we just get the whole story out in the open. If so, I think there should be a mechanism in place to keep track of different versions of the paper – all of which should remain open to readers. As a consequence, there will also be a need to keep track of which version of a paper specific comments relate to.
Rating: will it be possible to rate/score papers in addition to reviewing/commenting? While a simple score may seem a crude measure, I think there is a possibility that it could help readers sift more efficiently through the posted papers. In a publishing model like yours, it is going to be harder for, e.g., funding agencies or hiring committees to assess an author’s work, because they cannot simply judge it by where it was published (that may be the wrong way to do it anyway, but that is not what I am aiming to discuss here). A simple score might make the transition to your proposed publishing process “easier” for some stakeholders. I am a bit reluctant about it myself, but in order not to make it too superficial, maybe scoring/rating should only be possible after having provided a proper review comment. This should make it difficult for readers to score a paper without making a proper effort to assess it.

We are happy to have your questions! There will indeed be an option to revise manuscripts after an MS has collected reviews. We are, however, a bit uneasy about hosting multiple versions of the paper, as we think it may become quite confusing. We are happy to explore this option in the future, but currently we believe that the comments along with the responses should be sufficient to inform the reader of what was changed.

Do you agree?

Our reviews will be structured, meaning that there will be prompts which allow different aspects of the papers to be rated.

So you plan to allow revision, but previous versions are “lost”?
I see the point about the possible confusion, but what if a commenter points out a flaw in some detail of the paper, the author acknowledges it, and revises the paper? Now, future readers can no longer see in the paper what that comment was about. Well, they can see the comment, but they can no longer see for themselves in the actual paper what the flawed part originally said.
Could the platform perhaps always display the most recent version of the paper but show links to previous versions somewhere along with the metadata, abstract etc. that I assume you will be displaying on a paper’s “landing page”? The actual PDF of an outdated version of a paper could have a prominently displayed text or “stamp” saying that this is an outdated version kept for the record and that a newer version is available?
Perhaps links to older versions of a paper would only be visible in a specific comment that refers to an earlier version of the paper?

These are good points. We will discuss some of these and see what approach will work best and what we are capable of. If it does not cause too much confusion and is not too much to implement in the build, this could be quite useful, as you point out. Thanks!

By the way, I think it would be interesting if it were possible not only to comment on papers, but to annotate the actual text in-line. I think it would be great if readers could mark up parts of text and write comments directly next to them. Would it seem too draft-like?
I am not sure how this could be done technically, but it seems like the technology http://hypothes.is/ is brewing could enable something like this.

We agree that inline comments are quite interesting, and we have this as a possible tool to build into the platform in the future. We do, however, have limited funding for the initial build and want to focus on features that are critical first and complementary second. But this is a great idea and definitely something we will be exploring in the future.

Good point. It is best to get the essential features working well first.

Academia.edu acquires Plasmyd to bring peer review into the 21st century

I noticed this news piece today. I have previously written about open peer review platforms. Most of the recent initiatives in open peer review are entirely new platforms that provide the mechanics to get open peer review going, but in my opinion a challenge for them is to attract a critical mass of users.

Academia.edu’s move seems to be a bit the other way around: they already have an existing science-related platform with quite a few users, and now they are adding peer review functionality. It is not entirely clear to me whether this means open review, but the mechanism they describe could help address the challenge of how to attract sufficient numbers of qualified reviewers to such a platform. The article does hint at the possibility that Academia.edu might try to “build a revenue model around their modern approach to peer review”. I am not a fan of such a model, as this is one of the things that are wrong with the traditional journal publishing model. Nevertheless, it is going to be interesting to see how it goes.

More on anonymity in peer review

This study lacked an appropriate control group: Two stars

I came across this post on anonymity in peer review by Jon Brock. I have previously tried to discuss the pros and cons of anonymity here. I think Jon’s post is quite a good argument in favour of identifying reviewers.

In relation to this, I actually wanted to sign a review I did recently for a journal, as I wanted to personally stand by my assessment of the manuscript. I asked the editor first if this was OK with him. He explicitly requested me NOT to do so, because it was against their review policy…

Science Publishing Laboratory

I just stumbled on this blog by Alexander Grossmann while browsing Twitter – I found it through Giuseppe Gangarossa. It looks like he has a ton of interesting reading on open scientific publishing.

An emerging consensus for open evaluation: 18 visions for the future of scientific publishing

I just found this treasure trove of papers on open evaluation in science thanks to this post by Curt Rice that sums it all up very well: Open Evaluation: 11 sure steps – and 2 maybes – towards a new approach to peer review
