Adventures in Signal Processing and Open Science


Thoughts about Scholarly HTML

The company science.ai is working on a draft standard (or what I guess they hope will eventually become a standard) called Scholarly HTML. Its purpose seems to be to standardise the way scholarly articles are structured as HTML, so that HTML can serve as a more semantic alternative to, for example, PDF, which may look nice but does nothing to help understand the structure of the content – probably rather the contrary.
They present their proposed standard in this document, and they have also formed a community group at the World Wide Web Consortium. It appears this is not an entirely new initiative: there was already a previous project called Scholarly HTML, and science.ai seem to be taking the idea further from there. Martin Fenner wrote a bit of the background story behind the original Scholarly HTML.
I read science.ai’s proposal. It seems like a very promising initiative because it would allow scholarly articles across publishers to be understood better by, not least, algorithms for content mining, automated literature search, recommender systems etc. It would be particularly helpful if all publishers shared a common standard for marking up articles, and HTML seems a good choice since you only need a web browser to display it. Another nice feature concerns reading on small screens: I tend to read a lot on my mobile phone and tablet, and it really is a pain when the content does not fit the screen. This is often the case with PDF, which does not reflow well in the apps I use for viewing. Here HTML would be much better, since it is not focused on physical pages like PDF.
I started looking at this proposal because it seemed like a natural next step from my crude preliminary experiments in Publishing Mathematics in e-books.
After reading the proposal, a few questions arose:

  1. From the way the formatting of references is described, it seems to me as if references can only be of type “schema:Book” or “schema:ScholarlyArticle”. Does this mean that they see no need to cite anything but books or scholarly articles? I know that some people hold the, IMO, very conservative view that the reference list should only refer to peer-reviewed material, but this is too constrained; it will certainly be relevant to cite websites, data sets, source code etc. as well. It should all go into the reference list to make it easier to understand what the background material behind a paper is. This calls for a much richer selection of entry types. For example, BibLaTeX’s entry types could serve as inspiration.
  2. The authors and affiliations section is described here. Author entries are described as having:

    property="schema:author" or property="schema:contributor" and a typeof="sa:ContributorRole"

    I wonder whether this way of specifying authors/contributors makes it possible to specify more granular roles, or multiple roles for each author, along the lines of, for example, Open Research Badges.

  3. Under article structure, they list the following types of sections:

    Sections are expected to be typed using the typeof attribute. The following typeof values are currently understood:

    sa:Funding (which has its specific structure)
    sa:Abstract
    sa:MaterialsAndMethods
    sa:Results
    sa:Conclusion
    sa:Acknowledgements
    sa:ReferenceList

    I think there is a need for more types of sections. I, for example, also see articles containing Introduction, Analysis, and Discussion sections, and I am sure there are more that I have not thought of. As a small illustration of what consistently typed sections could enable, for example for content mining, see the sketch after this list.
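To illustrate why consistently typed sections would help the content-mining and literature-search algorithms mentioned above, here is a toy Python sketch that extracts section types from an article marked up as described. The example document, and the assumption that sections are plain section elements carrying the typeof attribute, are mine and not taken from the draft itself.

    # A toy sketch (my own illustration, not part of the Scholarly HTML draft) of the
    # kind of tooling a shared markup standard would enable: pulling the typed sections
    # out of an article, using only Python's standard library.
    from html.parser import HTMLParser

    class SectionTypeExtractor(HTMLParser):
        """Collects the typeof values of <section> elements in a document."""
        def __init__(self):
            super().__init__()
            self.section_types = []

        def handle_starttag(self, tag, attrs):
            if tag == "section":
                attributes = dict(attrs)
                if "typeof" in attributes:
                    self.section_types.append(attributes["typeof"])

    # Hypothetical input; the exact element structure is my assumption.
    html_doc = """
    <article>
      <section typeof="sa:Abstract"><h2>Abstract</h2><p>...</p></section>
      <section typeof="sa:MaterialsAndMethods"><h2>Methods</h2><p>...</p></section>
      <section typeof="sa:Results"><h2>Results</h2><p>...</p></section>
    </article>
    """

    parser = SectionTypeExtractor()
    parser.feed(html_doc)
    print(parser.section_types)  # ['sa:Abstract', 'sa:MaterialsAndMethods', 'sa:Results']

With a common standard, the same few lines would work across publishers, which is exactly the point of the proposal.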

Peer Evaluation of Science

This is a proposal for a system for evaluating the quality of scientific papers through open review, on a platform inspired by StackExchange. I have reposted it here from The Self-Journal of Science, where I hope my readers will go and comment on it: http://www.sjscience.org/article?id=401. The proposal is also intended as a contribution to #peerrevwk15 on Twitter.

I have chosen to publish this proposal on SJS since this is a platform that comes quite close to what I envision in this proposal.

Introduction

Researchers currently rely on traditional journals for publishing their research. Why is this? you might ask. Is it because it is particularly difficult to publish research results? Perhaps 300 years ago, but certainly not today, when anyone can publish anything on the Internet with very little trouble. Why, then, do we keep publishing with journals that charge outrageous amounts for their services in the form of APCs from authors or subscriptions from readers and their libraries? One of the real reasons, I believe, is prestige.

The purpose of publishing your work in a journal is not really to get your work published and read; it is to prove that your paper was good enough to be published in that particular journal. The more prestigious the journal, the better the paper, it seems. This roughly boils down to using the impact factor of the journal to evaluate the research of authors publishing in it (a bad idea, see for example Wrong Number: A closer look at Impact Factors). It is often mentioned in online discussions how researchers are typically evaluated by hiring committees or grant reviewers based on which journals they have published in. In Denmark (and Norway – possibly other countries?), universities are even funded based on which journals their researchers publish in.

I think the journal’s reputation (impact factor) is used in current practice because it is easy. It is a number that a grant reviewer or hiring committee member can easily look up and use to assess an author without having to read piles of their papers – papers they might even need to be experts on to assess properly. I support a much more qualitative approach based on the individual works of the individual researcher. So, to have any hope of replacing current practice, I think we need to offer a quantitative “short-cut” that can compete with the impact factor (and the H-index etc.), which say little about the actual quality of the researcher’s works. Sadly, a quantitative metric is likely what hiring committees and grant reviewers are going to be looking at. A (quantitative) “score”, or several such scores on different aspects of a paper, accompanying the (qualitative) review could be used to provide such an evaluation metric. Below I present some ideas for how such a metric can be calculated, and also some potential pitfalls we need to discuss how to handle.

I believe that a system to quantify various aspects of a paper’s quality as part of an open review process could help us turn to a practice of judging papers and their authors by the merits of the individual paper instead of by the journal in which they are published. I also believe that this can be designed to incentivise participation in such a system.

Research and researchers should be evaluated directly by the quality of the research instead of indirectly through the reputation of the journals they publish in. My hope is to base this evaluation on open peer review, i.e. the review comments are open for anyone to read along with the published paper. Even when a publisher (in the many possible incarnations of that word) chooses to use pre-publication peer review, I think it should be made open in the sense that the review comments are open for all to read after paper acceptance. In any case, I think it should be supplemented by post-publication peer review – open both in the sense that the reviews are open to read and in the sense that anyone can comment, although one might opt to restrict reviewing to researchers who have published something themselves, as for example Science Open does.

What do I mean by using peer review to replace journal reputation as a method of evaluation? This is where I envision calculating a “quality” or “reputation” metric as part of the review process. This metric would be established through a quality “score” (there could be multiple scores targeting different aspects of the paper) assigned by the reviewers/commenters, and then endorsed (or not) by other reviewers through a two-layer scoring system inspired by the reputation metric from StackExchange. This would, in my opinion, constitute a metric that:

  1. specifically evaluates the individual paper (and possibly the individual researcher through a combined score of her/his papers),
  2. is more than a superficial number – the number only accompanies a qualitative (expert) review of the individual paper that others can read to help them assess the paper,
  3. is completely transparent – the accompanying reviews/comments are open for all to read, and the votes/scores and the algorithm calculating a paper’s metric are completely open.

I have mentioned that this system is inspired by StackExchange. Let me first briefly explain what StackExchange is and how their reputation metric works: StackExchange is a question & answer (Q&A) site where anyone can post questions in different categories and anyone can post answers to those questions. The whole system is governed by a reputation metric which seems to be the currency that makes this platform work impressively well. Each question and each answer on the platform can be voted up or down by other users. When a user gets one of his/her questions or answers voted up, the user’s reputation metric increases. The score resulting from the voting helps rank questions and answers so the best ones are seen at the top of the list.

The System

A somewhat similar system could be used to evaluate scientific papers on a platform designed for the purpose. As I mentioned, my proposal is inspired by StackExchange, but I propose a somewhat different mechanism, as the question-and-answer mechanism of StackExchange does not exactly fit the purpose here. I propose the following two-layer system.

  • First layer: each paper can be reviewed openly by other users on the platform. When someone reviews a paper, along with submission of the review text, the reviewer is asked to score the paper on one or more aspects. This could be simply “quality”, whatever this means, or several aspects such as “clarity”, “novelty”, “correctness”. It is of course an important matter to determine these evaluation aspects and define what they should mean. This is however a different story and I focus on the metric system here.
  • Second layer: other users on the platform can of course read the paper as well as the reviews attached to it. These users can score the individual reviews. This means that some users, even if they do not have the time to write a detailed review themselves, can still evaluate the paper by expressing whether they agree or disagree with the existing reviews of the paper.
  • What values can a score take? We will get to that in a bit.

How are metrics calculated based on this two-layer system?

  • Each paper’s metric is calculated as a weighted average of the scores assigned by reviewers (first layer). The weights assigned to the individual reviews are calculated from the scores other users have assigned to the reviews (second layer). The weight could be calculated in different ways depending on which values scores can take. It could be an average of the votes. It could also be calculated as the sum of votes on each review, meaning that reviews with lots of votes would generally get higher weights than reviews with few votes.
  • Each author’s metric is calculated based on the scores of the author’s papers. This could be done in several ways: one is a simple average; this would not take the number of papers an author has published into account. Maybe it should, so the sum of the scores of the author’s papers could be another option. Alternatively, it might be argued that each paper’s contribution to the author’s metric should be weighted by the “significance” of the paper, which could be based on the number of reviews and votes the paper has.
  • Each reviewer’s metric is calculated based on the scores of her/his reviews, in a similar way to the calculation of authors’ metrics. This should incentivise reviewers to write good reviews. Most users on the proposed platform will act as both reviewers and authors and will therefore have both a reviewer and an author metric. A rough sketch of how these three metrics could be computed follows after this list.
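To make the mechanics concrete, here is a minimal Python sketch of the three metrics under one particular set of choices: ±1 scores and votes (as proposed below), review weights taken as the sum of votes floored at zero, and author and reviewer metrics taken as simple averages. The function names and the handling of corner cases are my own assumptions for illustration, not a fixed part of the proposal.

    # A minimal sketch of the proposed two-layer metric system. All names and the
    # specific choices (vote sums as weights, simple averages) are illustrative only.

    def review_weight(votes):
        """Weight of one review, from the second-layer votes (+1/-1) it received.

        Here: the sum of the votes, floored at zero so that a review with more
        down-votes than up-votes simply stops counting (one possible choice)."""
        return max(0, sum(votes))

    def paper_metric(reviews):
        """Weighted average of the first-layer review scores of one paper.

        `reviews` is a list of (score, votes) pairs, where `score` is the
        reviewer's +1/-1 score of the paper and `votes` are the +1/-1 votes
        other users gave that review."""
        weights = [review_weight(votes) for _, votes in reviews]
        total = sum(weights)
        if total == 0:
            return 0.0  # no endorsed reviews yet; return a neutral value
        return sum(w * score for w, (score, _) in zip(weights, reviews)) / total

    def author_metric(papers):
        """One option from the proposal: a simple average of the author's paper metrics.

        `papers` is a list of papers, each given as its list of (score, votes) pairs."""
        return sum(paper_metric(reviews) for reviews in papers) / len(papers)

    def reviewer_metric(reviews_votes):
        """Analogous to the author metric: average net votes on the reviewer's reviews."""
        return sum(sum(votes) for votes in reviews_votes) / len(reviews_votes)

Flooring the weight at zero is just one way of handling reviews that are voted down more than up; whether such reviews should instead count negatively is exactly the kind of detail that would have to be settled.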

Which Values Can Votes Have?

I propose to make the scores of both papers (first layer) and individual reviews (second layer) a ±1 vote. One could argue that this is a very coarse-grained scale, but consider the alternative of, for example, a 10-level scale. This could cause problems with different users interpreting the scale differently. Some users might hardly ever use the maximum score, while other users might give the maximum score to all papers that they merely find worthy of publication. By relying on a simple binary score instead, an average over a (hopefully) high number of reviews and review endorsements/disapprovals would be less sensitive to individual interpretations of the score value than a many-level score.
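As a small illustration, with made-up numbers, of how coarse ±1 votes still yield a graded paper metric once they are aggregated:

    # Hypothetical example: one paper with three +1/-1 review scores (first layer),
    # each review weighted by its net +1/-1 endorsement votes (second layer).
    review_scores = [+1, +1, -1]
    review_weights = [5, 2, 1]   # made-up net vote counts on the three reviews

    metric = sum(w * s for w, s in zip(review_weights, review_scores)) / sum(review_weights)
    print(metric)  # 0.75 -- the weighted average always lies between -1 and +1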

Conclusion

As mentioned, I hope the proposed model of evaluating scientific publications – accompanying qualitative reviews with a quantitative score – would provide a useful metric that, although still quantitative, could prove a more accurate measure of the quality of individual publications for those who need to rely on such a measure. This proposal should not be considered a scientific article itself, but I hope it can be a useful contribution to the debate on how to make peer review both more open and more broadly useful to readers and evaluators of scientific publications.

As mentioned, I have chosen to publish this proposal on SJS since that platform comes quite close to what I envision here. I hope that readers will take the opportunity to comment on the proposal and help start a discussion about it.

Episciences.org update

I mentioned the Episciences project the other day in Scientific journals as an overlay. In the meantime I have tried to contact the people behind this project and The Open Journal, so far without any luck.

I went and checked the Episciences website yesterday, and it actually seems that they are moving forward. They have changed the page design completely, and there is now a button in the upper right corner to create an account and log in. I took the liberty of doing so to have a look around. I was able to create an account, but that is just about it so far. The site still seems quite “beta” – I was not able to save changes to my profile and I cannot yet find anywhere to submit papers. Still, it is nice to see some progress on the platform, and I will be keeping an eager eye on it to find out when it goes operational.

Open Review of Scientific Literature

I recently became interested in open review as an ingredient in open science. There has been a lot of talk about open access in recent years. That is, in itself, a very important ingredient – for example for the sake of fairness, in the sense that the outcome of research, which is often funded by taxpayers’ money, should also be open to the public. It is also important for advancing science in general, because open access helps ensure that more scientists have access to more of the existing knowledge they can build upon to bring our collective knowledge forward. My interest in this area was in part spurred by this very inspiring discussion initiated by Pierre Vandergheynst.

Open access and open review are both parts of an ongoing movement that I believe is going to disrupt the traditional publishing model, but more about that later. Here, I want to focus on open review.

Traditionally, we have been used to reviews of papers submitted for publication in a journal being closed, and typically blind or even double-blind. A closed review process means that, for a given published paper, readers simply have to trust that the reviews were done thoroughly and by sufficiently competent reviewers for the contents of the paper to be trustworthy. Luckily, I believe we can usually trust this, but the recent Reinhart & Rogoff episode, for example, shows that mistakes do slip through. For reasons like this, and for the sake of open science per se, I believe we need more transparency in the review process (we also need published open data, but that is another story). One way to do this is to make reviews open so that we as readers can see what comments reviewers made on the paper.

Reviewing a paper and then publishing it if the reviewers assess it as good enough is pre-publication peer review. Publishing the reviews after publication improves transparency, and as far as I can see, PeerJ (IMO an admirable open access publisher, unfortunately not in my field) is currently practising this. You can also take the somewhat bolder step of publishing papers immediately and then conducting the review in the open afterwards (post-publication peer review). As far as I can see, F1000 Research is doing this. In my opinion, this is an even better approach, as it allows public insight into papers and their reviews even for papers that do not end up being published in the traditional sense, i.e. approved by the reviewers.

There is also the question of whether reviews should be kept blind (or double-blind) or whether the reviewers’ identities should be open as well. I believe there are several arguments for and against this. One argument for fully non-blind reviews could be that a reviewer should be able to stand by what she or he says and not “cowardly” hide behind anonymity. On the other hand, junior reviewers in particular may be reluctant to disclose their honest opinion about a paper out of fear that they will end up in “bad standing” with the authors. Then again, openly linking reviews to reviewers can also help reviewers build a reputation by conducting reviews of good quality – read more about this in Pierre’s discussion. Another contribution to this debate was made by the founders of PubPeer (more about PubPeer at the end…)

Finally, and again this fits into the bigger picture of the ongoing disruption in the scientific publishing area, the open review approach can also (and, IMO, ultimately should) be taken out of the hands of traditional publishers. Authors can choose to upload their papers, for example, to open pre-print archives (such as arXiv), to their institutional repositories, or even to their own homepages. Reviews can then be conducted based on these papers. Ultimately, publication could turn into a system where “publishers” collect such papers based on their reviews and “publish” the ones they find most attractive, but that is a longer story I will get back to some other day. This approach to open peer review is really what I wanted to get to today. The thing is, several services are starting to pop up that offer platforms for open review. The ones I know of so far are:

  • PubPeer
  • Publons
  • SelectedPapers
  • PaperCritic

I will try to tell you what I know so far about these:

PubPeer (pubpeer.com)
This is an open review platform where you can comment on any published paper that has a DOI. It also works with PubMed IDs and arXiv IDs. Thus, in principle, it covers both already (traditionally) published papers and pre-prints (at least if they are on arXiv). I guess this also covers research output that is not necessarily a paper, such as data, slideshows or posters, from for example figshare, which assigns DOIs to uploaded content.
PubPeer received some attention recently when several flaws in a published stem cell paper were pointed out in a comment on PubPeer.
PubPeer has been criticised for being anonymous; both its founders and reviewers commenting on the platform are kept anonymous.
You can only sign up as a user on the platform if you are a first or last author of a paper they can find. So far, I have not been able to do this, as none of my papers seem to be in the areas they focus on. Furthermore, searching for yourself as an author in their database takes forever (it seems not that stable), so I have not actually gotten to the bottom of whether my own name can be found there.
[Update: I managed to sign up now, using a DOI of one of my papers. PubPeer also made me aware that anyone can comment on the platform without actually signing up as an author.]

Publons (publons.com)
This is an open review platform somewhat similar to PubPeer. It supports papers from Nature, Science and a number of physics journals, as well as arXiv. Unlike PubPeer, Publons is not shrouded in anonymity. I guess it is a matter of opinion whether you prefer one approach or the other. At least Publons does not seem to have the same sign-up problems as PubPeer.

SelectedPapers (selectedpapers.net)
This is a somewhat different platform from the two above. While it is an open review platform like PubPeer and Publons, it goes even further in the sense that it explicitly tries not to become a “walled garden” type of service and publishes review comments outside the platform itself, ideally on the reviewers’ platform of choice. SelectedPapers is still in its early stages of development and is currently in alpha release. So far, it only supports Google+ as the platform for publishing the reviews, but they promise that more are to come. I am watching this initiative closely as I find their concept extremely interesting. A lot of interesting reading about the philosophy behind the SelectedPapers network can be found here:

  1. http://thinking.bioinformatics.ucla.edu/2011/07/02/open-peer-review-by-a-selected-papers-network/
  2. http://johncarlosbaez.wordpress.com/2013/06/07/the-selected-papers-network-part-1/
  3. http://johncarlosbaez.wordpress.com/2013/06/14/the-selected-papers-network-part-2/
  4. http://johncarlosbaez.wordpress.com/2013/07/12/the-selected-papers-network-part-3/
  5. http://johncarlosbaez.wordpress.com/2013/07/29/the-selected-papers-network-part-4/

PaperCritic (papercritic.com)
I have not investigated this platform in detail. It is based on Mendeley and the whole platform seems to revolve around you being a Mendeley user. That is too much “walled garden” for me, so I am not very interested in this one. Furthermore, with its strict dependence on Mendeley it seems a likely candidate to be swallowed by Elsevier if it becomes successful (like Mendeley itself).

I hope this was a useful introduction to open review and some of the tools that currently exist to facilitate it. I hope some of you out there have experience with these or other platforms. Which one do you prefer? Feel free to answer the poll below and elaborate in the comments section if you like.
