Adventures in Signal Processing and Open Science

Tag: open access

My problem with ResearchGate and Academia.edu

TL;DR: you can find my publications in Aalborg University’s repository or on ORCID.

ResearchGate – wow, a social network for scientists and researchers, you might think. But think again about that ‘wow’. I, at least, am not so impressed. Here’s why…

I once created a profile on ResearchGate out of curiosity. It initially seemed like a good idea, but I soon realised that this would just add to the list of profile pages I would have to update, sigh. But so far I have kept my profile for fear of missing out. What if others cannot find my publications if I am not on ResearchGate? And so on…

But updating my profile is just the tip of the iceberg. What I find far more problematic about the site is their keen attempts to create a walled garden community. Let me explain what I mean. Take this paper for example (this is not a critique of this paper – in fact I think this is an example of a very interesting paper): One-Bit Compressive Sensing of Dictionary-Sparse Signals by Rich Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, and Mary Wootters:

  1. First of all, when you click the link to the paper above you cannot even see it without logging in on ResearchGate.
    “What’s the problem?”, you might think. “ResearchGate is free – just create an account and log in”. But I would argue that open access is not open access if you have to register and log in to read the paper – even if it is free.
  2. Once you log in and can finally see the paper, it turns out that you cannot read the actual paper. This appears to be because the author has not uploaded the full text and ResearchGate displays a button where you can “Request full-text” to ask the author to provide it.
    “Now what?!”, you are thinking. “This is a great service to both readers and authors, making it easy to connect authors to their readers and enabling them to easily give the readers what they are looking for” – wrong! This is a hoax set up by ResearchGate to convince readers that it is a great, benevolent provider of open access literature.

The problem is that the paper is already accessible here: on arXiv – where it should be. ResearchGate has just scraped the paper info from arXiv and is trying to persuade the author to upload it to ResearchGate as well, to make it look like ResearchGate is the place to go to read this paper. They could have chosen simply to link to the paper on arXiv, making it easy for readers to find it straight away. But they will not do that, because they want readers to stay inside their walled garden, controlling the information flow to create the false impression that ResearchGate is the only solution.

As if this were not enough, there are yet other reasons to reconsider your membership. For example, with their ResearchGate score they are contributing to the same kind of metric obsession as journal impact factor abuse. The problem is that this score is neither transparent nor reproducible, contributing only to an obsession with numbers that drives “shiny” research and encourages the gaming of metrics.

I don’t know about you, but I have had enough – I quit!

…The clever reader has checked and noticed that I have not deleted my ResearchGate profile. Why? Am I just another hypocrite? Look closer – you will notice that the only publication on my profile is a note explaining why I do not wish to use ResearchGate. I think it is better to actively inform people of my choice and attack the problem from the inside rather than just silently staying away.

Update 6th of July 2016…

I have now had a closer look at Academia.edu as well and it turns out that they are doing more or less the same, so I have decided to quit this network as well. They do not let you read papers without logging in and they also seem to have papers obviously scraped from arXiv, waiting for the author to upload the full-text version and ignoring the fact that it is available on arXiv. Again, they want to gather everything in-house to make it appear as if they are the rightful gate-keepers of all research.

As on ResearchGate, I have left my profile on Academia.edu with just a single publication, which is basically this blog post, along with a link to my publications in Aalborg University’s repository.


Comments on “On the marginal cost of scholarly communication”

A new science publisher seems to have appeared recently, or publisher is probably not the right word… science.ai is apparently neither a journal nor a publisher per se. Rather, they seem to be focusing on developing a new publishing platform that provides a modern science publishing solution, built web-native from the bottom up.

The idea feels right and in my opinion, Standard Analytics (the company behind science.ai) could very likely become an important player in a future where I think journals will to a large extent be replaced by recommender systems and where papers can be narrowly categorised by topic rather than by where they were published. Go check out their introduction to their platform afterwards…

A few days ago, I became aware that they had published an article or blog post about “the marginal cost of scholarly communication” in which they examine what it costs as a publisher to publish scientific papers in a web-based format. This is a welcome contribution to the ongoing discussion of what is actually a “fair cost” of open access publishing, considering the very pricey APCs that some publishers charge (see for example Nature Publishing Group). In estimating this marginal cost they define

the minimum requirements for scholarly communication as: 1) submission, 2) management of editorial workflow and peer review, 3) typesetting, 4) DOI registration, and 5) long-term preservation.

They collect data on what these services cost using available vendors of such services and, alternatively, consider what they would cost if you assume the publisher has software available for performing the typesetting etc. (perhaps they have developed it themselves or have it available as free, open-source software). For the case where all services are bought from vendors, they find that the marginal cost of publishing a paper is between $69 and $318. For the case where the publisher is assumed to have all necessary software available and basically only needs to pay for server hosting and registration of DOIs, the price is found to be dramatically lower – between $1.36 and $1.61 per paper.

Marginal Cost

This all sounds very interesting, but I found this marginal cost a bit unclear. They define the marginal cost of publishing a paper as follows:

The marginal cost only takes into account the cost of producing one additional scholarly article, therefore excluding fixed costs related to normal business operations.

OK, but here I am in doubt about what they categorise as normal business operations. One example apparently is the membership cost of CrossRef for issuing DOIs:

As our focus is on marginal cost, we excluded the membership fee from our calculations.

However, in a box at the end of the article they mention eLife as a specific example:

Based on their 2014 annual report (eLife Sciences, 2014), eLife spent approximately $774,500 on vendor costs (equivalent to 15% of their total expenses). Given that eLife published 800 articles in 2014, their marginal cost of scholarly communication was $968 per article.

I was not able to find the specific amount of $774,500 myself in eLife’s annual report but, assuming it is correct, how do we know whether for example CrossRef membership costs are included in eLife’s vendor costs? If they are, this estimate of eLife’s marginal cost of publication is not comparable to marginal costs calculated in Standard Analytics’ paper as mentioned above.
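The arithmetic behind eLife’s quoted figure is at least easy to reproduce. A minimal sketch, using the $774,500 vendor-cost figure and 800-article count exactly as quoted in Standard Analytics’ article (I have not been able to verify the former against eLife’s annual report myself):

```python
# Figures as quoted in Standard Analytics' article (not independently verified):
vendor_costs = 774_500        # eLife's approximate 2014 vendor costs, in USD
articles_published = 800      # articles eLife published in 2014

cost_per_article = vendor_costs / articles_published
print(f"${cost_per_article:.0f} per article")  # → $968 per article
```

So the $968 figure follows directly from those two numbers – but whether it is comparable to the $69–$318 range depends entirely on what the vendor-cost total includes.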

We could also discuss how relevant the marginal cost is, at least if you are in fact

an agent looking to start an independent, peer-reviewed scholarly journal

I mean, in that situation you are actually looking to start from scratch and have to take all those “fixed costs related to normal business operations” into account…

I should also mention that I have highlighted the quotes above from the paper via hypothes.is here.

Typesetting Solutions

Standard Analytics seem to assume that typesetting will have to include conversion from Microsoft Word, LaTeX etc. and suggest Pandoc as a solution, while at the same time pointing out that there is a lack of such freely available solutions for those wishing to base their journal on their own software platform. If a prospective journal were to restrict submissions to LaTeX format, there are also solutions such as LaTeXML, and ShareLaTeX‘s open source code could be used for this purpose as well. Other interesting solutions are also being developed and I think it is worth keeping an eye on initiatives like PeerJ’s paper-now. Finally, it could also be an idea simply to ask existing free, open-access journals how they handle these things (which I assume they do in a very low-cost way). One example I can think of is the Journal of Machine Learning Research.

Other Opinions

I just became aware that Cameron Neylon also wrote a post about Standard Analytics’ paper: The Marginal Costs of Article Publishing – Critiquing the Standard Analytics Study, which I will go and read now…

It’s all about replication

A new journal appeared recently in the scientific publishing landscape: ReScience, announced at the recent EuroSciPy 2015 conference. The journal has been founded by Nicolas Rougier and Konrad Hinsen. It is remarkable in several ways – so remarkable, in fact, that I could not resist accepting their offer to become associate editor for the journal.

So how does this journal stand out from the crowd? First of all it is about as open as it gets. The entire publishing process is completely transparent – from first submission through review to final publication. Second, the journal platform is based entirely on GitHub, the code repository home to a plethora of open source projects. This is part of what enables the journal to be so open about the entire publishing process. Third, the journal does not actually publish original research – there are plenty of those already. Instead, ReScience focuses entirely on replications of already published computational science.

As has been mentioned by numerous people before me, when dealing with papers based on computational science it is not really enough to review the paper in the classical sense to ensure that the results can be trusted (this is not only a problem of computational science, but it is the particular focus of ReScience). Results need to be replicated to validate them, and this is what ReScience addresses.

Many of us probably know it: we are working on a new paper of our own and we need to replicate the results of some previous paper that we wish to compare our results against. Except for that comparison, this is essentially lost work after you get your paper published. Others looking at the original paper whose results you replicated may not be aware that anyone replicated these results. Now you can publish the replication of these previous results as well and get credit for it. At the same time you benefit the authors of the original results that you have replicated by helping validate their research.

The process of submitting your work to ReScience is described on their website along with the review process and the roles of editors and reviewers. So if you have replicated someone else’s computational work, go ahead and publish it in ReScience. If it is in the signal processing area I will be happy to take your submission through the publishing process.

Open Access Journals: What’s Missing?

I just came across this blog post by Nick Brown: Open Access journals: what’s not to like? This, maybe… That post was also what inspired the title of my post. His post really got me into writing mode, mostly because I don’t quite agree with him. I left this as a comment on his blog, but I felt it was worth repeating here.


Episciences.org update

I mentioned the Episciences project the other day in Scientific journals as an overlay. In the meantime I have tried to contact the people behind this project and The Open Journal, apparently without any luck.

I went and checked the Episciences website yesterday and it actually seems that they are moving forward. They have changed the page design completely and there is now a button in the upper right corner to create an account and log in. I took the liberty of doing so to have a look around. I was able to create an account, but that is just about it so far. The site still seems quite “beta” – I was not able to save changes to my profile and I cannot yet find anywhere to submit papers. It is nice to see some progress on the platform and I will be keeping an eager eye on it to find out when they go operational.

Scientific journals as an overlay

There is an update on this post in Episciences.org update

In many of my posts since I started this blog, I have been writing about open peer review. Another topic related to open science that interests me is open access (to scientific papers). Part of open access in practice is about authors posting their papers, perhaps submitted to traditional journals, to preprint servers such as arXiv. This is used a lot, particularly in physics and mathematics.

The Winnower officially launches today

I have written about The Winnower here before. I have been involved in testing the platform during the past couple of months and must say that it looks very promising.
Today they officially launch! Now it is just up to us to participate and make a change towards transparent publishing with open review.

Rock Your Paper

I noticed a new web site some time ago: rockyourpaper.org. It is first and foremost a search engine for open access research papers. You can search for open access papers from lots of different publishers and they aim to be the place to go for open access research.

Their initial motivation is to provide easy access to research to students and researchers from countries that typically cannot afford access to the expensive subscription journals. I talked to Rock Your Paper co-founder Neeraj Mehta about their platform to find out a bit more about it.

Rock Your Paper (RYP) started on October 18th, 2013. It is not the only place to search for open access papers. Other possibilities of course include the publishers’ sites themselves, but this is hard work considering the many different publishers you would have to visit. Another centralised place to search for papers is the Directory of Open Access Journals (DOAJ), where you can search among, at the time of writing, 1,573,847 open access papers. When I spoke to Neeraj in January, RYP was hoping to index 20 million research papers by January 31st. In addition, they provide another layer of service to their users. You can create an account with RYP and save both searches and individual papers so that you can keep track of “what was it I searched for the other day when I found that paper…”

Rock Your Paper is a for-profit startup company that of course hopes to earn money from their services, but they promise that their basic search and access features will remain free for users. This seems very much in line with their initial purpose. They may extend their services along the way with additional features such as formatting, editing and translation which users will need to pay for.

Initially, they are aiming to establish themselves first and foremost as an open access search engine. Later on, they may also extend the platform to let users publish research. They have also approached publishers of subscription-based journals about the possibility of providing discounted access to these, but unfortunately they have not had any luck with this yet.

I think Rock Your Paper sounds like one of many interesting new players in the open access / open science area that will be exciting to follow.

There’s a new journal in town…

[Image: Jean-François Millet: Le Vanneur. Source: Wikipedia]

I have been writing a few posts lately about open peer review in scientific publishing (Open Review of Scientific Literature, Openness And Anonymity in Peer Review, Third-party review platforms). As I have mentioned, quite a few platforms experimenting with open post-publication peer review have been appearing around us recently.

Now it seems there is an actual journal on its way, embracing open review and open access from the very beginning to an extent I have not seen yet. It sounds like a very brave and exciting initiative. According to their own description it is going to be a journal for all disciplines of science. You can read more about the ideas behind the journal on their blog: The Winnower. It was also featured recently here.

Curious about this new journal as I am, I have been talking to its founder, Josh Nicholson, online on a few occasions lately to find out more about it. I have decided to publish this Q&A correspondence here in case others are interested.

Q&A with Josh Nicholson

2013/10/04 – on Google+:
As I understand, you will publish manuscripts immediately and publish the accompanying reviews of them when ready. Will these manuscripts be open to review by anyone, will you find reviewers, or a combination thereof?
In principle, it would be “most open” to allow reviews by anyone, but specifically when some paper is not “popular” enough to attract reviewers spontaneously, I guess it might also  be necessary to actively engage reviewers? If so, do you consider somehow paying (monetarily or otherwise) reviewers?

The papers will indeed be open to review by anyone. We want it to be completely transparent and open. We also wish to be completely clear that papers without reviews linked to them have not been reviewed and should be viewed accordingly. We would like to engage reviewers with different incentives in the future and will explore the best ways to do that as we move forward. Our system will in essence be quite similar to “pre-prints”, where authors are allowed to solicit reviews and anyone is allowed to review, but it will all occur in the open. We will charge $100 per publication so that we can sustain the site without relying on grants. We would love to hear more of your feedback should you have any!

I have been considering for some time how an open peer review system can attract reviewers and possibly encourage them to identify themselves to “earn” reputation.
The Stack Exchange network, among others, seems to be quite popular and it seems to me that one of the things driving users to contribute is the reputation system, where a reputation score becomes the “currency” of the site. Users can vote other users’ questions and answers up or down. This lets other users quickly assess which questions and answers are “good”. Votes earn the poster of the question or answer reputation points and this encourages posters to make an effort to write good questions and answers.
It seems to me that such a system could be used more or less directly on a peer review platform. It would both encourage users to write reviews and let other users assess and score reviews (review of reviews).

We agree with you 100%.  We would even like to offer the “best” reviewers, as judged by the community, free publishing rights.  Ultimately we would also like to make the reviews citeable.  Some of these features will not be present in the initial launch but will be expanded upon and rolled out over time.  We hope you will consider submitting in 2014 and reviewing!
We have a few other select features that will be present in the initial build to attract reviews.  Some of these will be discussed in future blog posts.

2013/10/06 – in blog comment:
Have you at The Winnower considered if you could make use of third-party reviewer platforms for your publishing?

We have briefly communicated with LIBRE and are indeed open to reviews from third-party platforms. We are happy to work with anyone towards the goal of making reviewing more transparent.

2013/10/11 – on Twitter:
Will you have any sort of editorial endorsement of papers you publish or will the open reviews be the only “stamp of approval”?

Open reviews will serve as ‘stamp of approval.’ We hope papers will accumulate many reviews.
Papers can be organized by content as well as by various metrics, including most reviewed etc.

2013/10/11 – in blog comment:
I am very excited about your new journal – that’s why I keep asking all sorts of questions about it here and there 😉
In terms of archiving papers, what will you do to ensure that the papers you have published do not disappear in the event that the Winnower should be out of business? Do you have any mutual archival agreements with other journals or institutional repositories?

We are happy you are excited about The Winnower. Please keep the questions and comments coming!
We are currently looking at what is the best way to preserve papers published in The Winnower should The Winnower not survive. We are looking to participate in CLOCKSS but have not made any agreements as of yet.

Another one: under which terms are you going to license the published manuscripts? For example, I have heard authors express concern about third-party commercial reuse of papers without consent under CC-BY. I am not sure yet what to think about that.

Content published with The Winnower will be licensed under a CC BY license. Commercial reuse of work, as we understand it, must cite the original work. We want to open the exchange of ideas and information.

2013/10/18 – in blog comment:
Here goes another one of my questions: will your platform employ versioning of manuscripts?
I imagine that authors of a paper may want to revise their paper in response to relevant review comments. Just like it often happens in traditional pre-publication review – here we just get the whole story out in the open. If so, I think there should be a mechanism in place to keep track of different versions of the paper – all of which should remain open to readers. As a consequence of this, there will also be a need to keep track of which version of a paper specific comments relate to.
Rating: will it be possible to rate/score papers in addition to reviewing/commenting? While a simple score may seem a crude measure I think there is a possibility that it could help readers sift more efficiently through the posted papers. In a publishing model like yours, it is going to be harder for, e.g. funding agencies or hiring committees to assess an author’s work, because they cannot simply judge it by where it was published (that may be the wrong way to do it anyway, but that is not what I am aiming to discuss here). A simple score might make the transition to your proposed publishing process “easier” for some stakeholders. I am a bit reluctant about it myself, but in order not to make it too superficial, maybe scoring/rating should only be possible after having provided a proper review comment. This should make it difficult for readers to score the paper without making a proper effort in assessing the paper.

We are happy to have your questions! There will indeed be an option to revise manuscripts after a MS has collected reviews. We are however a bit uneasy about hosting multiple versions of the paper as we think it may become quite confusing. We are happy to explore this option in the future but currently we believe that the comments along with the responses should be sufficient to inform the reader what was changed.

Do you agree?

Our reviews will be structured meaning that there will be prompts which allow different aspects of the papers to be rated.

So you plan to allow revision, but previous versions are “lost”?
I see the point about the possible confusion, but what if a commenter points out some flaw about some details in the paper, the author acknowledges it and revises the paper? Now, future readers can no longer see in the paper what that comment was about. Well, they can see the comment, but they can no longer see for themselves in the actual paper what the flawed part originally said.
Could the platform perhaps always display the most recent version of the paper but show links to previous versions somewhere along with the metadata, abstract etc. that I assume you will be displaying on a paper’s “landing page”? The actual PDF of an outdated version of a paper could have a prominently displayed text or “stamp” saying that this is an outdated version kept for the record and that a newer version is available?
Perhaps links to older versions of a paper would only be visible in a specific comment that refers to an earlier version of the paper?

These are good points. We will discuss some of these and see what approach will work best and what we are capable of. If it is not too confusing and not too much work to implement in the build, this could be quite useful, as you point out. Thanks!

By the way, I think it would be interesting if it were possible not only to comment on papers, but to annotate the actual text in-line. I think it would be great if readers could mark up parts of text and write comments directly next to them. Would it seem too draft-like?
I am not sure how this could be done, technically, but it seems like the technology http://hypothes.is/ is brewing could enable something like this.

We agree that inline comments are quite interesting and we have this as a possible tool to build into the platform in the future. We, however, have limited funding for the initial build and want to focus on features that are critical first and complementary second. But this is a great idea and definitely something we will be exploring in the future.

Good point. It is best to get the essential features working well first.

A look at the process of submitting articles to OA journals | Open Science


If you are thinking of publishing your article in an open access model, there are usually two paths to choose from. One – you can publish in Green OA, which means adding the paper to a specially prepared repository (self-archiving). Two – you can choose the Gold OA model and submit your article to an OA journal, where it will be corrected, peer-reviewed and then published. At this point I would like to briefly describe the process of submitting articles to OA journals for those who are considering just that option. It is a general description, and various steps may differ depending on the publisher and the journal.
