Adventures in Signal Processing and Open Science

iTWIST’16 Keynote Speakers: Holger Rauhut

At this year’s international Traveling Workshop on Interactions between Sparse models and Technology (iTWIST) we have keynote speakers from several different scientific backgrounds. Our next speaker is a mathematician with a solid track record in compressed sensing and matrix/tensor completion: Holger Rauhut.

Holger Rauhut is Professor for Mathematics and Head of Chair C for Mathematics (Analysis) at RWTH Aachen University. Professor Rauhut came to RWTH Aachen in 2013 from the Hausdorff Center for Mathematics at the University of Bonn, where he had been Professor for Mathematics since 2008.

Professor Rauhut has, among many other things, written the book A Mathematical Introduction to Compressive Sensing together with Simon Foucart and published important research contributions about structured random matrices.

At the coming iTWIST workshop I am very much looking forward to hearing Holger Rauhut speak about low-rank tensor recovery. This is especially interesting because, while the compressed sensing (one-dimensional) and matrix completion (two-dimensional) problems are relatively straightforward to solve, things get much more complicated when you try to generalise them from ordinary vectors or matrices to higher-order tensors. Algorithms for the general higher-dimensional case seem to be much more elusive and I am sure that Holger Rauhut can enlighten us on this topic (joint work with Reinhold Schneider and Zeljka Stojanac):

Low rank tensor recovery

An extension of compressive sensing predicts that matrices of low rank can be recovered from incomplete linear information via efficient algorithms, for instance nuclear norm minimization. Low rank representations become much more efficient when passing from matrices to tensors of higher order and it is of interest to extend algorithms and theory to the recovery of low rank tensors from incomplete information. Unfortunately, many problems related to matrix decompositions become computationally hard and/or hard to analyze when passing to higher order tensors. This talk presents two approaches to low rank tensor recovery together with (partial) results. The first one extends iterative hard thresholding algorithms to the tensor case and gives a partial recovery result based on a variant of the restricted isometry property. The second one considers relaxations of the tensor nuclear norm (which itself is NP-hard to compute) and corresponding semidefinite optimization problems. These relaxations are based on so-called theta bodies, a concept from convex algebraic geometry. For both approaches numerical experiments are promising but a number of open problems remain.
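To make the matrix case concrete, here is a hedged sketch of iterative hard thresholding for low-rank matrix recovery – the order-2 case of the first approach mentioned in the abstract. The operator and parameter names are my own. In the tensor case, the truncated SVD below gets replaced by a low-rank tensor projection, and that is exactly where the computational difficulties the talk addresses begin:

```python
import numpy as np

def matrix_iht(y, A_op, At_op, shape, rank, n_iter=500, step=1.0):
    """Iterative hard thresholding for low-rank matrix recovery (a sketch).

    y     : measurement vector, ideally y = A_op(X0) with rank(X0) <= rank
    A_op  : linear measurement operator mapping a matrix to a vector
    At_op : its adjoint, mapping a vector back to a matrix
    """
    X = np.zeros(shape)
    for _ in range(n_iter):
        # gradient step on 0.5 * ||y - A(X)||^2
        X = X + step * At_op(y - A_op(X))
        # "hard thresholding" of the singular values: project onto the set
        # of rank-`rank` matrices via a truncated SVD
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X
```

With, say, 56 random orthogonal measurements of an 8×8 rank-1 matrix, this typically recovers the matrix to high accuracy, even though only part of the linear information is observed.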

iTWIST’16 Keynote Speakers: Florent Krzakala

Note: You can still register for iTWIST’16 until Monday the 1st of August!

Our next speaker at iTWIST’16 is Florent Krzakala. Much like Phil Schniter – the previous speaker presented here – Florent Krzakala has made important and enlightening contributions to the Approximate Message Passing family of algorithms.

Florent Krzakala is Professor of Physics at École Normale Supérieure in Paris, France. Professor Krzakala came to ENS in 2013 from ESPCI, Paris (Laboratoire de Physico-chimie Théorique), where he had been Maître de conférence since 2004. Maître de conférence is a particular French academic designation that I am afraid I am going to have to ask my French colleagues to explain to me😉

Where Phil Schniter seems to have approached the (G)AMP algorithms, that have become quite popular for compressed sensing, from an estimation-algorithms-in-digital-communications background, Florent Krzakala has approached the topic from a statistical physics background which seems to have brought a lot of interesting new insight to the table. For example, together with Marc Mézard, Francois Sausset, Yifan Sun, and Lenka Zdeborová he has shown how AMP algorithms are able to perform impressively well compared to the classic l1-minimization approach by using a special kind of so-called “seeded” measurement matrices in “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices”.

At this year’s iTWIST workshop in a few weeks, Professor Krzakala is going to speak about matrix factorisation problems and the approximate message passing framework. Specifically, we are going to hear about:

Approximate Message Passing and Low Rank Matrix Factorization Problems

A large number of interesting problems in machine learning and statistics can be expressed as low rank structured matrix factorization problems, such as sparse PCA, planted clique, sub-matrix localization, clustering of mixtures of Gaussians or community detection in a graph.

I will discuss how recent ideas from statistical physics and information theory have led, on the one hand, to new mathematical insights into these problems, leading to a characterization of the optimal possible performances, and on the other to the development of new powerful algorithms, called approximate message passing, which turn out to be optimal for a large set of problems and parameters.

iTWIST’16 Keynote Speakers: Phil Schniter

With only one week left to register for iTWIST’16, I am going to walk you through the rest of our keynote speakers this week.

Our next speaker is Phil Schniter. Phil Schniter is Professor in the Department of Electrical and Computer Engineering at Ohio State University, USA.

Professor Schniter joined the Department of Electrical and Computer Engineering at OSU after graduating with a PhD in Electrical Engineering from Cornell University in 2000. Phil Schniter also has industrial experience from Tektronix from 1993 to 1996 and has been a visiting professor at Eurecom (Sophia Antipolis, France) from October 2008 through February 2009, and at Supelec (Gif sur Yvette, France) from March 2009 through August 2009.

Professor Schniter has published an impressive selection of research papers, earlier especially within digital communications. In recent years he has been very active in the research around generalised approximate message passing (GAMP), an estimation framework that has become popular in compressed sensing / sparse estimation. The reasons for the success of this algorithm (family), as I see it, are that it estimates under-sampled sparse vectors with accuracy comparable to the classic l1-minimisation approach in compressed sensing, at favourable computational complexity, and that the framework is easily adapted to many different kinds of signal distributions and to other types of structure than plain sparsity. If you are dealing with a signal that is not distributed according to the Laplace distribution that the l1-minimisation approach implicitly assumes, you can adapt GAMP to this other (known) distribution and achieve better reconstruction than l1-minimisation. Even if you do not know the distribution, GAMP can be modified to estimate it automatically and quite efficiently. This and many other details are among Professor Schniter’s contributions to the research on GAMP.
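For readers who have not met the algorithm family before, here is a toy sketch of basic AMP with a soft-thresholding denoiser – the simplest member of the family, not Schniter's GAMP – with threshold policy and parameter names of my own choosing:

```python
import numpy as np

def soft_threshold(v, t):
    """Soft thresholding: the denoiser corresponding to an l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(y, A, n_iter=50, tau=1.5):
    """Basic AMP for y = A x with sparse x (a toy sketch, not GAMP).

    tau scales the threshold relative to the estimated effective noise level.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)      # effective noise estimate
        x = soft_threshold(x + A.T @ z, tau * sigma)
        # The Onsager correction term on z is what distinguishes AMP from
        # plain iterative soft thresholding and keeps the effective noise
        # approximately Gaussian across iterations.
        z = y - A @ x + z * (np.count_nonzero(x) / m)
    return x
```

On a well-behaved i.i.d. Gaussian measurement matrix with a sufficiently sparse signal, this simple loop converges in a few tens of iterations; on more general matrices it can fail to converge, which is precisely the robustness issue the talk abstract below addresses.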

At this year’s iTWIST, Phil Schniter will be describing recent work on robust variants of GAMP. In detail, the abstract reads (and this is joint work with Alyson Fletcher and Sundeep Rangan):

Robust approximate message passing

Approximate message passing (AMP) has recently become popular for inference in linear and generalized linear models. AMP can be viewed as an approximation of loopy belief propagation that requires only two matrix multiplies and a (typically simple) denoising step per iteration, and relatively few iterations, making it computationally efficient. When the measurement matrix “A” is large and well modeled as i.i.d. sub-Gaussian, AMP’s behavior is closely predicted by a state evolution. Furthermore, when this state evolution has unique fixed points, the AMP estimates are Bayes optimal. For general measurement matrices, however, AMP may produce highly suboptimal estimates or not even converge. Thus, there has been great interest in making AMP robust to the choice of measurement matrix.

In this talk, we describe some recent progress on robust AMP. In particular, we describe a method based on an approximation of non-loopy expectation propagation that, like AMP, requires only two matrix multiplies and a simple denoising step per iteration. But unlike AMP, it leverages knowledge of the measurement matrix SVD to yield excellent performance over a larger class of measurement matrices. In particular, when the Gramian A’A is large and unitarily invariant, its behavior is closely predicted by a state evolution whose fixed points match the replica prediction. Moreover, convergence has been proven in certain cases, with empirical results showing robust convergence even with severely ill-conditioned matrices. Like AMP, this robust AMP can be successfully used with non-scalar denoisers to accomplish sophisticated inference tasks, such as simultaneously learning and exploiting i.i.d. signal priors, or leveraging black-box denoisers such as BM3D. We look forward to describing these preliminary results, as well as ongoing research, on robust AMP.

iTWIST’16 Keynote Speakers: Karin Schnass

Last week we heard about the first of our keynote speakers at this year’s iTWIST workshop in August – Lieven Vandenberghe.

Next up on my list of speakers is Karin Schnass. Karin Schnass is an expert on dictionary learning and heading an FWF-START project on dictionary learning in the Applied Mathematics group in the Department of Mathematics at the University of Innsbruck.

Karin Schnass joined the University of Innsbruck in December 2014 on an Erwin Schrödinger Research Fellowship, returning from a research position at the University of Sassari, Italy, which she held from 2012 to 2014. She originally graduated from the University of Vienna, Austria, with a master’s degree in mathematics with distinction (“Gabor Multipliers – A Self-Contained Survey”) and went on to graduate in 2009 with a PhD in computer, communication and information sciences from EPFL, Switzerland (“Sparsity & Dictionaries – Algorithms & Design”). Karin Schnass has, among other things, introduced the iterative thresholding and K-means (ITKM) algorithms for dictionary learning and published the first theoretical paper on dictionary learning (on arXiv) with Rémi Gribonval.

At our workshop this August, I am looking forward to hearing Karin Schnass talk about Sparsity, Co-sparsity and Learning. In compressed sensing, the so-called synthesis model has been the prevailing model since the beginning. First, we have the measurements:

y = A x

From the measurements, we can reconstruct the sparse vector x by solving this convex optimisation problem:

minimize |x|_1 subject to |y - A x|_2 < ε

If the vector x we can observe is not sparse, we can still do this if we can find a sparse representation α of x in some dictionary D:

x = D α

where we take our measurements of x using some measurement matrix M:

y = M x = M D α = A α

and we reconstruct the sparse vector α as follows:

minimize |α|_1 subject to |y - M D α|_2 < ε

The above is called the synthesis model because it works by using some sparse vector α to synthesize the vector x that we observe. There is an alternative to this model, called the analysis model, where we analyse an observed vector x to find some sparse representation β of it:

β = D' x

Here D’ is also a dictionary, but it is not the same dictionary as in the synthesis case. We can now reconstruct the vector x from the measurements y as follows:

minimize |D' x|_1 subject to |y - M x|_2 < ε

Now if D is a (square) orthonormal matrix such as an IDFT matrix, we can take D’ to be a DFT matrix and the two are simply each other’s inverses. In this case, the synthesis and analysis reconstruction problems above are equivalent. The interesting case is when the synthesis dictionary D is a so-called over-complete dictionary – a fat matrix. The analysis counterpart of this is a tall analysis dictionary D’, which behaves differently from the synthesis dictionary.
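Just to illustrate the orthonormal case numerically, here is a small sanity check (a toy example of my own) that the synthesis coefficients α and the analysis coefficients D’x coincide when D is orthonormal, so the two l1 objectives agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
# random orthonormal (square) dictionary D, standing in for e.g. an IDFT
D, _ = np.linalg.qr(rng.standard_normal((n, n)))

alpha = np.zeros(n)
alpha[:3] = [1.0, -2.0, 0.5]    # a sparse coefficient vector
x = D @ alpha                    # synthesis: x = D alpha
beta = D.T @ x                   # analysis:  beta = D' x

# for orthonormal D, D' D = I, so beta equals alpha and the
# synthesis and analysis l1 objectives take the same value
print(np.allclose(beta, alpha))                             # True
print(np.isclose(np.abs(beta).sum(), np.abs(alpha).sum()))  # True
```

The moment D becomes fat (over-complete), D’D is no longer the identity and this equivalence breaks down, which is what makes the analysis model a genuinely different object of study.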

Karin will give an overview over the synthesis and the analysis model and talk about how to learn dictionaries that are useful for either case. Specifically, she plans to tell us about (joint work with Michael Sandbichler):

While (synthesis) sparsity is by now a well-studied low complexity model for signal processing, the dual concept of (analysis) co-sparsity is much less investigated but equally promising. We will first give a quick overview over both models and then turn to optimisation formulations for learning sparsifying dictionaries as well as co-sparsifying (analysis) operators. Finally we will discuss the resulting learning algorithms and ongoing research directions.
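The actual ITKM algorithms are more refined than this, but a toy "threshold, then take signed means" iteration in the same spirit can be sketched as follows (function and parameter names are my own; this assumes noiseless training signals):

```python
import numpy as np

def itkm_step(Y, D, S):
    """One iteration of a simplified ITKM-style dictionary update (a toy sketch).

    Y : d x N matrix of training signals (as columns)
    D : d x K current dictionary estimate with unit-norm columns (atoms)
    S : assumed sparsity level of the signals
    """
    d, N = Y.shape
    K = D.shape[1]
    corr = D.T @ Y                      # K x N atom/signal correlations
    D_new = np.zeros_like(D)
    for n in range(N):
        # thresholding: keep the S atoms most correlated with signal n
        support = np.argsort(-np.abs(corr[:, n]))[:S]
        for k in support:
            # signed "mean" update: accumulate the signals that use atom k,
            # aligned by the sign of their correlation with the atom
            D_new[:, k] += np.sign(corr[k, n]) * Y[:, n]
    # renormalise the atoms (guarding against unused atoms)
    norms = np.linalg.norm(D_new, axis=0)
    norms[norms == 0] = 1.0
    return D_new / norms
```

In the exactly 1-sparse, noiseless case, a single step like this recovers a nearby ground-truth dictionary; the real ITKM algorithms handle higher sparsity levels and come with the kind of convergence analysis Karin Schnass works on.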

My problem with ResearchGate and

TL;DR: you can find my publications in Aalborg University’s repository or via ORCID.

ResearchGate – wow, a social network for scientists and researchers you might think. But think again about the ‘wow’. At least I am not so impressed. Here’s why…

I once created a profile on ResearchGate out of curiosity. It initially seemed like a good idea, but I soon realised that this would just add to the list of profile pages I would have to update, sigh. But so far I have kept my profile for fear of missing out. What if others cannot find my publications if I am not on ResearchGate? And so on…

But updating my profile is just the tip of the iceberg. What I find far more problematic about the site is their keen attempts to create a walled garden community. Let me explain what I mean. Take this paper for example (this is not a critique of this paper – in fact I think this is an example of a very interesting paper): One-Bit Compressive Sensing of Dictionary-Sparse Signals by Rich Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, and Mary Wootters:

  1. First of all, when you click the link to the paper above you cannot even see it without logging in on ResearchGate.
    “What’s the problem?”, you might think. “ResearchGate is free – just create an account and log in”. But I would argue that open access is not open access if you have to register and log in to read the paper – even if it is free.
  2. Once you log in and can finally see the paper, it turns out that you cannot read the actual paper. This appears to be because the author has not uploaded the full text and ResearchGate displays a button where you can “Request full-text” to ask the author to provide it.
    “Now what?!”, you are thinking. “This is a great service to both readers and authors, making it easy to connect authors to their readers and enabling them to easily give the readers what they are looking for” – wrong! This is a hoax set up by ResearchGate to convince readers that they are a great benevolent provider of open access literature.

The problem is that the paper is already accessible on arXiv – where it should be. ResearchGate has just scraped the paper info from arXiv and is trying to persuade the author to upload it to ResearchGate as well to make it look like ResearchGate is the place to go to read this paper. They could have chosen to simply link to the paper on arXiv, making it easy for readers to find it straight away. But they will not do that, because they want readers to stay inside their walled garden, controlling the information flow to create a false impression that ResearchGate is the only solution.

As if this was not enough, there are yet other reasons to reconsider your membership. For example, with their ResearchGate Score they are contributing to the same metric obsession that fuels journal impact factor abuse. The problem is that this score is neither transparent nor reproducible, contributing only to an obsession with numbers that drives “shiny” research and encourages gaming of metrics.

I don’t know about you, but I have had enough – I quit!

…The clever reader has checked and noticed that I have not deleted my ResearchGate profile. Why? Am I just another hypocrite? Look closer – you will notice that the only publication on my profile is a note explaining why I do not wish to use ResearchGate. I think it is better to actively inform about my choice and attack the problem from the inside rather than just staying silently away.

Update 6th of July 2016…

I have now had a closer look at the other site as well and it turns out that they are doing more or less the same, so I have decided to quit that network too. They do not let you read papers without logging in, and they also seem to have papers obviously scraped from arXiv, waiting for the author to upload the full-text version and ignoring the fact that it is available on arXiv. Again, they want to gather everything in-house to make it appear as if they are the rightful gate-keepers of all research.

As on ResearchGate, I have left my profile there with just a single publication, which is basically this blog post (plus a publication which is a link to my publications in Aalborg University’s repository).

iTWIST’16 Keynote Speakers: Lieven Vandenberghe

The workshop program has been ready for some time now, and we are handling the final practicalities to be ready to welcome you in Aalborg in August for the iTWIST’16 workshop. So now I think it is time to start introducing you to our – IMO – pretty impressive line-up of keynote speakers.

First up is Prof. Lieven Vandenberghe from UCLA. Prof. Vandenberghe is an expert on convex optimisation and signal processing and is – among other things – well known for his fundamental textbook “Convex Optimization” together with Stephen Boyd.

Lieven Vandenberghe is Professor in the Electrical Engineering Department at UCLA. He joined UCLA in 1997, following postdoctoral appointments at K.U. Leuven and Stanford University, and has held visiting professor positions at K.U. Leuven and the Technical University of Denmark. In addition to “Convex Optimization”, he also edited the “Handbook of Semidefinite Programming” with Henry Wolkowicz and Romesh Saigal.

At iTWIST, I am looking forward to hearing him speak about Semidefinite programming methods for continuous sparse optimization. So far, it is my impression that most theory and literature about compressed sensing and sparse methods has relied on discrete dictionaries consisting of a basis or frame of individual dictionary atoms. If we take the discrete Fourier transform (DFT) as an example, the dictionary has fixed atoms corresponding to a set of discrete frequencies. More recently, theories have started emerging that allow continuous dictionaries instead (see for example the work of Ben Adcock, Anders Hansen, Bogdan Roman et al.). As far as I understand, this is a generalisation that in principle allows you to get rid of the discretised atoms and consider any atoms on the continuum “in between” as well. This is what Prof. Vandenberghe has planned for us so far (and this is joint work with Hsiao-Han Chao):

We discuss extensions of semidefinite programming methods for 1-norm minimization over infinite dictionaries of complex exponentials, which have recently been proposed for superresolution and gridless compressed sensing.

We show that results related to the generalized Kalman-Yakubovich-Popov lemma in linear system theory provide simple constructive proofs for the semidefinite representations of the penalties used in these problems. The connection leads to extensions to more general dictionaries associated with linear state-space models and matrix pencils.

The results will be illustrated with applications in spectral estimation, array signal processing, and numerical analysis.

iTWIST’16 is taking shape

This year’s international Traveling Workshop on Interactions Between Sparse Models and Technology is starting to take shape now. The workshop will take place on the 24th-26th of August 2016 in Aalborg. See also this recent post about the workshop.


By Alan Lam (CC-BY-ND)

Aalborg is a beautiful city in the northern part of Denmark and what many of you probably do not know is that Aalborg was actually rated “Europe’s happiest city” in a recent survey by the European Commission.

It is now possible to register for the workshop and if you are quick and register before July, you get it all for only 200€. That is, three days of workshop, including lunches and a social event with dinner on Thursday evening.

There are plenty of good reasons to attend the workshop. In addition to the many exciting contributed talks and posters that we are now reviewing, we have an impressive line-up of 9 invited keynote speakers! I will be presenting what the speakers have in store for you here on this blog in the coming days.


international Traveling Workshop on Interactions between Sparse models and Technology


On the 24th to 26th of August 2016, we are organising a workshop called international Traveling Workshop on Interactions between Sparse models and Technology (iTWIST). iTWIST is a biennial workshop organised by a cross-European committee of researchers and academics on theory and applications of sparse models in signal processing and related areas. The workshop has so far taken place in Marseille, France in 2012 and in Namur, Belgium.

I was very excited to learn last fall that the organising committee of the previous two instalments of the workshop had the confidence to let Morten Nielsen and me organise the workshop in Aalborg (Denmark) in 2016.


This year, the workshop continues many of the themes from the first two years and adds a few new ones:

  • Sparsity-driven data sensing and processing (e.g., optics, computer vision, genomics, biomedical, digital communication, channel estimation, astronomy)
  • Application of sparse models in non-convex/non-linear inverse problems (e.g., phase retrieval, blind deconvolution, self calibration)
  • Approximate probabilistic inference for sparse problems
  • Sparse machine learning and inference
  • “Blind” inverse problems and dictionary learning
  • Optimization for sparse modelling
  • Information theory, geometry and randomness
  • Sparsity? What’s next?
    • Discrete-valued signals
    • Union of low-dimensional spaces,
    • Cosparsity, mixed/group norm, model-based, low-complexity models, …
  • Matrix/manifold sensing/processing (graph, low-rank approximation, …)
  • Complexity/accuracy tradeoffs in numerical methods/optimization
  • Electronic/optical compressive sensors (hardware)

I would like to point out here, as Igor Carron mentioned recently, that hardware designs are also very welcome at the workshop – it is not just theory and thought experiments. We are very interested in getting a good mix between theoretical aspects and applications of sparsity and related techniques.

Keynote Speakers

I am very excited to be able to present a range of IMO very impressive keynote speakers covering a wide range of themes:

  • Lieven Vandenberghe – University of California, Los Angeles – homepage
  • Gerhard Wunder – TU Berlin & Fraunhofer Institute – homepage
  • Holger Rauhut – RWTH Aachen – homepage
  • Petros Boufounos – Mitsubishi Electric Research Labs – homepage
  • Florent Krzakala and Eric Tramel – ENS Paris – homepage
  • Phil Schniter – Ohio State University – homepage
  • Karin Schnass – University of Innsbruck – homepage
  • Rachel Ward – University of Texas at Austin – homepage
  • Bogdan Roman – University of Cambridge – homepage

The rest of the workshop is open to contributions from the research community. Please send your papers (in the form of 2-page extended abstracts – see details here). Your research can be presented as an oral presentation or a poster. If you prefer, you can state your preference (oral or poster) during the submission process, but we cannot guarantee that we can honour your request and we reserve the right to assign papers to either category in order to put together a coherent programme. Please note that we consider oral and poster presentations equally important – poster presentations will not be stowed away in a dusty corner during coffee breaks but will have one or more dedicated slots in the programme!

Open Science

In order to support open science, we strongly encourage authors to publish any code or data accompanying their papers in a publicly accessible repository, such as GitHub, Figshare, Zenodo etc.

The proceedings of the workshop will be published in arXiv as well as SJS in order to make the papers openly accessible and encourage post-publication discussion.

Thoughts about Scholarly HTML

The company is working on a draft standard (or what I guess they hope will eventually become a standard) called Scholarly HTML. The purpose of this seems to be to standardise the way scholarly articles are structured as HTML in order to use that as a more semantic alternative to for example PDF which may look nice but does nothing to help understand the structure of the content, probably more the contrary.
They present their proposed standard in this document. They also seem to have formed a community group at the World Wide Web Consortium. It appears this is not a new initiative: there was already a previous project called Scholarly HTML, and they seem to be trying to take the idea further from there. Martin Fenner wrote a bit of background story behind the original Scholarly HTML.
I read their proposal. It seems like a very promising initiative because it would allow scholarly articles across publishers to be understood better by, not least, algorithms for content mining, automated literature search, recommender systems etc. It would be particularly helpful if all publishers had a common standard for marking up articles, and HTML seems a good choice since you only need a web browser to display it. There is another nice feature about it too: I tend to read a lot on my mobile phone and tablet and it really is a pain when the content does not fit the screen. This is often the case with PDF, which does not reflow too well in the apps I use for viewing. Here HTML would be much better, not being focused on physical pages like PDF.
I started looking at this proposal because it seemed like a natural direction to look further in from my crude preliminary experiments in Publishing Mathematics in e-books.
After reading the proposal, a few questions arose:

  1. The way the formatting of references is described, it seems to me as if references can only be of type “schema:Book” or “schema:ScholarlyArticle”. Does this mean that they do not consider a need to cite anything but books or scholarly articles? I know that some people hold the IMO very conservative view that the reference list should only refer to peer-reviewed material, but this is too constrained and I certainly think it will be relevant to cite websites, data sets, source code etc. as well. It should all go into the reference list to make it easier to understand what the background material behind a paper is. This calls for a much richer selection of entry types. For example, Biblatex’s entry types could serve as inspiration.
  2. The authors and affiliations section is described here. Author entries are described as having:

    property=”schema:author” or property=”schema:contributor” and a typeof=”sa:ContributorRole”

    I wonder if this way of specifying authors/contributors makes it possible to specify more granular roles or multiple roles for each author like for example Open Research Badges?

  3. Under article structure, they list the following types of sections:

    Sections are expected to be typed using the typeof attribute. The following typeof values are currently understood:

    sa:Funding (which has its specific structure)

    I think there is a need for more types of sections. I for example also see articles containing Introduction, Analysis, and Discussion sections and I am sure there must be more that I have not thought of.

Comments on “On the marginal cost of scholarly communication”

A new science publisher seems to have appeared recently – or publisher is probably not the right word… It is apparently neither a journal nor a publisher per se. Rather, they seem to be focusing on developing a new publishing platform that provides a modern science publishing solution, built web-native from the bottom up.

The idea feels right and in my opinion, Standard Analytics (the company behind it) could very likely become an important player in a future where I think journals will to a large extent be replaced by recommender systems and where papers can be narrowly categorised by topic rather than by where they were published. Go check out their introduction to their platform afterwards…

A few days ago, I became aware that they had published an article or blog post about “the marginal cost of scholarly communication” in which they examine what it costs as a publisher to publish scientific papers in a web-based format. This is a welcome contribution to the ongoing discussion of what is actually a “fair cost” of open access publishing, considering the very pricey APCs that some publishers charge (see for example Nature Publishing Group). In estimating this marginal cost they define

the minimum requirements for scholarly communication as: 1) submission, 2) management of editorial workflow and peer review, 3) typesetting, 4) DOI registration, and 5) long-term preservation.

They collect data on what these services cost using available vendors of such services and alternatively consider what they would cost if you assume the publisher has software available for performing the typesetting etc. (perhaps they have developed it themselves or have it available as free, open-source software). For the case where all services are bought from vendors, they find that the marginal cost of publishing a paper is between $69 and $318. For the case where the publisher is assumed to have all necessary software available and basically only needs to pay for server hosting and registration of DOIs, the price is found to be dramatically lower – between $1.36 and $1.61 per paper.

Marginal Cost

This all sounds very interesting, but I found this marginal cost a bit unclear. They define the marginal cost of publishing a paper as follows:

The marginal cost only takes into account the cost of producing one additional scholarly article, therefore excluding fixed costs related to normal business operations.

OK, but here I am left in doubt about what they categorise as normal business operations. One example apparently is the membership cost to CrossRef for issuing DOIs:

As our focus is on marginal cost, we excluded the membership fee from our calculations.

However, in a box at the end of the article they mention eLife as a specific example:

Based on their 2014 annual report (eLife Sciences, 2014), eLife spent approximately $774,500 on vendor costs (equivalent to 15% of their total expenses). Given that eLife published 800 articles in 2014, their marginal cost of scholarly communication was $968 per article.

I was not able to find the specific amount of $774,500 myself in eLife’s annual report but, assuming it is correct, how do we know whether for example CrossRef membership costs are included in eLife’s vendor costs? If they are, this estimate of eLife’s marginal cost of publication is not comparable to marginal costs calculated in Standard Analytics’ paper as mentioned above.

We could also discuss how relevant the marginal cost is, at least if you are in fact

an agent looking to start an independent, peer-reviewed scholarly journal

I mean, in that situation you are actually looking to start from scratch and have to take all those “fixed costs related to normal business operations” into account…

I should also mention that I have highlighted the quotes above from the paper here.

Typesetting Solutions

Standard Analytics seem to assume that typesetting will have to include conversion from Microsoft Word, LaTeX etc. and suggest Pandoc as a solution, while at the same time pointing out that there is a lack of such freely available solutions for those wishing to base their journal on their own software platform. If a prospective journal were to restrict submissions to LaTeX format, there are also solutions such as LaTeXML, and ShareLaTeX‘s open source code could be used for this purpose as well. Other interesting solutions are also being developed and I think it is worth keeping an eye on initiatives like PeerJ’s paper-now. Finally, it could also be an idea to simply ask existing free, open-access journals how they handle these things (which I assume they do in a very low-cost way). One example I can think of is the Journal of Machine Learning Research.

Other Opinions

I just became aware that Cameron Neylon also wrote a post about Standard Analytics’ paper: The Marginal Costs of Article Publishing – Critiquing the Standard Analytics Study. I will go and read it now…
