Adventures in Signal Processing and Open Science


iTWIST’16 Keynote Speakers: Gerhard Wunder

iTWIST’16 is starting less than two weeks from now and we have 46 participants coming to Aalborg for the event (and I can still squeeze in a couple more – single-day registrations possible – so contact me if you are interested; only 4 places left before I have to order a bigger bus for the banquet dinner 🙂 ).

Our next keynote speaker in line for the event is Gerhard Wunder, head of the Heisenberg Communications and Information Theory Group. Gerhard Wunder recently moved to Freie Universität Berlin from Technische Universität Berlin. Dr. Wunder is currently heading two research projects: the EU FP7 project 5GNOW and PROPHYLAXE, funded by the German Ministry of Education and Research, and is a member of the management team of the EU H2020 FANTASTIC-5G project. He also receives funding in the German DFG priority programs SPP 1798 CoSIP (Compressed Sensing in Information Processing) and the upcoming SPP 1914 Cyber-Physical Networking.

Gerhard Wunder conducts research in wireless communication technologies and has recently started introducing principles of sparsity and compressed sensing into wireless communication. As an example of this, Gerhard Wunder recently published the paper “Sparse Signal Processing Concepts for Efficient 5G System Design” in IEEE Access together with Holger Boche, Thomas Strohmer, and Peter Jung.

At the coming iTWIST workshop, Gerhard Wunder is going to introduce us to the use of compressive sensing in random access medium access control (MAC), applied in massive machine-type communications – a major feature being extensively researched for coming 5G communication standards. The abstract of Dr. Wunder’s talk reads:

Compressive Coded Random Access for 5G Massive Machine-type Communication

Massive Machine-type Communication (MMC) within the Internet of Things (IoT) is an important future market segment in 5G, but not yet efficiently supported in cellular systems. A major challenge in MMC is the very unfavorable ratio of payload to control overhead due to small messages and oversized Medium Access (MAC) procedures. In this talk we follow up on a recent concept called Compressive Coded Random Access (CCRA), combining advanced MAC protocols with Compressed Sensing (CS) based multiuser detection. Specifically, we introduce a “one shot” random access procedure where users can send a message without a priori synchronizing with the network. In this procedure a common overloaded control channel is used to jointly detect sparse user activity and sparse channel profiles. In the same slot, data is detected based on the already available information. In the talk we show how CS algorithms, and in particular the concept of hierarchical sparsity, can be used to design efficient and scalable access protocols. The CCRA concept is introduced in full detail and further generalizations are discussed. We present algorithms and analysis that prove the additional benefit of the concept.
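A small aside from me to unpack the “hierarchical sparsity” mentioned in the abstract: the activity pattern is sparse across blocks (only a few users transmit at a time) and each active block is itself sparse. The core projection step in this kind of hierarchical thresholding can be sketched in a few lines of NumPy – purely my own illustration, not code from the talk, and all names and parameter values are made up:

import numpy as np

def hierarchical_threshold(X, s, sigma):
    """Keep the s blocks (rows) of largest l2 norm and, within each
    kept block, only the sigma entries of largest magnitude."""
    Z = np.zeros_like(X)
    # Within every block, keep the sigma largest-magnitude entries.
    idx = np.argsort(np.abs(X), axis=1)[:, -sigma:]
    rows = np.arange(X.shape[0])[:, None]
    Z[rows, idx] = X[rows, idx]
    # Then keep only the s strongest blocks and zero out the rest.
    block_norms = np.linalg.norm(Z, axis=1)
    Z[np.argsort(block_norms)[:-s], :] = 0.0
    return Z

# 6 potential users with 8 channel coefficients each; activity is
# 2-sparse across users and 3-sparse within each active user.
rng = np.random.default_rng(0)
print(hierarchical_threshold(rng.standard_normal((6, 8)), s=2, sigma=3))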

iTWIST’16 Keynote Speakers: Holger Rauhut

At this year’s international Travelling Workshop on Interactions between Sparse models and Technology (iTWIST) we have keynote speakers from several different scientific backgrounds. Our next speaker is a mathematician with a solid track record in compressed sensing and matrix/tensor completion: Holger Rauhut.

Holger Rauhut is Professor for Mathematics and Head of Chair C for Mathematics (Analysis) at RWTH Aachen University. Professor Rauhut came to RWTH Aachen in 2013 from the University of Bonn, where he had been Professor for Mathematics at the Hausdorff Center for Mathematics since 2008.

Professor Rauhut has, among many other things, written the book A Mathematical Introduction to Compressive Sensing together with Simon Foucart and published important research contributions about structured random matrices.

At the coming iTWIST workshop I am looking very much forward to hearing Holger Rauhut speak about low-rank tensor recovery. This is especially interesting because, while the compressed sensing (one-dimensional) and matrix completion (two-dimensional) problems are relatively straightforward to solve, things get much more complicated when you try to generalise them from ordinary vectors and matrices to higher-order tensors. Algorithms for the general higher-dimensional case seem to be much more elusive and I am sure that Holger Rauhut can enlighten us on this topic (joint work with Reinhold Schneider and Zeljka Stojanac):

Low rank tensor recovery

An extension of compressive sensing predicts that matrices of low rank can be recovered from incomplete linear information via efficient algorithms, for instance nuclear norm minimization. Low rank representations become much more efficient when passing from matrices to tensors of higher order, and it is of interest to extend algorithms and theory to the recovery of low rank tensors from incomplete information. Unfortunately, many problems related to matrix decompositions become computationally hard and/or hard to analyze when passing to higher order tensors. This talk presents two approaches to low rank tensor recovery together with (partial) results. The first one extends iterative hard thresholding algorithms to the tensor case and gives a partial recovery result based on a variant of the restricted isometry property. The second one considers relaxations of the tensor nuclear norm (which itself is NP-hard to compute) and corresponding semidefinite optimization problems. These relaxations are based on so-called theta bodies, a concept from convex algebraic geometry. For both approaches numerical experiments are promising but a number of open problems remain.
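As a reminder of the matrix case that the abstract takes as its starting point, here is a minimal nuclear norm minimization example – my own sketch in Python using cvxpy, with arbitrary dimensions and sampling rate:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 20, 2
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank 2

# Observe roughly 60% of the entries at random.
mask = (rng.random((n, n)) < 0.6).astype(float)

# Nuclear norm minimization subject to agreeing with the observed entries.
X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                     [cp.multiply(mask, X) == mask * X_true])
problem.solve()
print(np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))  # ~0

As the abstract points out, it is exactly this kind of convex formulation that becomes problematic for higher-order tensors, where the nuclear norm itself is NP-hard to compute.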

iTWIST’16 Keynote Speakers: Florent Krzakala

Note: You can still register for iTWIST’16 until Monday the 1st of August!

Our next speaker at iTWIST’16 is Florent Krzakala. Much like Phil Schniter – the previous speaker presented here – Florent Krzakala has made important and enlightening contributions to the Approximate Message Passing family of algorithms.

Florent Krzakala is Professor of Physics at École Normale Supérieure in Paris, France. Professor Krzakala came to ENS in 2013 from ESPCI, Paris (Laboratoire de Physico-Chimie Théorique), where he had been Maître de conférences since 2004. Maître de conférences is a particular French academic designation that I am afraid I am going to have to ask my French colleagues to explain to me 😉

Where Phil Schniter seems to have approached the (G)AMP algorithms – which have become quite popular for compressed sensing – from an estimation-algorithms-in-digital-communications background, Florent Krzakala has approached the topic from a statistical physics background, which seems to have brought a lot of interesting new insight to the table. For example, together with Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová, he has shown how AMP algorithms can perform impressively well compared to the classic l1-minimization approach by using a special kind of so-called “seeded” measurement matrices in “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices“.
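To give an idea of what such a measurement matrix can look like: a seeded matrix is, roughly speaking, a block-banded Gaussian matrix in which a heavily sampled first block “seeds” the reconstruction, which then propagates through the coupling between neighbouring blocks. A simplified NumPy sketch of the construction (my own loose reading of the paper; all parameter values here are arbitrary):

import numpy as np

def seeded_matrix(L=8, n_block=50, alpha_seed=0.8, alpha=0.3, W=1, J=0.2,
                  rng=None):
    """Block-banded "seeded" Gaussian matrix: the first of L signal blocks
    is sampled at rate alpha_seed, the rest at rate alpha, and blocks
    within the coupling window W interact with reduced variance J."""
    rng = rng or np.random.default_rng()
    m_blocks = [int(alpha_seed * n_block)] + [int(alpha * n_block)] * (L - 1)
    A = np.zeros((sum(m_blocks), L * n_block))
    row = 0
    for r, m in enumerate(m_blocks):
        for c in range(L):
            if abs(r - c) <= W:  # banded coupling structure
                var = 1.0 if r == c else J
                A[row:row + m, c * n_block:(c + 1) * n_block] = (
                    np.sqrt(var / n_block) * rng.standard_normal((m, n_block)))
        row += m
    return A

print(seeded_matrix().shape)  # (145, 400): overall undersampling ~0.36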

At this year’s iTWIST workshop in a few weeks, Professor Krzakala is going to speak about matrix factorisation problems and the approximate message passing framework. Specifically, we are going to hear about:

Approximate Message Passing and Low Rank Matrix Factorization Problems

A large number of interesting problems in machine learning and statistics can be expressed as low rank structured matrix factorization problems, such as sparse PCA, planted clique, sub-matrix localization, clustering of mixtures of Gaussians or community detection in a graph.

I will discuss how recent ideas from statistical physics and information theory have led, on the one hand, to new mathematical insights into these problems, leading to a characterization of the optimal possible performance, and on the other hand to the development of new powerful algorithms, called approximate message passing, which turn out to be optimal for a large set of problems and parameters.

iTWIST’16 Keynote Speakers: Phil Schniter

With only one week left to register for iTWIST’16, I am going to walk you through the rest of our keynote speakers this week.

Our next speaker is Phil Schniter. Phil Schniter is Professor in the Department of Electrical and Computer Engineering at Ohio State University, USA.

Professor Schniter joined the Department of Electrical and Computer Engineering at OSU after graduating with a PhD in Electrical Engineering from Cornell University in 2000. Phil Schniter also has industrial experience from Tektronix from 1993 to 1996 and has been a visiting professor at Eurecom (Sophia Antipolis, France) from October 2008 through February 2009, and at Supelec (Gif sur Yvette, France) from March 2009 through August 2009.

Professor Schniter has published an impressive selection of research papers, previously especially within digital communications. In recent years he has been very active in the research around generalised approximate message passing (GAMP). GAMP is an estimation framework that has become popular in compressed sensing / sparse estimation. The reasons for the success of this algorithm (family), as I see it, are that it estimates under-sampled sparse vectors with accuracy comparable to the classic l1-minimisation approach in compressed sensing, at favourable computational complexity. At the same time, the framework is easily adaptable to many kinds of signal distributions and to other types of structure than plain sparsity. If you are dealing with a signal that is not distributed according to the Laplace distribution that the l1-minimisation approach implicitly assumes, you can adapt GAMP to this other (known) distribution and achieve better reconstruction than l1-minimisation. Even if you do not know the distribution, GAMP can be modified to estimate it automatically and quite efficiently. This and many other details are among Professor Schniter’s contributions to the research on GAMP.
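To make the structure of these algorithms concrete, here is a bare-bones AMP iteration for the simplest sparse setting, with a soft-thresholding denoiser – emphatically my own toy sketch, not Professor Schniter’s GAMP, and the threshold multiplier theta is hand-picked:

import numpy as np

def soft(v, t):
    # Soft-thresholding denoiser; swapping this step for another
    # (prior-dependent) denoiser is what makes the framework flexible.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(y, A, iterations=30, theta=1.5):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iterations):
        tau = theta * np.linalg.norm(z) / np.sqrt(m)  # effective noise level
        x_new = soft(x + A.T @ z, tau)
        # The Onsager correction term below is what distinguishes AMP from
        # plain iterative soft thresholding.
        z = y - A @ x_new + z * (np.count_nonzero(x_new) / m)
        x = x_new
    return x

# Recover a 20-sparse vector from 200 of 400 samples.
rng = np.random.default_rng(42)
m, n, k = 200, 400, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(amp(A @ x0, A) - x0) / np.linalg.norm(x0))  # ~0

Note how each iteration costs essentially two matrix multiplies plus a cheap scalar denoising step – the favourable computational profile mentioned above.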

At this year’s iTWIST, Phil Schniter will be describing recent work on robust variants of GAMP. In detail, the abstract reads (and this is joint work with Alyson Fletcher and Sundeep Rangan):

Robust approximate message passing

Approximate message passing (AMP) has recently become popular for inference in linear and generalized linear models. AMP can be viewed as an approximation of loopy belief propagation that requires only two matrix multiplies and a (typically simple) denoising step per iteration, and relatively few iterations, making it computationally efficient. When the measurement matrix “A” is large and well modeled as i.i.d. sub-Gaussian, AMP’s behavior is closely predicted by a state evolution. Furthermore, when this state evolution has unique fixed points, the AMP estimates are Bayes optimal. For general measurement matrices, however, AMP may produce highly suboptimal estimates or not even converge. Thus, there has been great interest in making AMP robust to the choice of measurement matrix.

In this talk, we describe some recent progress on robust AMP. In particular, we describe a method based on an approximation of non-loopy expectation propagation that, like AMP, requires only two matrix multiplies and a simple denoising step per iteration. But unlike AMP, it leverages knowledge of the measurement matrix SVD to yield excellent performance over a larger class of measurement matrices. In particular, when the Gramian A’A is large and unitarily invariant, its behavior is closely predicted by a state evolution whose fixed points match the replica prediction. Moreover, convergence has been proven in certain cases, with empirical results showing robust convergence even with severely ill-conditioned matrices. Like AMP, this robust AMP can be successfully used with non-scalar denoisers to accomplish sophisticated inference tasks, such as simultaneously learning and exploiting i.i.d. signal priors, or leveraging black-box denoisers such as BM3D. We look forward to describing these preliminary results, as well as ongoing research, on robust AMP.

iTWIST’16 Keynote Speakers: Karin Schnass

Last week we heard about the first of our keynote speakers at this year’s iTWIST workshop in August – Lieven Vandenberghe.

Next up on my list of speakers is Karin Schnass. Karin Schnass is an expert on dictionary learning, heading an FWF-START project on dictionary learning in the Applied Mathematics group in the Department of Mathematics at the University of Innsbruck.

Karin Schnass joined the University of Innsbruck in December 2014 on an Erwin Schrödinger Research Fellowship, returning from a research position at the University of Sassari, Italy, which she held from 2012 to 2014. She originally graduated from the University of Vienna, Austria, with a master’s degree in mathematics with distinction (“Gabor Multipliers – A Self-Contained Survey”) and received her PhD in computer, communication and information sciences from EPFL, Switzerland, in 2009 (“Sparsity & Dictionaries – Algorithms & Design”). Karin Schnass has, among other things, introduced the iterative thresholding and K-means (ITKM) algorithms for dictionary learning and published the first theoretical paper on dictionary learning (on arXiv) with Rémi Gribonval.

At our workshop this August, I am looking forward to hearing Karin Schnass talk about Sparsity, Co-sparsity and Learning. In compressed sensing, the so-called synthesis model has been the prevailing model since the beginning. First, we have the measurements:

y = A x

From the measurements, we can reconstruct the sparse vector x by solving this convex optimisation problem:

minimize |x|_1 subject to |y - A x|_2 < Δ

If the vector x we observe is not sparse, we can still do this if we can find a sparse representation α of x in some dictionary D:

x = D α

where we take our measurements of x using some measurement matrix M:

y = M x = M D α = A α

and we reconstruct the sparse vector α as follows:

minimize |α|_1 subject to |y - M D α|_2 < Δ

The above is called the synthesis model because it works by using some sparse vector α to synthesize the vector x that we observe. There is an alternative to this model, called the analysis model, where we analyse an observed vector x to find some sparse representation β of it:

β = D' x

Here D’ is also a dictionary, but it is not the same dictionary as in the synthesis case. We can now reconstruct the vector x from the measurements y as follows:

minimize |D' x|_1 subject to |y - M x|_2 < Δ

Now if D is a (square) orthonormal matrix such as an IDFT matrix, we can take D’ to be a DFT matrix and the two are simply each other’s inverses. In this case, the synthesis and analysis reconstruction problems above are equivalent. The interesting case is when the synthesis dictionary D is a so-called over-complete dictionary – a fat matrix. The analysis counterpart of this is a tall analysis dictionary D’, which behaves differently from the synthesis dictionary.
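To make the two formulations concrete, here is a small sketch in Python with cvxpy – my own illustration, using an orthonormal DCT matrix as a stand-in for D, so the two problems coincide here; with a fat D (and correspondingly tall D’) they part ways:

import numpy as np
import cvxpy as cp
from scipy.fft import dct

rng = np.random.default_rng(3)
n, m, k, eps = 64, 32, 5, 1e-3

D = dct(np.eye(n), norm="ortho", axis=0)   # orthonormal dictionary
alpha0 = np.zeros(n)
alpha0[rng.choice(n, size=k, replace=False)] = 1.0
x0 = D @ alpha0                            # x synthesized from sparse alpha
M = rng.standard_normal((m, n)) / np.sqrt(m)
y = M @ x0

# Synthesis reconstruction: find sparse alpha, then x = D alpha.
a = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(a)),
           [cp.norm(y - M @ D @ a, 2) <= eps]).solve()

# Analysis reconstruction: find x whose analysis coefficients D' x are sparse.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(D.T @ x)),
           [cp.norm(y - M @ x, 2) <= eps]).solve()

print(np.linalg.norm(D @ a.value - x.value))  # ~0 since D is orthonormal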

Karin will give an overview of the synthesis and the analysis models and talk about how to learn dictionaries that are useful in either case. Specifically, she plans to tell us about (joint work with Michael Sandbichler):

While (synthesis) sparsity is by now a well-studied low complexity model for signal processing, the dual concept of (analysis) co-sparsity is much less investigated but equally promising. We will first give a quick overview of both models and then turn to optimisation formulations for learning sparsifying dictionaries as well as co-sparsifying (analysis) operators. Finally we will discuss the resulting learning algorithms and ongoing research directions.

Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

Our new software metapaper Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images has just been published in the Journal of Open Research Software. The paper describes our new software package Magni:

Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate researchers’ own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research.

The software itself is on GitHub as well as on Aalborg University’s repository: DOI 10.5278/VBN/MISC/Magni

Go ahead and check it out if you are into compressed sensing or atomic force microscopy. Pull requests welcome if you have ideas.

Episciences.org progress

Lately, I have been following the Episciences project, as you may have noticed in my previous post. It seems there has been some more progress recently: I have just noticed that another “epi-committee” has been added to the site (I understand these epi-committees as a sort of editorial board responsible for a given subject area). In addition to the existing math committee, the new committee is Episciences IAM (Informatics and Applied Mathematics). This sounds a bit closer to my area. I wonder if they consider signal processing to be in their area?
The page so far says that the committee is being formed and as such does not list any members yet. It will be interesting to see what this turns into.
