Blind separation of a large number of sparse sources

 

Authors: C. Kervazo, J. Bobin, C. Chenot
Journal: Signal Processing
Year: 2018
Download: Paper


Abstract

Blind Source Separation (BSS) is one of the major tools used to analyze multispectral data, with applications ranging from astronomical to biomedical signal processing. Nevertheless, most BSS methods fail when the number of sources becomes large, typically exceeding a few tens. Since the ability to estimate a large number of sources is paramount in a very wide range of applications, we introduce a new algorithm, coined block-Generalized Morphological Component Analysis (bGMCA), specifically designed to tackle sparse BSS problems when a large number of sources needs to be estimated. Since sparse BSS is by nature a challenging nonconvex inverse problem, the algorithmic strategy plays a central role, especially when many sources have to be estimated. For that purpose, the bGMCA algorithm builds upon block-coordinate descent with intermediate-size blocks. Numerical experiments show the robustness of the bGMCA algorithm when the sources are numerous. Comparisons have been carried out on realistic simulations of spectroscopic data.
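As a rough illustration of the block-coordinate strategy the abstract describes, the toy sketch below alternates least-squares updates of a randomly chosen block of sources (followed by an l1 soft-thresholding step) with updates of the corresponding mixing-matrix columns. All function and parameter names are illustrative; this is a sketch of the general idea, not the published bGMCA implementation.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def block_sparse_bss(X, n_sources, block_size, lam=0.05, n_iter=300, seed=0):
    """Toy block-coordinate descent for sparse BSS, modeling X ~ A @ S.

    Each iteration refines a random block of sources with a least-squares
    step followed by soft-thresholding, then updates the corresponding
    mixing columns and renormalizes them.
    """
    rng = np.random.default_rng(seed)
    m, t = X.shape
    A = rng.standard_normal((m, n_sources))
    A /= np.linalg.norm(A, axis=0)
    S = np.zeros((n_sources, t))
    for _ in range(n_iter):
        idx = rng.choice(n_sources, size=block_size, replace=False)
        others = np.setdiff1d(np.arange(n_sources), idx)
        # residual once the contribution of the other sources is removed
        R = X - A[:, others] @ S[others]
        # least-squares update of the block sources, then the sparsity prox
        Sb = np.linalg.lstsq(A[:, idx], R, rcond=None)[0]
        S[idx] = soft_threshold(Sb, lam)
        # least-squares update of the block mixing columns (active sources only)
        Sb = S[idx]
        active = np.linalg.norm(Sb, axis=1) > 0
        if active.any():
            Ab = np.linalg.lstsq(Sb[active].T, R.T, rcond=None)[0].T
            Ab /= np.maximum(np.linalg.norm(Ab, axis=0), 1e-12)
            A[:, idx[active]] = Ab
    return A, S
```

The block size interpolates between the two classical extremes: `block_size=1` recovers a fully sequential coordinate update, while `block_size=n_sources` is a full alternating scheme.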

Distinguishing standard and modified gravity cosmologies with machine learning

 

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten, C. Giocoli, M. Meneghetti, M. Baldi
Journal: Submitted to PRL
Year: 2018
Download: ADS | arXiv


Abstract

We present a convolutional neural network to identify distinct cosmological scenarios based on the weak-lensing maps they produce. Modified gravity models with massive neutrinos can mimic the standard concordance model in terms of Gaussian weak-lensing observables, limiting a deeper understanding of what causes cosmic acceleration. We demonstrate that a network trained on simulated clean convergence maps, condensed into a novel representation, can discriminate between such degenerate models with 83%-100% accuracy. Our method outperforms conventional statistics by up to 40% and is more robust to noise.

On the dissection of degenerate cosmologies with machine learning

 

Authors: J. Merten, C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: Submitted to MNRAS
Year: 2018
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionally reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data, we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers: one based on a nearest-neighbour search and one based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76%, with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to better understand why they perform so well.
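The nearest-neighbour classifier mentioned above can be sketched in a few lines: label a dimensionally reduced feature vector by majority vote among its k closest training vectors. This is a generic illustration with made-up names, not the paper's actual pipeline.

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Assign the label held by the majority of the k training feature
    vectors closest (in Euclidean distance) to the query vector."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

In the paper's setting, `train_feats` would hold the joint descriptor vectors (power spectrum, peak counts, Minkowski functionals, ...) of simulated maps and `train_labels` the underlying cosmological model of each map.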

Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics

 

Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi
Journal: A&A
Year: 2018
Download: ADS | arXiv


Abstract

General relativity (GR) has been well tested up to solar-system scales, but it is much less certain that standard gravity remains an accurate description on the largest, that is, cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including by the recent direct gravitational-wave detections. Degeneracies between the standard model (ΛCDM) and modified gravity (MG) models, as well as among different MG parameters, must be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics of the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We have tested our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We have explored a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic ΛCDM in terms of the 3D power spectrum of matter density fluctuations. We find that, depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and ΛCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.
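Peak counts of the kind referred to above can be illustrated simply: count the local maxima of a convergence (or aperture-mass) map that exceed a set of thresholds. A minimal sketch; real analyses smooth or filter the map first and bin peaks by signal-to-noise.

```python
import numpy as np

def peak_counts(kappa, thresholds):
    """Count local maxima of a 2D map above each threshold.

    A pixel (borders excluded) is a peak when it strictly exceeds all
    eight of its neighbours.
    """
    core = kappa[1:-1, 1:-1]
    neighbours = [kappa[1 + di:kappa.shape[0] - 1 + di,
                        1 + dj:kappa.shape[1] - 1 + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)]
    is_peak = np.all([core > n for n in neighbours], axis=0)
    return [int(np.sum(is_peak & (core > t))) for t in thresholds]
```

The resulting histogram of peak counts versus threshold is the non-Gaussian summary statistic compared against third- and fourth-order moments in the abstract.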

NMF with Sparse Regularizations in Transformed Domains

 

Authors: J. Rapin, J. Bobin, A. Larue, J.-L. Starck
Journal: SIAM
Year: 2014
Download: ADS | arXiv


Abstract

Non-negative blind source separation (BSS) has raised interest in various fields of research, as evidenced by the extensive literature on non-negative matrix factorization (NMF). In this context, it is fundamental that the sources to be estimated present some diversity in order to be efficiently retrieved. Sparsity is known to enhance such contrast between the sources while producing approaches that are very robust, especially to noise. In this paper we introduce a new algorithm to tackle the blind separation of non-negative sparse sources from noisy measurements. We first show that sparsity and non-negativity constraints have to be carefully applied to the sought-after solution: improperly constrained solutions are unlikely to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA (non-negative Generalized Morphological Component Analysis), makes use of proximal calculus techniques to provide properly constrained solutions. The performance of nGMCA compared to other state-of-the-art algorithms is demonstrated by numerical experiments encompassing a wide variety of settings, with negligible parameter tuning. In particular, nGMCA is shown to be robust to noise and performs well on synthetic mixtures of real NMR spectra.
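The "properly constrained" updates that nGMCA obtains via proximal calculus can be illustrated with the proximal operator of the l1 penalty combined with the non-negativity indicator, which reduces to a non-negative soft-thresholding. This one-liner illustrates the joint treatment of both constraints, not the full algorithm.

```python
import numpy as np

def prox_sparse_nonneg(x, lam):
    """Proximal operator of lam * ||.||_1 plus the non-negativity
    indicator function: negative entries are zeroed and the remaining
    ones are shrunk by lam in a single, jointly correct step."""
    return np.maximum(x - lam, 0.0)
```

Applying the two constraints jointly in one proximal step, rather than alternating ad-hoc projections, is what yields the stable, "properly constrained" solutions the abstract refers to.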

Low-dimensional signal-strength fingerprint-based positioning in wireless LANs

 

Authors: D. Milioris, G. Tzagkarakis, A. Papakonstantinou, M. Papadopouli, P. Tsakalides
Journal: Ad Hoc Networks
Year: 2011
Download: Science Direct


Abstract

Accurate location awareness is of paramount importance in most ubiquitous and pervasive computing applications. Numerous solutions for indoor localization based on IEEE 802.11, Bluetooth, ultrasonic and vision technologies have been proposed. This paper introduces a suite of novel indoor positioning techniques utilizing signal-strength (SS) fingerprints collected from access points (APs). Our first approach employs a statistical representation of the received SS measurements by means of a multivariate Gaussian model: the indoor environment is discretized into a grid, and a probability distribution signature is computed at each cell. At run time, the system compares the signature at the unknown position with the signature of each cell using the Kullback–Leibler Divergence (KLD) between their corresponding probability densities. Our second approach applies compressive sensing (CS) to perform sparsity-based accurate indoor localization, while significantly reducing the amount of information transmitted from a wireless device, possessing limited power, storage, and processing capabilities, to a central server. The performance evaluation, conducted at the premises of a research laboratory and an aquarium under real-life conditions, reveals that the proposed statistical fingerprinting and CS-based localization techniques achieve substantial localization accuracy.
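The KLD comparison in the first approach has a closed form when both signatures are multivariate Gaussians. The sketch below evaluates that closed form, assuming each cell signature is stored as a mean vector and covariance matrix of AP signal strengths (names are illustrative).

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between two multivariate
    Gaussians N0 = N(mu0, cov0) and N1 = N(mu1, cov1)."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```

At run time the position estimate would be the grid cell whose stored signature minimizes this divergence against the Gaussian fitted to the fresh SS measurements.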

Sparse and Non-Negative BSS for Noisy Data

 

Authors: J. Rapin, J. Bobin, A. Larue, J.-L. Starck
Journal: IEEE
Year: 2013
Download: ADS | arXiv


Abstract

Non-negative blind source separation (BSS) has raised interest in various fields of research, as evidenced by the extensive literature on non-negative matrix factorization (NMF). In this context, it is fundamental that the sources to be estimated present some diversity in order to be efficiently retrieved. Sparsity is known to enhance such contrast between the sources while producing approaches that are very robust, especially to noise. In this paper we introduce a new algorithm to tackle the blind separation of non-negative sparse sources from noisy measurements. We first show that sparsity and non-negativity constraints have to be carefully applied to the sought-after solution: improperly constrained solutions are unlikely to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA (non-negative Generalized Morphological Component Analysis), makes use of proximal calculus techniques to provide properly constrained solutions. The performance of nGMCA compared to other state-of-the-art algorithms is demonstrated by numerical experiments encompassing a wide variety of settings, with negligible parameter tuning. In particular, nGMCA is shown to be robust to noise and performs well on synthetic mixtures of real NMR spectra.

The Scale of the Problem: Recovering Images of Reionization with GMCA

 

Authors: E. Chapman, F. B. Abdalla, J. Bobin, J.-L. Starck
Journal: MNRAS
Year: 2013
Download: ADS | arXiv


Abstract

The accurate and precise removal of 21-cm foregrounds from Epoch of Reionization redshifted 21-cm emission data is essential if we are to gain insight into an unexplored cosmological era. We apply a non-parametric technique, Generalized Morphological Component Analysis (GMCA), to simulated LOFAR-EoR data and show that it can clean the foregrounds with high accuracy, recovering the 21-cm 1D, 2D and 3D power spectra across an impressive range of frequencies and scales. We show that GMCA preserves the 21-cm phase information, especially when the smallest spatial scales are discarded. While it has been shown that LOFAR-EoR image recovery is theoretically possible using image smoothing, we add that wavelet decomposition is an efficient way of recovering 21-cm signal maps to the same or greater order of accuracy with more flexibility. By comparing the GMCA output residual maps (equal to the noise, 21-cm signal and any foreground fitting errors) with the 21-cm maps at one frequency and discarding the smallest wavelet scale information, we find a correlation coefficient of 0.689, compared to 0.588 for the equivalently smoothed image. Considering only the central 50% of the maps, these coefficients improve to 0.905 and 0.605 respectively, and we conclude that wavelet decomposition is a significantly more powerful method for denoising reconstructed 21-cm maps than smoothing.
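The idea of discarding the smallest wavelet scale before correlating maps can be illustrated schematically: keep only a coarse approximation of each map, then compute the Pearson correlation coefficient between them. The Haar-like block average below is a simple stand-in for the wavelet decomposition actually used, purely for illustration.

```python
import numpy as np

def drop_finest_scale(img):
    """Discard the finest scale of a map by keeping the coarse
    approximation of a one-level Haar-like transform: 2x2 block means,
    upsampled back to (even-cropped) full resolution."""
    h, w = img.shape
    coarse = img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.kron(coarse, np.ones((2, 2)))

def correlation(a, b):
    """Pearson correlation coefficient between two maps."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```

In the paper's comparison, `correlation(drop_finest_scale(residual), drop_finest_scale(signal))` would play the role of the coarse-scale correlation coefficient quoted for the wavelet-denoised maps.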

Active Range Imaging via Random Gating

 

Authors: G. Tsagkatakis, A. Woiselle, G. Tzagkarakis, M. Bousquet, J.-L. Starck, P. Tsakalides
Journal: SPIE
Year: 2012
Download: SPIE


Abstract

Range Imaging (RI) has recently sparked enthusiastic interest due to the numerous applications that can benefit from the presence of 3D data. One of the most successful techniques for RI employs Time-of-Flight (ToF) cameras, which emit and subsequently record laser pulses in order to estimate the distance between the camera and an object. A limitation of this class of RI is the large number of frames that must be captured in order to generate high-resolution depth maps. In this work, we propose a novel approach for ToF-based RI that utilizes the recently proposed framework of Compressed Sensing to dramatically reduce the number of necessary frames. Our technique employs a random gating function along with state-of-the-art minimization techniques in order to estimate the location of a returning laser pulse and infer the distance. To validate the theoretical motivation, software simulations were carried out. Our simulated results show that reconstruction of a depth map is possible from as few as 10% of the frames that traditional ToF cameras require, with minimal reconstruction error, while 20% sampling rates can achieve almost perfect reconstruction in low-resolution regimes. Our experimental results also show that the proposed method is robust to various types of noise and applicable to realistic signal models.
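The core compressed-sensing idea can be sketched in miniature: with a 1-sparse return (a single pulse at an unknown depth bin) and binary random gates, the noiseless measurements are nonzero exactly in the frames whose gate was open at the pulse's bin, so the bin can be identified by matching gating patterns across far fewer frames than bins. This is an illustration of the principle under a noiseless, single-pulse assumption, not the paper's minimization method.

```python
import numpy as np

def recover_pulse(G, y):
    """Recover the depth bin of a single returning pulse from gated
    measurements y = G @ x, where x is 1-sparse with positive amplitude
    and G is a binary gating matrix (one row per captured frame).

    Noiselessly, y is nonzero exactly in the frames whose gate was open
    at the pulse's bin, so we score each bin by how many frames its
    gating pattern explains and return the best match.
    """
    support = y > 0
    matches = (G.astype(bool) == support[:, None]).sum(axis=0)
    return int(np.argmax(matches))
```

With random gates, a modest number of frames suffices to make the true bin's pattern unambiguous, which is the sense in which the sampling rate can drop well below one frame per depth bin.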