Faster and better sparse blind source separation through mini-batch optimization

Sparse Blind Source Separation (sBSS) plays a key role in scientific domains as different as biomedical imaging, remote sensing or astrophysics, which require the development of increasingly fast and scalable BSS methods without sacrificing separation performance. To that end, a new distributed sparse BSS algorithm is introduced, based on a mini-batch extension of the Generalized Morphological Component Analysis algorithm (GMCA). Specifically, it combines a robust projected alternating least-squares method with mini-batch optimization. The originality further lies in the use of a manifold-based aggregation of asynchronously estimated mixing matrices. Numerical experiments are carried out on realistic spectroscopic spectra, and highlight the ability of the proposed distributed GMCA (dGMCA) to provide very good separation results even when very small mini-batches are used. Quite unexpectedly, it can further outperform the (non-distributed) state-of-the-art methods for highly sparse sources.
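
A minimal numerical sketch of the idea, assuming a linear mixing X = AS: each mini-batch of samples runs a projected alternating least-squares update (sparse source estimation by soft thresholding, followed by a least-squares mixing update projected onto the unit sphere), and the per-batch mixing matrices are then aggregated. The fixed threshold, the naive Euclidean average standing in for the manifold-based aggregation, and the function names below are illustrative simplifications, not the paper's exact algorithm.

    import numpy as np

    def soft_threshold(x, lam):
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def gmca_batch(Xb, A, lam):
        # One projected alternating least-squares step on a mini-batch.
        S = soft_threshold(np.linalg.pinv(A) @ Xb, lam)             # sparse source update
        A_new = Xb @ np.linalg.pinv(S)                              # least-squares mixing update
        A_new /= np.maximum(np.linalg.norm(A_new, axis=0), 1e-12)   # project columns onto the unit sphere
        return A_new

    def dgmca(X, n_sources, batch_size=64, n_iter=50, lam=0.1, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((X.shape[0], n_sources))
        A /= np.linalg.norm(A, axis=0)
        for _ in range(n_iter):
            idx = rng.permutation(X.shape[1])
            batches = [idx[i:i + batch_size] for i in range(0, X.shape[1], batch_size)]
            A_batches = [gmca_batch(X[:, b], A, lam) for b in batches]  # these updates could run asynchronously
            A = np.mean(A_batches, axis=0)       # naive average, placeholder for the manifold-based aggregation
            A /= np.linalg.norm(A, axis=0)
        S = soft_threshold(np.linalg.pinv(A) @ X, lam)
        return A, S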

Reference: Christophe Kervazo, Tobias Liaudat and Jérôme Bobin. “Faster and better sparse blind source separation through mini-batch optimization”, Digital Signal Processing, Elsevier, 2020.

DSP Elsevier, HAL.

Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects that are affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim at measuring the shape of galaxies, and PSF effects represent an important source of systematic errors which must be handled appropriately. This demands a high accuracy in the modelling as well as the estimation of the PSF at galaxy positions.

Aims. The goal of this paper is non-parametric PSF estimation: estimating the PSF at galaxy positions, starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, our model must first precisely capture the variations of the PSF field over the Field of View (FoV), and then recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously creates a PSF field model over the whole of the instrument's focal plane. This makes it possible to capture global as well as local PSF features through the use of two complementary models that enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can lead to difficulties in capturing global ellipticity patterns.

Results. We first test our method on a realistic simulated dataset, comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA), and our proposed method outperforms both of them. Then we contrast our approach with PSFEx on real data from the Canada-France Imaging Survey (CFIS), which uses the Canada-France-Hawaii Telescope (CFHT). We show that our PSF model is less noisy and achieves a ~22% gain in pixel Root Mean Squared Error (RMSE) with respect to PSFEx.

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.
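
To illustrate the structure of such a two-component model, here is a hedged sketch: the PSF at a focal-plane position is the sum of a global component shared by all CCDs and a local, per-CCD component, each being a linear combination of eigen-PSFs weighted by smooth functions of position. The toy polynomial spatial basis and all variable names are illustrative; the actual MCCD model uses more elaborate spatial constraints (and is available in the shared code).

    import numpy as np

    def mccd_like_psf(pos, ccd_id, S_glob, coeff_glob, S_loc, coeff_loc):
        # S_glob: (n_pix, n_glob) global eigen-PSFs, coeff_glob: (n_glob, 6) spatial coefficients
        # S_loc[ccd_id]: (n_pix, n_loc) per-CCD eigen-PSFs, coeff_loc[ccd_id]: (n_loc, 6)
        x, y = pos                                            # global focal-plane coordinates
        basis = np.array([1.0, x, y, x * y, x**2, y**2])      # toy polynomial spatial basis
        w_glob = coeff_glob @ basis                           # global weights at this position
        w_loc = coeff_loc[ccd_id] @ basis                     # local (per-CCD) weights at this position
        psf = S_glob @ w_glob + S_loc[ccd_id] @ w_loc         # global + local contributions
        return psf / psf.sum()                                # flux-normalised PSF stamp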

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn. “Multi-CCD Point Spread Function Modelling”, submitted 2020.

arXiv, code.

Probabilistic Mapping of Dark Matter by Neural Score Matching


The Dark Matter present in the Large-Scale Structure of the Universe is invisible, but its presence can be inferred through the small gravitational lensing effect it has on the images of faraway galaxies. By measuring this lensing effect on a large number of galaxies, it is possible to reconstruct maps of the Dark Matter distribution on the sky. This, however, represents an extremely challenging inverse problem due to missing data and noise-dominated measurements. In this work, we present a novel methodology for addressing such inverse problems by combining elements of Bayesian statistics, analytic physical theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach makes it possible to: (1) make full use of analytic cosmological theory to constrain the two-point statistics of the solution, (2) learn from cosmological simulations any differences between this analytic prior and full simulations, and (3) obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We present an application of this methodology on the first deep-learning-assisted Dark Matter map reconstruction of the Hubble Space Telescope COSMOS field.
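
A minimal sketch of the sampling idea, assuming a linear shear operator F and Gaussian noise: the posterior score for the convergence map combines the likelihood gradient, an analytic Gaussian prior set by the theoretical two-point statistics, and a simulation-trained residual score, and an annealed Langevin-type sampler draws posterior samples from it. The sampler below, the inverse prior covariance Cinv_prior and the callable learned_score are illustrative placeholders, not the paper's exact implementation.

    import numpy as np

    def annealed_langevin_sampler(gamma_obs, F, sigma_n, Cinv_prior, learned_score,
                                  n_steps=200, step=1e-3, temps=(1.0, 0.5, 0.1), seed=0):
        rng = np.random.default_rng(seed)
        kappa = rng.standard_normal(F.shape[1])                # random initial convergence map
        for T in temps:                                        # anneal from high to low temperature
            for _ in range(n_steps):
                grad_lik = F.T @ (gamma_obs - F @ kappa) / sigma_n**2      # Gaussian likelihood score
                grad_prior = -Cinv_prior @ kappa + learned_score(kappa)    # analytic + learned prior score
                score = grad_lik + grad_prior
                kappa = kappa + step * score + np.sqrt(2 * step * T) * rng.standard_normal(kappa.shape)
        return kappa                                           # one approximate posterior sample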

Reference: Benjamin Remy, François Lanusse, Zaccharie Ramzi, Jia Liu, Niall Jeffrey and Jean-Luc Starck. “Probabilistic Mapping of Dark Matter by Neural Score Matching”, Machine Learning and the Physical Sciences Workshop, NeurIPS 2020.

arXiv, code.

XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge

We present a modular cross-domain neural network, the XPDNet, and its application to the MRI reconstruction task. This approach consists of unrolling the PDHG algorithm and learning the acceleration scheme between steps. We also adopt state-of-the-art Deep Learning techniques specific to MRI reconstruction. At the time of writing, this approach is the best performer in PSNR on the fastMRI leaderboards for both knee and brain data at acceleration factor 4.
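
The unrolling can be sketched as follows, assuming trained callables for the learned primal (image-domain) and dual (k-space-domain) modules and for the Fourier operators; this is an illustration of the cross-domain structure, not the exact XPDNet architecture.

    import numpy as np

    def unrolled_cross_domain_recon(kspace, mask, fft2, ifft2, primal_nets, dual_nets):
        image = ifft2(kspace)                       # zero-filled initial reconstruction
        dual = np.zeros_like(kspace)                # dual (k-space) buffer
        for primal_net, dual_net in zip(primal_nets, dual_nets):
            # dual update: compare the current estimate with the measured k-space on the sampling mask
            dual = dual_net(dual, mask * fft2(image), kspace)
            # primal update: bring the data-consistency correction back to the image domain
            image = primal_net(image, ifft2(mask * dual))
        return image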

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge”.

This network was used to submit reconstructions to the 2020 fastMRI Brain reconstruction challenge. Results are to be announced on December 6th 2020.

Denoising Score-Matching for Uncertainty Quantification in Inverse Problems

Deep neural networks have proven extremely efficient at solving a wide range of inverse problems, but most often the uncertainty on the solution they provide is hard to quantify. In this work, we propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover. We adopt recent denoising score matching techniques to learn this prior from data, and subsequently use it as part of an annealed Hamiltonian Monte Carlo scheme to sample the full posterior of image inverse problems. We apply this framework to Magnetic Resonance Imaging (MRI) reconstruction and illustrate how this approach not only yields high-quality reconstructions but can also be used to assess the uncertainty on particular features of a reconstructed image.
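
For concreteness, here is a hedged sketch of the denoising score matching objective used to learn the prior score: the network is asked, at a noisy point x + sigma * eps, to output the score of the Gaussian-smoothed data density, whose regression target is -eps / sigma. The callable score_net and the weighting are illustrative; the optimizer and backpropagation machinery are omitted.

    import numpy as np

    def denoising_score_matching_loss(score_net, x_batch, sigmas, rng):
        # x_batch: (batch, dim) clean training signals; sigmas: set of noise levels
        sigma = rng.choice(sigmas, size=(x_batch.shape[0], 1))   # one noise level per sample
        eps = rng.standard_normal(x_batch.shape)
        x_noisy = x_batch + sigma * eps
        target = -eps / sigma                                    # score of the smoothed density at x_noisy
        pred = score_net(x_noisy, sigma)                         # assumed (trainable) score network
        per_sample = np.sum((pred - target) ** 2, axis=1)
        return np.mean(sigma[:, 0] ** 2 * per_sample)            # sigma^2 weighting balances noise levels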

Reference: Z. Ramzi, Benjamin Remy, François Lanusse, J.-L. Starck and P. Ciuciu. “Denoising Score-Matching for Uncertainty Quantification in Inverse Problems”, Deep Learning and Inverse Problems Workshop, NeurIPS, 2020.

Wavelets in the Deep Learning Era

Sparsity-based methods, such as wavelets, were the state of the art for inverse problems for more than 20 years before being overtaken by neural networks.
In particular, U-nets have proven to be extremely effective.
Their main ingredients are highly non-linear processing, massive learning made possible by advances in optimization algorithms and GPU computing power, and the use of large data sets for training.
While the many stages of non-linearity are intrinsic to deep learning, learning from training data could also be exploited by sparsity-based approaches.
The aim of our study is to push the limits of sparsity with learning and to compare the results with U-nets.
We present a new network architecture, which preserves the properties of sparsity-based methods, such as exact reconstruction and good generalization, while leveraging the power of neural networks for learning and fast computation.
We evaluate the model on image denoising tasks and show that it is competitive with learning-based models.
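
The exact-reconstruction property can be illustrated with a small sketch of such a decomposition-threshold-synthesis denoiser, here built from a cascade of smoothing filters (which could be learned) and soft thresholding of the detail scales; this is an illustration of the principle, not the paper's exact architecture, and the filters and thresholds are assumed given.

    import numpy as np
    from scipy.signal import convolve2d

    def sparse_exact_recon_denoise(image, filters, thresholds):
        # Decompose: each filter removes a detail scale from the running coarse image.
        coarse = image.copy()
        details = []
        for h in filters:
            smooth = convolve2d(coarse, h, mode="same", boundary="symm")
            details.append(coarse - smooth)        # detail scale = what the filter removes
            coarse = smooth
        # Reconstruct: coarse + thresholded details (exact reconstruction when thresholds are zero).
        denoised = coarse
        for d, t in zip(details, thresholds):
            denoised = denoised + np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft thresholding
        return denoised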

Reference: Z. Ramzi, J.-L. Starck and P. Ciuciu. “Wavelets in the Deep Learning Era”, EUSIPCO, 2020.

Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset

The MRI reconstruction field long lacked a proper data set allowing for reproducible results on real raw (i.e. complex-valued) data, especially for deep learning (DL) methods, since such approaches require much more data than classical Compressed Sensing (CS) reconstruction. This gap is now filled by the fastMRI data set, and recent DL models need to be evaluated on this benchmark. Moreover, these networks are written in different frameworks and repositories (when publicly available); a common, publicly available tool is therefore needed to allow a reproducible benchmark of the different methods and to ease the building of new models. We provide such a tool, which allows the benchmarking of different deep learning reconstruction models.
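
As an illustration of what such a benchmark does (the package's actual API may differ), every model is fed the same undersampled k-space inputs and scored against the same ground truths with a common metric:

    import numpy as np

    def psnr(gt, pred):
        # Peak signal-to-noise ratio on magnitude images.
        mse = np.mean((gt - pred) ** 2)
        return 10 * np.log10(gt.max() ** 2 / mse)

    def benchmark(models, dataset):
        # models: dict mapping a name to a reconstruction callable (kspace, mask) -> image
        # dataset: iterable of (kspace, mask, ground_truth) tuples
        scores = {name: [] for name in models}
        for kspace, mask, ground_truth in dataset:
            for name, reconstruct in models.items():
                recon = reconstruct(kspace, mask)
                scores[name].append(psnr(np.abs(ground_truth), np.abs(recon)))
        return {name: float(np.mean(vals)) for name, vals in scores.items()}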

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset”, ISBI, 2020.

Euclid: impact of nonlinear prescriptions on cosmological parameter estimation from weak lensing cosmic shear



Abstract

Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded in the small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different nonlinear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly nonlinear regime, with both the Hubble constant, H0, and the clustering amplitude, σ8, affected the most. Improvements in the modelling of nonlinear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the nonlinear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for nonlinear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.

Figure: effect of nonlinear prescriptions.

Euclid preparation: VII. Forecast validation for Euclid cosmological probes



Abstract

Aims: The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods: We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
Results: We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
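
The Fisher-matrix machinery at the core of these forecasts can be summarised in a few lines; the sketch below assumes precomputed derivatives of the observables with respect to the cosmological parameters and an inverse data covariance, and is only meant to illustrate the computation being validated, not any of the specific Euclid codes.

    import numpy as np

    def fisher_forecast(dC_dtheta, inv_cov):
        # dC_dtheta: (n_params, n_data) derivatives of the observables w.r.t. the parameters
        # inv_cov:   (n_data, n_data) inverse covariance of the observables
        F = dC_dtheta @ inv_cov @ dC_dtheta.T                      # Fisher matrix F_ab
        marginalised_errors = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma marginalised errors
        return F, marginalised_errors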