Faster and better sparse blind source separation through mini-batch optimization

Sparse Blind Source Separation (sBSS) plays a key role in scientific domains as diverse as biomedical imaging, remote sensing and astrophysics, which require increasingly fast and scalable BSS methods without sacrificing separation performance. To that end, a new distributed sparse BSS algorithm is introduced, based on a mini-batch extension of the Generalized Morphological Component Analysis algorithm (GMCA). Precisely, it combines a robust projected alternating least-squares method with mini-batch optimization. The originality further lies in the use of a manifold-based aggregation of asynchronously estimated mixing matrices. Numerical experiments are carried out on realistic spectroscopic spectra, and highlight the ability of the proposed distributed GMCA (dGMCA) to provide very good separation results even when very small mini-batches are used. Quite unexpectedly, it can further outperform the (non-distributed) state-of-the-art methods for highly sparse sources.
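To make the idea concrete, here is a minimal, illustrative sketch of a mini-batch alternating least-squares loop with sparsity-enforcing thresholding, in the spirit of dGMCA. It is not the authors' implementation: the threshold `lam`, the batch size and the simple average-and-renormalize update are placeholder choices standing in for the paper's manifold-based aggregation of mixing matrices.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def gmca_minibatch(X, n_sources, n_iter=100, batch_size=256, lam=0.1, rng=None):
    """Illustrative mini-batch GMCA-style loop (not the authors' exact dGMCA).

    X : (n_channels, n_samples) observed mixtures.
    Returns the estimated mixing matrix A and sparse sources S.
    """
    rng = np.random.default_rng(rng)
    n_channels, n_samples = X.shape
    A = rng.standard_normal((n_channels, n_sources))
    A /= np.linalg.norm(A, axis=0)                    # columns on the unit sphere

    for _ in range(n_iter):
        idx = rng.choice(n_samples, size=batch_size, replace=False)
        Xb = X[:, idx]
        # Source update: least squares followed by sparsity-enforcing thresholding.
        Sb = soft_threshold(np.linalg.pinv(A) @ Xb, lam)
        # Mixing-matrix update on the mini-batch, projected back to unit columns.
        A_batch = Xb @ np.linalg.pinv(Sb)
        A_batch /= np.linalg.norm(A_batch, axis=0) + 1e-12
        # Crude spherical aggregation: average then renormalize (a first-order
        # stand-in for the paper's manifold-based aggregation).
        A = 0.9 * A + 0.1 * A_batch
        A /= np.linalg.norm(A, axis=0) + 1e-12

    S = soft_threshold(np.linalg.pinv(A) @ X, lam)
    return A, S
```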

Reference: Christophe Kervazo, Tobias Liaudat and Jérôme Bobin. “Faster and better sparse blind source separation through mini-batch optimization”, Digital Signal Processing, Elsevier, 2020.

DSP Elsevier, HAL.

Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects that are affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim at measuring the shapes of galaxies, and PSF effects represent an important source of systematic errors which must be handled appropriately. This demands high accuracy in both the modelling and the estimation of the PSF at galaxy positions.

Aims. Sometimes referred to as non-parametric PSF estimation, the goal of this paper is to estimate the PSF at galaxy positions, starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, our model must first precisely capture the PSF field variations over the Field of View (FoV), and then recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously creates a PSF field model over the whole of the instrument’s focal plane. This makes it possible to capture global as well as local PSF features through the use of two complementary models that enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can make it difficult to capture global ellipticity patterns.
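As a toy illustration of the global-plus-local idea (not the actual MCCD model, which works on full pixel images), the sketch below fits a single scalar PSF feature, such as one ellipticity component, with a smooth polynomial shared by the whole focal plane plus a low-order per-CCD correction; all function and parameter names are hypothetical.

```python
import numpy as np

def fit_global_plus_local(pos, ccd_id, e_obs, deg_global=3, deg_local=1):
    """Toy global + local fit of a scalar PSF feature over the focal plane.

    pos    : (n_stars, 2) focal-plane coordinates.
    ccd_id : (n_stars,) integer CCD index of each star.
    e_obs  : (n_stars,) noisy feature measured on each star.
    """
    def poly_features(xy, deg):
        x, y = xy[:, 0], xy[:, 1]
        return np.stack([x**i * y**j
                         for i in range(deg + 1)
                         for j in range(deg + 1 - i)], axis=1)

    # Global component: one smooth polynomial shared by the whole focal plane.
    Phi_g = poly_features(pos, deg_global)
    w_g, *_ = np.linalg.lstsq(Phi_g, e_obs, rcond=None)
    resid = e_obs - Phi_g @ w_g

    # Local components: one low-order polynomial per CCD, fitted on the residuals.
    w_l = {}
    for c in np.unique(ccd_id):
        m = ccd_id == c
        Phi_l = poly_features(pos[m], deg_local)
        w_l[c], *_ = np.linalg.lstsq(Phi_l, resid[m], rcond=None)
    return w_g, w_l
```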

Results. We first test our method on a realistic simulated dataset, comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA); the proposed method outperforms both of them. We then contrast our approach with PSFEx on real data from CFIS (Canada-France Imaging Survey), which uses the CFHT (Canada-France-Hawaii Telescope). We show that our PSF model is less noisy and achieves a ~22% gain in pixel Root Mean Squared Error (RMSE) with respect to PSFEx.

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn. “Multi-CCD Point Spread Function Modelling”, submitted 2020.

arXiv, code.

Probabilistic Mapping of Dark Matter by Neural Score Matching

The Dark Matter present in the Large-Scale Structure of the Universe is invisible, but its presence can be inferred through the small gravitational lensing effect it has on the images of distant galaxies. By measuring this lensing effect on a large number of galaxies, it is possible to reconstruct maps of the Dark Matter distribution on the sky. This, however, represents an extremely challenging inverse problem due to missing data and noise-dominated measurements. In this work, we present a novel methodology for addressing such inverse problems by combining elements of Bayesian statistics, analytic physical theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach makes it possible to: (1) make full use of analytic cosmological theory to constrain the 2pt statistics of the solution, (2) learn from cosmological simulations any differences between this analytic prior and full simulations, and (3) obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We present an application of this methodology to the first deep-learning-assisted Dark Matter map reconstruction of the Hubble Space Telescope COSMOS field.
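The sampling step can be sketched as follows, assuming a simplified setting with a masked, directly observed signal and a `prior_score(x, sigma)` callable returning the hybrid (analytic plus learned) prior score; the paper uses an annealed Hamiltonian Monte-Carlo scheme, for which this annealed Langevin loop is a simpler stand-in.

```python
import numpy as np

def sample_posterior(y, mask, sigma_noise, prior_score, n_steps=500,
                     sigmas=np.geomspace(1.0, 0.01, 20), eps=1e-4, rng=None):
    """Annealed Langevin sampling of p(x | y), a simplified stand-in for
    annealed HMC. `prior_score(x, sigma)` is assumed to return the score of
    the smoothed prior at noise level sigma.

    y    : observed (masked, noisy) data, same shape as the signal.
    mask : 1 where data exist, 0 in the gaps.
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(y.shape) * sigmas[0]
    for sigma in sigmas:                        # anneal the noise level down
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            # Score of the Gaussian likelihood, zero where data are missing.
            grad_lik = mask * (y - x) / sigma_noise**2
            grad = grad_lik + prior_score(x, sigma)
            x = x + step * grad + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x
```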

Reference: Benjamin Remy, François Lanusse, Zaccharie Ramzi, Jia Liu, Niall Jeffrey and Jean-Luc Starck. “Probabilistic Mapping of Dark Matter by Neural Score Matching”, Machine Learning and the Physical Sciences Workshop, NeurIPS 2020.

arXiv, code.

XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge

We present a modular cross-domain neural network, the XPDNet, and its application to the MRI reconstruction task. This approach consists of unrolling the PDHG algorithm and learning the acceleration scheme between steps. We also adopt state-of-the-art Deep Learning techniques specific to MRI reconstruction. At the time of writing, this approach is the best performer in PSNR on the fastMRI leaderboards for both knee and brain at acceleration factor 4.
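The structure of such an unrolled cross-domain network can be sketched as follows. This is a generic primal-dual unrolling with placeholder learned modules, not the actual XPDNet architecture; the `forward`, `adjoint`, `primal_nets` and `dual_nets` interfaces are assumptions for illustration.

```python
import numpy as np

def unrolled_pd_net(y, forward, adjoint, primal_nets, dual_nets):
    """Minimal sketch of an unrolled primal-dual (PDHG-like) reconstruction,
    in the spirit of cross-domain networks such as XPDNet.

    y           : measured k-space data.
    forward     : image -> k-space operator (e.g. masked Fourier transform).
    adjoint     : k-space -> image operator.
    primal_nets, dual_nets : learned correction modules, one per iteration.
    """
    h = np.zeros_like(y)            # dual (k-space) variable
    x = adjoint(y)                  # primal (image) variable, zero-filled init
    for f_dual, f_primal in zip(dual_nets, primal_nets):
        # Dual update: learned correction from the current k-space residual.
        h = f_dual(h, forward(x), y)
        # Primal update: learned correction from the back-projected dual.
        x = f_primal(x, adjoint(h))
    return x
```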

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge”.


This network was used to submit reconstructions to the 2020 fastMRI Brain reconstruction challenge. Results are to be announced on December 6th, 2020.

Denoising Score-Matching for Uncertainty Quantification in Inverse Problems

Deep neural networks have proven extremely efficient at solving a wide range of inverse problems, but most often the uncertainty on the solution they provide is hard to quantify. In this work, we propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover. We adopt recent denoising score matching techniques to learn this prior from data, and subsequently use it as part of an annealed Hamiltonian Monte-Carlo scheme to sample the full posterior of image inverse problems. We apply this framework to Magnetic Resonance Image (MRI) reconstruction and illustrate how this approach not only yields high quality reconstructions but can also be used to assess the uncertainty on particular features of a reconstructed image.
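For reference, the prior-learning step can be illustrated with the standard noise-conditional denoising score matching objective; the `score_net(x_noisy, sigma)` interface is an assumption, and this sketch shows the generic loss rather than the paper's exact training setup.

```python
import numpy as np

def dsm_loss(score_net, x, sigmas, rng=None):
    """Noise-conditional denoising score matching loss (generic sketch).

    x : (batch, dim) clean training signals.
    sigmas : array of noise levels to train over.
    """
    rng = np.random.default_rng(rng)
    sigma = rng.choice(sigmas, size=(x.shape[0], 1))   # one noise level per sample
    eps = rng.standard_normal(x.shape)
    x_noisy = x + sigma * eps
    # Target is the score of the Gaussian smoothing kernel:
    # -(x_noisy - x) / sigma^2 = -eps / sigma.
    target = -eps / sigma
    per_sample = np.sum((score_net(x_noisy, sigma) - target) ** 2, axis=1)
    # The usual sigma^2 weighting balances the loss across noise levels.
    return 0.5 * np.mean(sigma[:, 0] ** 2 * per_sample)
```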

Reference: Z. Ramzi, Benjamin Remy, François Lanusse, J.-L. Starck and P. Ciuciu. “Denoising Score-Matching for Uncertainty Quantification in Inverse Problems”, Deep Learning and Inverse Problems Workshop, NeurIPS 2020.

PySAP: Python Sparse Data Analysis Package for Multidisciplinary Image Processing


Authors: S. Farrens, A. Grigis, L. El Gueddari, Z. Ramzi, Chaithya G. R., S. Starck, B. Sarthou, H. Cherkaoui, P. Ciuciu, J-L. Starck
Journal: Astronomy and Computing
Year: 2020
DOI: 10.1016/j.ascom.2020.100402
Download: ADS | arXiv


Abstract

We present the open-source image processing software package PySAP (Python Sparse data Analysis Package) developed for the COmpressed Sensing for Magnetic resonance Imaging and Cosmology (COSMIC) project. This package provides a set of flexible tools that can be applied to a variety of compressed sensing and image reconstruction problems in various research domains. In particular, PySAP offers fast wavelet transforms and a range of integrated optimisation algorithms. In this paper we present the features available in PySAP and provide practical demonstrations on astrophysical and magnetic resonance imaging data.
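A minimal usage sketch is given below; the transform name and attributes follow the PySAP documentation as we recall it and may differ between package versions.

```python
import numpy as np
import pysap

# Wrap a 2D array (astrophysical or MRI data) in a PySAP image object.
data = np.random.randn(128, 128)
image = pysap.Image(data=data)

# Load one of PySAP's wavelet transforms and decompose the image.
# The transform name is taken from the PySAP tutorials and may vary by version.
transform_klass = pysap.load_transform("BsplineWaveletTransformATrousAlgorithm")
transform = transform_klass(nb_scale=4)
transform.data = image
transform.analysis()                  # forward wavelet transform
coeffs = transform.analysis_data      # wavelet coefficients per scale

# Reconstruct the image from its wavelet coefficients.
rec = transform.synthesis()
```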


Code

PySAP Code


Euclid: Reconstruction of weak-lensing mass maps for non-Gaussianity studies


Authors: S. Pires, V. Vandenbussche, V. Kansal, R. Bender, L. Blot, D. Bonino, A. Boucaud, J. Brinchmann, V. Capobianco, J. Carretero, M. Castellano, S. Cavuoti, R. Clédassou, G. Congedo, L. Conversi, L. Corcione, F. Dubath, P. Fosalba, M. Frailis, E. Franceschi, M. Fumana, F. Grupp, F. Hormuth, S. Kermiche, M. Knabenhans, R. Kohley, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, E. Maiorano, O. Marggraf, R. Massey, G. Meylan, C. Padilla, S. Paltani, F. Pasian, M. Poncet, D. Potter, F. Raison, J. Rhodes, M. Roncarelli, R. Saglia, P. Schneider, A. Secroun, S. Serrano, J. Stadel, P. Tallada Crespí, I. Tereno, R. Toledo-Moreo, Y. Wang
Journal: Astronomy and Astrophysics
Year: 2020
Download: ADS | arXiv


Abstract

Weak lensing, namely the deflection of light by matter along the line of sight, has proven to be an efficient method to constrain models of structure formation and reveal the nature of dark energy. So far, most weak lensing studies have focused on the shear field that can be measured directly from the ellipticity of background galaxies. However, within the context of forthcoming full-sky weak lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While carrying the same information, the lensing signal is more compressed in the convergence maps than in the shear field, simplifying otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects due to holes in the data field, field borders, noise and the fact that the shear is not a direct observable (reduced shear). In this paper, we present the two mass inversion methods that are being included in the official Euclid data processing pipeline: the standard Kaiser & Squires method (KS) and a new mass inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS methodology and includes corrections for mass mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and the third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method. In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
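For reference, the baseline KS inversion is a direct Fourier-space operation; the sketch below implements the textbook flat-sky version, without the KS+ corrections for missing data, borders and reduced shear discussed in the paper.

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Standard Kaiser & Squires (KS) mass inversion on a flat-sky grid:
    recover the convergence (kappa) map from the shear field in Fourier space.
    """
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[np.newaxis, :]
    k2 = np.fft.fftfreq(ny)[:, np.newaxis]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0                      # avoid division by zero at k = 0

    g_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    # KS kernel: kappa_E + i kappa_B = [(k1^2 - k2^2) - 2i k1 k2] / |k|^2 * gamma_hat
    kappa_hat = ((k1**2 - k2**2) - 2j * k1 * k2) / k_sq * g_hat
    kappa_hat[0, 0] = 0.0                 # the mean convergence is unconstrained
    kappa = np.fft.ifft2(kappa_hat)
    return kappa.real, kappa.imag         # E-mode and B-mode maps
```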

Euclid: a subtle blend for a more precise cosmological result

After three years of work, a team from the Euclid collaboration, coordinated by Irfu, unveils a new method for jointly processing observations that specifically target dark matter or dark energy, two distinct but correlated concepts. The result: a greatly improved precision of the cosmological interpretation!

The quest for the origin of cosmic acceleration

Determining the cause of cosmic acceleration is one of the great challenges of cosmology. Is it a constant (ΛCDM) or a new fluid (dark energy, DE)? Is there a new force that modifies gravity as described by Einstein (modified gravity, MG)?

CEA works on the analysis of dark energy and modified gravity within the framework of the European Space Agency (ESA) Planck mission, which measured the Cosmic Microwave Background (CMB), the light emitted 380,000 years after the Big Bang. In the final data release, we updated and tested different scenarios combining the Planck results with other datasets.

Machine learning in space

Clefs CEA No. 69 – Artificial Intelligence, November 2019.

In astrophysics, as in many other scientific fields, machine learning has become indispensable in recent years, for a very wide range of problems: image restoration, classification and characterization of stars and galaxies, automatic separation of stars from galaxies in images, numerical simulation of observations or of the matter distribution in the Universe…