Starlet l1-norm for weak lensing cosmology

Authors:

Virginia Ajani, Jean-Luc Starck, Valeria Pettorino

Journal:
Astronomy & Astrophysics, Letters to the Editor (forthcoming article)
Year: 01/2021
Download: A&A | arXiv


Abstract

We present a new summary statistic for weak lensing observables, higher than second order, suitable for extracting non-Gaussian cosmological information and inferring cosmological parameters. We name this statistic the 'starlet l1-norm' as it is computed via the sum of the absolute values of the starlet (wavelet) decomposition coefficients of a weak lensing map. In comparison to the state-of-the-art higher-order statistics -- weak lensing peak counts and minimum counts, or the combination of the two -- the l1-norm provides a fast multi-scale calculation of the full void and peak distribution, avoiding the problem of defining what a peak is and what a void is: the l1-norm carries the information encoded in all pixels of the map, not just the ones in local maxima and minima. We show its potential by applying it to the weak lensing convergence maps provided by the MassiveNuS simulations to get constraints on the sum of neutrino masses, the matter density parameter, and the amplitude of the primordial power spectrum. We find that, in an ideal setting without further systematics, the starlet l1-norm remarkably outperforms commonly used summary statistics, such as the power spectrum or the combination of peak and void counts, in terms of constraining power, representing a promising new unified framework to simultaneously account for the information encoded in peak counts and voids. We find that the starlet l1-norm outperforms the power spectrum by 72% on Mν, 60% on Ωm, and 75% on As for the Euclid-like setting considered; it also improves upon the state-of-the-art combination of peaks and voids for a single smoothing scale by 24% on Mν, 50% on Ωm, and 24% on As.
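
To make the definition concrete, here is a minimal sketch of how such a statistic could be computed with NumPy/SciPy. It is our own simplified illustration, not the paper's code: it uses the standard à trous starlet transform with the B3-spline kernel and bins coefficients by raw amplitude, whereas the paper bins them by signal-to-noise:

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_l1norm(kappa, n_scales=4, n_bins=10):
    """Per-scale l1-norms of starlet coefficients of a convergence map.

    Simplified sketch: isotropic undecimated ("a trous") wavelet transform
    with the B3-spline kernel; coefficients binned by raw amplitude rather
    than by signal-to-noise as in the paper."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline scaling filter
    c = kappa.astype(float)
    stats = []
    for j in range(n_scales):
        # "A trous": dilate the filter by inserting 2**j - 1 zeros between taps
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = h
        smooth = convolve1d(convolve1d(c, k, axis=0, mode='reflect'),
                            k, axis=1, mode='reflect')
        w = c - smooth  # wavelet (detail) coefficients at scale j
        edges = np.linspace(w.min(), w.max(), n_bins + 1)
        bins = np.digitize(w.ravel(), edges[1:-1])
        stats.append(np.array([np.abs(w.ravel()[bins == b]).sum()
                               for b in range(n_bins)]))
        c = smooth  # coarser approximation feeds the next scale
    return np.concatenate(stats)  # one l1-norm per (scale, bin)
```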

State-of-the-art Machine Learning MRI Reconstruction in 2020: Results of the Second fastMRI Challenge

Accelerating MRI scans is one of the principal outstanding problems in the MRI research community. Towards this goal, we hosted the second fastMRI competition targeted towards reconstructing MR images with subsampled k-space data. We provided participants with data from 7,299 clinical brain scans (de-identified via a HIPAA-compliant procedure by NYU Langone Health), holding back the fully-sampled data from 894 of these scans for challenge evaluation purposes. In contrast to the 2019 challenge, we focused our radiologist evaluations on pathological assessment in brain images. We also debuted a new Transfer track that required participants to submit models evaluated on MRI scanners from outside the training set. We received 19 submissions from eight different groups. Results showed one team scoring best in both SSIM scores and qualitative radiologist evaluations. We also performed analysis on alternative metrics to mitigate the effects of background noise and collected feedback from the participants to inform future challenges. Lastly, we identify common failure modes across the submissions, highlighting areas of need for future research in the MRI reconstruction community.

Reference: Matthew J. Muckley, ..., Z. Ramzi, P. Ciuciu and J.-L. Starck et al. “State-of-the-art Machine Learning MRI Reconstruction in 2020: Results of the Second fastMRI Challenge.”

This paper presents the results of the fastMRI 2020 challenge, where our team finished 2nd in the 4x and 8x supervised tracks.
It is currently being submitted to IEEE TMI.

Faster and better sparse blind source separation through mini-batch optimization

Sparse Blind Source Separation (sBSS) plays a key role in scientific domains as different as biomedical imaging, remote sensing, or astrophysics, which require the development of increasingly fast and scalable BSS methods without sacrificing separation performance. To that end, a new distributed sparse BSS algorithm is introduced, based on a mini-batch extension of the Generalized Morphological Component Analysis (GMCA) algorithm. Precisely, it combines a robust projected alternate least-squares method with mini-batch optimization. The originality further lies in the use of a manifold-based aggregation of asynchronously estimated mixing matrices. Numerical experiments are carried out on realistic spectroscopic spectra and highlight the ability of the proposed distributed GMCA (dGMCA) to provide very good separation results even when very small mini-batches are used. Quite unexpectedly, it can further outperform the (non-distributed) state-of-the-art methods for highly sparse sources.
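
As a rough illustration of the kind of update involved, the sketch below shows one projected alternating least-squares step on a mini-batch (our own simplified rendering; the manifold-based aggregation of asynchronously estimated mixing matrices, central to dGMCA, is not shown):

```python
import numpy as np

def sparse_bss_minibatch_step(X_batch, A, thr):
    """One projected alternating least-squares update on a mini-batch,
    in the spirit of GMCA: sparse source update by soft-thresholding,
    then mixing-matrix update with unit-norm columns. A sketch, not the
    paper's dGMCA algorithm."""
    # Source estimate by least squares given the current mixing matrix A
    S = np.linalg.lstsq(A, X_batch, rcond=None)[0]
    # Enforce sparsity with soft-thresholding
    S = np.sign(S) * np.maximum(np.abs(S) - thr, 0.0)
    # Mixing-matrix estimate by least squares given the sources
    A_new = np.linalg.lstsq(S.T, X_batch.T, rcond=None)[0].T
    # Project columns back onto the unit sphere (oblique manifold)
    A_new /= np.maximum(np.linalg.norm(A_new, axis=0), 1e-12)
    return A_new, S
```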

Reference: Christophe Kervazo, Tobias Liaudat and Jérôme Bobin.
“Faster and better sparse blind source separation through mini-batch optimization,” Digital Signal Processing, Elsevier, 2020.

DSP Elsevier, HAL.

Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects that are affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim at measuring the shape of galaxies, and PSF effects represent an important source of systematic errors which must be handled appropriately. This demands a high accuracy in the modelling as well as the estimation of the PSF at galaxy positions.

Aims. Sometimes referred to as non-parametric PSF estimation, the goal of this paper is to estimate the PSF at galaxy positions, starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, our model must first precisely capture the PSF field variations over the Field of View (FoV), and then recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously builds a PSF field model over the instrument's entire focal plane. This makes it possible to capture global as well as local PSF features through the use of two complementary models that enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can make it difficult to capture global ellipticity patterns.

Results. We first test our method on a realistic simulated dataset, comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA), and outperform both of them. We then contrast our approach with PSFEx on real data from the Canada-France Imaging Survey (CFIS), which uses the Canada-France-Hawaii Telescope (CFHT). We show that our PSF model is less noisy and achieves a ~22% gain in pixel Root Mean Squared Error (RMSE) with respect to PSFEx.

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.
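
For concreteness, the pixel RMSE metric quoted in the Results can be computed along these lines (a minimal sketch of the metric only; the paper's exact evaluation pipeline may differ):

```python
import numpy as np

def pixel_rmse(model_stamps, star_stamps):
    """Pixel Root Mean Squared Error between model PSFs and observed
    star stamps, the kind of metric behind the ~22% gain quoted above."""
    residual = np.asarray(model_stamps) - np.asarray(star_stamps)
    return np.sqrt(np.mean(residual ** 2))
```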

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn. “Multi-CCD Point Spread Function Modelling,” submitted 2020.

arXiv, code.

Probabilistic Mapping of Dark Matter by Neural Score Matching

The Dark Matter present in the Large-Scale Structure of the Universe is invisible, but its presence can be inferred through the small gravitational lensing effect it has on the images of faraway galaxies. By measuring this lensing effect on a large number of galaxies, it is possible to reconstruct maps of the Dark Matter distribution on the sky. This, however, represents an extremely challenging inverse problem due to missing data and noise-dominated measurements. In this work, we present a novel methodology for addressing such inverse problems by combining elements of Bayesian statistics, analytic physical theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach makes it possible to do the following: (1) make full use of analytic cosmological theory to constrain the 2pt statistics of the solution, (2) learn from cosmological simulations any differences between this analytic prior and full simulations, and (3) obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We present an application of this methodology on the first deep-learning-assisted Dark Matter map reconstruction of the Hubble Space Telescope COSMOS field.
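
To illustrate step (3), posterior sampling with a learned score might look like the sketch below, where `score_fn` stands for the learned prior score and `A` for the linear lensing operator (both names are our own, introduced for illustration; the paper uses an annealed HMC variant rather than the plain Langevin dynamics shown here):

```python
import numpy as np

def annealed_langevin_posterior(y, A, sigma_n, score_fn, sigmas,
                                n_steps=100, eps=1e-5, seed=0):
    """Sample x from p(x | y) for the linear model y = A x + n, with a
    learned prior score `score_fn(x, sigma)` (assumed given), via
    annealed Langevin dynamics over decreasing noise levels `sigmas`."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for sigma in sigmas:  # anneal the noise level from high to low
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            grad_ll = A.T @ (y - A @ x) / sigma_n ** 2   # likelihood score
            grad_prior = score_fn(x, sigma)              # learned prior score
            x = (x + step * (grad_ll + grad_prior)
                 + np.sqrt(2 * step) * rng.standard_normal(x.shape))
    return x
```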

Reference: Benjamin Remy, François Lanusse, Zaccharie Ramzi, Jia Liu, Niall Jeffrey and Jean-Luc Starck. “Probabilistic Mapping of Dark Matter by Neural Score Matching,” Machine Learning and the Physical Sciences Workshop, NeurIPS 2020.

arXiv, code.

XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge

We present a modular cross-domain neural network, the XPDNet, and its application to the MRI reconstruction task. This approach consists in unrolling the PDHG algorithm and learning the acceleration scheme between steps. We also adopt state-of-the-art techniques specific to Deep Learning for MRI reconstruction. At the time of writing, this approach is the best performer in PSNR on the fastMRI leaderboards for both knee and brain data at acceleration factor 4.
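
As a rough picture of what "unrolling" means here, the sketch below alternates a k-space data-consistency step with a learned image-space correction (`image_net` is a stand-in for a trained denoiser). This is a simplified cross-domain loop of our own, not the exact XPDNet/PDHG scheme with its learned acceleration:

```python
import numpy as np

def unrolled_recon(y, mask, image_net, n_iter=10, lam=1.0):
    """Minimal sketch of a cross-domain unrolled reconstruction:
    alternate a soft k-space data-consistency step with a learned
    image-space correction `image_net` (assumed given, complex-aware)."""
    F = lambda im: np.fft.fft2(im, norm='ortho')
    Finv = lambda k: np.fft.ifft2(k, norm='ortho')
    x = Finv(y)  # zero-filled initial image from the masked k-space y
    for _ in range(n_iter):
        k = F(x)
        # Soft data consistency at the sampled k-space locations
        k = np.where(mask, (k + lam * y) / (1.0 + lam), k)
        x = Finv(k)
        # Learned image-space correction
        x = x + image_net(x)
    return x
```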

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge.”


This network was used to submit reconstructions to the 2020 fastMRI Brain reconstruction challenge. Results are to be announced on December 6th 2020.

Denoising Score-Matching for Uncertainty Quantification in Inverse Problems

Deep neural networks have proven extremely efficient at solving a wide range of inverse problems, but most often the uncertainty on the solution they provide is hard to quantify. In this work, we propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover. We adopt recent denoising score matching techniques to learn this prior from data, and subsequently use it as part of an annealed Hamiltonian Monte-Carlo scheme to sample the full posterior of image inverse problems. We apply this framework to Magnetic Resonance Image (MRI) reconstruction and illustrate how this approach not only yields high quality reconstructions but can also be used to assess the uncertainty on particular features of a reconstructed image.
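
The denoising score matching step can be summarized in a few lines: perturb clean training samples with Gaussian noise and train a network to predict the score of the perturbed density. The sketch below assumes a callable `score_net`; it is an illustration of the technique, not the paper's training code:

```python
import numpy as np

def dsm_loss(score_net, x_clean, sigma, rng):
    """Denoising score matching objective: perturb clean samples with
    Gaussian noise of scale sigma and train `score_net` (assumed given)
    to predict the score of the perturbed density,
    -(x_noisy - x_clean) / sigma**2, with the usual sigma**2 weighting."""
    noise = sigma * rng.standard_normal(x_clean.shape)
    x_noisy = x_clean + noise
    target = -noise / sigma ** 2
    pred = score_net(x_noisy, sigma)
    return np.mean(sigma ** 2 * (pred - target) ** 2)
```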

Reference: Z. Ramzi, Benjamin Remy, François Lanusse, J.-L. Starck and P. Ciuciu. “Denoising Score-Matching for Uncertainty Quantification in Inverse Problems,” Deep Learning and Inverse Problems Workshop, NeurIPS 2020.

Wavelets in the Deep Learning Era

Sparsity-based methods, such as wavelets, were the state of the art for inverse problems for more than 20 years before being overtaken by neural networks.
In particular, U-nets have proven to be extremely effective.
Their main ingredients are highly non-linear processing, massive learning made possible by advances in optimization algorithms and GPU computing power, and the use of large data sets for training.
While the many stages of non-linearity are intrinsic to deep learning, learning from training data could also be exploited by sparsity-based approaches.
The aim of our study is to push the limits of sparsity with learning and to compare the results with U-nets.
We present a new network architecture, which preserves the properties of sparsity-based methods, such as exact reconstruction and good generalization, while harnessing the power of neural networks for learning and fast computation.
We evaluate the model on image denoising tasks and show it is competitive with learning-based models.
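
The core idea can be illustrated as follows: keep an exact wavelet analysis/synthesis pair, so that exact reconstruction is guaranteed, and learn only the thresholds applied to the coefficients. This sketch (using PyWavelets) captures that principle only; it is not the architecture proposed in the paper:

```python
import numpy as np
import pywt

def thresholded_wavelet_denoise(img, thetas, wavelet='db4', levels=4):
    """Denoise by soft-thresholding wavelet detail coefficients with
    per-scale thresholds `thetas` (the learnable parameters in this
    sketch), then reconstruct exactly with the inverse transform."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]  # coarse approximation, left untouched
    for j, details in enumerate(coeffs[1:]):
        t = thetas[j]
        out.append(tuple(np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
                         for d in details))
    return pywt.waverec2(out, wavelet)
```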

Reference: Z. Ramzi, J.-L. Starck and P. Ciuciu. “Wavelets in the Deep Learning Era,” EUSIPCO, 2020.

Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset


The MRI reconstruction field lacked a proper data set allowing reproducible results on real raw data (i.e., complex-valued), especially for deep learning (DL) methods, as these approaches require much more data than classical Compressed Sensing (CS) reconstruction. This gap is now filled by the fastMRI data set, and recent DL models need to be evaluated on this benchmark. Moreover, these networks are written in different frameworks and hosted in different repositories (when publicly available), so a common, publicly available tool is needed to allow a reproducible benchmark of the different methods and to ease the building of new models. We provide such a tool, which enables the benchmarking of different deep learning reconstruction models.
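
As an example of the kind of metric such a benchmark reports, here is a minimal PSNR helper (our own sketch, not the tool's API):

```python
import numpy as np

def psnr(gt, pred):
    """Peak signal-to-noise ratio on image magnitudes, one of the
    standard fastMRI evaluation metrics (minimal sketch)."""
    mse = np.mean((np.abs(gt) - np.abs(pred)) ** 2)
    return 10.0 * np.log10(np.abs(gt).max() ** 2 / mse)
```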

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset,” ISBI, 2020.