Benchmarking MRI Reconstruction Neural Networks on Large Public Datasets

Deep learning has started to show promising results for reconstruction in Magnetic Resonance Imaging (MRI). Many networks are being developed, but comparisons between them remain difficult: studies use different frameworks, networks are not always properly re-trained, and evaluations are run on different datasets. The recent release of fastMRI, a public dataset of raw k-space data, encouraged us to build a consistent benchmark of several deep neural networks for MR image reconstruction. This paper presents the results of this benchmark, enabling a fair comparison of the networks, and links to the open-source Keras implementations of all of them. The main finding of this benchmark is that it is more beneficial to perform more iterations between the image and the measurement spaces than to use a deeper network in each space.

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck, "Benchmarking MRI reconstruction neural networks on large public datasets", Applied Sciences, 10, 1816, 2020. doi:10.3390/app10051816
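The cross-domain idea behind the main finding can be sketched independently of any particular framework: alternate between a correction in image space and a data-consistency step in measurement (k-space) space. Below is a minimal NumPy sketch; the `denoise` callable stands in for the learned per-iteration image-space network, which is an assumption of this illustration, not the paper's actual architecture:

```python
import numpy as np

def data_consistency(kspace, measured, mask):
    # Re-impose the actually acquired k-space samples where they were measured.
    return np.where(mask, measured, kspace)

def unrolled_recon(measured, mask, n_iter=5, denoise=lambda img: img):
    """Sketch of a cross-domain (unrolled) reconstruction: alternate
    between the image domain and the measurement (k-space) domain."""
    kspace = measured.copy()
    for _ in range(n_iter):
        img = np.fft.ifft2(kspace)       # back to image space
        img = denoise(img)               # placeholder for the learned correction
        kspace = np.fft.fft2(img)        # forward to measurement space
        kspace = data_consistency(kspace, measured, mask)
    return np.abs(np.fft.ifft2(kspace))
```

With a fully sampled mask and an identity `denoise`, the loop reproduces the input image exactly; the benchmarked networks replace `denoise` with a trained sub-network at each of several such iterations.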

DeepMass: The first Deep Learning reconstruction of dark matter maps from weak lensing observational data (DES SV weak lensing data)

This is the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network (CNN) with a U-Net-based architecture on over 3.6 × 10^5 simulated data realisations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. Our DeepMass method is substantially more accurate than existing mass-mapping methods. With a validation set of 8000 simulated DES SV data realisations, DeepMass improved the mean-square-error (MSE) by 11 per cent compared to Wiener filtering with a fixed power spectrum. With N-body simulated MICE mock data, we show that Wiener filtering with the optimal known power spectrum still gives a worse MSE than our generalised method with no input cosmological parameters; we show that the improvement is driven by the non-linear structures in the convergence. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST.

Reference: N. Jeffrey, F. Lanusse, O. Lahav, J.-L. Starck, "Learning dark matter map reconstructions from DES SV weak lensing data", Monthly Notices of the Royal Astronomical Society, in press, 2019.
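The fixed-power-spectrum Wiener filter that DeepMass is benchmarked against has a compact closed form: each Fourier mode of the noisy convergence map is scaled by P_signal / (P_signal + P_noise). A minimal sketch on a periodic map follows; the per-mode power-spectrum arrays passed in the usage example are illustrative assumptions, not DES SV values:

```python
import numpy as np

def wiener_filter_map(noisy_map, signal_ps, noise_ps):
    """Wiener filtering of a periodic 2D map given per-Fourier-mode
    signal and noise power spectra (arrays of the same shape as the map)."""
    ft = np.fft.fft2(noisy_map)
    w = signal_ps / (signal_ps + noise_ps)   # per-mode Wiener weight in [0, 1]
    return np.real(np.fft.ifft2(w * ft))
```

When the noise power vanishes the filter is the identity; when noise and signal power are equal, every mode is attenuated by a factor of two, illustrating why a fixed-spectrum Wiener filter smooths away exactly the non-linear structure that DeepMass is able to retain.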

 

Euclid: Non-parametric point spread function field recovery through interpolation on a Graph Laplacian

 

Authors: M.A. Schmitz, J.-L. Starck, F. Ngole Mboula, N. Auricchio, J. Brinchmann, R.I. Vito Capobianco, R. Clédassou, L. Conversi, L. Corcione, N. Fourmanoit, M. Frailis, B. Garilli, F. Hormuth, D. Hu, H. Israel, S. Kermiche, T. D. Kitching, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, O. Mansutti, O. Marggraf, R.J. Massey, F. Pasian, V. Pettorino, F. Raison, J.D. Rhodes, M. Roncarelli, R.P. Saglia, P. Schneider, S. Serrano, A.N. Taylor, R. Toledo-Moreo, L. Valenziano, C. Vuerli, J. Zoubian
Journal: submitted to A&A
Year: 2019
Download:  arXiv

 


Abstract

Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims. This paper's contributions are twofold. First, we take steps toward a non-parametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Second, we study the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods. We use the recently proposed Resolved Components Analysis approach to deal with the undersampling of observed star images. We then estimate the PSF at the positions of galaxies by interpolation on a set of graphs that contain information relative to its spatial variations. We compare our approach to PSFEx, then quantify the impact of PSF recovery errors on galaxy shape measurements through image simulations.
Results. Our approach yields an improvement over PSFEx in terms of PSF model and on observed galaxy shape errors, though it is at present not sufficient to reach the required Euclid accuracy. We also find that different shape measurement approaches can react differently to the same PSF modelling errors.

A Distributed Learning Architecture for Scientific Imaging Problems

 

Authors: A. Panousopoulou, S. Farrens, K. Fotiadou, A. Woiselle, G. Tsagkatakis, J.-L. Starck, P. Tsakalides
Journal: arXiv
Year: 2018
Download: ADS | arXiv


Abstract

Current trends in scientific imaging are challenged by the emerging need of integrating sophisticated machine learning with Big Data analytics platforms. This work proposes an in-memory distributed learning architecture for enabling sophisticated learning and optimization techniques on scientific imaging problems, which are characterized by the combination of variant information from different origins. We apply the resulting, Spark-compliant, architecture on two emerging use cases from the scientific imaging domain, namely: (a) the space variant deconvolution of galaxy imaging surveys (astrophysics), (b) the super-resolution based on coupled dictionary training (remote sensing). We conduct evaluation studies considering relevant datasets, and the results show at least 60% improvement in time response against conventional computing solutions. Ultimately, the offered discussion provides useful practical insights on the impact of key Spark tuning parameters on the speedup achieved, and the memory/disk footprint.

Distinguishing standard and modified gravity cosmologies with machine learning


 

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten,  C. Giocoli, M. Meneghetti,  M. Baldi
Journal: PRD
Year: 2019
Download: ADS | arXiv


Abstract

We present a convolutional neural network to classify distinct cosmological scenarios based on the statistically similar weak-lensing maps they generate. Modified gravity (MG) models that include massive neutrinos can mimic the standard concordance model (ΛCDM) in terms of Gaussian weak-lensing observables. An inability to distinguish viable models that are based on different physics potentially limits a deeper understanding of the fundamental nature of cosmic acceleration. For a fixed redshift of sources, we demonstrate that a machine learning network trained on simulated convergence maps can discriminate between such models better than conventional higher-order statistics. Results improve further when multiple source redshifts are combined. To accelerate training, we implement a novel data compression strategy that incorporates our prior knowledge of the morphology of typical convergence map features. Our method fully distinguishes ΛCDM from its most similar MG model on noise-free data, and it correctly discriminates among the MG models with at least 80% accuracy when using the full redshift information. Adding noise lowers the correct classification rate of all models, but the neural network still significantly outperforms the peak statistics used in a previous analysis.

On the dissection of degenerate cosmologies with machine learning


 

Authors: J. Merten,  C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionality-reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers; one based on a nearest-neighbour search and one that is based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to understand better why they perform so well.

Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics


 

Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi
Journal: A&A
Year: 2018
Download: ADS | arXiv


Abstract

General relativity (GR) has been well tested up to solar system scales, but it is much less certain that standard gravity remains an accurate description on the largest, that is, cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including the data from the recent direct gravitational wave detections. Degeneracies among the standard model (ΛCDM) and modified gravity (MG) models, as well as among different MG parameters, must be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics in the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We have tested our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We have explored a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic ΛCDM in terms of the 3D power spectrum of matter density fluctuations. We find that depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and ΛCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.
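Peak counts, the statistic found here to outperform higher-order moments, reduce in essence to locating local maxima above a set of thresholds in a pixelised map. A minimal sketch follows; using strict 8-neighbour maxima and ignoring border pixels are simplifications of this illustration, not the paper's exact pipeline:

```python
import numpy as np

def peak_counts(kmap, thresholds):
    """Count strict local maxima (8-neighbour) of a 2D map that exceed
    each threshold; border pixels are excluded for simplicity."""
    n, m = kmap.shape
    interior = kmap[1:-1, 1:-1]
    neighbours = [kmap[1 + di:n - 1 + di, 1 + dj:m - 1 + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)]
    is_peak = np.all([interior > nb for nb in neighbours], axis=0)
    return [int(np.sum(is_peak & (interior > t))) for t in thresholds]
```

In a real analysis the thresholds would be signal-to-noise levels and the map an aperture mass or convergence map; the vector of counts over thresholds is then compared between f(R) and ΛCDM realisations.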

Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

 

Authors: N. Jeffrey, F. B. Abdalla, O. Lahav, F. Lanusse, J.-L. Starck, et al.
Journal:  
Year: 2018
Download: ADS | arXiv


Abstract

Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.

Wasserstein Dictionary Learning: Optimal Transport-based unsupervised non-linear dictionary learning

 

Authors: M.A. Schmitz, M. Heitz, N. Bonneel, F.-M. Ngolè, D. Coeurjolly, M. Cuturi, G. Peyré & J.-L. Starck
Journal: SIAM SIIMS
Year: 2018
Download: ADS | arXiv

 


Abstract

This article introduces a new non-linear dictionary learning method for histograms in the probability simplex. The method leverages optimal transport theory, in the sense that our aim is to reconstruct histograms using so-called displacement interpolations (a.k.a. Wasserstein barycenters) between dictionary atoms; such atoms are themselves synthetic histograms in the probability simplex. Our method simultaneously estimates such atoms, and, for each datapoint, the vector of weights that can optimally reconstruct it as an optimal transport barycenter of such atoms. Our method is computationally tractable thanks to the addition of an entropic regularization to the usual optimal transportation problem, leading to an approximation scheme that is efficient, parallel and simple to differentiate. Both atoms and weights are learned using a gradient-based descent method. Gradients are obtained by automatic differentiation of the generalized Sinkhorn iterations that yield barycenters with entropic smoothing. Because of its formulation relying on Wasserstein barycenters instead of the usual matrix product between dictionary and codes, our method allows for non-linear relationships between atoms and the reconstruction of input data. We illustrate its application in several different image processing settings.
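The core computational object here, the entropy-regularised Wasserstein barycenter obtained by generalised Sinkhorn iterations, can be sketched compactly with the iterative Bregman projection scheme of Benamou et al., which is what the automatic differentiation runs through. The grid, cost matrix, and `eps` value in the usage below are illustrative assumptions:

```python
import numpy as np

def sinkhorn_barycenter(atoms, weights, cost, eps=0.05, n_iter=200):
    """Entropy-regularised Wasserstein barycenter of dictionary atoms
    (columns of `atoms`, histograms on the simplex) with the given
    barycentric weights, via iterative Bregman projections."""
    K = np.exp(-cost / eps)               # Gibbs kernel of the regularised OT
    n, s = atoms.shape
    v = np.ones((n, s))
    b = atoms.mean(axis=1)
    for _ in range(n_iter):
        u = atoms / (K @ v)               # scale to match each atom's marginal
        Ku = K.T @ u
        # geometric mean across atoms gives the current barycenter estimate
        b = np.prod(Ku ** weights[np.newaxis, :], axis=1)
        v = b[:, np.newaxis] / Ku
    return b / b.sum()
```

For two point masses at opposite ends of a 1D grid with equal weights, the barycenter concentrates around the midpoint, which is the displacement-interpolation behaviour (as opposed to the bimodal average a linear dictionary model would produce) that motivates the method.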

Sparse reconstruction of the merging A520 cluster system


 

Authors: A. Peel, F. Lanusse, J.-L. Starck
Journal: ApJ
Year: 2017
Download: ADS | arXiv


Abstract

Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprisingly high mass-to-light concentration of dark mass, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.