Euclid: Non-parametric point spread function field recovery through interpolation on a Graph Laplacian

 

Authors: M.A. Schmitz, J.-L. Starck, F. Ngole Mboula, N. Auricchio, J. Brinchmann, R.I. Vito Capobianco, R. Clédassou, L. Conversi, L. Corcione, N. Fourmanoit, M. Frailis, B. Garilli, F. Hormuth, D. Hu, H. Israel, S. Kermiche, T. D. Kitching, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, O. Mansutti, O. Marggraf, R.J. Massey, F. Pasian, V. Pettorino, F. Raison, J.D. Rhodes, M. Roncarelli, R.P. Saglia, P. Schneider, S. Serrano, A.N. Taylor, R. Toledo-Moreo, L. Valenziano, C. Vuerli, J. Zoubian
Journal: submitted to A&A
Year: 2019
Download: arXiv

 


Abstract

Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error, and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims. This paper's contributions are twofold. First, we take steps toward a non-parametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Second, we study the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods. We use the recently proposed Resolved Components Analysis approach to deal with the undersampling of observed star images. We then estimate the PSF at the positions of galaxies by interpolation on a set of graphs that contain information relative to its spatial variations. We compare our approach to PSFEx, then quantify the impact of PSF recovery errors on galaxy shape measurements through image simulations.
Results. Our approach yields an improvement over PSFEx in terms of both the PSF model and the resulting galaxy shape errors, though it is at present not sufficient to reach the required Euclid accuracy. We also find that different shape measurement approaches can react differently to the same PSF modelling errors.
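To give a flavour of the general idea behind Laplacian-based interpolation (the paper's actual pipeline, combining Resolved Components Analysis with interpolation over a set of graphs, is more involved), here is a minimal harmonic-interpolation sketch in Python. The Gaussian-kernel graph, the bandwidth, and all names are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: harmonic interpolation of PSF features on a graph Laplacian.
# Features (e.g. eigen-PSF coefficients) are known at star positions and
# sought at galaxy positions.
import numpy as np
from scipy.spatial.distance import cdist

def harmonic_interpolate(star_pos, star_vals, gal_pos, sigma=0.1):
    """Interpolate PSF features from star positions to galaxy positions."""
    pos = np.vstack([star_pos, gal_pos])
    W = np.exp(-cdist(pos, pos) ** 2 / (2 * sigma ** 2))  # affinity matrix
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                        # graph Laplacian
    n = len(star_pos)
    L_uu, L_ul = L[n:, n:], L[n:, :n]
    # Harmonic solution: values at the unlabelled (galaxy) nodes minimise the
    # Dirichlet energy given the values fixed at the star nodes.
    return np.linalg.solve(L_uu, -L_ul @ star_vals)
```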

Blind separation of a large number of sparse sources

 

Authors: C. Kervazo, J. Bobin, C. Chenot
Journal: Signal Processing
Year: 2018
Download: Paper


Abstract

Blind Source Separation (BSS) is one of the major tools to analyze multispectral data, with applications ranging from astronomical to biomedical signal processing. Nevertheless, most BSS methods fail when the number of sources becomes large, typically exceeding a few tens. Since the ability to estimate a large number of sources is paramount in a very wide range of applications, we introduce a new algorithm, coined block-Generalized Morphological Component Analysis (bGMCA), to specifically tackle sparse BSS problems when a large number of sources needs to be estimated. Since sparse BSS is by nature a challenging nonconvex inverse problem, the algorithmic strategy plays a central role, especially when many sources have to be estimated. For that purpose, the bGMCA algorithm builds upon block-coordinate descent with intermediate-size blocks. Numerical experiments demonstrate the robustness of the bGMCA algorithm when the sources are numerous; comparisons have been carried out on realistic simulations of spectroscopic data.
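To make the block-coordinate strategy concrete, the following Python sketch alternates least-squares updates and soft thresholding over random intermediate-size blocks of sources, in the spirit of GMCA. The fixed threshold, block policy, and all parameter values are placeholder assumptions, not the paper's tuned algorithm.

```python
# Schematic sparse BSS via block-coordinate descent: X ≈ A @ S with sparse S.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bgmca_sketch(X, n_sources, block=5, n_iter=200, thresh=0.1, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], n_sources))
    S = np.zeros((n_sources, X.shape[1]))
    for _ in range(n_iter):
        idx = rng.choice(n_sources, size=min(block, n_sources), replace=False)
        # Residual once the sources outside the block are explained away.
        R = X - np.delete(A, idx, 1) @ np.delete(S, idx, 0)
        Ab = A[:, idx]
        S[idx] = soft(np.linalg.pinv(Ab) @ R, thresh)        # sparse update
        A[:, idx] = R @ S[idx].T @ np.linalg.pinv(S[idx] @ S[idx].T)
        A[:, idx] /= np.maximum(np.linalg.norm(A[:, idx], axis=0), 1e-12)
    return A, S
```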

Distinguishing standard and modified gravity cosmologies with machine learning

 

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten, C. Giocoli, M. Meneghetti, M. Baldi
Journal: PRD
Year: 2019
Download: ADS | arXiv


Abstract

We present a convolutional neural network to classify distinct cosmological scenarios based on the statistically similar weak-lensing maps they generate. Modified gravity (MG) models that include massive neutrinos can mimic the standard concordance model (ΛCDM) in terms of Gaussian weak-lensing observables. An inability to distinguish viable models that are based on different physics potentially limits a deeper understanding of the fundamental nature of cosmic acceleration. For a fixed redshift of sources, we demonstrate that a machine learning network trained on simulated convergence maps can discriminate between such models better than conventional higher-order statistics. Results improve further when multiple source redshifts are combined. To accelerate training, we implement a novel data compression strategy that incorporates our prior knowledge of the morphology of typical convergence map features. Our method fully distinguishes ΛCDM from its most similar MG model on noise-free data, and it correctly discriminates among the MG models with at least 80% accuracy when using the full redshift information. Adding noise lowers the correct classification rate of all models, but the neural network still significantly outperforms the peak statistics used in a previous analysis.
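For readers unfamiliar with the architecture class, a minimal PyTorch sketch of a convergence-map classifier follows. The layer sizes, class count, and pooling choices are illustrative assumptions, not the network described in the paper.

```python
# Toy CNN for classifying single-channel convergence maps by cosmology.
import torch
import torch.nn as nn

class ConvMapClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),    # global pooling -> fixed-size vector
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):               # x: (batch, 1, H, W) convergence maps
        return self.head(self.features(x).flatten(1))

# Usage: logits = ConvMapClassifier()(torch.randn(8, 1, 128, 128))
```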

On the dissection of degenerate cosmologies with machine learning

 

Authors: J. Merten, C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionally reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data, we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers: one based on a nearest-neighbour search and one based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to better understand why they perform so well.
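The two classifier types compared in the paper can be sketched in a few lines of scikit-learn, applied to precomputed feature vectors. The data shapes, labels, and hyperparameters below are illustrative assumptions.

```python
# Compare a nearest-neighbour classifier with a fully connected network on
# summary-statistic feature vectors (power spectra, peak counts, Minkowski
# functionals, ...). Random data stands in for the simulation suite.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.random.randn(900, 50)        # 900 maps, 50 summary statistics each
y = np.random.randint(0, 9, 900)    # 9 cosmological model labels

for clf in (KNeighborsClassifier(n_neighbors=5),
            MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```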

Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics

 

Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi
Journal: A&A
Year: 2018
Download: ADS | arXiv


Abstract

General relativity (GR) has been well tested up to solar system scales, but it is much less certain that standard gravity remains an accurate description on the largest, that is, cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including by the recent direct gravitational-wave detections. Degeneracies between the standard model (ΛCDM) and modified gravity (MG) models, as well as among different MG parameters, must be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics in the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We have tested our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We have explored a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic ΛCDM in terms of the 3D power spectrum of matter density fluctuations. We find that depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and ΛCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.
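As a concrete hint of what "higher-order statistics" means here, the following numpy sketch computes third- and fourth-order moments of a standardised convergence map together with a simple peak count. The 8-neighbourhood peak definition and the S/N threshold are common conventions assumed for illustration, not the paper's exact estimators.

```python
# Higher-order weak-lensing statistics from a (smoothed) convergence map.
import numpy as np
from scipy.ndimage import maximum_filter

def higher_order_stats(kappa, nu=3.0):
    k = (kappa - kappa.mean()) / kappa.std()      # standardise to S/N units
    skewness = np.mean(k ** 3)                    # third-order moment
    kurtosis = np.mean(k ** 4) - 3.0              # excess fourth-order moment
    # Peaks: local maxima over the 8-neighbourhood above the S/N threshold.
    peaks = (k == maximum_filter(k, size=3)) & (k > nu)
    return skewness, kurtosis, int(peaks.sum())
```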

NMF with Sparse Regularizations in Transformed Domains

 

Authors: J. Rapin, J. Bobin, A. Larue, J.-L. Starck
Journal: SIAM Journal on Imaging Sciences
Year: 2014
Download: ADS | arXiv


Abstract

Non-negative blind source separation (BSS) has raised interest in various fields of research, as testified by the wide literature on the topic of non-negative matrix factorization (NMF). In this context, it is fundamental that the sources to be estimated present some diversity in order to be efficiently retrieved. Sparsity is known to enhance such contrast between the sources while producing very robust approaches, especially to noise. In this paper we introduce a new algorithm in order to tackle the blind separation of non-negative sparse sources from noisy measurements. We first show that sparsity and non-negativity constraints have to be carefully applied on the sought-after solution. In fact, improperly constrained solutions are unlikely to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA (non-negative Generalized Morphological Component Analysis), makes use of proximal calculus techniques to provide properly constrained solutions. The performance of nGMCA compared to other state-of-the-art algorithms is demonstrated by numerical experiments encompassing a wide variety of settings, with negligible parameter tuning. In particular, nGMCA is shown to provide robustness to noise and performs well on synthetic mixtures of real NMR spectra.
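The key proximal ingredients can be stated compactly: soft thresholding is the proximity operator of the l1 norm, and for the combined l1-plus-non-negativity penalty the joint proximity operator reduces to one-sided thresholding. The sketch below is a toy illustration of these operators, not the full nGMCA algorithm.

```python
# Proximal operators used in nGMCA-style sparse, non-negative updates.
import numpy as np

def prox_l1(x, t):
    """Soft thresholding: the prox of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_l1_nonneg(x, t):
    """Prox of t * ||.||_1 plus the indicator of the non-negative orthant.

    For this particular pair the joint prox has a closed form: shift down by
    the threshold, then clip at zero.
    """
    return np.maximum(x - t, 0.0)
```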

Low-dimensional signal-strength fingerprint-based positioning in wireless LANs

 

Authors: D. Milioris, G. Tzagkarakis, A. Papakonstantinou, M. Papadopouli, P. Tsakalides
Journal: Ad Hoc Networks
Year: 2011
Download: Science Direct


Abstract

Accurate location awareness is of paramount importance in most ubiquitous and pervasive computing applications. Numerous solutions for indoor localization based on IEEE 802.11, Bluetooth, ultrasonic, and vision technologies have been proposed. This paper introduces a suite of novel indoor positioning techniques utilizing signal-strength (SS) fingerprints collected from access points (APs). Our first approach employs a statistical representation of the received SS measurements by means of a multivariate Gaussian model, considering a discretized grid-like form of the indoor environment and computing probability distribution signatures at each cell of the grid. At run time, the system compares the signature at the unknown position with the signature of each cell using the Kullback–Leibler divergence (KLD) between their corresponding probability densities. Our second approach applies compressive sensing (CS) to perform sparsity-based accurate indoor localization, while significantly reducing the amount of information transmitted from a wireless device, which possesses limited power, storage, and processing capabilities, to a central server. The performance evaluation, conducted on the premises of a research laboratory and an aquarium under real-life conditions, reveals that the proposed statistical fingerprinting and CS-based localization techniques achieve substantial localization accuracy.
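The fingerprint-matching step of the first approach can be illustrated with the closed-form KL divergence between two multivariate Gaussians. The function below is a generic numpy sketch with illustrative names, not the paper's implementation.

```python
# KL divergence between the run-time SS signature and a grid cell's stored
# signature, both modelled as multivariate Gaussians.
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) in nats."""
    d = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# The cell minimising the (possibly symmetrised) KLD to the run-time
# signature is reported as the estimated position.
```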

Sparse and Non-Negative BSS for Noisy Data

 

Authors: J. Rapin, J. Bobin, A. Larue, J.-L. Starck
Journal: IEEE Transactions on Signal Processing
Year: 2013
Download: ADS | arXiv


Abstract

Non-negative blind source separation (BSS) has raised interest in various fields of research, as testified by the wide literature on the topic of non-negative matrix factorization (NMF). In this context, it is fundamental that the sources to be estimated present some diversity in order to be efficiently retrieved. Sparsity is known to enhance such contrast between the sources while producing very robust approaches, especially to noise. In this paper we introduce a new algorithm in order to tackle the blind separation of non-negative sparse sources from noisy measurements. We first show that sparsity and non-negativity constraints have to be carefully applied on the sought-after solution. In fact, improperly constrained solutions are unlikely to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA (non-negative Generalized Morphological Component Analysis), makes use of proximal calculus techniques to provide properly constrained solutions. The performance of nGMCA compared to other state-of-the-art algorithms is demonstrated by numerical experiments encompassing a wide variety of settings, with negligible parameter tuning. In particular, nGMCA is shown to provide robustness to noise and performs well on synthetic mixtures of real NMR spectra.

The Scale of the Problem: Recovering Images of Reionization with GMCA

 

Authors: E. Chapman, F. B. Abdalla, J. Bobin, J.-L. Starck
Journal: MNRAS
Year: 2013
Download: ADS | arXiv


Abstract

The accurate and precise removal of 21-cm foregrounds from Epoch of Reionization redshifted 21-cm emission data is essential if we are to gain insight into an unexplored cosmological era. We apply a non-parametric technique, Generalized Morphological Component Analysis (GMCA), to simulated LOFAR-EoR data and show that it has the ability to clean the foregrounds with high accuracy. We recover the 21-cm 1D, 2D and 3D power spectra with high accuracy across an impressive range of frequencies and scales. We show that GMCA preserves the 21-cm phase information, especially when the smallest spatial scale data is discarded. While it has been shown that LOFAR-EoR image recovery is theoretically possible using image smoothing, we add that wavelet decomposition is an efficient way of recovering 21-cm signal maps to the same or greater order of accuracy with more flexibility. By comparing the GMCA output residual maps (equal to the noise, 21-cm signal and any foreground fitting errors) with the 21-cm maps at one frequency and discarding the smaller wavelet scale information, we find a correlation coefficient of 0.689, compared to 0.588 for the equivalently smoothed image. Considering only the central 50% of the maps, these coefficients improve to 0.905 and 0.605, respectively, and we conclude that wavelet decomposition is a significantly more powerful method to denoise reconstructed 21-cm maps than smoothing.
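The wavelet-based comparison can be sketched with PyWavelets: zero out the finest detail scale of the recovered map, reconstruct, and correlate with the true signal map. The wavelet family and decomposition level below are illustrative assumptions.

```python
# Discard the finest wavelet scale of a reconstructed 21-cm map, then
# compute its correlation coefficient against the true signal map.
import numpy as np
import pywt

def corr_after_discarding_finest_scale(recovered, truth, wavelet="db4"):
    coeffs = pywt.wavedec2(recovered, wavelet, level=3)
    # coeffs[-1] holds the finest (horizontal, vertical, diagonal) details.
    coeffs[-1] = tuple(np.zeros_like(c) for c in coeffs[-1])
    smoothed = pywt.waverec2(coeffs, wavelet)[:truth.shape[0], :truth.shape[1]]
    return np.corrcoef(smoothed.ravel(), truth.ravel())[0, 1]
```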