The dark matter of the cosmic web revealed by gravitational lensing

A new milestone has been reached in the field of weak gravitational lensing with the production of one of the richest galaxy catalogues to date. This catalogue contains ultra-precise morphologies for 100 million distant galaxies, making it possible to measure the tiny distortions caused by gravitational lensing acting on light as it propagates through the cosmic web of dark matter that pervades the Universe.

A gigantic map of the sky and a vast dataset to better understand dark matter

Within the international UNIONS collaboration, scientists from the CEA's Institute of Research into the Fundamental Laws of the Universe have produced one of the largest datasets on dark matter, obtained from observations of 100 million galaxies distorted by gravitational lensing. These data are highly valuable for many scientific missions.

Early dark energy in the pre- and post-recombination epochs

 

Authors:

Adrià Gómez-Valent, Ziyang Zheng, Luca Amendola, Valeria Pettorino, Christof Wetterich

Journal:
Physical Review D (PRD)
Year: 07/2021
Download: PRD | arXiv


Abstract

Dark energy could play a role at redshifts z ≫ O(1). Many quintessence models possess scaling or attractor solutions where the fraction of dark energy follows the dominant component in previous epochs of the Universe’s expansion, or phase transitions may happen close to the time of matter-radiation equality. A non-negligible early dark energy (EDE) fraction around matter-radiation equality could contribute to alleviating the well-known H0 tension. In this work, we constrain the fraction of EDE using two approaches: first, we use a fluid parameterization that mimics the plateaux of the dominant components in the past. An alternative tomographic approach constrains the EDE density in binned redshift intervals. The latter allows us to reconstruct the evolution of Ωde(z) before and after the decoupling of the cosmic microwave background (CMB) photons. We have employed Planck 2018 data, the Pantheon compilation of supernovae of Type Ia (SNIa), data on galaxy clustering, the prior on the absolute magnitude of SNIa by SH0ES, and weak lensing data from KiDS+VIKING-450 and DES-Y1. When we use a minimal parameterization mimicking the background plateaux, EDE has only a small impact on current cosmological tensions. We show how the constraints on the EDE fraction weaken considerably when its sound speed is allowed to vary. By means of our binned analysis we put very tight constraints on the EDE fraction around the CMB decoupling time, ≲ 0.4% at 2σ c.l. We confirm previous results that a significant EDE fraction in the radiation-dominated epoch loosens the H0 tension, but tends to worsen the tension for σ8. A subsequent presence of EDE in the matter-dominated era helps to alleviate this issue. When both the SH0ES prior and weak lensing data are considered in the fitting analysis in combination with data from CMB, SNIa and baryon acoustic oscillations, the EDE fractions are constrained to be ≲ 2.6% in the radiation-dominated epoch and ≲ 1.5% in the redshift range z ∈ (100, 1000) at 2σ c.l. The two tensions remain with a statistical significance of 2-3σ c.l.
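As a schematic illustration of the tomographic approach mentioned above (our notation; the precise parameterizations are defined in the paper), the EDE abundance is reconstructed as a piecewise-constant fraction with one free amplitude per redshift bin,

\Omega_{\rm de}(z) = \Omega_{{\rm de},i} \quad \text{for} \quad z_i \le z < z_{i+1},

each Ωde,i being constrained independently by the data, which yields the binned evolution of Ωde(z) before and after CMB decoupling.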

Press release (in Italian) by MEDIA INAF is available here.

 

Starlet ℓ1-norm for weak lensing cosmology

 

Authors:

Virginia Ajani, Jean-Luc Starck, Valeria Pettorino

Journal:
Astronomy & Astrophysics, Forthcoming article, Letters to the Editor
Year: 01/2021
Download: A&A | arXiv


Abstract

We present a new summary statistic for weak lensing observables, higher than second order, suitable for extracting non-Gaussian cosmological information and inferring cosmological parameters. We name this statistic the 'starlet ℓ1-norm' as it is computed via the sum of the absolute values of the starlet (wavelet) decomposition coefficients of a weak lensing map. In comparison to the state-of-the-art higher-order statistics -- weak lensing peak counts and minimum counts, or the combination of the two -- the ℓ1-norm provides a fast multi-scale calculation of the full void and peak distribution, avoiding the problem of defining what a peak is and what a void is: The ℓ1-norm carries the information encoded in all pixels of the map, not just the ones in local maxima and minima. We show its potential by applying it to the weak lensing convergence maps provided by the MassiveNuS simulations to get constraints on the sum of neutrino masses, the matter density parameter, and the amplitude of the primordial power spectrum. We find that, in an ideal setting without further systematics, the starlet ℓ1-norm remarkably outperforms commonly used summary statistics, such as the power spectrum or the combination of peak and void counts, in terms of constraining power, representing a promising new unified framework to simultaneously account for the information encoded in peak counts and voids. We find that the starlet ℓ1-norm outperforms the power spectrum by 72% on Mν, 60% on Ωm, and 75% on As for the Euclid-like setting considered; it also improves upon the state-of-the-art combination of peaks and voids for a single smoothing scale by 24% on Mν, 50% on Ωm, and 24% on As.
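For illustration, the statistic can be sketched in a few lines of Python: an à trous starlet transform of a convergence map, followed by the sum of absolute coefficients per scale. This is a minimal sketch of the general idea (the published analysis additionally bins the coefficients in signal-to-noise before taking the norm); the toy map below is ours, not a MassiveNuS product.

import numpy as np
from scipy.ndimage import convolve1d

def starlet_transform(image, n_scales):
    # A trous (starlet) wavelet transform with the B3-spline kernel.
    # Returns the list of detail maps, one per scale, with the coarse map last.
    kernel = np.array([1., 4., 6., 4., 1.]) / 16.
    c = image.astype(float)
    bands = []
    for j in range(n_scales):
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = kernel          # dilate the kernel by inserting holes
        smooth = convolve1d(convolve1d(c, k, axis=0, mode='reflect'),
                            k, axis=1, mode='reflect')
        bands.append(c - smooth)    # wavelet (detail) coefficients at scale j
        c = smooth
    bands.append(c)                 # coarse residual
    return bands

def starlet_l1_norm(kappa, n_scales=4):
    # One number per scale: the l1-norm of the starlet coefficients.
    return np.array([np.abs(w).sum() for w in starlet_transform(kappa, n_scales)])

kappa = np.random.randn(256, 256)   # stand-in for a convergence map
print(starlet_l1_norm(kappa))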

shear_bias

 

Authors:  M. Kilbinger, A. Pujol
Language: Python
Download: GitHub
Description: shear_bias is a package that provides tools and scripts for shear bias estimation in weak gravitational lensing analyses.


Installation

Download the code from the GitHub repository.

git clone https://github.com/CosmoStat/shear_bias

A directory shear_bias is created. There, call the setup script to install the package.

cd shear_bias
python setup.py install
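On recent Python setups, pip install . (run from inside the shear_bias directory) can be used instead of the setup.py call above, and importing the package is a quick check that the installation worked:

pip install .
python -c 'import shear_bias'

As a reminder of what is being estimated (a generic NumPy sketch of the usual multiplicative/additive bias convention, not the package's own API), measured shears from image simulations are regressed against the input shears, g_obs = (1 + m) g_true + c:

import numpy as np

# Toy "simulation": true input shears and slightly biased, noisy measurements
rng = np.random.default_rng(0)
g_true = rng.uniform(-0.05, 0.05, size=1000)
g_meas = 1.02 * g_true + 1e-3 + rng.normal(0.0, 5e-4, size=g_true.size)

# Fit g_meas = (1 + m) * g_true + c: the slope gives 1 + m, the intercept gives c
slope, intercept = np.polyfit(g_true, g_meas, 1)
m, c = slope - 1.0, intercept
print(f"multiplicative bias m = {m:.4f}, additive bias c = {c:.2e}")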

Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects that are affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim at measuring the shape of galaxies, and PSF effects represent an important source of systematic errors which must be handled appropriately. This demands a high accuracy in the modelling as well as the estimation of the PSF at galaxy positions.

Aims. The goal of this paper is to estimate a PSF at galaxy positions (a task sometimes referred to as non-parametric PSF estimation), starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, our model must first precisely capture the PSF field variations over the Field of View (FoV), and then recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously creates a PSF field model over the instrument's entire focal plane. This makes it possible to capture global as well as local PSF features through the use of two complementary models which enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can lead to difficulties in capturing global ellipticity patterns.

Results. We first test our method on a realistic simulated dataset comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA). We outperform both of them with our proposed method. Then we contrast our approach with PSFEx on real data from CFIS (Canada-France Imaging Survey) that uses the CFHT (Canada-France-Hawaii Telescope). We show that our PSF model is less noisy and achieves a ~ 22% gain on pixel Root Mean Squared Error (RMSE) with respect to PSFEx.

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.
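Schematically (our notation, not that of the paper), the PSF at a focal-plane position u is modelled as the sum of a global component shared across the whole focal plane and a local component attached to the CCD c(u) containing u,

\mathrm{PSF}(u) \;\approx\; \sum_k S_k^{\rm glob}\, A_k^{\rm glob}(u) \;+\; \sum_j S_j^{{\rm loc},\,c(u)}\, A_j^{\rm loc}(u),

where the S are learned eigen-PSF images, the A are spatial weight functions, and the two complementary parts carry the different spatial constraints mentioned in the Methods.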

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn. “Multi-CCD Point Spread Function Modelling”, submitted 2020.

arXiv, code.

Probabilistic Mapping of Dark Matter by Neural Score Matching


The Dark Matter present in the Large-Scale Structure of the Universe is invisible, but its presence can be inferred through the small gravitational lensing effect it has on the images of far away galaxies. By measuring this lensing effect on a large number of galaxies, it is possible to reconstruct maps of the Dark Matter distribution on the sky. This, however, represents an extremely challenging inverse problem due to missing data and noise-dominated measurements. In this work, we present a novel methodology for addressing such inverse problems by combining elements of Bayesian statistics, analytic physical theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach allows us to do the following: (1) make full use of analytic cosmological theory to constrain the 2pt statistics of the solution, (2) learn from cosmological simulations any differences between this analytic prior and full simulations, and (3) obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We present an application of this methodology on the first deep-learning-assisted Dark Matter map reconstruction of the Hubble Space Telescope COSMOS field.
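In schematic terms (a generic sketch of score-based posterior sampling, not the paper's exact expressions), the posterior score of the convergence map x given the lensing data y adds the data likelihood to a prior split into its analytic Gaussian part and a correction s_\theta learned from simulations, and samples are then drawn with annealed Langevin-type updates:

\nabla_x \log p(x\,|\,y) = \nabla_x \log p(y\,|\,x) + \nabla_x \log p_{\rm Gauss}(x) + s_\theta(x), \qquad x_{t+1} = x_t + \frac{\epsilon}{2}\, \nabla_x \log p(x_t\,|\,y) + \sqrt{\epsilon}\, z_t, \quad z_t \sim \mathcal{N}(0, \mathbb{1}).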

Reference: Benjamin Remy, François Lanusse, Zaccharie Ramzi, Jia Liu, Niall Jeffrey and Jean-Luc Starck. “Probabilistic Mapping of Dark Matter by Neural Score Matching”, Machine Learning and the Physical Sciences Workshop, NeurIPS 2020.

arXiv, code.

Euclid: The reduced shear approximation and magnification bias for Stage IV cosmic shear experiments

Authors: A.C. Deshpande, ..., S. Casas, M. Kilbinger, V. Pettorino, S. Pires, J.-L. Starck, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2020
DOI:  10.1051/0004-6361/201937323
Download:

ADS | arXiv

 


Abstract

Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed. In this work, we evaluate the impact of the reduced shear approximation and magnification bias on the information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface densities, in high-magnification regions. The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism, and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward modelling approach. These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of -0.53σ, 0.43σ, -0.34σ, 1.36σ, -0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic: the intrinsic alignment of galaxies. Accordingly, we develop the formalism for an intrinsic alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
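For reference, the quantity actually estimated from galaxy shapes is the reduced shear,

g = \frac{\gamma}{1-\kappa} \approx \gamma\,(1 + \kappa),

and the first-order approximation discussed above replaces g by the shear γ, i.e. it drops the γκ term; neglecting that term (together with the magnification terms) is what produces the parameter biases quoted in the abstract.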

Euclid preparation: VI. Verifying the Performance of Cosmic Shear Experiments

Authors: Euclid Collaboration, P. Paykari, ..., S. Farrens, M. Kilbinger, V. Pettorino, S. Pires, J.-L. Starck, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2020
DOI:  10.1051/0004-6361/201936980
Download:

ADS | arXiv

 


Abstract

Our aim is to quantify the impact of systematic effects on the inference of cosmological parameters from cosmic shear. We present an end-to-end approach that introduces sources of bias in a modelled weak lensing survey on a galaxy-by-galaxy level. Residual biases are propagated through a pipeline from galaxy properties (one end) through to cosmic shear power spectra and cosmological parameter estimates (the other end), to quantify how imperfect knowledge of the pipeline changes the maximum likelihood values of dark energy parameters. We quantify the impact of an imperfect correction for charge transfer inefficiency (CTI) and modelling uncertainties of the point spread function (PSF) for Euclid, and find that the biases introduced can be corrected to acceptable levels.

Constraining neutrino masses with weak-lensing multiscale peak counts

Massive neutrinos influence the background evolution of the Universe as well as the growth of structure. Being able to model this effect and constrain the sum of their masses is one of the key challenges in modern cosmology. Weak-lensing cosmological constraints will also soon reach higher levels of precision with next-generation surveys like LSST, WFIRST and Euclid. In this context, we use the MassiveNuS simulations to derive constraints on the sum of neutrino masses Mν, the present-day total matter density Ωm, and the primordial power spectrum normalization As in a tomographic setting. We measure the lensing power spectrum as second-order statistics along with peak counts as higher-order statistics on lensing convergence maps generated from the simulations. We investigate the impact of multi-scale filtering approaches on cosmological parameters by employing a starlet (wavelet) filter and a concatenation of Gaussian filters. In both cases peak counts perform better than the power spectrum on the set of parameters [Mν, Ωm, As], respectively by 63%, 40% and 72% when using a starlet filter and by 70%, 40% and 77% when using a multi-scale Gaussian. More importantly, we show that when using a multi-scale approach, joining power spectrum and peaks does not add any relevant information over considering just the peaks alone. While both multi-scale filters behave similarly, we find that with the starlet filter the majority of the information in the data covariance matrix is encoded in the diagonal elements; this can be an advantage when inverting the matrix, speeding up the numerical implementation. For the starlet case, we further identify the minimum resolution required to obtain constraints comparable to those achievable with the full wavelet decomposition and we show that the information contained in the coarse-scale map cannot be neglected.
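As an illustration of the peak statistic used above (a generic SciPy sketch, not the paper's analysis code), peaks are pixels of a filtered convergence map that exceed their eight neighbours, histogrammed in signal-to-noise; the multi-scale version simply repeats the count on each starlet or Gaussian-filtered scale.

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def peak_counts(kappa, sigma_noise, bins):
    # Local maxima of the map (3x3 neighbourhood), histogrammed in S/N
    is_peak = kappa == maximum_filter(kappa, size=3)
    snr = kappa[is_peak] / sigma_noise
    counts, _ = np.histogram(snr, bins=bins)
    return counts

# Toy multi-scale usage: smooth a noisy map with Gaussian filters of increasing width
kappa = 0.02 * np.random.randn(512, 512)
bins = np.linspace(0.0, 6.0, 13)
for scale_pix in (2, 4, 8):
    smooth = gaussian_filter(kappa, scale_pix)
    print(scale_pix, peak_counts(smooth, smooth.std(), bins))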

Reference: Virginia Ajani, Austin Peel, Valeria Pettorino, Jean-Luc Starck, Zack Li, Jia Liu, 2020. More details in the paper.