## Dark matter in the cosmic web revealed by gravitational lensing

A new milestone has been reached in weak gravitational lensing with the production of one of the richest galaxy catalogues to date. This catalogue contains ultra-precise morphologies for 100 million distant galaxies, making it possible to measure the minute distortions imprinted by gravitational lensing on light propagating through the cosmic web of dark matter that pervades the Universe.

## A vast sky survey and a massive dataset for a better understanding of dark matter

Within the international UNIONS collaboration, scientists from the CEA Institute of Research into the Fundamental Laws of the Universe have produced one of the largest dark-matter datasets, derived from observations of 100 million galaxies distorted by gravitational lensing. These data are highly valuable for many scientific missions.

## Early dark energy in the pre- and post-recombination epochs

### Early dark energy in the pre- and post-recombination epochs

 Authors: Journal: PRD Year: 07/2021 Download: PRD | Arxiv

## Abstract

Dark energy could play a role at redshifts z ∼ O(1). Many quintessence models possess scaling or attractor solutions where the fraction of dark energy follows the dominant component in previous epochs of the Universe’s expansion, or phase transitions may happen close to the time of matter-radiation equality. A non-negligible early dark energy (EDE) fraction around matter-radiation equality could help alleviate the well-known H0 tension. In this work, we constrain the fraction of EDE using two approaches: first, we use a fluid parameterization that mimics the plateaux of the dominant components in the past. An alternative tomographic approach constrains the EDE density in binned redshift intervals. The latter allows us to reconstruct the evolution of Ωde(z) before and after the decoupling of the cosmic microwave background (CMB) photons. We have employed Planck 2018 data, the Pantheon compilation of supernovae of Type Ia (SNIa), data on galaxy clustering, the prior on the absolute magnitude of SNIa by SH0ES, and weak lensing data from KiDS+VIKING-450 and DES-Y1. When we use a minimal parameterization mimicking the background plateaux, EDE has only a small impact on current cosmological tensions. We show how the constraints on the EDE fraction weaken considerably when its sound speed is allowed to vary. By means of our binned analysis we put very tight constraints on the EDE fraction around the CMB decoupling time, below 0.4% at 2σ c.l. We confirm previous results that a significant EDE fraction in the radiation-dominated epoch loosens the H0 tension, but tends to worsen the tension for σ8. A subsequent presence of EDE in the matter-dominated era helps to alleviate this issue. When both the SH0ES prior and weak lensing data are considered in the fitting analysis in combination with data from CMB, SNIa and baryon acoustic oscillations, the EDE fractions are constrained to be below 2.6% in the radiation-dominated epoch and below 1.5% in the redshift range z ∈ (100, 1000) at 2σ c.l. The two tensions remain, with a statistical significance of 2–3σ c.l.

Press release (in Italian) by MEDIA INAF is available here.

## Starlet l1-norm for weak lensing cosmology

### Starlet l1-norm for weak lensing cosmology

Authors: Virginia Ajani, Jean-Luc Starck, Valeria Pettorino Journal: Astronomy & Astrophysics, Forthcoming article, Letters to the Editor Year: 01/2021 Download: A&A | Arxiv

## Abstract

We present a new summary statistic for weak lensing observables, higher than second order, suitable for extracting non-Gaussian cosmological information and inferring cosmological parameters. We name this statistic the 'starlet l1-norm' as it is computed via the sum of the absolute values of the starlet (wavelet) decomposition coefficients of a weak lensing map. In comparison to the state-of-the-art higher-order statistics -- weak lensing peak counts and minimum counts, or the combination of the two -- the l1-norm provides a fast multi-scale calculation of the full void and peak distribution, avoiding the problem of defining what a peak is and what a void is: the l1-norm carries the information encoded in all pixels of the map, not just the ones in local maxima and minima. We show its potential by applying it to the weak lensing convergence maps provided by the MassiveNuS simulations to get constraints on the sum of neutrino masses, the matter density parameter, and the amplitude of the primordial power spectrum. We find that, in an ideal setting without further systematics, the starlet l1-norm remarkably outperforms commonly used summary statistics, such as the power spectrum or the combination of peak and void counts, in terms of constraining power, representing a promising new unified framework to simultaneously account for the information encoded in peak counts and voids. We find that the starlet l1-norm outperforms the power spectrum by 72% on Mν, 60% on Ωm, and 75% on As for the Euclid-like setting considered; it also improves upon the state-of-the-art combination of peaks and voids for a single smoothing scale by 24% on Mν, 50% on Ωm, and 24% on As.
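The statistic itself is simply a per-scale sum of absolute wavelet coefficients. Below is a minimal sketch using the à trous B3-spline smoothing that underlies the starlet transform; periodic borders are assumed for brevity, and this is an illustration only, not the paper's code:

```python
import numpy as np

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline smoothing kernel

def _smooth(img, step):
    """Separable B3-spline smoothing with hole spacing `step` (a trous scheme)."""
    out = img
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for offset, weight in zip((-2, -1, 0, 1, 2), B3):
            acc += weight * np.roll(out, offset * step, axis=axis)  # periodic borders
        out = acc
    return out

def starlet_l1_norms(kappa, n_scales=4):
    """l1-norm (sum of |coefficients|) of each starlet scale of a map.

    Returns n_scales detail-scale norms plus the coarse-residual norm.
    By construction the detail maps plus the coarse residual sum back
    to the input map, so every pixel contributes to the statistic.
    """
    c = kappa.astype(float)
    norms = []
    for j in range(n_scales):
        c_next = _smooth(c, step=2 ** j)
        detail = c - c_next               # wavelet coefficients at scale j
        norms.append(np.abs(detail).sum())
        c = c_next
    norms.append(np.abs(c).sum())         # coarse residual
    return np.array(norms)
```

Because the kernel weights sum to one, a constant map produces zero detail coefficients at every scale, which is a convenient sanity check of an implementation.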

## shear_bias

 Authors: M. Kilbinger, A. Pujol Language: Python Download: GitHub Description: shear_bias is a package that contains tools and scripts for shear bias estimation for weak gravitational lensing analysis.

## Installation

Download the code from the GitHub repository:

    git clone https://github.com/CosmoStat/shear_bias

This creates a directory `shear_bias`. From there, run the setup script to install the package:

    cd shear_bias
    python setup.py install

## Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects that are affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim at measuring the shape of galaxies, and PSF effects represent an important source of systematic errors which must be handled appropriately. This demands a high accuracy in the modelling as well as the estimation of the PSF at galaxy positions.

Aims. Sometimes referred to as non-parametric PSF estimation, the goal of this paper is to estimate a PSF at galaxy positions, starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, the model must first precisely capture the PSF field variations over the Field of View (FoV), and then recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously creates a PSF field model over the instrument’s entire focal plane. This makes it possible to capture global as well as local PSF features through the use of two complementary models that enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can lead to difficulties in capturing global ellipticity patterns.

Results. We first test our method on a realistic simulated dataset comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA). We outperform both of them with our proposed method. Then we contrast our approach with PSFEx on real data from CFIS (Canada-France Imaging Survey) that uses the CFHT (Canada-France-Hawaii Telescope). We show that our PSF model is less noisy and achieves a ~ 22% gain on pixel Root Mean Squared Error (RMSE) with respect to PSFEx.
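The ~22% figure above refers to the pixel Root Mean Squared Error between PSF model stamps and observed star stamps. A minimal sketch of the metric follows; the function names are illustrative, not taken from the MCCD code:

```python
import numpy as np

def pixel_rmse(model_stars, observed_stars):
    """Pixel RMSE between PSF model stamps and star stamps.

    Both inputs are arrays of shape (n_stars, ny, nx); the error is
    averaged over all pixels of all stamps before taking the root.
    """
    residual = np.asarray(model_stars) - np.asarray(observed_stars)
    return np.sqrt(np.mean(residual ** 2))

def relative_gain(rmse_new, rmse_ref):
    """Fractional RMSE improvement of one model over a reference model."""
    return 1.0 - rmse_new / rmse_ref
```

For example, a model with RMSE 0.78 relative to a reference RMSE of 1.0 corresponds to a 22% gain under this definition.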

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn. “Multi-CCD Point Spread Function Modelling”, submitted 2020.

arXiv, code.

## Probabilistic Mapping of Dark Matter by Neural Score Matching

The Dark Matter present in the Large-Scale Structure of the Universe is invisible, but its presence can be inferred through the small gravitational lensing effect it has on the images of faraway galaxies. By measuring this lensing effect on a large number of galaxies it is possible to reconstruct maps of the Dark Matter distribution on the sky. This, however, represents an extremely challenging inverse problem due to missing data and noise-dominated measurements. In this work, we present a novel methodology for addressing such inverse problems by combining elements of Bayesian statistics, analytic physical theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach makes it possible to: (1) make full use of analytic cosmological theory to constrain the 2pt statistics of the solution, (2) learn from cosmological simulations any differences between this analytic prior and full simulations, and (3) obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We present an application of this methodology to the first deep-learning-assisted Dark Matter map reconstruction of the Hubble Space Telescope COSMOS field.
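The posterior-sampling ingredient can be illustrated on a toy problem: unadjusted Langevin dynamics driven by an analytic Gaussian score, standing in for the learned score network used in the paper. All names and numbers below are illustrative:

```python
import numpy as np

def gaussian_posterior_score(x, mu, sigma):
    """Score (gradient of the log-density) of a 1-D Gaussian N(mu, sigma^2)."""
    return -(x - mu) / sigma ** 2

def langevin_sample(score, x0, step=1e-2, n_steps=5000, rng=None):
    """Unadjusted Langevin algorithm: x <- x + step * score(x) + sqrt(2*step) * noise.

    For a small step size the chain's stationary distribution approximates
    the target density whose score is supplied.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# run many independent chains in parallel, targeting N(2, 0.5^2)
samples = langevin_sample(lambda x: gaussian_posterior_score(x, 2.0, 0.5),
                          np.zeros(2000), rng=0)
```

In the paper's setting the analytic score is replaced by a neural network trained by score matching on simulations, and the chain runs over full dark-matter maps rather than scalars, but the sampling loop has the same structure.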

Reference: Benjamin Remy, François Lanusse, Zaccharie Ramzi, Jia Liu, Niall Jeffrey and Jean-Luc Starck. “Probabilistic Mapping of Dark Matter by Neural Score Matching”, Machine Learning and the Physical Sciences Workshop, NeurIPS 2020.

arXiv, code.

## Euclid: impact of nonlinear prescriptions on cosmological parameter estimation from weak lensing cosmic shear

### Euclid: impact of nonlinear prescriptions on cosmological parameter estimation from weak lensing cosmic shear

Authors: Journal: Year: 10/2020 Download: Inspire | Arxiv

## Abstract

Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded in the small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different nonlinear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly nonlinear regime, with both the Hubble constant, H0, and the clustering amplitude, σ8, affected the most. Improvements in the modelling of nonlinear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the nonlinear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for nonlinear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.

## Euclid preparation: VII. Forecast validation for Euclid cosmological probes

### Euclid preparation: VII. Forecast validation for Euclid cosmological probes

Authors: Journal: Astronomy & Astrophysics, Volume 642, id.A191, 66 pp. Year: 10/2020 Download: Inspire | Arxiv

## Abstract

Aims: The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods: We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimated the required accuracy for Euclid forecasts and outlined a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
Results: We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
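The Fisher-matrix machinery validated above can be sketched for a Gaussian likelihood with a parameter-independent covariance, where the Fisher matrix is built from observable derivatives and marginalised 1σ errors come from the inverse Fisher matrix. The straight-line toy model below is purely illustrative and has nothing to do with the Euclid likelihoods themselves:

```python
import numpy as np

def fisher_matrix(derivs, cov):
    """Fisher matrix F_ab = (d mu / d p_a)^T C^-1 (d mu / d p_b).

    derivs: (n_params, n_data) derivatives of the mean observable
            with respect to each parameter.
    cov:    (n_data, n_data) data covariance matrix.
    """
    cinv = np.linalg.inv(cov)
    return derivs @ cinv @ derivs.T

def marginalised_errors(fisher):
    """1-sigma marginalised errors: sqrt of the diagonal of the inverse Fisher matrix."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# toy example: straight-line model mu_i = a + b * x_i with unit data covariance
x = np.array([0.0, 1.0, 2.0])
derivs = np.vstack([np.ones_like(x), x])   # d mu / d a, d mu / d b
fisher = fisher_matrix(derivs, np.eye(3))
errors = marginalised_errors(fisher)
```

Cross-validating such implementations against each other, as the paper does for the full galaxy-clustering and weak-lensing Fisher matrices, catches errors in the derivatives and covariances rather than in this simple linear algebra.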

## Hybrid Pℓ(k): general, unified, non-linear matter power spectrum in redshift space

### Hybrid Pℓ(k): general, unified, non-linear matter power spectrum in redshift space

Authors: Journal: Journal of Cosmology and Astroparticle Physics, Issue 09, article id. 001 (2020) Year: 09/2020 Download: Inspire | Arxiv | DOI

## Abstract

Constraints on gravity and cosmology will greatly benefit from performing joint clustering and weak lensing analyses on large-scale structure data sets. Utilising non-linear information coming from small physical scales can greatly enhance these constraints. At the heart of these analyses is the matter power spectrum. Here we employ a simple method, dubbed "Hybrid Pℓ(k)", based on the Gaussian Streaming Model (GSM), to calculate the quasi non-linear redshift space matter power spectrum multipoles. This employs a fully non-linear and theoretically general prescription for the matter power spectrum. We test this approach against comoving Lagrangian acceleration simulation measurements performed in GR, DGP and f(R) gravity and find that our method performs comparably to or better than the TNS redshift space power spectrum model for dark matter. When comparing the redshift space multipoles for halos, we find that the Gaussian approximation of the GSM with a linear bias and a free stochastic term, N, is competitive with the TNS model. Our approach offers many avenues for improvement in accuracy as well as further unification under the halo model.
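The redshift-space multipoles Pℓ(k) are Legendre projections of the anisotropic spectrum P(k, μ). With the linear Kaiser model P(k, μ) = (1 + βμ²)² P_m(k), the first few multipoles have closed forms, which makes a convenient check of the projection machinery; the sketch below illustrates only the multipole definition, not the Hybrid Pℓ(k) prescription itself, and the numbers are illustrative:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def multipole(pk_of_mu, ell, n_mu=32):
    """P_ell = (2 ell + 1)/2 * int_{-1}^{1} P(mu) L_ell(mu) dmu via Gauss-Legendre."""
    mu, w = leggauss(n_mu)
    return 0.5 * (2 * ell + 1) * np.sum(w * pk_of_mu(mu) * Legendre.basis(ell)(mu))

beta, pm = 0.5, 1.0                                  # illustrative values at fixed k
kaiser = lambda mu: (1 + beta * mu ** 2) ** 2 * pm   # linear Kaiser P(k, mu)

p0 = multipole(kaiser, 0)   # analytic: (1 + 2*beta/3 + beta**2/5) * pm
p2 = multipole(kaiser, 2)   # analytic: (4*beta/3 + 4*beta**2/7) * pm
p4 = multipole(kaiser, 4)   # analytic: (8*beta**2/35) * pm
```

The same projection applies unchanged when the Kaiser integrand is replaced by any quasi non-linear P(k, μ), such as the GSM-based prescription of the paper.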