Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim to measure the shapes of galaxies, and PSF effects are an important source of systematic error that must be handled appropriately. This demands high accuracy both in modelling the PSF and in estimating it at galaxy positions.

Aims. The goal of this paper is what is sometimes referred to as non-parametric PSF estimation: estimating the PSF at galaxy positions, starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, our model must first precisely capture the variations of the PSF field over the Field of View (FoV), and then recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously builds a PSF field model spanning the instrument’s entire focal plane. This makes it possible to capture global as well as local PSF features through the use of two complementary models, which enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can lead to difficulties in capturing global ellipticity patterns.

Results. We first test our method on a realistic simulated dataset, comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA), and find that it outperforms both. We then contrast our approach with PSFEx on real data from the Canada-France Imaging Survey (CFIS), which uses the Canada-France-Hawaii Telescope (CFHT). We show that our PSF model is less noisy and achieves a ~22% gain in pixel Root Mean Squared Error (RMSE) with respect to PSFEx.

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.
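As a concrete reading of the pixel RMSE metric quoted in the Results, here is a minimal sketch; the function name and the assumption that model stamps and star stamps share the same flux normalisation are ours, and the paper's exact error definition may differ in detail.

```python
import numpy as np

def pixel_rmse(model_psfs, star_stamps):
    """Pixel Root Mean Squared Error between model PSFs and star images.

    model_psfs, star_stamps: arrays of shape (n_stars, ny, nx),
    assumed to share the same flux normalisation.
    """
    residual = model_psfs - star_stamps
    return np.sqrt(np.mean(residual ** 2))
```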

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn. “Multi-CCD Point Spread Function Modelling”, submitted, 2020.

arXiv, code.

Probabilistic Mapping of Dark Matter by Neural Score Matching


The Dark Matter present in the Large-Scale Structure of the Universe is invisible, but its presence can be inferred through the small gravitational lensing effect it has on the images of distant galaxies. By measuring this lensing effect on a large number of galaxies, it is possible to reconstruct maps of the Dark Matter distribution on the sky. This, however, represents an extremely challenging inverse problem due to missing data and noise-dominated measurements. In this work, we present a novel methodology for addressing such inverse problems by combining elements of Bayesian statistics, analytic physical theory, and a recent class of Deep Generative Models based on Neural Score Matching. This approach makes it possible to do the following: (1) make full use of analytic cosmological theory to constrain the two-point statistics of the solution, (2) learn from cosmological simulations any differences between this analytic prior and full simulations, and (3) obtain samples from the full Bayesian posterior of the problem for robust Uncertainty Quantification. We present an application of this methodology to the first deep-learning-assisted Dark Matter map reconstruction of the Hubble Space Telescope COSMOS field.
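To illustrate how score-based posterior sampling of this kind typically proceeds, here is a minimal annealed Langevin dynamics sketch. The names `score_fn` and `log_lik_grad` are hypothetical stand-ins for a trained score network and the analytic data-likelihood gradient, and the step-size schedule is an assumption for illustration rather than the paper's exact settings.

```python
import numpy as np

def annealed_langevin_sample(score_fn, log_lik_grad, shape, sigmas,
                             n_steps=100, eps=1e-5, rng=None):
    """Draw one approximate posterior sample via annealed Langevin dynamics.

    score_fn(x, sigma): learned prior score, approximating grad_x log p(x)
                        at noise level sigma (a trained neural network).
    log_lik_grad(x):    gradient of the data log-likelihood log p(y | x).
    sigmas:             decreasing sequence of noise levels for annealing.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x = rng.standard_normal(shape)
    for sigma in sigmas:
        step = eps * (sigma / sigmas[-1]) ** 2  # smaller steps at lower noise
        for _ in range(n_steps):
            z = rng.standard_normal(shape)
            posterior_score = score_fn(x, sigma) + log_lik_grad(x)
            x = x + 0.5 * step * posterior_score + np.sqrt(step) * z
    return x
```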

Reference: Benjamin Remy, François Lanusse, Zaccharie Ramzi, Jia Liu, Niall Jeffrey and Jean-Luc Starck. “Probabilistic Mapping of Dark Matter by Neural Score Matching”, Machine Learning and the Physical Sciences Workshop, NeurIPS 2020.

arXiv, code.

Euclid preparation: VII. Forecast validation for Euclid cosmological probes



Abstract

Aims: The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods: We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
Results: We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
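As a schematic illustration of the Fisher formalism underlying these forecasts (not the validated Euclid codes themselves), the core computation reduces to a few lines. The Gaussian-likelihood form and the variable names below are assumptions for illustration.

```python
import numpy as np

def fisher_matrix(dC_dtheta, inv_cov):
    """Fisher matrix for a Gaussian likelihood with parameter-independent
    covariance: F_ij = (dC/dtheta_i)^T Cov^{-1} (dC/dtheta_j).

    dC_dtheta: shape (n_params, n_data), derivatives of the data vector.
    inv_cov:   shape (n_data, n_data), inverse data covariance.
    """
    return dC_dtheta @ inv_cov @ dC_dtheta.T

def marginalised_errors(fisher):
    """1-sigma marginalised uncertainties: sigma_i = sqrt((F^-1)_ii)."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))
```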


Euclid: Reconstruction of weak-lensing mass maps for non-Gaussianity studies


Authors: S. Pires, V. Vandenbussche, V. Kansal, R. Bender, L. Blot, D. Bonino, A. Boucaud, J. Brinchmann, V. Capobianco, J. Carretero, M. Castellano, S. Cavuoti, R. Clédassou, G. Congedo, L. Conversi, L. Corcione, F. Dubath, P. Fosalba, M. Frailis, E. Franceschi, M. Fumana, F. Grupp, F. Hormuth, S. Kermiche, M. Knabenhans, R. Kohley, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, E. Maiorano, O. Marggraf, R. Massey, G. Meylan, C. Padilla, S. Paltani, F. Pasian, M. Poncet, D. Potter, F. Raison, J. Rhodes, M. Roncarelli, R. Saglia, P. Schneider, A. Secroun, S. Serrano, J. Stadel, P. Tallada Crespí, I. Tereno, R. Toledo-Moreo, Y. Wang
Journal: Astronomy and Astrophysics
Year: 2020
Download:

ADS | arXiv 



Abstract

Weak lensing, namely the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak lensing studies have focused on the shear field that can be measured directly from the ellipticity of background galaxies. However, within the context of forthcoming full-sky weak lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While carrying the same information, the lensing signal is more compressed in the convergence maps than in the shear field, simplifying otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects due to holes in the data field, field borders, noise and the fact that the shear is not a direct observable (reduced shear). In this paper, we present the two mass inversion methods that are being included in the official Euclid data processing pipeline: the standard Kaiser & Squires method (KS) and a new mass inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS methodology and includes corrections for mass mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and the third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and moments compared to the KS method. In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
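For reference, the standard KS estimator amounts to a Fourier-space rotation of the shear field. Below is a minimal sketch on a periodic grid; it deliberately ignores the effects (masks, borders, noise, reduced shear) that KS+ is designed to correct.

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Standard Kaiser & Squires shear-to-convergence inversion.

    In Fourier space: kappa_hat = [(k1^2 - k2^2) * g1_hat
                                   + 2 * k1 * k2 * g2_hat] / (k1^2 + k2^2).
    """
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[np.newaxis, :]   # frequencies along x
    k2 = np.fft.fftfreq(ny)[:, np.newaxis]   # frequencies along y
    ksq = k1 ** 2 + k2 ** 2
    ksq[0, 0] = 1.0  # avoid 0/0; the k = 0 (mean) mode is unconstrained
    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    kappa_hat = ((k1 ** 2 - k2 ** 2) * g1_hat + 2.0 * k1 * k2 * g2_hat) / ksq
    kappa_hat[0, 0] = 0.0  # set the unconstrained mean to zero
    return np.real(np.fft.ifft2(kappa_hat))  # real part: E-mode convergence
```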

Euclid: The importance of galaxy clustering and weak lensing cross-correlations within the photometric Euclid survey



Abstract

Context. The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GCph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints, and it is therefore important to quantify its impact for Euclid.
Aims: In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions.
Methods: We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy bias model, which was obtained from the Flagship simulation, and additional photometric-redshift uncertainties; we also elucidate the impact of including the XC terms on constraining the latter.
Results: Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IA by a factor of about four. The XC terms also help in constraining the γ parameter for minimal modified gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used. For IA, we show that the XC terms can help in distinguishing between different models, and that if IA terms are neglected then this can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions.
Conclusions: We find that the XC between GCph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions but, at the same time, requires more precise knowledge of this mean than single-probe analyses do, in order not to degrade the final "figure of merit".
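For reference, the figure of merit mentioned above is conventionally defined (in the usual DETF-style convention; the paper's exact definition may differ) from the marginalised w0-wa block of the inverse Fisher matrix:

```latex
\mathrm{FoM} = \frac{1}{\sqrt{\det\!\left[ (F^{-1})_{w_0 w_a} \right]}}
```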

Figure (XC importance): Ratio of the errors on Δzi without and with the inclusion of XC. Yellow and red lines refer to the pessimistic and optimistic scenarios.


Euclid: The reduced shear approximation and magnification bias for Stage IV cosmic shear experiments


Authors: A.C. Deshpande, ..., S. Casas, M. Kilbinger, V. Pettorino, S. Pires, J.-L. Starck, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2020
DOI:  10.1051/0004-6361/201937323
Download:

ADS | arXiv



Abstract

Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed. In this work, we evaluate the impact of the reduced shear approximation and magnification bias on the information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies, and the dilution of their surface densities, in high-magnification regions. The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward modelling approach. These effects cause significant biases in Ωm, σ8, ns, ΩDE, w0, and wa of −0.53σ, 0.43σ, −0.34σ, 1.36σ, −0.68σ, and 1.21σ, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we develop the formalism for an intrinsic-alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
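For context, the approximation in question replaces the observable reduced shear by the shear itself; expanding to first order in the convergence makes the neglected term explicit:

```latex
g = \frac{\gamma}{1 - \kappa} \simeq \gamma \left( 1 + \kappa \right) + \mathcal{O}(\kappa^2)
```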

Euclid preparation: VI. Verifying the Performance of Cosmic Shear Experiments


Authors: Euclid Collaboration, P. Paykari, ..., S. Farrens, M. Kilbinger, V. Pettorino, S. Pires, J.-L. Starck, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2020
DOI:  10.1051/0004-6361/201936980
Download:

ADS | arXiv



Abstract

Our aim is to quantify the impact of systematic effects on the inference of cosmological parameters from cosmic shear. We present an end-to-end approach that introduces sources of bias in a modelled weak lensing survey on a galaxy-by-galaxy level. Residual biases are propagated through a pipeline from galaxy properties (one end) through to cosmic shear power spectra and cosmological parameter estimates (the other end), to quantify how imperfect knowledge of the pipeline changes the maximum likelihood values of dark energy parameters. We quantify the impact of an imperfect correction for charge transfer inefficiency (CTI) and modelling uncertainties of the point spread function (PSF) for Euclid, and find that the biases introduced can be corrected to acceptable levels.
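A standard way to express this kind of propagation (a textbook Fisher-bias formula, not necessarily the paper's exact estimator) gives the shift in parameter θi induced by a residual systematic ΔCℓ in the power spectra:

```latex
b(\theta_i) \simeq \sum_j (F^{-1})_{ij} \sum_{\ell}
  \frac{\partial C_\ell}{\partial \theta_j}\, \mathrm{Cov}^{-1}_\ell\, \Delta C_\ell
```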

Constraining neutrino masses with weak-lensing starlet peak counts


Massive neutrinos influence the background evolution of the Universe as well as the growth of structure. Being able to model this effect and constrain the sum of their masses is one of the key challenges in modern cosmology. Weak-lensing cosmological constraints will also soon reach higher levels of precision with next-generation surveys like LSST, WFIRST and Euclid. In this context, we use the MassiveNuS simulations to derive constraints on the sum of neutrino masses Mν, the present-day total matter density Ωm, and the primordial power spectrum normalization As in a tomographic setting. We measure the lensing power spectrum as a second-order statistic, along with peak counts as higher-order statistics, on lensing convergence maps generated from the simulations. We investigate the impact of multi-scale filtering approaches on cosmological parameters by employing a starlet (wavelet) filter and a concatenation of Gaussian filters. In both cases peak counts perform better than the power spectrum on the set of parameters [Mν, Ωm, As], by 63%, 40% and 72% respectively when using a starlet filter, and by 70%, 40% and 77% when using a multi-scale Gaussian. More importantly, we show that when using a multi-scale approach, joining power spectrum and peaks does not add any relevant information over considering just the peaks alone. While both multi-scale filters behave similarly, we find that with the starlet filter the majority of the information in the data covariance matrix is encoded in the diagonal elements; this can be an advantage when inverting the matrix, speeding up the numerical implementation. For the starlet case, we further identify the minimum resolution required to obtain constraints comparable to those achievable with the full wavelet decomposition, and we show that the information contained in the coarse-scale map cannot be neglected.
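To make the peak-count data vector concrete, here is a minimal multi-scale Gaussian version. The helper name and the per-scale noise inputs are assumptions for illustration; the starlet variant used in the paper would replace the Gaussian smoothing with a wavelet transform.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def multiscale_peak_counts(kappa, scales, noise_sigmas, snr_bins):
    """Peak-count histograms on a convergence map, one per smoothing scale.

    kappa:        2-D convergence map (signal plus shape noise).
    scales:       Gaussian smoothing scales, in pixels.
    noise_sigmas: noise standard deviation of each *smoothed* map.
    snr_bins:     common S/N bin edges for the histograms.
    """
    histograms = []
    for scale, sigma_n in zip(scales, noise_sigmas):
        snr = gaussian_filter(kappa, scale) / sigma_n
        # a peak is a pixel that equals the maximum of its 3x3 neighbourhood
        peaks = snr[snr == maximum_filter(snr, size=3)]
        histograms.append(np.histogram(peaks, bins=snr_bins)[0])
    return np.concatenate(histograms)  # the multi-scale data vector
```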

Reference: Virginia Ajani, Austin Peel, Valeria Pettorino, Jean-Luc Starck, Zack Li and Jia Liu. “Constraining neutrino masses with weak-lensing starlet peak counts”, 2020. More details in the paper.

The first Deep Learning reconstruction of dark matter maps from weak lensing observational data

DeepMass: The first Deep Learning reconstruction of dark matter maps from weak lensing observational data (DES SV weak lensing data)


This is the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network (CNN) with a U-Net-based architecture on over 3.6 x 10^5 simulated data realisations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. Our DeepMass method is substantially more accurate than existing mass-mapping methods. With a validation set of 8000 simulated DES SV data realisations, compared to Wiener filtering with a fixed power spectrum, the DeepMass method improved the mean-square-error (MSE) by 11 per cent. With N-body simulated MICE mock data, we show that Wiener filtering with the optimal known power spectrum still gives a worse MSE than our generalised method with no input cosmological parameters; we show that the improvement is driven by the non-linear structures in the convergence. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST.
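For comparison, the Wiener-filter baseline mentioned above reduces, in its simplest flat-sky form, to a Fourier-space multiplication. This sketch assumes a noisy convergence map and isotropic signal and noise power already sampled on the 2-D Fourier grid; the actual DES analysis operates on masked shear data.

```python
import numpy as np

def wiener_filter(kappa_obs, signal_power, noise_power):
    """Fourier-space Wiener filter with a fixed power spectrum.

    kappa_obs: noisy 2-D convergence map.
    signal_power, noise_power: signal and noise power sampled on the same
    2-D Fourier grid as np.fft.fft2(kappa_obs).
    """
    weight = signal_power / (signal_power + noise_power)  # S / (S + N)
    return np.real(np.fft.ifft2(weight * np.fft.fft2(kappa_obs)))
```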

Reference: N. Jeffrey, F. Lanusse, O. Lahav and J.-L. Starck. “Learning dark matter map reconstructions from DES SV weak lensing data”, Monthly Notices of the Royal Astronomical Society, in press, 2019.


The impact of baryonic physics and massive neutrinos on weak lensing peak statistics



Authors: M. Fong, M. Choi, V. Catlett, B. Lee, A. Peel, R. Bowyer,  L. J. King, I. G. McCarthy
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

We study the impact of baryonic processes and massive neutrinos on weak lensing peak statistics that can be used to constrain cosmological parameters. We use the BAHAMAS suite of cosmological simulations, which self-consistently include baryonic processes and the effect of massive neutrino free-streaming on the evolution of structure formation. We construct synthetic weak lensing catalogues by ray-tracing through light-cones, and use the aperture mass statistic for the analysis. The peaks detected on the maps reflect the cumulative signal from massive bound objects and general large-scale structure. We present the first study of weak lensing peaks in simulations that include both baryonic physics and massive neutrinos (summed neutrino mass Mν = 0.06, 0.12, 0.24, and 0.48 eV assuming normal hierarchy), so that the uncertainty due to physics beyond the gravity of dark matter can be factored into constraints on cosmological models. Assuming a fiducial model of baryonic physics, we also investigate the correlation between peaks and massive haloes, over a range of summed neutrino mass values. As higher neutrino mass tends to suppress the formation of massive structures in the Universe, the halo mass function and lensing peak counts are modified as a function of Mν. Over most of the S/N range, the impact of fiducial baryonic physics is greater (smaller) than that of neutrinos for the 0.06 and 0.12 (0.24 and 0.48) eV models. Both baryonic physics and massive neutrinos should be accounted for when deriving cosmological parameters from weak lensing observations.
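For reference, the aperture mass statistic used here is conventionally defined as a weighted integral of the tangential shear around each position (the standard definition; the paper's specific filter choice may differ):

```latex
M_{\mathrm{ap}}(\boldsymbol{\theta}_0) = \int \mathrm{d}^2\theta\,
  Q\!\left( |\boldsymbol{\theta} - \boldsymbol{\theta}_0| \right)\,
  \gamma_{\mathrm{t}}(\boldsymbol{\theta}; \boldsymbol{\theta}_0)
```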