Euclid: impact of nonlinear prescriptions on cosmological parameter estimation from weak lensing cosmic shear



Abstract

Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded in the small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different nonlinear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly nonlinear regime, with both the Hubble constant, H0, and the clustering amplitude, σ8, affected the most. Improvements in the modelling of nonlinear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the nonlinear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for nonlinear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.
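As a schematic illustration of how such corrections enter the modelling (the functional form, parameter values, and names below are placeholders for illustration only, not any of the specific recipes compared in the paper), baryonic feedback is commonly treated as a multiplicative correction applied to a dark-matter-only nonlinear power spectrum:

    import numpy as np

    def baryon_boost(k, k_damp=10.0, amp=0.2):
        """Toy multiplicative correction B(k): suppression of power around the
        scale k_damp [h/Mpc]. Placeholder form, for illustration only."""
        return 1.0 - amp * (k / k_damp) ** 2 / (1.0 + (k / k_damp) ** 2)

    def corrected_power(k, p_nl_dmo):
        """Apply the toy baryon correction to a dark-matter-only nonlinear P(k)."""
        return baryon_boost(k) * p_nl_dmo

    # Example: suppress a mock nonlinear spectrum on small scales (large k).
    k = np.logspace(-3, 1.5, 200)                  # wavenumbers in h/Mpc
    p_nl_dmo = 1e4 * k / (1.0 + (k / 0.1) ** 3)    # mock P(k), arbitrary units
    p_corrected = corrected_power(k, p_nl_dmo)

Different prescriptions correspond to different choices of the nonlinear P(k) and of the correction B(k, z), which is what drives the parameter shifts discussed above.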

Effect of nonlinear prescriptions

 

Euclid preparation: VII. Forecast validation for Euclid cosmological probes



Abstract

Aims: The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts.
Methods: We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training-set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required.
Results: We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
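As background for readers less familiar with the formalism, the sketch below shows the generic Fisher recipe underlying such forecasts, applied to a toy observable and covariance (an illustrative stand-in, not one of the validated Euclid implementations): the Fisher matrix combines the parameter derivatives of the observables with the inverse data covariance, and marginalised 1σ errors follow from the inverse Fisher matrix.

    import numpy as np

    def fisher_matrix(model, theta0, cov, step=1e-4):
        """Generic Fisher matrix F_ab = dO/dtheta_a . Cov^-1 . dO/dtheta_b, with the
        derivatives of the observable vector taken by central finite differences."""
        theta0 = np.asarray(theta0, dtype=float)
        inv_cov = np.linalg.inv(cov)
        derivs = []
        for a in range(len(theta0)):
            dtheta = np.zeros_like(theta0)
            dtheta[a] = step
            derivs.append((model(theta0 + dtheta) - model(theta0 - dtheta)) / (2 * step))
        derivs = np.array(derivs)              # shape (n_params, n_data)
        return derivs @ inv_cov @ derivs.T     # shape (n_params, n_params)

    # Toy example: a linear "observable" depending on two parameters.
    design = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
    model = lambda theta: design @ theta
    cov = 0.01 * np.eye(3)
    F = fisher_matrix(model, theta0=[0.3, 0.8], cov=cov)
    marginalised_errors = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma, marginalised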

 

Recovery of 21-cm intensity maps with sparse component separation

 

Authors: I.P. Carucci, M.O. Irfan, J. Bobin
Journal: MNRAS
Year: 2020
Download: ADS | arXiv

21 cm intensity mapping has emerged as a promising technique to map the large-scale structure of the Universe. However, the presence of foregrounds with amplitudes orders of magnitude larger than the cosmological signal constitutes a critical challenge. In this work, we test the sparsity-based algorithm Generalised Morphological Component Analysis (GMCA) as a blind component separation technique for this class of experiments. We test the GMCA performance against realistic full-sky mock temperature maps that include, besides astrophysical foregrounds, also a fraction of the polarized part of the signal leaked into the unpolarized one, a very troublesome foreground to subtract, usually referred to as polarization leakage. To our knowledge, this is the first time the removal of such a component is performed with no prior assumption. We assess the success of the cleaning by comparing the true and recovered power spectra, in the angular and radial directions. In the best scenario looked at, GMCA is able to recover the input angular (radial) power spectrum with an average bias of 5% for ℓ > 25 (20-30% for k_∥ ≳ 0.02 h/Mpc), in the presence of polarization leakage. Our results are also robust when up to 40% of the channels are missing, mimicking a Radio Frequency Interference (RFI) flagging of the data. Having quantified the notable effect of polarization leakage on our results, in perspective we advocate the use of more realistic simulations when testing 21 cm intensity mapping capabilities.
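To make the idea of sparsity-based blind component separation concrete, the toy sketch below alternates a soft-thresholded least-squares update of the sources with a least-squares update of the mixing matrix on mock multi-channel data (a deliberately simplified, GMCA-like iteration with a fixed threshold; it is not the GMCA implementation used in this work):

    import numpy as np

    def soft_threshold(x, lam):
        """Elementwise soft thresholding, the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def sparse_bss(X, n_sources, n_iter=50, lam=0.1, seed=0):
        """Toy sparsity-based blind source separation of X (channels x samples).
        Alternates sparse source estimation and mixing-matrix estimation."""
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((X.shape[0], n_sources))
        for _ in range(n_iter):
            S = soft_threshold(np.linalg.pinv(A) @ X, lam)       # sparse source update
            A = X @ np.linalg.pinv(S)                            # mixing-matrix update
            A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)    # normalise columns
        return A, S

    # Mock data: two sparse sources mixed into five frequency channels plus noise.
    rng = np.random.default_rng(1)
    S_true = rng.standard_normal((2, 1000)) * (rng.random((2, 1000)) < 0.05)
    A_true = rng.standard_normal((5, 2))
    X = A_true @ S_true + 0.01 * rng.standard_normal((5, 1000))
    A_est, S_est = sparse_bss(X, n_sources=2)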

Code and demonstrative notebooks are available here, and the data set to reproduce our results is available here.

Semi-supervised dictionary learning with graph regularization and active points

 

Authors: Khanh-Hung Tran, Fred-Maurice Ngole-Mboula, J-L. Starck
Journal: SIAM Journal on Imaging Sciences
Year: 2020
DOI: 10.1137/19M1285469
Download: arXiv


Abstract

Supervised Dictionary Learning has gained much interest in the past decade and has shown significant performance improvements in image classification. However, in general, supervised learning needs a large number of labelled samples per class to achieve an acceptable result. In order to deal with databases that have just a few labelled samples per class, semi-supervised learning, which also exploits unlabelled samples in the training phase, is used. Indeed, unlabelled samples can help to regularize the learning model, yielding an improvement in classification accuracy. In this paper, we propose a new semi-supervised dictionary learning method based on two pillars: on the one hand, we enforce manifold structure preservation from the original data into the sparse code space using Locally Linear Embedding, which can be considered a regularization of the sparse codes; on the other hand, we train a semi-supervised classifier in the sparse code space. We show that our approach provides an improvement over state-of-the-art semi-supervised dictionary learning methods.
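As a minimal illustration of the manifold-preservation ingredient (a generic Locally Linear Embedding weight computation on toy data, not the full method proposed in the paper), each sample is reconstructed from its nearest neighbours with weights constrained to sum to one; the same weights can then regularise the sparse codes S through a penalty of the form ||S - S W||^2:

    import numpy as np

    def lle_weights(X, n_neighbors=5, reg=1e-3):
        """Locally Linear Embedding reconstruction weights for the columns of X
        (features x samples): X[:, i] ~ X @ W[:, i] with sum(W[:, i]) = 1."""
        n = X.shape[1]
        W = np.zeros((n, n))
        for i in range(n):
            dists = np.linalg.norm(X - X[:, [i]], axis=0)
            dists[i] = np.inf
            idx = np.argsort(dists)[:n_neighbors]
            Z = X[:, idx] - X[:, [i]]                 # centred neighbours
            G = Z.T @ Z + reg * np.trace(Z.T @ Z) * np.eye(n_neighbors)
            w = np.linalg.solve(G, np.ones(n_neighbors))
            W[idx, i] = w / w.sum()                   # enforce sum-to-one constraint
        return W

    # Toy usage: weights for samples drawn from a noisy one-dimensional manifold in 3D.
    rng = np.random.default_rng(0)
    t = np.sort(rng.random(50))
    X = np.vstack([np.cos(3 * t), np.sin(3 * t), t]) + 0.01 * rng.standard_normal((3, 50))
    W = lle_weights(X, n_neighbors=5)
    manifold_penalty = np.linalg.norm(X - X @ W) ** 2   # analogous penalty used on sparse codes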

Deep Learning for space-variant deconvolution in galaxy surveys

 

Authors: Florent Sureau, Alexis Lechat, J-L. Starck
Journal: Astronomy and Astrophysics
Year: 2020
DOI: 10.1051/0004-6361/201937039
Download: ADS | arXiv


Abstract

The deconvolution of large survey images with millions of galaxies requires developing a new generation of methods that can take a space-variant point spread function into account. These methods also have to be accurate and fast. We investigate how deep learning might be used to perform this task. We employed a U-net deep neural network architecture to learn parameters that were adapted for galaxy image processing in a supervised setting and studied two deconvolution strategies. The first approach is a post-processing of a mere Tikhonov deconvolution with closed-form solution, and the second approach is an iterative deconvolution framework based on the alternating direction method of multipliers (ADMM). Our numerical results based on GREAT3 simulations with realistic galaxy images and point spread functions show that our two approaches outperform standard techniques that are based on convex optimization, whether assessed in galaxy image reconstruction or shape recovery. The approach based on a Tikhonov deconvolution leads to the most accurate results, except for ellipticity errors at high signal-to-noise ratio. The ADMM approach performs slightly better in this case. Considering that the Tikhonov approach is also more computation-time efficient in processing a large number of galaxies, we recommend this approach in this scenario.
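For reference, the closed-form Tikhonov step that underlies the first strategy can be written in Fourier space as X = conj(H) Y / (|H|^2 + λ); the sketch below applies it to a toy image and Gaussian PSF (identity regularisation and periodic boundaries are assumed here purely for illustration, and the learned post-processing stage is not included):

    import numpy as np

    def tikhonov_deconvolve(observed, psf, lam=1e-2):
        """Closed-form Tikhonov deconvolution with identity regularisation, assuming
        periodic boundaries: X = conj(H) Y / (|H|^2 + lam) in Fourier space."""
        H = np.fft.fft2(np.fft.ifftshift(psf), s=observed.shape)
        Y = np.fft.fft2(observed)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft2(X))

    # Toy example: blur a mock "galaxy" with a Gaussian PSF, add noise, deconvolve.
    n = 64
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    galaxy = np.exp(-(x ** 2 + y ** 2) / (2 * 4.0 ** 2))
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
    psf /= psf.sum()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(galaxy) * np.fft.fft2(np.fft.ifftshift(psf))))
    observed = blurred + 0.01 * np.random.default_rng(0).standard_normal((n, n))
    restored = tikhonov_deconvolve(observed, psf, lam=1e-2)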

In the spirit of reproducible research, the codes will be made freely available on the CosmoStat website (http://www.cosmostat.org). The testing datasets will also be provided to repeat the experiments performed in this paper.

Hybrid Pℓ(k): general, unified, non-linear matter power spectrum in redshift space


 

Authors:

Journal: Journal of Cosmology and Astroparticle Physics, Issue 09, article id. 001 (2020)
Year: 2020
Download: Inspire | arXiv | DOI



Abstract

Constraints on gravity and cosmology will greatly benefit from performing joint clustering and weak lensing analyses on large-scale structure data sets. Utilising non-linear information coming from small physical scales can greatly enhance these constraints. At the heart of these analyses is the matter power spectrum. Here we employ a simple method, dubbed "Hybrid Pℓ(k)", based on the Gaussian Streaming Model (GSM), to calculate the quasi non-linear redshift space matter power spectrum multipoles. This employs a fully non-linear and theoretically general prescription for the matter power spectrum. We test this approach against comoving Lagrangian acceleration simulation measurements performed in GR, DGP and f(R) gravity and find that our method performs comparably to, or better than, the TNS redshift space power spectrum model for dark matter. When comparing the redshift space multipoles for halos, we find that the Gaussian approximation of the GSM with a linear bias and a free stochastic term, N, is competitive to the TNS model. Our approach offers many avenues for improvement in accuracy as well as further unification under the halo model.
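For context, the multipoles referred to above are the standard Legendre moments of the anisotropic redshift-space power spectrum (a textbook definition; normalisation conventions in the paper may differ):

    P_\ell(k) = \frac{2\ell + 1}{2} \int_{-1}^{1} \mathrm{d}\mu \; P^{\rm s}(k, \mu) \, \mathcal{L}_\ell(\mu)

where μ is the cosine of the angle between the wavevector and the line of sight, P^s(k, μ) is the redshift-space power spectrum, and L_ℓ is the Legendre polynomial of order ℓ; ℓ = 0, 2, 4 give the monopole, quadrupole, and hexadecapole typically used in clustering analyses.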

Hybrid Pk

 

Euclid: Forecast constraints on the cosmic distance duality relation with complementary external probes



Abstract

In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance-duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point at the presence of new physics. We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range 0<z<1.6. We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine learning reconstruction using Genetic Algorithms. We find that for parametric methods Euclid can (in combination with external probes) improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three. Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
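For reference, the Etherington relation and a simple way of parametrising deviations from it can be written as below (the constant-ε form is one common choice shown for illustration; the exact parametrisations and redshift binning adopted in the paper may differ):

    d_L(z) = (1+z)^2 \, d_A(z), \qquad \eta(z) \equiv \frac{d_L(z)}{(1+z)^2 \, d_A(z)} = 1 + \epsilon(z)

with ε(z) = ε_0 in the constant case, or ε(z) piecewise constant in redshift bins (amplitudes ε_0, ε_1, ...) in the binned case, so that ε = 0 recovers the standard DDR.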

Distance duality relation
2D contours on Ω_m,0, ε_0, and ε_1, using currently available data for BAO (blue), SnIa (yellow), and the combination of the two (red). These results refer to the constant (top panel) and binned (central and bottom panels) ε(z) cases.

 

PySAP: Python Sparse Data Analysis Package for Multidisciplinary Image Processing

 

Authors: S. Farrens, A. Grigis, L. El Gueddari, Z. Ramzi, Chaithya G. R., S. Starck, B. Sarthou, H. Cherkaoui, P. Ciuciu, J-L. Starck
Journal: Astronomy and Computing
Year: 2020
DOI: 10.1016/j.ascom.2020.100402
Download: ADS | arXiv


Abstract

We present the open-source image processing software package PySAP (Python Sparse data Analysis Package) developed for the COmpressed Sensing for Magnetic resonance Imaging and Cosmology (COSMIC) project. This package provides a set of flexible tools that can be applied to a variety of compressed sensing and image reconstruction problems in various research domains. In particular, PySAP offers fast wavelet transforms and a range of integrated optimisation algorithms. In this paper we present the features available in PySAP and provide practical demonstrations on astrophysical and magnetic resonance imaging data.
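As a self-contained illustration of the kind of sparsity-regularised inverse problem such a package targets (this toy uses plain NumPy/SciPy with an orthogonal DCT as the sparsifying transform; it does not use or reflect the PySAP API), an ISTA iteration solves min_x 0.5 ||M x - y||^2 + λ ||Φ x||_1 for a masked, compressed-sensing-style measurement:

    import numpy as np
    from scipy.fft import dctn, idctn

    def ista_inpaint(y, mask, lam=0.05, n_iter=200):
        """Toy ISTA reconstruction of an image from masked pixels, with sparsity
        enforced in an orthogonal DCT domain (soft thresholding of the coefficients)."""
        x = np.zeros_like(y)
        for _ in range(n_iter):
            grad = mask * (mask * x - y)               # gradient of the data-fidelity term
            coeffs = dctn(x - grad, norm="ortho")      # gradient step (step size 1), then analysis
            coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
            x = idctn(coeffs, norm="ortho")            # synthesis back to image space
        return x

    # Toy usage: recover a smooth image from 40% of its pixels.
    rng = np.random.default_rng(0)
    n = 64
    yy, xx = np.mgrid[0:n, 0:n]
    truth = np.sin(2 * np.pi * xx / n) * np.cos(2 * np.pi * yy / n)
    mask = (rng.random((n, n)) < 0.4).astype(float)
    observed = mask * truth
    recovered = ista_inpaint(observed, mask, lam=0.01)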


Code

PySAP Code


Euclid: The importance of galaxy clustering and weak lensing cross-correlations within the photometric Euclid survey



Abstract

Context. The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GCph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints and is therefore important to quantify their impact for Euclid.
Aims: In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions.
Methods: We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy bias model, which was obtained from the Flagship simulation, as well as additional photometric-redshift uncertainties, and we elucidate the impact of including the XC terms on constraining the latter.
Results: Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IA by a factor of about four. The XC terms also help in constraining the γ parameter for minimal modified gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used. For IA, we show that the XC terms can help in distinguishing between different models, and that if IA terms are neglected then this can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions.
Conclusions: We find that the XC between GCph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions, but, at the same time, it requires more precise knowledge of this mean with respect to single probes in order not to degrade the final "figure of merit".
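To make explicit why the two probes and their cross-correlation carry shared information, the tomographic angular power spectra can be written in the Limber approximation with probe-specific kernels but a common matter power spectrum (a standard expression given for orientation; the kernel definitions and conventions of the paper may differ in detail):

    C_{ij}^{XY}(\ell) = c \int \mathrm{d}z \, \frac{W_i^{X}(z) \, W_j^{Y}(z)}{H(z) \, r^2(z)} \, P_{\delta\delta}\!\left(\frac{\ell + 1/2}{r(z)}, z\right)

where X, Y ∈ {GCph, WL}, W_i^X is the kernel of probe X in tomographic bin i, r(z) is the comoving distance, and the cross-correlation XC corresponds to the terms with X ≠ Y.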

XC importance
Ratio of the errors on Δz_i without and with the inclusion of XC. Yellow and red lines refer to the pessimistic and optimistic scenarios.

 

Euclid: The reduced shear approximation and magnification bias for Stage IV cosmic shear experiments


Authors: A.C. Deshpande, ..., S. Casas, M. Kilbinger, V. Pettorino, S. Pires, J.-L. Starck, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2020
DOI: 10.1051/0004-6361/201937323
Download: ADS | arXiv

 


Abstract

Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed. In this work, we evaluate the impact of the reduced shear approximation and magnification bias, on the information obtained from the angular power spectrum. To first-order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface densities, in high-magnification regions. The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism, and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward modelling approach. These effects cause significant biases in Omega_m, sigma_8, n_s, Omega_DE, w_0, and w_a of -0.53 sigma, 0.43 sigma, -0.34 sigma, 1.36 sigma, -0.68 sigma, and 1.21 sigma, respectively. We then show that these lensing biases interact with another systematic: the intrinsic alignment of galaxies. Accordingly, we develop the formalism for an intrinsic alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.