Euclid: impact of nonlinear prescriptions on cosmological parameter estimation from weak lensing cosmic shear


Abstract

Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded on small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different nonlinear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly nonlinear regime, with both the Hubble constant, H0, and the clustering amplitude, σ8, affected the most. Improvements in the modelling of nonlinear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the nonlinear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for nonlinear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.

Effect of nonlinear prescriptions

 

Hybrid Pℓ(k): general, unified, non-linear matter power spectrum in redshift space


Authors:

Journal:
Journal of Cosmology and Astroparticle Physics, Issue 09, article id. 001 (2020)
Year: 09/2020
Download: Inspire | Arxiv | DOI

Abstract

Constraints on gravity and cosmology will greatly benefit from performing joint clustering and weak lensing analyses on large-scale structure data sets. Utilising non-linear information coming from small physical scales can greatly enhance these constraints. At the heart of these analyses is the matter power spectrum. Here we employ a simple method, dubbed "Hybrid Pl(k)", based on the Gaussian Streaming Model (GSM), to calculate the quasi non-linear redshift space matter power spectrum multipoles. This employs a fully non-linear and theoretically general prescription for the matter power spectrum. We test this approach against comoving Lagrangian acceleration simulation measurements performed in GR, DGP and f(R) gravity and find that our method performs comparably to or better than the TNS redshift space power spectrum model for dark matter. When comparing the redshift space multipoles for halos, we find that the Gaussian approximation of the GSM with a linear bias and a free stochastic term, N, is competitive with the TNS model. Our approach offers many avenues for improvement in accuracy as well as further unification under the halo model.
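The multipoles in question are Legendre projections of the anisotropic spectrum P(k, μ). As a minimal illustration of that projection step (using a toy linear Kaiser-type spectrum with placeholder bias and growth rate, not the paper's GSM prediction):

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Toy anisotropic spectrum: Kaiser form P(k, mu) = (b + f mu^2)^2 P_lin(k).
# The power-law P_lin and the values of b and f are illustrative placeholders.
def p_kmu(k, mu, b=1.0, f=0.5):
    p_lin = k**-1.5                      # stand-in linear spectrum
    return (b + f * mu**2)**2 * p_lin

def multipole(k, ell, n_mu=513):
    """P_ell(k) = (2 ell + 1)/2 * integral_{-1}^{1} P(k, mu) L_ell(mu) dmu."""
    mu = np.linspace(-1.0, 1.0, n_mu)
    coeffs = np.zeros(ell + 1)
    coeffs[ell] = 1.0                    # select the ell-th Legendre polynomial
    integrand = p_kmu(k, mu) * legval(mu, coeffs)
    # trapezoidal rule over mu
    return (2 * ell + 1) / 2.0 * np.sum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(mu))
```

For this Kaiser toy model the monopole and quadrupole reduce to the standard analytic results (b² + 2bf/3 + f²/5 and 4bf/3 + 4f²/7 times P_lin), which makes the numerical projection easy to check.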

Hybrid Pk

 

Constraining neutrino masses with weak-lensing multiscale peak counts

Massive neutrinos influence the background evolution of the Universe as well as the growth of structure. Being able to model this effect and constrain the sum of their masses is one of the key challenges in modern cosmology. Weak-lensing cosmological constraints will also soon reach higher levels of precision with next-generation surveys like LSST, WFIRST and Euclid. In this context, we use the MassiveNus simulations to derive constraints on the sum of neutrino masses Mν, the present-day total matter density Ωm, and the primordial power spectrum normalization As in a tomographic setting. We measure the lensing power spectrum as a second-order statistic along with peak counts as higher-order statistics on lensing convergence maps generated from the simulations. We investigate the impact of multiscale filtering approaches on cosmological parameters by employing a starlet (wavelet) filter and a concatenation of Gaussian filters. In both cases peak counts perform better than the power spectrum on the parameters [Mν, Ωm, As], by 63%, 40% and 72% respectively when using a starlet filter, and by 70%, 40% and 77% when using multiscale Gaussians. More importantly, we show that when using a multiscale approach, joining the power spectrum and peaks does not add any relevant information over considering the peaks alone. While both multiscale filters behave similarly, we find that with the starlet filter the majority of the information in the data covariance matrix is encoded in the diagonal elements; this can be an advantage when inverting the matrix, speeding up the numerical implementation. For the starlet case, we further identify the minimum resolution required to obtain constraints comparable to those achievable with the full wavelet decomposition, and we show that the information contained in the coarse-scale map cannot be neglected.

Reference: Virginia Ajani, Austin Peel, Valeria Pettorino, Jean-Luc Starck, Zack Li, Jia Liu, 2020. More details in the paper
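The multiscale peak-counting statistic described above can be sketched as follows. The Gaussian-noise "map", the smoothing scales, and the peak definition are illustrative stand-ins, not the MassiveNus convergence maps or the paper's starlet filter:

```python
import numpy as np

# Smooth a mock convergence map at several scales and count peaks at each.
def gaussian_smooth(kappa, sigma_pix):
    """FFT convolution with a Gaussian of width sigma_pix pixels."""
    ny, nx = kappa.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    kernel = np.exp(-2.0 * (np.pi * sigma_pix)**2 * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(kappa) * kernel).real

def count_peaks(kappa):
    """A peak is an interior pixel strictly higher than its 8 neighbours."""
    ny, nx = kappa.shape
    centre = kappa[1:-1, 1:-1]
    neighbours = [kappa[1 + dy:ny - 1 + dy, 1 + dx:nx - 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
    return int(np.sum(np.all([centre > n for n in neighbours], axis=0)))

rng = np.random.default_rng(0)
kappa = rng.normal(size=(128, 128))      # noise-only mock map
counts = [count_peaks(gaussian_smooth(kappa, s)) for s in (1, 2, 4)]
```

The vector of counts per scale (here `counts`) is the kind of multiscale data vector that is then compared against the power spectrum in a likelihood analysis; larger smoothing scales naturally yield fewer peaks.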

Beyond self-acceleration: force- and fluid-acceleration

The notion of self-acceleration has been introduced as a convenient way to theoretically distinguish cosmological models in which acceleration is due to modified gravity from those in which it is due to the properties of matter or fields. In this paper we review the concept of self-acceleration as given, for example, by [1], and highlight two problems. First, it applies only to universal couplings; second, it is too narrow, i.e. it excludes models in which the acceleration can be shown to be induced by a genuine modification of gravity, for instance coupled dark energy with a universal coupling, the Hu-Sawicki f(R) model or, in the context of inflation, the Starobinsky model. We then propose two new, more general, concepts in its place: force-acceleration and fluid-acceleration, which are also applicable in the presence of non-universal couplings. We illustrate their concrete application with two examples among the modified gravity classes which are still in agreement with current data, i.e. f(R) models and coupled dark energy.

As noted already, for example, in [35, 36], we further remark that at present non-universal couplings are among the (few) classes of models which survive gravitational wave detection and local constraints (see [12] for a review of models surviving with a universal coupling). This is because, by construction, baryonic interactions are standard and satisfy solar system constraints; furthermore, the speed of gravitational waves in these models is c_T = 1 and therefore in agreement with gravitational wave detection. It has also been noted (see for example [37–39] and the update in [33]) that models in which a non-universal coupling to dark matter particles is considered would also solve the tension in the measurement of the Hubble parameter [40], due to the degeneracy between β and H0 first noted in Ref. [41].

Reference: L. Amendola, V. Pettorino, "Beyond self-acceleration: force- and fluid-acceleration", Physics Letters B, in press, 2020.

Emulators for the nonlinear matter power spectrum beyond ΛCDM


Authors:

Winther, Hans A.; Casas, Santiago; Baldi, Marco; Koyama, Kazuya; Li, Baojiu; Lombriser, Lucas; Zhao, Gong-Bo 

Journal:
Physical Review D, Volume 100, Issue 12, article id. 123540
Year: 12/2019
Download: Inspire | Arxiv


Abstract

Accurate predictions for the nonlinear matter power spectrum are needed to confront theory with observations in current and near-future weak-lensing and galaxy clustering surveys. We propose a computationally cheap method to create an emulator for modified gravity models by utilizing existing emulators for ΛCDM. Using a suite of N-body simulations, we construct a fitting function for the enhancement of both the linear and nonlinear matter power spectrum in the commonly studied Hu-Sawicki f(R) gravity model, valid for wave numbers k ≲ 5-10 h Mpc^-1 and redshifts z ≲ 3. We show that the cosmology dependence of this enhancement is relatively weak, so that our fit, using simulations coming from only one cosmology, can be used to get accurate predictions for other cosmological parameters. We also show that the cosmology dependence can, if needed, be included by using linear theory, approximate N-body simulations (such as comoving Lagrangian acceleration) and semianalytical tools like the halo model. Our final fit can easily be combined with any emulator or semianalytical model for the nonlinear ΛCDM power spectrum to accurately, and quickly, produce a nonlinear power spectrum for this particular modified gravity model. The method we use can be applied to fairly cheaply construct an emulator for other modified gravity models. As an application of our fitting formula, we use it to compute Fisher forecasts for how well galaxy clustering and weak lensing in a Euclid-like survey will constrain modifications of gravity.
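The core idea, multiplying a ΛCDM prediction by a fitted enhancement B(k, z), can be sketched as below. Both the ΛCDM spectrum and the boost here are hypothetical placeholders, not the paper's fitting function or any real emulator's output:

```python
import math

# Sketch of the boost-function approach: P_MG(k, z) = B(k, z) * P_LCDM(k, z).
def p_lcdm(k, z):
    # Stand-in for a call to any LCDM emulator or halofit-style code.
    return 2e4 * k / (1.0 + (k / 0.02)**2.5) / (1.0 + z)**2

def boost_fofr(k, z, fr0=1e-5):
    # Toy enhancement: ~1 on large scales, a few tens of percent on small
    # scales, decreasing with redshift; amplitude loosely tied to |f_R0|.
    amp = 0.25 * math.log10(fr0 / 1e-7) / 2.0
    return 1.0 + amp * k**2 / (k**2 + 0.1**2) / (1.0 + z)

def p_mg(k, z, fr0=1e-5):
    return p_lcdm(k, z) * boost_fofr(k, z, fr0)
```

Because the boost is close to cosmology-independent, the same `boost_fofr` table can in principle be reused on top of ΛCDM spectra computed for different cosmological parameters, which is what makes the method cheap.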

Fitting formula

 

Measuring Gravity at Cosmological Scales


Authors: Luca Amendola, Dario Bettoni, Ana Marta Pinho, Santiago Casas
Journal: Review Paper
Year: 02/2019
Download: Inspire | Arxiv


Abstract

This paper is a pedagogical introduction to models of gravity and how to constrain them through cosmological observations. We focus on the Horndeski scalar-tensor theory and on the quantities that can be measured with a minimum of assumptions. Alternatives or extensions of General Relativity have been proposed ever since its early years. Because of Lovelock's theorem, modifying gravity in four dimensions typically means adding new degrees of freedom. The simplest way is to include a scalar field coupled to the curvature tensor terms. The most general way of doing so without incurring the Ostrogradski instability is the Horndeski Lagrangian and its extensions. Testing gravity therefore means, in its simplest terms, testing the Horndeski Lagrangian. Since local gravity experiments can always be evaded by assuming some screening mechanism, or that baryons are decoupled, or even that the effects of modified gravity are visible only at early times, we need to test gravity with cosmological observations in the late universe (large-scale structure) and in the early universe (cosmic microwave background). In this work we review the basic tools to test gravity at cosmological scales, focusing on model-independent measurements.


 

Future constraints on the gravitational slip with the mass profiles of galaxy clusters


Abstract

The gravitational slip parameter is an important discriminator between large classes of gravity theories at cosmological and astrophysical scales. In this work we use a combination of simulated galaxy cluster mass profiles, inferred from strong+weak lensing analyses and from the dynamics of the cluster member galaxies, to reconstruct the gravitational slip parameter η and predict the accuracy with which it can be constrained with current and future galaxy cluster surveys. Performing a full-likelihood statistical analysis, we show that galaxy cluster observations can constrain η down to the percent level already with a few tens of clusters. We discuss the significance of possible systematics, and show that the cluster masses and the numbers of member galaxies used to reconstruct the dynamical mass profile have only a mild effect on the predicted constraints.
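The underlying estimator can be illustrated with a toy calculation: in the quasi-static limit, lensing responds to the combination (Φ + Ψ)/2 while member-galaxy dynamics respond to Ψ alone, so for each cluster η = Φ/Ψ = 2 M_lens/M_dyn - 1, and η = 1 in GR. The mock cluster masses below are placeholders, not survey data, and the full-likelihood analysis of the paper is replaced by a plain average:

```python
import statistics

# Toy gravitational-slip estimator from paired lensing and dynamical masses.
def slip_from_masses(m_lens, m_dyn):
    """eta = Phi / Psi = 2 * M_lens / M_dyn - 1 (quasi-static limit)."""
    return 2.0 * m_lens / m_dyn - 1.0

# Mock sample of clusters, all consistent with GR (eta = 1) plus scatter.
pairs = [(1.02e15, 1.00e15), (4.9e14, 5.1e14), (2.05e15, 1.95e15)]
etas = [slip_from_masses(m_lens, m_dyn) for m_lens, m_dyn in pairs]
eta_hat = statistics.mean(etas)
```

Even this naive average shows why a few tens of clusters suffice: the mass-ratio scatter averages down quickly, and the real analysis replaces the mean with a full likelihood over the reconstructed profiles.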

Scale-invariant alternatives to general relativity. The inflation–dark-energy connection


Abstract

We discuss the cosmological phenomenology of biscalar-tensor models displaying a maximally symmetric Einstein-frame kinetic sector and constructed on the basis of scale symmetry and volume-preserving diffeomorphisms. These theories contain a single dimensionful parameter Λ_0, associated with the invariance under the aforementioned restricted coordinate transformations, and a massless dilaton field. At large field values these scenarios lead to inflation with no generation of isocurvature perturbations. The corresponding predictions depend only on two dimensionless parameters, which characterize the curvature of the field manifold and the leading-order behavior of the inflationary potential. For Λ_0 = 0 the scale symmetry is unbroken and the dilaton admits only derivative couplings to matter, evading all fifth-force constraints. For Λ_0 ≠ 0 the field acquires a run-away potential that can support a dark-energy-dominated era at late times. We confront a minimalistic realization of this appealing framework with observations using a Markov chain Monte Carlo approach, with likelihoods from present BAO, SNIa and CMB data. A Bayesian model comparison indicates a preference for the considered model over ΛCDM, under certain assumptions for the priors. The impact of possible consistency relations between the early- and late-Universe dynamics that can appear within this setting is discussed with the use of correlation matrices. The results indicate that a precise determination of the inflationary observables and the dark energy equation of state could significantly constrain the model parameters.

Distinguishing standard and modified gravity cosmologies with machine learning


Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten,  C. Giocoli, M. Meneghetti,  M. Baldi
Journal: PRD
Year: 2019
Download: ADS | arXiv


Abstract

We present a convolutional neural network to classify distinct cosmological scenarios based on the statistically similar weak-lensing maps they generate. Modified gravity (MG) models that include massive neutrinos can mimic the standard concordance model (ΛCDM) in terms of Gaussian weak-lensing observables. An inability to distinguish viable models that are based on different physics potentially limits a deeper understanding of the fundamental nature of cosmic acceleration. For a fixed redshift of sources, we demonstrate that a machine learning network trained on simulated convergence maps can discriminate between such models better than conventional higher-order statistics. Results improve further when multiple source redshifts are combined. To accelerate training, we implement a novel data compression strategy that incorporates our prior knowledge of the morphology of typical convergence map features. Our method fully distinguishes ΛCDM from its most similar MG model on noise-free data, and it correctly identifies among the MG models with at least 80% accuracy when using the full redshift information. Adding noise lowers the correct classification rate of all models, but the neural network still significantly outperforms the peak statistics used in a previous analysis.

On the dissection of degenerate cosmologies with machine learning


Authors: J. Merten,  C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionally reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common in the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers: one based on a nearest-neighbour search and one based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to understand better why they perform so well.
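The simpler of the two classifiers, the nearest-neighbour search over summary-statistic feature vectors, can be sketched as follows. The two-dimensional features and the model labels are synthetic stand-ins for the joint descriptor vectors built from the maps:

```python
import math
from collections import Counter

# Minimal k-nearest-neighbour classifier over feature vectors.
def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k training points closest to query (Euclidean distance)."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Synthetic training set: two well-separated model classes in feature space.
train = [((0.1, 0.2), "LCDM"), ((0.0, 0.1), "LCDM"), ((0.2, 0.0), "LCDM"),
         ((1.0, 1.1), "fR"),   ((0.9, 1.0), "fR"),   ((1.1, 0.9), "fR")]
label = knn_predict(train, (0.95, 1.05))
```

When the per-model feature distributions overlap, as they do for degenerate cosmologies, this voting scheme degrades quickly, which is consistent with the text's finding that the neural-network classifier is more robust.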