Measuring Gravity at Cosmological Scales


 

Authors: Luca Amendola, Dario Bettoni, Ana Marta Pinho, Santiago Casas
Journal: Review Paper
Year: 02/2019
Download: Inspire | arXiv


Abstract

This paper is a pedagogical introduction to models of gravity and how to constrain them through cosmological observations. We focus on the Horndeski scalar-tensor theory and on the quantities that can be measured with a minimum of assumptions. Alternatives or extensions of General Relativity have been proposed ever since its early years. Because of Lovelock's theorem, modifying gravity in four dimensions typically means adding new degrees of freedom. The simplest way is to include a scalar field coupled to the curvature tensor terms. The most general way of doing so without incurring the Ostrogradski instability is the Horndeski Lagrangian and its extensions. Testing gravity therefore means, in its simplest terms, testing the Horndeski Lagrangian. Since local gravity experiments can always be evaded by assuming some screening mechanism, or that baryons are decoupled, or even that the effects of modified gravity are visible only at early times, we need to test gravity with cosmological observations in the late universe (large-scale structure) and in the early universe (cosmic microwave background). In this work we review the basic tools to test gravity at cosmological scales, focusing on model-independent measurements.
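For reference, a standard form of the Horndeski action (written here in a common convention with $X=-\partial_\mu\phi\,\partial^\mu\phi/2$ and $G_{iX}=\partial G_i/\partial X$; conventions in the paper may differ) is

\[
S=\int d^4x\,\sqrt{-g}\left[\sum_{i=2}^{5}\mathcal{L}_i+\mathcal{L}_m\right],\qquad
\mathcal{L}_2=G_2(\phi,X),\qquad
\mathcal{L}_3=-G_3(\phi,X)\,\Box\phi,
\]
\[
\mathcal{L}_4=G_4(\phi,X)\,R+G_{4X}\left[(\Box\phi)^2-\nabla_\mu\nabla_\nu\phi\,\nabla^\mu\nabla^\nu\phi\right],
\]
\[
\mathcal{L}_5=G_5(\phi,X)\,G_{\mu\nu}\nabla^\mu\nabla^\nu\phi
-\frac{G_{5X}}{6}\left[(\Box\phi)^3-3\,\Box\phi\,\nabla_\mu\nabla_\nu\phi\,\nabla^\mu\nabla^\nu\phi
+2\,\nabla_\mu\nabla_\nu\phi\,\nabla^\nu\nabla^\lambda\phi\,\nabla_\lambda\nabla^\mu\phi\right],
\]

with General Relativity recovered for $G_4=M_{\rm Pl}^2/2$ and vanishing $G_2$, $G_3$, $G_5$.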

[Figure: log fσ8]

 

Future constraints on the gravitational slip with the mass profiles of galaxy clusters


Abstract

The gravitational slip parameter is an important discriminator between large classes of gravity theories at cosmological and astrophysical scales. In this work we use a combination of simulated galaxy cluster mass profiles, as inferred from strong+weak lensing analyses and from the dynamics of the cluster member galaxies, to reconstruct the gravitational slip parameter η and predict the accuracy with which it can be constrained by current and future galaxy cluster surveys. Performing a full-likelihood statistical analysis, we show that galaxy cluster observations can constrain η down to the percent level already with a few tens of clusters. We discuss the significance of possible systematics, and show that the cluster masses and the numbers of member galaxies used to reconstruct the dynamical mass profile have only a mild effect on the predicted constraints.
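As a reminder of the quantity being reconstructed (in one common convention; signs and the choice of numerator vary in the literature), the slip is the ratio of the two metric potentials of the perturbed Friedmann metric,

\[
ds^2=-(1+2\Psi)\,dt^2+a^2(t)\,(1-2\Phi)\,d\vec{x}^{\,2},\qquad \eta\equiv\frac{\Phi}{\Psi},
\]

with η = 1 in General Relativity in the absence of anisotropic stress. Light deflection (lensing) responds to the combination Φ + Ψ, while the motion of non-relativistic member galaxies responds to Ψ alone, which is why combining lensing and dynamical mass profiles of the same cluster isolates η.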

Scale-invariant alternatives to general relativity. The inflation–dark-energy connection


Abstract

We discuss the cosmological phenomenology of biscalar-tensor models displaying a maximally symmetric Einstein-frame kinetic sector and constructed on the basis of scale symmetry and volume-preserving diffeomorphisms. These theories contain a single dimensionful parameter $\Lambda_0$, associated with the invariance under the aforementioned restricted coordinate transformations, and a massless dilaton field. At large field values these scenarios lead to inflation with no generation of isocurvature perturbations. The corresponding predictions depend only on two dimensionless parameters, which characterize the curvature of the field manifold and the leading-order behavior of the inflationary potential. For $\Lambda_0=0$ the scale symmetry is unbroken and the dilaton admits only derivative couplings to matter, evading all fifth-force constraints. For $\Lambda_0\neq 0$ the field acquires a run-away potential that can support a dark-energy-dominated era at late times. We confront a minimalistic realization of this appealing framework with observations using a Markov Chain Monte Carlo approach, with likelihoods from present BAO, SNIa and CMB data. A Bayesian model comparison indicates a preference for the considered model over $\Lambda$CDM, under certain assumptions for the priors. The impact of possible consistency relations between the early- and late-Universe dynamics that can appear within this setting is discussed with the use of correlation matrices. The results indicate that a precise determination of the inflationary observables and of the dark energy equation of state could significantly constrain the model parameters.
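As an illustration of the statistical approach mentioned in the abstract, here is a minimal MCMC sketch in Python assuming the emcee sampler and a toy linear model with a Gaussian likelihood; the mock data, model and priors are placeholders for illustration, not the paper's actual BAO+SNIa+CMB likelihood.

import numpy as np
import emcee

# Hypothetical mock data: observed values with Gaussian errors.
x = np.linspace(0.1, 1.0, 20)
y_obs = 0.7 + 0.3 * x + np.random.normal(0.0, 0.05, x.size)
sigma = 0.05

def log_prior(theta):
    a, b = theta
    if 0.0 < a < 2.0 and -1.0 < b < 1.0:   # flat priors on the toy parameters
        return 0.0
    return -np.inf

def log_posterior(theta):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    a, b = theta
    model = a + b * x                      # stand-in for a cosmological observable
    return lp - 0.5 * np.sum(((y_obs - model) / sigma) ** 2)

ndim, nwalkers = 2, 32
p0 = np.array([0.7, 0.3]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=True)
samples = sampler.get_chain(discard=500, flat=True)   # posterior samples after burn-in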

Distinguishing standard and modified gravity cosmologies with machine learning


 

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten, C. Giocoli, M. Meneghetti, M. Baldi
Journal: PRD
Year: 2019
Download: ADS | arXiv


Abstract

We present a convolutional neural network to classify distinct cosmological scenarios based on the statistically similar weak-lensing maps they generate. Modified gravity (MG) models that include massive neutrinos can mimic the standard concordance model (ΛCDM) in terms of Gaussian weak-lensing observables. An inability to distinguish viable models that are based on different physics potentially limits a deeper understanding of the fundamental nature of cosmic acceleration. For a fixed redshift of sources, we demonstrate that a machine learning network trained on simulated convergence maps can discriminate between such models better than conventional higher-order statistics. Results improve further when multiple source redshifts are combined. To accelerate training, we implement a novel data compression strategy that incorporates our prior knowledge of the morphology of typical convergence map features. Our method fully distinguishes ΛCDM from its most similar MG model on noise-free data, and it correctly discriminates among the MG models with at least 80% accuracy when using the full redshift information. Adding noise lowers the correct classification rate of all models, but the neural network still significantly outperforms the peak statistics used in a previous analysis.
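A minimal sketch of a convolutional classifier for convergence maps, written here in PyTorch; the architecture, map size and number of classes are illustrative assumptions and not the network described in the paper.

import torch
import torch.nn as nn

class ConvergenceClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Two small convolution/pooling stages acting on single-channel maps
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling followed by a linear layer producing class scores
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):          # x: (batch, 1, npix, npix) convergence maps
        return self.classifier(self.features(x))

model = ConvergenceClassifier(n_classes=4)
logits = model(torch.randn(8, 1, 128, 128))   # forward pass on 8 mock noise maps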

On the dissection of degenerate cosmologies with machine learning


 

Authors: J. Merten, C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensional reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers; one based on a nearest-neighbour search and one that is based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to understand better why they perform so well.
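The two classifier types compared in the abstract can be sketched, under simple assumptions, with scikit-learn as below; the feature vectors and the nine class labels are random placeholders rather than the DUSTGRAIN-pathfinder descriptors.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(900, 20)           # mock feature vectors (e.g. power spectrum + peak counts)
y = np.random.randint(0, 9, 900)      # nine mock model labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Nearest-neighbour classifier on the feature vectors
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
# Fully connected neural-network classifier on the same features
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_tr, y_tr)

print("kNN accuracy:", knn.score(X_te, y_te))
print("MLP accuracy:", mlp.score(X_te, y_te))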

Cosmological evolution in DHOST theories

 

Authors: M. Crisostomi, K. Koyama, D. Langlois, K. Noui and D. A. Steer
Journal:  
Year: 2018
Download: arXiv


Abstract

In the context of Degenerate Higher-Order Scalar-Tensor (DHOST) theories, we study cosmological solutions and their stability properties. In particular, we explicitly illustrate the crucial role of degeneracy by showing how the higher-order homogeneous equations in the physical frame (where matter is minimally coupled) can be recast into a system of equations that do not involve higher-order derivatives. We study the fixed points of the dynamics, finding the conditions for having a de Sitter attractor at late times. Then we consider the coupling to a matter field (described for convenience by a k-essence Lagrangian) and find the conditions to avoid gradient and ghost instabilities at linear order in cosmological perturbations, extending previous work. Finally, we apply these results to a simple subclass of DHOST theories, showing that the de Sitter attractor conditions and the no-ghost and no-gradient-instability conditions (both in the self-accelerating era and in the matter-dominated era) can be compatible.

The road ahead of Horndeski: cosmology of surviving scalar-tensor theories


Abstract

In the context of the effective field theory of dark energy (EFT) we perform agnostic explorations of Horndeski gravity. We choose two parametrizations for the free EFT functions, namely a power law and a dark-energy-density-like behaviour, on a non-trivial Chevallier-Polarski-Linder background. We restrict our analysis to those EFT functions which do not modify the speed of propagation of gravitational waves. Among those, we prove that one specific function cannot be constrained by data, since its contribution to the observables is below the cosmic variance, although we show it has a relevant role in defining the viable parameter space. We place constraints on the parameters of these models combining measurements from present-day cosmological datasets, and we prove that the next generation of galaxy surveys can improve such constraints by one order of magnitude. We then verify the validity of the quasi-static limit within the sound horizon of the dark field by looking at the phenomenological functions μ and Σ, associated respectively with the clustering and lensing potentials. Furthermore, we notice up to 5% deviations in μ and Σ with respect to General Relativity at scales smaller than the Compton one. For the chosen parametrizations and in the quasi-static limit, future constraints on μ and Σ can reach the 1% level and will allow us to discriminate between certain models at more than 3σ, provided the present best-fit values hold.
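For context, in a common convention the phenomenological functions mentioned above modify the Poisson and lensing equations for the metric potentials Φ and Ψ, and the Chevallier-Polarski-Linder background is a two-parameter equation of state (these are standard definitions, not equations quoted from the paper):

\[
k^2\Psi=-4\pi G\,a^2\,\mu(a,k)\,\rho\,\Delta,\qquad
k^2(\Phi+\Psi)=-8\pi G\,a^2\,\Sigma(a,k)\,\rho\,\Delta,\qquad
w(a)=w_0+w_a\,(1-a),
\]

with μ = Σ = 1 recovering General Relativity.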

Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics


 

Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi
Journal: A&A
Year: 2018
Download: ADS | arXiv


Abstract

General relativity (GR) has been well tested up to solar system scales, but it is much less certain that standard gravity remains an accurate description on the largest, that is, cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including by that of the recent direct gravitational wave detections. Degeneracies among the standard model (ΛCDM) and modified gravity (MG) models, as well as among different MG parameters, must be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics in the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We have tested our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We have explored a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic ΛCDM in terms of the 3D power spectrum of matter density fluctuations. We find that depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and ΛCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.
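The higher-order statistics mentioned above can be sketched, under simple assumptions, on a mock convergence map in Python with numpy/scipy; the map, smoothing scale and peak threshold are illustrative placeholders, not the aperture-mass pipeline of the paper.

import numpy as np
from scipy import ndimage

kappa = np.random.normal(0.0, 0.02, (512, 512))        # mock convergence map
kappa_s = ndimage.gaussian_filter(kappa, sigma=2.0)     # smoothed map (assumed scale)

# Third- and fourth-order moments (skewness- and kurtosis-like statistics)
m3 = np.mean(kappa_s**3)
m4 = np.mean(kappa_s**4)

# Peak counts: local maxima above an assumed signal-to-noise threshold
maxima = (kappa_s == ndimage.maximum_filter(kappa_s, size=3))
snr = kappa_s / kappa_s.std()
n_peaks = np.sum(maxima & (snr > 3.0))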

Model-independent reconstruction of the linear anisotropic stress

 

Authors: Ana Marta Pinho, Santiago Casas, Luca Amendola
Journal: Accepted for JCAP
Year: 05/2018
Download: Inspire | arXiv


Abstract

In this work, we use recent data on the Hubble expansion rate H(z), the quantity fσ8(z) from redshift space distortions and the statistic Eg from clustering and lensing observables to constrain in a model-independent way the linear anisotropic stress parameter η. This estimate is free of assumptions about initial conditions, bias, the abundance of dark matter and the background expansion. We denote this observable estimator as ηobs. If ηobs turns out to be different from unity, it would imply either a modification of gravity or a non-perfect fluid form of dark energy clustering at sub-horizon scales. Using three different methods to reconstruct the underlying model from data, we report the value of ηobs at three redshift values, z=0.29,0.58,0.86. Using the method of polynomial regression, we find ηobs=0.57±1.05, ηobs=0.48±0.96, and ηobs=0.11±3.21, respectively. Assuming a constant ηobs in this range, we find ηobs=0.49±0.69. We consider this method as our fiducial result, for reasons clarified in the text. The other two methods give for a constant anisotropic stress ηobs=0.15±0.27 (binning) and ηobs=0.53±0.19 (Gaussian Process). We find that all three estimates are compatible with each other within their 1σ error bars. While the polynomial regression method is compatible with standard gravity, the other two methods are in tension with it.
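Of the three reconstruction methods compared in the abstract, the Gaussian Process step can be sketched with scikit-learn as below; the H(z) points, errors and kernel choice are illustrative assumptions, not the data compilation or settings used in the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

z = np.array([0.07, 0.2, 0.35, 0.55, 0.9, 1.3, 1.75])          # mock redshifts
Hz = np.array([69.0, 72.9, 82.7, 93.2, 116.6, 168.0, 202.0])   # mock H(z) values [km/s/Mpc]
err = 0.05 * Hz                                                 # assumed 5% errors

kernel = ConstantKernel(100.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=err**2, normalize_y=True)
gp.fit(z.reshape(-1, 1), Hz)

z_new = np.linspace(0.0, 2.0, 100).reshape(-1, 1)
H_rec, H_std = gp.predict(z_new, return_std=True)   # reconstructed H(z) and its 1σ band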

Testing (modified) gravity with 3D and tomographic cosmic shear

 

Authors: A. Spurio Mancini, R. Reischke, V. Pettorino, B.M. Schäfer, M. Zumalacárregui
Journal: Submitted to MNRAS
Year: 2018
Download: ADS | arXiv


Abstract

Cosmic shear, the weak gravitational lensing caused by the large-scale structure, is one of the primary probes to test gravity with current and future surveys. There are two main techniques to analyse a cosmic shear survey: a tomographic method, where correlations between the lensing signal in different redshift bins are used to recover redshift information, and a 3D approach, where the full redshift information is carried through the entire analysis. Here we compare the two methods by forecasting cosmological constraints for future surveys like Euclid. We extend the 3D formalism for the first time to theories beyond the standard model, belonging to the Horndeski class. This includes the majority of universally coupled extensions to ΛCDM with one scalar degree of freedom in addition to the metric, which are still in agreement with current observations. Given a fixed background, the evolution of linear perturbations in Horndeski gravity is described by a set of four functions of time only. We model their time evolution assuming proportionality to the dark energy density fraction and place Fisher matrix constraints on the proportionality coefficients. We find that a 3D analysis can constrain Horndeski theories better than a tomographic one, in particular with a decrease in the errors on the Horndeski parameters of the order of 20-30%. This paper shows for the first time a quantitative comparison on an equal footing between Fisher matrix forecasts for both a fully 3D and a tomographic analysis of cosmic shear surveys. The increased sensitivity of the 3D formalism comes from its ability to retain information on the source redshifts throughout the entire analysis.
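A Fisher-matrix forecast of the kind used in the paper can be sketched in a few lines of Python for a generic observable with a Gaussian likelihood and parameter-independent covariance; the toy data vector, model function and fiducial parameters below are assumptions for illustration, not the cosmic-shear spectra of the actual analysis.

import numpy as np

def model(theta, x):
    a, b = theta
    return a * np.exp(-b * x)       # hypothetical stand-in for an observable, e.g. a lensing spectrum

x = np.linspace(0.1, 2.0, 50)
theta_fid = np.array([1.0, 0.5])
cov = np.diag((0.02 * model(theta_fid, x))**2)   # assumed 2% diagonal errors
cov_inv = np.linalg.inv(cov)

# Numerical derivatives of the data vector with respect to each parameter
eps = 1e-4
derivs = []
for i in range(len(theta_fid)):
    dtheta = np.zeros_like(theta_fid)
    dtheta[i] = eps
    derivs.append((model(theta_fid + dtheta, x) - model(theta_fid - dtheta, x)) / (2 * eps))

# Fisher matrix F_ij = d_i^T C^-1 d_j and forecast 1σ marginalized errors
F = np.array([[d_i @ cov_inv @ d_j for d_j in derivs] for d_i in derivs])
errors = np.sqrt(np.diag(np.linalg.inv(F)))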


Summary

A new paper has been posted on the arXiv, led by Alessio Spurio Mancini, PhD student of CosmoStat member Valeria Pettorino, in collaboration with R. Reischke, B.M. Schäfer (Heidelberg) and M. Zumalacárregui (Berkeley LBNL and Paris Saclay IPhT).
The authors investigate the performance of a 3D analysis of cosmic shear measurements versus a tomographic analysis as a probe of Horndeski theories of modified gravity, setting constraints by means of a Fisher matrix analysis on the parameters that describe the evolution of linear perturbations, using the specifications of a future Euclid-like experiment. Constraints are shown both on the modified gravity parameters and on a set of standard cosmological parameters, including the sum of neutrino masses. The analysis is restricted to angular modes ℓ < 1000 and k < 1 h/Mpc to avoid the deeply non-linear regime of structure growth. The main results of the paper are summarized below.

 
  • The signal-to-noise ratios of the 3D and tomographic analyses are very similar.
  • 3D cosmic shear provides tighter constraints than tomography for most cosmological parameters, with both methods showing very similar degeneracies.
  • The gain of 3D versus tomography is particularly significant for the sum of the neutrino masses (a factor of 3). For the Horndeski parameters the gain is of the order of 20-30% in the errors.
  • In Horndeski theories, the braiding and effective Newton coupling parameters (α_B and α_M) are constrained better when the kineticity is larger.
  • We investigated the impact of non-linear scales and introduced an artificial screening scale, which pushes the deviations from General Relativity to zero below its value (a simple illustration of such a prescription is sketched after this list). The gain when including the non-linear signal calls for the development of analytic or semi-analytic prescriptions for the treatment of non-linear scales in ΛCDM and modified gravity.
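As a purely illustrative sketch (an assumed functional form, not the paper's implementation), an artificial screening scale can be modelled by letting an effective coupling interpolate back to its GR value on scales smaller than the screening length:

import numpy as np

def mu_screened(k, mu_lin=1.2, k_s=0.1, n=4):
    # Effective Newton coupling mu(k): tends to mu_lin on large scales (k << k_s)
    # and back to 1 (GR) on small scales (k >> k_s).
    # mu_lin, k_s [h/Mpc] and the sharpness n are hypothetical values.
    return 1.0 + (mu_lin - 1.0) / (1.0 + (k / k_s)**n)

k = np.logspace(-3, 1, 200)   # wavenumbers in h/Mpc
mu = mu_screened(k)           # deviation from GR suppressed for k above the screening scale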