The impact of baryonic physics and massive neutrinos on weak lensing peak statistics

 

Authors: M. Fong, M. Choi, V. Catlett, B. Lee, A. Peel, R. Bowyer,  L. J. King, I. G. McCarthy
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

We study the impact of baryonic processes and massive neutrinos on weak lensing peak statistics that can be used to constrain cosmological parameters. We use the BAHAMAS suite of cosmological simulations, which self-consistently include baryonic processes and the effect of massive neutrino free-streaming on the evolution of structure formation. We construct synthetic weak lensing catalogues by ray-tracing through light-cones, and use the aperture mass statistic for the analysis. The peaks detected on the maps reflect the cumulative signal from massive bound objects and general large-scale structure. We present the first study of weak lensing peaks in simulations that include both baryonic physics and massive neutrinos (summed neutrino mass Mν = 0.06, 0.12, 0.24, and 0.48 eV, assuming a normal hierarchy), so that the uncertainty due to physics beyond the gravity of dark matter can be factored into constraints on cosmological models. Assuming a fiducial model of baryonic physics, we also investigate the correlation between peaks and massive haloes over a range of summed neutrino mass values. Since higher neutrino mass suppresses the formation of massive structures in the Universe, the halo mass function and lensing peak counts are modified as a function of Mν. Over most of the S/N range, the impact of fiducial baryonic physics is greater (less) than that of neutrinos for the 0.06 and 0.12 (0.24 and 0.48) eV models. Both baryonic physics and massive neutrinos should therefore be accounted for when deriving cosmological parameters from weak lensing observations.
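As a rough illustration of peak statistics (not the paper's actual pipeline, which ray-traces through the BAHAMAS light-cones), the sketch below builds a toy convergence map, applies Gaussian smoothing as a stand-in for an aperture-mass filter, and counts local maxima above a S/N threshold. All numbers are invented for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)

# Toy convergence map: Gaussian noise plus three "halo" bumps at known
# positions (amplitudes, widths, and noise level all made up).
n = 128
y, x = np.mgrid[0:n, 0:n]
kappa = rng.normal(0.0, 0.02, (n, n))
for cx, cy in [(32, 32), (96, 64), (64, 100)]:
    kappa += 0.15 * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * 3.0**2))

# Gaussian smoothing as a stand-in for an aperture-mass filter.
smoothed = ndimage.gaussian_filter(kappa, sigma=2.0)

# Crude S/N map: normalize by the standard deviation of the smoothed field.
snr = smoothed / smoothed.std()

# Peaks = local maxima above an S/N threshold of 3.
is_max = smoothed == ndimage.maximum_filter(smoothed, size=5)
peaks = np.argwhere(is_max & (snr > 3.0))   # (row, col) positions
print(len(peaks))
```

The peak counts from maps like this, binned in S/N, are what gets compared across cosmological models.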

Euclid preparation III. Galaxy cluster detection in the wide photometric survey, performance and algorithm selection

 

Authors: Euclid Collaboration, R. Adam, ..., S. Farrens, et al.
Journal: A&A
Year: 2019
Download: ADS | arXiv


Abstract

Galaxy cluster counts in bins of mass and redshift have been shown to be a competitive probe for testing cosmological models. This method requires efficient blind detection of clusters from surveys with a well-known selection function and robust mass estimates. The Euclid wide survey will cover 15,000 deg² of the sky in the optical and near-infrared bands, down to magnitude 24 in the H band. The resulting data will make it possible to detect a large number of galaxy clusters spanning a wide range of masses up to redshift ∼2. This paper presents the final results of the Euclid Cluster Finder Challenge (CFC). The objective of these challenges was to select the cluster detection algorithms that best meet the requirements of the Euclid mission. The final CFC included six independent detection algorithms based on different techniques, such as photometric redshift tomography, optimal filtering, a hierarchical approach, wavelets, and friends-of-friends algorithms. These algorithms were blindly applied to a mock galaxy catalog with representative Euclid-like properties. The relative performance of the algorithms was assessed by matching the resulting detections to known clusters in the simulations. Several matching procedures were tested, making it possible to estimate the associated systematic effects on completeness to <3%. All the tested algorithms are very competitive in terms of performance, with three of them reaching >80% completeness for a mean purity of 80% down to masses of 10¹⁴ M⊙ and up to redshift z = 2. Based on these results, two algorithms were selected for implementation in the Euclid pipeline: the AMICO code, based on matched filtering, and the PZWav code, based on an adaptive wavelet approach.
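A much-simplified sketch of the matched-filtering idea behind a code like AMICO (a 1D toy with invented numbers, not the Euclid implementation): a cluster-shaped profile buried in noise is located by cross-correlating the data with a template of the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D toy: a Gaussian "cluster" profile hidden in noise.
n = 512
x = np.arange(n)

def profile(center, width=4.0):
    return np.exp(-0.5 * ((x - center) / width)**2)

truth_pos = 200
data = 1.0 * profile(truth_pos) + rng.normal(0.0, 0.2, n)

template = profile(n // 2)             # template centered in the window
template -= template.mean()            # zero mean: ignore constant background
template /= np.linalg.norm(template)   # unit norm: scores in noise-sigma units

# Circular cross-correlation via FFT; the score peak locates the cluster.
score = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)), n=n)
best = (int(np.argmax(score)) + n // 2) % n   # undo the template centering
print(best)
```

In a real cluster finder the template also encodes redshift and richness dependence, and detections are thresholded on the score to control purity.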

Measuring Gravity at Cosmological Scales


 

Authors: Luca Amendola, Dario Bettoni, Ana Marta Pinho, Santiago Casas
Journal: Review Paper
Year: 02/2019
Download: Inspire | arXiv


Abstract

This paper is a pedagogical introduction to models of gravity and how to constrain them through cosmological observations. We focus on the Horndeski scalar-tensor theory and on the quantities that can be measured with a minimum of assumptions. Alternatives or extensions of General Relativity have been proposed ever since its early years. Because of Lovelock's theorem, modifying gravity in four dimensions typically means adding new degrees of freedom. The simplest way is to include a scalar field coupled to the curvature tensor terms. The most general way of doing so without incurring the Ostrogradski instability is the Horndeski Lagrangian and its extensions. Testing gravity therefore means, in its simplest terms, testing the Horndeski Lagrangian. Since local gravity experiments can always be evaded by assuming some screening mechanism, or that baryons are decoupled, or even that the effects of modified gravity are visible only at early times, we need to test gravity with cosmological observations in the late universe (large-scale structure) and in the early universe (cosmic microwave background). In this work we review the basic tools to test gravity at cosmological scales, focusing on model-independent measurements.
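For reference, the Horndeski Lagrangian referred to above is the sum of four terms in its standard form, with the scalar kinetic term $X \equiv -\tfrac{1}{2}\nabla_\mu\phi\,\nabla^\mu\phi$:

```latex
\mathcal{L} = \sum_{i=2}^{5}\mathcal{L}_i \,, \qquad
\mathcal{L}_2 = G_2(\phi,X)\,, \qquad
\mathcal{L}_3 = -\,G_3(\phi,X)\,\Box\phi\,,

\mathcal{L}_4 = G_4(\phi,X)\,R
 + G_{4,X}\left[(\Box\phi)^2 - (\nabla_\mu\nabla_\nu\phi)(\nabla^\mu\nabla^\nu\phi)\right],

\mathcal{L}_5 = G_5(\phi,X)\,G_{\mu\nu}\nabla^\mu\nabla^\nu\phi
 - \frac{G_{5,X}}{6}\left[(\Box\phi)^3
 - 3\,\Box\phi\,(\nabla_\mu\nabla_\nu\phi)^2
 + 2\,(\nabla_\mu\nabla_\nu\phi)^3\right].
```

The four free functions $G_{2\ldots5}(\phi,X)$ are what cosmological observations constrain; General Relativity plus a cosmological constant corresponds to $G_4 = M_{\rm Pl}^2/2$ with the other functions trivial.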


Future constraints on the gravitational slip with the mass profiles of galaxy clusters


Abstract

The gravitational slip parameter is an important discriminator between large classes of gravity theories at cosmological and astrophysical scales. In this work we use a combination of simulated galaxy cluster mass profiles, as inferred from strong+weak lensing analyses and from the dynamics of the cluster member galaxies, to reconstruct the gravitational slip parameter η and predict the accuracy with which it can be constrained with current and future galaxy cluster surveys. Performing a full-likelihood statistical analysis, we show that galaxy cluster observations can constrain η down to the percent level with just a few tens of clusters. We discuss the significance of possible systematics, and show that the cluster masses and the numbers of member galaxies used to reconstruct the dynamical mass profile have only a mild effect on the predicted constraints.
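For context, the standard definition being reconstructed here (not restated in the abstract): with scalar perturbations in the conformal Newtonian gauge,

```latex
ds^2 = -\left(1 + 2\Psi\right) dt^2
 + a^2(t)\left(1 - 2\Phi\right) d\vec{x}^{\,2},
\qquad
\eta(a,k) \equiv \frac{\Phi}{\Psi}\,.
```

Member-galaxy dynamics probe Ψ while lensing probes Φ + Ψ, which is why combining the two mass profiles isolates η; in General Relativity with negligible anisotropic stress, η = 1.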

Scale-invariant alternatives to general relativity. The inflation–dark-energy connection


Abstract

We discuss the cosmological phenomenology of biscalar-tensor models displaying a maximally symmetric Einstein-frame kinetic sector and constructed on the basis of scale symmetry and volume-preserving diffeomorphisms. These theories contain a single dimensionful parameter $\Lambda_0$, associated with the invariance under the aforementioned restricted coordinate transformations, and a massless dilaton field. At large field values these scenarios lead to inflation with no generation of isocurvature perturbations. The corresponding predictions depend only on two dimensionless parameters, which characterize the curvature of the field manifold and the leading-order behavior of the inflationary potential. For $\Lambda_0=0$ the scale symmetry is unbroken and the dilaton admits only derivative couplings to matter, evading all fifth-force constraints. For $\Lambda_0\neq 0$ the field acquires a run-away potential that can support a dark-energy-dominated era at late times. We confront a minimalistic realization of this appealing framework with observations using a Markov chain Monte Carlo approach, with likelihoods from present BAO, SNIa, and CMB data. A Bayesian model comparison indicates a preference for the considered model over $\Lambda$CDM, under certain assumptions for the priors. The impact of possible consistency relations between the early- and late-Universe dynamics that can appear within this setting is discussed with the use of correlation matrices. The results indicate that a precise determination of the inflationary observables and the dark energy equation of state could significantly constrain the model parameters.
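The Markov chain Monte Carlo machinery used for the parameter constraints above can be illustrated with a minimal Metropolis-Hastings sketch. This is a generic toy (a made-up 1D Gaussian "posterior", not the paper's likelihood or parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy log-posterior: a Gaussian with mean 2.0 and std 0.5, standing in for
# the posterior of a single cosmological parameter (values are arbitrary).
def log_post(theta):
    return -0.5 * ((theta - 2.0) / 0.5)**2

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.4)   # symmetric random-walk proposal
    # Metropolis acceptance rule: always accept uphill, sometimes downhill.
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

chain = np.array(chain[5000:])            # discard burn-in
print(chain.mean(), chain.std())
```

The retained samples approximate the posterior, so their mean and spread estimate the parameter and its uncertainty; real analyses add multiple chains, convergence diagnostics, and evidence estimates for the Bayesian model comparison.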

Distinguishing standard and modified gravity cosmologies with machine learning


 

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten,  C. Giocoli, M. Meneghetti,  M. Baldi
Journal: PRD
Year: 2019
Download: ADS | arXiv


Abstract

We present a convolutional neural network to classify distinct cosmological scenarios based on the statistically similar weak-lensing maps they generate. Modified gravity (MG) models that include massive neutrinos can mimic the standard concordance model (ΛCDM) in terms of Gaussian weak-lensing observables. An inability to distinguish viable models that are based on different physics potentially limits a deeper understanding of the fundamental nature of cosmic acceleration. For a fixed redshift of sources, we demonstrate that a machine learning network trained on simulated convergence maps can discriminate between such models better than conventional higher-order statistics. Results improve further when multiple source redshifts are combined. To accelerate training, we implement a novel data compression strategy that incorporates our prior knowledge of the morphology of typical convergence map features. Our method fully distinguishes ΛCDM from its most similar MG model on noise-free data, and it correctly discriminates among the MG models with at least 80% accuracy when using the full redshift information. Adding noise lowers the correct classification rate of all models, but the neural network still significantly outperforms the peak statistics used in a previous analysis.
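As a minimal illustration of what a single convolutional layer of such a network does (purely schematic; the paper's architecture, learned filters, and training procedure are not reproduced here), the sketch below slides a small kernel over a toy map and applies a nonlinearity:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)

# Toy "convergence map" of pure noise (illustrative stand-in for real data).
kappa = rng.normal(0.0, 1.0, (64, 64))

# A fixed Laplacian-like kernel that responds to local peaks; in a trained
# CNN these weights would instead be learned by backpropagation.
kernel = np.array([[ 0., -1.,  0.],
                   [-1.,  4., -1.],
                   [ 0., -1.,  0.]])

# Convolve, then apply a ReLU nonlinearity: one conv layer's forward pass.
feature_map = signal.convolve2d(kappa, kernel, mode="valid")
activated = np.maximum(feature_map, 0.0)
print(activated.shape)
```

Stacking many such layers with learned kernels, pooling, and a final classification layer yields the network that separates the cosmological models.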

On the dissection of degenerate cosmologies with machine learning


 

Authors: J. Merten,  C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionality-reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts, and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common in the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers: one based on a nearest-neighbour search and one based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to understand better why they perform so well.
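The nearest-neighbour classification step can be sketched on entirely synthetic feature vectors (illustrative only; the real feature vectors combine power spectra, peak counts, and Minkowski functionals, and the class separation here is made up):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 4-dimensional feature vectors from two "models", separated by an
# arbitrary offset of 1.5 per dimension.
def make_features(center, count):
    return center + rng.normal(0.0, 1.0, (count, 4))

X = np.vstack([make_features(0.0, 200), make_features(1.5, 200)])
y = np.array([0] * 200 + [1] * 200)            # training labels

test = np.vstack([make_features(0.0, 50), make_features(1.5, 50)])
truth = np.array([0] * 50 + [1] * 50)

# 1-NN: label each test vector with the class of its closest training vector.
d = np.linalg.norm(test[:, None, :] - X[None, :, :], axis=-1)
pred = y[d.argmin(axis=1)]
accuracy = (pred == truth).mean()
print(accuracy)
```

The paper's finding is that a fully connected network on the same feature vectors classifies more robustly than this kind of nearest-neighbour search.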

The road ahead of Horndeski: cosmology of surviving scalar-tensor theories


Abstract

In the context of the effective field theory of dark energy (EFT) we perform agnostic explorations of Horndeski gravity. We choose two parametrizations for the free EFT functions, namely a power law and a dark-energy-density-like behaviour, on a non-trivial Chevallier-Polarski-Linder background. We restrict our analysis to those EFT functions which do not modify the speed of propagation of gravitational waves. Among those, we prove that one specific function cannot be constrained by data, since its contribution to the observables is below the cosmic variance, although we show that it has a relevant role in defining the viable parameter space. We place constraints on the parameters of these models by combining measurements from present-day cosmological datasets, and we show that the next generation of galaxy surveys can improve such constraints by one order of magnitude. We then verify the validity of the quasi-static limit within the sound horizon of the dark field by looking at the phenomenological functions μ and Σ, associated with the clustering and lensing potentials, respectively. Furthermore, we notice up to 5% deviations in μ and Σ with respect to General Relativity at scales smaller than the Compton one. For the chosen parametrizations and in the quasi-static limit, future constraints on μ and Σ can reach the 1% level, allowing us to discriminate between certain models at more than 3σ, provided the present best-fit values remain.
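For reference, μ and Σ are the standard phenomenological modifications of the Poisson-like equations for the two metric potentials (conventions for signs and symbol names vary across the literature):

```latex
k^2 \Psi = -4\pi G\, a^2\, \mu(a,k)\, \bar\rho\, \Delta, \qquad
k^2 \left(\Phi + \Psi\right) = -8\pi G\, a^2\, \Sigma(a,k)\, \bar\rho\, \Delta .
```

General Relativity corresponds to μ = Σ = 1. Non-relativistic tracers respond to Ψ and lensing to Φ + Ψ, which is why μ and Σ are tied to clustering and lensing, respectively.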

Measuring Linear and Non-linear Galaxy Bias Using Counts-in-Cells in the Dark Energy Survey Science Verification Data

 

Authors: A. I. Salvador, F. J. Sánchez, A. Pagul et al.
Journal:  
Year: 07/2018
Download: ADS | arXiv


Abstract

Non-linear bias measurements require a high level of control over potential systematic effects in galaxy redshift surveys. Our goal is to demonstrate the viability of using Counts-in-Cells (CiC), a statistical measure of the galaxy distribution, as a competitive method to determine linear and higher-order galaxy bias and to assess clustering systematics. We measure the galaxy bias by comparing the first four moments of the galaxy density distribution with those of the dark matter distribution. We use data from the MICE simulation to evaluate the performance of this method, and subsequently perform measurements on the public Science Verification (SV) data from the Dark Energy Survey (DES). We find that the linear bias obtained with CiC is consistent with measurements of the bias performed using galaxy-galaxy clustering, galaxy-galaxy lensing, CMB lensing, and shear+clustering measurements. Furthermore, we compute the projected (2D) non-linear bias using the expansion $\delta_{g} = \sum_{k=0}^{3} (b_{k}/k!) \delta^{k}$, finding a non-zero value for $b_2$ at the $3\sigma$ level. We also check a non-local bias model and show that the linear bias measurements are robust to the addition of new parameters. We compare our 2D results to the 3D prediction and find compatibility in the large-scale regime ($>30$ Mpc $h^{-1}$).
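A toy version of the moment-based idea behind the CiC bias measurement, with made-up bias values and a Gaussian matter field instead of simulation or survey data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented "true" bias values for this toy (not the paper's measurements).
b1, b2 = 1.4, 0.3

# Matter overdensity per cell: Gaussian with small rms, as in large cells.
delta = rng.normal(0.0, 0.2, 1_000_000)

# Galaxy overdensity from the bias expansion delta_g = b1*delta + (b2/2)*delta^2,
# with the quadratic term mean-subtracted so <delta_g> = 0.
delta_g = b1 * delta + 0.5 * b2 * (delta**2 - delta.var())

# Linear bias from the second moments: var(delta_g) ~ b1^2 var(delta)
# when fluctuations are small (the quadratic term contributes little here).
b1_hat = np.sqrt(delta_g.var() / delta.var())
print(b1_hat)
```

The paper extends this to the first four moments, which is what allows $b_2$ (and checks of non-local bias) to be extracted as well as $b_1$.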

Cosmological parameters from weak cosmological lensing

 

Authors: M. Kilbinger
Journal:  
Year: 07/2018
Download: ADS | arXiv


Abstract

In this manuscript of the habilitation à diriger des recherches (HDR), the author presents some of his work over the last ten years. The main topic of this thesis is cosmic shear, the distortion of images of distant galaxies due to weak gravitational lensing by the large-scale structure in the Universe. Cosmic shear has become a powerful probe into the nature of dark matter and the origin of the current accelerated expansion of the Universe. Over the last years, cosmic shear has evolved into a reliable and robust cosmological probe, providing measurements of the expansion history of the Universe and the growth of its structure.
I review the principles of weak gravitational lensing and show how cosmic shear is interpreted in a cosmological context. Then I give an overview of weak-lensing measurements, and present observational results from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), as well as their implications for cosmology. I conclude with an outlook on the various future surveys and missions for which cosmic shear is one of the main science drivers, and discuss promising new weak cosmological lensing techniques for future observations.