# Author: Austin Peel

## The impact of baryonic physics and massive neutrinos on weak lensing peak statistics

## Abstract

We study the impact of baryonic processes and massive neutrinos on weak lensing peak statistics that can be used to constrain cosmological parameters. We use the BAHAMAS suite of cosmological simulations, which self-consistently include baryonic processes and the effect of massive neutrino free-streaming on the evolution of structure formation. We construct synthetic weak lensing catalogues by ray-tracing through light-cones, and use the aperture mass statistic for the analysis. The peaks detected on the maps reflect the cumulative signal from massive bound objects and general large-scale structure. We present the first study of weak lensing peaks in simulations that include both baryonic physics and massive neutrinos (summed neutrino mass Mν = 0.06, 0.12, 0.24, and 0.48 eV assuming normal hierarchy), so that the uncertainty due to physics beyond the gravity of dark matter can be factored into constraints on cosmological models. Assuming a fiducial model of baryonic physics, we also investigate the correlation between peaks and massive haloes over a range of summed neutrino mass values. As higher neutrino mass tends to suppress the formation of massive structures in the Universe, the halo mass function and lensing peak counts are modified as a function of Mν. Over most of the S/N range, the impact of fiducial baryonic physics is greater (less) than that of neutrinos for the 0.06 and 0.12 (0.24 and 0.48) eV models. Both baryonic physics and massive neutrinos should be accounted for when deriving cosmological parameters from weak lensing observations.
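As a concrete illustration of the peak-counting step described above, here is a minimal sketch (not the paper's pipeline) that identifies local maxima above a signal-to-noise threshold in a toy aperture-mass S/N map:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(snr_map, threshold=1.0):
    """Identify local maxima above a S/N threshold.

    A pixel counts as a peak if it exceeds all eight of its
    neighbours and lies above the given threshold.
    """
    local_max = maximum_filter(snr_map, size=3) == snr_map
    mask = local_max & (snr_map > threshold)
    ys, xs = np.nonzero(mask)
    return list(zip(ys, xs, snr_map[ys, xs]))

# Toy map: Gaussian noise with a single bright peak injected at (5, 5)
rng = np.random.default_rng(0)
snr = rng.normal(0.0, 0.1, size=(16, 16))
snr[5, 5] = 4.0
peaks = find_peaks(snr, threshold=3.0)
```

In a real analysis the S/N map would come from aperture-mass filtering of a shear catalogue; only the local-maximum bookkeeping is shown here.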

## Distinguishing standard and modified gravity cosmologies with machine learning

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten, C. Giocoli, M. Meneghetti, M. Baldi

Journal: PRD

Year: 2019

Download: ADS | arXiv

## Abstract

We present a convolutional neural network to classify distinct cosmological scenarios based on the statistically similar weak-lensing maps they generate. Modified gravity (MG) models that include massive neutrinos can mimic the standard concordance model (ΛCDM) in terms of Gaussian weak-lensing observables. An inability to distinguish viable models that are based on different physics potentially limits a deeper understanding of the fundamental nature of cosmic acceleration. For a fixed redshift of sources, we demonstrate that a machine learning network trained on simulated convergence maps can discriminate between such models better than conventional higher-order statistics. Results improve further when multiple source redshifts are combined. To accelerate training, we implement a novel data compression strategy that incorporates our prior knowledge of the morphology of typical convergence map features. Our method fully distinguishes ΛCDM from its most similar MG model on noise-free data, and it correctly identifies among the MG models with at least 80% accuracy when using the full redshift information. Adding noise lowers the correct classification rate of all models, but the neural network still significantly outperforms the peak statistics used in a previous analysis.

## On the dissection of degenerate cosmologies with machine learning

Authors: J. Merten, C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino

Journal: MNRAS

Year: 2019

Download: ADS | arXiv

## Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensional reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers; one based on a nearest-neighbour search and one that is based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to understand better why they perform so well.
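The two classifier back-ends compared above (a nearest-neighbour search versus a fully connected network) can be sketched with scikit-learn on toy feature vectors; the data, layer sizes, and hyperparameters here are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for per-map feature vectors (e.g. power spectrum, peak
# counts, and Minkowski functionals concatenated); labels index models.
rng = np.random.default_rng(42)
n_per_class, n_features, n_classes = 100, 20, 4
X = np.vstack([rng.normal(loc=0.5 * c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Nearest-neighbour classifier on the feature vectors
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
# Small fully connected network on the same features
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

acc_knn = knn.score(X_te, y_te)
acc_mlp = mlp.score(X_te, y_te)
```

On real convergence-map descriptors the relative performance of the two classifiers is what the paper measures; this sketch only shows the mechanics.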

## MGCNN

Authors: F. Lalande, A. Peel

Language: Python 3

Download: mgcnn.tar.gz

Description: A Convolutional Neural Network (CNN) architecture for classifying standard and modified gravity (MG) cosmological models based on the weak-lensing convergence maps they produce.

## Introduction

This repository contains the code and data used to produce the results in A. Peel et al. (2018), arXiv:1810.11030.

The Convolutional Neural Network (CNN) is implemented in Keras with TensorFlow as the backend. Since the DUSTGRAIN-*pathfinder* simulations are not yet public, we are not able to include the original convergence maps obtained from the various cosmological runs. We do provide, however, the wavelet PDF datacubes derived for the four models as described in the paper: one standard ΛCDM model and three modified gravity f(R) models.
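For orientation, a minimal Keras/TensorFlow sketch of a CNN classifying maps among four cosmological models; the input shape and layer sizes are assumptions for illustration, not the exact architecture shipped in this repository:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative CNN for a four-way classification (one LCDM + three f(R)
# models). Input shape and filter counts are placeholder assumptions.
n_classes = 4
model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One forward pass on a random batch just to check the output shape
probs = model.predict(np.random.rand(2, 64, 64, 1), verbose=0)
```

Training would call `model.fit` on the provided datacubes with integer model labels; see `train_mgcnn.py` for the actual implementation.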

## Requirements

- Python 3
- numpy
- Keras with TensorFlow as backend
- scikit-learn

## Usage

```
$ python3 train_mgcnn.py -n0
```

The three options for the noise flag "-n" are (0, 1, 2), which correspond to noise standard deviations of sigma = (0, 0.35, 0.70) added to the original convergence maps. Additional options are "-i" and "-e" for the number of training iterations and epochs, respectively.
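The mapping from the noise flag to the added Gaussian noise can be sketched as follows (a standalone illustration of the convention above, not the code from train_mgcnn.py):

```python
import numpy as np

# Noise flag values and their corresponding standard deviations
NOISE_SIGMA = {0: 0.0, 1: 0.35, 2: 0.70}

def add_noise(kappa_map, noise_flag, seed=None):
    """Add zero-mean Gaussian noise of the chosen sigma to a convergence map."""
    sigma = NOISE_SIGMA[noise_flag]
    if sigma == 0.0:
        return kappa_map.copy()
    rng = np.random.default_rng(seed)
    return kappa_map + rng.normal(0.0, sigma, size=kappa_map.shape)

kappa = np.zeros((128, 128))      # toy noise-free convergence map
noisy = add_noise(kappa, noise_flag=2, seed=1)
```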

Confusion matrices and evaluation metrics (loss function and validation accuracy) are saved as numpy arrays in the generated output/ directory after each iteration.
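The saved confusion matrices can be summarized into overall and per-class accuracy; a minimal sketch with illustrative counts (the actual file names written to output/ are not assumed here):

```python
import numpy as np

def summarize_confusion(cm):
    """Overall and per-class accuracy from a square confusion matrix
    (rows = true model, columns = predicted model)."""
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()
    per_class = np.diag(cm) / cm.sum(axis=1)
    return overall, per_class

# Example 4x4 confusion matrix; the counts are purely illustrative
cm = np.array([[45,  3,  1,  1],
               [ 4, 40,  4,  2],
               [ 2,  5, 38,  5],
               [ 1,  2,  6, 41]])
overall, per_class = summarize_confusion(cm)
```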

## Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics

Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi

Journal: A&A

Year: 2018

Download: ADS | arXiv

## Abstract

General relativity (GR) has been well tested up to solar system scales, but it is much less certain that standard gravity remains an accurate description on the largest, that is, cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including by the recent direct gravitational-wave detections. Degeneracies between the standard model (ΛCDM) and modified gravity (MG) models, as well as among different MG parameters, must be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics in the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We have tested our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We have explored a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic ΛCDM in terms of the 3D power spectrum of matter density fluctuations. We find that depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and ΛCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.
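The third- and fourth-order moments mentioned above are the skewness and kurtosis of the aperture-mass pixel distribution; a minimal sketch on a toy Gaussian map (real maps would come from the f(R) and ΛCDM simulations):

```python
import numpy as np

def map_moments(ap_mass_map):
    """Second- to fourth-order moments of an aperture-mass map:
    variance, skewness, and kurtosis of the pixel distribution."""
    x = ap_mass_map.ravel()
    mu, sigma = x.mean(), x.std()
    variance = sigma ** 2
    skewness = np.mean(((x - mu) / sigma) ** 3)
    kurtosis = np.mean(((x - mu) / sigma) ** 4)
    return variance, skewness, kurtosis

# A pure Gaussian map has skewness ~ 0 and kurtosis ~ 3; non-Gaussian
# structure in real lensing maps shifts these moments.
rng = np.random.default_rng(3)
gauss_map = rng.normal(0.0, 1.0, size=(256, 256))
var, skew, kurt = map_moments(gauss_map)
```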

## Sparse reconstruction of the merging A520 cluster system

Authors: A. Peel, F. Lanusse, J.-L. Starck

Journal: ApJ

Year: 2017

Download: ADS | arXiv

## Abstract

Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprising concentration of dark mass with a high mass-to-light ratio, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.

## Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey

Authors: A. Peel, C.-A. Lin, F. Lanusse, A. Leonard, J.-L. Starck, M. Kilbinger

Journal: A&A

Year: 2017

Download: ADS | arXiv

## Abstract

Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w0. In particular, we study how the Camelus model, a fast stochastic algorithm for predicting peaks, can be applied to such large surveys. We measure the peak count abundance in a mock shear catalogue of ~5,000 sq. deg. using a multiscale mass map filtering technique. We then constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation (ABC). We find that peak statistics yield a tight but significantly biased constraint in the σ8-Ωm plane, indicating the need to better understand and control the model's systematics. We calibrate the model to remove the bias and compare results to those from the two-point correlation functions (2PCF) measured on the same field. In this case, we find the derived parameter Σ8 = σ8(Ωm/0.27)^α = 0.76^{+0.02}_{−0.03} with α = 0.65 for peaks, while for the 2PCF the value is Σ8 = 0.76^{+0.02}_{−0.01} with α = 0.70. We therefore see comparable constraining power between the two probes, and the offset of their σ8-Ωm degeneracy directions suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w0 cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and 2PCF.
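The derived parameter Σ8 is a simple function of σ8 and Ωm; a minimal sketch, with the pivot value 0.27 and the exponent α taken from the abstract above and purely illustrative parameter values:

```python
import numpy as np

def sigma_8_derived(sigma8, omega_m, alpha):
    """Derived parameter Sigma_8 = sigma_8 * (Omega_m / 0.27)^alpha."""
    return sigma8 * (omega_m / 0.27) ** alpha

# At the pivot Omega_m = 0.27, Sigma_8 reduces to sigma_8 for any alpha
val_pivot = sigma_8_derived(0.85, 0.27, alpha=0.65)

# Away from the pivot the exponent tilts the degeneracy direction
val_off = sigma_8_derived(0.80, 0.30, alpha=0.65)
```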

## Effect of inhomogeneities on high precision measurements of cosmological distances

## Abstract

We study effects of inhomogeneities on distance measures in an exact relativistic Swiss-cheese model of the Universe, focusing on the distance modulus. The model has ΛCDM background dynamics, and the "holes" are nonsymmetric structures described by the Szekeres metric. The Szekeres exact solution of Einstein's equations, which is inhomogeneous and anisotropic, allows us to capture potentially relevant effects on light propagation due to nontrivial evolution of structures in an exact framework. Light beams traversing a single Szekeres structure in different ways can experience either magnification or demagnification, depending on the particular path. Consistent with expectations, we find a shift in the distance modulus μ to distant sources due to demagnification when the light beam travels primarily through the void regions of our model. Conversely, beams are magnified when they propagate mainly through the overdense regions of the structures, and we explore a small additional effect due to time evolution of the structures. We then study the probability distributions of Δμ = μ_ΛCDM − μ_SC for sources at different redshifts in various Swiss-cheese constructions, where the light beams travel through a large number of randomly oriented Szekeres holes with random impact parameters. We find for Δμ the dispersions 0.004 ≤ σ_Δμ ≤ 0.008 mag for sources with redshifts 1.0 ≤ z ≤ 1.5, which are smaller than the intrinsic dispersion of, for example, magnitudes of type Ia supernovae. The shapes of the distributions we obtain for our Swiss-cheese constructions are peculiar in the sense that they are not consistently skewed toward the demagnification side, as they are in analyses of lensing in cosmological simulations. Depending on the source redshift, the distributions for our models can be skewed to either the demagnification or the magnification side, reflecting a limitation of these constructions. This could be the result of requiring the continuity of Einstein's equations throughout the overall spacetime patchwork, which imposes the condition that compensating overdense shells must accompany the underdense void regions in the holes. The possibility to explore other uses of these constructions that could circumvent this limitation and lead to different statistics remains open.
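The distance modulus underlying Δμ is μ = 5 log10(d_L / 10 pc); a minimal sketch with illustrative luminosity distances (not values from the paper):

```python
import numpy as np

def distance_modulus(d_lum_mpc):
    """Distance modulus mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return 5.0 * np.log10(d_lum_mpc) + 25.0

# Delta_mu compares the background (e.g. LCDM) and perturbed
# (Swiss-cheese) luminosity distances to the same source; a slightly
# larger d_L in the inhomogeneous model (demagnification) makes the
# source appear fainter. Distances here are purely illustrative.
mu_bg = distance_modulus(6600.0)
mu_sc = distance_modulus(6610.0)
delta_mu = mu_bg - mu_sc
```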

## The effects of structure anisotropy on lensing observables in an exact general relativistic setting for precision cosmology

## Abstract

The study of relativistic, higher order, and nonlinear effects has become necessary in recent years in the pursuit of precision cosmology. We develop and apply here a framework to study gravitational lensing in exact models in general relativity that are not restricted to homogeneity and isotropy, and where full nonlinearity and relativistic effects are thus naturally included. We apply the framework to a specific, anisotropic galaxy cluster model which is based on a modified NFW halo density profile and described by the Szekeres metric. We examine the effects of increasing levels of anisotropy in the galaxy cluster on lensing observables like the convergence and shear for various lensing geometries, finding a strong nonlinear response in both the convergence and shear for rays passing through anisotropic regions of the cluster. Deviations from the expected values in a spherically symmetric structure are asymmetric with respect to path direction and thus will persist as a statistical effect when averaged over some ensemble of such clusters. The resulting relative difference in various geometries can be as large as approximately 2%, 8%, and 24% in the measure of convergence (1-κ) for levels of anisotropy of 5%, 10%, and 15%, respectively, as a fraction of total cluster mass. For the total magnitude of shear, the relative difference can grow near the center of the structure to be as large as 15%, 32%, and 44% for the same levels of anisotropy, averaged over the two extreme geometries. The convergence is impacted most strongly for rays which pass in directions along the axis of maximum dipole anisotropy in the structure, while the shear is most strongly impacted for rays which pass in directions orthogonal to this axis, as expected. The rich features found in the lensing signal due to anisotropic substructure are nearly entirely lost when one treats the cluster in the traditional FLRW lensing framework. These effects due to anisotropic structures are thus likely to impact lensing measurements and must be fully examined in an era of precision cosmology.