Measuring Linear and Non-linear Galaxy Bias Using Counts-in-Cells in the Dark Energy Survey Science Verification Data

 

Authors: A. I. Salvador, F. J. Sánchez, A. Pagul et al.
Journal:  
Year: 07/2018
Download: ADS | arXiv


Abstract

Non-linear bias measurements require a great level of control of potential systematic effects in galaxy redshift surveys. Our goal is to demonstrate the viability of using Counts-in-Cells (CiC), a statistical measure of the galaxy distribution, as a competitive method to determine linear and higher-order galaxy bias and to assess clustering systematics. We measure the galaxy bias by comparing the first four moments of the galaxy density distribution with those of the dark matter distribution. We use data from the MICE simulation to evaluate the performance of this method, and subsequently perform measurements on the public Science Verification (SV) data from the Dark Energy Survey (DES). We find that the linear bias obtained with CiC is consistent with bias measurements from galaxy-galaxy clustering, galaxy-galaxy lensing, CMB lensing, and shear+clustering analyses. Furthermore, we compute the projected (2D) non-linear bias using the expansion $\delta_{g} = \sum_{k=0}^{3} (b_{k}/k!) \delta^{k}$, finding a non-zero value for $b_2$ at the $3\sigma$ level. We also check a non-local bias model and show that the linear bias measurements are robust to the addition of new parameters. We compare our 2D results to the 3D prediction and find compatibility in the large-scale regime ($>30$ Mpc $h^{-1}$).
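As an illustration of the CiC idea, the sketch below estimates the linear bias from the ratio of the second moments (variances) of the galaxy and matter density contrasts measured in cells, with a Poisson shot-noise correction. It is a minimal toy, not the paper's pipeline; the cell counts, mean density and the `linear_bias_from_cic` helper are assumptions made for the example.

```python
import numpy as np

def linear_bias_from_cic(galaxy_counts, matter_delta):
    """Estimate the linear bias from counts-in-cells second moments,
    b1 ~ sqrt(<delta_g^2> / <delta_m^2>), with the Poisson shot-noise
    term 1/<n> subtracted from the galaxy variance."""
    counts = np.asarray(galaxy_counts, dtype=float)
    nbar = counts.mean()
    delta_g = counts / nbar - 1.0
    var_g = np.mean(delta_g**2) - 1.0 / nbar
    var_m = np.mean(np.asarray(matter_delta)**2)
    return np.sqrt(var_g / var_m)

# Toy usage: a mock matter field and a linearly biased, Poisson-sampled galaxy field
rng = np.random.default_rng(0)
delta_m = 0.1 * rng.standard_normal(100_000)           # mock matter contrast per cell
nbar = 50.0                                             # hypothetical mean galaxies per cell
lam = nbar * np.clip(1.0 + 1.5 * delta_m, 0.0, None)    # input linear bias b1 = 1.5
galaxy_counts = rng.poisson(lam)
print(linear_bias_from_cic(galaxy_counts, delta_m))     # recovers ~1.5
```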

A highly precise shape-noise-free shear bias estimator

 

Authors: A. Pujol, M. Kilbinger, F. Sureau et al.
Journal:  
Year: 06/2018
Download: ADS | arXiv


Abstract

We present a new method to estimate shear measurement bias in image simulations that significantly improves the precision achieved with respect to state-of-the-art methods. The method is based on measuring the shear response of individual images. We generate sheared versions of the same image to measure how the shape measurement changes with the shear, so that we obtain a shear response for each original image, as well as its additive bias. Using the exact same noise realization for each sheared version allows us to obtain an exact estimation of its shear response. The estimated shear bias of a sample of galaxies then comes from the measured averages of the shear responses and individual additive biases. The precision of this method represents an improvement with respect to previous methods, since it is not affected by shape noise. As a consequence, the method does not require shape noise cancellation for a precise estimation of shear bias. The method can easily be applied to many applications, such as shear measurement validation and calibration, reducing the number of simulated images necessary to achieve the same precision requirements by a few orders of magnitude.
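A minimal sketch of the per-image, finite-difference shear response that the method relies on is given below. Here `image_maker` and `measure_ellipticity` are hypothetical placeholders for an image simulator and a shape-measurement routine; the key point, as in the abstract, is that the identical noise realization is added to every sheared version.

```python
import numpy as np

def shear_response(image_maker, measure_ellipticity, noise, dg=0.01):
    """Finite-difference shear response of a single galaxy image.

    image_maker(g1, g2) renders the same galaxy sheared by (g1, g2);
    measure_ellipticity(image) returns a 2-component ellipticity estimate.
    The same noise realization is added to every sheared version, so the
    response is not degraded by shape noise. Returns R_ij = d e_i / d g_j.
    """
    R = np.zeros((2, 2))
    for j, (dg1, dg2) in enumerate([(dg, 0.0), (0.0, dg)]):
        e_plus = np.asarray(measure_ellipticity(image_maker(+dg1, +dg2) + noise))
        e_minus = np.asarray(measure_ellipticity(image_maker(-dg1, -dg2) + noise))
        R[:, j] = (e_plus - e_minus) / (2.0 * dg)
    return R

# The sample's multiplicative bias then follows from the averaged responses,
# m_i ~ <R_ii> - 1, and the additive bias from the mean ellipticity at zero shear.
```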

Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics

 

Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi
Journal: Submitted to A&A
Year: 2018
Download: ADS | arXiv


Abstract

General relativity (GR) has been well tested up to solar system scales, but it is much less certain that standard gravity remains an accurate description on the largest, i.e. cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including by the recent direct gravitational-wave detections. Degeneracies among the standard model (LCDM) and modified gravity (MG) models, as well as among different MG parameters, need to be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics in the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We test our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We explore a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic LCDM in terms of the 3D power spectrum of matter density fluctuations. We find that depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and LCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.
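For illustration, the sketch below computes the kind of non-Gaussian statistics the abstract refers to: third- and fourth-order moments and peak counts of a smoothed convergence map. It is a toy applied to a Gaussian random field, not the paper's aperture-mass pipeline; the smoothing scale, noise level and S/N binning are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage

def map_moments(kappa, sigma_pix=2.0):
    """Third- and fourth-order moments of a Gaussian-smoothed convergence map."""
    sm = ndimage.gaussian_filter(kappa, sigma_pix)
    d = sm - sm.mean()
    return np.mean(d**3), np.mean(d**4)

def peak_counts(kappa, snr_bins, sigma_noise, sigma_pix=2.0):
    """Histogram of local maxima of the smoothed map, binned in S/N."""
    sm = ndimage.gaussian_filter(kappa, sigma_pix)
    local_max = (sm == ndimage.maximum_filter(sm, size=3))
    snr = sm[local_max] / sigma_noise
    return np.histogram(snr, bins=snr_bins)[0]

# Toy usage on a Gaussian random field (stand-in for a simulated kappa map)
rng = np.random.default_rng(1)
kappa = rng.normal(0.0, 0.02, size=(512, 512))
print(map_moments(kappa))
print(peak_counts(kappa, snr_bins=np.linspace(0, 5, 11), sigma_noise=0.005))
```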

Testing (modified) gravity with 3D and tomographic cosmic shear

 

Authors: A. Spurio Mancini, R. Reischke, V. Pettorino, B. M. Schäfer, M. Zumalacárregui
Journal: Submitted to MNRAS
Year: 2018
Download: ADS | arXiv


Abstract

Cosmic shear, the weak gravitational lensing caused by the large-scale structure, is one of the primary probes to test gravity with current and future surveys. There are two main techniques to analyse a cosmic shear survey: a tomographic method, where correlations between the lensing signal in different redshift bins are used to recover redshift information, and a 3D approach, where the full redshift information is carried through the entire analysis. Here we compare the two methods by forecasting cosmological constraints for future surveys like Euclid. We extend the 3D formalism for the first time to theories beyond the standard model, belonging to the Horndeski class. This includes the majority of universally coupled extensions to LCDM with one scalar degree of freedom in addition to the metric, which are still in agreement with current observations. Given a fixed background, the evolution of linear perturbations in Horndeski gravity is described by a set of four functions of time only. We model their time evolution assuming proportionality to the dark energy density fraction and place Fisher matrix constraints on the proportionality coefficients. We find that a 3D analysis can constrain Horndeski theories better than a tomographic one, in particular with a decrease in the errors on the Horndeski parameters of the order of 20-30%. This paper shows for the first time a quantitative comparison on an equal footing between Fisher matrix forecasts for a fully 3D and a tomographic analysis of cosmic shear surveys. The increased sensitivity of the 3D formalism comes from its ability to retain information on the source redshifts throughout the entire analysis.


Summary

A new paper has been posted on arXiv, led by Alessio Spurio Mancini, PhD student of CosmoStat member Valeria Pettorino, in collaboration with R. Reischke, B. M. Schäfer (Heidelberg) and M. Zumalacárregui (Berkeley LBNL and Paris Saclay IPhT).
The authors investigate the performance of a 3D analysis of cosmic shear measurements versus a tomographic analysis as a probe of Horndeski theories of modified gravity, setting constraints by means of a Fisher matrix analysis on the parameters that describe the evolution of linear perturbations, using the specifications of a future Euclid-like experiment. Constraints are shown on both the modified gravity parameters and on a set of standard cosmological parameters, including the sum of neutrino masses. The analysis is restricted to angular modes $\ell < 1000$ and $k < 1\,h$/Mpc to avoid the deeply non-linear regime of structure growth. The main results of the paper are summarized below, followed by a minimal Fisher-forecast sketch.

 
  • The signal-to-noise ratios of the 3D and tomographic analyses are very similar.
  • 3D cosmic shear provides tighter constraints than tomography for most cosmological parameters, with both methods showing very similar degeneracies.
  • The gain of 3D versus tomography is particularly significant for the sum of the neutrino masses (a factor of 3). For the Horndeski parameters the gain is of the order of 20-30% in the errors.
  • In Horndeski theories, the braiding and effective Newton coupling parameters ($\alpha_B$ and $\alpha_M$) are better constrained when the kineticity is higher.
  • The authors also investigated the impact of non-linear scales by introducing an artificial screening scale, below which deviations from General Relativity are pushed to zero. The gain when including the non-linear signal calls for the development of analytic or semi-analytic prescriptions for the treatment of non-linear scales in ΛCDM and modified gravity.
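The sketch referenced above illustrates the generic Fisher-forecast machinery used in the paper: build the Fisher matrix from numerical derivatives of a model data vector and read off marginalized errors from its inverse. The toy model, step size and covariance below are assumptions made for the example; the actual paper uses 3D and tomographic cosmic shear data vectors and covariances.

```python
import numpy as np

def fisher_matrix(model, theta0, cov, step=1e-4):
    """Gaussian Fisher matrix F_ij = dmu/dtheta_i . C^-1 . dmu/dtheta_j
    for a data vector mu(theta) with fixed covariance C, using central
    finite differences for the derivatives."""
    theta0 = np.asarray(theta0, dtype=float)
    cinv = np.linalg.inv(cov)
    derivs = []
    for i in range(theta0.size):
        dp = np.zeros_like(theta0)
        dp[i] = step
        derivs.append((model(theta0 + dp) - model(theta0 - dp)) / (2 * step))
    D = np.array(derivs)                  # shape (n_params, n_data)
    return D @ cinv @ D.T

def marginalized_errors(F):
    """1-sigma marginalized errors: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Toy usage: a linear 'spectrum' depending on two parameters
model = lambda th: th[0] * np.arange(1, 11) + th[1]
F = fisher_matrix(model, theta0=[1.0, 0.5], cov=np.eye(10) * 0.01)
print(marginalized_errors(F))
```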

Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

 

Authors: N. Jeffrey, F. B. Abdalla, O. Lahav, F. Lanusse, J.-L. Starck, et al.
Journal:  
Year: 01/2018
Download: ADS | arXiv


Abstract

Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated {\Lambda}CDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
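As a reference point for the comparison above, a minimal flat-sky Kaiser-Squires inversion is sketched below: the E-mode convergence is recovered from the shear field directly in Fourier space. This is only the "direct inversion" baseline described in the abstract, written under idealized assumptions (periodic grid, no mask, no noise treatment); it is not the DES pipeline, and the Wiener filter and GLIMPSE address precisely those limitations.

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Direct Kaiser-Squires inversion of a shear field to E-mode convergence.

    In Fourier space: kappa_E(k) = [(k1^2 - k2^2) g1 + 2 k1 k2 g2] / (k1^2 + k2^2).
    Assumes a flat-sky, periodic grid; survey masks and noise are ignored.
    """
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[np.newaxis, :]
    k2 = np.fft.fftfreq(ny)[:, np.newaxis]
    ksq = k1**2 + k2**2
    ksq[0, 0] = 1.0                       # avoid division by zero at k = 0
    g1_k = np.fft.fft2(gamma1)
    g2_k = np.fft.fft2(gamma2)
    kappa_k = ((k1**2 - k2**2) * g1_k + 2.0 * k1 * k2 * g2_k) / ksq
    kappa_k[0, 0] = 0.0                   # the mean convergence is unconstrained
    return np.fft.ifft2(kappa_k).real
```

On simulated shear maps, the output of such an inversion can be compared against the true convergence with, for example, a Pearson correlation, which is one of the metrics used in the comparison above.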

The Dark Energy Survey Data Release 1

 

Authors: DES Collaboration
Journal:  
Year: 01/2018
Download: ADS | arXiv


Abstract

We describe the first public data release of the Dark Energy Survey, DES DR1, consisting of reduced single epoch images, coadded images, coadded source catalogs, and associated products and services assembled over the first three years of DES science operations. DES DR1 is based on optical/near-infrared imaging from 345 distinct nights (August 2013 to February 2016) by the Dark Energy Camera mounted on the 4-m Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. We release data from the DES wide-area survey covering ~5,000 sq. deg. of the southern Galactic cap in five broad photometric bands, grizY. DES DR1 has a median delivered point-spread function of g = 1.12, r = 0.96, i = 0.88, z = 0.84, and Y = 0.90 arcsec FWHM, a photometric precision of < 1% in all bands, and an astrometric precision of 151 mas. The median coadded catalog depth for a 1.95" diameter aperture at S/N = 10 is g = 24.33, r = 24.08, i = 23.44, z = 22.69, and Y = 21.44 mag. DES DR1 includes nearly 400M distinct astronomical objects detected in ~10,000 coadd tiles of size 0.534 sq. deg. produced from ~39,000 individual exposures. Benchmark galaxy and stellar samples contain ~310M and ~80M objects, respectively, following a basic object quality selection. These data are accessible through a range of interfaces, including query web clients, image cutout servers, Jupyter notebooks, and an interactive coadd image visualization tool. DES DR1 constitutes the largest photometric data set to date at the achieved depth and photometric precision.

Wasserstein Dictionary Learning: Optimal Transport-based unsupervised non-linear dictionary learning

 

Authors: M.A. Schmitz, M. Heitz, N. Bonneel, F.-M. Ngolè, D. Coeurjolly, M. Cuturi, G. Peyré & J.-L. Starck
Journal: SIAM SIIMS
Year: 2018
Download: ADS | arXiv

 


Abstract

This article introduces a new non-linear dictionary learning method for histograms in the probability simplex. The method leverages optimal transport theory, in the sense that our aim is to reconstruct histograms using so-called displacement interpolations (a.k.a. Wasserstein barycenters) between dictionary atoms; such atoms are themselves synthetic histograms in the probability simplex. Our method simultaneously estimates such atoms and, for each datapoint, the vector of weights that can optimally reconstruct it as an optimal transport barycenter of such atoms. Our method is computationally tractable thanks to the addition of an entropic regularization to the usual optimal transportation problem, leading to an approximation scheme that is efficient, parallel and simple to differentiate. Both atoms and weights are learned using a gradient-based descent method. Gradients are obtained by automatic differentiation of the generalized Sinkhorn iterations that yield barycenters with entropic smoothing. Because of its formulation relying on Wasserstein barycenters instead of the usual matrix product between dictionary and codes, our method allows for non-linear relationships between atoms and the reconstruction of input data. We illustrate its application in several different image processing settings.
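To make the key ingredient concrete, below is a minimal entropy-regularized Wasserstein barycenter computed with generalized Sinkhorn (iterative Bregman projection) updates, the building block the dictionary learning method differentiates through. The grid, ground cost, regularization strength and iteration count are illustrative assumptions, and the sketch omits the automatic differentiation and the atom/weight updates of the actual method.

```python
import numpy as np

def sinkhorn_barycenter(histograms, weights, cost, eps=1e-2, n_iter=200):
    """Entropy-regularized Wasserstein barycenter of histograms on a shared grid,
    via iterative Bregman projections (generalized Sinkhorn iterations).

    histograms: array of shape (n_bins, n_hist), columns sum to 1
    weights:    barycentric weights, shape (n_hist,), summing to 1
    cost:       pairwise ground-cost matrix, shape (n_bins, n_bins)
    """
    K = np.exp(-cost / eps)                                   # Gibbs kernel
    v = np.ones_like(histograms)
    for _ in range(n_iter):
        u = histograms / (K @ v)                              # match input marginals
        b = np.exp((weights * np.log(K.T @ u)).sum(axis=1))   # weighted geometric mean
        v = b[:, None] / (K.T @ u)                            # match the barycenter
    return b

# Toy usage: barycenter of two shifted 1D Gaussians on a regular grid
x = np.linspace(0, 1, 100)
cost = (x[:, None] - x[None, :]) ** 2
p1 = np.exp(-(x - 0.25) ** 2 / 0.002); p1 /= p1.sum()
p2 = np.exp(-(x - 0.75) ** 2 / 0.002); p2 /= p2.sum()
bary = sinkhorn_barycenter(np.stack([p1, p2], axis=1), np.array([0.5, 0.5]), cost)
# The result is an entropically smoothed, displacement-interpolated bump near x = 0.5.
```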

Cosmic CARNage I: on the calibration of galaxy formation models

 

Authors: A. Knebe, F. R. Pearce, V. Gonzalez-Perez et al.
Journal:  
Year: 12/2017
Download: ADS | arXiv


Abstract

We present a comparison of nine galaxy formation models, eight semi-analytical and one halo occupation distribution model, run on the same underlying cold dark matter simulation (cosmological box of co-moving width $125\,h^{-1}$ Mpc, with a dark-matter particle mass of $1.24\times 10^{9}\,h^{-1}\,M_{\odot}$) and the same merger trees. While their free parameters have been calibrated to the same observational data sets using two approaches, they nevertheless retain some 'memory' of any previous calibration that served as the starting point (especially for the manually-tuned models). For the first calibration, models reproduce the observed z = 0 galaxy stellar mass function (SMF) within $3\sigma$. The second calibration extended the observational data to include the z = 2 SMF alongside the z~0 star formation rate function, cold gas mass and the black hole-bulge mass relation. Encapsulating the observed evolution of the SMF from z = 2 to z = 0 is found to be very hard within the context of the physics currently included in the models. We finally use our calibrated models to study the evolution of the stellar-to-halo mass (SHM) ratio. For all models we find that the peak value of the SHM relation decreases with redshift. However, the trends seen for the evolution of the peak position as well as the mean scatter in the SHM relation are rather weak and strongly model dependent. Both the calibration data sets and model results are publicly available.

Sparse estimation of model-based diffuse thermal dust emission

 

Authors: M. O. Irfan, J. Bobin
Journal: MNRAS
Year: 2017
Download: ADS | arXiv


Abstract

Component separation for the Planck HFI data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution estimate of the dust emission. In this paper we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7 and 7.2 per cent at the $1\sigma$ level across the full sky for thermal dust temperature, spectral index and optical depth at 353 GHz, respectively. Comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
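The three parameters recovered here (dust temperature, spectral index and optical depth at 353 GHz) are those of the standard modified-blackbody model of thermal dust emission. The sketch below only evaluates that forward model; the exact parametrization and units used in the paper may differ, and the frequencies and parameter values are illustrative assumptions.

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # J s
K_BOLTZ = 1.380649e-23      # J / K
C_LIGHT = 2.99792458e8      # m / s

def planck_bnu(nu_ghz, temp_k):
    """Planck function B_nu(T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    nu = nu_ghz * 1e9
    x = H_PLANCK * nu / (K_BOLTZ * temp_k)
    return 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 / np.expm1(x)

def dust_emission(nu_ghz, tau_353, beta, temp_k):
    """Modified-blackbody thermal dust model,
    I_nu = tau_353 * (nu / 353 GHz)^beta * B_nu(T)."""
    return tau_353 * (nu_ghz / 353.0) ** beta * planck_bnu(nu_ghz, temp_k)

# Toy evaluation at Planck HFI frequencies for illustrative high-latitude dust values
freqs = np.array([217.0, 353.0, 545.0, 857.0])   # GHz
print(dust_emission(freqs, tau_353=1e-6, beta=1.6, temp_k=19.7))
```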

Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing

 

Authors: DES Collaboration
Journal:  
Year: 08/2017
Download: ADS | arXiv


Abstract

We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat $\Lambda$CDM and $w$CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for $\Lambda$CDM) or 7 (for $w$CDM) cosmological parameters including the neutrino mass density, and using the 457 $\times$ 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}$ and $\Omega_m = 0.264^{+0.032}_{-0.019}$ for $\Lambda$CDM. For $w$CDM, we find $S_8 = 0.794^{+0.029}_{-0.027}$, $\Omega_m = 0.279^{+0.043}_{-0.022}$, and $w=-0.80^{+0.20}_{-0.22}$ at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for $S_8$ and $\Omega_m$ are lower than the central values from Planck ...
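As a quick consistency check of the quantities quoted above, the snippet below inverts the stated definition $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5}$ to get the implied central $\sigma_8$ from the quoted $\Lambda$CDM values; the derived number is illustrative arithmetic, not a value reported in the abstract.

```python
# Invert S_8 = sigma_8 * (Omega_m / 0.3)^0.5 using the DES Y1 LCDM central values above
S8, Omega_m = 0.783, 0.264
sigma_8 = S8 / (Omega_m / 0.3) ** 0.5
print(round(sigma_8, 3))   # ~0.835 (implied central value)
```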