Quantifying systematics from the shear inversion on weak-lensing peak counts

Authors: C. Lin, M. Kilbinger
Journal: Submitted to A&A letters
Year: 2017
Download: ADS | arXiv

 


Abstract

Weak-lensing (WL) peak counts provide a straightforward way to constrain cosmology, and the results obtained so far are promising. However, the importance of understanding and dealing with systematics increases as data quality reaches an unprecedented level. One source of systematics is the convergence-shear inversion. This effect, inevitable in observations, is usually neglected by theoretical peak models and could therefore have an impact on cosmological results. In this letter, we study the bias from neglecting the inversion and find it small but not negligible. The cosmological dependence of this bias is difficult to model and depends on the filter size. We also show the evolution of parameter constraints. Although the biases in individual peak bins are weak, the bias on the dark-energy equation of state w0 can reach 2 sigma. Therefore, we suggest that the inversion cannot be ignored and that inversion-free approaches, such as the aperture mass, would be a more suitable tool for studying weak-lensing peak counts.
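For illustration, the following is a minimal flat-sky sketch of the kind of shear-to-convergence inversion the letter refers to, written as a standard Kaiser-Squires-style reconstruction in Fourier space. It is an illustrative stand-in under simplifying assumptions (periodic grid, no masks or noise treatment), not the pipeline used in the paper.

# Minimal flat-sky Kaiser-Squires-style inversion sketch: reconstruct the
# E-mode convergence kappa from shear maps gamma1, gamma2 on a regular grid.
import numpy as np

def kaiser_squires(gamma1, gamma2):
    ny, nx = gamma1.shape
    l1 = np.fft.fftfreq(nx)[np.newaxis, :]    # Fourier frequencies along x
    l2 = np.fft.fftfreq(ny)[:, np.newaxis]    # Fourier frequencies along y
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                          # avoid division by zero at l = 0
    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    # Standard Kaiser-Squires relation in Fourier space
    kappa_hat = ((l1**2 - l2**2) * g1_hat + 2.0 * l1 * l2 * g2_hat) / l_sq
    kappa_hat[0, 0] = 0.0                     # the mean convergence is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))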

Precision calculations of the cosmic shear power spectrum projection

Authors: M. Kilbinger, C. Heymans, M. Asgari et al.
Journal: MNRAS
Year: 2017
Download: ADS | arXiv


Abstract

We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as the flat-sky and the first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak-lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We find that the reported tension with Planck Cosmic Microwave Background (CMB) temperature anisotropy results cannot be alleviated, in contrast to the recent claim made by Kitching et al. (2016, version 1). For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum agrees with the full projection at the sub-percent level for l > 3, with the corresponding errors an order of magnitude below cosmic variance for all l. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and of the full projection is publicly available within the package nicaea at http://www.cosmostat.org/software/nicaea.


Summary

We discuss various methods to calculate projections for weak gravitational lensing: since the images of lensed galaxies pick up matter inhomogeneities of the cosmic web along the line of sight while photons from these galaxies propagate through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions and can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations, such as the Limber approximation, that reduce the integrals to 1D, thereby neglecting modes along the line of sight.
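As a schematic reminder (standard textbook expressions with prefactors absorbed into the lensing efficiency q(χ); a sketch, not the paper's exact equations), the full projection and its extended first-order Limber approximation read:

\begin{align}
  C_\ell^{\rm full} &= \frac{2}{\pi} \int \mathrm{d}k\, k^2
      \left[ \int \mathrm{d}\chi\, q(\chi)\, \sqrt{P(k,\chi)}\, j_\ell(k\chi) \right]^2 , \\
  C_\ell^{\rm Limber} &\approx \int \mathrm{d}\chi\, \frac{q^2(\chi)}{\chi^2}\,
      P\!\left(k = \frac{\ell+1/2}{\chi},\, \chi \right) ,
\end{align}

where $j_\ell$ is a spherical Bessel function, $P(k,\chi)$ the matter power spectrum (with the geometric mean used for unequal times), and the LoVerde & Afshordi (2008) extension amounts to evaluating $P$ at $k=(\ell+1/2)/\chi$ rather than $\ell/\chi$.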

The authors show that these projections are more than adequate for present surveys. Sub-percent accuracy is reached for l > 20, as shown for example by the pink curve in the figure below, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation stands for 'extended' (the improved approximation introduced by LoVerde & Afshordi 2008), first-order Limber, and hybrid, since this case mixes flat-sky and spherical coordinates. It has been used in most recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber, flat-sky) was popular for most publications since 2014.

These approximations are sufficient for the small areas of current observations from CFHTLenS, KiDS, and DES, and the errors are well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg², and KiDS, 1,500 deg²).

[Figure: K17_Fig1b — ratio of the approximated to the full projection of the convergence power spectrum, compared to the cosmic variance of Euclid and KiDS.]

The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008), and applied to weak lensing in the current paper. The best second-order case 'ExtL2Sph' reaches sub-percent accuracy down to l=3, sufficient for all future surveys.

The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.
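To make the flat-sky relation concrete, here is a small Python sketch (direct quadrature, not the fast Hankel/FFTLog transform implemented in nicaea) that turns a tabulated convergence power spectrum into the shear correlation functions xi_+ and xi_-; the toy power-law spectrum at the end is purely illustrative.

# xi_{+/-}(theta) = 1/(2 pi) * int dl  l C(l) J_{0/4}(l theta)   (flat sky)
import numpy as np
from scipy.special import jv
from scipy.integrate import simpson

def xi_pm(theta_rad, ell, c_ell):
    """theta_rad: angles in radians; ell, c_ell: tabulated power spectrum."""
    xi_p = np.empty_like(theta_rad)
    xi_m = np.empty_like(theta_rad)
    for i, th in enumerate(theta_rad):
        xi_p[i] = simpson(ell * c_ell * jv(0, ell * th), x=ell) / (2.0 * np.pi)
        xi_m[i] = simpson(ell * c_ell * jv(4, ell * th), x=ell) / (2.0 * np.pi)
    return xi_p, xi_m

# Toy example with an illustrative power-law spectrum:
ell = np.logspace(0, 4, 2000)
c_ell = 1e-9 * (ell / 100.0) ** -1.0
theta = np.radians([0.1, 0.5, 1.0])   # 0.1, 0.5 and 1 degree
print(xi_pm(theta, ell, c_ell))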

We then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows a particularly slow convergence to the full projection.

Similar results have been derived in two other recent publications, Kitching et al. (2017) and Lemos, Challinor & Efstathiou (2017).
Note, however, that Kitching et al. (2017) conclude that errors from the projection approximations discussed here (Limber, flat-sky) could make up to 11% of the error budget of future surveys. This assumes the worst-case scenario, including the deprecated case 'ExtL1Fl'. We do not share their conclusion, but think that, for example, the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.

Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey


 

Authors: A. Peel, C.-A. Lin, F. Lanusse, A. Leonard, J.-L. Starck, M. Kilbinger
Journal: A&A
Year: 2017
Download: ADS | arXiv

 


Abstract

Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w0. In particular, we study how the Camelus model, a fast stochastic algorithm for predicting peaks, can be applied to such large surveys. We measure the peak count abundance in a mock shear catalogue of ~5,000 sq. deg. using a multiscale mass-map filtering technique. We then constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation (ABC). We find that peak statistics yield a tight but significantly biased constraint in the σ8-Ωm plane, indicating the need to better understand and control the model's systematics. We calibrate the model to remove the bias and compare results to those from the two-point correlation functions (2PCF) measured on the same field. In this case, we find the derived parameter Σ8 = σ8(Ωm/0.27)^α = 0.76^{+0.02}_{-0.03} with α = 0.65 for peaks, while for the 2PCF the value is Σ8 = 0.76^{+0.02}_{-0.01} with α = 0.70. We therefore see comparable constraining power between the two probes, and the offset of their σ8-Ωm degeneracy directions suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w0 cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and 2PCF.
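For reference, the derived parameter quoted above is simply a rescaling of σ8 along the degeneracy direction; a trivial Python helper (the example input values are made up for illustration):

def sigma_8_derived(sigma8, omega_m, alpha):
    # Sigma_8 = sigma_8 * (Omega_m / 0.27)**alpha, as defined in the abstract.
    return sigma8 * (omega_m / 0.27) ** alpha

print(sigma_8_derived(0.8, 0.3, 0.65))  # illustrative inputs only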

The DESI Experiment Part I: Science, Targeting, and Survey Design

 

Authors: DESI collaboration
Journal: ArXiv
Year: 2016
Download: ADS | arXiv

 


Abstract

DESI (Dark Energy Spectroscopic Instrument) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations (BAO) and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar redshift survey. To trace the underlying dark matter distribution, spectroscopic targets will be selected in four classes from imaging data. We will measure luminous red galaxies up to $z=1.0$. To probe the Universe out to even higher redshift, DESI will target bright [O II] emission line galaxies up to $z=1.7$. Quasars will be targeted both as direct tracers of the underlying dark matter distribution and, at higher redshifts ($ 2.1 < z < 3.5$), for the Ly-$\alpha$ forest absorption features in their spectra, which will be used to trace the distribution of neutral hydrogen. When moonlight prevents efficient observations of the faint targets of the baseline survey, DESI will conduct a magnitude-limited Bright Galaxy Survey comprising approximately 10 million galaxies with a median $z\approx 0.2$. In total, more than 30 million galaxy and quasar redshifts will be obtained to measure the BAO feature and determine the matter power spectrum, including redshift space distortions.

 

 

The DESI Experiment Part II: Instrument Design

 

Authors: DESI collaboration
Journal: ArXiv
Year: 2016
Download: ADS | arXiv

 


Abstract

DESI (Dark Energy Spectroscopic Instrument) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar redshift survey. The DESI instrument is a robotically-actuated, fiber-fed spectrograph capable of taking up to 5,000 simultaneous spectra over a wavelength range from 360 nm to 980 nm. The fibers feed ten three-arm spectrographs with resolution R = λ/Δλ between 2000 and 5500, depending on wavelength. The DESI instrument will be used to conduct a five-year survey designed to cover 14,000 deg². This powerful instrument will be installed at prime focus on the 4-m Mayall telescope at Kitt Peak, Arizona, along with a new optical corrector, which will provide a three-degree diameter field of view. The DESI collaboration will also deliver a spectroscopic pipeline and data management system to reduce and archive all data for eventual public use.

 

 

The XXL survey: First results and future

Authors: M. Pierre et al.
Journal: MNRAS
Year: 2017
Download: ADS | arXiv

 


Abstract

The XXL survey currently covers two 25 sq. deg. patches with XMM observations of ~10 ks. We summarise the scientific results associated with the first release of the XXL data set, which occurred in mid-2016. We review several arguments for increasing the survey depth to 40 ks during the next decade of XMM operations. X-ray (z < 2) cluster, (z < 4) AGN, and cosmic background survey science will then benefit from an extraordinary data reservoir. This, combined with deep multi-λ observations, will lead to solid standalone cosmological constraints and provide a wealth of information on the formation and evolution of AGN, clusters, and the X-ray background. In particular, it will offer a unique opportunity to pinpoint the z > 1 cluster density. It will eventually constitute a reference study and an ideal calibration field for the upcoming eROSITA and Euclid missions.

 

A new model to predict weak-lensing peak counts III. Filtering technique comparisons

Authors: C. Lin, M. Kilbinger, S. Pires
Journal: A&A
Year: 2016
Download: ADS | arXiv


Abstract

This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. In this paper, we compare the cosmological information extracted from WL peak counts using different filtering techniques of the galaxy shear data, including linear filtering with a Gaussian and two compensated filters (the starlet wavelet and the aperture mass), and the nonlinear filtering method MRLens. We present improvements to our model that account for realistic survey conditions: masks, shear-to-convergence transformations, and non-constant noise. We create simulated peak counts from our stochastic model, from which we obtain constraints on the matter density Ωm, the power-spectrum normalisation σ8, and the dark-energy parameter w0. We use two methods for parameter inference: a copula likelihood and approximate Bayesian computation (ABC). We measure the contour width in the Ωm-σ8 degeneracy direction and the figure of merit to compare parameter constraints from different filtering techniques. We find that starlet filtering outperforms the Gaussian kernel, and that including peak counts from different smoothing scales helps to lift parameter degeneracies. Peak counts from different smoothing scales with a compensated filter show very little cross-correlation, and adding information from different scales can therefore strongly enhance the available information. Measuring peak counts separately on different scales yields tighter constraints than using a combined peak histogram from a single map that includes multiscale information. Our results suggest that a compensated filter function, with counts included separately from different smoothing scales, yields the tightest constraints on cosmological parameters from WL peaks.
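As an illustration of the difference between a Gaussian and a compensated filter, here is a minimal Python sketch that uses a difference of Gaussians as a stand-in for a compensated (aperture-mass- or starlet-like) kernel, plus a naive peak counter; this is an illustrative sketch, not the paper's actual filters or pipeline.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_smooth(kappa, sigma_pix):
    # Plain Gaussian smoothing of a convergence map.
    return gaussian_filter(kappa, sigma=sigma_pix)

def compensated_smooth(kappa, sigma_pix):
    # Difference of two Gaussians: zero mean response, hence insensitive to a
    # constant mass sheet, similar in spirit to aperture-mass/wavelet filters.
    return gaussian_filter(kappa, sigma=sigma_pix) - gaussian_filter(kappa, sigma=2.0 * sigma_pix)

def peak_counts(filtered_map, noise_sigma, nu_bins):
    # Count local maxima (8-neighbour definition) per signal-to-noise bin.
    snr = filtered_map / noise_sigma
    m = snr[1:-1, 1:-1]
    neigh = [snr[1 + dy:snr.shape[0] - 1 + dy, 1 + dx:snr.shape[1] - 1 + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    is_peak = np.all([m > n for n in neigh], axis=0)
    return np.histogram(m[is_peak], bins=nu_bins)[0]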

Clustering-based redshift estimation: application to VIPERS/CFHTLS

Authors: V. Scottez, Y. Mellier, B. Granett, T. Moutard, M. Kilbinger et al.
Journal: MNRAS
Year: 2016
Download: ADS | arXiv

 


Abstract

We explore the accuracy of the clustering-based redshift estimation proposed by Ménard et al. when applied to VIMOS Public Extragalactic Redshift Survey (VIPERS) and Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) real data. This method enables us to reconstruct redshift distributions from measurements of the angular clustering of objects, using a set of secure spectroscopic redshifts. We use state-of-the-art spectroscopic measurements with iAB < 22.5 from VIPERS as the reference population to infer the redshift distribution of galaxies from the CFHTLS T0007 release. VIPERS provides a nearly representative sample to a flux limit of iAB < 22.5 at redshift z > 0.5, which allows us to test the accuracy of the clustering-based redshift distributions. We show that this method enables us to reproduce the true mean colour-redshift relation when both populations have the same magnitude limit. We also show that this technique allows the inference of redshift distributions for a population fainter than the reference, and we give an estimate of the colour-redshift mapping in this case. This last point is of great interest for future large redshift surveys, which require a complete faint spectroscopic sample.
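A heavily simplified sketch of the clustering-redshift idea is given below (constant galaxy bias assumed, no bias-evolution modelling); the estimator is a common textbook simplification, not the exact VIPERS/CFHTLS pipeline of the paper.

import numpy as np

def dndz_from_clustering(w_ur, w_rr):
    # w_ur[i]: angular cross-correlation amplitude between the unknown sample
    #          and reference (spectroscopic) redshift slice i;
    # w_rr[i]: auto-correlation amplitude of reference slice i.
    # Dividing by sqrt(w_rr) removes the reference-bias evolution (w_rr ~ b_r^2).
    dndz = np.clip(np.asarray(w_ur) / np.sqrt(np.asarray(w_rr)), 0.0, None)
    return dndz / dndz.sum()  # normalise, assuming equal-width redshift bins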

The XXL Survey

First round of papers published

The XXL Survey is a deep X-ray survey observed with the XMM satellite, covering two fields of 25 deg² each. Observations at many other wavelengths, from radio to IR and optical, in both imaging and spectroscopy, complement the survey. The main science case is cosmology with X-ray selected galaxy clusters, but other fields, such as galaxy evolution, AGN, cluster physics, and the large-scale structure, are also being studied.

The main paper (Paper I), describing the survey and giving an overview of the science, is arXiv:1512.04317 (Pierre et al. 2015). Paper IV (arXiv:1512.03857, Lieu et al. 2015) presents weak-lensing mass measurements of the brightest clusters in the Northern field, using CFHTLenS shapes and photometric redshifts.

 

The mass-temperature relation for XXL and other surveys (CCCP, COSMOS), from Lieu et al. (2015).

A new model to predict weak-lensing peak counts II. Parameter constraint strategies

Authors: C. Lin, M. Kilbinger
Journal: A&A
Year: 2015
Download: ADS | arXiv


Abstract

Peak counts have been shown to be an excellent tool to extract the non-Gaussian part of the weak-lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analyses. In this work, we explore and compare various strategies for constraining parameters using our model, focusing on the matter density Ωm and the density-fluctuation amplitude σ8. First, we examine the impact of the cosmological dependence of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique which makes weaker assumptions than the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. We find that neglecting the CDC effect enlarges parameter contours by 22%, and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. The stochastic nature of our weak-lensing peak-count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
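Below is a minimal sketch of basic accept-reject ABC (not the iterative population-Monte-Carlo variant used in the paper): keep parameter draws whose simulated peak-count histogram lies within a tolerance of the observed one. The callables `simulate_peaks`, `prior_sampler`, and `distance` stand in for a stochastic forward model such as Camelus, a prior, and a summary-statistic distance; they are assumptions here, not provided by the paper.

import numpy as np

def abc_rejection(observed, simulate_peaks, prior_sampler, distance, tol, n_draws, rng):
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)              # e.g. draw (Omega_m, sigma_8)
        simulated = simulate_peaks(theta, rng)  # forward-modelled peak histogram
        if distance(simulated, observed) < tol:
            accepted.append(theta)
    return np.array(accepted)                   # samples from the ABC posterior

# Example distance: Euclidean distance between peak-count histograms.
def euclidean_distance(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))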