Quantifying systematics from the shear inversion on weak-lensing peak counts

Authors: C. Lin, M. Kilbinger
Journal: Submitted to A&A Letters
Year: 2017
Download: ADS | arXiv


Abstract

Weak-lensing (WL) peak counts provide a straightforward way to constrain cosmology, and promising results have been obtained. However, the importance of understanding and dealing with systematics increases as data quality reaches an unprecedented level. One source of systematics is the inversion from shear to convergence. This step, inevitable for observational data, is usually neglected by theoretical peak models and could thus have an impact on cosmological results. In this letter, we study the bias from neglecting the inversion and find it small but not negligible. The cosmological dependence of this bias is difficult to model and depends on the filter size. We also show the evolution of parameter constraints. Although weak biases arise in individual peak bins, the bias can reach 2 sigma for the dark-energy equation of state w0. Therefore, we suggest that the inversion cannot be ignored and that inversion-free approaches, such as the aperture mass, would be a more suitable tool to study weak-lensing peak counts.
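
For context on the suggested alternative: the aperture mass is inversion-free because it can be computed directly from the tangential shear, with no convergence reconstruction needed. Schematically, in standard notation (see e.g. Schneider 1996; this sketch is not taken from the letter itself):

    M_{\rm ap}(\vec{\theta}_0)
      = \int {\rm d}^2\theta \; U(|\vec{\theta}-\vec{\theta}_0|) \, \kappa(\vec{\theta})
      = \int {\rm d}^2\theta \; Q(|\vec{\theta}-\vec{\theta}_0|) \, \gamma_{\rm t}(\vec{\theta}; \vec{\theta}_0),

where U is a compensated filter (\int {\rm d}\theta \, \theta \, U(\theta) = 0) and Q is the shear filter associated with U, so the statistic never requires a reconstructed convergence map.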

Precision calculations of the cosmic shear power spectrum projection

Authors: M. Kilbinger, C. Heymans, M. Asgari et al.
Journal: MNRAS
Year: 2017
Download: ADS | arXiv


Abstract

We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as the flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak-lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We find that the reported tension with Planck Cosmic Microwave Background (CMB) temperature anisotropy results cannot be alleviated, in contrast to the recent claim made by Kitching et al. (2016, version 1). For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to agree with the full projection at the sub-percent level for l > 3, with the corresponding errors an order of magnitude below cosmic variance for all l. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection is publicly available within the package nicaea at http://www.cosmostat.org/software/nicaea.


Summary

We discuss various methods to calculate projections for weak gravitational lensing: since light from lensed galaxies picks up matter inhomogeneities of the cosmic web along the line of sight as it propagates through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions and can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations, such as the Limber approximation, which reduce the integrals to 1D, thereby neglecting modes along the line of sight.
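
As a rough illustration of the reduction to 1D (a minimal sketch, not the paper's nicaea implementation; q and P_delta are placeholder user-supplied functions for the lensing efficiency and the matter power spectrum):

    from scipy.integrate import quad

    def c_ell_limber(ell, q, P_delta, chi_lim):
        """First-order extended Limber approximation: the 3D integrals over
        oscillating Bessel functions collapse to a single line-of-sight
        integral, with the power spectrum evaluated at k = (ell + 1/2)/chi."""
        integrand = lambda chi: q(chi)**2 / chi**2 * P_delta((ell + 0.5) / chi, chi)
        value, _ = quad(integrand, 1e-3, chi_lim)
        return value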

The authors show that these approximations are more than adequate for present surveys. Sub-percent accuracy is reached for l > 20, as shown for example by the pink curve, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation stands for 'extended' (the improved approximation introduced by LoVerde & Afshordi 2008), first-order Limber, and 'hybrid' (a hybrid between flat-sky and spherical coordinates). This case has been used in most of the recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber, flat-sky) was popular in most publications before 2014.

These approximations are sufficient for the small areas of current observations from CFHTLenS, KiDS, and DES, with errors well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg², and KiDS, 1,500 deg²).

[Figure: Fig. 1b of the paper — ratio of the approximated to the full projection of the lensing power spectrum, compared to the cosmic variance of Euclid and KiDS]

The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008) and applied to weak lensing in the current paper. The best second-order case, 'ExtL2Sph', reaches sub-percent accuracy down to l = 3, sufficient for all future surveys.
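
Schematically, the extended first-order prescription evaluates the matter power spectrum at a shifted multipole (a sketch in standard notation; q denotes the lensing efficiency):

    C_\ell \simeq \int_0^{\chi_{\rm lim}} {\rm d}\chi \,
        \frac{q^2(\chi)}{\chi^2} \,
        P_\delta\!\left(k = \frac{\ell + 1/2}{\chi};\, \chi\right),

and the second-order variant adds relative corrections of order (\ell + 1/2)^{-2} that involve derivatives of the lensing kernel (LoVerde & Afshordi 2008).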

The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.
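
For concreteness, the flat-sky correlation functions are Hankel transforms of the power spectrum; a minimal quadrature sketch (the paper uses a fast Hankel transformation instead) could look like this:

    import numpy as np
    from scipy.special import jv

    def xi_pm(theta, ells, c_ells, plus=True):
        """Flat-sky shear correlation function xi_+/- at angle theta (radians):
        xi_+ is the J_0 Hankel transform of C(ell), xi_- uses J_4."""
        order = 0 if plus else 4
        integrand = ells * c_ells * jv(order, ells * theta)
        return np.trapz(integrand, ells) / (2.0 * np.pi)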

We then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows a particularly slow convergence to the full projection.

Similar results have been derived in two other recent publications, Kitching et al. (2017) and Lemos, Challinor & Efstathiou (2017). Note however that Kitching et al. (2017) conclude that errors from projection approximations of the types discussed here (Limber, flat-sky) could account for up to 11% of the error budget of future surveys. This, however, assumes a worst-case scenario including the deprecated case 'ExtL1Fl'. We do not share their conclusion, and think that, for example, the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.

Dealing with missing data in the MICROSCOPE space mission: An adaptation of inpainting to handle colored-noise data

Authors: S. Pires, J. Bergé, Q. Baghi, P. Touboul, G. Métris
Journal: Physical Review D
Year: 2016
Download: ADS | arXiv


Abstract

The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) with a 10⁻¹⁵ precision. Reaching this performance requires an accurate and robust data analysis method, especially since the possible WEP violation signal will be dominated by a strongly colored noise. An important complication is brought by the fact that some values will be missing; therefore, the measured time series will not be strictly regularly sampled. Those missing values induce a spectral leakage that significantly increases the noise in Fourier space, where the WEP violation signal is looked for, thereby complicating the scientific returns. Recently, we developed an inpainting algorithm to correct the MICROSCOPE data for missing values. This code has been integrated into the official MICROSCOPE data processing and analysis pipeline because it enables us to significantly measure an equivalence principle violation (EPV) signal in a model-independent way, in the inertial satellite configuration. In this work, we present several improvements to the method that may now allow us to reach the MICROSCOPE requirements for both inertial and spin satellite configurations. The main improvement has been obtained using a prior on the power spectrum of the colored noise that can be derived directly from the incomplete data. We show that after reconstructing missing values with this new algorithm, a least-squares fit may allow us to significantly measure an EPV signal with a 0.96 × 10⁻¹⁵ precision in the inertial mode and a 1.20 × 10⁻¹⁵ precision in the spin mode. Although the inpainting method presented in this paper has been optimized for the MICROSCOPE data, it remains sufficiently general to be used in the general context of missing data in time series dominated by an unknown colored noise. The improved inpainting software, called ICON (inpainting for colored-noise dominated signals), is freely available at http://www.cosmostat.org/software/icon.
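
The core sparsity idea can be sketched as follows (a generic iterative-thresholding inpainter in the DCT domain; this is an illustration only, not the ICON code, and it omits the colored-noise power-spectrum prior that is the paper's main improvement):

    import numpy as np
    from scipy.fftpack import dct, idct

    def inpaint(y, observed, n_iter=100):
        """Fill gaps in a time series by iterative hard thresholding in the
        DCT domain (sparsity prior), re-imposing observed samples at each
        step. 'observed' is a boolean mask of valid samples."""
        x = np.where(observed, y, 0.0)
        lam_max = np.max(np.abs(dct(x, norm='ortho')))
        for i in range(n_iter):
            lam = lam_max * (1.0 - i / n_iter)   # linearly decreasing threshold
            a = dct(x, norm='ortho')
            a[np.abs(a) < lam] = 0.0             # keep only significant coefficients
            x = idct(a, norm='ortho')
            x[observed] = y[observed]            # data fidelity on observed samples
        return x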

High Resolution Weak Lensing Mass-Mapping Combining Shear and Flexion

Authors: F. Lanusse, J.-L. Starck, A. Leonard, S. Pires
Journal: A&A
Year: 2016
Download: ADS | arXiv


Abstract

Aims: We propose a new mass mapping algorithm, specifically designed to recover small-scale information from a combination of gravitational shear and flexion. Including flexion allows us to supplement the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map without relying on strong lensing constraints.
Methods: To preserve all available small scale information, we avoid any binning of the irregularly sampled input shear and flexion fields and treat the mass mapping problem as a general ill-posed inverse problem, which is regularised using a robust multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
Results: We test our reconstruction method on a set of realistic weak-lensing simulations corresponding to typical HST/ACS cluster observations and demonstrate that including flexion allows us to recover substructures that are otherwise lost if only shear information is used. In particular, we can detect substructures at the 15'' scale well outside of the critical region of the clusters. In addition, flexion also helps to constrain the shape of the central regions of the main dark matter halos.
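
Schematically, this kind of sparsity-regularised inversion can be attacked with proximal iterations; below is a generic iterative soft-thresholding sketch, where A/At and Phi/Phit are placeholder function handles for the forward/adjoint lensing and dictionary operators (the paper uses starlet wavelets and dedicated shear/flexion operators, not this toy loop):

    import numpy as np

    def ista(y, A, At, Phi, Phit, lam, n_iter=200, step=0.5):
        """Generic ISTA: minimise ||y - A x||^2 + lam * ||Phi x||_1 by
        alternating a gradient step on the data term with soft thresholding
        of the dictionary coefficients."""
        x = np.zeros_like(At(y))
        for _ in range(n_iter):
            x = x + step * At(y - A(x))                        # gradient step
            a = Phi(x)
            a = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)  # soft threshold
            x = Phit(a)
        return x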

A new model to predict weak-lensing peak counts III. Filtering technique comparisons

Authors: C. Lin, M. Kilbinger, S. Pires
Journal: A&A
Year: 2016
Download: ADS | arXiv


Abstract

This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. In this paper, we compare the cosmological information extracted from WL peak counts using different filtering techniques of the galaxy shear data, including linear filtering with a Gaussian and two compensated filters (the starlet wavelet and the aperture mass), and the nonlinear filtering method MRLens. We present improvements to our model that account for realistic survey conditions, namely masks, shear-to-convergence transformations, and non-constant noise. We create simulated peak counts from our stochastic model, from which we obtain constraints on the matter density Ωm, the power spectrum normalisation σ8, and the dark-energy parameter w0. We use two methods for parameter inference, a copula likelihood and approximate Bayesian computation (ABC). We measure the contour width in the Ωm-σ8 degeneracy direction and the figure of merit to compare parameter constraints from different filtering techniques. We find that starlet filtering outperforms the Gaussian kernel, and that including peak counts from different smoothing scales helps to lift parameter degeneracies. Peak counts from different smoothing scales with a compensated filter show very little cross-correlation, and adding information from different scales can therefore strongly enhance the available information. Measuring peak counts separately from different scales yields tighter constraints than using a combined peak histogram from a single map that includes multiscale information. Our results suggest that a compensated filter function with counts included separately from different smoothing scales yields the tightest constraints on cosmological parameters from WL peaks.
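
To illustrate what "compensated" means in practice, here is a difference-of-Gaussians stand-in (not the starlet or aperture-mass kernels actually compared in the paper): the effective kernel integrates to zero, which is why peak counts at different scales carry largely independent information:

    from scipy.ndimage import gaussian_filter

    def compensated_smooth(kappa_map, scale):
        """Band-pass filter a convergence map as the difference of two
        Gaussian smoothings; the effective kernel has zero mean."""
        return gaussian_filter(kappa_map, scale) - gaussian_filter(kappa_map, 2.0 * scale)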

Dealing with missing data: An inpainting application to the MICROSCOPE space mission

Authors: J. Bergé, S. Pires, Q. Baghi, P. Touboul, G. Métris
Journal: Physical Review D
Year: 2015
Download: ADS | arXiv


Abstract

Missing data are a common problem in experimental and observational physics. They can be caused by various sources, such as instrument saturation, contamination from an external event, or data loss. In particular, they can have a disastrous effect when one is seeking to characterize a colored-noise-dominated signal in Fourier space, since they create a spectral leakage that can artificially increase the noise. It is therefore important to either take them into account or to correct for them prior to, e.g., a least-squares fit of the signal to be characterized. In this paper, we present an application of the inpainting algorithm to mock MICROSCOPE data; inpainting is based on a sparsity assumption and has already been used in various astrophysical contexts; MICROSCOPE is a French Space Agency mission, whose launch is expected in 2016, that aims to test the weak equivalence principle down to the 10⁻¹⁵ level. We then explore the dependence of inpainting on the number of gaps and the total fraction of missing values. We show that, in a worst-case scenario, after reconstructing missing values with inpainting, a least-squares fit may allow us to significantly measure a 1.1 × 10⁻¹⁵ equivalence principle violation signal, which is sufficiently close to the MICROSCOPE requirements to implement inpainting in the official MICROSCOPE data processing and analysis pipeline. Together with the previously published KARMA method, inpainting will then allow us to independently characterize and cross-check an equivalence principle violation signal detection down to the 10⁻¹⁵ level.
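
The spectral-leakage effect is easy to reproduce (a toy demonstration, not MICROSCOPE data): gaps multiply the signal by a window function, whose transform convolves and broadens the line being searched for:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(4096)
    y = np.sin(2 * np.pi * 0.01 * t) + 0.1 * rng.standard_normal(t.size)

    mask = np.ones(t.size)
    mask[500:700] = 0.0       # two artificial gaps in the time series
    mask[2000:2300] = 0.0

    p_full = np.abs(np.fft.rfft(y))**2           # clean periodogram: one sharp line
    p_gapped = np.abs(np.fft.rfft(y * mask))**2  # gaps leak power into sidelobes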

A new model to predict weak-lensing peak counts II. Parameter constraint strategies

Authors: C. Lin, M. Kilbinger
Journal: A&A
Year: 2015
Download: ADS | arXiv


Abstract

Peak counts have been shown to be an excellent tool to extract the non-Gaussian part of the weak-lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analyses. In this work, we explore and compare various strategies for constraining parameters using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. First, we examine the impact of the cosmological dependence of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique which makes a weaker assumption than the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. We find that neglecting the CDC effect enlarges parameter contours by 22%, and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses, and the time cost is reduced by two orders of magnitude. The stochastic nature of our weak-lensing peak-count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
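
For readers unfamiliar with ABC, the elementary accept-reject step looks as follows (a bare-bones rejection sampler for illustration; the paper uses an iterative, population-based variant with a shrinking tolerance, and the function names here are placeholders):

    import numpy as np

    def abc_rejection(x_obs, sample_prior, simulate, distance, eps, n_draws):
        """Likelihood-free inference: draw parameters from the prior, run the
        stochastic forward model, and keep draws whose simulated summary
        statistic lies within eps of the observed one."""
        kept = []
        for _ in range(n_draws):
            theta = sample_prior()      # e.g. a draw of (Omega_m, sigma_8)
            x_sim = simulate(theta)     # forward-modelled peak counts
            if distance(x_sim, x_obs) < eps:
                kept.append(theta)
        return np.array(kept)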

CFHTLenS: weak lensing constraints on the ellipticity of galaxy-scale matter haloes and the galaxy-halo misalignment

Authors: T. Schrabback et al.
Journal: MNRAS
Year: 2015
Download: ADS | arXiv


Abstract

We present weak lensing constraints on the ellipticity of galaxy-scale matter haloes and the galaxy-halo misalignment. Using data from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), we measure the weighted-average ratio of the aligned projected ellipticity components of galaxy matter haloes and their embedded galaxies, f_h, split by galaxy type. We then compare our observations to measurements taken from the Millennium Simulation, assuming different models of galaxy-halo misalignment. Using the Millennium Simulation we verify that the statistical estimator used removes contamination from cosmic shear. We also detect an additional signal in the simulation, which we interpret as the impact of intrinsic shape-shear alignments between the lenses and their large-scale structure environment. These alignments are likely to have caused some of the previous observational constraints on f_h to be biased high. From CFHTLenS we find f_h = -0.04 ± 0.25 for early-type galaxies, which is consistent with current models for the galaxy-halo misalignment predicting f_h ≃ 0.20. For late-type galaxies we measure f_h = 0.69^{+0.37}_{-0.36} from CFHTLenS. This can be compared to the simulated results, which yield f_h ≃ 0.02 for misaligned late-type models.

CFHTLenS: A Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics

Authors: P. Simon, ..., M. Kilbinger, et al.
Journal: MNRAS
Year: 2015
Download: ADS | arXiv


Abstract

We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopted a non-Gaussian model of the data likelihood, which is supported by our simulations of the survey. We find that for state-of-the-art surveys similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, although small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find good agreement with the standard cosmological model: Σ₈ = σ₈(Ωm/0.27)^0.64 = 0.79^{+0.08}_{-0.11} for a flat ΛCDM cosmology with h = 0.7 ± 0.04 (68% credible interval). Nevertheless, our models provide only moderately good fits, as indicated by χ²/dof = 2.9, including a 20% r.m.s. uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual PSF systematics, of which we find evidence at least on scales of a few arcmin. Therefore, we need a better understanding of higher-order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.

A new model to predict weak-lensing peak counts I. Comparison with N-body Simulations

Authors: C. Lin, M. Kilbinger
Journal: A&A
Year: 2015
Download: ADS | arXiv


Abstract

Weak-lensing peak counts have been shown to be a powerful tool for cosmology. They provide non-Gaussian information on large-scale structure, complementary to second-order statistics. We propose a new flexible method to predict weak-lensing peak counts, which can be adapted to realistic scenarios, such as a real source distribution, intrinsic galaxy alignment, mask effects, photo-z errors from surveys, etc. The new model is also suitable for applying tomography and non-linear filters. A probabilistic approach to model peak counts is presented. First, we sample halos from a mass function. Second, we assign them NFW profiles. Third, we place those halos randomly on the field of view. The creation of these "fast simulations" requires much less computing time compared to N-body runs. Then, we perform ray-tracing through these fast-simulation boxes and select peaks from weak-lensing maps to predict peak number counts. The computation is achieved by our Camelus algorithm, which we make publicly available online. We compare our results to N-body simulations to validate our model and find that our approach is in good agreement with full N-body runs. We show that the lensing signal dominates shape noise and Poisson noise for peaks with signal-to-noise ratio between 4 and 6, and that counts in this range are sensitive to Ωm and σ8. We show how our model can discriminate between various combinations of those two parameters. In summary, we offer a powerful tool to study weak-lensing peaks. The potential of our forward model lies in its high flexibility, which makes the use of peak counts under realistic survey conditions feasible.
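
The probabilistic recipe can be condensed into a few lines (a deliberately crude stand-in: a power-law toy mass function and Gaussian kappa blobs instead of a calibrated mass function, NFW profiles, and ray-tracing):

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    rng = np.random.default_rng(1)
    npix, n_halos = 512, 400
    kappa = np.zeros((npix, npix))

    # 1) sample halo masses from a toy power-law mass function
    masses = (1.0 - rng.random(n_halos)) ** (-1.0 / 1.5)

    # 2)-3) assign each halo a (Gaussian) profile and a random position
    xx, yy = np.meshgrid(np.arange(npix), np.arange(npix))
    for m in masses:
        x0, y0 = rng.integers(0, npix, 2)
        kappa += 1e-3 * m * np.exp(-((xx - x0)**2 + (yy - y0)**2) / 8.0)

    # add shape noise, smooth, and count local maxima above a threshold
    noisy = kappa + 0.02 * rng.standard_normal(kappa.shape)
    smoothed = gaussian_filter(noisy, 2.0)
    is_peak = (smoothed == maximum_filter(smoothed, size=5)) \
              & (smoothed - smoothed.mean() > 3.0 * smoothed.std())
    print("number of peaks:", int(is_peak.sum()))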