Origins of weak lensing systematics, and requirements on future instrumentation (or knowledge of instrumentation)

 

Authors: R. Massey, H. Hoekstra, T. Kitching, ..., S. Pires et al.
Journal: MNRAS
Year: 2013
Download: ADS | arXiv


Abstract

The first half of this paper explores the origin of systematic biases in the measurement of weak gravitational lensing. Compared to previous work, we expand the investigation of point spread function instability and fold in for the first time the effects of non-idealities in electronic imaging detectors and imperfect galaxy shape measurement algorithms. Together, these now explain the additive A(ℓ) and multiplicative M(ℓ) systematics typically reported in current lensing measurements. We find that overall performance is driven by the product of a telescope/camera's absolute performance and our knowledge of that performance.

The second half of this paper propagates any residual shear measurement biases through to their effect on cosmological parameter constraints. Fully exploiting the statistical power of Stage IV weak lensing surveys will require additive biases A̅ ≲ 1.8 × 10^{-12} and multiplicative biases M̅ ≲ 4.0 × 10^{-3}. These can be allocated between individual budgets in hardware, calibration data and software, using results from the first half of the paper.

If instrumentation is stable and well calibrated, we find that extant shear measurement software from Gravitational Lensing Accuracy Testing 2010 (GREAT10) already meets requirements on galaxies detected at signal-to-noise ratio = 40. Averaging over a population of galaxies with a realistic distribution of sizes, it also meets requirements for a 2D cosmic shear analysis from space. If used on fainter galaxies or for 3D cosmic shear tomography, existing algorithms would need calibration on simulations to avoid introducing bias at a level similar to the statistical error. Requirements on hardware and calibration data are discussed in more detail in a companion paper. Our analysis is intentionally general, but is specifically being used to drive the hardware and ground-segment performance budget for the design of the European Space Agency's recently selected Euclid mission.

Fast Calculation of the Weak Lensing Aperture Mass Statistic

 

Authors: A. Leonard, S. Pires, J.-L. Starck
Journal: MNRAS
Year: 2012
Download: ADS | arXiv


Abstract

The aperture mass statistic is a common tool used in weak lensing studies. By convolving lensing maps with a filter function of a specific scale, chosen to be larger than the scale on which the noise is dominant, the lensing signal may be boosted with respect to the noise. This allows for detection of structures at increased fidelity. Furthermore, higher-order statistics of the aperture mass (such as its skewness or kurtosis), or counting of the peaks seen in the resulting aperture mass maps, provide a convenient and effective method to constrain the cosmological parameters. In this paper, we more fully explore the formalism underlying the aperture mass statistic. We demonstrate that the aperture mass statistic is formally identical to a wavelet transform at a specific scale. Further, we show that the filter functions most frequently used in aperture mass studies are not ideal, being non-local in both real and Fourier space. In contrast, the wavelet formalism offers a number of wavelet functions that are localized both in real and Fourier space, yet similar to the 'optimal' aperture mass filters commonly adopted. Additionally, for a number of wavelet functions, such as the starlet wavelet, very fast algorithms exist to compute the wavelet transform. This offers significant advantages over the usual aperture mass algorithm when it comes to image processing time, demonstrating speed-up factors of ~ 5 - 1200 for aperture radii in the range 2 to 64 pixels on an image of 1024 x 1024 pixels.
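The speed-up quoted above comes from the à trous filter-bank structure of the starlet transform. As an illustrative sketch (not the paper's own implementation; the function name and scale count are arbitrary), a starlet transform built from separable B3-spline smoothing looks like this, with each wavelet plane acting as an aperture-mass-like map at its characteristic scale:

```python
import numpy as np
from scipy.ndimage import convolve1d

# 1-D B3-spline kernel used by the starlet ("a trous") transform.
B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def starlet_transform(image, n_scales):
    """Undecimated starlet transform: returns a list of wavelet planes
    plus the final smoothed approximation.  Each wavelet plane plays the
    role of an aperture-mass-like map at its characteristic scale."""
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps.
        step = 2 ** j
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3
        smoothed = convolve1d(convolve1d(c, kernel, axis=0, mode='mirror'),
                              kernel, axis=1, mode='mirror')
        planes.append(c - smoothed)   # wavelet coefficients at scale j
        c = smoothed                  # feed the smoothed map to the next scale
    planes.append(c)                  # coarse residual
    return planes
```

Because each scale costs a fixed number of operations per pixel regardless of the aperture radius, the total cost stays linear in the number of pixels per scale, which is the source of the large speed-ups over direct convolution with an aperture filter that grows with the radius. The transform is also exactly invertible: summing all planes recovers the input map.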

Wavelet Helmholtz decomposition for weak lensing mass map reconstruction

 

Authors: E. Deriaz, J.-L. Starck, S. Pires
Journal: A&A
Year: 2012
Download: ADS | arXiv


Abstract

To derive the convergence field from the gravitational shear (gamma) of background galaxy images, classical methods convolve the shear over the entire sky, usually via the fast Fourier transform (FFT). However, this approach is not optimal for surveys with imperfect geometry, and the FFT implicitly assumes periodic boundary conditions that introduce errors into the reconstruction. A method has been proposed that relies on computing an intermediate field u, which combines the derivatives of gamma, and on convolution with a Green kernel. In this paper, we study the wavelet Helmholtz decomposition as a new approach to reconstructing the dark matter mass map. We show that a link exists between the Helmholtz decomposition and the E/B mode separation. We introduce a new wavelet construction whose properties give us more flexibility in handling the border problem, and we propose a new method of reconstructing the dark matter mass map in wavelet space. A set of experiments based on noise-free images illustrates that this wavelet Helmholtz decomposition reconstructs the borders better than existing methods.

Cosmological constraints from the capture of non-Gaussianity in Weak Lensing data

 

Authors: S. Pires, A. Leonard,  J.-L. Starck
Journal: MNRAS
Year: 2012
Download: ADS | arXiv


Abstract

Weak gravitational lensing has become a common tool to constrain the cosmological model. The majority of methods to derive constraints on cosmological parameters use second-order statistics of the cosmic shear. Despite their success, second-order statistics are not optimal and degeneracies between some parameters remain. Tighter constraints can be obtained if second-order statistics are combined with a statistic that efficiently captures non-Gaussianity. In this paper, we search for such a statistical tool and show that there is additional information to be extracted from statistical analysis of the convergence maps beyond what can be obtained from statistical analysis of the shear field. For this purpose, we have carried out a large number of cosmological simulations along the σ8–Ωm degeneracy, and we have considered three different statistics commonly used to characterize non-Gaussian features: skewness, kurtosis and peak counts. To investigate non-Gaussianity directly in the shear field, we have used the aperture mass definition of these three statistics at different scales. The results have then been compared with those obtained with the same statistics estimated in the convergence maps at the same scales. First, we show that shear statistics give constraints similar to those given by convergence statistics if the same scale is considered. In addition, we find that the peak count statistic is the best at capturing non-Gaussianities in the weak lensing field and at breaking the σ8–Ωm degeneracy. We show that this statistical analysis should preferably be conducted in the convergence maps: first, because fast algorithms exist to compute the convergence map at different scales, and second, because it offers the opportunity to denoise the reconstructed convergence map, which improves the extraction of non-Gaussian features.
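As an illustration of the peak count statistic (a generic sketch, not the paper's pipeline; the signal-to-noise thresholding convention is an assumption), counting peaks in a convergence map amounts to locating local maxima above a threshold:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def count_peaks(kappa, threshold_snr, sigma_noise):
    """Count local maxima of a convergence map above a given S/N threshold.
    A pixel counts as a peak if it is the maximum of its 3x3 neighbourhood
    and exceeds threshold_snr times the noise level."""
    is_local_max = kappa == maximum_filter(kappa, size=3, mode='mirror')
    return int(np.sum(is_local_max & (kappa > threshold_snr * sigma_noise)))
```

Repeating the count over a grid of thresholds (and filter scales) yields the peak-count data vector that is then compared across cosmologies.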

Cosmological model discrimination with weak lensing

 

Authors: S. Pires, J.-L. Starck, A. Amara, A. Réfrégier, R. Teyssier
Journal: Astronomy & Astrophysics
Year: 2009
Download: ADS


Abstract

Weak gravitational lensing provides a unique way of mapping the dark matter in the Universe directly. The majority of lensing analyses use the two-point statistics of the cosmic shear field to constrain the cosmological model, a method that is affected by degeneracies, such as that between σ8 and Ωm, which are respectively the rms of the mass fluctuations on a scale of 8 Mpc/h and the matter density parameter, both at z = 0. However, the two-point statistics only measure the Gaussian properties of the field, and the weak lensing field is non-Gaussian. It has been shown that estimating non-Gaussian statistics of weak lensing data can improve the constraints on cosmological parameters. In this paper, we systematically compare a wide range of non-Gaussian estimators to determine which one provides the tightest constraints on the cosmological parameters. These statistical methods include skewness, kurtosis, and the higher criticism test, in several sparse representations such as wavelets and curvelets, as well as the bispectrum, peak counting, and a newly introduced statistic called wavelet peak counting (WPC). Comparisons based on sparse representations indicate that the wavelet transform is the most sensitive to non-Gaussian cosmological structures. It also appears that the most helpful statistic for non-Gaussian characterization in weak lensing mass maps is the WPC. Finally, we show that the σ8–Ωm degeneracy could be broken even further if the WPC estimation is performed on weak lensing mass maps filtered by the wavelet method MRLens.

FASTLens (FAst STatistics for weak Lensing): Fast method for Weak Lensing Statistics and map making

 

Authors: S. Pires, J.-L. Starck, A. Amara, R. Teyssier, A. Refregier, J. Fadili
Journal: MNRAS
Year: 2009
Download: ADS | arXiv


Abstract

With increasingly large data sets, weak lensing measurements are able to measure cosmological parameters with ever greater precision. However, this increased accuracy also places greater demands on the statistical tools used to extract the available information. To date, the majority of lensing analyses use the two-point statistics of the cosmic shear field. These can be studied either directly, using the two-point correlation function, or in Fourier space, using the power spectrum. But analyzing weak lensing data inevitably involves masking out regions, for example to remove bright stars from the field. Masking out the stars is common practice, but the gaps in the data need proper handling. In this paper, we show how an inpainting technique allows us to properly fill in these gaps with only O(N log N) operations, leading to a new image from which we can compute both the power spectrum and the bispectrum straightforwardly and with very good accuracy. We then propose a new method to compute the bispectrum with a polar FFT algorithm, which has the main advantage of avoiding any interpolation in the Fourier domain. Finally, we propose a new method for dark matter mass map reconstruction from shear observations which integrates this inpainting concept. A range of examples based on 3D N-body simulations illustrates the results.
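The inpainting idea can be sketched generically as iterative hard thresholding in a sparsifying dictionary. The toy version below uses a DCT dictionary and a linearly decaying threshold; it is a deliberately simplified illustration of this class of algorithm, not the FASTLens implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def inpaint(image, mask, n_iter=100):
    """Fill masked gaps by iterative hard thresholding of DCT coefficients,
    with a linearly decreasing threshold.  mask is True where data exist."""
    x = np.where(mask, image, 0.0)
    lam_max = np.max(np.abs(dctn(x, norm='ortho')))
    for i in range(n_iter):
        lam = lam_max * (1.0 - (i + 1) / n_iter)   # threshold decays to zero
        coeffs = dctn(x, norm='ortho')
        coeffs[np.abs(coeffs) < lam] = 0.0         # keep only the strong modes
        x = idctn(coeffs, norm='ortho')
        x = np.where(mask, image, x)               # re-impose observed pixels
    return x
```

Observed pixels are re-imposed at every iteration, so the algorithm only ever invents values inside the gaps, and the cost per iteration is dominated by the O(N log N) transforms.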

Full-Sky Weak Lensing Simulation with 70 Billion Particles

 

Authors: R. Teyssier, S. Pires,  ... , J.-L. Starck et al.
Journal: A&A
Year: 2009
Download: ADS | arXiv


Abstract

We have performed an N-body simulation of 70 billion dark-matter particles in a 2 h^{-1} Gpc periodic box, using the concordance cosmological model favored by the latest WMAP3 results. We have computed a full-sky convergence map with a resolution of Δθ² ≈ 0.74 arcmin², spanning 4 orders of magnitude in angular dynamical range. Using various high-order statistics on a realistic cut sky, we have characterized the transition from the linear to the nonlinear regime at ℓ ≈ 1000 and shown that realistic galactic masking affects high-order moments only below ℓ ≈ 200. Each domain (Gaussian and non-Gaussian) spans 2 decades in angular scale. This map is therefore an ideal tool for testing map-making algorithms on the sphere. As a first step in addressing the full map reconstruction problem, we have benchmarked in this paper two denoising methods: 1) Wiener filtering applied to the spherical harmonic decomposition of the map, and 2) a new method, called MRLens, based on a modification of the maximum entropy method applied to a wavelet decomposition. While Wiener filtering is optimal on large spatial scales, where the signal is Gaussian, MRLens outperforms it on small spatial scales, where the signal is highly non-Gaussian. The simulated full-sky convergence map is freely available to the community to help the development of new map-making algorithms dedicated to the next generation of weak-lensing surveys.
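The first benchmarked method has a simple flat-sky analogue: a Fourier-space Wiener filter that weights each mode by S/(S+N). The sketch below is illustrative only (not the paper's spherical-harmonic code) and assumes the signal and noise power spectra are known:

```python
import numpy as np

def wiener_filter(data, signal_power, noise_power):
    """Flat-sky analogue of the harmonic-space Wiener filter: multiply each
    Fourier mode by S/(S+N), given (known or modelled) power spectra.  The
    power arguments may be scalars or arrays matching the fft2 grid."""
    d_hat = np.fft.fft2(data)
    w = signal_power / (signal_power + noise_power)  # per-mode weight in [0, 1]
    return np.real(np.fft.ifft2(w * d_hat))
```

On the sphere the same weighting is applied per spherical-harmonic mode. Its optimality for Gaussian signals on large scales, and its tendency to damp non-Gaussian small-scale structure, both follow from this linear, mode-by-mode damping.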

Dark matter maps reveal cosmic scaffolding

 

Authors: R. Massey, ... , J.-L. Starck, ..., S. Pires et al.
Journal: Nature
Year: 2007
Download: ADS | arXiv


Abstract

Ordinary baryonic particles (such as protons and neutrons) account for only one-sixth of the total matter in the Universe. The remainder is a mysterious "dark matter" component, which does not interact via electromagnetism and thus neither emits nor reflects light. As dark matter cannot be seen directly using traditional observations, very little is currently known about its properties. It does interact via gravity, and is most effectively probed through gravitational lensing: the deflection of light from distant galaxies by the gravitational attraction of foreground mass concentrations. This is a purely geometrical effect that is free of astrophysical assumptions and sensitive to all matter, whether baryonic or dark. Here we show high-fidelity maps of the large-scale distribution of dark matter, resolved in both angle and depth. We find a loose network of filaments, growing over time, which intersect in massive structures at the locations of clusters of galaxies. Our results are consistent with predictions of gravitationally induced structure formation, in which the initial, smooth distribution of dark matter collapses first into filaments and then into clusters, forming a gravitational scaffold into which gas can accumulate and stars can be built.

Sunyaev-Zel'dovich clusters reconstruction in multiband bolometer camera surveys

 

Authors: S. Pires, D. Yvon, Y. Moudden, S. Anthoine, E. Pierpaoli
Journal: A&A
Year: 2006
Download: ADS | arXiv


Abstract

We present a new method for the reconstruction of Sunyaev-Zel'dovich (SZ) galaxy clusters in future SZ-survey experiments using multiband bolometer cameras such as Olimpo, APEX, or Planck. Our goal is to optimise SZ-cluster extraction from our observed noisy maps. We wish to emphasize that none of the algorithms used in the detection chain is tuned on prior knowledge of the SZ-cluster signal or of other astrophysical sources (optical spectrum, noise covariance matrix, or covariance of SZ-cluster wavelet coefficients). First, a blind separation of the different astrophysical components that contribute to the observations is conducted using an Independent Component Analysis (ICA) method. Then, a recent nonlinear filtering technique in the wavelet domain, based on multiscale entropy and the False Discovery Rate (FDR) method, is used to detect and reconstruct the galaxy clusters. Finally, we use the Source Extractor software to identify the detected clusters. The proposed method was applied to realistic simulations of observations. In terms of global detection efficiency, this new method provides results comparable to those of the Pierpaoli et al. method, while being a blind algorithm. A preprint with full-resolution figures is available at the URL: w10-dapnia.saclay.cea.fr/Phocea/Vie_des_labos/Ast/ast_visu.php?id_ast=728

Curvelet analysis of asteroseismic data

 

Authors: P. Lambert, S. Pires, J. Ballot, R.A. Garcia, J.-L. Starck, S. Turck-Chièze
Journal: A&A
Year: 2006
Download: ADS | arXiv


Abstract

Context. The detection and identification of oscillation modes (in terms of their l, m, and successive n) is a great challenge for present and future asteroseismic space missions. “Peak tagging” is an important step in the analysis of these data, providing estimates of stellar oscillation mode parameters, i.e., frequencies and rotation rates, and enabling further studies of the stellar structure.
Aims. Our goal is to increase the signal-to-noise ratio of the asteroseismic spectra computed from the time series that are representative of MOST and CoRoT observations (30- and 150-day observations).

Methods. We apply the curvelet transform – a recent image processing technique that looks for curved patterns – to echelle diagrams built using asteroseismic power spectra. In the resulting diagram, the eigenfrequencies appear as smooth continuous ridges. To test the method, we use Monte-Carlo simulations of several sun-like stars with different combinations of rotation rates, rotation-axis inclination, and signal-to-noise ratios.

Results. The filtered diagrams enhance the contrast between the ridges of the modes and the background, allowing a better tagging of the modes and a better extraction of some stellar parameters. Monte-Carlo simulations have also shown that the region where modes can be detected is enlarged at lower and higher frequencies compared to the raw spectra. In addition, the extraction of the mean rotational splitting from modes at low frequency can be done more easily using the filtered spectra rather than the raw spectra.
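The echelle diagrams that the curvelet filtering operates on are simple to construct. A minimal sketch (function name and conventions are illustrative, not from the paper) folds the power spectrum at the large frequency separation Δν:

```python
import numpy as np

def echelle_diagram(power, freq_res, delta_nu):
    """Fold a 1-D power spectrum into an echelle diagram: stack segments of
    length delta_nu (the large separation) so that modes of successive radial
    order line up as near-vertical ridges.  freq_res is the frequency
    resolution per bin, in the same units as delta_nu."""
    n_cols = int(round(delta_nu / freq_res))     # bins per large separation
    n_rows = len(power) // n_cols
    return power[:n_rows * n_cols].reshape(n_rows, n_cols)
```

Modes of the same degree l then appear as near-vertical, slowly curving ridges, which is exactly the curved-pattern structure that the curvelet transform is designed to enhance.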