Euclid: The reduced shear approximation and magnification bias for Stage IV cosmic shear experiments


Authors: A.C. Deshpande, ..., S. Casas, M. Kilbinger, V. Pettorino, S. Pires, J.-L. Starck, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2020
DOI: 10.1051/0004-6361/201937323
Download: ADS | arXiv


Abstract

Stage IV weak lensing experiments will offer more than an order of magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed. In this work, we evaluate the impact of the reduced shear approximation and magnification bias on the information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies, and the dilution of their surface densities, in high-magnification regions. The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculated the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employed the Fisher matrix formalism and included the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward-modelling approach. These effects cause significant biases in Omega_m, sigma_8, n_s, Omega_DE, w_0, and w_a of -0.53 sigma, 0.43 sigma, -0.34 sigma, 1.36 sigma, -0.68 sigma, and 1.21 sigma, respectively. We then show that these lensing biases interact with another systematic effect: the intrinsic alignment of galaxies. Accordingly, we develop the formalism for an intrinsic-alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
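The bias values quoted above come from the Fisher-matrix formalism; a minimal sketch of that machinery, with entirely synthetic toy numbers standing in for the Euclid power spectra, derivatives, and covariance, is:

```python
import numpy as np

# Toy-sized sketch: 2 parameters, 5 multipole bins; all numbers synthetic.
rng = np.random.default_rng(0)
n_par, n_ell = 2, 5

# dC[a, l]: derivative of the angular power spectrum w.r.t. parameter a.
dC = rng.normal(size=(n_par, n_ell))
# inv_cov: inverse covariance of the C_ell measurements (diagonal toy case).
inv_cov = np.diag(np.full(n_ell, 4.0))
# delta_C: the neglected correction (e.g. reduced shear + magnification bias).
delta_C = 0.1 * rng.normal(size=n_ell)

# Fisher matrix F_ab and bias vector B_a.
F = dC @ inv_cov @ dC.T
B = dC @ inv_cov @ delta_C

# Induced parameter biases: b = F^-1 B; "x sigma" values like those quoted
# above correspond to b divided by the marginalised errors sqrt((F^-1)_aa).
bias = np.linalg.solve(F, B)
sigma = np.sqrt(np.diag(np.linalg.inv(F)))
print(bias / sigma)
```

The published numbers result from evaluating the derivatives and the correction with the full tomographic power spectra; this sketch only shows the algebraic step from a neglected correction to a parameter bias.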

Euclid: The selection of quiescent and star-forming galaxies using observed colours


Authors: L. Bisigello, ..., V. Pettorino, S. Pires, F. Sureau, et al.
Journal: MNRAS
Year: 2020
DOI: 10.1093/mnras/staa885
Download: ADS | arXiv


Abstract

The Euclid mission will observe well over a billion galaxies out to z ∼ 6 and beyond. This will offer an unrivalled opportunity to investigate several key questions for understanding galaxy formation and evolution. The first step for many of these studies will be the selection of a sample of quiescent and star-forming galaxies, as is often done in the literature by using well-known colour techniques such as the `UVJ' diagram. However, given the limited number of filters available for the Euclid telescope, the recovery of such rest-frame colours will be challenging. We therefore investigate the use of observed Euclid colours, on their own and together with ground-based u-band observations, for selecting quiescent and star-forming galaxies. The most efficient colour combination, among the ones tested in this work, consists of the (u-VIS) and (VIS-J) colours. We find that this combination allows us to select a sample of quiescent galaxies that is more than 70% complete, with less than 15% contamination, at redshifts in the range 0.75<z<1. For galaxies at higher redshift or without complementary u-band observations, the (VIS-Y) and (J-H) colours represent a valid alternative, with >65% completeness and contamination below 20% at 1<z<2 for finding quiescent galaxies. In comparison, the sample of quiescent galaxies selected with the traditional UVJ technique is only 20% complete at z<3 when the rest-frame colours are recovered from mock Euclid observations. This shows that our new methodology is the most suitable one when only Euclid bands, along with u-band imaging, are available.
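The completeness and contamination figures of merit used above can be illustrated with a toy colour-colour selection; the colour distributions and the cuts below are entirely synthetic stand-ins, not the paper's calibrated selection:

```python
import numpy as np

# Entirely synthetic colours and cuts, for illustration only.
rng = np.random.default_rng(6)
n = 10_000
is_quiescent = rng.random(n) < 0.3      # true class (synthetic)

# Quiescent galaxies are redder in both colours in this toy model.
u_vis = np.where(is_quiescent, rng.normal(2.2, 0.4, n), rng.normal(1.2, 0.4, n))
vis_j = np.where(is_quiescent, rng.normal(1.0, 0.3, n), rng.normal(0.6, 0.3, n))

# A simple pair of colour cuts (illustrative, not the paper's cuts).
selected = (u_vis > 1.7) & (vis_j > 0.75)

# Completeness: fraction of true quiescent galaxies that are selected.
# Contamination: fraction of the selected sample that is not quiescent.
completeness = (selected & is_quiescent).sum() / is_quiescent.sum()
contamination = (selected & ~is_quiescent).sum() / selected.sum()
print(f"completeness = {completeness:.2f}, contamination = {contamination:.2f}")
```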

Euclid preparation: VI. Verifying the Performance of Cosmic Shear Experiments


Authors: Euclid Collaboration, P. Paykari, ..., S. Farrens, M. Kilbinger, V. Pettorino, S. Pires, J.-L. Starck, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2020
DOI: 10.1051/0004-6361/201936980
Download: ADS | arXiv


Abstract

Our aim is to quantify the impact of systematic effects on the inference of cosmological parameters from cosmic shear. We present an end-to-end approach that introduces sources of bias in a modelled weak lensing survey on a galaxy-by-galaxy level. Residual biases are propagated through a pipeline from galaxy properties (one end) through to cosmic shear power spectra and cosmological parameter estimates (the other end), to quantify how imperfect knowledge of the pipeline changes the maximum likelihood values of dark energy parameters. We quantify the impact of an imperfect correction for charge transfer inefficiency (CTI) and modelling uncertainties of the point spread function (PSF) for Euclid, and find that the biases introduced can be corrected to acceptable levels.

Space test of the Equivalence Principle: first results of the MICROSCOPE mission


Authors: P. Touboul, G. Metris, M. Rodrigues, Y. André, Q. Baghi, J. Bergé, D. Boulanger, S. Bremer, R. Chhun, B. Christophe, V. Cipolla, T. Damour, P. Danto, H. Dittus, P. Fayet, B. Foulon, P.-Y. Guidotti, E. Hardy, P.-A. Huynh, C. Lämmerzahl, V. Lebat, F. Liorzou, M. List, I. Panel, S. Pires, B. Pouilloux, P. Prieur, S. Reynaud, B. Rievers, A. Robert, H. Selig, L. Serron, T. Sumner, P. Viesser
Journal: Classical and Quantum Gravity
Year: 2019
Download: ADS | arXiv


Abstract

The Weak Equivalence Principle (WEP), stating that two bodies of different compositions and/or masses fall at the same rate in a gravitational field (universality of free fall), is at the very foundation of General Relativity. The MICROSCOPE mission aims to test its validity to a precision of 10^-15, two orders of magnitude better than current on-ground tests, by using two masses of different compositions (titanium and platinum alloys) on a quasi-circular trajectory around the Earth. This is realised by measuring the accelerations inferred from the forces required to maintain the two masses exactly in the same orbit. Any significant difference between the measured accelerations, occurring at a defined frequency, would correspond to the detection of a violation of the WEP, or to the discovery of a tiny new type of force added to gravity. MICROSCOPE's first results show no hint of such a difference, expressed in terms of the Eötvös parameter δ = [-1 ± 9 (stat) ± 9 (syst)] × 10^-15 (both 1σ uncertainties) for a titanium and platinum pair of materials. This result was obtained during a session with 120 orbital revolutions, representing 7% of the data acquired so far during the whole mission. The quadratic combination of the 1σ uncertainties leads to a current limit on δ of about 1.3 × 10^-14.
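The final limit follows from combining the quoted statistical and systematic 1σ uncertainties in quadrature, which is easy to verify:

```python
import math

# 1-sigma statistical and systematic uncertainties quoted above.
stat = 9e-15
syst = 9e-15

# Quadratic (in-quadrature) combination.
delta_limit = math.sqrt(stat**2 + syst**2)
print(f"{delta_limit:.2e}")  # ≈ 1.3e-14, as quoted in the abstract
```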

Dealing with missing data in the MICROSCOPE space mission: An adaptation of inpainting to handle colored-noise data

Authors: S. Pires, J. Bergé, Q. Baghi, P. Touboul, G. Metris
Journal: Physical Review D
Year: 2016
Download: ADS | arXiv


Abstract

The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) to a 10^-15 precision. Reaching this performance requires an accurate and robust data analysis method, especially since the possible WEP violation signal will be dominated by a strongly colored noise. An important complication is brought by the fact that some values will be missing; therefore, the measured time series will not be strictly regularly sampled. Those missing values induce a spectral leakage that significantly increases the noise in Fourier space, where the WEP violation signal is looked for, thereby complicating scientific returns. Recently, we developed an inpainting algorithm to correct the MICROSCOPE data for missing values. This code has been integrated in the official MICROSCOPE data processing and analysis pipeline because it enables us to significantly measure an equivalence principle violation (EPV) signal in a model-independent way, in the inertial satellite configuration. In this work, we present several improvements to the method that may now allow us to reach the MICROSCOPE requirements for both inertial and spin satellite configurations. The main improvement has been obtained using a prior on the power spectrum of the colored noise that can be directly derived from the incomplete data. We show that, after reconstructing missing values with this new algorithm, a least-squares fit may allow us to significantly measure an EPV signal with a 0.96 × 10^-15 precision in the inertial mode and a 1.20 × 10^-15 precision in the spin mode. Although the inpainting method presented in this paper has been optimized for the MICROSCOPE data, it remains sufficiently general to be used in the general context of missing data in time series dominated by an unknown colored noise. The improved inpainting software, ICON (inpainting for colored-noise dominated signals), is freely available at http://www.cosmostat.org/software/icon.
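The sparsity-based inpainting idea can be sketched in a few lines. The toy below uses plain iterative hard thresholding in Fourier space with a linearly decreasing threshold on a synthetic gappy time series; the actual ICON code adds, among other things, the colored-noise power spectrum prior described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
# Synthetic time series: a sinusoid (stand-in for a potential EPV tone) + noise.
signal = np.sin(2 * np.pi * 8 * t / n) + 0.3 * rng.normal(size=n)

# Mask out ~20% of the samples to mimic data gaps.
mask = rng.random(n) > 0.2
data = np.where(mask, signal, 0.0)

# Iterative hard thresholding in Fourier space, threshold decreasing to zero.
x = data.copy()
n_iter = 50
lam_max = np.abs(np.fft.rfft(data)).max()
for i in range(n_iter):
    lam = lam_max * (1 - i / n_iter)
    c = np.fft.rfft(x)
    c[np.abs(c) < lam] = 0.0           # keep only the significant modes
    x = np.fft.irfft(c, n)
    x[mask] = data[mask]               # enforce fidelity to observed samples

err = np.sqrt(np.mean((x[~mask] - signal[~mask]) ** 2))
print(f"rms error on the gaps: {err:.3f}")
```

The reconstruction error on the gaps is dominated by the irrecoverable noise realisation there; the point of the sketch is that the strong spectral line survives the gaps instead of leaking across the spectrum.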

High Resolution Weak Lensing Mass-Mapping Combining Shear and Flexion

Authors: F. Lanusse, J.-L. Starck, A. Leonard, S. Pires
Journal: A&A
Year: 2016
Download: ADS | arXiv


Abstract

Aims: We propose a new mass mapping algorithm, specifically designed to recover small-scale information from a combination of gravitational shear and flexion. Including flexion allows us to supplement the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map without relying on strong lensing constraints.
Methods: To preserve all available small-scale information, we avoid any binning of the irregularly sampled input shear and flexion fields and treat the mass mapping problem as a general ill-posed inverse problem, which is regularised using a robust multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
Results: We tested our reconstruction method on a set of realistic weak lensing simulations corresponding to typical HST/ACS cluster observations and demonstrated that including flexion allows us to recover substructures that are otherwise lost if only shear information is used. In particular, we can detect substructures on the 15'' scale well outside of the critical region of the clusters. In addition, flexion also helps to constrain the shape of the central regions of the main dark matter halos.
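The fast Fourier estimators mentioned in the Methods are of the standard Kaiser-Squires type; a minimal sketch on a toy periodic grid (without the wavelet sparsity prior, flexion, or the irregular sampling handled by the real algorithm) is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy convergence field on a periodic grid.
n = 64
rng = np.random.default_rng(4)
kappa_true = gaussian_filter(rng.normal(size=(n, n)), sigma=3)

# Kaiser-Squires kernel D(k) = (k1^2 - k2^2 + 2i k1 k2) / |k|^2.
k1, k2 = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
k_sq = k1**2 + k2**2
k_sq[0, 0] = 1.0                        # avoid dividing by zero at k = 0
D = (k1**2 - k2**2 + 2j * k1 * k2) / k_sq

# Forward model (shear from convergence), then the Fourier-space inversion.
gamma = np.fft.ifft2(D * np.fft.fft2(kappa_true))
kappa_rec = np.fft.ifft2(np.conj(D) * np.fft.fft2(gamma)).real

# |D|^2 = 1 away from k = 0, so the inversion is exact up to the mean.
print(np.abs((kappa_rec - kappa_rec.mean()) - (kappa_true - kappa_true.mean())).max())
```

On noiseless, fully sampled data this inversion is exact up to the unconstrained mean mode, which is why the real problem (noisy, irregular, masked data) needs the regularised inverse-problem treatment described above.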

A new model to predict weak-lensing peak counts III. Filtering technique comparisons

Authors: C. Lin, M. Kilbinger, S. Pires
Journal: A&A
Year: 2016
Download: ADS | arXiv


Abstract

This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. In this paper, we compare the cosmological information extracted from WL peak counts using different filtering techniques of the galaxy shear data, including linear filtering with a Gaussian and two compensated filters (the starlet wavelet and the aperture mass), and the nonlinear filtering method MRLens. We present improvements to our model that account for realistic survey conditions, which are masks, shear-to-convergence transformations, and non-constant noise. We create simulated peak counts from our stochastic model, from which we obtain constraints on the matter density Ωm, the power spectrum normalisation σ8, and the dark-energy parameter w0. We use two methods for parameter inference, a copula likelihood, and approximate Bayesian computation (ABC). We measure the contour width in the Ωm-σ8 degeneracy direction and the figure of merit to compare parameter constraints from different filtering techniques. We find that starlet filtering outperforms the Gaussian kernel, and that including peak counts from different smoothing scales helps to lift parameter degeneracies. Peak counts from different smoothing scales with a compensated filter show very little cross-correlation, and adding information from different scales can therefore strongly enhance the available information. Measuring peak counts separately from different scales yields tighter constraints than using a combined peak histogram from a single map that includes multiscale information. Our results suggest that a compensated filter function with counts included separately from different smoothing scales yields the tightest constraints on cosmological parameters from WL peaks.
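Rejection-sampling ABC, one of the two inference methods mentioned above, can be illustrated on a toy peak-count problem; the observed count and tolerance below are hypothetical:

```python
import numpy as np

# Toy problem: infer the mean of a Poisson "peak count" from one observed
# count, without writing down the likelihood. All numbers are hypothetical.
rng = np.random.default_rng(5)
observed = 42

# Draw parameters from a flat prior, simulate, and keep the close matches.
candidates = rng.uniform(10, 100, size=100_000)
simulated = rng.poisson(candidates)
accepted = candidates[np.abs(simulated - observed) <= 2]

# The accepted draws approximate the posterior of the peak-count mean.
print(f"{accepted.mean():.1f} +/- {accepted.std():.1f}")
```

The paper's ABC runs compare full simulated peak-count data vectors to the observed ones with a distance and a shrinking tolerance; this sketch only shows the accept/reject principle.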

Sparsely sampling the sky: Regular vs. random sampling

Authors: P. Paykari, S. Pires, J.-L. Starck, A.H. Jaffe
Journal: Astronomy & Astrophysics
Year: 2015
Download: ADS | arXiv


Abstract

Weak gravitational lensing provides a unique way of mapping directly the dark matter in the Universe. The majority of lensing analyses use the two-point statistics of the cosmic shear field to constrain the cosmological model, a method that is affected by degeneracies, such as that between σ8 and Ωm, which are, respectively, the rms of the mass fluctuations on a scale of 8 Mpc/h and the matter density parameter, both at z = 0. However, the two-point statistics only measure the Gaussian properties of the field, and the weak lensing field is non-Gaussian. It has been shown that the estimation of non-Gaussian statistics for weak lensing data can improve the constraints on cosmological parameters. In this paper, we systematically compare a wide range of non-Gaussian estimators to determine which one provides tighter constraints on the cosmological parameters. These statistical methods include skewness, kurtosis, and the higher criticism test, in several sparse representations such as wavelets and curvelets, as well as the bispectrum, peak counting, and a newly introduced statistic called wavelet peak counting (WPC). Comparisons based on sparse representations indicate that the wavelet transform is the most sensitive to non-Gaussian cosmological structures. It also appears that the most helpful statistic for non-Gaussian characterization in weak lensing mass maps is the WPC. Finally, we show that the σ8-Ωm degeneracy could be even better broken if the WPC estimation is performed on weak lensing mass maps filtered by the wavelet method MRLens.
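A toy version of peak counting on a filtered mass map makes the WPC idea concrete; this sketch uses a simple Gaussian kernel rather than the wavelet filters studied in the paper, and all map values are synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

# Toy "mass map": Gaussian noise with three injected overdensities.
rng = np.random.default_rng(2)
kappa = rng.normal(0.0, 0.02, size=(128, 128))
for (x, y) in [(30, 40), (80, 90), (100, 20)]:
    kappa[x - 2:x + 3, y - 2:y + 3] += 0.2

# Filter the map (Gaussian kernel here; wavelet filtering or MRLens would
# replace this step in the WPC statistic).
smoothed = gaussian_filter(kappa, sigma=2)

# Peaks: local maxima above a signal-to-noise threshold.
sigma_map = smoothed.std()
is_max = smoothed == maximum_filter(smoothed, size=5)
peaks = is_max & (smoothed > 4 * sigma_map)
print(int(peaks.sum()))
```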

Dealing with missing data: An inpainting application to the MICROSCOPE space mission

Authors: J. Bergé, S. Pires, Q. Baghi, P. Touboul, G. Metris
Journal: Physical Review D
Year: 2015
Download: ADS | arXiv


Abstract

Missing data are a common problem in experimental and observational physics. They can be caused by various sources: an instrument's saturation, contamination from an external event, or data loss. In particular, they can have a disastrous effect when one is seeking to characterize a colored-noise-dominated signal in Fourier space, since they create a spectral leakage that can artificially increase the noise. It is therefore important to either take them into account or to correct for them prior to, e.g., a least-squares fit of the signal to be characterized. In this paper, we present an application of the inpainting algorithm to mock MICROSCOPE data; inpainting is based on a sparsity assumption and has already been used in various astrophysical contexts; MICROSCOPE is a French Space Agency mission, whose launch is expected in 2016, that aims to test the Weak Equivalence Principle down to the 10^-15 level. We then explore the inpainting dependence on the number of gaps and the total fraction of missing values. We show that, in a worst-case scenario, after reconstructing missing values with inpainting, a least-squares fit may allow us to significantly measure a 1.1 × 10^-15 Equivalence Principle violation signal, which is sufficiently close to the MICROSCOPE requirements to implement inpainting in the official MICROSCOPE data processing and analysis pipeline. Together with the previously published KARMA method, inpainting will then allow us to independently characterize and cross-check an Equivalence Principle violation signal detection down to the 10^-15 level.

A PCA-based automated finder for galaxy-scale strong lenses

Authors: R. Joseph, F. Courbin, R. B. Metcalf, ..., S. Pires, et al.
Journal: A&A
Year: 2014
Download: ADS | arXiv


Abstract

We present an algorithm using principal component analysis (PCA) to subtract galaxies from imaging data and also two algorithms to find strong, galaxy-scale gravitational lenses in the resulting residual image. The combined method is optimised to find full or partial Einstein rings. Starting from a pre-selection of potential massive galaxies, we first perform a PCA to build a set of basis vectors. The galaxy images are reconstructed using the PCA basis and subtracted from the data. We then filter the residual image with two different methods. The first uses a curvelet (curved wavelets) filter of the residual images to enhance any curved/ring feature. The resulting image is transformed in polar coordinates, centred on the lens galaxy. In these coordinates, a ring is turned into a line, allowing us to detect very faint rings by taking advantage of the integrated signal-to-noise in the ring (a line in polar coordinates). The second way of analysing the PCA-subtracted images identifies structures in the residual images and assesses whether they are lensed images according to their orientation, multiplicity, and elongation. We applied the two methods to a sample of simulated Einstein rings as they would be observed with the ESA Euclid satellite in the VIS band. The polar coordinate transform allowed us to reach a completeness of 90% for a purity of 86%, as soon as the signal-to-noise integrated in the ring was higher than 30 and almost independent of the size of the Einstein ring. Finally, we show with real data that our PCA-based galaxy subtraction scheme performs better than traditional subtraction based on model fitting to the data. Our algorithm can be developed and improved further using machine learning and dictionary learning methods, which would extend the capabilities of the method to more complex and diverse galaxy shapes.
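The PCA subtraction step can be sketched on toy galaxy "stamps" built from a few synthetic morphology modes; the ring stand-in and all dimensions below are illustrative only, not the paper's simulation setup:

```python
import numpy as np

# Toy galaxy "stamps": a few synthetic morphology modes plus noise; one stamp
# carries an extra faint feature standing in for a lensed ring.
rng = np.random.default_rng(3)
n_gal, npix = 200, 32 * 32
modes = rng.normal(size=(5, npix))
stamps = rng.normal(size=(n_gal, 5)) @ modes + 0.1 * rng.normal(size=(n_gal, npix))
ring = np.zeros(npix)
ring[::37] = 1.0                        # crude stand-in for a ring feature
stamps[0] += 0.5 * ring

# Build the PCA basis from the stamps themselves (mean-subtracted SVD).
mean = stamps.mean(axis=0)
_, _, Vt = np.linalg.svd(stamps - mean, full_matrices=False)
basis = Vt[:5]                          # keep the leading components

# Reconstruct every galaxy from the basis and subtract it from the data.
coeffs = (stamps - mean) @ basis.T
residual = stamps - mean - coeffs @ basis

# The rare ring feature is poorly captured by the common basis, so it
# survives in the residual of stamp 0, where a ring finder can pick it up.
overlap = residual[0] @ ring / np.linalg.norm(ring) ** 2
print(f"{overlap:.2f}")
```

The design point is that features shared by many galaxies end up in the leading components and are subtracted, while a feature present in only one stamp is left in the residual for the curvelet or structure-based detectors described above.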