A PCA-based automated finder for galaxy-scale strong lenses

 

Authors: R. Joseph, F. Courbin, R. B. Metcalf, ..., S. Pires, et al.
Journal: A&A
Year: 2014
Download: ADS | arXiv


Abstract

We present an algorithm using principal component analysis (PCA) to subtract galaxies from imaging data, together with two algorithms to find strong, galaxy-scale gravitational lenses in the resulting residual images. The combined method is optimised to find full or partial Einstein rings. Starting from a pre-selection of potential massive galaxies, we first perform a PCA to build a set of basis vectors. The galaxy images are reconstructed using the PCA basis and subtracted from the data. We then filter the residual images with two different methods. The first uses a curvelet (curved wavelet) filter of the residual images to enhance any curved or ring-like feature. The resulting image is transformed into polar coordinates, centred on the lens galaxy. In these coordinates, a ring is turned into a line, allowing us to detect very faint rings by taking advantage of the signal-to-noise integrated along the ring (a line in polar coordinates). The second way of analysing the PCA-subtracted images identifies structures in the residual images and assesses whether they are lensed images according to their orientation, multiplicity, and elongation. We applied the two methods to a sample of simulated Einstein rings as they would be observed with the ESA Euclid satellite in the VIS band. The polar coordinate transform allowed us to reach a completeness of 90% for a purity of 86% as soon as the signal-to-noise ratio integrated in the ring was higher than 30, almost independently of the size of the Einstein ring. Finally, we show with real data that our PCA-based galaxy subtraction scheme performs better than traditional subtraction based on model fitting to the data. Our algorithm can be developed and improved further using machine learning and dictionary learning methods, which would extend the capabilities of the method to more complex and diverse galaxy shapes.
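To make the polar-coordinate trick concrete, here is a minimal NumPy sketch (not the paper's code; the nearest-neighbour resampling, grid sizes, and simulated ring are illustrative assumptions). A ring in the image becomes a line at constant radius in polar coordinates, so integrating over the angle turns a faint ring into a clear peak in the radial profile.

```python
import numpy as np

def to_polar(image, center, n_r=64, n_theta=180):
    """Resample an image onto a polar (r, theta) grid around `center`
    using nearest-neighbour sampling."""
    cy, cx = center
    r_max = min(image.shape) // 2 - 1
    rs = np.linspace(0, r_max, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r_grid, t_grid = np.meshgrid(rs, thetas, indexing="ij")
    ys = np.clip(np.round(cy + r_grid * np.sin(t_grid)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + r_grid * np.cos(t_grid)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs], rs

# Build a noisy image containing a faint ring of radius 20 pixels.
rng = np.random.default_rng(0)
n = 101
yy, xx = np.mgrid[:n, :n]
r = np.hypot(yy - n // 2, xx - n // 2)
image = 0.5 * rng.standard_normal((n, n))
image += np.exp(-0.5 * ((r - 20.0) / 1.5) ** 2)   # ring at r = 20

polar, rs = to_polar(image, (n // 2, n // 2))
profile = polar.sum(axis=1)        # integrate over theta: the ring becomes a peak
r_detected = rs[np.argmax(profile)]
```

Integrating over all angles is what buys the sensitivity: the per-pixel signal-to-noise of the ring is low, but the summed profile concentrates the whole ring into one radial bin.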

Gap interpolation by inpainting methods: application to ground- and space-based asteroseismic data

 

Authors: S. Pires, S. Mathur, R. A. Garcia, J. Ballot, D. Stello, K. Sato
Journal: Astronomy & Astrophysics
Year: 2014
Download: ADS | arXiv


Abstract

In asteroseismology, the observed time series often suffer from incomplete time coverage due to gaps. The presence of periodic gaps may generate spurious peaks in the power spectrum that limit the analysis of the data. Various methods have been developed to deal with gaps in time series, but it remains important to improve them in order to extract all of the information contained in the data. In this paper, we propose a new approach to handle the problem, the so-called inpainting method. This technique, based on a sparsity prior, fills in the gaps judiciously while preserving the asteroseismic signal as far as possible. The impact of the observational window function is reduced and the interpretation of the power spectrum is simplified. The method is applied to both ground- and space-based data, and the inpainting technique improves the detection and estimation of the oscillation modes. In addition, because its computation is very fast, it can be used to study very long time series of many stars: for a 50-day time series of CoRoT-like data, it provides a speed-up factor of 1000 compared with methods of similar accuracy.
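The sparsity-prior idea can be sketched in a few lines: iteratively threshold the transform coefficients of the gap-filled series while re-imposing the observed samples. The sketch below is a simplified stand-in (a plain FFT instead of the multiscale transform actually used, and an illustrative linear threshold schedule), not the K-inpainting implementation.

```python
import numpy as np

def inpaint(signal, mask, n_iter=100):
    """Fill gaps (mask == False) by iterative thresholding of the
    Fourier coefficients -- a crude stand-in for a sparsity prior."""
    x = np.where(mask, signal, 0.0)
    for i in range(n_iter):
        coeffs = np.fft.rfft(x)
        # Linearly decreasing threshold: keep the strongest modes first.
        thresh = np.abs(coeffs).max() * (1.0 - (i + 1) / n_iter)
        coeffs[np.abs(coeffs) < thresh] = 0.0
        x = np.fft.irfft(coeffs, n=len(signal))
        x[mask] = signal[mask]     # always re-impose the observed samples
    return x

t = np.arange(1024)
truth = np.sin(2 * np.pi * t / 64.0)   # a single oscillation mode
mask = (t % 128) >= 16                 # periodic gaps, 87.5% duty cycle
recovered = inpaint(truth, mask)
err = np.max(np.abs(recovered[~mask] - truth[~mask]))
```

Because the toy signal is sparse in Fourier space (one mode), the iteration fills the gaps almost exactly; real light curves need the more flexible transform and stopping rules of the actual method.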


Summary

The paper "Gap interpolation by inpainting methods: application to ground- and space-based asteroseismic data" has been accepted for publication in A&A. It describes the K-inpainting software.

The K-inpainting software has been developed to handle the problem of missing data in the asteroseismic signal of Kepler. Based on a sparsity prior, it fills in the gaps judiciously while preserving the asteroseismic signal as far as possible.

More recently, it has been implemented in the CoRoT pipeline to handle missing data.

The K-inpainting software is available here


Figure: Power density spectrum for a duty cycle of 83%, computed using an FFT on the inpainted time series.

Impact on asteroseismic analyses of regular gaps in Kepler data

 

Authors: R. A. García, S. Mathur, S. Pires, et al.
Journal: Astronomy & Astrophysics
Year: 2014
Download: ADS | arXiv


Abstract

The NASA Kepler mission has observed more than 190,000 stars in the constellations of Cygnus and Lyra. Around 4 years of almost continuous ultra-high-precision photometry have been obtained, reaching a duty cycle higher than 90% for many of these stars. However, almost regular gaps due to nominal operations are present in the light curves at different time scales. In this paper we highlight the impact of those regular gaps on asteroseismic analyses and look for a method that minimizes their effect in the frequency domain. To do so, we isolate the two main time scales of quasi-regular gaps in the data. We then interpolate across the gaps and compare the power density spectra of four different stars: two red giants at different stages of their evolution, a young F-type star, and a classical pulsator in the instability strip. The spectra obtained after filling the gaps in the selected solar-like stars show a net reduction in the overall background level, as well as a change in the background parameters. The inferred convective properties could change by as much as 200% in the selected example, introducing a bias in the p-mode frequency of maximum power. When global asteroseismic scaling relations are used, this bias can lead to a variation of up to 0.05 dex in the surface gravity. Finally, the oscillation spectrum of the classical pulsator is cleaner than the original one.
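The spectral effect of regular gaps is easy to reproduce: periodic gaps act as a periodic window function that convolves the spectrum, planting spurious sidelobes around every real mode. A small NumPy demonstration (the toy signal and gap pattern are illustrative assumptions, not Kepler data):

```python
import numpy as np

t = np.arange(4096)
signal = np.sin(2 * np.pi * t / 64.0)      # one mode, exactly at bin 64
mask = (t % 128) >= 16                     # regular gaps, 87.5% duty cycle

psd_full = np.abs(np.fft.rfft(signal)) ** 2
psd_gappy = np.abs(np.fft.rfft(np.where(mask, signal, 0.0))) ** 2

# Regular gaps plant spurious sidelobes at the mode frequency
# +/- multiples of the gap frequency (every 4096/128 = 32 bins here).
k0 = int(np.argmax(psd_full))
```

In the gappy spectrum, the bin 32 channels away from the mode carries orders of magnitude more power than in the complete spectrum; such sidelobes are exactly the spurious peaks that bias background fits and mode identification.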

Weak Lensing Galaxy Cluster Field Reconstruction

 

Authors: E. Jullo, S. Pires, M. Jauzac, J.-P. Kneib
Journal: MNRAS
Year: 2014
Download: ADS | arXiv


Abstract

In this paper, we compare three methods to reconstruct galaxy cluster density fields with weak lensing data. The first method, called FLens, integrates an inpainting concept to invert the shear field with possible gaps, and a multi-scale entropy denoising procedure to remove the noise contained in the final reconstruction, which arises mostly from the random intrinsic shapes of the galaxies. The second and third methods are based on a model of the density field made of a multi-scale grid of radial basis functions. In one case, the model parameters are computed with a linear inversion involving a singular value decomposition (SVD). In the other case, the model parameters are estimated using a Bayesian MCMC optimization implemented in the lensing software Lenstool. The methods are compared on simulated data with varying galaxy density fields. We pay particular attention to the errors estimated with resampling. We find that the multi-scale grid model optimized with MCMC provides the best results, but at high computational cost, especially when considering resampling. The SVD method is much faster but yields noisy maps, although this can be mitigated with resampling. The FLens method is a good compromise, with fast computation and high signal-to-noise reconstructions, but lower resolution maps. All three methods are applied to the MACS J0717+3745 galaxy cluster field, and reveal the filamentary structure discovered in Jauzac et al. 2012. We conclude that sensitive priors can help to obtain high signal-to-noise and unbiased reconstructions.

Defining a weak lensing experiment in space

 

Authors: M. Cropper, H. Hoekstra, T. Kitching, ..., S. Pires et al.
Journal: MNRAS
Year: 2013
Download: ADS | arXiv


Abstract

This paper describes the definition of a typical next-generation space-based weak gravitational lensing experiment. We first adopt a set of top-level science requirements from the literature, based on the scale and depth of the galaxy sample, and on the avoidance of systematic effects in the measurements that would bias the derived shear values. We then identify and categorise the contributing factors to the systematic effects, combining them with the correct weighting so as to fit within the top-level requirements. We present techniques that permit the performance to be evaluated, and we explore the limits at which the contributing factors can be managed. Besides the modelling biases resulting from the use of weighted moments, the main contributing factors are the reconstruction of the instrument point spread function (PSF), which is derived from the stellar images in the field, and the correction of the charge transfer inefficiency (CTI) in the CCD detectors caused by radiation damage.

If instrumentation is stable and well calibrated, we find that extant shear measurement software from Gravitational Lensing Accuracy Testing 2010 (GREAT10) already meets requirements on galaxies detected at a signal-to-noise ratio of 40. Averaging over a population of galaxies with a realistic distribution of sizes, it also meets requirements for a 2D cosmic shear analysis from space. If used on fainter galaxies or for 3D cosmic shear tomography, existing algorithms would need calibration on simulations to avoid introducing bias at a level similar to the statistical error. Requirements on hardware and calibration data are discussed in more detail in a companion paper. Our analysis is intentionally general, but is specifically being used to drive the hardware and ground segment performance budget for the design of the European Space Agency's recently selected Euclid mission.

Origins of weak lensing systematics, and requirements on future instrumentation (or knowledge of instrumentation)

 

Authors: R. Massey, H. Hoekstra, T. Kitching, ..., S. Pires et al.
Journal: MNRAS
Year: 2013
Download: ADS | arXiv


Abstract

The first half of this paper explores the origin of systematic biases in the measurement of weak gravitational lensing. Compared to previous work, we expand the investigation of point spread function instability and fold in, for the first time, the effects of non-idealities in electronic imaging detectors and imperfect galaxy shape measurement algorithms. Together, these now explain the additive A(ℓ) and multiplicative M(ℓ) systematics typically reported in current lensing measurements. We find that overall performance is driven by the product of a telescope/camera's absolute performance and our knowledge about its performance.

The second half of this paper propagates any residual shear measurement biases through to their effect on cosmological parameter constraints. Fully exploiting the statistical power of Stage IV weak lensing surveys will require mean additive biases Ā ≲ 1.8 × 10⁻¹² and mean multiplicative biases M̄ ≲ 4.0 × 10⁻³. These can be allocated between individual budgets in hardware, calibration data, and software, using results from the first half of the paper.


Fast Calculation of the Weak Lensing Aperture Mass Statistic

 

Authors: A. Leonard, S. Pires, J.-L. Starck
Journal: MNRAS
Year: 2012
Download: ADS | arXiv


Abstract

The aperture mass statistic is a common tool used in weak lensing studies. By convolving lensing maps with a filter function of a specific scale, chosen to be larger than the scale on which the noise is dominant, the lensing signal may be boosted with respect to the noise. This allows for the detection of structures at increased fidelity. Furthermore, higher-order statistics of the aperture mass (such as its skewness or kurtosis), or counts of the peaks seen in the resulting aperture mass maps, provide a convenient and effective method to constrain the cosmological parameters. In this paper, we more fully explore the formalism underlying the aperture mass statistic. We demonstrate that the aperture mass statistic is formally identical to a wavelet transform at a specific scale. Further, we show that the filter functions most frequently used in aperture mass studies are not ideal, being non-local in both real and Fourier space. In contrast, the wavelet formalism offers a number of wavelet functions that are localized in both real and Fourier space, yet similar to the 'optimal' aperture mass filters commonly adopted. Additionally, for a number of wavelet functions, such as the starlet wavelet, very fast algorithms exist to compute the wavelet transform. This offers significant advantages over the usual aperture mass algorithm in terms of processing time, demonstrating speed-up factors of ~5–1200 for aperture radii in the range 2 to 64 pixels on a 1024 × 1024 pixel image.
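The wavelet view of the statistic can be illustrated by computing an aperture-mass map as an FFT convolution with a compensated (zero-mean) filter. In the sketch below a 2D Mexican-hat shape stands in for the paper's filters, and the scale parameter is illustrative; the FFT is what delivers the speed-up over direct aperture sums.

```python
import numpy as np

def aperture_mass(kappa, scale):
    """Aperture-mass map as an FFT convolution of the convergence with a
    compensated (zero-mean) Mexican-hat filter -- the wavelet view of the
    statistic."""
    n = kappa.shape[0]
    y, x = np.mgrid[:n, :n] - n // 2
    r_sq = (x**2 + y**2) / scale**2
    filt = (1.0 - r_sq / 2.0) * np.exp(-r_sq / 2.0)   # 2D Mexican hat
    filt -= filt.mean()                               # enforce compensation
    filt_hat = np.fft.fft2(np.fft.ifftshift(filt))
    return np.fft.ifft2(np.fft.fft2(kappa) * filt_hat).real

# A single point mass produces an aperture-mass peak at its position.
point = np.zeros((64, 64))
point[32, 32] = 1.0
m = aperture_mass(point, 4.0)
peak = np.unravel_index(np.argmax(m), m.shape)
```

Because the filter has zero mean, any constant (pure mass-sheet) map produces a null aperture-mass map, which is exactly the compensation property the aperture mass requires.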

Wavelet Helmholtz decomposition for weak lensing mass map reconstruction

 

Authors: E. Deriaz, J.-L. Starck, S. Pires
Journal: A&A
Year: 2012
Download: ADS | arXiv


Abstract

To derive the convergence field from the gravitational shear (gamma) of the background galaxy images, the classical methods require a convolution of the shear over the entire sky, usually implemented with the fast Fourier transform (FFT). However, this is not optimal for surveys with imperfect geometry, and the FFT implicitly assumes periodic boundary conditions, which introduces errors into the reconstruction. A method has been proposed that relies on the computation of an intermediate field u, combining the derivatives of gamma, and on a convolution with a Green kernel. In this paper, we study the wavelet Helmholtz decomposition as a new approach to reconstructing the dark matter mass map. We show that a link exists between the Helmholtz decomposition and the E/B mode separation. We introduce a new wavelet construction whose properties give us more flexibility in handling the border problem, and we propose a new method of reconstructing the dark matter mass map in wavelet space. A set of experiments on noise-free images illustrates that this wavelet Helmholtz decomposition reconstructs the borders better than all other existing methods.
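The link between the Helmholtz decomposition and E/B separation can be sketched with a Fourier-space projection of a vector field onto its curl-free (E-like) and divergence-free (B-like) parts. This assumes periodic boundaries, precisely the limitation the paper's wavelet construction is designed to avoid; the function and test field are illustrative.

```python
import numpy as np

def helmholtz(u, v):
    """Split a periodic 2D vector field into a curl-free (E-mode-like)
    and a divergence-free (B-mode-like) part by Fourier projection."""
    n1, n2 = u.shape
    k1 = np.fft.fftfreq(n1)[:, None]
    k2 = np.fft.fftfreq(n2)[None, :]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0                       # avoid 0/0 at the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    lam = (k1 * uh + k2 * vh) / k_sq       # longitudinal (curl-free) amplitude
    u_e = np.fft.ifft2(k1 * lam).real
    v_e = np.fft.ifft2(k2 * lam).real
    return (u_e, v_e), (u - u_e, v - v_e)

# A pure gradient field should land entirely in the E (curl-free) part.
rng = np.random.default_rng(2)
n = 65                                     # odd grid: no Nyquist-mode asymmetry
phi = rng.standard_normal((n, n))
k = np.fft.fftfreq(n)
ph = np.fft.fft2(phi)
u = np.fft.ifft2(1j * k[:, None] * ph).real   # spectral derivative d(phi)/dx
v = np.fft.ifft2(1j * k[None, :] * ph).real   # spectral derivative d(phi)/dy
(e_u, e_v), (b_u, b_v) = helmholtz(u, v)
b_residual = max(np.max(np.abs(b_u)), np.max(np.abs(b_v)))
```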

Cosmological constraints from the capture of non-Gaussianity in Weak Lensing data

 

Authors: S. Pires, A. Leonard,  J.-L. Starck
Journal: MNRAS
Year: 2012
Download: ADS | arXiv


Abstract

Weak gravitational lensing has become a common tool to constrain the cosmological model. The majority of methods to derive constraints on cosmological parameters use second-order statistics of the cosmic shear. Despite their success, second-order statistics are not optimal, and degeneracies between some parameters remain. Tighter constraints can be obtained if second-order statistics are combined with a statistic that is efficient at capturing non-Gaussianity. In this paper, we search for such a statistical tool and show that there is additional information to be extracted from statistical analysis of the convergence maps beyond what can be obtained from statistical analysis of the shear field. For this purpose, we carried out a large number of cosmological simulations along the σ8–Ωm degeneracy, and we considered three statistics commonly used to characterize non-Gaussian features: skewness, kurtosis, and peak counts. To investigate non-Gaussianity directly in the shear field, we used the aperture mass definition of these three statistics at different scales. The results were then compared with those obtained with the same statistics estimated in the convergence maps at the same scales. First, we show that shear statistics give similar constraints to those given by convergence statistics, if the same scale is considered. In addition, we find that the peak count statistic is the best at capturing non-Gaussianities in the weak lensing field and at breaking the σ8–Ωm degeneracy. We show that this statistical analysis should be conducted in the convergence maps: first, because fast algorithms exist to compute the convergence map at different scales, and second, because the reconstructed convergence map can be denoised, which improves the extraction of non-Gaussian features.
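In its simplest form, a peak count statistic is just a count of local maxima above a threshold in the convergence map. A minimal sketch (the neighbourhood rule, threshold, and test map are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def peak_count(kappa, threshold):
    """Count local maxima above `threshold`: pixels strictly larger
    than their 8 neighbours (map borders are excluded)."""
    n1, n2 = kappa.shape
    core = kappa[1:-1, 1:-1]
    is_peak = core > threshold
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_peak &= core > kappa[1 + dy:n1 - 1 + dy, 1 + dx:n2 - 1 + dx]
    return int(is_peak.sum())

# Two well-separated Gaussian blobs should register as two peaks.
yy, xx = np.mgrid[:64, :64]
kappa = (np.exp(-((yy - 20)**2 + (xx - 20)**2) / 8.0)
         + np.exp(-((yy - 45)**2 + (xx - 40)**2) / 8.0))
n_peaks = peak_count(kappa, 0.5)
```

Counting peaks as a function of threshold probes the high tail of the convergence distribution, which is where the non-Gaussian cosmological information sits.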

Cosmological model discrimination with weak lensing

 

Authors: S. Pires, J.-L. Starck, A. Amara, A. Réfrégier, R. Teyssier
Journal: Astronomy & Astrophysics
Year: 2009
Download: ADS


Abstract

Weak gravitational lensing provides a unique way of mapping directly the dark matter in the Universe. The majority of lensing analyses use the two-point statistics of the cosmic shear field to constrain the cosmological model, a method that is affected by degeneracies, such as that between σ8 and Ωm, which are respectively the rms of the mass fluctuations on a scale of 8 Mpc/h and the matter density parameter, both at z = 0. However, the two-point statistics only measure the Gaussian properties of the field, and the weak lensing field is non-Gaussian. It has been shown that estimating non-Gaussian statistics of weak lensing data can improve the constraints on cosmological parameters. In this paper, we systematically compare a wide range of non-Gaussian estimators to determine which one provides the tightest constraints on the cosmological parameters. These statistical methods include skewness, kurtosis, and the higher criticism test, in several sparse representations such as wavelets and curvelets, as well as the bispectrum, peak counting, and a newly introduced statistic called wavelet peak counting (WPC). Comparisons based on sparse representations indicate that the wavelet transform is the most sensitive to non-Gaussian cosmological structures. It also appears that the most helpful statistic for non-Gaussian characterization in weak lensing mass maps is the WPC. Finally, we show that the σ8–Ωm degeneracy could be broken even further if the WPC estimation is performed on weak lensing mass maps filtered by the wavelet method MRLens.