sparseDust17

Sparse estimation of model-based diffuse thermal dust emission

 

Authors: M. O. Irfan, J. Bobin
Journal: MNRAS
Year: 2017
Download: ADS | arXiv


Abstract

Component separation for the Planck HFI data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution estimate of the dust emission. In this paper we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7 and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index and optical depth at 353 GHz, respectively. Comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
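The dust model underlying this kind of parameter estimation is the single modified blackbody, $I_\nu = \tau_{353}\,(\nu/353\,\mathrm{GHz})^{\beta}\,B_\nu(T)$. The sketch below fits that model pixel by pixel with a plain least-squares routine; it only illustrates the parametrization (the premise method itself adds model-informed sparse estimation on top of it), and the frequencies, noise level and parameter values are assumptions for the demo.

```python
import numpy as np
from scipy.optimize import curve_fit

H = 6.62607e-34   # Planck constant [J s]
K = 1.38065e-23   # Boltzmann constant [J/K]
C = 2.99792e8     # speed of light [m/s]
NU0 = 353e9       # reference frequency [Hz]

def planck(nu, T):
    """Planck function B_nu(T) in SI units [W m^-2 Hz^-1 sr^-1]."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def mbb(nu, tau353, beta, T):
    """Single modified blackbody in MJy/sr: tau_353 (nu/nu0)^beta B_nu(T)."""
    return tau353 * (nu / NU0)**beta * planck(nu, T) * 1e20  # SI -> MJy/sr

# Hypothetical per-pixel fit at four dust-dominated HFI frequencies.
freqs = np.array([217e9, 353e9, 545e9, 857e9])
truth = (1e-4, 1.6, 19.0)                      # tau_353, beta, T [K]
data = mbb(freqs, *truth) * (1 + 0.01 * np.random.randn(freqs.size))

popt, pcov = curve_fit(mbb, freqs, data, p0=(1e-4, 1.5, 20.0),
                       bounds=([0, 0.5, 5.0], [1e-2, 3.0, 40.0]))
print("tau_353, beta, T =", popt)
```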

rmassey

Origins of weak lensing systematics, and requirements on future instrumentation (or knowledge of instrumentation)

 

Authors: R. Massey, H. Hoekstra, T. Kitching, ..., S. Pires et al.
Journal: MNRAS
Year: 2013
Download: ADS | arXiv


Abstract

The first half of this paper explores the origin of systematic biases in the measurement of weak gravitational lensing. Compared to previous work, we expand the investigation of point spread function instability and fold in for the first time the effects of non-idealities in electronic imaging detectors and imperfect galaxy shape measurement algorithms. Together, these now explain the additive $\mathcal{A}(\ell)$ and multiplicative $\mathcal{M}(\ell)$ systematics typically reported in current lensing measurements. We find that overall performance is driven by a product of a telescope/camera's absolute performance, and our knowledge about its performance.

The second half of this paper propagates any residual shear measurement biases through to their effect on cosmological parameter constraints. Fully exploiting the statistical power of Stage IV weak lensing surveys will require additive biases $\overline{\mathcal{A}} \lesssim 1.8\times 10^{-12}$ and multiplicative biases $\overline{\mathcal{M}} \lesssim 4.0\times 10^{-3}$. These can be allocated between individual budgets in hardware, calibration data and software, using results from the first half of the paper.

If instrumentation is stable and well calibrated, we find that extant shear measurement software from Gravitational Lensing Accuracy Testing 2010 (GREAT10) already meets requirements on galaxies detected at a signal-to-noise ratio of 40. Averaging over a population of galaxies with a realistic distribution of sizes, it also meets requirements for a 2D cosmic shear analysis from space. If used on fainter galaxies or for 3D cosmic shear tomography, existing algorithms would need calibration on simulations to avoid introducing bias at a level similar to the statistical error. Requirements on hardware and calibration data are discussed in more detail in a companion paper. Our analysis is intentionally general, but is specifically being used to drive the hardware and ground segment performance budget for the design of the European Space Agency's recently selected Euclid mission.
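The additive/multiplicative parametrization referred to above acts on the shear power spectrum as $C^{\rm obs}(\ell) = (1 + \mathcal{M}(\ell))\,C(\ell) + \mathcal{A}(\ell)$. A minimal sketch of how the quoted Stage IV requirement levels propagate to a spectrum; the spectrum shape here is a made-up toy, not a prediction from the paper:

```python
import numpy as np

def observed_cl(cl_true, M, A):
    """Biased shear power spectrum under the additive/multiplicative
    parametrization C_obs(l) = (1 + M(l)) * C(l) + A(l)."""
    return (1.0 + M) * cl_true + A

# Toy true spectrum, biased at the Stage IV requirement levels quoted
# above (the paper budgets these totals across hardware and software).
ell = np.arange(10, 3000)
cl_true = 1e-9 * (ell / 100.0) ** -1.2      # hypothetical shape
cl_obs = observed_cl(cl_true, M=4.0e-3, A=1.8e-12)

frac_bias = np.abs(cl_obs - cl_true) / cl_true
print("max fractional bias on C(l): %.2e" % frac_bias.max())
```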

arc

A PCA-based automated finder for galaxy-scale strong lenses

 

Authors: R. Joseph, F. Courbin, R. B. Metcalf, ..., S. Pires et al.
Journal: A&A
Year: 2014
Download: ADS | arXiv


Abstract

We present an algorithm using principal component analysis (PCA) to subtract galaxies from imaging data and also two algorithms to find strong, galaxy-scale gravitational lenses in the resulting residual image. The combined method is optimised to find full or partial Einstein rings. Starting from a pre-selection of potential massive galaxies, we first perform a PCA to build a set of basis vectors. The galaxy images are reconstructed using the PCA basis and subtracted from the data. We then filter the residual image with two different methods. The first uses a curvelet (curved wavelets) filter of the residual images to enhance any curved/ring feature. The resulting image is transformed into polar coordinates, centred on the lens galaxy. In these coordinates, a ring is turned into a line, allowing us to detect very faint rings by taking advantage of the integrated signal-to-noise in the ring (a line in polar coordinates). The second way of analysing the PCA-subtracted images identifies structures in the residual images and assesses whether they are lensed images according to their orientation, multiplicity, and elongation. We applied the two methods to a sample of simulated Einstein rings as they would be observed with the ESA Euclid satellite in the VIS band. The polar coordinate transform allowed us to reach a completeness of 90% for a purity of 86%, as soon as the signal-to-noise ratio integrated in the ring was higher than 30, almost independently of the size of the Einstein ring. Finally, we show with real data that our PCA-based galaxy subtraction scheme performs better than traditional subtraction based on model fitting to the data. Our algorithm can be developed and improved further using machine learning and dictionary learning methods, which would extend the capabilities of the method to more complex and diverse galaxy shapes.
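A short sketch of the polar-transform idea described above: resampling the residual image about the candidate lens centre turns a ring into a line of constant radius, so averaging over the angular direction accumulates the ring's signal-to-noise. The `to_polar` helper, the ring amplitude, the noise level and the grid sizes are all made up for the demo; this is not the paper's production code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_r=64, n_theta=180):
    """Resample an image onto a polar (r, theta) grid centred on the
    candidate lens, so an Einstein ring maps onto a line r = const."""
    cy, cx = center
    r_max = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx)
    r = np.linspace(0, r_max, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(image, coords, order=1), r

# Toy residual image: a ring of radius 20 pixels plus noise (amplitudes
# chosen so the ring is clearly detectable for this demo).
y, x = np.mgrid[:128, :128]
radius = np.hypot(y - 64, x - 64)
residual = np.exp(-0.5 * ((radius - 20) / 1.5) ** 2) \
    + 0.5 * np.random.randn(128, 128)

polar, r = to_polar(residual, (64, 64))
profile = polar.mean(axis=1)      # average over theta: S/N accumulates
print("detected ring radius ~", r[np.argmax(profile)], "pixels")
```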

sampling

Sparsely sampling the sky: Regular vs. random sampling

 

Authors: P. Paykari, S. Pires, J.-L. Starck, A. H. Jaffe
Journal: A&A
Year: 2009
Download: ADS | arXiv


Abstract

Weak gravitational lensing provides a unique way of mapping directly the dark matter in the Universe. The majority of lensing analyses use the two-point statistics of the cosmic shear field to constrain the cosmological model, a method that is affected by degeneracies, such as that between σ8 and Ωm, which are, respectively, the rms of the mass fluctuations on a scale of 8 Mpc/h and the matter density parameter, both at z = 0. However, the two-point statistics only measure the Gaussian properties of the field, and the weak lensing field is non-Gaussian. It has been shown that the estimation of non-Gaussian statistics for weak lensing data can improve the constraints on cosmological parameters. In this paper, we systematically compare a wide range of non-Gaussian estimators to determine which one provides tighter constraints on the cosmological parameters. These statistical methods include skewness, kurtosis, and the higher criticism test, in several sparse representations such as wavelets and curvelets, as well as the bispectrum, peak counting, and a newly introduced statistic called wavelet peak counting (WPC). Comparisons based on sparse representations indicate that the wavelet transform is the most sensitive to non-Gaussian cosmological structures. It also appears that the most helpful statistic for non-Gaussian characterization in weak lensing mass maps is the WPC. Finally, we show that the σ8–Ωm degeneracy could be even better broken if the WPC estimation is performed on weak lensing mass maps filtered by the wavelet method, MRLens.
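A toy version of wavelet peak counting, the statistic introduced in the paper: band-pass the convergence map at several scales, then histogram the local maxima by signal-to-noise. Here differences of Gaussians stand in for the wavelet transform actually used, and the input map is pure noise, so this only illustrates the bookkeeping, not the cosmological sensitivity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def wavelet_peak_counts(kappa, scales=(1, 2, 4, 8),
                        nu_bins=np.arange(0, 6, 0.5)):
    """Toy WPC: band-pass the map with differences of Gaussians (a
    stand-in for the wavelet bands used in practice), then histogram
    local maxima by their signal-to-noise nu = value / sigma_scale."""
    counts = {}
    for s in scales:
        band = gaussian_filter(kappa, s) - gaussian_filter(kappa, 2 * s)
        sigma = band.std()
        is_peak = (band == maximum_filter(band, size=3)) & (band > 0)
        nu = band[is_peak] / sigma
        counts[s], _ = np.histogram(nu, bins=nu_bins)
    return counts

kappa = np.random.randn(256, 256)      # stand-in convergence map
for s, c in wavelet_peak_counts(kappa).items():
    print("scale", s, ":", c.sum(), "peaks")
```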

icon1

Dealing with missing data: An inpainting application to the MICROSCOPE space mission

Authors: J. Bergé, S. Pires, Q. Baghi, P. Touboul, G. Métris
Journal: Physical Review D
Year: 2015
Download: ADS | arXiv


Abstract

Missing data are a common problem in experimental and observational physics. They can be caused by various sources, such as an instrument's saturation, contamination from an external event, or data loss. In particular, they can have a disastrous effect when one is seeking to characterize a colored-noise-dominated signal in Fourier space, since they create a spectral leakage that can artificially increase the noise. It is therefore important to either take them into account or to correct for them prior to, e.g., a least-squares fit of the signal to be characterized. In this paper, we present an application of the inpainting algorithm to mock MICROSCOPE data; inpainting is based on a sparsity assumption and has already been used in various astrophysical contexts; MICROSCOPE is a French Space Agency mission, whose launch is expected in 2016, that aims to test the Weak Equivalence Principle down to the $10^{-15}$ level. We then explore the inpainting dependence on the number of gaps and the total fraction of missing values. We show that, in a worst-case scenario, after reconstructing missing values with inpainting, a least-squares fit may allow us to significantly measure a $1.1\times 10^{-15}$ Equivalence Principle violation signal, which is sufficiently close to the MICROSCOPE requirements to implement inpainting in the official MICROSCOPE data processing and analysis pipeline. Together with the previously published KARMA method, inpainting will then allow us to independently characterize and cross-check an Equivalence Principle violation signal detection down to the $10^{-15}$ level.
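A minimal sketch of the sparsity-based inpainting idea: iteratively threshold the transform coefficients of the current estimate while forcing agreement with the observed samples. A DCT stands in for the dictionary, and the threshold schedule, gap fraction and test signal are assumptions for the demo; the published algorithm additionally handles the colored-noise statistics.

```python
import numpy as np
from scipy.fft import dct, idct

def inpaint(y, mask, n_iter=100):
    """Toy sparsity-based inpainting: iterative hard thresholding in
    the DCT domain, keeping the observed samples fixed.
    y: time series with gaps zeroed; mask: True where data are valid."""
    x = np.where(mask, y, 0.0)
    lam_max = np.abs(dct(x, norm="ortho")).max()
    for i in range(n_iter):
        lam = lam_max * (1 - i / n_iter)      # decreasing threshold
        alpha = dct(x, norm="ortho")
        alpha[np.abs(alpha) < lam] = 0.0      # hard thresholding
        x = idct(alpha, norm="ortho")
        x[mask] = y[mask]                     # keep observed samples
    return x

t = np.linspace(0, 10, 2048)
signal = np.sin(2 * np.pi * 3.0 * t) + 0.1 * np.random.randn(t.size)
mask = np.random.rand(t.size) > 0.05          # ~5% missing values
rec = inpaint(np.where(mask, signal, 0.0), mask)
print("rms error on gaps: %.3f" % np.std(rec[~mask] - signal[~mask]))
```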

icon

Dealing with missing data in the MICROSCOPE space mission: An adaptation of inpainting to handle colored-noise data

Authors: S. Pires, J. Bergé, Q. Baghi, P. Touboul, G. Métris
Journal: Physical Review D
Year: 2016
Download: ADS | arXiv


Abstract

The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) with a $10^{-15}$ precision. Reaching this performance requires an accurate and robust data analysis method, especially since the possible WEP violation signal will be dominated by a strongly colored noise. An important complication is brought by the fact that some values will be missing; therefore, the measured time series will not be strictly regularly sampled. Those missing values induce a spectral leakage that significantly increases the noise in Fourier space, where the WEP violation signal is looked for, thereby complicating scientific returns. Recently, we developed an inpainting algorithm to correct the MICROSCOPE data for missing values. This code has been integrated in the official MICROSCOPE data processing and analysis pipeline because it enables us to significantly measure an equivalence principle violation (EPV) signal in a model-independent way, in the inertial satellite configuration. In this work, we present several improvements to the method that may allow us now to reach the MICROSCOPE requirements for both inertial and spin satellite configurations. The main improvement has been obtained using a prior on the power spectrum of the colored noise that can be directly derived from the incomplete data. We show that after reconstructing missing values with this new algorithm, a least-squares fit may allow us to significantly measure an EPV signal with a $0.96\times 10^{-15}$ precision in the inertial mode and a $1.20\times 10^{-15}$ precision in the spin mode. Although the inpainting method presented in this paper has been optimized for the MICROSCOPE data, it remains sufficiently general to be used in the general context of missing data in time series dominated by an unknown colored noise. The improved inpainting software, called inpainting for colored-noise dominated signals, is freely available at http://www.cosmostat.org/software/icon.
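Once the gaps are filled, the amplitude fit itself is a least-squares problem with colored noise, which reduces to ordinary least squares after whitening by the noise PSD (which is diagonal in Fourier space for stationary noise). A toy version follows; the EP frequency, sampling rate, flat noise prior and signal amplitude are all assumptions for the demo, not MICROSCOPE values.

```python
import numpy as np

def fit_epv(t, y, f_ep, noise_psd_func, fs):
    """Toy weighted least-squares fit of A*sin + B*cos at the known EP
    test frequency f_ep, whitening data and model by the (assumed
    known) noise PSD in Fourier space."""
    design = np.column_stack([np.sin(2 * np.pi * f_ep * t),
                              np.cos(2 * np.pi * f_ep * t)])
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    w = 1.0 / np.sqrt(noise_psd_func(freqs))   # whitening weights

    def whiten(v):
        return np.fft.irfft(np.fft.rfft(v) * w, n=t.size)

    Dw = np.column_stack([whiten(design[:, i]) for i in range(2)])
    coef, *_ = np.linalg.lstsq(Dw, whiten(y), rcond=None)
    return np.hypot(*coef)                     # signal amplitude

fs, f_ep = 4.0, 9.2e-4                 # hypothetical sampling / EP freq
t = np.arange(2**16) / fs
y = 0.05 * np.sin(2 * np.pi * f_ep * t) + np.random.randn(t.size)
print("fitted amplitude:",
      fit_epv(t, y, f_ep, lambda f: np.ones_like(f), fs))
```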

francois

High Resolution Weak Lensing Mass-Mapping Combining Shear and Flexion

Authors: F. Lanusse, J.-L. Starck, A. Leonard, S. Pires
Journal: A&A
Year: 2016
Download: ADS | arXiv


Abstract

Aims: We propose a new mass mapping algorithm, specifically designed to recover small-scale information from a combination of gravitational shear and flexion. Including flexion allows us to supplement the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map without relying on strong lensing constraints.
Methods: To preserve all available small scale information, we avoid any binning of the irregularly sampled input shear and flexion fields and treat the mass mapping problem as a general ill-posed inverse problem, which is regularised using a robust multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
Results: We test our reconstruction method on a set of realistic weak lensing simulations corresponding to typical HST/ACS cluster observations and demonstrate that including flexion allows us to recover substructures that are otherwise lost if only shear information is used. In particular, we can detect substructures on the 15'' scale well outside of the critical region of the clusters. In addition, flexion also helps to constrain the shape of the central regions of the main dark matter halos.
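The fast Fourier estimators mentioned in the Methods build on the flat-sky Kaiser-Squires relation $\hat\kappa(k) = \frac{(k_1^2 - k_2^2) - 2ik_1k_2}{k_1^2 + k_2^2}\,\hat\gamma(k)$. A bare-bones shear-only version is sketched below; the paper's algorithm additionally folds in flexion, avoids binning, and regularizes with a wavelet sparsity prior.

```python
import numpy as np

def kaiser_squires(g1, g2):
    """Flat-sky Fourier inversion of binned shear to convergence
    (Kaiser-Squires), the linear backbone behind regularized
    mass-mapping methods."""
    k1, k2 = np.meshgrid(np.fft.fftfreq(g1.shape[1]),
                         np.fft.fftfreq(g1.shape[0]), indexing="xy")
    ksq = k1**2 + k2**2
    ksq[0, 0] = 1.0                     # avoid division by zero at k=0
    g_hat = np.fft.fft2(g1 + 1j * g2)
    kappa_hat = ((k1**2 - k2**2) - 2j * k1 * k2) / ksq * g_hat
    kappa = np.fft.ifft2(kappa_hat)
    return kappa.real, kappa.imag       # E-mode, B-mode maps

# Usage on a toy noise field (real catalogs would be binned first).
g1, g2 = np.random.randn(2, 128, 128) * 0.01
kappa_e, kappa_b = kaiser_squires(g1, g2)
```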

Untitled

Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing

 

Authors: DES Collaboration
Journal:  
Year: 08/2017
Download: ADS | arXiv


Abstract

We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat $\Lambda$CDM and $w$CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for $\Lambda$CDM) or 7 (for $w$CDM) cosmological parameters including the neutrino mass density and including the 457 $\times$ 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}$ and $\Omega_m = 0.264^{+0.032}_{-0.019}$ for $\Lambda$CDM. For $w$CDM, we find $S_8 = 0.794^{+0.029}_{-0.027}$, $\Omega_m = 0.279^{+0.043}_{-0.022}$, and $w=-0.80^{+0.20}_{-0.22}$ at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for $S_8$ and $\Omega_m$ are lower than the central values from Planck ...
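The quoted $S_8$ is just the definition $S_8 \equiv \sigma_8(\Omega_m/0.3)^{0.5}$ evaluated at the best fit. A two-line check of the relation; since $\sigma_8$ itself is not quoted above, we can only invert it, and the resulting number is an illustration, not a value from the paper:

```python
import numpy as np

def S8(sigma8, omega_m):
    """S8 = sigma8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * np.sqrt(omega_m / 0.3)

# Invert the relation at the quoted LambdaCDM central values.
sigma8_implied = 0.783 / np.sqrt(0.264 / 0.3)
print("implied sigma8 =", sigma8_implied,
      "-> S8 =", S8(sigma8_implied, 0.264))
```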

Untitled

Dark Energy Survey Year 1 Results: Curved-Sky Weak Lensing Mass Map

 

Authors: C. Chang, A. Pujol, B. Mawdsley et al.
Journal:  
Year: 08/2017
Download: ADS | arXiv


Abstract

We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than previous work, is constructed over a contiguous $\approx 1500$ deg$^2$, covering a comoving volume of $\approx 10$ Gpc$^3$. The effects of masking, sampling, and noise are tested using simulations. We generate weak lensing maps from two DES Y1 shear catalogs, Metacalibration and Im3shape, with sources at redshift $0.2 < z < 1.3$, and in each of four bins in this range. In the highest signal-to-noise map, the ratio between the mean signal-to-noise in the E-mode and the B-mode map is $\sim 1.5$ ($\sim 2$) when smoothed with a Gaussian filter of $\sigma_{G} = 30$ (80) arcminutes. The second and third moments of the convergence $\kappa$ in the maps are in agreement with simulations. We also find no significant correlation of $\kappa$ with maps of potential systematic contaminants. Finally, we demonstrate two applications of the mass maps: (1) cross-correlation with different foreground tracers of mass and (2) exploration of the largest peaks and voids in the maps.
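A flat-sky stand-in for the map-level checks described above: smooth a convergence map with a Gaussian of width $\sigma_G$ and compute its second and third moments. The pixel scale and the noise-only test map are assumptions for the demo; the paper works on the curved sky.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_moments(kappa, sigma_arcmin, pix_arcmin):
    """Smooth a (flat-sky stand-in) convergence map with a Gaussian of
    width sigma_G and return its second and third moments, the summary
    statistics checked against simulations in the paper."""
    k = gaussian_filter(kappa, sigma_arcmin / pix_arcmin)
    k = k - k.mean()
    return np.mean(k**2), np.mean(k**3)

kappa = np.random.randn(512, 512) * 0.02    # toy noise-only map
print(smoothed_moments(kappa, sigma_arcmin=30.0, pix_arcmin=5.0))
```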

a520_glimpse_featured

Sparse reconstruction of the merging A520 cluster system

 

Authors: A. Peel, F. Lanusse, J.-L. Starck
Journal: submitted to ApJ
Year: 08/2017
Download: ADS | arXiv


Abstract

Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprisingly high mass-to-light concentration of dark mass, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.