
Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing

 

Authors: DES Collaboration
Journal:  
Year: 08/2017
Download: ADS | arXiv


Abstract

We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat $\Lambda$CDM and $w$CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for $\Lambda$CDM) or 7 (for $w$CDM) cosmological parameters including the neutrino mass density, and including the 457 $\times$ 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}$ and $\Omega_m = 0.264^{+0.032}_{-0.019}$ for $\Lambda$CDM. For $w$CDM, we find $S_8 = 0.794^{+0.029}_{-0.027}$, $\Omega_m = 0.279^{+0.043}_{-0.022}$, and $w=-0.80^{+0.20}_{-0.22}$ at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for $S_8$ and $\Omega_m$ are lower than the central values from Planck ...


Dark Energy Survey Year 1 Results: Curved-Sky Weak Lensing Mass Map

 

Authors: C. Chang, A. Pujol, B. Mawdsley et al.
Journal:  
Year: 08/2017
Download: ADS | arXiv


Abstract

We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than previous work, is constructed over a contiguous $\approx 1{,}500$ deg$^2$, covering a comoving volume of $\approx 10$ Gpc$^3$. The effects of masking, sampling, and noise are tested using simulations. We generate weak lensing maps from two DES Y1 shear catalogs, Metacalibration and Im3shape, with sources at redshift $0.2 < z < 1.3$, and in each of four bins in this range. In the highest signal-to-noise map, the ratio between the mean signal-to-noise in the E-mode and the B-mode map is $\sim$1.5 ($\sim$2) when smoothed with a Gaussian filter of $\sigma_{G}=30$ (80) arcminutes. The second and third moments of the convergence $\kappa$ in the maps are in agreement with simulations. We also find no significant correlation of $\kappa$ with maps of potential systematic contaminants. Finally, we demonstrate two applications of the mass maps: (1) cross-correlation with different foreground tracers of mass and (2) exploration of the largest peaks and voids in the maps.

Sparse reconstruction of the merging A520 cluster system

 

Authors: A. Peel, F. Lanusse, J.-L. Starck
Journal: submitted to ApJ
Year: 08/2017
Download: ADS | arXiv


Abstract

Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprisingly high mass-to-light concentration of dark mass, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.


Shear measurement bias: dependencies on methods, simulation parameters and measured parameters

 

Authors: A. Pujol, F. Sureau, J. Bobin et al.
Journal: A&A
Year: 06/2017
Download: ADS | arXiv


Abstract

We present a study of the dependencies of shear and ellipticity bias on simulation (input) and measured (output) parameters, noise, PSF anisotropy, pixel size and the model bias coming from two different and independent shape estimators. We use simulated images from GalSim based on the GREAT3 control-space-constant branch, and we measure ellipticity and shear bias with a model-fitting method (gFIT) and a moment-based method (KSB). We show the bias dependencies found on input and output parameters for both methods, and we identify the main dependencies and their causes. We find consistent results between the two methods (given the precision of the analysis) and important dependencies on orientation and morphology properties such as flux, size and ellipticity. We show cases where shear bias and ellipticity bias behave very differently for the two methods due to the different nature of these measurements. We also show that noise and pixelization play an important role in the bias dependencies on the output properties. We find a large model bias for galaxies consisting of a bulge and a disk with different ellipticities or orientations. We also see an important coupling between several properties in the bias dependencies. Because of this, several properties need to be studied simultaneously in order to properly understand the nature of shear bias.


Unsupervised feature learning for galaxy SEDs with denoising autoencoders

 

Authors: Frontera-Pons, J., Sureau, F., Bobin, J. and Le Floc'h, E.
Journal: Astronomy & Astrophysics
Year: 2017
Download: ADS | arXiv


Abstract

With the increasing number of deep multi-wavelength galaxy surveys, the spectral energy distribution (SED) of galaxies has become an invaluable tool for studying the formation of their structures and their evolution. In this context, standard analysis relies on simple spectro-photometric selection criteria based on a few SED colors. While this fully supervised classification has already yielded clear achievements, it is not optimal for extracting relevant information from the data. In this article, we propose to employ very recent advances in machine learning, and more precisely in feature learning, to derive a data-driven diagram. We show that the proposed approach based on denoising autoencoders recovers the bi-modality in the galaxy population in an unsupervised manner, without using any prior knowledge on galaxy SED classification. This technique has been compared to principal component analysis (PCA) and to standard color/color representations. In addition, preliminary results illustrate that this approach captures additional physically meaningful information, such as redshift dependence, galaxy mass evolution and variation of the specific star formation rate. PCA also results in an unsupervised representation with physical properties, such as mass and sSFR, although this representation separates out other characteristics (bimodality, redshift evolution) less clearly than denoising autoencoders.

Quantifying systematics from the shear inversion on weak-lensing peak counts

Authors: C. Lin, M. Kilbinger
Journal: Submitted to A&A Letters
Year: 2017
Download: ADS | arXiv

 


Abstract

Weak-lensing (WL) peak counts provide a straightforward way to constrain cosmology, and results have been shown to be promising. However, the importance of understanding and dealing with systematics increases as data quality reaches an unprecedented level. One of the sources of systematics is the shear-to-convergence inversion. This step, inevitable when dealing with observations, is usually neglected by theoretical peak models and could thus have an impact on cosmological results. In this letter, we study the bias from neglecting the inversion and find it small but not negligible. The cosmological dependence of this bias is difficult to model and depends on the filter size. We also show the evolution of parameter constraints. Although weak biases arise in individual peak bins, the bias can reach $2\sigma$ for the dark energy equation of state $w_0$. Therefore, we suggest that the inversion cannot be ignored and that inversion-free approaches, such as aperture mass, would be a more suitable tool for studying weak-lensing peak counts.


PSF field learning based on Optimal Transport Distances

 

Authors: F. Ngolè Mboula, J-L. Starck
Journal: arXiv
Year: 2017
Download: ADS | arXiv

 


Abstract

Context: in astronomy, observing large fractions of the sky within a reasonable amount of time implies using large field-of-view (FOV) optical instruments that typically have a spatially varying Point Spread Function (PSF). Depending on the scientific goals, galaxy images need to be corrected for the PSF, yet no direct measurement of the PSF is available at their positions. Aims: given a set of PSFs observed at random locations, we want to estimate the PSFs at the galaxies' locations in order to correct their shape measurements. Contributions: we propose an interpolation framework based on Sliced Optimal Transport. A non-linear dimension reduction is first performed based on local pairwise approximated Wasserstein distances. A low-dimensional representation of the unknown PSFs is then estimated, which in turn is used to derive representations of those PSFs in the Wasserstein metric. Finally, the interpolated PSFs are calculated as approximated Wasserstein barycenters. Results: the proposed method was tested on simulated monochromatic PSFs of the Euclid space mission telescope (to be launched in 2020). It achieves a remarkable accuracy in terms of pixel values and shape compared to standard methods such as Inverse Distance Weighting or Radial Basis Function based interpolation.


Joint Multichannel Deconvolution and Blind Source Separation

 

Authors: M. Jiang, J. Bobin, J-L. Starck
Journal: arXiv
Year: 2017
Download: ADS | arXiv

 


Abstract

Blind Source Separation (BSS) is a challenging matrix factorization problem that plays a central role in multichannel imaging science. In a large number of applications, such as astrophysics, current unmixing methods are limited since real-world mixtures are generally affected by extra instrumental effects like blurring. Therefore, BSS has to be solved jointly with a deconvolution problem, which requires tackling a new inverse problem: deconvolution BSS (DBSS). In this article, we introduce an innovative DBSS approach, called DecGMCA, based on sparse signal modeling and an efficient alternating projected least squares algorithm. Numerical results demonstrate that the DecGMCA algorithm performs very well on simulations. They further highlight the importance of jointly solving BSS and deconvolution instead of considering these two problems independently. Furthermore, the performance of the proposed DecGMCA algorithm is demonstrated on simulated radio-interferometric data.


Space variant deconvolution of galaxy survey images

 

Authors: S. Farrens, J-L. Starck, F. Ngolè Mboula
Journal: A&A
Year: 2017
Download: ADS | arXiv


Abstract

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially, as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies, and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-dimensional (low-rank) space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space-variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.


Summary

Point Spread Function

The Point Spread Function or PSF of an imaging system (also referred to as the impulse response) describes how the system responds to a point (unextended) source. In astrophysics, stars or quasars are often used to measure the PSF of an instrument, as in ideal conditions their light would occupy a single pixel on a CCD. Telescopes, however, diffract the incoming photons, which limits the maximum resolution achievable. In reality, the images obtained from telescopes include aberrations from various sources (a toy simulation of this blurring is sketched after the list below), such as:

  • The atmosphere (for ground based instruments)
  • Jitter (for space based instruments)
  • Imperfections in the optical system
  • Charge spread of the detectors
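
To make this concrete, here is a minimal sketch of how a PSF smears out a point source. The Gaussian PSF model, grid size and width are illustrative assumptions only; the real PSFs discussed above are far more complex.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy illustration: blur a point source with a Gaussian PSF.
# The PSF model and grid size are illustrative, not instrument-specific.

def gaussian_psf(size=64, sigma=3.0):
    """Normalised 2D Gaussian PSF on a size x size grid."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# A point source: all of its flux falls in a single pixel.
image = np.zeros((64, 64))
image[32, 32] = 1.0

# The observed image is the point source spread out by the PSF.
observed = fftconvolve(image, gaussian_psf(), mode="same")
```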

Deconvolution

In order to recover the true image properties it is necessary to remove PSF effects from observations. If the PSF is known (which is itself a nontrivial task), one can attempt to deconvolve it from the image. In the absence of noise this is simple. We can model the observed image \mathbf{y} as follows

\mathbf{y}=\mathbf{Hx}

where \mathbf{x} is the true image and \mathbf{H} is an operator that represents the convolution with the PSF. Thus, to recover the true image, one would simply invert \mathbf{H} as follows

\mathbf{x}=\mathbf{H}^{-1}\mathbf{y}

Unfortunately, the images we observe also contain noise (e.g. from the CCD readout) and this complicates the problem.

\mathbf{y}=\mathbf{Hx} + \mathbf{n}

This problem is ill-posed: even the tiniest amount of noise will have a large impact on the result of the operation. Therefore, to obtain a stable and unique solution, it is necessary to regularise the problem by adding prior knowledge of the true images.
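
The instability is easy to demonstrate numerically. The following toy sketch (with assumed parameters, not the method of the paper) blurs a point source with a Gaussian PSF and then naively divides by the PSF transfer function in Fourier space, with and without noise:

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

# Toy illustration of ill-posedness: naive Fourier-space deconvolution.
# Where the PSF transfer function is small, noise is hugely amplified.

n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(xx**2 + yy**2) / 2.0)   # sigma = 1 pixel Gaussian PSF
psf /= psf.sum()
H = fft2(ifftshift(psf))               # PSF transfer function

x = np.zeros((n, n))                   # true image: one point source
x[n // 2, n // 2] = 1.0
y = np.real(ifft2(fft2(x) * H))        # y = Hx, noise-free observation

# Without noise, dividing by H recovers x almost perfectly.
x_rec = np.real(ifft2(fft2(y) / H))
print(np.abs(x_rec - x).max())         # tiny (round-off level)

# With just 1% pixel noise, the same division is swamped by
# amplified noise at frequencies where H is small.
y_noisy = y + np.random.normal(scale=0.01, size=y.shape)
x_bad = np.real(ifft2(fft2(y_noisy) / H))
print(np.abs(x_bad - x).max())         # orders of magnitude too large
```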

Sparsity

One way to regularise the problem is using sparsity. The concept of sparsity is quite simple. If we know that there is a representation of \mathbf{x} that is sparse (i.e. most of the coefficients are zero), then we can force our deconvolved observation \mathbf{\hat{x}} to be sparse in the same domain. In practice we aim to solve a minimisation problem of the following form

\begin{aligned} & \underset{\mathbf{x}}{\text{argmin}} & \frac{1}{2}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \lambda\|\Phi(\mathbf{x})\|_1 & & \text{s.t.} & & \mathbf{x} \ge 0 \end{aligned}

where \Phi is a matrix that transforms \mathbf{x} to the sparse domain and \lambda is a regularisation control parameter.
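
As an illustration of how such a problem can be solved, here is a minimal ISTA-style (proximal gradient) sketch. It assumes for simplicity that \Phi is the identity, i.e. sparsity directly in the pixel domain, whereas the paper uses a wavelet transform; \mathbf{H} is implemented as a circular convolution via the FFT. This is a sketch of the general technique, not the SF_DECONVOLVE implementation.

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

# Minimal ISTA sketch for: argmin_x 0.5 ||y - Hx||^2 + lam ||x||_1
# subject to x >= 0, with Phi = identity (sparsity in pixel space).

def soft_threshold(a, t):
    """Proximal operator of the l1 norm (soft thresholding)."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def ista_deconvolve(y, psf, lam=0.01, n_iter=200):
    H = fft2(ifftshift(psf))
    Hop = lambda u: np.real(ifft2(fft2(u) * H))            # H u
    Hadj = lambda u: np.real(ifft2(fft2(u) * np.conj(H)))  # H^T u
    step = 1.0 / np.abs(H).max() ** 2   # 1 / Lipschitz constant of gradient
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = Hadj(Hop(x) - y)         # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, lam * step)    # l1 proximal step
        x = np.maximum(x, 0.0)          # positivity constraint
    return x
```

Applied to the noisy toy observation above, ista_deconvolve(y_noisy, psf) should return a stable estimate where the naive inversion failed.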

Low-Rank Approximation

Another way to regularise the problem is to assume that all of the images one aims to deconvolve live on an underlying low-rank manifold. In other words, if we have a sample of galaxy images we wish to deconvolve, then we can construct a matrix \mathbf{X} where each column is a vector of galaxy pixel coefficients. If many of these galaxies have similar properties, then we know that \mathbf{X} will have a smaller rank than if the images were all very different. We can use this knowledge to regularise the deconvolution problem in the following way

\begin{aligned} & \underset{\mathbf{X}}{\text{argmin}} & \frac{1}{2}\|\mathbf{Y}-\mathcal{H}(\mathbf{X})\|_2^2 + \lambda\|\mathbf{X}\|_* & & \text{s.t.} & & \mathbf{X} \ge 0 \end{aligned}
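
The key ingredient for solving this problem with a proximal algorithm is the proximal operator of the nuclear norm \|\mathbf{X}\|_*, which soft-thresholds the singular values of the stacked galaxy matrix. A minimal sketch, with invented numbers rather than the paper's data:

```python
import numpy as np

# Singular value thresholding: the proximal operator of t * ||X||_*.
def svd_threshold(X, t):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - t, 0.0)     # shrink the singular values;
    return (U * s) @ Vt            # components below t vanish entirely

# Example: 100 noisy 'galaxies' (64*64 pixels each, as columns) built
# from only 5 underlying templates are pushed back towards rank 5.
rng = np.random.default_rng(0)
X_true = rng.standard_normal((64 * 64, 5)) @ rng.standard_normal((5, 100))
X_noisy = X_true + 0.1 * rng.standard_normal(X_true.shape)
X_lr = svd_threshold(X_noisy, t=10.0)
print(np.linalg.matrix_rank(X_lr))   # 5: the low rank is recovered
```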

Results

In the paper I implement both of these regularisation techniques and compare how well they perform at deconvolving a sample of 10,000 Euclid-like galaxy images. The results show that, for the data used, sparsity does a better job at recovering the image pixels, while the low-rank approximation does a better job at recovering the galaxy shapes (provided enough galaxies are used).


Code

SF_DECONVOLVE is a Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs for each galaxy position.

 



Linear and non-linear Modified Gravity forecasts with future surveys

 

Authors: S. Casas, M. Kunz, M. Martinelli, V. Pettorino
Journal: Physics Letters B
Year: 2017
Download: ADS | arXiv


Abstract

Modified Gravity theories generally affect the Poisson equation and the gravitational slip (effective anisotropic stress) in an observable way that can be parameterized by two generic functions (η and μ) of time and space. We bin the time dependence of these functions in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, treated with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further decorrelate parameters with a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We also extend the analysis to two particular parameterizations of the time evolution of μ and η and consider, in addition to Euclid, also SKA1, SKA2 and DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 25% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.


Summary

A new paper has been posted on the arXiv by new CosmoStat member Valeria Pettorino and her PhD student Santiago Casas, in collaboration with Martin Kunz (Geneva) and Matteo Martinelli (Leiden).
The authors discuss forecasts in Modified Gravity cosmologies, described by two generic functions of time and space [Planck Dark Energy and Modified Gravity 2015; Asaba et al. 2013; Bull 2015; Alonso et al. 2016]. Their amplitude is constrained in different redshift bins. The authors elaborate on the impact of non-linear scales, showing that their inclusion (via a non-linear semi-analytical prescription applied to Modified Gravity) greatly reduces the correlation among different redshift bins, even before any decorrelation procedure is applied. This is seen visually in the figure below (Fig. 4 of the arXiv paper) for the case of Galaxy Clustering: the correlation matrix of the cosmological parameters (including the amplitudes of the Modified Gravity functions, binned in redshift) is much more diagonal in the non-linear case (right panel) than in the linear one (left panel).

[Fig. 4 of Casas et al. 2017: correlation matrices for Galaxy Clustering, linear (left) vs. non-linear (right)]

A decorrelation procedure (Zero-phase Component Analysis, ZCA) is nevertheless used to extract the combinations that are best constrained by future surveys such as Euclid. With respect to Principal Component Analysis, ZCA finds a new vector of uncorrelated variables that is as similar as possible to the original vector of variables.
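
For readers unfamiliar with ZCA, here is a minimal sketch of the idea on hypothetical, strongly correlated parameter samples (the covariance below is invented purely for illustration, not taken from the paper):

```python
import numpy as np

# ZCA whitening: decorrelate variables with W = C^{-1/2}, the symmetric
# inverse square root of their covariance. Unlike PCA, which rotates
# into an arbitrary eigenbasis, ZCA keeps the new variables as close as
# possible to the originals.

def zca_whiten(samples):
    """Whiten rows of 'samples' (one sample per row)."""
    X = samples - samples.mean(axis=0)
    C = np.cov(X, rowvar=False)                      # covariance matrix
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T     # C^{-1/2}
    return X @ W.T                                   # decorrelated samples

# Hypothetical, strongly correlated amplitude estimates in two bins:
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
print(np.cov(zca_whiten(samples), rowvar=False).round(2))  # ~ identity
```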

The authors further consider two smooth time parameterizations of the functions, one allowed to depart from General Relativity only at late times (late-time parameterization) and one able to deviate also at early times (early-time parameterization). The Fisher Matrix forecasts for standard and Modified Gravity parameters, for different surveys (Euclid, SKA1, SKA2), are shown in the plot below (extracted from Fig. 15 of the arXiv paper), in which the Galaxy Clustering and Weak Lensing probes are combined. The left panel refers to the linear analysis; the right panel includes the non-linear treatment.

[Fig. 15 of Casas et al. 2017: Fisher forecasts for Euclid, SKA1 and SKA2, linear (left) vs. non-linear (right)]