
Sparse reconstruction of the merging A520 cluster system

 

Authors: A. Peel, F. Lanusse, J.-L. Starck
Journal: submitted to ApJ
Year: 08/2017
Download: ADS | arXiv


Abstract

Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprisingly high mass-to-light concentration of dark mass, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.


Shear measurement bias: dependencies on methods, simulation parameters and measured parameters

 

Authors: A. Pujol, F. Sureau, J. Bobin et al.
Journal: A&A
Year: 06/2017
Download: ADS | arXiv


Abstract

We present a study of the dependencies of shear and ellipticity bias on simulation (input) and measured (output) parameters, noise, PSF anisotropy, pixel size and the model bias coming from two different and independent shape estimators. We use simulated images from GalSim based on the GREAT3 control-space-constant branch, and we measure ellipticity and shear bias with a model-fitting method (gFIT) and a moment-based method (KSB). We show the bias dependencies found on input and output parameters for both methods and identify the main dependencies and their causes. We find consistent results between the two methods (given the precision of the analysis) and important dependencies on orientation and morphology properties such as flux, size and ellipticity. We show cases where shear bias and ellipticity bias behave very differently for the two methods due to the different nature of these measurements. We also show that noise and pixelization play an important role in the bias dependencies on the output properties. We find a large model bias for galaxies consisting of a bulge and a disk with different ellipticities or orientations. We also see an important coupling between several properties in the bias dependencies. Because of this, we need to study several properties simultaneously in order to properly understand the nature of shear bias.
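To make the notion of shear bias concrete, here is a minimal sketch of the standard multiplicative/additive convention, g_meas ≈ (1 + m) g_true + c, with m and c estimated by a simple linear fit. This is an illustration only (the function name and the synthetic data are assumptions), not the gFIT or KSB pipelines themselves.

```python
import numpy as np

def shear_bias(g_true, g_meas):
    """Estimate multiplicative (m) and additive (c) shear bias assuming the
    standard linear model g_meas ~ (1 + m) * g_true + c."""
    m, c = np.polyfit(g_true, g_meas - g_true, 1)  # fit the residual against the true shear
    return m, c

# Illustrative usage on synthetic values (not measurements from the paper):
g_true = np.linspace(-0.05, 0.05, 50)
g_meas = 1.02 * g_true + 1e-3 + 1e-4 * np.random.default_rng(0).standard_normal(50)
print(shear_bias(g_true, g_meas))  # should recover m ~ 0.02, c ~ 1e-3
```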


PSF field learning based on Optimal Transport Distances

 

Authors: F. Ngolè Mboula, J-L. Starck
Journal: arXiv
Year: 2017
Download: ADS | arXiv

 


Abstract

Context: in astronomy, observing large fractions of the sky within a reasonable amount of time implies using large field-of-view (fov) optical instruments that typically have a spatially varying Point Spread Function (PSF). Depending on the scientific goals, galaxy images need to be corrected for the PSF, even though no direct measurement of the PSF is available at the positions of the galaxies. Aims: given a set of PSFs observed at random locations, we want to estimate the PSFs at galaxy locations in order to correct shape measurements. Contributions: we propose an interpolation framework based on Sliced Optimal Transport. A non-linear dimension reduction is first performed based on local pairwise approximated Wasserstein distances. A low dimensional representation of the unknown PSFs is then estimated, which in turn is used to derive representations of those PSFs in the Wasserstein metric. Finally, the interpolated PSFs are calculated as approximated Wasserstein barycenters. Results: the proposed method was tested on simulated monochromatic PSFs of the Euclid space mission telescope (to be launched in 2020). It achieves a remarkable accuracy in terms of pixel values and shape compared to standard methods such as Inverse Distance Weighting or Radial Basis Function based interpolation methods.
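As a rough illustration of the key building block, the 1D Wasserstein-2 barycenter of empirical distributions, which sliced optimal transport applies along random projection directions, reduces to averaging sorted samples. The sketch below is a toy example under that assumption, not the authors' PSF interpolation code.

```python
import numpy as np

def wasserstein_barycenter_1d(samples_list, weights):
    """1D Wasserstein-2 barycenter of empirical distributions with equal sample
    sizes: sort each sample set and average the order statistics. Sliced optimal
    transport applies this 1D operation along many random projection directions."""
    sorted_samples = np.array([np.sort(s) for s in samples_list])
    return np.average(sorted_samples, axis=0, weights=weights)

# Toy usage: the barycenter of two shifted Gaussians sits between them.
rng = np.random.default_rng(0)
a = rng.normal(-1.0, 0.5, 1000)
b = rng.normal(+1.0, 0.5, 1000)
print(wasserstein_barycenter_1d([a, b], weights=[0.5, 0.5]).mean())  # close to 0
```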


Joint Multichannel Deconvolution and Blind Source Separation

 

Authors: M. Jiang, J. Bobin, J-L. Starck
Journal: arXiv
Year: 2017
Download: ADS | arXiv

 


Abstract

Blind Source Separation (BSS) is a challenging matrix factorization problem that plays a central role in multichannel imaging science. In a large number of applications, such as astrophysics, current unmixing methods are limited since real-world mixtures are generally affected by extra instrumental effects like blurring. Therefore, BSS has to be solved jointly with a deconvolution problem, which requires tackling a new inverse problem: deconvolution BSS (DBSS). In this article, we introduce an innovative DBSS approach, called DecGMCA, based on sparse signal modeling and an efficient alternative projected least-squares algorithm. Numerical results demonstrate that the DecGMCA algorithm performs very well on simulations. They further highlight the importance of jointly solving BSS and deconvolution instead of considering these two problems independently. Furthermore, the performance of the proposed DecGMCA algorithm is demonstrated on simulated radio-interferometric data.


Space variant deconvolution of galaxy survey images

 

Authors: S. Farrens, J-L. Starck, F. Ngolè Mboula
Journal: A&A
Year: 2017
Download: ADS | arXiv


Abstract

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially, as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies, and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-rank dimensional space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples of galaxies it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.


Summary

Point Spread Function

The Point Spread Function or PSF of an imaging system (also referred to as the impulse response) describes how the system responds to a point (unextended) source. In astrophysics, stars or quasars are often used to measure the PSF of an instrument as in ideal conditions their light would occupy a single pixel on a CCD. Telescopes, however, diffract the incoming photons which limits the maximum resolution achievable. In reality, the images obtained from telescopes include aberrations from various sources such as:

  • The atmosphere (for ground based instruments)
  • Jitter (for space based instruments)
  • Imperfections in the optical system
  • Charge spread of the detectors

Deconvolution

In order to recover the true image properties it is necessary to remove PSF effects from observations. If the PSF is known (which is certainly not trivial) one can attempt to deconvolve the PSF from the image. In the absence of noise this is simple. We can model the observed image \mathbf{y} as follows

\mathbf{y}=\mathbf{Hx}

where \mathbf{x} is the true image and \mathbf{H} is an operator that represents the convolution with the PSF. Thus, to recover the true image, one would simply invert \mathbf{H} as follows

\mathbf{x}=\mathbf{H}^{-1}\mathbf{y}

Unfortunately, the images we observe also contain noise (e.g. from the CCD readout) and this complicates the problem.

\mathbf{y}=\mathbf{Hx} + \mathbf{n}

This problem is ill-posed as even the tiniest amount of noise will have a large impact on the result of the operation. Therefore, to obtain a stable and unique solution, it is necessary to regularise the problem by adding additional prior knowledge of the true images.
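The instability is easy to see with a few lines of NumPy. The sketch below is an illustration only (a toy 1D signal and an assumed Gaussian PSF, not the paper's code): it convolves the signal via the FFT and then applies the naive inverse, once on noiseless data and once after adding a small amount of noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[100:120] = 1.0                                   # toy "true" signal

t = np.arange(n) - n // 2
psf = np.exp(-0.5 * (t / 2.0) ** 2)
psf /= psf.sum()                                   # normalised Gaussian PSF
H = np.fft.fft(np.fft.ifftshift(psf))              # convolution operator in Fourier space

y = np.real(np.fft.ifft(H * np.fft.fft(x)))        # y = Hx
y_noisy = y + 1e-3 * rng.standard_normal(n)        # y = Hx + n

x_clean = np.real(np.fft.ifft(np.fft.fft(y) / H))        # essentially exact recovery
x_noisy = np.real(np.fft.ifft(np.fft.fft(y_noisy) / H))  # noise is amplified enormously

print(np.max(np.abs(x_clean - x)), np.max(np.abs(x_noisy - x)))
```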

Sparsity

One way to regularise the problem is to use sparsity. The concept of sparsity is quite simple. If we know that there is a representation of \mathbf{x} that is sparse (i.e. most of the coefficients are zero) then we can force our deconvolved observation \mathbf{\hat{x}} to be sparse in the same domain. In practice we aim to solve a minimisation problem of the following form

\begin{aligned} & \underset{\mathbf{x}}{\text{argmin}} & \frac{1}{2}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \lambda\|\Phi(\mathbf{x})\|_1 & & \text{s.t.} & & \mathbf{x} \ge 0 \end{aligned}

where \Phi is a matrix that transforms \mathbf{x} to the sparse domain and \lambda is a regularisation control parameter.
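A minimal sketch of how such a problem can be minimised is the iterative soft-thresholding (ISTA) loop below, reusing the Fourier-space PSF H from the earlier snippet. For simplicity \Phi is taken to be the identity (sparsity directly in pixel space), whereas in practice a wavelet transform would be used; the step size and \lambda value are illustrative assumptions.

```python
import numpy as np

def soft_threshold(a, t):
    """Soft thresholding: the proximity operator of the l1 norm."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def ista_deconvolve(y, H, lam, n_iter=200):
    """Sketch of argmin_x 0.5||y - Hx||_2^2 + lam||x||_1 s.t. x >= 0,
    i.e. the problem above with Phi = identity. H is the PSF in Fourier space."""
    Hc = np.conj(H)
    step = 1.0 / np.max(np.abs(H)) ** 2                       # 1 / Lipschitz constant of the gradient
    x = np.zeros_like(y)
    for _ in range(n_iter):
        resid = np.real(np.fft.ifft(H * np.fft.fft(x))) - y   # Hx - y
        grad = np.real(np.fft.ifft(Hc * np.fft.fft(resid)))   # H^T (Hx - y)
        x = soft_threshold(x - step * grad, step * lam)        # prox of lam * l1 norm
        x = np.maximum(x, 0.0)                                 # positivity constraint
    return x

# e.g. x_hat = ista_deconvolve(y_noisy, H, lam=1e-3)
```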

Low-Rank Approximation

Another way to regularise the problem is to assume that all of the images one aims to deconvolve live on an underlying low-rank manifold. In other words, if we have a sample of galaxy images we wish to deconvolve then we can construct a matrix \mathbf{X} where each column is a vector of galaxy pixel coefficients. If many of these galaxies have similar properties then we know that \mathbf{X} will have a smaller rank than if the images were all very different. We can use this knowledge to regularise the deconvolution problem in the following way

\begin{aligned} & \underset{\mathbf{X}}{\text{argmin}} & \frac{1}{2}\|\mathbf{Y}-\mathcal{H}(\mathbf{X})\|_2^2 + \lambda\|\mathbf{X}\|_* & & \text{s.t.} & & \mathbf{X} \ge 0 \end{aligned}
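Here the proximity operator of the nuclear norm \|\mathbf{X}\|_* is singular value thresholding, which leads to a proximal-gradient loop analogous to the sparse case. The sketch below assumes, for simplicity, a single Fourier-space PSF applied to every column; it is illustrative only and not equivalent to the space-variant operator \mathcal{H} used in the paper.

```python
import numpy as np

def svt(X, t):
    """Singular value thresholding: the proximity operator of t * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def lowrank_deconvolve(Y, H, lam, n_iter=100):
    """Sketch of argmin_X 0.5||Y - H(X)||_F^2 + lam||X||_* s.t. X >= 0.
    Each column of Y is one (flattened) observed galaxy; a single PSF H is
    applied to every column here, unlike the space-variant case in the paper."""
    conv = lambda A, K: np.real(np.fft.ifft(K[:, None] * np.fft.fft(A, axis=0), axis=0))
    step = 1.0 / np.max(np.abs(H)) ** 2
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = conv(conv(X, H) - Y, np.conj(H))   # H^T (HX - Y), column by column
        X = svt(X - step * grad, step * lam)      # prox of the nuclear norm
        X = np.maximum(X, 0.0)                    # positivity (handled approximately here)
    return X
```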

Results

In the paper I implement both of these regularisation techniques and compare how well they perform at deconvolving a sample of 10,000 Euclid-like galaxy images. The results show that, for the data used, sparsity does a better job at recovering the image pixels, while the low-rank approximation does a better job at recovering the galaxy shapes (provided enough galaxies are used).


Code

SF_DECONVOLVE is a Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs for each galaxy position.

 



Linear and non-linear Modified Gravity forecasts with future surveys

 

Authors: S. Casas, M. Kunz, M. Martinelli, V. Pettorino
Journal: Physics Letters B
Year: 2017
Download: ADS | arXiv


Abstract

Modified Gravity theories generally affect the Poisson equation and the gravitational slip (effective anisotropic stress) in an observable way, that can be parameterized by two generic functions (η and μ) of time and space. We bin the time dependence of these functions in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, treated with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further decorrelate parameters with a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We also extend the analysis to two particular parameterizations of the time evolution of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 25% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.


Summary

A new paper has been put on the arXiv by new CosmoStat member Valeria Pettorino and her PhD student Santiago Casas, in collaboration with Martin Kunz (Geneva) and Matteo Martinelli (Leiden).
The authors discuss forecasts in Modified Gravity cosmologies, described by two generic functions of time and space [Planck Dark Energy and Modified Gravity 2015; Asaba et al. 2013; Bull 2015; Alonso et al. 2016]. Their amplitudes are constrained in different redshift bins. The authors elaborate on the impact of non-linear scales, showing that their inclusion (via a non-linear semi-analytical prescription applied to Modified Gravity) greatly reduces the correlation among different redshift bins, even before any decorrelation procedure is applied. This is visually seen in the figure below (Fig. 4 of the paper) for the case of Galaxy Clustering: the correlation matrix of the cosmological parameters (including the amplitudes of the Modified Gravity functions, binned in redshift) is much more diagonal in the non-linear case (right panel) than in the linear one (left panel).

[Fig. 4 of the paper: correlation matrices of the cosmological and Modified Gravity parameters from Galaxy Clustering, in the linear (left) and non-linear (right) cases]

A decorrelation procedure (Zero-phase Component Analysis, ZCA) is nevertheless used to extract the combinations that are best constrained by future surveys such as Euclid. Compared with Principal Component Analysis, ZCA allows one to find a new vector of uncorrelated variables that is as similar as possible to the original vector of variables.
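For intuition, decorrelating a parameter vector with ZCA amounts to multiplying it by the inverse square root of its covariance matrix. The toy sketch below is illustrative only (the function name and usage are assumptions), not the analysis code of the paper.

```python
import numpy as np

def zca_decorrelate(p, cov):
    """Zero-phase Component Analysis: whiten the parameter vector p with
    W = C^{-1/2}, so that the new variables are uncorrelated while remaining
    as similar as possible to the original ones."""
    evals, evecs = np.linalg.eigh(cov)               # C = V diag(e) V^T
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T     # C^{-1/2}
    return W @ p, W

# Hypothetical usage: p = binned amplitudes of mu and eta, cov = their Fisher covariance.
```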

The authors further consider two smooth time parameterizations, one allowed to depart from General Relativity only at late times (late-time parameterization) and one able to deviate also at early times (early-time parameterization). The Fisher matrix forecasts for standard and Modified Gravity parameters, for different surveys (Euclid, SKA1, SKA2), are shown in the plot below (extracted from Fig. 15 of the paper), in which the Galaxy Clustering and Weak Lensing probes are combined. The left panel refers to the linear analysis; the right panel includes the non-linear treatment.

[Fig. 15 of the paper: Fisher forecasts for the standard and Modified Gravity parameters from Euclid, SKA1 and SKA2, combining Galaxy Clustering and Weak Lensing, in the linear (left) and non-linear (right) cases]


Weak-lensing projections

Authors: M. Kilbinger, C. Heymans et al.
Journal: submitted to MNRAS
Year: 2017
Download: ADS | arXiv


Abstract

We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Lensing Survey (CFHTLenS). We find that the reported tension with Planck Cosmic Microwave Background (CMB) temperature anisotropy results cannot be alleviated, in contrast to the recent claim made by Kitching et al. (2016, version 1). For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for l > 3, with the corresponding errors an order of magnitude below cosmic variance for all l. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package nicaea at http://www.cosmostat.org/software/nicaea.


Summary

We discuss various methods to calculate projections for weak gravitational lensing: since light from lensed galaxies picks up matter inhomogeneities of the cosmic web along the line of sight as it propagates through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions and can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations such as the Limber approximation, which reduces the integrals to 1D and thereby neglects modes along the line of sight.
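For reference, an extended first-order Limber evaluation reduces the projection to a single line-of-sight integral. The sketch below is illustrative (the lensing efficiency q, the power spectrum pk_delta and z_of_chi are assumed inputs supplied by the user) and is not the nicaea implementation.

```python
import numpy as np

def limber_cl(ell, chi, q, pk_delta, z_of_chi):
    """Extended first-order Limber approximation of the convergence power spectrum:
        C_kappa(ell) ~ int dchi  q(chi)^2 / chi^2  P_delta(k = (ell + 1/2) / chi, z(chi))
    chi            : array of comoving distances along the line of sight (chi > 0)
    q              : lensing efficiency evaluated on chi
    pk_delta(k, z) : matter power spectrum (e.g. from a Boltzmann or fitting code)
    z_of_chi(chi)  : redshift corresponding to each comoving distance
    """
    k = (ell + 0.5) / chi
    integrand = q**2 / chi**2 * pk_delta(k, z_of_chi(chi))
    return np.trapz(integrand, chi)
```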

The authors show that these approximations are more than adequate for present surveys. Sub-percent accuracy is reached for l>20, as shown for example by the pink curve, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation means 'extended' (corresponding to the improved approximation introduced by LoVerde & Afshordi 2008), first-order Limber, and hybrid, since this is a hybrid between flat-sky and spherical coordinates. This case has been used in most recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber, flat-sky) was the popular choice in most publications since 2014.

These approximations are sufficient for the small areas of current observations coming from CFHTLenS, KiDS, and DES, and their errors are well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg², and KiDS, 1,500 deg²).

[Figure: ratio of approximated to full projections of the lensing power spectrum, compared with the cosmic variance of Euclid (15,000 deg²) and KiDS (1,500 deg²)]

The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008), and applied to weak lensing in the current paper. The best 2nd-order case 'ExtL2Sph' reaches sub-percent accuracy down to l=3, sufficient for all future surveys.

The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.

We then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows a particularly slow convergence to the full projection.

Similar results have been derived in two other recent publications, Kitching et al. (2017), and Lemos, Challinor & Efstathiou (2017).
Note however that Kitching et al. (2017) conclude that errors from projection approximations of the types discussed here (Limber, flat-sky) could make up to 11% of the error budget of future surveys. That estimate, however, assumes the worst-case scenario, including the deprecated case 'ExtL1Fl'; we do not share their conclusion and think that, for example, the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.


nIFTy Cosmology: the clustering consistency of galaxy formation models

 

Authors: A. Pujol, R. A. Skibba, E. Gaztañaga et al.
Journal: MNRAS
Year: 02/2017
Download: ADS | arXiv


Abstract

We present a clustering comparison of 12 galaxy formation models (including Semi-Analytic Models (SAMs) and Halo Occupation Distribution (HOD) models) all run on halo catalogues and merger trees extracted from a single {\Lambda}CDM N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the 2-Point Correlation Functions (2PCF). We also study the implications of the different treatments of orphan (galaxies not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions but they disagree significantly between HOD and SAMs for the orphan satellites. Although there is a very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 Mpc/h. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present a higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites have an important role on galaxy clustering and they are the main cause of the differences in the clustering between HOD models and SAMs.


What determines large scale galaxy clustering: halo mass or local density?

 

Authors: A. Pujol, K. Hoffmann, N. Jiménez et al.
Journal: A&A
Year: 02/2017
Download: ADS | arXiv


Abstract

Using a dark matter simulation we show how halo bias is determined by local density and not by halo mass. This is not totally surprising as, according to the peak-background split model, local matter density (δ̄) is the property that constrains bias at large scales. Massive haloes have a high clustering because they reside in high density regions. Small haloes can be found in a wide range of environments which differentially determine their clustering amplitudes. This contradicts the assumption made by standard halo occupation distribution (HOD) models that bias and occupation of haloes are determined solely by their mass. We show that the bias of central galaxies from semi-analytic models of galaxy formation as a function of luminosity and colour is therefore not correctly predicted by the standard HOD model. Using δ̄ (of matter or galaxies) instead of halo mass, the HOD model correctly predicts galaxy bias. These results indicate the need to include information about local density and not only mass in order to correctly apply HOD analysis in these galaxy samples. This new model can be readily applied to observations and has the advantage that, in contrast with the dark matter halo mass, the galaxy density can be directly observed.