
PSF field learning based on Optimal Transport Distances

 

Authors: F. Ngolè Mboula, J-L. Starck
Journal: arXiv
Year: 2017
Download: ADS | arXiv

 


Abstract

Context: in astronomy, observing large fractions of the sky within a reasonable amount of time implies using large field-of-view (FOV) optical instruments, which typically have a spatially varying Point Spread Function (PSF). Depending on the scientific goals, galaxy images need to be corrected for the PSF even though no direct measurement of the PSF is available at their locations. Aims: given a set of PSFs observed at random locations, we want to estimate the PSFs at galaxy locations in order to correct shape measurements. Contributions: we propose an interpolation framework based on Sliced Optimal Transport. A non-linear dimension reduction is first performed, based on local pairwise approximated Wasserstein distances. A low-dimensional representation of the unknown PSFs is then estimated, which in turn is used to derive representations of those PSFs in the Wasserstein metric. Finally, the interpolated PSFs are calculated as approximated Wasserstein barycenters. Results: the proposed method was tested on simulated monochromatic PSFs of the Euclid space mission telescope (to be launched in 2020). It achieves remarkable accuracy in terms of pixel values and shape compared to standard methods such as Inverse Distance Weighting or Radial Basis Function interpolation.
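At the heart of the framework is the sliced Wasserstein (optimal transport) distance, which compares two images by averaging 1D Wasserstein distances between their projections onto random directions. The sketch below is only an illustration of that distance, not the paper's code: the function name, the use of scipy.stats.wasserstein_distance, and the assumption of non-negative, normalisable PSF stamps are choices made here.

import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(psf_a, psf_b, n_projections=50, seed=0):
    # Approximate sliced Wasserstein distance between two non-negative images
    # treated as 2D intensity distributions (illustrative sketch only).
    rng = np.random.default_rng(seed)
    ny, nx = psf_a.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    coords = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)
    w_a = psf_a.ravel() / psf_a.sum()   # pixel intensities act as weights
    w_b = psf_b.ravel() / psf_b.sum()
    total = 0.0
    for _ in range(n_projections):
        angle = rng.uniform(0.0, np.pi)
        direction = np.array([np.cos(angle), np.sin(angle)])
        projected = coords @ direction   # project pixel positions onto the direction
        # 1D optimal transport between the two projected weighted distributions
        total += wasserstein_distance(projected, projected,
                                      u_weights=w_a, v_weights=w_b)
    return total / n_projections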


Joint Multichannel Deconvolution and Blind Source Separation

 

Authors: M. Jiang, J. Bobin, J-L. Starck
Journal: arXiv
Year: 2017
Download: ADS | arXiv

 


Abstract

Blind Source Separation (BSS) is a challenging matrix factorization problem that plays a central role in multichannel imaging science. In a large number of applications, such as astrophysics, current unmixing methods are limited since real-world mixtures are generally affected by extra instrumental effects like blurring. Therefore, BSS has to be solved jointly with a deconvolution problem, which requires tackling a new inverse problem: deconvolution BSS (DBSS). In this article, we introduce an innovative DBSS approach, called DecGMCA, based on sparse signal modeling and an efficient alternating projected least-squares algorithm. Numerical results demonstrate that the DecGMCA algorithm performs very well on simulations. They further highlight the importance of jointly solving BSS and deconvolution instead of considering these two problems independently. Furthermore, the performance of the proposed DecGMCA algorithm is demonstrated on simulated radio-interferometric data.
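To give a feel for the sparse matrix factorization that DecGMCA builds on, here is a deliberately simplified sketch of a GMCA-like alternation (least-squares updates interleaved with soft thresholding). It is not the DecGMCA algorithm itself: there is no deconvolution operator, the threshold is fixed, and sparsity is assumed directly in the sample domain; all names are illustrative.

import numpy as np

def sparse_bss_sketch(X, n_sources, n_iter=100, thresh=0.1):
    # Very simplified sparse BSS (GMCA-like alternation): X ~ A @ S with S sparse.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((X.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        # Update the sources by least squares given A, then soft-threshold (sparsity).
        S = np.linalg.pinv(A) @ X
        S = np.sign(S) * np.maximum(np.abs(S) - thresh, 0.0)
        # Update the mixing matrix by least squares given S, then renormalise columns.
        A = X @ np.linalg.pinv(S)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, S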


Space variant deconvolution of galaxy survey images

 

Authors: S. Farrens, J-L. Starck, F. Ngolè Mboula
Journal: A&A
Year: 2017
Download: ADS | arXiv


Abstract

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially, as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies, and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-dimensional space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples of galaxies it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.


Summary

Point Spread Function

The Point Spread Function or PSF of an imaging system (also referred to as the impulse response) describes how the system responds to a point (unextended) source. In astrophysics, stars or quasars are often used to measure the PSF of an instrument as in ideal conditions their light would occupy a single pixel on a CCD. Telescopes, however, diffract the incoming photons which limits the maximum resolution achievable. In reality, the images obtained from telescopes include aberrations from various sources such as:

  • The atmosphere (for ground-based instruments)
  • Jitter (for space-based instruments)
  • Imperfections in the optical system
  • Charge spread of the detectors

Deconvolution

In order to recover the true image properties it is necessary to remove PSF effects from observations. If the PSF is known (which is certainly not trivial) one can attempt to deconvolve the PSF from the image. In the absence of noise this is simple. We can model the observed image \mathbf{y} as follows

\mathbf{y}=\mathbf{Hx}

where \mathbf{x} is the true image and \mathbf{H} is an operator that represents the convolution with the PSF. Thus, to recover the true image, one would simply invert \mathbf{H} as follows

\mathbf{x}=\mathbf{H}^{-1}\mathbf{y}

Unfortunately, the images we observe also contain noise (e.g. from the CCD readout) and this complicates the problem.

\mathbf{y}=\mathbf{Hx} + \mathbf{n}

This problem is ill-posed as even the tiniest amount of noise will have a large impact on the result of the operation. Therefore, to obtain a stable and unique solution, it is necessary to regularise the problem by adding additional prior knowledge of the true images.
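As a toy illustration of this ill-posedness (not taken from the paper), the sketch below deconvolves a point source blurred by a Gaussian PSF via direct Fourier division: the noiseless reconstruction is essentially exact, while noise at the 10^-6 level already ruins it.

import numpy as np

def naive_deconvolve(y, psf):
    # Invert y = H x by direct Fourier division (no regularisation).
    H = np.fft.fft2(np.fft.ifftshift(psf), s=y.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(y) / (H + 1e-12)))

# Toy experiment: a point source blurred by a Gaussian PSF.
n = 64
yy, xx = np.mgrid[0:n, 0:n] - n // 2
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf /= psf.sum()
x_true = np.zeros((n, n))
x_true[n // 2, n // 2] = 1.0
H = np.fft.fft2(np.fft.ifftshift(psf))
y_clean = np.real(np.fft.ifft2(np.fft.fft2(x_true) * H))
y_noisy = y_clean + 1e-6 * np.random.default_rng(0).standard_normal((n, n))

print(np.abs(naive_deconvolve(y_clean, psf) - x_true).max())  # tiny error
print(np.abs(naive_deconvolve(y_noisy, psf) - x_true).max())  # error blows up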

Sparsity

One way to regularise the problem is to use sparsity. The concept of sparsity is quite simple: if we know that there is a representation of \mathbf{x} that is sparse (i.e. most of the coefficients are zero), then we can force our deconvolved estimate \mathbf{\hat{x}} to be sparse in the same domain. In practice, we aim to solve a minimisation problem of the following form

\begin{aligned} & \underset{\mathbf{x}}{\text{argmin}} & \frac{1}{2}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \lambda\|\Phi(\mathbf{x})\|_1 & & \text{s.t.} & & \mathbf{x} \ge 0 \end{aligned}

where \Phi is a matrix that transforms \mathbf{x} to the sparse domain and \lambda is a regularisation control parameter.
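A minimal numerical sketch of this minimisation is a forward-backward (ISTA-like) iteration: a gradient step on the quadratic data-fidelity term followed by soft thresholding and a positivity projection. For brevity \Phi is taken to be the identity here, whereas the paper uses a wavelet transform and a primal-dual algorithm, so this is an illustration rather than the actual implementation.

import numpy as np

def sparse_deconvolve(y, psf, lam=0.005, n_iter=200):
    # Forward-backward (ISTA-like) iteration for the sparse problem above,
    # with Phi = identity for brevity (the paper uses wavelets).
    H = np.fft.fft2(np.fft.ifftshift(psf), s=y.shape)
    step = 1.0 / np.max(np.abs(H)) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros_like(y)
    for _ in range(n_iter):
        residual = np.real(np.fft.ifft2(H * np.fft.fft2(x))) - y
        grad = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(residual)))
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)   # soft threshold
        x = np.maximum(x, 0.0)                                     # enforce x >= 0
    return x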

Low-Rank Approximation

Another way to regularise the problem is to assume that all of the images one aims to deconvolve live on an underlying low-rank manifold. In other words, if we have a sample of galaxy images we wish to deconvolve, then we can construct a matrix \mathbf{X} where each column is a vector of galaxy pixel coefficients. If many of these galaxies have similar properties, then we know that \mathbf{X} will have a smaller rank than if the images were all very different. We can use this knowledge to regularise the deconvolution problem in the following way

\begin{aligned} & \underset{\mathbf{X}}{\text{argmin}} & \frac{1}{2}\|\mathbf{Y}-\mathcal{H}(\mathbf{X})\|_2^2 + \lambda\|\mathbf{X}\|_* & & \text{s.t.} & & \mathbf{X} \ge 0 \end{aligned}
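The proximal operator of the nuclear norm \|\cdot\|_* is singular value thresholding, which is the key ingredient of any solver for this problem. The sketch below plugs it into a simplified forward-backward loop with one PSF per galaxy; the paper's SF_DECONVOLVE code instead uses a primal-dual splitting algorithm, so the names and structure here are illustrative only.

import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lowrank_deconvolve(Y, psfs, shape, lam=0.1, n_iter=100):
    # Forward-backward sketch of the low-rank problem: column i of Y is the
    # vectorised image of galaxy i, and psfs[i] is the PSF at its position.
    Hs = [np.fft.fft2(np.fft.ifftshift(p), s=shape) for p in psfs]
    step = 1.0 / max(np.max(np.abs(H)) ** 2 for H in Hs)
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = np.empty_like(X)
        for i, H in enumerate(Hs):
            res = np.real(np.fft.ifft2(H * np.fft.fft2(X[:, i].reshape(shape)))) - Y[:, i].reshape(shape)
            grad[:, i] = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(res))).ravel()
        X = svt(X - step * grad, lam * step)   # low-rank proximal step
        X = np.maximum(X, 0.0)                 # positivity constraint
    return X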

Results

In the paper I implement both of these regularisation techniques and compare how well they perform at deconvolving a sample of 10,000 Euclid-like galaxy images. The results show that, for the data used, sparsity does a better job at recovering the image pixels, while the low-rank approximation does a better job at recovering the galaxy shapes (provided enough galaxies are used).


Code

SF_DECONVOLVE is a Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs for each galaxy position.

Download: GitHub


Weak-lensing projections

Authors: M. Kilbinger, C. Heymans et al.
Journal: submitted to MNRAS
Year: 2017
Download: ADS | arXiv


Abstract

We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Lensing Survey (CFHTLenS). We find that the reported tension with Planck Cosmic Microwave Background (CMB) temperature anisotropy results cannot be alleviated, in contrast to the recent claim made by Kitching et al. (2016, version 1). For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for l > 3, with the corresponding errors an order of magnitude below cosmic variance for all l. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package nicaea at http://www.cosmostat.org/software/nicaea.


Summary

We discuss various methods to calculate projections for weak gravitational lensing: since images of lensed galaxies pick up matter inhomogeneities of the cosmic web along the line of sight as their photons propagate through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions, and can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations such as the Limber approximation, which reduce the integrals to 1D, thereby neglecting modes along the line of sight.
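For concreteness, here is a minimal sketch of the first-order extended Limber approximation (k = (l + 1/2)/chi) for the convergence power spectrum, assuming a flat universe and a single source plane; the 3D matter power spectrum and the distance-redshift relation are user-supplied placeholders (e.g. from a Boltzmann code), and this is not the nicaea implementation.

import numpy as np

def limber_convergence_cl(ells, p_delta, a_of_chi, chi_source,
                          omega_m=0.3, n_chi=1000):
    # First-order extended Limber sketch for a single source plane at comoving
    # distance chi_source (flat universe). p_delta(k, chi): 3D matter power
    # spectrum; a_of_chi(chi): scale factor; distances in Mpc/h.
    c_over_h0 = 2997.92458                      # c / H0 in Mpc/h
    chi = np.linspace(1e-3, chi_source, n_chi)  # integration grid (avoid chi = 0)
    # Lensing efficiency q(chi) for a single source plane.
    q = 1.5 * omega_m / c_over_h0**2 * chi / a_of_chi(chi) * (chi_source - chi) / chi_source
    cls = np.zeros(len(ells))
    for i, ell in enumerate(ells):
        k = (ell + 0.5) / chi                   # extended Limber: k = (l + 1/2) / chi
        cls[i] = np.trapz(q**2 / chi**2 * p_delta(k, chi), chi)
    return cls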

We show that these projections are more than adequate for present surveys. Sub-percent accuracy is reached for l>20, as shown for example by the pink curve, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation means 'extended' (the improved approximation introduced by LoVerde & Afshordi 2008), first-order Limber, and hybrid, since this is a hybrid between flat-sky and spherical coordinates. This case has been used in most of the recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber flat-sky) was popular in most publications since 2014.

These approximations are sufficient for the small areas of current observations coming from CFHTLenS, KiDS, and DES, with errors well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg², and KiDS, 1,500 deg²).

[Figure K17_Fig1b: ratio of approximate projections to the full calculation, compared to the cosmic variance of Euclid and KiDS]

The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008), and applied to weak lensing in the current paper. The best 2nd-order case 'ExtL2Sph' reaches sub-percent accuracy down to l=3, sufficient for all future surveys.

The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.
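The flat-sky relation in question is xi_+(theta) = (1/2π) ∫ dl l J_0(l theta) C_l; the paper evaluates it with a fast Hankel transform, while the sketch below uses plain quadrature, which is enough to illustrate the transformation (inputs could be, for instance, the Limber C_l sketch above).

import numpy as np
from scipy.special import j0

def xi_plus_flat_sky(theta_rad, ells, cls):
    # Flat-sky shear correlation function xi_+(theta) by direct integration of
    # the Hankel transform: xi_+ = (1 / 2 pi) * integral dl l J_0(l theta) C_l.
    return np.array([np.trapz(ells * j0(ells * t) * cls, ells)
                     for t in theta_rad]) / (2.0 * np.pi)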

We then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows a particularly slow convergence to the full projection.

Similar results have been derived in two other recent publications, Kitching et al. (2017) and Lemos, Challinor & Efstathiou (2017).
Note, however, that Kitching et al. (2017) conclude that errors from projection approximations of the types discussed here (Limber, flat sky) could make up to 11% of the error budget of future surveys. This assumes the worst-case scenario, including the deprecated case 'ExtL1Fl'. We do not share their conclusion, but think that, for example, the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.

Untitled

nIFTy Cosmology: the clustering consistency of galaxy formation models


 

Authors: A. Pujol, R. A. Skibba, E. Gaztañaga et al.
Journal: MNRAS
Year: 02/2017
Download: ADS | arXiv



Abstract

We present a clustering comparison of 12 galaxy formation models (including Semi-Analytic Models (SAMs) and Halo Occupation Distribution (HOD) models) all run on halo catalogues and merger trees extracted from a single {\Lambda}CDM N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the 2-Point Correlation Functions (2PCF). We also study the implications of the different treatments of orphan (galaxies not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions, but the HOD models and SAMs disagree significantly for the orphan satellites. Although there is a very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 Mpc/h. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present a higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites play an important role in galaxy clustering and they are the main cause of the differences in the clustering between HOD models and SAMs.


What determines large scale galaxy clustering: halo mass or local density?


 

Authors: A. Pujol, K. Hoffmann, N. Jiménez et al.
Journal: A&A
Year: 02/2017
Download: ADS | arXiv



Abstract

Using a dark matter simulation we show how halo bias is determined by local density and not by halo mass. This is not totally surprising as, according to the peak-background split model, the local matter density \bar{\delta} is the property that constrains bias at large scales. Massive haloes have a high clustering because they reside in high-density regions. Small haloes can be found in a wide range of environments, which differentially determine their clustering amplitudes. This contradicts the assumption made by standard halo occupation distribution (HOD) models that the bias and occupation of haloes are determined solely by their mass. We show that the bias of central galaxies from semi-analytic models of galaxy formation as a function of luminosity and colour is therefore not correctly predicted by the standard HOD model. Using \bar{\delta} (of matter or galaxies) instead of halo mass, the HOD model correctly predicts galaxy bias. These results indicate the need to include information about the local density, and not only the mass, in order to correctly apply HOD analyses to these galaxy samples. This new model can be readily applied to observations and has the advantage that, in contrast with the dark matter halo mass, the galaxy density can be directly observed.


Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey

 

Authors: A. Peel, C.-A. Lin, F. Lanusse, A. Leonard, J.-L. Starck, M. Kilbinger
Journal: A&A
Year: 2017
Download: ADS | arXiv

 


Abstract

Peak statistics in weak lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complement to two-point and higher-order statistics to constrain our cosmological models. To prepare for the high-precision data of next-generation surveys, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters \Omega_\mathrm{m}, \sigma_8, and w_0^\mathrm{de}. In particular, we study how the Camelus model, a fast stochastic algorithm for predicting peaks, can be applied to such large surveys. We measure the peak count abundance in a mock shear catalogue of ~5,000 sq. deg. using a multiscale mass map filtering technique. We then constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation (ABC). We find that peak statistics yield a tight but significantly biased constraint in the \sigma_8-\Omega_\mathrm{m} plane, indicating the need to better understand and control the model's systematics. We calibrate the model to remove the bias and compare results to those from the two-point correlation functions (2PCF) measured on the same field. In this case, we find the derived parameter \Sigma_8=\sigma_8(\Omega_\mathrm{m}/0.27)^\alpha=0.76_{-0.03}^{+0.02} with \alpha=0.65 for peaks, while for the 2PCF the value is \Sigma_8=0.76_{-0.01}^{+0.02} with \alpha=0.70. We therefore see comparable constraining power between the two probes, and the offset of their \sigma_8-\Omega_\mathrm{m} degeneracy directions suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w_0^\mathrm{de} cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and 2PCF.
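For illustration, a bare-bones rejection-ABC loop looks like the sketch below; the analysis in the paper relies on a more elaborate, iterative ABC scheme with Camelus as the simulator, so the simulator, prior_sampler and distance arguments here are user-supplied placeholders and not part of the published pipeline.

import numpy as np

def abc_rejection(observed_summary, simulator, prior_sampler, distance,
                  n_draws=10000, tolerance=0.1):
    # Keep parameter draws whose simulated peak-count summary lies within
    # `tolerance` of the observed one (generic rejection-ABC sketch).
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()                  # e.g. (Omega_m, sigma_8, w0)
        summary = simulator(theta)               # e.g. a peak-count histogram
        if distance(summary, observed_summary) < tolerance:
            accepted.append(theta)
    return np.array(accepted)                    # samples from the ABC posterior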


A new method to measure galaxy bias by combining the density and weak lensing fields


 

Authors: A. Pujol, C. Chang, E. Gaztañaga et al.
Journal: MNRAS
Year: 10/2016
Download: ADS | arXiv



Abstract

We present a new method to measure redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on the work of Amara et al., who use the galaxy density field to construct a bias-weighted convergence field κg. The main difference between Amara et al.'s work and our new implementation is that here we present another way to measure galaxy bias, using tomography instead of bias parametrizations. The correlation between κg and the true lensing field κ allows us to measure galaxy bias using different zero-lag correlations, such as <κgκ>/<κκ> or <κgκg>/<κgκ>. Our method measures the linear bias factor on linear scales, under the assumption of no stochasticity between galaxies and matter. We use the Marenostrum Institut de Ciències de l'Espai (MICE) simulation to measure the linear galaxy bias for a flux-limited sample (i < 22.5) in tomographic redshift bins using this method. This article is the first that studies the accuracy and systematic uncertainties associated with the implementation of the method and the regime in which it is consistent with the linear galaxy bias defined by projected two-point correlation functions (2PCF). We find that our method is consistent with a linear bias at the per cent level for scales larger than 30 arcmin, while non-linearities appear at smaller scales. This measurement is a good complement to other measurements of bias, since it does not depend strongly on σ8 as do the 2PCF measurements. We will apply this method to the Dark Energy Survey Science Verification data in a follow-up article.
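The zero-lag estimators quoted above are straightforward to write down; the sketch below computes both ratios from two overlapping maps, ignoring masks, smoothing choices and noise corrections that a real measurement must handle (function and variable names are illustrative).

import numpy as np

def zero_lag_bias(kappa_g, kappa):
    # Two zero-lag estimators of the linear galaxy bias:
    # <kappa_g kappa> / <kappa kappa> and <kappa_g kappa_g> / <kappa_g kappa>.
    # kappa_g and kappa are smoothed maps over the same footprint.
    kg = kappa_g - kappa_g.mean()
    k = kappa - kappa.mean()
    b1 = np.mean(kg * k) / np.mean(k * k)
    b2 = np.mean(kg * kg) / np.mean(kg * k)
    return b1, b2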


Blind separation of sparse sources in the presence of outliers

 

Authors: C. Chenot, J. Bobin
Journal: Signal Processing, Elsevier
Year: 2016
Download: Elsevier / Preprint

 


 

Abstract

 

Blind Source Separation (BSS) plays a key role to analyze multichannel data since it aims at recovering unknown underlying elementary sources from observed linear mixtures in an unsupervised way. In a large number of applications, multichannel measurements contain corrupted entries, which are highly detrimental for most BSS techniques. In this article, we introduce a new {\it robust} BSS technique coined robust Adaptive Morphological Component Analysis (rAMCA). Based on sparse signal modeling, it makes profit of an alternate reweighting minimization technique that yields a robust estimation of the sources and the mixing matrix simultaneously with the removal of the spurious outliers. Numerical experiments are provided that illustrate the robustness of this new algorithm with respect to aberrant outliers on a wide range of blind separation instances. In contrast to current robust BSS methods, the rAMCA algorithm is shown to perform very well when the number of observations is close or equal to the number of sources.