gal_deconv

Space variant deconvolution of galaxy survey images

Authors: Samuel Farrens, Jean-Luc Starck, Fred Maurice Ngolè Mboula

Journal: A&A

Year: 2017

Download: ADS | arXiv


Abstract

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-rank dimensional space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples of galaxies it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.


Summary

Point Spread Function

The Point Spread Function or PSF of an imaging system (also referred to as the impulse response) describes how the system responds to a point (unextended) source. In astrophysics, stars or quasars are often used to measure the PSF of an instrument, since in ideal conditions their light would occupy a single pixel on a CCD. Telescopes, however, diffract the incoming photons, which limits the maximum resolution achievable. In reality, the images obtained from telescopes include aberrations from various sources such as:

  • The atmosphere (for ground-based instruments)
  • Jitter (for space-based instruments)
  • Imperfections in the optical system
  • Charge spread of the detectors

Deconvolution

In order to recover the true image properties it is necessary to remove the PSF effects from observations. If the PSF is known (which is itself a nontrivial problem), one can attempt to deconvolve it from the image. In the absence of noise this is simple. We can model the observed image \mathbf{y} as follows

\mathbf{y}=\mathbf{Hx}

where \mathbf{x} is the true image and \mathbf{H} is an operator that represents the convolution with the PSF. Thus, to recover the true image, one would simply invert \mathbf{H} as follows

\mathbf{x}=\mathbf{H}^{-1}\mathbf{y}

Unfortunately, the images we observe also contain noise (e.g. from the CCD readout) and this complicates the problem.

\mathbf{y}=\mathbf{Hx} + \mathbf{n}

This problem is ill-posed: even the tiniest amount of noise will have a large impact on the result of the inversion. Therefore, to obtain a stable and unique solution, it is necessary to regularise the problem by adding prior knowledge of the true images.
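As a toy illustration (a minimal numpy sketch, not the paper's code; the Gaussian PSF and noise level are arbitrary choices), naive Fourier-space inversion recovers the image perfectly from noise-free data but collapses once even a small amount of noise is added:

import numpy as np

# Toy demonstration of why naive deconvolution fails: a convolution is
# diagonal in Fourier space, so applying H^{-1} amounts to a pointwise
# division by the PSF transfer function.
rng = np.random.default_rng(0)
n = 64

x = np.zeros((n, n))
x[24:40, 24:40] = 1.0                       # toy "true" image

g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()                            # Gaussian toy PSF

H = np.fft.fft2(np.fft.ifftshift(psf))      # transfer function of H
y = np.real(np.fft.ifft2(np.fft.fft2(x) * H))             # y = Hx
y_noisy = y + 1e-3 * rng.standard_normal((n, n))          # y = Hx + n

x_rec = np.real(np.fft.ifft2(np.fft.fft2(y) / H))         # exact recovery
x_bad = np.real(np.fft.ifft2(np.fft.fft2(y_noisy) / H))   # blows up

print(np.abs(x_rec - x).max())  # ~1e-12: inversion works without noise
print(np.abs(x_bad - x).max())  # enormous: noise is divided by tiny |H|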

Sparsity

One way to regularise the problem is to use sparsity. The concept of sparsity is quite simple. If we know that there is a representation of \mathbf{x} that is sparse (i.e. most of the coefficients are zeros) then we can force our deconvolved estimate \mathbf{\hat{x}} to be sparse in the same domain. In practice we aim to solve a minimisation problem of the following form

\begin{aligned} & \underset{\mathbf{x}}{\text{argmin}} & \frac{1}{2}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \lambda\|\Phi(\mathbf{x})\|_1 & & \text{s.t.} & & \mathbf{x} \ge 0 \end{aligned}

where \Phi is a matrix that transforms \mathbf{x} to the sparse domain and \lambda is a regularisation control parameter.
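In iterative solvers (e.g. the primal-dual splitting algorithm mentioned in the abstract) this l1 penalty enters through its proximal operator, which is simply soft thresholding. A minimal sketch, assuming for simplicity that the thresholding is applied directly to the coefficients \Phi(\mathbf{x}):

import numpy as np

# Soft thresholding: the proximal operator of lam * ||.||_1.
# Coefficients smaller than lam in magnitude are set to zero,
# the rest are shrunk towards zero by lam.
def soft_threshold(coeffs, lam):
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)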

Low-Rank Approximation

Another way to regularise the problem is to assume that all of the images one aims to deconvolve live on an underlying low-rank manifold. In other words, if we have a sample of galaxy images we wish to deconvolve then we can construct a matrix \mathbf{X} where each column is a vector of galaxy pixel coefficients. If many of these galaxies have similar properties then we know that \mathbf{X} will have a smaller rank than if the images were all very different. We can use this knowledge to regularise the deconvolution problem in the following way

\begin{aligned} & \underset{\mathbf{X}}{\text{argmin}} & \frac{1}{2}\|\mathbf{Y}-\mathcal{H}(\mathbf{X})\|_2^2 + \lambda\|\mathbf{X}\|_* & & \text{s.t.} & & \mathbf{X} \ge 0 \end{aligned}
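where \|\mathbf{X}\|_* is the nuclear norm, i.e. the sum of the singular values of \mathbf{X}. In iterative solvers this term enters through its proximal operator, which soft-thresholds the singular values. A minimal numpy sketch (schematic, not the paper's implementation):

import numpy as np

# Singular value thresholding: the proximal operator of lam * ||X||_*.
# Columns of X are the vectorised galaxy images.
def svd_threshold(X, lam):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)   # shrink the singular values
    return (U * s_thr) @ Vt            # low-rank reconstruction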

Results

In the paper I implement both of these regularisation techniques and compare how well they perform at deconvolving a sample of 10,000 Euclid-like galaxy images. The results show that, for the data used, sparsity does a better job at recovering the image pixels, while the low-rank approximation does a better job at recovering the galaxy shapes (provided enough galaxies are used).


Code

SF_DECONVOLVE is a Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs, one for each galaxy position.

Download: GitHub
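A hypothetical invocation might look like the following; the file names and flags here are assumptions for illustration, so check the repository README for the actual interface:

# Hypothetical call -- flag names are assumptions, see the README:
python sf_deconvolve.py -i galaxy_stack.npy -p psf_stack.npy -o deconvolved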


Friends-of-Friends Groups and Clusters in the 2SLAQ Catalogue

Authors: S. Farrens, F.B. Abdalla, E.S. Cypriano, C. Sabiu, C. Blake

Journal: MNRAS

Year: 2011

Download: ADS | arXiv

Abstract

We present a catalogue of galaxy groups and clusters selected using a friends-of-friends (FoF) algorithm with a dynamic linking length from the 2dF-SDSS LRG and QSO (2SLAQ) luminous red galaxy survey. The linking parameters for the code are chosen through an analysis of simulated 2SLAQ haloes. The resulting catalogue includes 313 clusters containing 1152 galaxies. The galaxy groups and clusters have an average velocity dispersion of ? km s^-1 and an average size of ? Mpc h^-1. Galaxies from regions of 1 deg^2 centred on the galaxy clusters were downloaded from the Sloan Digital Sky Survey Data Release 6. Investigating the photometric redshifts and cluster red sequence of these galaxies shows that the galaxy clusters detected with the FoF algorithm are reliable out to z ~ 0.6. We estimate masses for the clusters using their velocity dispersions. These mass estimates are shown to be consistent with 2SLAQ mock halo masses. Further analysis of the simulation haloes shows that clipping out low-richness groups with large radii improves the purity of the catalogue from 52 to 88 per cent, while retaining a completeness of 94 per cent. Finally, we test the two-point correlation function of our cluster catalogue. We find a best-fitting power-law model, \xi(r) = (r/r_0)^\gamma, with parameters r_0 = 24 ± 4 Mpc h^-1 and \gamma = -2.1 ± 0.2, which are in agreement with other low-redshift cluster samples and consistent with a Λ cold dark matter universe.
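For illustration, the core friends-of-friends idea can be sketched in a few lines of Python; note that this toy version uses a fixed linking length and a brute-force pair search, whereas the catalogue above relies on a dynamic linking length calibrated on simulated 2SLAQ haloes:

import numpy as np

# Toy friends-of-friends: two galaxies are "friends" if their separation is
# below the linking length; groups are the connected components of the
# resulting friendship graph, found here by a simple flood fill.
def friends_of_friends(positions, linking_length):
    n = len(positions)
    group = -np.ones(n, dtype=int)     # -1 means "not yet assigned"
    n_groups = 0
    for i in range(n):
        if group[i] >= 0:
            continue
        group[i] = n_groups
        stack = [i]
        while stack:                   # flood-fill one connected component
            j = stack.pop()
            d = np.linalg.norm(positions - positions[j], axis=1)
            friends = np.where((d < linking_length) & (group < 0))[0]
            group[friends] = n_groups
            stack.extend(friends.tolist())
        n_groups += 1
    return group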


Weak-lensing projections

A new paper has been put on the arXiv and submitted to MNRAS by CosmoStat member Martin Kilbinger.
The authors discuss various methods to calculate projections for weak gravitational lensing: since the light of lensed galaxies picks up matter inhomogeneities of the cosmic web along the line of sight as it propagates through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions, and can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations, such as the Limber approximation, that reduce the integrals to 1D, thereby neglecting modes along the line of sight.

The authors show that these approximate projections are more than adequate for present surveys. Sub-percent accuracy is reached for l > 20, for example as shown by the pink curve, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation means 'extended' (corresponding to the improved approximation introduced by LoVerde & Afshordi 2008), first-order Limber, and hybrid, since this is a hybrid between flat-sky and spherical coordinates. This case has been used in most recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber flat-sky) was popular for most publications since 2014.

These approximations are sufficient for the small areas of current observations coming from CFHTLenS, KiDS, and DES, and well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg^2, and KiDS, 1,500 deg^2).
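Schematically, a first-order Limber-type projection replaces the oscillating 3D integrals by a single integral along the line of sight. A minimal sketch, where the lensing efficiency q(chi) and the matter power spectrum P_delta(k, chi) are placeholder functions to be supplied by the user:

from scipy.integrate import quad

# Schematic first-order extended Limber projection:
# C_kappa(ell) ~ int dchi q(chi)^2 / chi^2 * P_delta((ell + 0.5) / chi, chi),
# with the (ell + 0.5) shift of LoVerde & Afshordi (2008).
def c_ell_limber(ell, q, P_delta, chi_max):
    integrand = lambda chi: q(chi)**2 / chi**2 * P_delta((ell + 0.5) / chi, chi)
    value, _ = quad(integrand, 1e-3, chi_max)
    return value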


The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008) and applied to weak lensing in the current paper. The best second-order case 'ExtL2Sph' reaches sub-percent accuracy down to l = 3, sufficient for all future surveys.

The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.

The authors then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows a particularly slow convergence to the full projection.

The results of this paper are in contrast to the recent publication by Kitching et al. (2017), version 1, who find an order of magnitude larger effect. Those authors have since published a revised version, whose results are in agreement with ours. Note however that they conclude that errors from projection approximations of the types discussed here (Limber, flat sky) could account for up to 11% of the error budget of future surveys. This, however, assumes the worst-case scenario including the deprecated case 'ExtL1Fl', and we do not share their conclusion: we think that, for example, the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.


The XXL Survey

First round of papers published

 

The XXL Survey is a deep X-ray survey observed with the XMM satellite, covering two fields of 25 deg^2 each. Observations at many other wavelengths, from radio to IR and optical, in both imaging and spectroscopy, complement the survey. The main science case is cosmology with X-ray selected galaxy clusters, but other fields such as galaxy evolution, AGN, cluster physics, and the large-scale structure are also being studied.

The main paper (Paper I), describing the survey and giving an overview of the science, is arXiv:1512.04317 (Pierre et al. 2015). Paper IV (arXiv:1512.03857, Lieu et al. 2015) presents weak-lensing mass measurements of the brightest clusters in the Northern field, using CFHTLenS shapes and photometric redshifts.

 

The mass-temperature relation for XXL and other surveys (CCCP, COSMOS), Lieu et al. (2015).

Review: Cosmology from cosmic shear observations

Martin Kilbinger, CEA Saclay, Service d'Astrophysique (SAp), France

Find on this page general information and updates for my recent review article (arXiv:1411.0155) on cosmic shear, Reports on Progress in Physics 78 (2015) 086901 (ads link for two-column format).

Fig. 7 of the review article: the quantity \Sigma = \sigma_8 \left( \Omega_{\rm m}/0.3 \right)^\alpha as a function of publication year.
Get the data in table format as pdf.
 
Updated figure!
02/2015: Added Stripe-82 and CFHTLenS peak counts
07/2015: Added DES-SV.
06/2016: Added DLS, two more CFHTLenS analyses, DES-SV peak counts, and KiDS-450.

In the video abstract of the article I talk about cosmic shear and the review for a broader audience.
Additional references, new papers
General papers, new reviews.
 
    • Another weak-lensing review has been published by my colleagues Liping Fu and Zu-Hui Fan (behind a paywall, not available on the arXiv).
    • Rachel Mandelbaum's short, pedagogical review of instrumental systematics and WL

 Sect. 2: Cosmological background

 Sect. 5: Measuring weak lensing

    • News on ensemble shape measurement methods:
      An implementation of the Bernstein & Armstrong (2014) Bayesian shape method has been published at arXiv:1403.7669. The team that participated in the GREAT3 challenge with the Bayesian inference method "MBI" published their pipeline and results paper, see arXiv:1411.2608.
    • Okura & Futamase (arXiv:1405.1539) came up with an estimator of ellipticity that uses 0th instead of 2nd-order moments!
    • arXiv:1409.6273 discusses atmospheric chromatic effects for LSST.
    • Dust in spiral galaxies as a source of shape bias, but also an astrophysical probe: arXiv:1411.6724.

Scripts

Scripts to reproduce Fig. 3 (b), derivatives of the convergence power spectrum with respect to various cosmological parameters: cs_review_scripts.tgz.


Comments and suggestions are welcome! Please write to me at martin.kilbinger@cea.fr.

Last updated 22 July 2015.


New model on peak counts: paper published

 

Fig 1 from Lin & Kilbinger (2015)

A new, probabilistic model for weak-lensing peak counts has recently been proposed by CosmoStat group members Lin and Kilbinger (arXiv:1410.6955). It is based on drawing halos from the mass function and, via ray-tracing, generating weak-lensing maps in which peaks are counted. These simulated maps can be compared directly to observations, making this a forward-modelling approach to the cluster mass function, in contrast to traditional methods using cluster probes such as X-ray, optical richness, or SZ observations.
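As a very schematic sketch of the counting step (not the authors' code), peaks in a simulated convergence map can be identified as local maxima above a threshold:

import numpy as np

# Count interior pixels that exceed a threshold and all 8 of their
# neighbours -- a simple definition of a weak-lensing peak.
def count_peaks(kappa, threshold):
    c = kappa[1:-1, 1:-1]
    is_max = np.ones(c.shape, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            shifted = kappa[1 + di:kappa.shape[0] - 1 + di,
                            1 + dj:kappa.shape[1] - 1 + dj]
            is_max &= c > shifted
    return int(np.sum(is_max & (c > threshold)))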


Fig 4 from Lin & Kilbinger (2015)

 

The model prediction is in very good agreement with N-body simulations.

The model is very flexible, and can potentially include astrophysical and observational effects such as intrinsic alignment, halo triaxiality, masking, photo-z errors, etc. Moreover, the pdf of the number of peaks can be output by the model, allowing for a very general likelihood calculation without, e.g., assuming a Gaussian distribution of the observables.
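As a minimal sketch of what such a likelihood evaluation could look like (an illustration under simplifying assumptions, not the paper's implementation), one can estimate the pdf of the peak count from many model realisations and read off the probability of the observed count:

import numpy as np

# Simulation-based likelihood of an observed peak count: estimate the pdf
# from simulated counts (with additive smoothing to avoid log(0)) and
# evaluate it at the observation -- no Gaussian assumption is needed.
def log_likelihood(n_obs, simulated_counts):
    counts = np.bincount(simulated_counts, minlength=n_obs + 1)
    pdf = (counts + 1.0) / (counts.sum() + counts.size)
    return np.log(pdf[n_obs])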

 

The paper has been accepted for publication in A&A (20/01/2015). Reference: A&A, 576, A24.

The code corresponding to the model is available for download here.


Reduced-shear power spectrum

Fitting formulae of the reduced-shear power spectrum for weak lensing

Reference

Martin Kilbinger, 2010, arXiv:1004.3493

Description

We provide fitting formulae for the reduced-shear power-spectrum correction, which is third-order in the lensing potential. This correction reaches up to 10% of the total lensing spectrum. Higher-order correction terms are one order of magnitude below the third-order term. The correction involves an integral over the matter bispectrum. We fit this integral with a combination of power-law functions and polynomials. We also fit the derivatives with respect to cosmological parameters. A Taylor expansion around a fiducial (WMAP7) model provides accurate reduced-shear corrections within a region in parameter space containing the WMAP7 68% error ellipsoid.
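Schematically, the Taylor expansion works as follows; f_fid and df_dp stand in for the fitted correction and its fitted parameter derivatives at the fiducial model (the names are placeholders, not those of the released code):

# First-order Taylor expansion of the reduced-shear correction around the
# fiducial (WMAP7) parameter values p_fid.
def correction(ell, p, p_fid, f_fid, df_dp):
    return f_fid(ell) + sum(d(ell) * (pi - p0)
                            for d, pi, p0 in zip(df_dp, p, p_fid))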

Results

Our fits are accurate to 1% for l < 10^4, and to 2% for l < 2·10^5, which reduces the bias by a factor of four compared to the case of no correction. This matches the precision of lensing power spectrum predictions from recent N-body simulations.

Ratio of power spectra uncorrected (lower lines) and corrected (upper lines) for reduced-shear.

 

Download, install, and run the code

Download an example code which includes the fitting matrices. Use 'make' to compile the code. To use the code, you have to fill in F_mn(a) (Eq. 10 from the paper), which involves the lensing efficiency, comoving distances and the redshift distribution(s).

The reduced-shear corrections are also implemented in the cosmology and lensing package 'nicaea'. This code provides all necessary functions to produce lensing observables (shear power spectrum and real-space second-order functions). The cosmology and redshift distributions are set via parameter files.

Author

Martin Kilbinger (martin.kilbinger@cea.fr)


Optimised E-/B-mode decomposition

A new cosmic shear function:
Optimised E-/B-mode decomposition on a finite interval

Reference

Liping Fu, Martin Kilbinger, 2009, arXiv:0907.0795

Description

We have introduced a new cosmic shear statistic which decomposes the shear correlation into E- and B-modes on a finite angular interval. The new function is calculated by integrating the shear two-point correlation function with a filter function. The filter function fulfills the E-/B-mode decomposition constraints given in Schneider & Kilbinger (2007).
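Numerically, the statistic amounts to a weighted integral of the measured correlation functions over the finite angular interval. A structural sketch only (exact prefactors and filter conventions are given in the paper):

import numpy as np

# Integrate the shear correlation functions xi_+ and xi_- against the
# filter functions T_+ and T_- sampled on the angular grid theta.
def eb_modes(theta, xi_p, xi_m, T_p, T_m):
    E = 0.5 * np.trapz(theta * (T_p * xi_p + T_m * xi_m), theta)
    B = 0.5 * np.trapz(theta * (T_p * xi_p - T_m * xi_m), theta)
    return E, B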

Download, install, and run the code

Download the tar file decomp.tgz. Extract the archive with
tar xzf decomp.tgz

To compile and run the code:
cd Demo
make links
make decomp_eb
decomp_eb

The package fftw3 has to be installed. If it is not in a standard directory, fftw3.h is looked for in $(FFTW)/include and libfftw3.a in $(FFTW)/lib. Change the variable `FFTW' in the Makefile accordingly. You can download fftw3 from http://www.fftw.org.

The program produces two files: Tpm, containing the filter functions T_+ and T_-, and REB, containing the shear functions R_E and R_B.

Authors

Liping Fu, Martin Kilbinger (martin.kilbinger@cea.fr)