News


Space variant deconvolution of galaxy survey images

Authors: Samuel Farrens, Jean-Luc Starck, Fred Maurice Ngolè Mboula

Journal: A&A

Year: 2017

Download: ADS | arXiv


Abstract

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low dimensional space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples of galaxies it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.


Summary

Point Spread Function

The Point Spread Function or PSF of an imaging system (also referred to as the impulse response) describes how the system responds to a point (unextended) source. In astrophysics, stars or quasars are often used to measure the PSF of an instrument as in ideal conditions their light would occupy a single pixel on a CCD. Telescopes, however, diffract the incoming photons which limits the maximum resolution achievable. In reality, the images obtained from telescopes include aberrations from various sources such as:

  • The atmosphere (for ground-based instruments)
  • Jitter (for space-based instruments)
  • Imperfections in the optical system
  • Charge spread of the detectors

Deconvolution

In order to recover the true image properties it is necessary to remove PSF effects from observations. If the PSF is known (which is certainly not trivial) one can attempt to deconvolve the PSF from the image. In the absence of noise this is simple. We can model the observed image \mathbf{y} as follows

\mathbf{y}=\mathbf{Hx}

where \mathbf{x} is the true image and \mathbf{H} is an operator that represents the convolution with the PSF. Thus, to recover the true image, one would simply invert \mathbf{H} as follows

\mathbf{x}=\mathbf{H}^{-1}\mathbf{y}

Unfortunately, the images we observe also contain noise (e.g. from the CCD readout) and this complicates the problem.

\mathbf{y}=\mathbf{Hx} + \mathbf{n}

This problem is ill-posed as even the tiniest amount of noise will have a large impact on the result of the operation. Therefore, to obtain a stable and unique solution, it is necessary to regularise the problem by adding additional prior knowledge of the true images.
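
As a minimal numerical illustration (not the paper's code; the toy one-dimensional signal, PSF width and noise level are made up), the following snippet builds y = Hx + n with a Gaussian PSF and shows that dividing by H in Fourier space recovers x almost perfectly without noise but fails completely once 1% noise is added:

import numpy as np

rng = np.random.default_rng(0)
n_pix = 128

# True signal x: two point-like sources
x = np.zeros(n_pix)
x[40], x[90] = 1.0, 0.5

# Gaussian PSF h and its transfer function H; Hx is a circular convolution via FFT
t = np.arange(n_pix) - n_pix // 2
h = np.exp(-0.5 * (t / 1.5) ** 2)
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))

y_clean = np.fft.ifft(H * np.fft.fft(x)).real          # y = Hx
y = y_clean + 0.01 * rng.standard_normal(n_pix)        # y = Hx + n

# Naive deconvolution x_hat = H^{-1} y, i.e. division by H in Fourier space
x_naive_clean = np.fft.ifft(np.fft.fft(y_clean) / H).real
x_naive_noisy = np.fft.ifft(np.fft.fft(y) / H).real

print("max error, no noise:", np.abs(x_naive_clean - x).max())   # ~ machine precision
print("max error, 1% noise:", np.abs(x_naive_noisy - x).max())   # blows up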

Sparsity

One way to regularise the problem is using sparsity. The concept of sparsity is quite simple. If we know that there is a representation of \mathbf{x} that is sparse (i.e. most of the coefficients are zero) then we can force our deconvolved observation \mathbf{\hat{x}} to be sparse in the same domain. In practice we aim to solve a minimisation problem of the following form

\begin{aligned} & \underset{\mathbf{x}}{\text{argmin}} & \frac{1}{2}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \lambda\|\Phi(\mathbf{x})\|_1 & & \text{s.t.} & & \mathbf{x} \ge 0 \end{aligned}

where \Phi is a matrix that transforms \mathbf{x} to the sparse domain and \lambda is a regularisation control parameter.
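
As a simplified sketch of how such a problem can be minimised (this is not the paper's code: the paper uses a primal-dual splitting algorithm with a wavelet transform, whereas here a forward-backward/ISTA iteration is used, an orthonormal DCT stands in for \Phi, and positivity is enforced by simple clipping):

import numpy as np
from scipy.fft import dct, idct

def conv(x, H):                      # H: precomputed FFT of the PSF
    return np.fft.ifft(H * np.fft.fft(x)).real

def soft_threshold(a, thresh):
    return np.sign(a) * np.maximum(np.abs(a) - thresh, 0.0)

def sparse_deconvolve(y, H, lam=1e-3, n_iter=200):
    x = np.zeros_like(y)
    lip = np.max(np.abs(H)) ** 2     # Lipschitz constant of the data-fidelity gradient
    step = 1.0 / lip
    for _ in range(n_iter):
        grad = conv(conv(x, H) - y, np.conj(H))         # H^T (Hx - y)
        z = x - step * grad
        alpha = soft_threshold(dct(z, norm='ortho'), lam * step)
        x = np.maximum(idct(alpha, norm='ortho'), 0.0)  # positivity (simplified)
    return x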

Low-Rank Approximation

Another way to regularise the problem is to assume that all of the images one aims to deconvolve live on an underlying low-rank manifold. In other words, if we have a sample of galaxy images we wish to deconvolve then we can construct a matrix \mathbf{X} where each column is a vector of galaxy pixel coefficients. If many of these galaxies have similar properties then we know that \mathbf{X} will have a smaller rank than if the images were all very different. We can use this knowledge to regularise the deconvolution problem in the following way

\begin{aligned} & \underset{\mathbf{X}}{\text{argmin}} & \frac{1}{2}\|\mathbf{Y}-\mathcal{H}(\mathbf{X})\|_2^2 + \lambda\|\mathbf{X}\|_* & & \text{s.t.} & & \mathbf{X} \ge 0 \end{aligned}
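
Analogously, a simplified proximal-gradient sketch for the low-rank problem, in which the proximal operator of the nuclear norm is singular value thresholding. Again, this is not the paper's primal-dual implementation: a single PSF is assumed for brevity (H is its FFT and each column of X is a vectorised galaxy), and positivity is applied by clipping:

import numpy as np

def svt(X, thresh):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

def lowrank_deconvolve(Y, H, lam=1e-3, n_iter=100):
    X = np.zeros_like(Y)
    lip = np.max(np.abs(H)) ** 2
    step = 1.0 / lip
    for _ in range(n_iter):
        # Apply the (single) PSF to every column of X via FFT
        HX = np.fft.ifft(H[:, None] * np.fft.fft(X, axis=0), axis=0).real
        grad = np.fft.ifft(np.conj(H)[:, None] * np.fft.fft(HX - Y, axis=0), axis=0).real
        X = np.maximum(svt(X - step * grad, lam * step), 0.0)   # SVT then positivity
    return X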

Results

In the paper I implement both of these regularisation techniques and compare how well they perform at deconvolving a sample of 10,000 Euclid-like galaxy images. The results show that, for the data used, sparsity does a better job at recovering the image pixels, while the low-rank approximation does a better job at recovering the galaxy shapes (provided enough galaxies are used).


Code

SF_DECONVOLVE is a Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs for each galaxy position.

Download: GitHub


Friends-of-Friends Groups and Clusters in the 2SLAQ Catalogue

Authors: S. Farrens, F.B. Abdalla, E.S. Cypriano, C. Sabiu, C. Blake

Journal: MNRAS

Year: 2011

Download: ADS | arXiv

Abstract

We present a catalogue of galaxy groups and clusters selected using a friends-of-friends (FoF) algorithm with a dynamic linking length from the 2dF-SDSS LRG and QSO (2SLAQ) luminous red galaxy survey. The linking parameters for the code are chosen through an analysis of simulated 2SLAQ haloes. The resulting catalogue includes 313 clusters containing 1152 galaxies. The galaxy groups and clusters have an average velocity dispersion of ? km s⁻¹ and an average size of ? Mpc h⁻¹. Galaxies from regions of 1 deg² centred on the galaxy clusters were downloaded from the Sloan Digital Sky Survey Data Release 6. Investigating the photometric redshifts and cluster red sequence of these galaxies shows that the galaxy clusters detected with the FoF algorithm are reliable out to z ~ 0.6. We estimate masses for the clusters using their velocity dispersions. These mass estimates are shown to be consistent with 2SLAQ mock halo masses. Further analysis of the simulation haloes shows that clipping out low-richness groups with large radii improves the purity of the catalogue from 52 to 88 per cent, while retaining a completeness of 94 per cent. Finally, we test the two-point correlation function of our cluster catalogue. We find a best-fitting power-law model, ξ(r) = (r/r_0)^γ, with parameters r_0 = 24 ± 4 Mpc h⁻¹ and γ = -2.1 ± 0.2, which are in agreement with other low-redshift cluster samples and consistent with a Λ cold dark matter universe.
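
For illustration, a minimal friends-of-friends sketch with a fixed linking length (the catalogue described above uses a dynamic linking length calibrated on simulated 2SLAQ haloes, so this only conveys the linking idea; the positions and linking length below are made up):

import numpy as np
from scipy.spatial import cKDTree

def fof_groups(positions, linking_length):
    """Link galaxies closer than `linking_length`; return a group label per galaxy."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(linking_length)
    # Union-find over linked pairs
    parent = np.arange(len(positions))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(positions))])

# Example: random 3D positions in a 100 Mpc/h box, linked at 1 Mpc/h
labels = fof_groups(np.random.rand(1000, 3) * 100.0, 1.0)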


Linear and non-linear Modified Gravity forecasts with future surveys

A new paper has been put on the arXiv by new CosmoStat member Valeria Pettorino and her PhD student Santiago Casas, in collaboration with Martin Kunz (Geneva) and Matteo Martinelli (Leiden).
The authors discuss forecasts in Modified Gravity cosmologies, described by two generic functions of time and space [Planck Dark Energy and Modified Gravity 2015, Asaba et al. 2013, Bull 2015, Alonso et al. 2016]. Their amplitude is constrained in different redshift bins. The authors elaborate on the impact of non-linear scales, showing that their inclusion (via a non-linear semi-analytical prescription applied to Modified Gravity) greatly reduces the correlation among different redshift bins, even before any decorrelation procedure is applied. This can be seen in the figure below (Fig. 4 of the arXiv paper) for the case of Galaxy Clustering: the correlation matrix of the cosmological parameters (including the amplitudes of the Modified Gravity functions, binned in redshift) is much more diagonal in the non-linear case (right panel) than in the linear one (left panel).

[Fig. 4 of Casas et al. 2017: correlation matrices of the cosmological parameters for Galaxy Clustering, linear (left) vs. non-linear (right)]

A decorrelation procedure (Zero-phase Component Analysis, ZCA) is nevertheless used to extract the combinations that are best constrained by future surveys such as Euclid. In contrast to Principal Component Analysis, ZCA finds a new vector of uncorrelated variables that is as similar as possible to the original vector of variables.
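
As a rough sketch of what ZCA does (assuming it is applied to a parameter covariance matrix such as an inverse Fisher matrix; the 2x2 matrix below is made up for illustration), the transform is the inverse symmetric square root of the covariance:

import numpy as np

def zca_whitening(C):
    """ZCA transform W = C^{-1/2}, computed via the eigen-decomposition of C."""
    eigval, eigvec = np.linalg.eigh(C)
    return eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T

# Toy 2x2 covariance: the whitened covariance W C W^T is the identity,
# i.e. the transformed parameters are uncorrelated.
C = np.array([[1.0, 0.8], [0.8, 1.0]])
W = zca_whitening(C)
print(np.round(W @ C @ W.T, 10))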

The authors further consider two smooth time functions that are allowed to depart from General Relativity only at late times (late-time parameterization) or also at early times (early-time parameterization). The Fisher Matrix forecasts for standard and Modified Gravity parameters, for different surveys (Euclid, SKA1, SKA2), are shown in the plot below (extracted from Fig. 15 of the arXiv paper), in which the Galaxy Clustering and Weak Lensing probes are combined. The left panel refers to the linear analysis; the right panel includes a non-linear treatment.

[Fig. 15 of Casas et al. 2017: Fisher forecasts for Euclid, SKA1 and SKA2, combining Galaxy Clustering and Weak Lensing, linear (left) vs. non-linear (right)]


Weak-lensing projections

A new paper has been put on the arXiv and submitted to MNRAS by CosmoStat member Martin Kilbinger.
The authors discuss various methods to calculate projections for weak gravitational lensing: since lensed galaxies pick up matter inhomogeneities of the cosmic web along the line of sight as their photons propagate through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions, and can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations such as the Limber approximation, which reduce the integrals to 1D, thereby neglecting modes along the line of sight.
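
Schematically (this is a sketch of the standard expressions, not copied from the paper; q(\chi) denotes the lensing efficiency kernel, P_\delta the matter power spectrum, j_\ell spherical Bessel functions, and constant prefactors as well as the time evolution of P_\delta in the full expression are omitted), the full projection and its first-order extended Limber reduction read

C_\ell^{\text{full}} \propto \int \mathrm{d}\chi_1\, q(\chi_1) \int \mathrm{d}\chi_2\, q(\chi_2) \int \mathrm{d}k\, k^2\, P_\delta(k)\, j_\ell(k\chi_1)\, j_\ell(k\chi_2)

C_\ell^{\text{Limber}} \approx \int \mathrm{d}\chi\, \frac{q^2(\chi)}{\chi^2}\, P_\delta\!\left(\frac{\ell+1/2}{\chi},\, \chi\right)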

The authors show that these approximations are more than adequate for present surveys. Sub-percent accuracy is reached for l>20, for example as shown by the pink curve, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation means 'extended', corresponding to the improved approximation introduced by LoVerde & Afshordi (2008), first-order Limber, and hybrid, since this is a hybrid between flat-sky and spherical coordinates. This case has been used in most of the recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber flat-sky) was popular for most publications since 2014.

These approximations are sufficient for the small areas of current observations coming from CFHTLenS, KiDS, and DES, and their errors are well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg², and KiDS, 1,500 deg²).

 

[Fig. 1b of the paper: ratio of the approximate to the full projection for the different cases, compared to the cosmic variance of Euclid (15,000 deg²) and KiDS (1,500 deg²)]

The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008), and applied to weak lensing in the current paper. The best 2nd-order case 'ExtL2Sph' reaches sub-percent accuracy down to l=3, sufficient for all future surveys.

The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.

The authors then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows a particularly slow convergence to the full projection.

The results of this paper are in contrast to the recent publication by Kitching et al. (2017), version 1, who find an order-of-magnitude larger effect. Those authors have since published a revised version, whose results are in agreement with ours. Note however that they conclude that errors from projection approximations of the types we discussed here (Limber, flat sky) could account for up to 11% of the error budget of future surveys. This however assumes the worst-case scenario, including the deprecated case 'ExtL1Fl'; we do not share their conclusion, and think that, for example, the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.

 


Paper accepted : New inpainting method to handle colored-noise data to test the weak equivalence principle

The context

The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) with a precision of 10⁻¹⁵. Reaching this performance requires an accurate and robust data analysis method, especially since the possible WEP violation signal will be dominated by a strongly colored noise. An important complication comes from the fact that some values will be missing; therefore, the measured time series will not be strictly regularly sampled. Those missing values induce a spectral leakage that significantly increases the noise in Fourier space, where the WEP violation signal is searched for, thereby complicating the scientific returns.
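
As a toy illustration of the leakage effect (this is not MICROSCOPE code; the sampling rate, noise model, gap fraction and signal amplitude below are made up), masking samples in a colored-noise time series and taking the Fourier transform of the gapped series raises the apparent noise level around the frequency of interest:

import numpy as np

rng = np.random.default_rng(1)
n, dt = 2**16, 0.25                      # number of samples and sampling step [s]
t = np.arange(n) * dt
f_epv = 1.8e-4                           # Hz, frequency of the sought signal

signal = 1e-2 * np.sin(2 * np.pi * f_epv * t)
noise = np.cumsum(rng.standard_normal(n)) * 1e-3   # crude red ("colored") noise
data = signal + noise

mask = rng.random(n) > 0.05              # ~5% of samples randomly missing
gapped = np.where(mask, data, 0.0)       # gaps set to zero before the FFT

freqs = np.fft.rfftfreq(n, dt)
psd_full = np.abs(np.fft.rfft(data))**2 / n
psd_gap = np.abs(np.fft.rfft(gapped))**2 / n
# psd_gap shows power leaked from the low-frequency noise into the band around f_epv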

FIG. 1 (from Pires et al. 2016): The black curve shows the MICROSCOPE PSD estimate for a 120-orbit simulation. An example of a possible EPV signal of 3 × 10⁻¹⁵ in the inertial mode is shown by the peak at 1.8 × 10⁻⁴ Hz. The grey curve shows the spectral leakage affecting the PSD estimate when gaps are present in the data.

The results

Recently, we developed an inpainting algorithm to correct the MICROSCOPE data for missing values (red curves, Fig. 4). This code has been integrated into the official MICROSCOPE data processing and analysis pipeline because it enables a significant, model-independent measurement of an equivalence principle violation (EPV) signal in the inertial satellite configuration. In this work, we present several improvements to the method that may now allow us to reach the MICROSCOPE requirements for both the inertial and spin satellite configurations (green curves, Fig. 4).

FIG. 4 (from Pires et al. 2016): MICROSCOPE differential acceleration PSD estimates averaged over 100 simulations in the inertial mode (upper panel) and in the spin mode (lower panel). The black lines show the PSD estimated when all the data is available, the red lines show the PSD estimated from data filled with the inpainting method developed in Paper I, and the green lines show the PSD estimated from data filled with the new inpainting method (ICON) presented in this paper.

The code ICON

The code corresponding to the paper is available for download here.

Although the inpainting method presented in this paper has been optimized for the MICROSCOPE data, it remains sufficiently general to be used in the wider context of missing data in time series dominated by an unknown colored noise.
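
To give a rough idea of what such an inpainting method does, here is a generic iterative-thresholding sketch in the Fourier domain. This is not the ICON algorithm itself (which is adapted to colored noise and the MICROSCOPE sampling pattern); the threshold schedule and the choice of transform are illustrative assumptions only:

import numpy as np

def inpaint(data, mask, n_iter=100):
    """data: time series with arbitrary values in the gaps; mask: True where observed."""
    x = np.where(mask, data, 0.0)
    lam_max = np.abs(np.fft.rfft(x)).max()
    for k in range(n_iter):
        lam = lam_max * (1.0 - k / n_iter)            # decreasing threshold
        coeffs = np.fft.rfft(x)
        coeffs[np.abs(coeffs) < lam] = 0.0            # keep only significant coefficients
        x_rec = np.fft.irfft(coeffs, n=len(x))
        x = np.where(mask, data, x_rec)               # observed samples stay fixed
    return x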

References

Dealing with missing data in the MICROSCOPE space mission: An adaptation of inpainting to handle colored-noise data, S. Pires, J. Bergé, Q. Baghi, P. Touboul, G. Métris, accepted in Physical Review D, December 2016

Dealing with missing data: An inpainting application to the MICROSCOPE space mission, J. Bergé, S. Pires, Q. Baghi, P. Touboul, G. Métris, Physical Review D, 92, 11, December 2015


CFIS proposal accepted

On the day of the Brexit outcome, so disastrous for Europe and the UK, there is at least good news for the cosmological community: CFIS, the Canada-France Imaging Survey, has been accepted! This survey consists of two parts. The WIQD (Wide Image Quality Deep) part will cover 5,000 deg² of the Northern sky, observed in the r-band with the CFHT (Canada-France-Hawaii Telescope). The u-band will cover 10,000 deg² to a lower depth, and is part of LUAU (Legacy for the U-band All-sky Universe). 271 nights have been granted, and observations will start eight months from now.

CFIS will allow us to study properties of dark-matter structures, including filaments between galaxy clusters and groups, stripping of dark-matter halos of satellite galaxies in clusters, and the shapes of dark-matter halos. In addition, the laws of gravity on large scales will be tested, and modifications to Einstein's theory of general relativity will be looked for. CFIS will observe a very large number of distant, high-redshift galaxies, and will use techniques of galaxy clustering and weak gravitational lensing to achieve its goals.

In addition, CFIS will create synergies with other ongoing and planned surveys: CFIS will provide ground-based optical data for Euclid photometric redshifts. It will produce a very useful imaging data set for target selection for spectroscopic surveys such as DESI, WEAVE, and MSE. It will further provide optical data of galaxy clusters that will enhance the science outcome of the X-ray mission eROSITA.

PIs: Jean-Charles Cuillandre (CEA Saclay/France) & Alan McConnachie (Victoria/Canada).
CosmoStat participants: Martin Kilbinger, Jean-Luc Starck, Sandrine Pires.
Irfu participants: Monique Arnaud, Hervé Aussel, Olivier Boulade, Pierre-Alain Duc, David Elbaz, Christophe Magneville, Yannick Mellier, Marguerite Pierre, Anand Raichoor, Jim Rich, Vanina Ruhlmann-Kleider, Marc Sauvage, Christophe Yèche.