Space variant deconvolution of galaxy survey images

 

Authors: S. Farrens, J-L. Starck, F. Ngolè Mboula
Journal: A&A
Year: 2017
Download: ADS | arXiv


Abstract

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially, as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies, and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-dimensional space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space-variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples of galaxies it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.


Summary

Point Spread Function

The Point Spread Function or PSF of an imaging system (also referred to as the impulse response) describes how the system responds to a point (unextended) source. In astrophysics, stars or quasars are often used to measure the PSF of an instrument as in ideal conditions their light would occupy a single pixel on a CCD. Telescopes, however, diffract the incoming photons which limits the maximum resolution achievable. In reality, the images obtained from telescopes include aberrations from various sources such as:

  • The atmosphere (for ground-based instruments)
  • Jitter (for space-based instruments)
  • Imperfections in the optical system
  • Charge spread of the detectors

Deconvolution

In order to recover the true image properties it is necessary to remove PSF effects from observations. If the PSF is known (determining it is itself a nontrivial task), one can attempt to deconvolve the PSF from the image. In the absence of noise this is simple. We can model the observed image \mathbf{y} as follows

\mathbf{y}=\mathbf{Hx}

where \mathbf{x} is the true image and \mathbf{H} is an operator that represents the convolution with the PSF. Thus, to recover the true image, one would simply invert \mathbf{H} as follows

\mathbf{x}=\mathbf{H}^{-1}\mathbf{y}

Unfortunately, the images we observe also contain noise (e.g. from the CCD readout) and this complicates the problem.

\mathbf{y}=\mathbf{Hx} + \mathbf{n}

This problem is ill-posed as even the tiniest amount of noise will have a large impact on the result of the operation. Therefore, to obtain a stable and unique solution, it is necessary to regularise the problem by adding additional prior knowledge of the true images.
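
To make the ill-posedness concrete, here is a minimal numpy sketch (a toy one-dimensional signal and Gaussian PSF of my own choosing, not the paper's data) comparing direct Fourier inversion with and without noise: without noise the inversion is essentially exact, while a tiny amount of noise is hugely amplified by the near-zero frequencies of the PSF.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D setup: a "true" signal and a normalised Gaussian PSF (illustrative only).
x = np.zeros(128)
x[60:68] = 1.0
psf = np.exp(-0.5 * ((np.arange(128) - 64) / 2.0) ** 2)
psf /= psf.sum()

# Convolution theorem: convolution with the PSF is a product in Fourier space.
H = np.fft.fft(np.fft.ifftshift(psf))
y_clean = np.real(np.fft.ifft(np.fft.fft(x) * H))
y_noisy = y_clean + 1e-3 * rng.standard_normal(128)  # tiny Gaussian noise

# Naive inversion: divide by H in Fourier space.
x_clean = np.real(np.fft.ifft(np.fft.fft(y_clean) / H))
x_noisy = np.real(np.fft.ifft(np.fft.fft(y_noisy) / H))

print(np.max(np.abs(x_clean - x)))  # tiny: inversion works without noise
print(np.max(np.abs(x_noisy - x)))  # enormous: noise divided by near-zero |H|
```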

Sparsity

One way to regularise the problem is using sparsity. The concept is quite simple: if we know that there is a representation of \mathbf{x} that is sparse (i.e. most of the coefficients are zero), then we can force our deconvolved estimate \mathbf{\hat{x}} to be sparse in the same domain. In practice, we aim to solve a minimisation problem of the following form

\begin{aligned} & \underset{\mathbf{x}}{\text{argmin}} & \frac{1}{2}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \lambda\|\Phi(\mathbf{x})\|_1 & & \text{s.t.} & & \mathbf{x} \ge 0 \end{aligned}

where \Phi is an operator that transforms \mathbf{x} to the sparse domain (e.g. a wavelet transform) and \lambda is a regularisation control parameter.
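
As an illustration, the sketch below solves a simplified version of this problem with an ISTA-style forward-backward iteration. Two assumptions not taken from the paper: \Phi is the identity (sparsity in the direct domain rather than a wavelet domain), and forward-backward stands in for the primal-dual splitting algorithm actually used.

```python
import numpy as np

def soft_thresh(z, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_deconvolve(y, psf, lam=1e-3, n_iter=200):
    """Minimal ISTA sketch for argmin_x 0.5||y - Hx||^2 + lam||x||_1, x >= 0.

    y and psf are 2D arrays of the same shape; the PSF is assumed centred.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # convolution as a Fourier product
    Hconj = np.conj(H)                       # adjoint of the convolution
    step = 1.0 / np.max(np.abs(H)) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros_like(y)
    for _ in range(n_iter):
        residual = np.real(np.fft.ifft2(H * np.fft.fft2(x))) - y
        grad = np.real(np.fft.ifft2(Hconj * np.fft.fft2(residual)))
        x = soft_thresh(x - step * grad, step * lam)
        x = np.maximum(x, 0.0)               # positivity constraint
    return x
```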

Low-Rank Approximation

Another way to regularise the problem is to assume that all of the images one aims to deconvolve lie on an underlying low-rank manifold. In other words, if we have a sample of galaxy images we wish to deconvolve, we can construct a matrix \mathbf{X} in which each column is a vector of galaxy pixel coefficients. If many of these galaxies have similar properties, then \mathbf{X} will have a smaller rank than if the images were all very different. We can use this knowledge to regularise the deconvolution problem in the following way

\begin{aligned} & \underset{\mathbf{X}}{\text{argmin}} & \frac{1}{2}\|\mathbf{Y}-\mathcal{H}(\mathbf{X})\|_2^2 + \lambda\|\mathbf{X}\|_* & & \text{s.t.} & & \mathbf{X} \ge 0 \end{aligned}
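
Here \|\mathbf{X}\|_* is the nuclear norm, the sum of the singular values of \mathbf{X}, which acts as a convex surrogate for the rank. Its proximal operator soft-thresholds the singular values, so a minimal sketch of the resulting iteration looks like the following. This is a hedged illustration: the H/Ht operator interface is hypothetical, a forward-backward loop again stands in for the paper's primal-dual algorithm, and the positivity projection applied after the thresholding approximates the exact joint proximal step.

```python
import numpy as np

def svt(Z, t):
    """Singular value thresholding: the proximal operator of t * ||.||_*,
    i.e. soft-thresholding applied to the singular values of Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def lowr_deconvolve(Y, H, Ht, lam=1.0, step=1.0, n_iter=100):
    """Forward-backward sketch for
    argmin_X 0.5||Y - H(X)||^2 + lam||X||_*  s.t.  X >= 0.

    Y      : (n_pix, n_gal) matrix, one vectorised galaxy per column.
    H, Ht  : caller-supplied functions applying the space-variant
             convolution operator and its adjoint to the full stack.
    step   : should satisfy step <= 1 / ||H||^2 (squared spectral norm).
    """
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        X = svt(X - step * Ht(H(X) - Y), step * lam)
        X = np.maximum(X, 0.0)  # positivity (projection applied after SVT)
    return X
```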

Results

In the paper I implement both of these regularisation techniques and compare how well they perform at deconvolving a sample of 10 000 Euclid-like galaxy images. The results show that, for the data used, sparsity does a better job at recovering the image pixels, while the low-rank approximation does a better job at recovering the galaxy shapes (provided enough galaxies are used).


Code

SF_DECONVOLVE is a Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs, one for each galaxy position.
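
A typical invocation might look like the following. The script name and option names here are assumptions based on my reading of the public repository and may differ between versions, so please check the repository README for the current interface.

```bash
# Hypothetical example call (verify option names against the README):
# deconvolve a stack of noisy galaxy images with a stack of per-galaxy
# PSFs using the low-rank regularisation mode.
python sf_deconvolve.py -i noisy_galaxies.npy -p psfs.npy -o results --mode lowr
```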

 


Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey

 

Authors: A. Peel, C.-A. Lin, F. Lanusse, A. Leonard, J.-L. Starck, M. Kilbinger
Journal: A&A
Year: 2017
Download: ADS | arXiv

 


Abstract

Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w0. In particular, we study how the Camelus model, a fast stochastic algorithm for predicting peaks, can be applied to such large surveys. We measure the peak count abundance in a mock shear catalogue of ~5,000 sq. deg. using a multiscale mass map filtering technique. We then constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation (ABC). We find that peak statistics yield a tight but significantly biased constraint in the σ8–Ωm plane, indicating the need to better understand and control the model's systematics. We calibrate the model to remove the bias and compare results to those from the two-point correlation functions (2PCF) measured on the same field. In this case, we find the derived parameter Σ8 = σ8(Ωm/0.27)^α = 0.76 (+0.02, −0.03) with α = 0.65 for peaks, while for the 2PCF the value is Σ8 = 0.76 (+0.02, −0.01) with α = 0.70. We therefore see comparable constraining power between the two probes, and the offset of their σ8–Ωm degeneracy directions suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w0 cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and 2PCF.

Constraint matrix factorization for space variant PSFs field restoration

 

Authors: F. Ngolè Mboula, J-L. Starck, K. Okumura, J. Amiaux, P. Hudelot
Journal: IOP Inverse Problems
Year: 2016
Download: ADS | arXiv

 


Abstract

Context: in large-scale spatial surveys, the Point Spread Function (PSF) varies across the instrument field of view (FOV). Local measurements of the PSF are given by images of isolated stars. Yet, these estimates may not be directly usable for post-processing because of observational noise and potentially aliasing. Aims: given a set of aliased and noisy star images from a telescope, we want to estimate well-resolved and noise-free PSFs at the observed star positions, in particular by exploiting the spatial correlation of the PSFs across the FOV. Contributions: we introduce RCA (Resolved Components Analysis), a noise-robust dimension reduction and super-resolution method based on matrix factorization. We propose an original way of using the spatial correlation of the PSFs in the restoration process through sparsity. The introduced formalism can be applied to data sets correlated with respect to any Euclidean parametric space. Results: we tested our method on simulated monochromatic PSFs of the Euclid telescope (launch planned for 2020). The proposed method outperforms existing PSF restoration and dimension reduction methods. We show that a coupled sparsity constraint on individual PSFs and their spatial distribution yields a significant improvement in both the restored PSF shapes and the PSF subspace identification in the presence of aliasing. Perspectives: RCA can be naturally extended to account for the wavelength dependency of the PSFs.

Sparsely sampling the sky: Regular vs. random sampling

 

Authors: P. Paykari, S. Pires, J.-L. Starck, A.H. Jaffe
Journal: Astronomy & Astrophysics
Year: 2015
Download: ADS | arXiv


Abstract

Weak gravitational lensing provides a unique way of mapping directly the dark matter in the Universe. The majority of lensing analyses use the two-point statistics of the cosmic shear field to constrain the cosmological model, a method that is affected by degeneracies, such as that between σ8 and Ωm, which are respectively the rms of the mass fluctuations on a scale of 8 Mpc/h and the matter density parameter, both at z = 0. However, the two-point statistics only measure the Gaussian properties of the field, and the weak lensing field is non-Gaussian. It has been shown that the estimation of non-Gaussian statistics for weak lensing data can improve the constraints on cosmological parameters. In this paper, we systematically compare a wide range of non-Gaussian estimators to determine which one provides tighter constraints on the cosmological parameters. These statistical methods include skewness, kurtosis, and the higher criticism test, in several sparse representations such as wavelets and curvelets, as well as the bispectrum, peak counting, and a newly introduced statistic called wavelet peak counting (WPC). Comparisons based on sparse representations indicate that the wavelet transform is the most sensitive to non-Gaussian cosmological structures. It also appears that the most helpful statistic for non-Gaussian characterization in weak lensing mass maps is the WPC. Finally, we show that the σ8–Ωm degeneracy could be even better broken if the WPC estimation is performed on weak lensing mass maps filtered by the wavelet method, MRLens.

CMB reconstruction from the WMAP and Planck PR2 data

 

Authors: J. Bobin, F. Sureau and J.-L. Starck
Journal: A&A
Year: 2015
Download: ADS | arXiv


Abstract

In this article, we describe a new estimate of the Cosmic Microwave Background (CMB) intensity map reconstructed by a joint analysis of the full Planck 2015 data (PR2) and the nine-year WMAP data. It provides more than a mere update of the CMB map introduced in (Bobin et al. 2014b), since it benefits from an improvement of the component separation method L-GMCA (Local-Generalized Morphological Component Analysis) that allows the efficient separation of correlated components (Bobin et al. 2015). Based on the most recent CMB data, we further confirm previous results (Bobin et al. 2014b) showing that the proposed CMB map estimate exhibits appealing characteristics for astrophysical and cosmological applications: i) it is a full-sky map that did not require any inpainting or interpolation post-processing, ii) foreground contamination is shown to be very low, even at the Galactic center, and iii) it does not exhibit any detectable trace of thermal SZ contamination. We show that its power spectrum is in good agreement with the Planck PR2 official theoretical best-fit power spectrum. Finally, following the principle of reproducible research, we provide the codes needed to reproduce the L-GMCA analysis, which makes this the only reproducible CMB map.

Curvelet analysis of asteroseismic data

 

Authors: P. Lambert, S. Pires, J. Ballot, R.A. Garcia, J.-L. Starck, S. Turck-Chièze
Journal: A&A
Year: 2006
Download: ADS | arXiv

 

Abstract

Context. The detection and identification of oscillation modes (in terms of their l, m, and successive n) is a great challenge for present and future asteroseismic space missions. "Peak tagging" is an important step in the analysis of these data, providing estimates of stellar oscillation mode parameters (i.e., frequencies and rotation rates) and enabling further studies of the stellar structure.
Aims. Our goal is to increase the signal-to-noise ratio of the asteroseismic spectra computed from the time series that are representative of MOST and CoRoT observations (30- and 150-day observations).

Methods. We apply the curvelet transform – a recent image processing technique that looks for curved patterns – to echelle diagrams built using asteroseismic power spectra. In the resulting diagram, the eigenfrequencies appear as smooth continuous ridges. To test the method, we use Monte-Carlo simulations of several sun-like stars with different combinations of rotation rates, rotation-axis inclination, and signal-to-noise ratios.

Results. The filtered diagrams enhance the contrast between the ridges of the modes and the background, allowing a better tagging of the modes and a better extraction of some stellar parameters. Monte-Carlo simulations have also shown that the region where modes can be detected is enlarged at lower and higher frequencies compared to the raw spectra. In addition, the extraction of the mean rotational splitting from modes at low frequency can be done more easily using the filtered spectra rather than the raw spectra.

Weak lensing mass reconstruction using wavelets

 

Authors: J.-L. Starck, S. Pires, Alexandre Réfrégier
Journal: A&A
Year: 2006
Download: ADS | arXiv

 

Abstract

This paper presents a new method for the reconstruction of weak lensing mass maps. It uses the multiscale entropy concept, which is based on wavelets, and the False Discovery Rate, which allows us to derive robust detection levels in wavelet space. We show that this new restoration approach outperforms several standard techniques currently used for weak shear mass reconstruction. This method can also be used to separate E and B modes in the shear field, and thus test for the presence of residual systematic effects. We concentrate on large blind cosmic shear surveys, and illustrate our results using simulated shear maps derived from N-body Lambda-CDM simulations with added noise corresponding to both ground-based and space-based observations.