SF_Deconvolve

Authors: S. Farrens
Language: Python 2.7
Download: GitHub
Description: A Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs for each galaxy position.
Notes: This code was used to produce the results presented in the paper Space variant deconvolution of galaxy survey images. Sample Euclid-like PSF data can be downloaded from here [63Mb].


Introduction

The following sections provide some details on how to run sf_deconvolve.

The directory lib contains all of the primary functions and classes used for optimisation and analysis. The directory functions contains some additional generic functions and tools.

Dependencies

In order to run the code in this repository the following packages must be installed:

  • Python 2.7 [Tested with v 2.7.11]
  • Numpy [Tested with v 1.11.3]
  • Scipy [Tested with v 0.18.1]
  • Astropy [Tested with v 1.1.2]
  • Matplotlib [Tested with v 1.5.3]
  • Termcolor [Tested with v 1.1.0]

The current implementation of the wavelet transformations additionally requires the mr_transform.cc C++ script from the Sparse2D library in the iSap package [Tested with v 3.1]. These C++ scripts will need to be compiled in order to run (see the iSap documentation for details).

The low-rank approximation method can be run purely in Python.

Execution

The primary code is sf_deconvolve.py, which is designed to take an observed (i.e. with PSF effects and noise) stack of galaxy images and a known PSF, and attempt to reconstruct the original images. The input formats are Numpy binary files (.npy) or FITS image files (.fits).

The code can be run as follows:

$ sf_deconvolve.py -i INPUT_IMAGES.npy -p PSF.npy -o OUTPUT_NAME

Where INPUT_IMAGES.npy denotes the Numpy binary file containing the stack of observed galaxy images, PSF.npy denotes the PSF corresponding to each galaxy image and OUTPUT_NAME specifies the output path and file name.

Alternatively, the code arguments can be stored in a configuration file (with any name) and the code can be run by providing the file name preceded by @.

$ sf_deconvolve.py @config.ini
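
For reference, the contents of such a configuration file might look as follows (a sketch assuming the standard argparse convention of one argument per line; any option listed under Code Options below can be included the same way):

-i
INPUT_IMAGES.npy
-p
PSF.npy
-o
OUTPUT_NAME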

Example

The following example can be run on the sample data provided in the example directory.

This example takes a sample of 100 galaxy images (with PSF effects and added noise) and the corresponding PSFs, and recovers the original images using low-rank approximation via Condat-Vu optimisation.

$ sf_deconvolve.py -i example_image_stack.npy -p example_psf.npy -o example_output --mode lowr

The example can also be run using the configuration file provided.

The result will be two Numpy binary files called example_output_primal.npy and example_output_dual.npy corresponding to the primal and dual variables in the splitting algorithm. The reconstructed images will be in the example_output_primal.npy file.
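
The recovered images can then be inspected directly in Python, e.g. with a minimal sketch like the following (using only Numpy and Matplotlib, which are already dependencies):

import numpy as np
import matplotlib.pyplot as plt

# Load the stack of deconvolved galaxy images (a 3D array).
x_rec = np.load('example_output_primal.npy')

# Display the first recovered galaxy image.
plt.imshow(x_rec[0], interpolation='none')
plt.colorbar()
plt.show()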

Code Options

Required Arguments

-i INPUT, --input INPUT: Input data file name. File should be a Numpy binary containing a stack of noisy galaxy images with PSF effects (i.e. a 3D array).

-p PSF, --psf PSF: PSF file name. File should be a Numpy binary containing either: (a) a single PSF (i.e. a 2D array for fixed format) or (b) a stack of PSFs corresponding to each of the galaxy images (i.e. a 3D array for obj_var format).
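
For illustration, input files with the expected shapes can be prepared as follows (a sketch with random arrays standing in for real data; only the array dimensions matter here):

import numpy as np

# Number of galaxies and postage stamp size (illustrative values).
n_gal, n_pix = 100, 41

# Stack of observed galaxy images: one n_pix x n_pix image per galaxy (3D array).
np.save('INPUT_IMAGES.npy', np.random.randn(n_gal, n_pix, n_pix))

# (a) A single fixed PSF for the whole field (2D array).
np.save('PSF_FIXED.npy', np.random.randn(n_pix, n_pix))

# (b) One PSF per galaxy position, i.e. obj_var format (3D array).
np.save('PSF.npy', np.random.randn(n_gal, n_pix, n_pix))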

Optional Arguments

-h, --help: Show the help message and exit.

-v, --version: Show the program's version number and exit.

-q, --quiet: Suppress verbose output for each iteration.

-o, --output: Output file name. If not specified, output files will be placed in the input file path.

--output_format: Output file format [npy or fits].

Initialisation

-k, --current_res: Current deconvolution results file name (i.e. the file containing the primal results from a previous run).

--noise_est: Initial estimate of the noise standard deviation in the observed galaxy images. If not specified this quantity is automatically calculated using the median absolute deviation of the input image(s).
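
The median absolute deviation (MAD) estimate mentioned above can be reproduced in a few lines (a sketch of the standard MAD-to-sigma conversion for Gaussian noise; the exact implementation in sf_deconvolve may differ):

import numpy as np

def sigma_mad(data):
    # 1.4826 converts the MAD to a standard deviation for Gaussian noise.
    return 1.4826 * np.median(np.abs(data - np.median(data)))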

Optimisation

-m, --mode {all,sparse,lowr,grad}: Option to specify the optimisation mode [all, sparse, lowr or grad]. all performs optimisation using both the low-rank approximation and sparsity, sparse uses only sparsity, lowr uses only the low-rank approximation and grad uses only gradient descent. (default: lowr)

--opt_type {condat,fwbw,gfwbw}: Option to specify the optimisation method to be implemented [condat, fwbw or gfwbw]. condat implements the Condat-Vu proximal splitting method, fwbw implements Forward-Backward splitting with FISTA speed-up and gfwbw implements the generalised Forward-Backward splitting method. (default: condat)

--n_iter: Number of iterations. (default: 150)

--cost_window: Window to measure cost function (i.e. interval of iterations for which cost should be calculated). (default: 1)

--convergence: Convergence tolerance. (default: 0.0001)

--no_pos: Option to turn off positivity constraint.

--no_grad: Option to turn off gradient calculation.

Low-Rank Approximation

--lowr_thresh_factor: Low rank threshold factor. (default: 1)

--lowr_type: Type of low-rank regularisation [standard or ngole]. (default: standard)

--lowr_thresh_type: Low rank threshold type [soft or hard]. (default: hard)

Sparsity

--wavelet_type: Type of wavelet to be used (see the iSap documentation). (default: 1)

--wave_thresh_factor: Wavelet threshold factor. (default: [3.0, 3.0, 4.0])

--n_reweights: Number of reweightings. (default: 1)

Condat Algorithm

--relax: Relaxation parameter (rho_n in Condat-Vu method). (default: 0.8)

--condat_sigma: Condat proximal dual parameter. (default: 0.5)

--condat_tau: Condat proximal primal parameter. (default: 0.5)

Testing

-c, --clean_data: Clean data file name.

-r, --random_seed: Random seed. Use this option if the input data is a randomly selected subset (with known seed) of the full sample of clean data.

--kernel: Standard deviation of pixels for Gaussian kernel. This option will multiply the deconvolution results by a Gaussian kernel.

--metric: Metric to average errors [median or mean]. (default: median)

Troubleshooting

If you get the following error:

ERROR: svd() got an unexpected keyword argument 'lapack_driver'

Update your Numpy and Scipy installations:

$ pip install --upgrade numpy
$ pip install --upgrade scipy
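
The lapack_driver keyword was only added to scipy.linalg.svd in SciPy 0.18, hence the error with older versions. A quick way to check the installed versions:

import numpy
import scipy

print(numpy.__version__)
print(scipy.__version__)  # svd() accepts lapack_driver from v 0.18 onwards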

 

Space variant deconvolution of galaxy survey images

Authors: Samuel Farrens, Jean-Luc Starck, Fred Maurice Ngolè Mboula
Journal: A&A
Year: 2017
Download: ADS | arXiv


Abstract

Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-rank dimensional space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples of galaxies it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.


Summary

Point Spread Function

The Point Spread Function or PSF of an imaging system (also referred to as the impulse response) describes how the system responds to a point (unextended) source. In astrophysics, stars or quasars are often used to measure the PSF of an instrument as in ideal conditions their light would occupy a single pixel on a CCD. Telescopes, however, diffract the incoming photons which limits the maximum resolution achievable. In reality, the images obtained from telescopes include aberrations from various sources such as:

  • The atmosphere (for ground based instruments)
  • Jitter (for space based instruments)
  • Imperfections in the optical system
  • Charge spread of the detectors

Deconvolution

In order to recover the true image properties it is necessary to remove PSF effects from observations. If the PSF is known (which is certainly not trivial) one can attempt to deconvolve the PSF from the image. In the absence of noise this is simple. We can model the observed image \mathbf{y} as follows

\mathbf{y}=\mathbf{Hx}

where \mathbf{x} is the true image and \mathbf{H} is an operator that represents the convolution with the PSF. Thus, to recover the true image, one would simply invert \mathbf{H} as follows

\mathbf{x}=\mathbf{H}^{-1}\mathbf{y}
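
In the noiseless case this inversion can be carried out in Fourier space, where convolution becomes multiplication. A toy Numpy sketch (assuming circular convolution, a PSF centred in an array of the same shape as the image, and a transfer function with no zeros):

import numpy as np

def naive_deconvolve(y, psf):
    # y and psf are 2D arrays of the same shape.
    # Transfer function of the PSF (shifted so its centre sits at the origin).
    h_hat = np.fft.fft2(np.fft.ifftshift(psf))
    # Divide out the PSF in Fourier space and return to image space.
    return np.real(np.fft.ifft2(np.fft.fft2(y) / h_hat))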

Unfortunately, the images we observe also contain noise (e.g. from the CCD readout) and this complicates the problem.

\mathbf{y}=\mathbf{Hx} + \mathbf{n}

This problem is ill-posed as even the tiniest amount of noise will have a large impact on the result of the operation. Therefore, to obtain a stable and unique solution, it is necessary to regularise the problem by adding additional prior knowledge of the true images.

Sparsity

One way to regularise the problem is using sparsity. The concept of sparsity is quite simple. If we know that there is a representation of \mathbf{x} that is sparse (i.e. most of the coefficients are zeros) then we can force our deconvolved observation \mathbf{\hat{x}} to be sparse in the same domain. In practice we aim to minimise a problem of the following form

\begin{aligned} & \underset{\mathbf{x}}{\text{argmin}} & \frac{1}{2}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \lambda\|\Phi(\mathbf{x})\|_1 & & \text{s.t.} & & \mathbf{x} \ge 0 \end{aligned}

where \Phi is a matrix that transforms \mathbf{x} to the sparse domain and \lambda is a regularisation control parameter.
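
A minimal illustration of how such a problem can be attacked is the iterative soft-thresholding (ISTA) scheme below, here with the identity as the sparse transform for brevity (a toy sketch, not the Condat-Vu algorithm used in sf_deconvolve):

import numpy as np

def soft_thresh(x, thresh):
    # Soft thresholding: the proximity operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0)

def ista(y, H, lam=0.1, n_iter=100):
    # Minimise 0.5 * ||y - Hx||_2^2 + lam * ||x||_1 subject to x >= 0.
    x = np.zeros(H.shape[1])
    step = 1.0 / np.linalg.norm(H, 2) ** 2  # inverse Lipschitz constant
    for _ in range(n_iter):
        grad = H.T.dot(H.dot(x) - y)
        x = soft_thresh(x - step * grad, step * lam)
        x = np.maximum(x, 0)  # positivity constraint
    return x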

Low-Rank Approximation

Another way to regularise the problem is to assume that all of the images one aims to deconvolve live on an underlying low-rank manifold. In other words, if we have a sample of galaxy images we wish to deconvolve then we can construct a matrix \mathbf{X} where each column is a vector of galaxy pixel coefficients. If many of these galaxies have similar properties then we know that \mathbf{X} will have a smaller rank than if the images were all very different. We can use this knowledge to regularise the deconvolution problem in the following way

\begin{aligned} & \underset{\mathbf{X}}{\text{argmin}} & \frac{1}{2}\|\mathbf{Y}-\mathcal{H}(\mathbf{X})\|_2^2 + \lambda\|\mathbf{X}\|_* & & \text{s.t.} & & \mathbf{X} \ge 0 \end{aligned}
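
The proximity operator of the nuclear norm \|\mathbf{X}\|_* appearing above is singular value thresholding, sketched below (soft thresholding of the singular values; replacing the soft threshold with a hard one corresponds to the hard option in the code):

import numpy as np

def svt(X, thresh):
    # Singular value thresholding: the proximity operator of the nuclear norm.
    # Each column of X is a vectorised galaxy image.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U.dot(np.diag(np.maximum(s - thresh, 0))).dot(Vt)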

Results

In the paper I implement both of these regularisation techniques and compare how well they perform at deconvolving a sample of 10,000 Euclid-like galaxy images. The results show that, for the data used, sparsity does a better job at recovering the image pixels, while the low-rank approximation does a better job at recovering the galaxy shapes (provided enough galaxies are used).


Code

SF_DECONVOLVE is a Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs for each galaxy position.

Download: GitHub


Linear and non-linear Modified Gravity forecasts with future surveys

A new paper has been put on the arXiv by new CosmoStat member Valeria Pettorino and her PhD student Santiago Casas, in collaboration with Martin Kunz (Geneva) and Matteo Martinelli (Leiden).
The authors discuss forecasts in Modified Gravity cosmologies, described by two generic functions of time and space [Planck Dark Energy and Modified Gravity 2015, Asaba et al 2013, Bull 2015, Alonso et al 2016]. Their amplitude is constrained in different redshift bins. The authors elaborate on the impact of non-linear scales, showing that including them (via a non-linear semi-analytical prescription applied to Modified Gravity) greatly reduces the correlation among different redshift bins, even before any decorrelation procedure is applied. This is seen visually in the figure below (Fig. 4 of the arXiv paper), for the case of Galaxy Clustering: the correlation matrix of the cosmological parameters (including the amplitudes of the Modified Gravity functions, binned in redshift) is much more diagonal in the non-linear case (right panel) than in the linear one (left panel).

[Figure: correlation matrices for Galaxy Clustering, linear (left) and non-linear (right); Fig. 4 of Casas et al. 2017]

A decorrelation procedure (Zero-phase Component Analysis, ZCA) is nevertheless used to extract the combinations that are best constrained by future surveys such as Euclid. With respect to Principal Component Analysis, ZCA finds a new vector of uncorrelated variables that is as similar as possible to the original vector of variables.
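
Schematically, ZCA whitens the data with the transformation C^{-1/2}, where C is the covariance of the variables; unlike PCA, this keeps the decorrelated variables as close as possible to the originals. A generic Numpy sketch (not the analysis code used in the paper):

import numpy as np

def zca_whiten(samples):
    # samples has shape (n_samples, n_vars); returns decorrelated variables.
    X = samples - samples.mean(axis=0)
    C = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(C)  # C = E diag(evals) E^T
    # W = C^{-1/2}; the small epsilon guards against vanishing eigenvalues.
    W = evecs.dot(np.diag(1.0 / np.sqrt(evals + 1e-12))).dot(evecs.T)
    return X.dot(W)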

The authors further consider two smooth time functions that are allowed to depart from General Relativity only at late times (late-time parameterization) or also at early times (early-time parameterization). The Fisher matrix forecasts for standard and Modified Gravity parameters, for different surveys (Euclid, SKA1, SKA2), are shown in the plot below (extracted from Fig. 15 of the arXiv paper), in which the Galaxy Clustering and Weak Lensing probes are combined. The left panel refers to the linear analysis; the right panel includes the non-linear treatment.

[Figures: Fisher matrix forecasts for linear (left) and non-linear (right) analyses; from Fig. 15 of Casas et al. 2017]


Weak-lensing projections

Authors: M. Kilbinger, C. Heymans et al.
Journal: submitted to MNRAS
Year: 2017
Download: ADS | arXiv


Abstract

We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Lensing Survey (CFHTLenS). We find that the reported tension with Planck Cosmic Microwave Background (CMB) temperature anisotropy results cannot be alleviated, in contrast to the recent claim made by Kitching et al. (2016, version 1). For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for l > 3, with the corresponding errors an order of magnitude below cosmic variance for all l. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package nicaea at http://www.cosmostat.org/software/nicaea.


Summary

We discuss various methods to calculate projections for weak gravitational lensing: since the light from lensed galaxies picks up matter inhomogeneities of the cosmic web along the line of sight as it propagates through the Universe to the observer, these inhomogeneities have to be projected onto a 2D observable, the cumulative shear or convergence. The full projection involves three-dimensional integrals over highly oscillating Bessel functions, which can be time-consuming to compute numerically to high accuracy. Most previous work has therefore used approximations, such as the Limber approximation, that reduce the integrals to 1D, thereby neglecting modes along the line of sight.
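
For orientation, the first-order Limber approximation replaces the full projection by a single line-of-sight integral, roughly C(\ell) \approx \int \mathrm{d}\chi \, q^2(\chi)/\chi^2 \, P\left((\ell+1/2)/\chi\right). A toy numerical sketch (with a made-up lensing efficiency and power spectrum, purely to show the structure of the computation):

import numpy as np

def limber_cl(ell, chi_max=3000.0, n_chi=1024):
    # First-order Limber projection with toy ingredients (chi in Mpc).
    chi = np.linspace(1.0, chi_max, n_chi)
    q = chi / chi_max * (1.0 - chi / chi_max)  # made-up lensing efficiency
    k = (ell + 0.5) / chi                      # Limber wavenumber
    p_k = 1.0 / (1.0 + (k / 0.02) ** 3)        # made-up power spectrum
    return np.trapz(q ** 2 / chi ** 2 * p_k, chi)

cls = [limber_cl(ell) for ell in range(2, 100)]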

The authors show that these approximations are more than adequate for present surveys. Sub-percent accuracy is reached for l > 20, as shown for example by the pink curve, which is the ratio of the case 'ExtL1Hyb' to the full projection. The abbreviation means 'extended' (corresponding to the improved approximation introduced by LoVerde & Afshordi 2008), first-order Limber, and hybrid, since this is a hybrid between flat-sky and spherical coordinates. This case has been used in most recent publications (e.g. for KiDS), whereas the case 'L1Fl' (first-order Limber flat-sky) was popular for most publications since 2014.

These approximations are sufficient for the small areas of current observations coming from CFHTLenS, KiDS, and DES, and are well below the cosmic variance of even future surveys (the figure shows Euclid, 15,000 deg2, and KiDS, 1,500 deg2).

[Figure: ratios of approximate to full projections; Fig. 1b of Kilbinger et al. 2017]

The paper then discusses the second-order Limber approximation, introduced in a general framework by LoVerde & Afshordi (2008), and applied to weak lensing in the current paper. The best 2nd-order case 'ExtL2Sph' reaches sub-percent accuracy down to l=3, sufficient for all future surveys.

The paper also computes the shear correlation function in real space, and shows that those approximations have a very minor influence.

We then go on to re-compute the cosmological constraints obtained in Kilbinger et al. (2013), and find virtually no change when choosing different approximations. Only the deprecated case 'ExtL1Fl' makes a noticeable difference, which is however still well within the statistical error bars. This case shows particularly slow convergence to the full projection.

Similar results have been derived in two other recent publications, Kitching et al. (2017), and Lemos, Challinor & Efstathiou (2017).
Note however that Kitching et al. (2017) conclude that errors from projection approximations of the types discussed here (Limber, flat-sky) could account for up to 11% of the error budget of future surveys. This, however, assumes the worst-case scenario including the deprecated case 'ExtL1Fl'. We do not share their conclusion, but think that, for example, the projection 'ExtL2Sph' is sufficient for future surveys such as LSST and Euclid.


Paper accepted : New inpainting method to handle colored-noise data to test the weak equivalence principle

The context

The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) with a precision of 10^{-15}. Reaching this performance requires an accurate and robust data analysis method, especially since the possible WEP violation signal will be dominated by a strongly colored noise. An important complication comes from the fact that some values will be missing; therefore, the measured time series will not be strictly regularly sampled. These missing values induce a spectral leakage that significantly increases the noise in Fourier space, where the WEP violation signal is searched for, thereby complicating the scientific returns.
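
The effect of gaps can be illustrated with a toy example: masking samples of a pure sinusoid convolves its spectrum with that of the observation window, leaking power away from the signal frequency (a minimal Numpy sketch, unrelated to the actual MICROSCOPE pipeline):

import numpy as np

n, f_sig = 4096, 0.05
t = np.arange(n)
signal = np.sin(2 * np.pi * f_sig * t)

# Randomly remove 20 per cent of the samples (set them to zero).
mask = np.random.rand(n) > 0.2
gapped = signal * mask

# Periodograms of the complete and gapped time series: the power that was
# concentrated at f_sig now leaks across the whole spectrum.
psd_full = np.abs(np.fft.rfft(signal)) ** 2 / n
psd_gaps = np.abs(np.fft.rfft(gapped)) ** 2 / n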

FIG. 1 (from Pires et al. 2016): The black curve shows the MICROSCOPE PSD estimate for a 120-orbit simulation. An example of a possible EPV signal of 3 × 10^{-15} in the inertial mode is shown by the peak at 1.8 × 10^{-4} Hz. The grey curve shows the spectral leakage affecting the PSD estimate when gaps are present in the data.

The results

Recently, we developed an inpainting algorithm to correct the MICROSCOPE data for missing values (red curves, Fig. 4). This code has been integrated in the official MICROSCOPE data processing and analysis pipeline because it enables us to significantly measure an equivalence principle violation (EPV) signal in a model-independent way, in the inertial satellite configuration. In this work, we present several improvements to the method that may now allow us to reach the MICROSCOPE requirements for both the inertial and spin satellite configurations (green curves, Fig. 4).

FIG. 4 (from Pires et al. 2016): MICROSCOPE differential acceleration PSD estimates averaged over 100 simulations in the inertial mode (upper panel) and in the spin mode (lower panel). The black lines show the PSD estimated when all the data are available, the red lines show the PSD estimated from data filled with the inpainting method developed in Paper I, and the green lines show the PSD estimated from data filled with the new inpainting method (ICON) presented in this paper.

The code ICON

The code corresponding to the paper is available for download here.

Although the inpainting method presented in this paper has been optimised for the MICROSCOPE data, it remains sufficiently general to be used in the broader context of missing data in time series dominated by an unknown colored noise.

References

Dealing with missing data in the MICROSCOPE space mission: An adaptation of inpainting to handle colored-noise data, S. Pires, J. Bergé, Q. Baghi, P. Touboul, G. Métris, accepted in Physical Review D, December 2016

Dealing with missing data: An inpainting application to the MICROSCOPE space mission, J. Bergé, S. Pires, Q. Baghi, P. Touboul, G. Métris, Physical Review D, 92, 11, December 2015


CFIS proposal accepted

On the day of the Brexit outcome, so disastrous for Europe and the UK, there is at least good news for the cosmological community: CFIS, the Canada-France Imaging Survey, has been accepted! This survey consists of two parts. The WIQD (Wide Image Quality Deep) part will cover 5,000 deg2 of the Northern sky, observed in the r-band with the CFHT (Canada-France-Hawai'i Telescope). The u-band part will cover 10,000 deg2 to a lower depth, and is part of LUAU (Legacy for the U-band All-sky Universe). 271 nights have been granted, and observations will start eight months from now.

CFIS will allow us to study properties of dark-matter structures, including filaments between galaxy clusters and groups, stripping of dark-matter halos of satellite galaxies in clusters, and the shapes of dark-matter halos. In addition, the laws of gravity on large scales will be tested, and modifications to Einstein's theory of general relativity will be looked for. CFIS will observe a very large number of distant, high-redshift galaxies, and will use techniques of galaxy clustering and weak gravitational lensing to achieve its goals.

In addition, CFIS will create synergies with other ongoing and planned surveys: CFIS will provide ground-based optical data for Euclid photometric redshifts. It will produce a very useful imaging data set for target selection for spectroscopic surveys such as DESI, WEAVE, and MSE. It will further provide optical data of galaxy clusters that will enhance the science outcome of the X-ray mission eROSITA.

PIs: Jean-Charles Cuillandre (CEA Saclay/France) & Alan McConnachie (Victoria/Canada).
CosmoStat participants: Martin Kilbinger, Jean-Luc Starck, Sandrine Pires.
Irfu participants: Monique Arnaud, Hervé Aussel, Olivier Boulade, Pierre-Alain Duc, David Elbaz, Christophe Magneville, Yannick Mellier, Marguerite Pierre, Anand Raichoor, Jim Rich, Vanina Ruhlmann-Kleider, Marc Sauvage, Christophe Yèche.