News


BAOlab

 

Authors: A. Labatie, J.L. Starck, M. Lachieze-Rey
Language: IDL
Download: BAOlab.zip
Description: An IDL code for studying BAO.
Notes: Contains additional C++ routines.


BAOlab is related to the study of Baryon Acoustic Oscillations (BAO) using the 2-point correlation function. It can perform several tasks, namely BAO detection and BAO parameter constraints. The main novelty of this approach is that it provides a model-dependent covariance matrix, which can change the results both for BAO detection and for parameter constraints.
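
For orientation, the 2-point correlation function is usually estimated from data-data (DD), data-random (DR) and random-random (RR) pair counts with the Landy-Szalay estimator, xi(r) = (DD - 2 DR + RR) / RR (with normalised counts). A minimal Python sketch of this estimator (for illustration only; BAOlab itself is written in IDL and C++) is:

import numpy as np

def landy_szalay(dd, dr, rr, n_data, n_random):
    """Landy-Szalay estimator of xi(r) in one separation bin from raw pair counts."""
    DD = dd / (n_data * (n_data - 1) / 2.0)      # normalised data-data pairs
    DR = dr / (float(n_data) * n_random)         # normalised data-random pairs
    RR = rr / (n_random * (n_random - 1) / 2.0)  # normalised random-random pairs
    return (DD - 2.0 * DR + RR) / RR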

Software: BAOlab Version 1.0

  • BAOlab contains IDL and C++ routines.
  • Source code and more information are available here.

Publications

Papers related to the software:


Darth Fader

 

Authors: D. Machado, A. Leonard, J-L. Starck, and F. Abdalla
Language: IDL
Download: DFV1.0.tgz
Description: An IDL code designed for estimating galaxy redshifts from spectroscopic data using wavelets.
Notes: Requires iSAP installation


Darth Fader is a catalog-cleaning method for redshift estimation, described in:

Code

The code will soon be integrated into the next version of the iSAP software. In the meantime, it can be used by following these instructions:

The documentation is available here.

Results

Benchmark data are available here, and the following routine shows how to run the code on these benchmark data:

Running this routine, we obtain the following results (a sketch of how such statistics are typically computed is given after the list):

  • % of catastrophic failures before cleaning = 22.09
  • % of galaxies retained after cleaning = 75.80
  • % of catastrophic failures after cleaning = 4.29
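
For reference, statistics of this kind are obtained by comparing estimated and true redshifts, with a catastrophic failure usually defined as |z_est - z_true| / (1 + z_true) exceeding some threshold. The following is a minimal sketch in which the threshold value and the variable names are illustrative assumptions, not taken from the Darth Fader code:

import numpy as np

def cleaning_stats(z_est, z_true, keep, threshold=0.05):
    """Catastrophic-failure and retention percentages before/after cleaning.
    keep is a boolean mask of galaxies retained by the cleaning step."""
    failure = np.abs(z_est - z_true) / (1.0 + z_true) > threshold
    before = 100.0 * failure.mean()        # % catastrophic failures before cleaning
    retained = 100.0 * keep.mean()         # % galaxies retained after cleaning
    after = 100.0 * failure[keep].mean()   # % catastrophic failures after cleaning
    return before, retained, after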

Contact information

For any information related to the code, please contact Adrienne Leonard (adrienne.leonard AT cea DOT fr).


École Euclid de cosmologie 2017

Date: June 27 - July 8 2017

Venue: Fréjus, France

Website: http://ecole-euclid.cnrs.fr/programme-2017


Lecture "Weak gravitational lensing" (Le lentillage gravitationnel), Martin Kilbinger.

Find here links to the lecture notes, TD exercises, "tables rondes" topics, and other information.

  • Resources.
    • A great and detailed introduction to (weak) gravitational lensing is given in the 2005 Saas-Fee lecture notes by Peter Schneider. Download Part I (Introduction to lensing) and Part III (Weak lensing) from my homepage.
    • Check out Sarah Bridle's video lectures on WL from 2014.
  • TD cycle 2, Data analysis.
    1.  We will work on a rather large (150 MB) weak-lensing catalogue from the public CFHTLenS web page. During the TD I will give instructions on how to create and download this catalogue. For faster access, it will be available on the server during the school, and I will bring a few USB sticks.
      If you like, you can also download the catalogue to your laptop at home. Please have a look at the instructions (available soon).
    2. If you want to do the TD on your laptop, you'll need to download and install athena (the newest version 1.7).
  • Table ronde topic A: B-modes from varying z?
    The ftp link to download the 2nd set of log-normal simulations (04/2016) is this.
  • Lecture notes and exercise classes (preliminary versions!):
    • Part I (Cycle 1): [day 1 (1/6) | day 2 (2/6) | day 3 (3/6)]
    • Part II (Cycle 2): [day 1 (4/6) | day 2 (5/6) | day 3 (6/6)]
    • TD: [1/2 and 2/2]
    • Tables Rondes: [Topics A, B, and C]

SF_Deconvolve

 

Authors: S. Farrens
Language: Python 2.7
Download: GitHub
Description: A Python code designed for PSF deconvolution using a low-rank approximation and sparsity. The code can handle a fixed PSF for the entire field or a stack of PSFs for each galaxy position.
Notes: This code was used to produce the results presented in the paper Space variant deconvolution of galaxy survey images. Sample Euclid-like PSF data can be downloaded from here [63Mb]


Introduction

The following sections provide some details on how to run sf_deconvolve.

The lib directory contains all of the primary functions and classes used for optimisation and analysis, while the functions directory contains some additional generic functions and tools.

Dependencies

In order to run the code in this repository the following packages must be installed:

  • Python 2.7 [Tested with v 2.7.11]
  • Numpy [Tested with v 1.11.3]
  • Scipy [Tested with v 0.18.1]
  • Astropy [Tested with v 1.1.2]
  • Matplotlib [Tested with v 1.5.3]
  • Termcolor [Tested with v 1.1.0]

The current implementation of wavelet transformations additionally requires the mr_transform.cc C++ script from the Sparse2D library in the iSap package [Tested with v 3.1]. These C++ scripts will need to be compiled in order to run (see iSap Documentation for details).

The low-rank approximation method can be run purely in Python.
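
As a rough illustration of what can be done with Numpy alone, a low-rank approximation of a stack of images can be obtained by thresholding the singular values of the stacked data matrix. This sketch is for illustration only and is not the implementation used in sf_deconvolve:

import numpy as np

def low_rank_approx(stack, rank):
    """Rank-'rank' approximation of a 3D image stack of shape (n_img, ny, nx)."""
    n_img, ny, nx = stack.shape
    matrix = stack.reshape(n_img, ny * nx)               # one image per row
    u, s, v = np.linalg.svd(matrix, full_matrices=False)
    s[rank:] = 0.0                                       # keep only the leading singular values
    return (u * s).dot(v).reshape(n_img, ny, nx)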

Execution

The primary code is sf_deconvolve.py, which is designed to take an observed (i.e. with PSF effects and noise) stack of galaxy images and a known PSF, and to attempt to reconstruct the original images. The input formats are Numpy binary files (.npy) or FITS image files (.fits).

The code can be run as follows:

$ sf_deconvolve.py -i INPUT_IMAGES.npy -p PSF.npy -o OUTPUT_NAME

Here INPUT_IMAGES.npy denotes the Numpy binary file containing the stack of observed galaxy images, PSF.npy denotes the PSF corresponding to each galaxy image, and OUTPUT_NAME specifies the output path and file name.
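
For example, suitable input files can be created from Numpy arrays as follows (the array sizes below are arbitrary and only for illustration):

import numpy as np

# Stack of 100 observed 41x41 galaxy images (3D array) and matching PSFs.
images = np.random.randn(100, 41, 41)
psfs = np.random.randn(100, 41, 41)   # or a single 2D array for the fixed-PSF case

np.save('INPUT_IMAGES.npy', images)
np.save('PSF.npy', psfs)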

Alternatively, the code arguments can be stored in a configuration file (with any name), and the code can be run by providing the file name preceded by an @.

$ sf_deconvolve.py @config.ini
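
Assuming the standard argparse convention for @files (one argument token per line; the exact format accepted by sf_deconvolve may be more permissive), a config.ini reproducing the command above might contain:

-i
INPUT_IMAGES.npy
-p
PSF.npy
-o
OUTPUT_NAME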

Example

The following example can be run on the sample data provided in the example directory.

This example takes a sample of 100 galaxy images (with PSF effects and added noise) and the corresponding PSFs, and recovers the original images using low-rank approximation via Condat-Vu optimisation.

$ sf_deconvolve.py -i example_image_stack.npy -p example_psf.npy -o example_output --mode lowr

The example can also be run using the configuration file provided.

The result will be two Numpy binary files called example_output_primal.npy and example_output_dual.npy corresponding to the primal and dual variables in the splitting algorithm. The reconstructed images will be in the example_output_primal.npy file.
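
The reconstructed images can then be inspected directly with Numpy and Matplotlib, for example:

import numpy as np
import matplotlib.pyplot as plt

# Load the primal variable, i.e. the stack of deconvolved galaxy images.
recovered = np.load('example_output_primal.npy')
print(recovered.shape)   # (number of galaxies, ny, nx)

plt.imshow(recovered[0], interpolation='nearest')   # first deconvolved galaxy
plt.colorbar()
plt.show()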

Code Options

Required Arguments

-i INPUT, --input INPUT: Input data file name. File should be a Numpy binary containing a stack of noisy galaxy images with PSF effects (i.e. a 3D array).

-p PSF, --psf PSF: PSF file name. File should be a Numpy binary containing either: (a) a single PSF (i.e. a 2D array for fixed format) or (b) a stack of PSFs corresponding to each of the galaxy images (i.e. a 3D array for obj_var format).

Optional Arguments

-h, --help: Show the help message and exit.

-v, --version: Show the program's version number and exit.

-q, --quiet: Suppress verbose output for each iteration.

-o, --output: Output file name. If not specified, output files will be placed in the input file path.

--output_format: Output file format [npy or fits].

Initialisation

-k, --current_res: Current deconvolution results file name (i.e. the file containing the primal results from a previous run).

--noise_est: Initial estimate of the noise standard deviation in the observed galaxy images. If not specified this quantity is automatically calculated using the median absolute deviation of the input image(s).
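
For reference, a common way to estimate the noise standard deviation from the median absolute deviation (MAD) is sigma = 1.4826 * MAD for Gaussian noise; a minimal sketch (not necessarily identical to the estimator used internally) is:

import numpy as np

def mad_sigma(image):
    """Noise standard deviation estimated via the median absolute deviation."""
    mad = np.median(np.abs(image - np.median(image)))
    return 1.4826 * mad   # consistency factor for Gaussian noise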

Optimisation

-m, --mode {all,sparse,lowr,grad}: Option to specify the optimisation mode [all, sparse, lowr or grad]. all performs optimisation using both low-rank approximation and sparsity, sparse uses only sparsity, lowr uses only the low-rank approximation and grad uses only gradient descent. (default: lowr)

--opt_type {condat,fwbw,gfwbw}: Option to specify the optimisation method to be implemented [condat, fwbw or gfwbw]. condat implements the Condat-Vu proximal splitting method, fwbw implements Forward-Backward splitting with FISTA speed-up and gfwbw implements the generalised Forward-Backward splitting method. (default: condat)
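
For orientation, the Condat-Vu method solves problems of the form min_x F(x) + G(x) + H(Lx), with F smooth, by alternating primal and dual updates. The sketch below is a generic illustration, not the sf_deconvolve implementation; the gradient, proximity operators and linear operator are placeholders supplied by the caller:

def condat_vu(x, y, grad_f, prox_g, prox_h, L, Lt, tau, sigma, rho, n_iter):
    """Generic Condat-Vu splitting for min F(x) + G(x) + H(Lx), with F smooth.
    prox_g(v, t) and prox_h(v, t) are the proximity operators of t*G and t*H,
    L and Lt are the linear operator and its adjoint, rho relaxes the updates."""
    for _ in range(n_iter):
        x_new = prox_g(x - tau * grad_f(x) - tau * Lt(y), tau)
        v = y + sigma * L(2.0 * x_new - x)
        y_new = v - sigma * prox_h(v / sigma, 1.0 / sigma)  # prox of sigma*H^* via the Moreau identity
        x = rho * x_new + (1.0 - rho) * x                   # relaxed primal update
        y = rho * y_new + (1.0 - rho) * y                   # relaxed dual update
    return x, y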

--n_iter: Number of iterations. (default: 150)

--cost_window: Window to measure cost function (i.e. interval of iterations for which cost should be calculated). (default: 1)

--convergence: Convergence tolerance. (default: 0.0001)

--no_pos: Option to turn off positivity constraint.

--no_grad: Option to turn off gradient calculation.

Low-Rank Approximation

--lowr_thresh_factor: Low rank threshold factor. (default: 1)

--lowr_type: Type of low-rank regularisation [standard or ngole]. (default: standard)

--lowr_thresh_type: Low rank threshold type [soft or hard]. (default: hard)
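
The two threshold types correspond to the standard hard and soft thresholding rules applied to the singular values of the data matrix; as a rough illustration (not necessarily identical to the code's internal functions):

import numpy as np

def hard_threshold(s, thresh):
    """Set singular values below the threshold to zero."""
    return s * (s > thresh)

def soft_threshold(s, thresh):
    """Shrink singular values towards zero by the threshold amount."""
    return np.maximum(s - thresh, 0.0)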

Sparsity

--wavelet_type: Type of wavelet to be used (see iSap Documentation). (default: 1)

--wave_thresh_factor: Wavelet threshold factor. (default: [3.0, 3.0, 4.0])

--n_reweights: Number of reweightings. (default: 1)

Condat Algorithm

--relax: Relaxation parameter (rho_n in Condat-Vu method). (default: 0.8)

--condat_sigma: Condat proximal dual parameter. (default: 0.5)

--condat_tau: Condat proximal primal parameter. (default: 0.5)

Testing

-c, --clean_data: Clean data file name.

-r, --random_seed: Random seed. Use this option if the input data is a randomly selected subset (with known seed) of the full sample of clean data.

--kernel: Standard deviation (in pixels) of the Gaussian kernel. This option will multiply the deconvolution results by a Gaussian kernel.

--metric: Metric to average errors [median or mean]. (default: median)

Troubleshooting

If you get the following error:

ERROR: svd() got an unexpected keyword argument 'lapack_driver'

Update your Numpy and Scipy installations:

$ pip install --upgrade numpy
$ pip install --upgrade scipy