ModOpt

 

Authors:  S. Farrens, Z. Ramzi, Contributors
Language: Python
Download: GitHub
Description: ModOpt is a series of Modular Optimisation tools for solving inverse problems.
Notes:

API documentation


Installation

$ pip install modopt
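As a quick check that the install works, here is a minimal sketch of a sparse-recovery setup with ModOpt's forward-backward solver. The class names and call signatures (GradBasic, SparseThreshold, ForwardBackward) follow the API documentation linked above, but treat them as assumptions and check the docs for your installed version.

import numpy as np
# Minimal ModOpt sketch; module paths and signatures assumed from the API docs.
from modopt.opt.algorithms import ForwardBackward
from modopt.opt.gradient import GradBasic
from modopt.opt.linear import Identity
from modopt.opt.proximity import SparseThreshold

y = np.random.randn(64)                        # toy observed data
op = lambda x: x                               # identity forward operator
trans_op = lambda x: x                         # its adjoint

grad = GradBasic(y, op, trans_op)              # gradient of the data fidelity
prox = SparseThreshold(Identity(), 1e-3)       # l1 proximity operator
fb = ForwardBackward(np.zeros_like(y), grad, prox, auto_iterate=False)
fb.iterate(max_iter=50)
x_rec = fb.x_final                             # recovered signal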

Contributing

If you want to contribute to ModOpt, be sure to review the contribution guidelines and follow the code of conduct.

PySAP

 

Authors:  S. Farrens, A. Grigis, L. El Gueddari, Z. Ramzi, Chaithya G. R., S. Starck, B. Sarthou, H. Cherkaoui, P. Ciuciu, J.-L. Starck
Language: Python
Download: GitHub
Description: PySAP (Python Sparse data Analysis Package) is a Python module for sparse data analysis.
Notes:

PySAP paper


Installation

The installation of PySAP has been extensively tested on Ubuntu and macOS; however, we cannot guarantee that it will work on every operating system (e.g. Windows).

If you encounter any installation issues, be sure to go through the following steps before opening a new issue:

  1. Check that all of the dependencies listed above have been installed.
  2. Read through all of the documentation provided, including the troubleshooting suggestions.
  3. Check whether your problem has already been addressed in a previous issue.

Further instructions are available here.

From PyPI

To install PySAP simply run:

$ pip install python-pysap

Depending on your Python setup you may need to provide the --user option.

$ pip install --user python-pysap
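Once the package is installed, a quick sanity check from Python is to import it and print its version. The __version__ attribute is a common packaging convention rather than something guaranteed by PySAP, so adapt as needed.

import pysap
print(pysap.__version__)   # assumes the standard __version__ attribute is exposed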

Locally

To build PySAP locally, clone the repository:

$ git clone https://github.com/CEA-COSMIC/pysap.git

and run:

$ python setup.py install

or:

$ python setup.py develop

As before, use the --user option if needed.

macOS

Help with installation on macOS is available here.

Linux

Please refer to the PyQtGraph homepage for issues regarding the installation of pyqtgraph.

Contributing

If you want to contribute to PySAP, be sure to review the contribution guidelines and follow the code of conduct.

Euclid: Reconstruction of weak-lensing mass maps for non-Gaussianity studies


Authors: S. Pires, V. Vandenbussche, V. Kansal, R. Bender, L. Blot, D. Bonino, A. Boucaud, J. Brinchmann, V. Capobianco, J. Carretero, M. Castellano, S. Cavuoti, R. Clédassou, G. Congedo, L. Conversi, L. Corcione, F. Dubath, P. Fosalba, M. Frailis, E. Franceschi, M. Fumana, F. Grupp, F. Hormuth, S. Kermiche, M. Knabenhans, R. Kohley, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, E. Maiorano, O. Marggraf, R. Massey, G. Meylan, C. Padilla, S. Paltani, F. Pasian, M. Poncet, D. Potter, F. Raison, J. Rhodes, M. Roncarelli, R. Saglia, P. Schneider, A. Secroun, S. Serrano, J. Stadel, P. Tallada Crespí, I. Tereno, R. Toledo-Moreo, Y. Wang
Journal: Astronomy and Astrophysics
Year: 2020
Download:

ADS | arXiv 

 


Abstract

Weak lensing, namely the deflection of light by matter along the line of sight, has proven to be an efficient method to constrain models of structure formation and reveal the nature of dark energy. So far, most weak lensing studies have focused on the shear field that can be measured directly from the ellipticity of background galaxies. However, within the context of forthcoming full-sky weak lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation. While carrying the same information, the lensing signal is more compressed in the convergence maps than in the shear field, simplifying otherwise computationally expensive analyses, for instance non-Gaussianity studies. However, the inversion of the non-local shear field requires accurate control of systematic effects due to holes in the data field, field borders, noise and the fact that the shear is not a direct observable (reduced shear). In this paper, we present the two mass inversion methods that are being included in the official Euclid data processing pipeline: the standard Kaiser & Squires method (KS) and a new mass inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS methodology and includes corrections for mass mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions, third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect independently and simultaneously. We show that the KS+ method reduces substantially the errors on the two-point correlation function and moments compared to the KS method. In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5 while the errors on the third- and fourth-order moments are reduced by a factor of about 2 and 10 respectively.
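The standard KS inversion mentioned above is, in its simplest flat-sky form, a direct Fourier-space filtering of the shear field. The snippet below is a textbook illustration of that step only, not the Euclid pipeline (KS or KS+) implementation.

import numpy as np

def ks_inversion(gamma1, gamma2):
    # Textbook flat-sky Kaiser & Squires inversion on a regular grid
    # (illustration only; no treatment of masks, borders or noise).
    g_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    l1, l2 = np.meshgrid(np.fft.fftfreq(gamma1.shape[0]),
                         np.fft.fftfreq(gamma1.shape[1]), indexing="ij")
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                                 # avoid division by zero at l = 0
    kernel = (l1**2 - l2**2 - 2j * l1 * l2) / l_sq   # conjugate KS kernel
    kappa_hat = kernel * g_hat
    kappa_hat[0, 0] = 0.0                            # mean convergence is unconstrained
    return np.fft.ifft2(kappa_hat).real              # E-mode convergence map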

Space test of the Equivalence Principle: first results of the MICROSCOPE mission


Authors: P. Touboul, G. Metris, M. Rodrigues, Y. André, Q. Baghi, J. Bergé, D. Boulanger, S. Bremer, R. Chhun, B. Christophe, V. Cipolla, T. Damour, P. Danto, H. Dittus, P. Fayet, B. Foulon, P.-Y. Guidotti, E. Hardy, P.-A. Huynh, C. Lämmerzahl, V. Lebat, F. Liorzou, M. List, I. Panel, S. Pires, B. Pouilloux, P. Prieur, S. Reynaud, B. Rievers, A. Robert, H. Selig, L. Serron, T. Sumner, P. Viesser
Journal: Classical and Quantum Gravity
Year: 2019
Download: ADS | arXiv


Abstract

The Weak Equivalence Principle (WEP), stating that two bodies of different compositions and/or mass fall at the same rate in a gravitational field (universality of free fall), is at the very foundation of General Relativity. The MICROSCOPE mission aims to test its validity to a precision of 10^-15, two orders of magnitude better than current on-ground tests, by using two masses of different compositions (titanium and platinum alloys) on a quasi-circular trajectory around the Earth. This is realised by measuring the accelerations inferred from the forces required to maintain the two masses exactly in the same orbit. Any significant difference between the measured accelerations, occurring at a defined frequency, would correspond to the detection of a violation of the WEP, or to the discovery of a tiny new type of force added to gravity. MICROSCOPE's first results show no hint for such a difference, expressed in terms of Eötvös parameter δ =  [-1 +/- 9(stat) +/- 9 (syst)] x 10^-15 (both 1σ uncertainties) for a titanium and platinum pair of materials. This result was obtained on a session with 120 orbital revolutions representing 7% of the current available data acquired during the whole mission. The quadratic combination of 1σ uncertainties leads to a current limit on δ of about 1.3 x 10^-14.
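For reference, the quoted limit of about 1.3 x 10^-14 is simply the quadratic combination of the statistical and systematic 1σ uncertainties:

stat, syst = 9e-15, 9e-15
print((stat**2 + syst**2) ** 0.5)   # ~1.27e-14, i.e. about 1.3e-14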

MGCNN

 

Authors: F. Lalande, A. Peel
Language: Python 3
Download: mgcnn.tar.gz
Description: A Convolutional Neural Network (CNN) architecture for classifying standard and modified gravity (MG) cosmological models based on the weak-lensing convergence maps they produce.


Introduction

This repository contains the code and data used to produce the results in A. Peel et al. (2018), arXiv:1810.11030.

The CNN is implemented in Keras using TensorFlow as the backend. Since the DUSTGRAIN-pathfinder simulations are not yet public, we are not able to include the original convergence maps obtained from the various cosmological runs. We do provide, however, the wavelet PDF datacubes derived for the four models as described in the paper: one standard LCDM and three modified gravity f(R) models.
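For orientation, the snippet below sketches a small four-class Keras/TensorFlow classifier of the kind described above. It is a generic illustration with made-up layer sizes and input shape, not the architecture used in the paper; see train_mgcnn.py for the real model.

import tensorflow as tf
from tensorflow.keras import layers

# Generic sketch of a four-class CNN (LCDM vs. three f(R) models);
# layer sizes and the input shape are placeholders, not the paper's choices.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])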

Requirements

  • Python 3
  • numpy
  • Keras with TensorFlow as the backend
  • scikit-learn

Usage

$ python3 train_mgcnn.py -n0

The three options for the noise flag "-n" are (0, 1, 2), which correspond to noise standard deviations of sigma = (0, 0.35, 0.70) added to the original convergence maps. Additional options are "-i" and "-e" for the number of training iterations and epochs, respectively.

Confusion matrices and evaluation metrics (loss function and validation accuracy) are saved as numpy arrays in the generated output/ directory after each iteration.
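To inspect those saved arrays afterwards, something like the following works; the exact file names depend on the run options, so the sketch just globs the output/ directory.

import glob
import numpy as np

# List every array saved by the training script (file names are run-dependent).
for path in sorted(glob.glob("output/*.npy")):
    arr = np.load(path)
    print(path, arr.shape)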

pyGMCALab

 

Authors: J. Bobin, J. Rapin, C. Chenot, C. Kervazo
Language: Python
Download: Python
Description: A toolbox for solving Blind Source Separation problems.
Notes:  

 


GMCALab

GMCALab is a toolbox that focuses on solving Blind Source Separation (BSS) problems from multichannel/multispectral/hyperspectral data. In essence, multichannel data provide different observations of the same physical phenomenon (e.g. at multiple wavelengths), which are modeled as a linear combination of unknown elementary components or sources:

\[\mathbf{Y} = \mathbf{A}\mathbf{S},\]

where $$\mathbf{Y}$$ is the data matrix, $$\mathbf{S}$$ is the source matrix, and $$\mathbf{A}$$ is the mixing matrix. The goal of blind source separation is to retrieve $$\mathbf{A}$$ and $$\mathbf{S}$$ from the knowledge of the data only.
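As a concrete (purely synthetic) illustration of this mixing model, one can generate a toy data matrix as follows; the sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n_sources, n_channels, n_samples = 2, 5, 1000

S = rng.laplace(size=(n_sources, n_samples))       # sparse-ish sources
A = rng.standard_normal((n_channels, n_sources))   # unknown mixing matrix
Y = A @ S                                          # observed multichannel data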

Generalized Morphological Component Analysis, a.k.a. GMCA, is a BSS method that enforces the sparsity of the sought-after sources:

\[\underset{\mathbf{A},~\mathbf{S}}{\text{argmin}}~\|\mathbf{Y}-\mathbf{A}\mathbf{S}\|_2^2+\|\mathbf{\Lambda}\odot\mathbf{S}\|_1.\]

Please check out the project's GitHub page.
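To make the objective above concrete, here is a toy alternating scheme in the spirit of GMCA: soft-threshold the source estimate, then update the mixing matrix by least squares with normalised columns. This is an illustration only, not the GMCALab or pyGMCALab implementation.

import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def toy_gmca(Y, n_sources, lam=0.1, n_iter=100, seed=0):
    # Toy alternating minimisation of ||Y - AS||_2^2 + lam * ||S||_1.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        S = soft_threshold(np.linalg.pinv(A) @ Y, lam)   # sparse source update
        A = Y @ np.linalg.pinv(S)                        # least-squares mixing update
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)
    return A, S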

It is worth noting that GMCA provides a very generic framework that has been extended to tackle different matrix factorization problems:

  • Non-negative matrix factorization with nGMCA
  • Separation of partially correlated sources with AMCA
  • The decomposition of hyperspectral data with HypGMCA (available soon)
  • The analysis of multichannel data in the presence of outliers with rAMCA at this location (updated 14/06/2016).
  • Robust BSS in transformed domains with tr-rGMCA.

We are now developing a Python-based toolbox, coined pyGMCALab, which is available at this location.