Rethinking data-driven point spread function modeling with a differentiable optical model


Authors:

Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, Pierre-Antoine Frugier

Journal: Inverse Problems
Year: 2023
DOI:  
Download: ADS | arXiv


Abstract

In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF). Specific scientific goals require a high-fidelity estimation of the PSF at target positions where no direct measurement of the PSF is provided. Even though observations of the PSF are available at some positions of the field of view (FOV), they are undersampled, noisy, and integrated in wavelength over the instrument's passband. PSF modeling represents a challenging ill-posed problem, as it requires building a model from these observations that can infer a super-resolved PSF at any wavelength and position in the FOV. Current data-driven PSF models can tackle spatial variations and super-resolution. However, they are not capable of capturing PSF chromatic variations. Our model, coined WaveDiff, proposes a paradigm shift in the data-driven modeling of the point spread function field of telescopes. We change the data-driven modeling space from the pixels to the wavefront by adding a differentiable optical forward model into the modeling framework. This change allows the transfer of a great deal of complexity from the instrumental response into the forward model. The proposed model relies on efficient automatic differentiation technology and modern stochastic first-order optimization techniques recently developed by the thriving machine-learning community. Our framework paves the way to building powerful, physically motivated models that do not require special calibration data. This paper demonstrates the WaveDiff model in a simplified setting of a space telescope. The proposed framework represents a performance breakthrough with respect to the existing state-of-the-art data-driven approach. The pixel reconstruction errors decrease six-fold at observation resolution and 44-fold for a 3x super-resolution. The ellipticity errors are reduced by at least a factor of 20, and the size error by more than a factor of 250. By only using noisy broad-band in-focus observations, we successfully capture the PSF chromatic variations due to diffraction. WaveDiff source code and examples associated with this paper are available at this link.
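To make the change of modeling space concrete, the sketch below (plain numpy, illustrative names and values only) shows the kind of Fraunhofer optical forward model that WaveDiff differentiates through: the monochromatic PSF is the squared modulus of the Fourier transform of the complex pupil function, and the broad-band PSF is an SED-weighted sum of monochromatic PSFs over the passband. In WaveDiff itself this forward pass is written with automatic differentiation so that the wavefront parameters can be fit by stochastic gradient descent; the numpy version only illustrates the forward computation.

import numpy as np

def monochromatic_psf(wavefront, pupil, wavelength):
    # Complex pupil function: aperture mask times the wavelength-dependent
    # phase of the wavefront error (optical path difference in meters).
    pupil_function = pupil * np.exp(2j * np.pi * wavefront / wavelength)
    # Fraunhofer diffraction: the far-field amplitude is the Fourier
    # transform of the pupil function; the PSF is its squared modulus.
    amplitude = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil_function)))
    psf = np.abs(amplitude) ** 2
    return psf / psf.sum()

def polychromatic_psf(wavefront, pupil, wavelengths, sed_weights):
    # Broad-band PSF: SED-weighted sum of monochromatic PSFs. The chromatic
    # variation comes from the 1/wavelength scaling of the pupil phase.
    psf = sum(w * monochromatic_psf(wavefront, pupil, lam)
              for lam, w in zip(wavelengths, sed_weights))
    return psf / psf.sum()

# Toy example: circular pupil with a ~50 nm defocus-like wavefront error.
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x ** 2 + y ** 2
pupil = (r2 <= 1.0).astype(float)
wavefront = 50e-9 * (2 * r2 - 1) * pupil
wavelengths = np.linspace(550e-9, 900e-9, 5)             # crude passband sampling
sed = np.full(wavelengths.size, 1.0 / wavelengths.size)  # flat toy SED
psf = polychromatic_psf(wavefront, pupil, wavelengths, sed)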

NC-PDNet: a Density-Compensated Unrolled Network for 2D and 3D non-Cartesian MRI Reconstruction

Deep learning has become a very promising avenue for magnetic resonance image (MRI) reconstruction. In this work, we explore the potential of unrolled networks for the non-Cartesian acquisition setting. We design the NC-PDNet, the first density-compensated unrolled network, and validate the need for its key components via an ablation study. Moreover, we conduct generalizability experiments to test our network in out-of-distribution settings, for example training on knee data and validating on brain data. The results show that the NC-PDNet outperforms the baseline models visually and quantitatively in the 2D settings, and visually in the 3D settings. In particular, in the 2D multi-coil acquisition scenario, the NC-PDNet provides up to a 1.2 dB improvement in peak signal-to-noise ratio (PSNR) over baseline networks, while also allowing a gain of at least 1 dB in PSNR in generalization settings. We provide the open-source implementation of our network, and in particular the Non-uniform Fourier Transform in TensorFlow, tested on 2D multi-coil and 3D data.
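As a schematic of the density-compensated unrolled pattern described above (this is not NC-PDNet itself, which uses trained CNN refinement blocks, multi-coil operators and a TensorFlow NUFFT), the sketch below uses a randomly undersampled Cartesian FFT as a stand-in for the non-Cartesian operator; the refine callable marks where a learned network block would sit.

import numpy as np

def unrolled_recon(y, forward, adjoint, dcomp, refine, n_iter=10, step=1.0):
    # Density-compensated adjoint as a well-scaled starting point.
    x = adjoint(dcomp * y)
    for _ in range(n_iter):
        # Data-consistency gradient step; the residual is reweighted by the
        # density compensation to correct uneven k-space coverage.
        residual = forward(x) - y
        x = x - step * adjoint(dcomp * residual)
        # Refinement step (a trained CNN in NC-PDNet; identity here).
        x = refine(x)
    return x

# Toy stand-in: a randomly undersampled Cartesian FFT instead of a NUFFT.
rng = np.random.default_rng(0)
shape = (64, 64)
mask = rng.random(shape) < 0.3

def forward(x):
    return np.fft.fft2(x)[mask]

def adjoint(k):
    grid = np.zeros(shape, dtype=complex)
    grid[mask] = k
    return np.fft.ifft2(grid)

x_true = np.zeros(shape)
x_true[24:40, 24:40] = 1.0
y = forward(x_true)
dcomp = np.ones(mask.sum())  # uniform weights for this Cartesian toy
x_rec = unrolled_recon(y, forward, adjoint, dcomp, refine=lambda x: x)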

Reference: Z. Ramzi, Chaithya G. R., J.-L. Starck and P. Ciuciu, “NC-PDNet: a Density-Compensated Unrolled Network for 2D and 3D non-Cartesian MRI Reconstruction.”


Density Compensated Unrolled Networks for Non-Cartesian MRI Reconstruction

Deep neural networks have recently been thoroughly investigated as a powerful tool for MRI reconstruction. There is a lack of research, however, regarding their use in a specific MRI setting, namely non-Cartesian acquisitions. In this work, we introduce a novel kind of deep neural network to tackle this problem: density-compensated unrolled neural networks, which rely on density compensation to correct the uneven weighting of the k-space. We assess their efficiency on the publicly available fastMRI dataset and perform a small ablation study. Our results show that the density-compensated unrolled neural networks outperform the different baselines, and that all parts of the design are needed. We also open-source our code, in particular a Non-Uniform Fast Fourier Transform for TensorFlow.
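To illustrate what density compensation corrects (a toy sketch, not the paper's code): for a hypothetical 2D radial trajectory the sample density falls off as 1/|k|, so a simple analytic compensation weights each sample by |k|, the classic ramp filter; practical pipelines often compute such weights numerically instead, e.g. with Pipe-Menon iterations.

import numpy as np

# Hypothetical 2D radial trajectory: n_spokes spokes of n_samples each,
# with k-space coordinates normalized to [-0.5, 0.5].
n_spokes, n_samples = 64, 128
angles = np.pi * np.arange(n_spokes) / n_spokes
radii = np.linspace(-0.5, 0.5, n_samples)
kx = np.outer(np.cos(angles), radii).ravel()
ky = np.outer(np.sin(angles), radii).ravel()

# Radial sampling is densest at the k-space center (density ~ 1/|k|), so a
# simple density compensation weights each sample by |k| (the ramp filter).
weights = np.sqrt(kx ** 2 + ky ** 2)
weights /= weights.sum()

# The compensated adjoint is then x0 = A^H(weights * y): each k-space sample
# is reweighted before being mapped back to the image domain.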

Reference: Z. Ramzi, J.-L. Starck and P. Ciuciu, “Density Compensated Unrolled Networks for Non-Cartesian MRI Reconstruction.”

This conference paper presents an adaptation of unrolled networks to the challenging setup of Non-Cartesian MRI Reconstruction. It also introduces the implementation of the Non-Uniform Fast Fourier Transform in TensorFlow: tfkbnufft.
It has been accepted at ISBI 2021.

shear_bias


Authors:  M. Kilbinger, A. Pujol
Language: Python
Download: GitHub
Description: shear_bias is a package that contains tools and scripts for shear bias estimation in weak gravitational lensing analyses.


Installation

Download the code from the GitHub repository:

git clone https://github.com/CosmoStat/shear_bias

This creates a shear_bias directory. Enter it and run the setup script to install the package:

cd shear_bias
python setup.py install
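
To illustrate what the package estimates (the snippet below is an illustrative sketch, not the shear_bias API): shear bias is commonly parametrized by the linear model g_obs = (1 + m) g_true + c for each ellipticity component, and the multiplicative bias m and additive bias c are recovered by a linear fit over simulations with known input shear.

import numpy as np

def estimate_bias(g_true, g_obs):
    # Least-squares fit of g_obs = (1 + m) * g_true + c:
    # m is the slope minus one, c is the intercept.
    slope, intercept = np.polyfit(g_true, g_obs, deg=1)
    return slope - 1.0, intercept

# Toy simulation with known input biases (illustrative values only).
rng = np.random.default_rng(42)
g_true = rng.uniform(-0.05, 0.05, size=1000)
g_obs = (1 + 0.01) * g_true + 2e-4 + rng.normal(0.0, 1e-3, size=1000)
m, c = estimate_bias(g_true, g_obs)
print(f"m = {m:.4f}, c = {c:.2e}")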

DecGMCA


Authors: M. Jiang
Language: Python
Download: GitHub
Description: A toolbox for solving joint multichannel Deconvolution and Blind Source Separation (DBSS)


DecGMCA

DecGMCA (Deconvolution Generalized Morphological Component Analysis) is a sparsity-based algorithm for solving the joint multichannel Deconvolution and Blind Source Separation (DBSS) problem.

For more details, please refer to the paper Joint Multichannel Deconvolution and Blind Source Separation (https://arxiv.org/abs/1703.02650).
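
To make the DBSS setting concrete, the snippet below (illustrative values only) simulates the forward model that DecGMCA inverts: each channel nu observes a convolved mixture of sources, Y[nu] = H[nu] * (A S)[nu] + N[nu], and the algorithm jointly recovers the mixing matrix A and the sparse sources S.

import numpy as np

n_chan, n_src, n_pix = 5, 2, 256
rng = np.random.default_rng(1)

A = rng.random((n_chan, n_src))        # mixing matrix
S = rng.laplace(size=(n_src, n_pix))   # sparse-ish 1D sources
mixtures = A @ S                       # noiseless mixture in each channel

# Channel-dependent Gaussian blur applied in Fourier space.
freqs = np.fft.fftfreq(n_pix)
Y = np.empty((n_chan, n_pix))
for nu in range(n_chan):
    width = 0.05 * (nu + 1)                  # blur varies with channel
    H = np.exp(-0.5 * (freqs / width) ** 2)  # Gaussian transfer function
    Y[nu] = np.fft.ifft(H * np.fft.fft(mixtures[nu])).real
Y += 0.01 * rng.normal(size=Y.shape)         # additive noise

# DecGMCA estimates A and S jointly from Y, alternating sparsity-regularized
# source updates (which account for H) with mixing-matrix updates.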

ModOpt


Authors:  S. Farrens, Z. Ramzi, Contributors
Language: Python
Download: GitHub
Description: ModOpt is a series of Modular Optimisation tools for solving inverse problems.
Notes:

API documentation


Installation

$ pip install modopt
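
As an example of the optimization pattern ModOpt modularizes, gradient operators, proximity operators and iterative algorithms, here is a plain-numpy forward-backward (ISTA) sketch for a sparse recovery problem; it deliberately uses no ModOpt class names, only the same gradient/proximity decomposition.

import numpy as np

# Sparse inverse problem: min_x 0.5 * ||y - M x||^2 + lam * ||x||_1
rng = np.random.default_rng(0)
n_meas, n_dim = 50, 100
M = rng.normal(size=(n_meas, n_dim)) / np.sqrt(n_meas)
x_true = np.zeros(n_dim)
x_true[rng.choice(n_dim, 5, replace=False)] = 1.0
y = M @ x_true + 0.01 * rng.normal(size=n_meas)

grad = lambda x: M.T @ (M @ x - y)        # gradient of the data-fidelity term
prox = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # soft threshold
step = 1.0 / np.linalg.norm(M, 2) ** 2    # 1 / Lipschitz constant of grad
lam = 0.01

x = np.zeros(n_dim)
for _ in range(200):
    x = prox(x - step * grad(x), step * lam)  # forward-backward update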

Contributing

If you want to contribute to ModOpt, be sure to review the contribution guidelines and follow the code of conduct.

PySAP


Authors:  S. Farrens, A. Grigis, L. El Gueddari, Z. Ramzi, Chaithya G. R., S. Starck, B. Sarthou, H. Cherkaoui, P. Ciuciu, J.-L. Starck
Language: Python
Download: GitHub
Description: PySAP (Python Sparse data Analysis Package) is a Python module for sparse data analysis.
Notes:

PySAP paper


Installation

The installation of PySAP has been extensively tested on Ubuntu and macOS; however, we cannot guarantee that it will work on every operating system (e.g. Windows).

If you encounter any installation issues be sure to go through the following steps before opening a new issue:

  1. Check that all of the dependencies listed above have been installed.
  2. Read through all of the documentation provided, including the troubleshooting suggestions.
  3. Check if your problem has already been addressed in a previous issue.

Further instructions are available here.

From PyPi

To install PySAP simply run:

$ pip install python-pysap

Depending on your Python setup you may need to provide the --user option.

$ pip install --user python-pysap

Locally

To build PySAP locally, clone the repository:

$ git clone https://github.com/CEA-COSMIC/pysap.git

and run:

$ python setup.py install

or:

$ python setup.py develop

As before, use the --user option if needed.

macOS

Help with installation on macOS is available here.

Linux

Please refer to the PyQtGraph homepage for issues regarding the installation of pyqtgraph.

Contributing

If you want to contribute to PySAP, be sure to review the contribution guidelines and follow the code of conduct.