Rethinking data-driven point spread function modeling with a differentiable optical model


Authors:

Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, Pierre-Antoine Frugier

Journal: Inverse Problems
Year: 2023
DOI:  
Download: ADS | arXiv


Abstract

In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF). Specific scientific goals require a high-fidelity estimation of the PSF at target positions where no direct measurement of the PSF is provided. Even though observations of the PSF are available at some positions of the field of view (FOV), they are undersampled, noisy, and integrated in wavelength over the instrument's passband. PSF modeling represents a challenging ill-posed problem, as it requires building a model from these observations that can infer a super-resolved PSF at any wavelength and position in the FOV. Current data-driven PSF models can tackle spatial variations and super-resolution. However, they are not capable of capturing PSF chromatic variations. Our model, coined WaveDiff, proposes a paradigm shift in the data-driven modeling of the point spread function field of telescopes. We change the data-driven modeling space from the pixels to the wavefront by adding a differentiable optical forward model into the modeling framework. This change allows the transfer of a great deal of complexity from the instrumental response into the forward model. The proposed model relies on efficient automatic differentiation technology and modern stochastic first-order optimization techniques recently developed by the thriving machine-learning community. Our framework paves the way to building powerful, physically motivated models that do not require special calibration data. This paper demonstrates the WaveDiff model in a simplified setting of a space telescope. The proposed framework represents a performance breakthrough with respect to the existing state-of-the-art data-driven approach. The pixel reconstruction errors decrease six-fold at observation resolution and 44-fold for a 3x super-resolution. The ellipticity errors are reduced at least 20 times, and the size error is reduced more than 250 times. By only using noisy broad-band in-focus observations, we successfully capture the PSF chromatic variations due to diffraction. WaveDiff source code and examples associated with this paper are available at this link.
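
To give a concrete sense of what "modeling in the wavefront and propagating through a differentiable optical forward model" means, the sketch below implements a heavily simplified, monochromatic Fourier-optics forward model in NumPy: a wavefront error parametrised by a few coefficients is turned into a pixel PSF, and the same wavefront naturally produces different PSFs at different wavelengths. This is only an illustrative sketch, not the WaveDiff code; in WaveDiff the equivalent operations are written in an automatic-differentiation framework so that a pixel-level loss can be optimised with respect to the wavefront parameters.

```python
import numpy as np

def psf_from_wavefront(wfe_coeffs, basis, pupil_mask, wavelength):
    """Simplified monochromatic forward model: wavefront -> pixel PSF.

    wfe_coeffs : (K,) coefficients of the wavefront error in some basis
    basis      : (K, N, N) wavefront basis maps (e.g. Zernike-like modes)
    pupil_mask : (N, N) binary aperture mask
    wavelength : wavelength in the same units as the wavefront error
    """
    # Wavefront error as a linear combination of the basis maps.
    wfe = np.tensordot(wfe_coeffs, basis, axes=1)
    # Complex pupil function: aperture times the phase term.
    pupil = pupil_mask * np.exp(2j * np.pi * wfe / wavelength)
    # Fraunhofer propagation: the PSF is the squared modulus of the pupil's FFT.
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

# Toy example: circular pupil and two arbitrary low-order wavefront modes.
N = 64
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
pupil_mask = (x**2 + y**2 <= 1.0).astype(float)
basis = np.stack([x * pupil_mask, (2 * (x**2 + y**2) - 1) * pupil_mask])
psf_blue = psf_from_wavefront(np.array([0.1, 0.05]), basis, pupil_mask, wavelength=0.55)
psf_red = psf_from_wavefront(np.array([0.1, 0.05]), basis, pupil_mask, wavelength=0.9)
# The same wavefront yields different PSFs at different wavelengths (chromatic variation).
```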

Deep Learning-based galaxy image deconvolution


Authors: Utsav Akhaury, Jean-Luc Starck, Pascale Jablonka, Frédéric Courbin, Kevin Michalewicz
Journal: A&A
Year: 2022
DOI:  
Download: ADS | arXiv


Abstract

With the onset of large-scale astronomical surveys capturing millions of images, there is an increasing need to develop fast and accurate deconvolution algorithms that generalize well to different images. A powerful and accessible deconvolution method would allow for the reconstruction of a cleaner estimation of the sky. The deconvolved images would be helpful for performing photometric measurements and, in turn, for making progress in the fields of galaxy formation and evolution. We propose a new deconvolution method based on the Learnlet transform. We then investigate and compare the performance of different U-Net architectures and of Learnlet for image deconvolution in the astrophysical domain by following a two-step approach: a Tikhonov deconvolution with a closed-form solution, followed by post-processing with a neural network. To generate our training dataset, we extract HST cutouts from the CANDELS survey in the F606W filter (V-band) and corrupt these images to simulate their blurred-noisy versions. Our numerical results based on these simulations show a detailed comparison between the considered methods for different noise levels.
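
The first step of the two-step approach, the Tikhonov deconvolution with a closed-form solution, reduces to a simple Fourier-domain filter under a circular-convolution assumption and an identity regulariser. The sketch below illustrates that step only; the neural-network post-processing is not shown, and the regularisation weight `lam` is an arbitrary illustrative value rather than one taken from the paper.

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    """Closed-form Tikhonov deconvolution assuming circular convolution.

    Solves min_x ||h * x - y||^2 + lam ||x||^2 in the Fourier domain:
        X = conj(H) Y / (|H|^2 + lam)
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Toy usage: blur an image with a Gaussian PSF, add noise, then deconvolve.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[28:36, 28:36] = 1.0
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
noisy = blurred + 0.01 * rng.standard_normal(img.shape)
estimate = tikhonov_deconvolve(noisy, psf)  # this estimate is what the network would post-process
```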

Density Compensated Unrolled Networks for Non-Cartesian MRI Reconstruction

Deep neural networks have recently been thoroughly investigated as a powerful tool for MRI reconstruction. There is a lack of research, however, regarding their use in a specific MRI setting, namely non-Cartesian acquisitions. In this work, we introduce a novel kind of deep neural network to tackle this problem, namely density compensated unrolled neural networks, which rely on density compensation to correct the uneven weighting of k-space. We assess their efficiency on the publicly available fastMRI dataset and perform a small ablation study. Our results show that the density-compensated unrolled neural networks outperform the different baselines, and that all parts of the design are needed. We also open source our code, in particular a Non-Uniform Fast Fourier Transform for TensorFlow.

Reference: Z. Ramzi, J.-L. Starck and P. Ciuciu. “Density Compensated Unrolled Networks for Non-Cartesian MRI Reconstruction”.

This conference paper presents an adaptation of unrolled networks to the challenging setup of Non-Cartesian MRI Reconstruction. It also introduces the implementation of the Non-Uniform Fast Fourier Transform in TensorFlow: tfkbnufft.
It has been accepted at ISBI 2021.
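
The density-compensation idea itself is independent of any particular NUFFT implementation: non-uniform k-space samples are weighted by (an estimate of) the inverse local sampling density before applying the adjoint transform, so that the image fed to the unrolled network is not dominated by the densely sampled centre of k-space. The 1D sketch below uses a brute-force non-uniform DFT and a hand-derived density estimate purely for illustration; it does not use the tfkbnufft API.

```python
import numpy as np

def ndft(freqs, signal):
    """Forward non-uniform DFT: sample the spectrum of `signal` at `freqs` (cycles/sample)."""
    n = np.arange(len(signal))
    return np.exp(-2j * np.pi * np.outer(freqs, n)) @ signal

def ndft_adjoint(freqs, kspace, length):
    """Adjoint (conjugate transpose) of the non-uniform DFT above."""
    n = np.arange(length)
    return np.exp(2j * np.pi * np.outer(n, freqs)) @ kspace

# 1D sampling pattern that is denser near the centre of k-space (radial-like).
rng = np.random.default_rng(1)
length = 128
freqs = np.sign(rng.uniform(-1, 1, 400)) * rng.uniform(0, 0.5, 400) ** 2

signal = np.zeros(length); signal[40:90] = 1.0
kspace = ndft(freqs, signal)

# Density compensation: weight each sample by an estimate of the inverse sampling density.
# Here the density of `freqs` scales like 1/sqrt(|f|), so sqrt(|f|)-like weights flatten it.
weights = np.sqrt(np.abs(freqs)) + 1e-3

naive = ndft_adjoint(freqs, kspace, length)                  # uneven k-space weighting
compensated = ndft_adjoint(freqs, weights * kspace, length)  # density-compensated adjoint
# `compensated` is the better-conditioned input handed to the unrolled network.
```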

State-of-the-art Machine Learning MRI Reconstruction in 2020: Results of the Second fastMRI Challenge

Accelerating MRI scans is one of the principal outstanding problems in the MRI research community. Towards this goal, we hosted the second fastMRI competition, targeted at reconstructing MR images from subsampled k-space data. We provided participants with data from 7,299 clinical brain scans (de-identified via a HIPAA-compliant procedure by NYU Langone Health), holding back the fully-sampled data from 894 of these scans for challenge evaluation purposes. In contrast to the 2019 challenge, we focused our radiologist evaluations on pathological assessment in brain images. We also debuted a new Transfer track that required participants to submit models evaluated on MRI scanners from outside the training set. We received 19 submissions from eight different groups. Results showed one team scoring best in both SSIM scores and qualitative radiologist evaluations. We also performed analysis on alternative metrics to mitigate the effects of background noise and collected feedback from the participants to inform future challenges. Lastly, we identified common failure modes across the submissions, highlighting areas of need for future research in the MRI reconstruction community.

Reference: Matthew J. Muckley, ..., Z. Ramzi, P. Ciuciu, J.-L. Starck, et al. “State-of-the-art Machine Learning MRI Reconstruction in 2020: Results of the Second fastMRI Challenge”.

This paper presents the results of the fastMRI 2020 challenge, where our team finished 2nd in the 4x and 8x supervised tracks.
It is currently being submitted to IEEE TMI.

Faster and better sparse blind source separation through mini-batch optimization

Sparse Blind Source Separation (sBSS) plays a key role in scientific domains as different as biomedical imaging, remote sensing or astrophysics, which require the development of increasingly faster and more scalable BSS methods without sacrificing separation performance. To that end, a new distributed sparse BSS algorithm is introduced, based on a mini-batch extension of the Generalized Morphological Component Analysis algorithm (GMCA). Precisely, it combines a robust projected alternating least-squares method with mini-batch optimization. The originality further lies in the use of a manifold-based aggregation of asynchronously estimated mixing matrices. Numerical experiments are carried out on realistic spectroscopic spectra and highlight the ability of the proposed distributed GMCA (dGMCA) to provide very good separation results even when very small mini-batches are used. Quite unexpectedly, it can further outperform the (non-distributed) state-of-the-art methods for highly sparse sources.

Reference: Christophe Kervazo, Tobias Liaudat and Jérôme Bobin.
“Faster and better sparse blind source separation through mini-batch optimization”, Digital Signal Processing, Elsevier, 2020.

DSP Elsevier, HAL.
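
A heavily simplified sketch of the mini-batch strategy described in the abstract above: each mini-batch of data columns produces a mixing-matrix estimate through thresholded alternating least squares, and the per-batch estimates are then aggregated. A plain average replaces the manifold-based aggregation of dGMCA, and a fixed soft threshold replaces GMCA's adaptive thresholding strategy, so this illustrates the structure rather than the algorithm itself.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator enforcing sparsity on the sources."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def minibatch_sparse_bss(X, n_sources, batch_size=50, n_iter=20, lam=0.1, seed=0):
    """Toy mini-batch sparse BSS for X (n_obs, n_samples) ~ A @ S with sparse S."""
    rng = np.random.default_rng(seed)
    n_obs, n_samples = X.shape
    A = rng.standard_normal((n_obs, n_sources))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        A_batches = []
        for start in range(0, n_samples, batch_size):
            Xb = X[:, start:start + batch_size]
            # Sparse source update on the batch (least squares + soft thresholding).
            Sb = soft_threshold(np.linalg.lstsq(A, Xb, rcond=None)[0], lam)
            # Mixing-matrix update on the batch (least squares + column normalisation).
            Ab = np.linalg.lstsq(Sb.T, Xb.T, rcond=None)[0].T
            Ab /= np.linalg.norm(Ab, axis=0) + 1e-12
            A_batches.append(Ab)
        # dGMCA aggregates the asynchronous estimates on a manifold; a plain mean is used here.
        A = np.mean(A_batches, axis=0)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    S = soft_threshold(np.linalg.lstsq(A, X, rcond=None)[0], lam)
    return A, S

# Toy usage: 4 observations of 2 sparse sources.
rng = np.random.default_rng(3)
S_true = rng.standard_normal((2, 1000)) * (rng.uniform(size=(2, 1000)) < 0.1)
A_true = rng.standard_normal((4, 2))
X = A_true @ S_true + 0.01 * rng.standard_normal((4, 1000))
A_est, S_est = minibatch_sparse_bss(X, n_sources=2)
```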

Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects that are affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim at measuring the shape of galaxies, and PSF effects represent an important source of systematic errors which must be handled appropriately. This demands a high accuracy in the modelling as well as the estimation of the PSF at galaxy positions.

Aims. The goal of this paper is to estimate the PSF at galaxy positions (a task sometimes referred to as non-parametric PSF estimation), starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, our model needs to first precisely capture the PSF field variations over the Field of View (FoV), and then recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously creates a PSF field model over the whole of the instrument's focal plane. This allows us to capture global as well as local PSF features through the use of two complementary models that enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can make it difficult to capture global ellipticity patterns.

Results. We first test our method on a realistic simulated dataset, comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA), and outperform both of them. We then contrast our approach with PSFEx on real data from the Canada-France Imaging Survey (CFIS), which uses the Canada-France-Hawaii Telescope (CFHT). We show that our PSF model is less noisy and achieves a ~22% gain in pixel Root Mean Squared Error (RMSE) with respect to PSFEx.

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn. “Multi-CCD Point Spread Function Modelling”, submitted 2020.

arXiv, code.
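
The structural idea of MCCD, a global model defined over the whole focal plane combined with local per-CCD models under different spatial constraints, can be caricatured as follows. The component images and the polynomial spatial weights below are placeholders chosen for illustration; the actual MCCD constraints (graph-based local components, matrix factorisation, sparsity) are not reproduced here.

```python
import numpy as np

def mccd_like_psf(pos_global, pos_ccd, ccd_id,
                  global_comps, global_coeffs, local_comps, local_coeffs):
    """Toy MCCD-style PSF: a global (focal-plane-wide) part plus a local (per-CCD) part.

    pos_global    : (2,) position in focal-plane coordinates, in [-1, 1]^2
    pos_ccd       : (2,) position in the CCD's own coordinates, in [-1, 1]^2
    ccd_id        : index of the CCD the star falls on
    global_comps  : (Kg, N, N) eigen-PSF images shared by the whole focal plane
    global_coeffs : (Kg, 3) coefficients of [1, x, y] for each global component
    local_comps   : (n_ccd, Kl, N, N) eigen-PSF images specific to each CCD
    local_coeffs  : (n_ccd, Kl, 3) per-CCD coefficients of [1, x, y]
    """
    feats_g = np.array([1.0, pos_global[0], pos_global[1]])
    feats_l = np.array([1.0, pos_ccd[0], pos_ccd[1]])
    wg = global_coeffs @ feats_g            # (Kg,) global spatial weights
    wl = local_coeffs[ccd_id] @ feats_l     # (Kl,) local spatial weights
    psf = (np.tensordot(wg, global_comps, axes=1)
           + np.tensordot(wl, local_comps[ccd_id], axes=1))
    psf = np.clip(psf, 0, None)
    return psf / psf.sum()

# Toy usage with random components: 2 global eigen-PSFs, 1 local eigen-PSF per CCD, 4 CCDs.
rng = np.random.default_rng(2)
N, Kg, Kl, n_ccd = 32, 2, 1, 4
global_comps = np.abs(rng.standard_normal((Kg, N, N)))
local_comps = np.abs(rng.standard_normal((n_ccd, Kl, N, N)))
psf = mccd_like_psf([0.3, -0.7], [0.1, 0.2], ccd_id=1,
                    global_comps=global_comps, global_coeffs=rng.uniform(0.5, 1.0, (Kg, 3)),
                    local_comps=local_comps, local_coeffs=rng.uniform(0.5, 1.0, (n_ccd, Kl, 3)))
```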

XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge

We present a modular cross-domain neural network, the XPDNet, and its application to the MRI reconstruction task. This approach consists of unrolling the PDHG algorithm and learning the acceleration scheme between steps. We also adopt state-of-the-art techniques specific to Deep Learning for MRI reconstruction. At the time of writing, this approach is the best performer in PSNR on the fastMRI leaderboards for both knee and brain at acceleration factor 4.

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge”.


This network was used to submit reconstructions to the 2020 fastMRI Brain reconstruction challenge. Results are to be announced on December 6th 2020.
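
The sketch below shows, schematically, what a cross-domain unrolled reconstruction looks like: a fixed number of alternating k-space and image-space updates built around a data-consistency term, with learned modules (identity placeholders here) refining each update. It is not the XPDNet code, and the damped dual update is a simplification of the PDHG unrolling with a learned acceleration scheme described above.

```python
import numpy as np

def unrolled_cross_domain(y, mask, n_iter=10,
                          image_net=lambda x: x, kspace_net=lambda k: k):
    """Schematic cross-domain unrolled reconstruction for masked Cartesian k-space.

    y          : measured k-space, zero outside the sampling mask
    mask       : boolean sampling mask with the same shape as y
    image_net / kspace_net : learned correction modules (identity placeholders here)
    """
    forward = lambda x: mask * np.fft.fft2(x, norm="ortho")   # image -> measured k-space
    adjoint = lambda k: np.fft.ifft2(mask * k, norm="ortho")  # measured k-space -> image
    x = adjoint(y)                                            # zero-filled starting point
    dual = np.zeros_like(y)                                   # k-space (dual) buffer
    for _ in range(n_iter):
        # Dual update: damped accumulation of the data-consistency residual, then a learned refinement.
        dual = kspace_net(0.5 * dual + forward(x) - y)
        # Primal update: step along the adjoint of the residual, then a learned refinement.
        x = image_net(x - 0.5 * adjoint(dual))
    return np.abs(x)

# Toy usage: retrospective random undersampling of a synthetic image.
rng = np.random.default_rng(4)
img = np.zeros((64, 64)); img[16:48, 24:40] = 1.0
mask = rng.uniform(size=img.shape) < 0.33
y = mask * np.fft.fft2(img, norm="ortho")
recon = unrolled_cross_domain(y, mask)
```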

Denoising Score-Matching for Uncertainty Quantification in Inverse Problems

Deep neural networks have proven extremely efficient at solving a wide range of inverse problems, but most often the uncertainty on the solution they provide is hard to quantify. In this work, we propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover. We adopt recent denoising score matching techniques to learn this prior from data, and subsequently use it as part of an annealed Hamiltonian Monte-Carlo scheme to sample the full posterior of image inverse problems. We apply this framework to Magnetic Resonance Image (MRI) reconstruction and illustrate how this approach not only yields high quality reconstructions but can also be used to assess the uncertainty on particular features of a reconstructed image.

Reference: Z. Ramzi, Benjamin Remy, François Lanusse, J.-L. Starck and P. Ciuciu. “Denoising Score-Matching for Uncertainty Quantification in Inverse Problems”, Deep Learning and Inverse Problems Workshop, NeurIPS, 2020.
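
A toy version of the sampling scheme can be sketched as follows: the posterior score is split into a learned prior score plus the gradient of the data-fidelity term, and samples are drawn while the noise level is annealed from high to low. For simplicity the sketch uses annealed Langevin dynamics rather than the annealed Hamiltonian Monte-Carlo of the paper, and a crude analytic prior score stands in for the trained denoising-score-matching network.

```python
import numpy as np

def annealed_langevin_posterior(y, forward, adjoint, prior_score, shape, sigmas,
                                n_steps=50, eps=1e-5, noise_std=0.05, seed=0):
    """Toy annealed Langevin sampling of p(x | y) with a learned prior score.

    The posterior score is approximated as prior_score(x, sigma) + grad log p(y | x),
    while the annealing level sigma decreases from high to low.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    for sigma in sigmas:                               # anneal from high to low noise
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            # Gaussian data-fidelity gradient, with the likelihood variance inflated by the
            # current annealing level (a common stabilising heuristic).
            likelihood_grad = adjoint(y - forward(x)) / (noise_std**2 + sigma**2)
            score = prior_score(x, sigma) + likelihood_grad
            x = x + step * score + np.sqrt(2 * step) * rng.standard_normal(shape)
    return x

# Toy usage: denoising (A = identity) with a crude Gaussian prior score standing in for
# the trained denoising-score-matching network.
shape = (32, 32)
truth = np.zeros(shape); truth[8:24, 8:24] = 1.0
y = truth + 0.05 * np.random.default_rng(1).standard_normal(shape)
prior_score = lambda x, sigma: -x / (1.0 + sigma**2)   # placeholder, not a trained network
sample = annealed_langevin_posterior(y, forward=lambda x: x, adjoint=lambda r: r,
                                     prior_score=prior_score, shape=shape,
                                     sigmas=np.geomspace(1.0, 0.01, 10))
```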

Wavelets in the Deep Learning Era

Sparsity-based methods, such as wavelets, were the state of the art for inverse problems for more than 20 years before being overtaken by neural networks.
In particular, U-nets have proven to be extremely effective.
Their main ingredients are highly non-linear processing, massive learning made possible by advances in optimization algorithms, the computing power of GPUs, and the availability of large data sets for training.
While the many stages of non-linearity are intrinsic to deep learning, learning from training data could also be exploited by sparsity-based approaches.
The aim of our study is to push the limits of sparsity with learning and to compare the results with U-nets.
We present a new network architecture that preserves the properties of sparsity-based methods, such as exact reconstruction and good generalization, while harnessing the power of neural networks for learning and fast computation.
We evaluate the model on image denoising tasks and show that it is competitive with learning-based models.

Reference: Z. Ramzi, J.-L. Starck and P. Ciuciu. “Wavelets in the Deep Learning Era”, EUSIPCO, 2020.
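
The exact-reconstruction property mentioned above comes from keeping an analysis/synthesis pair whose composition is the identity when no thresholding is applied, and learning only the filters and thresholds. The toy, single-scale sketch below (which assumes SciPy is available) uses a box-filter decomposition and a fixed soft threshold in place of the learned filters and thresholds; it is not the architecture of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def single_scale_decompose(image, size=5):
    """One-scale à-trous-like decomposition: low-pass approximation plus detail residual.

    With zero thresholds, approximation + detail reconstructs the image exactly.
    """
    approx = uniform_filter(image, size=size)
    detail = image - approx
    return approx, detail

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def denoise(image, lam=0.05, size=5):
    """Denoise by thresholding the detail coefficients only (in a learned variant, the
    analysis/synthesis filters and the thresholds would be the trainable parts)."""
    approx, detail = single_scale_decompose(image, size=size)
    return approx + soft_threshold(detail, lam)

# Toy usage.
rng = np.random.default_rng(5)
clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
assert np.allclose(denoise(noisy, lam=0.0), noisy)   # exact reconstruction at zero threshold
restored = denoise(noisy, lam=0.1)
```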

Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset


The MRI reconstruction field lacked a proper data set that allowed for reproducible results on real raw data (i.e. complex-valued), especially for deep learning (DL) methods, as these approaches require much more data than classical compressed sensing (CS) reconstruction. This gap is now filled by the fastMRI data set, and recent DL models need to be evaluated on this benchmark. Moreover, these networks are implemented in different frameworks and repositories (when publicly available at all); a common, publicly available tool is therefore needed to allow a reproducible benchmark of the different methods and to ease the building of new models. We provide such a tool that allows the benchmarking of different deep learning reconstruction models.

Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset”, ISBI, 2020.