Rethinking data-driven point spread function modeling with a differentiable optical model

 

Authors:

Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, Pierre-Antoine Frugier

Journal: Inverse Problems
Year: 2023
DOI:  
Download: ADS | arXiv


Abstract

In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF). Specific scientific goals require a high-fidelity estimation of the PSF at target positions where no direct measurement of the PSF is provided. Even though observations of the PSF are available at some positions of the field of view (FOV), they are undersampled, noisy, and integrated in wavelength over the instrument's passband. PSF modeling represents a challenging ill-posed problem, as it requires building a model from these observations that can infer a super-resolved PSF at any wavelength and position in the FOV. Current data-driven PSF models can tackle spatial variations and super-resolution. However, they are not capable of capturing PSF chromatic variations. Our model, coined WaveDiff, proposes a paradigm shift in the data-driven modeling of the point spread function field of telescopes. We change the data-driven modeling space from the pixels to the wavefront by adding a differentiable optical forward model into the modeling framework. This change allows the transfer of a great deal of complexity from the instrumental response into the forward model. The proposed model relies on efficient automatic differentiation technology and modern stochastic first-order optimization techniques recently developed by the thriving machine-learning community. Our framework paves the way to building powerful, physically motivated models that do not require special calibration data. This paper demonstrates the WaveDiff model in a simplified setting of a space telescope. The proposed framework represents a performance breakthrough with respect to the existing state-of-the-art data-driven approach. The pixel reconstruction errors decrease six-fold at observation resolution and 44-fold for a 3x super-resolution. The ellipticity errors are reduced at least 20 times, and the size error is reduced more than 250 times. By only using noisy broad-band in-focus observations, we successfully capture the PSF chromatic variations due to diffraction. WaveDiff source code and examples associated with this paper are available at this link.
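The key technical move is to parameterize the PSF in the wavefront domain and let a differentiable optical forward model generate the pixels. As a rough illustration of such a forward model, the numpy sketch below propagates a pupil-plane wavefront error to a monochromatic focal-plane PSF with a Fraunhofer (FFT) step; the function and parameter names are illustrative, and WaveDiff itself expresses the analogous operations as TensorFlow ops so that gradients can flow back to the wavefront parameters.

```python
import numpy as np

def monochromatic_psf(wfe, pupil, wavelength, pad_factor=2):
    """Toy Fraunhofer propagation: pupil-plane wavefront error `wfe`
    (same units as `wavelength`) to a focal-plane PSF.
    Illustrative sketch only, not the WaveDiff implementation."""
    n = pupil.shape[0]
    # Complex pupil function with the phase induced by the wavefront error.
    pupil_fn = pupil * np.exp(2j * np.pi * wfe / wavelength)
    # Zero-pad to control focal-plane sampling, then propagate with an FFT.
    padded = np.zeros((pad_factor * n, pad_factor * n), dtype=complex)
    padded[:n, :n] = pupil_fn
    psf = np.abs(np.fft.fftshift(np.fft.fft2(padded))) ** 2
    return psf / psf.sum()  # normalise to unit flux

# Example: circular aperture with a small defocus-like aberration.
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x ** 2 + y ** 2
pupil = (r2 <= 1.0).astype(float)
wfe = 50e-9 * (2 * r2 - 1) * pupil            # ~50 nm defocus-like term
psf_blue = monochromatic_psf(wfe, pupil, wavelength=550e-9)
psf_red = monochromatic_psf(wfe, pupil, wavelength=900e-9)
```

In this picture, a broad-band observation is a weighted sum of such monochromatic PSFs over the passband, which is how a wavefront-space model can recover chromatic variations from in-focus broad-band star images.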

Deep Learning-based galaxy image deconvolution

 

Authors: Utsav Akhaury, Jean-Luc Starck, Pascale Jablonka, Frédéric Courbin, Kevin Michalewicz
Journal: A&A
Year: 2022
DOI:  
Download: ADS | arXiv


Abstract

With the onset of large-scale astronomical surveys capturing millions of images, there is an increasing need to develop fast and accurate deconvolution algorithms that generalize well to different images. A powerful and accessible deconvolution method would allow for the reconstruction of a cleaner estimation of the sky. The deconvolved images would be helpful to perform photometric measurements and make progress in the fields of galaxy formation and evolution. We propose a new deconvolution method based on the Learnlet transform. We then investigate and compare the performance of different Unet architectures and Learnlet for image deconvolution in the astrophysical domain by following a two-step approach: a Tikhonov deconvolution with a closed-form solution, followed by post-processing with a neural network. To generate our training dataset, we extract HST cutouts from the CANDELS survey in the F606W filter (V-band) and corrupt these images to simulate their blurred-noisy versions. Our numerical results based on these simulations show a detailed comparison between the considered methods for different noise levels.
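The first, closed-form step of the two-step scheme can be sketched in a few lines. The snippet below performs a Fourier-domain Tikhonov deconvolution assuming a known PSF stamp `h` that is centred and of the same size as the image, periodic boundaries, and an identity regularisation operator; the variable names and the `network` post-processing call are placeholders, not the paper's actual implementation.

```python
import numpy as np

def tikhonov_deconvolve(y, h, lam=1e-2):
    """Closed-form Tikhonov deconvolution in the Fourier domain:
    X = conj(H) * Y / (|H|^2 + lam).
    Assumes `h` is centred, has the same shape as `y`, and that an
    identity regularisation operator is used."""
    H = np.fft.fft2(np.fft.ifftshift(h))   # PSF re-centred at the origin
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Usage sketch: y is the blurred-noisy cutout, h the normalised PSF stamp.
# x0 = tikhonov_deconvolve(y, h, lam=1e-2)
# x = network(x0)   # hypothetical second step (Unet or Learnlet denoiser)
```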

UNIONS: The impact of systematic errors on weak-lensing peak counts

 

Authors: E. Ayçoberry, V. Ajani, A. Guinot, M. Kilbinger, V. Pettorino, S. Farrens, J.-L. Starck, R. Gavazzi, M. Hudson
Journal: A&A
Year: 2022
DOI:  
Download: ADS | arXiv


Abstract

Context. The Ultraviolet Near-Infrared Optical Northern Survey (UNIONS) is an ongoing deep photometric multi-band survey of the Northern sky. As part of UNIONS, the Canada-France Imaging Survey (CFIS) provides r-band data which we use to study weak-lensing peak counts for cosmological inference.
Aims. We assess systematic effects for weak-lensing peak counts and their impact on cosmological parameters for the UNIONS survey. In particular, we present results on local calibration, metacalibration shear bias, baryonic feedback, the source galaxy redshift estimate, intrinsic alignment, and the cluster member dilution.

Methods. For each uncertainty and systematic effect, we describe our mitigation scheme and the impact on cosmological parameter constraints. We obtain constraints on cosmological parameters from MCMC using CFIS data and MassiveNuS N-body simulations as a model for peak counts statistics.
Results. Depending on the calibration (local versus global, and the inclusion of the residual multiplicative shear bias), the mean matter density parameter Ωm can shift up to −0.024 (−0.5σ). We also see that including baryonic corrections can shift Ωm by +0.027 (+0.5σ) with respect to the DM-only simulations. Reducing the impact of the intrinsic alignment and cluster member dilution through signal-to-noise cuts can lead to a shift in Ωm of +0.027 (+0.5σ). Finally, with a mean redshift uncertainty of ∆z̄ = 0.03, we see that the shift of Ωm (+0.001, which corresponds to +0.02σ) is not significant.

Conclusions. This paper investigates, for the first time with UNIONS weak-lensing data, the impact of systematic effects on peak counts. The value of Ωm is the most impacted and can shift by up to ∼ 0.03, which corresponds to 0.5σ, depending on the choices made for each systematic effect. We expect constraints to become more reliable with future (larger) data catalogues, for which the current pipeline will provide a starting point. The code used to obtain the results is available in the following GitHub repository.
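For context, the peak-count statistic at the heart of the analysis amounts to counting local maxima of the signal-to-noise map in S/N bins. The numpy sketch below shows only that basic operation, with illustrative names; the paper's pipeline adds map smoothing, masking, and the S/N cuts discussed above.

```python
import numpy as np

def peak_counts(snr_map, bin_edges):
    """Count local maxima of a signal-to-noise map in S/N bins.
    A pixel is a peak if it is strictly larger than its 8 neighbours.
    Minimal sketch of the peak-count statistic."""
    padded = np.pad(snr_map, 1, mode='constant', constant_values=-np.inf)
    centre = padded[1:-1, 1:-1]
    is_peak = np.ones_like(centre, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_peak &= centre > padded[1 + dy:padded.shape[0] - 1 + dy,
                                       1 + dx:padded.shape[1] - 1 + dx]
    counts, _ = np.histogram(centre[is_peak], bins=bin_edges)
    return counts

# Example: histogram of peaks between S/N = 0 and 6.
# counts = peak_counts(kappa / sigma_noise, bin_edges=np.linspace(0, 6, 13))
```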

ShapePipe: a new shape measurement pipeline and weak-lensing application to UNIONS/CFIS data

 

Authors: A. Guinot, M. Kilbinger, S. Farrens, A. Peel, A. Pujol, M. Schmitz, J.-L. Starck, T. Erben, R. Gavazzi, S. Gwyn, M. Hudson, H. Hildebrandt, T. Liaudat, et al.
Journal: A&A
Year: 2022
DOI:  
Download: ADS | arXiv


Abstract

UNIONS is an ongoing collaboration that will provide the largest deep photometric survey of the Northern sky in four optical bands to date. As part of this collaboration, CFIS is taking r-band data with an average seeing of 0.65 arcsec, which is complete to magnitude 24.5 and thus ideal for weak-lensing studies. We perform the first weak-lensing analysis of CFIS r-band data over an area spanning 1700 deg² of the sky. We create a catalogue with measured shapes for 40 million galaxies, corresponding to an effective density of 6.8 galaxies per square arcminute, and demonstrate a low level of systematic biases. This work serves as the basis for further cosmological studies using the full UNIONS survey of 4800 deg² when completed. Here we present ShapePipe, a newly developed weak-lensing pipeline. This pipeline makes use of state-of-the-art methods such as Ngmix for accurate galaxy shape measurement. Shear calibration is performed with metacalibration. We carry out extensive validation tests on the Point Spread Function (PSF), and on the galaxy shapes. In addition, we create realistic image simulations to validate the estimated shear. We quantify the PSF model accuracy and show that the level of systematics is low as measured by the PSF residuals. Their effect on the shear two-point correlation function is sub-dominant compared to the cosmological contribution on angular scales <100 arcmin. The additive shear bias is below 5 × 10⁻⁴, and the residual multiplicative shear bias is at most 10⁻³ as measured on image simulations. Using COSEBIs we show that there are no significant B-modes present in second-order shear statistics. We present convergence maps and see clear correlations of the E-mode with known cluster positions. We measure the stacked tangential shear profile around Planck clusters at a significance higher than 4σ.
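The multiplicative and additive biases quoted above are defined through the standard linear shear-bias model g_obs = (1 + m) g_true + c, measured component by component on image simulations. The sketch below is a minimal least-squares illustration of that definition with hypothetical variable names; the actual calibration relies on metacalibration responses rather than this plain fit.

```python
import numpy as np

def shear_bias(g_true, g_obs):
    """Fit the linear bias model g_obs = (1 + m) * g_true + c per component.
    Minimal sketch of how the multiplicative (m) and additive (c) biases
    are defined; not the pipeline's metacalibration-based estimator."""
    slope, intercept = np.polyfit(g_true, g_obs, deg=1)
    return slope - 1.0, intercept

# Usage: one call per shear component, over simulation realisations.
# m1, c1 = shear_bias(g1_true, g1_measured)
# m2, c2 = shear_bias(g2_true, g2_measured)
```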

ShapePipe: A modular weak-lensing processing and analysis pipeline

 

Authors: S. Farrens, A. Guinot, M. Kilbinger, T. Liaudat, L. Baumont, X. Jimenez, A. Peel, A. Pujol, M. Schmitz, J.-L. Starck, and A. Z. Vitorelli
Journal: A&A
Year: 2022
DOI: 10.1051/0004-6361/202243970
Download: ADS | arXiv


Abstract

We present the first public release of ShapePipe, an open-source and modular weak-lensing measurement, analysis, and validation pipeline written in Python. We describe the design of the software and justify the choices made. We provide a brief description of all the modules currently available and summarise how the pipeline has been applied to real Ultraviolet Near-Infrared Optical Northern Survey data. Finally, we mention plans for future applications and development. The code and accompanying documentation are publicly available on GitHub.

NC-PDNet: a Density-Compensated Unrolled Network for 2D and 3D non-Cartesian MRI Reconstruction

Deep Learning has become a very promising avenue for magnetic resonance image (MRI) reconstruction. In this work, we explore the potential of unrolled networks for the non-Cartesian acquisition setting. We design the NC-PDNet, the first density-compensated unrolled network, and validate the need for its key components via an ablation study. Moreover, we conduct some generalizability experiments to test our network in out-of-distribution settings, for example training on knee data and validating on brain data. The results show that the NC-PDNet outperforms the baseline models visually and quantitatively in the 2D settings. Additionally, in the 3D settings, it outperforms them visually. In particular, in the 2D multi-coil acquisition scenario, the NC-PDNet provides up to a 1.2 dB improvement in peak signal-to-noise ratio (PSNR) over baseline networks, while also allowing a gain of at least 1 dB in PSNR in generalization settings. We provide the open-source implementation of our network, and in particular the Non-uniform Fourier Transform in TensorFlow, tested on 2D multi-coil and 3D data.

Reference: Z. Ramzi, Chaithya G.R., J.-L. Starck and P. Ciuciu, “NC-PDNet: a Density-Compensated Unrolled Network for 2D and 3D non-Cartesian MRI Reconstruction”.

This conference paper presents an adaptation of unrolled networks to the challenging setup of Non-Cartesian MRI Reconstruction. It also introduces the implementation of the Non-Uniform Fast Fourier Transform in TensorFlow: tfkbnufft.
It has been accepted at ISBI 2021.
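To make the density-compensation idea concrete, the 1D numpy toy below re-weights non-uniform k-space samples by their local sample spacing before applying the adjoint transform, which is the role of the DC weighting placed in front of the adjoint NUFFT in each unrolled iteration. The explicit DFT matrix and all variable names are illustrative; a real implementation would use an NUFFT such as the tfkbnufft package mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x = np.zeros(n)
x[40:60] = 1.0                                   # toy 1D "image"
grid = np.arange(n) - n // 2

# Non-uniform sampling, denser near the k-space centre (cycles per sample).
k = 0.5 * np.sign(rng.standard_normal(4 * n)) * rng.random(4 * n) ** 2

# Explicit non-uniform DFT as a stand-in for the NUFFT.
F = np.exp(-2j * np.pi * np.outer(k, grid)) / np.sqrt(n)
y = F @ x                                        # simulated k-space measurements

# Density-compensation weights ~ local sample spacing (1D Voronoi cells).
order = np.argsort(k)
w = np.empty_like(k)
w[order] = np.gradient(np.sort(k))

naive = np.abs(F.conj().T @ y)                   # plain adjoint: density-blurred
compensated = np.abs(F.conj().T @ (w * y))       # density-compensated adjoint
```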

Density Compensated Unrolled Networks for Non-Cartesian MRI Reconstruction

Deep neural networks have recently been thoroughly investigated as a powerful tool for MRI reconstruction. There is a lack of research, however, regarding their use for a specific setting of MRI, namely non-Cartesian acquisitions. In this work, we introduce a novel kind of deep neural networks to tackle this problem, namely density compensated unrolled neural networks, which rely on Density Compensation to correct the uneven weighting of the k-space. We assess their efficiency on the publicly available fastMRI dataset, and perform a small ablation study. Our results show that the density-compensated unrolled neural networks outperform the different baselines, and that all parts of the design are needed. We also open source our code, in particular a Non-Uniform Fast Fourier transform for TensorFlow.

Reference: Z. Ramzi, J.-L. Starck and P. Ciuciu, “Density Compensated Unrolled Networks for Non-Cartesian MRI Reconstruction”.

This conference paper presents an adaptation of unrolled networks to the challenging setup of Non-Cartesian MRI Reconstruction. It also introduces the implementation of the Non-Uniform Fast Fourier Transform in TensorFlow: tfkbnufft.
It has been accepted at ISBI 2021.

Starlet l1-norm for weak lensing cosmology

 

Authors:

Virginia Ajani, Jean-Luc Starck, Valeria Pettorino

Journal:
Astronomy & Astrophysics, Forthcoming article, Letters to the Editor
Year: 01/2021
Download: A&A | arXiv


Abstract

We present a new summary statistic for weak lensing observables, higher than second order, suitable for extracting non-Gaussian cosmological information and inferring cosmological parameters. We name this statistic the 'starlet l1-norm' as it is computed via the sum of the absolute values of the starlet (wavelet) decomposition coefficients of a weak lensing map. In comparison to the state-of-the-art higher-order statistics -- weak lensing peak counts and minimum counts, or the combination of the two -- the l1-norm provides a fast multi-scale calculation of the full void and peak distribution, avoiding the problem of defining what a peak is and what a void is: the l1-norm carries the information encoded in all pixels of the map, not just the ones in local maxima and minima. We show its potential by applying it to the weak lensing convergence maps provided by the MassiveNus simulations to get constraints on the sum of neutrino masses, the matter density parameter, and the amplitude of the primordial power spectrum. We find that, in an ideal setting without further systematics, the starlet l1-norm remarkably outperforms commonly used summary statistics, such as the power spectrum or the combination of peak and void counts, in terms of constraining power, representing a promising new unified framework to simultaneously account for the information encoded in peak counts and voids. We find that the starlet l1-norm outperforms the power spectrum by 72% on Mν, 60% on Ωm, and 75% on As for the Euclid-like setting considered; it also improves upon the state-of-the-art combination of peaks and voids for a single smoothing scale by 24% on Mν, 50% on Ωm, and 24% on As.
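As a minimal illustration of the statistic, the sketch below computes a starlet decomposition (isotropic undecimated B3-spline wavelet, à trous algorithm) and sums the absolute coefficients per scale. This is a toy version with illustrative names: the paper additionally bins the coefficients in signal-to-noise before taking the l1-norm, and production starlet transforms are available elsewhere (e.g. the Sparse2D/PySAP tools).

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_transform(image, n_scales):
    """Isotropic undecimated (starlet) transform with the B3-spline kernel,
    via the a trous algorithm. Returns n_scales detail planes plus the
    coarse plane. Minimal sketch only."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        # Dilate the kernel by inserting 2^j - 1 zeros between taps.
        step = 2 ** j
        hj = np.zeros(4 * step + 1)
        hj[::step] = h
        smooth = convolve1d(convolve1d(c, hj, axis=0, mode='reflect'),
                            hj, axis=1, mode='reflect')
        planes.append(c - smooth)          # detail (wavelet) coefficients
        c = smooth
    planes.append(c)                        # coarse residual
    return planes

def starlet_l1_norm(kappa, n_scales=4):
    """Per-scale l1-norm of the starlet coefficients of a convergence map,
    without the S/N binning used in the paper."""
    return np.array([np.abs(w).sum()
                     for w in starlet_transform(kappa, n_scales)[:-1]])
```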

State-of-the-art Machine Learning MRI Reconstruction in 2020: Results of the Second fastMRI Challenge

Accelerating MRI scans is one of the principal outstanding problems in the MRI research community. Towards this goal, we hosted the second fastMRI competition targeted towards reconstructing MR images with subsampled k-space data. We provided participants with data from 7,299 clinical brain scans (de-identified via a HIPAA-compliant procedure by NYU Langone Health), holding back the fully-sampled data from 894 of these scans for challenge evaluation purposes. In contrast to the 2019 challenge, we focused our radiologist evaluations on pathological assessment in brain images. We also debuted a new Transfer track that required participants to submit models evaluated on MRI scanners from outside the training set. We received 19 submissions from eight different groups. Results showed one team scoring best in both SSIM scores and qualitative radiologist evaluations. We also performed analysis on alternative metrics to mitigate the effects of background noise and collected feedback from the participants to inform future challenges. Lastly, we identify common failure modes across the submissions, highlighting areas of need for future research in the MRI reconstruction community.

Reference: Mathew J. Muckley, ..., Z. Ramzi, P. Ciuciu and J.-L. Starck et al., “State-of-the-art Machine Learning MRI Reconstruction in 2020: Results of the Second fastMRI Challenge”.

This paper presents the results of the fastMRI 2020 challenge, where our team finished 2nd in the 4x and 8x supervised tracks.
It is currently being submitted to IEEE TMI.

Multi-CCD Point Spread Function Modelling

Context. Galaxy imaging surveys observe a vast number of objects that are affected by the instrument’s Point Spread Function (PSF). Weak lensing missions, in particular, aim at measuring the shape of galaxies, and PSF effects represent an important source of systematic errors which must be handled appropriately. This demands a high accuracy in the modelling as well as the estimation of the PSF at galaxy positions.

Aims. The goal of this paper, sometimes referred to as non-parametric PSF estimation, is to estimate a PSF at galaxy positions, starting from a set of noisy star image observations distributed over the focal plane. To accomplish this, we need our model first to precisely capture the PSF field variations over the Field of View (FoV), and then to recover the PSF at the selected positions.

Methods. This paper proposes a new method, coined MCCD (Multi-CCD PSF modelling), that simultaneously creates a PSF field model over the whole of the instrument's focal plane. This allows us to capture global as well as local PSF features through the use of two complementary models which enforce different spatial constraints. Most existing non-parametric models build one model per Charge-Coupled Device (CCD), which can lead to difficulties in capturing global ellipticity patterns.

Results. We first test our method on a realistic simulated dataset comparing it with two state-of-the-art PSF modelling methods (PSFEx and RCA). We outperform both of them with our proposed method. Then we contrast our approach with PSFEx on real data from CFIS (Canada-France Imaging Survey) that uses the CFHT (Canada-France-Hawaii Telescope). We show that our PSF model is less noisy and achieves a ~ 22% gain on pixel Root Mean Squared Error (RMSE) with respect to PSFEx.

Conclusions. We present, and share the code of, a new PSF modelling algorithm that models the PSF field over the whole focal plane and is mature enough to handle real data.

Reference: Tobias Liaudat, Jérôme Bonnin, Jean-Luc Starck, Morgan A. Schmitz, Axel Guinot, Martin Kilbinger and Stephen D. J. Gwyn, “Multi-CCD Point Spread Function Modelling”, submitted 2020.

arXiv, code.
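As a hedged illustration of the kind of diagnostics behind the quoted ~22% pixel-RMSE gain, the sketch below gives generic numpy definitions of the pixel RMSE and of moment-based ellipticity and size for a postage stamp. Function names are hypothetical, and the paper's validation uses more careful estimators (e.g. windowed adaptive moments) rather than these unweighted quantities.

```python
import numpy as np

def pixel_rmse(stars, models):
    """Pixel RMSE between observed star stamps and PSF-model predictions.
    Generic definition; the exact masking and normalisation choices of the
    paper are not reproduced here."""
    return np.sqrt(np.mean((np.asarray(stars) - np.asarray(models)) ** 2))

def ellipticity_and_size(stamp):
    """Unweighted quadrupole moments of a postage stamp, giving the
    (e1, e2) ellipticity and R^2 = Qxx + Qyy size often used in PSF-model
    validation. Real pipelines use adaptive (weighted) moments."""
    stamp = np.asarray(stamp, dtype=float)
    y, x = np.mgrid[:stamp.shape[0], :stamp.shape[1]]
    flux = stamp.sum()
    xc, yc = (x * stamp).sum() / flux, (y * stamp).sum() / flux
    qxx = ((x - xc) ** 2 * stamp).sum() / flux
    qyy = ((y - yc) ** 2 * stamp).sum() / flux
    qxy = ((x - xc) * (y - yc) * stamp).sum() / flux
    e1 = (qxx - qyy) / (qxx + qyy)
    e2 = 2 * qxy / (qxx + qyy)
    return e1, e2, qxx + qyy
```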