21 cm intensity mapping has emerged as a promising technique for mapping the large-scale structure of the Universe. However, the presence of foregrounds with amplitudes orders of magnitude larger than the cosmological signal constitutes a critical challenge. In this work, we test the sparsity-based algorithm Generalised Morphological Component Analysis (GMCA) as a blind component separation technique for this class of experiments. We evaluate the performance of GMCA on realistic full-sky mock temperature maps that include, besides astrophysical foregrounds, a fraction of the polarized part of the signal leaked into the unpolarized one; this component, usually referred to as polarization leakage, is a particularly troublesome foreground to subtract. To our knowledge, this is the first time the removal of such a component has been performed with no prior assumption. We assess the success of the cleaning by comparing the true and recovered power spectra in the angular and radial directions. In the best-case scenario considered, GMCA recovers the input angular (radial) power spectrum with an average bias of ∼5% for ℓ > 25 (20−30% for k_∥ ≳ 0.02 h/Mpc) in the presence of polarization leakage. Our results remain robust even when up to 40% of the frequency channels are missing, mimicking Radio Frequency Interference (RFI) flagging of the data. Having quantified the notable effect of polarization leakage on our results, we advocate the use of more realistic simulations when testing 21 cm intensity mapping capabilities.
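The core of GMCA is an alternating scheme that models the multichannel data as a mixture of sparse sources. The toy sketch below illustrates that loop only (the threshold, initialisation, and source count are generic placeholders; the paper's pipeline, with its wavelet transforms and decreasing threshold strategy, is not reproduced here):

```python
import numpy as np

def gmca_sketch(X, n_sources, n_iter=50, thresh=0.1, seed=0):
    """Toy sketch of the GMCA alternating scheme (not the paper's code).

    Models the data as X = A S + noise, with A the mixing matrix and S
    sparse sources, and alternates between:
      (i) a sparse source update: least squares followed by soft thresholding,
      (ii) a least-squares update of the mixing matrix A.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)  # unit-norm columns fix the scale ambiguity
    soft = lambda Z: np.sign(Z) * np.maximum(np.abs(Z) - thresh, 0.0)
    for _ in range(n_iter):
        S = soft(np.linalg.pinv(A) @ X)        # sparse source update
        A = X @ np.linalg.pinv(S)              # mixing-matrix update
        A /= np.linalg.norm(A, axis=0) + 1e-12
    S = soft(np.linalg.pinv(A) @ X)            # final source estimate
    return A, S
```

In the 21 cm application, the rows of X are frequency channels and the recovered sources separate smooth foregrounds from the cosmological signal; the blindness of the method is what allows it to absorb unmodelled components such as the polarization leakage.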
Supervised dictionary learning has gained much interest in the past decade and has shown significant performance improvements in image classification. In general, however, supervised learning needs a large number of labelled samples per class to achieve an acceptable result. In order to deal with databases that have only a few labelled samples per class, semi-supervised learning, which also exploits unlabelled samples in the training phase, is used. Indeed, unlabelled samples can help to regularize the learning model, yielding an improvement in classification accuracy. In this paper, we propose a new semi-supervised dictionary learning method based on two pillars: on the one hand, we enforce preservation of the manifold structure of the original data in the sparse code space using Locally Linear Embedding (LLE), which can be considered a regularization of the sparse codes; on the other hand, we train a semi-supervised classifier in the sparse code space. We show that our approach provides an improvement over state-of-the-art semi-supervised dictionary learning methods.
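The LLE-based regularisation rests on reconstruction weights that encode the local manifold geometry. A minimal sketch of computing those weights (a generic illustration, not the paper's implementation; the neighbour count and regularisation constant are arbitrary) is:

```python
import numpy as np

def lle_weights(X, n_neighbors=2, reg=1e-3):
    """Locally Linear Embedding reconstruction weights (illustrative sketch).

    For each sample x_i, finds its nearest neighbours and the weights,
    summing to one, that best reconstruct x_i from those neighbours.
    Preserving these weights is what the manifold regulariser asks of
    the sparse codes.
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        idx = np.argsort(dists)[1:n_neighbors + 1]  # skip the sample itself
        Z = X[idx] - X[i]                           # neighbours centred on x_i
        C = Z @ Z.T                                 # local Gram matrix
        C += reg * (np.trace(C) + 1e-12) * np.eye(len(idx))  # stabilise
        w = np.linalg.solve(C, np.ones(len(idx)))
        W[i, idx] = w / w.sum()                     # weights sum to one
    return W
```

A sparse-code regulariser can then penalise ||s_i − Σ_j W_ij s_j||² so that the codes inherit the neighbourhood structure of the input data.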
The deconvolution of large survey images containing millions of galaxies requires a new generation of methods that can take a space-variant point spread function into account. These methods must also be accurate and fast. We investigate how deep learning might be used to perform this task. We employed a U-Net deep neural network architecture to learn parameters adapted for galaxy image processing in a supervised setting, and studied two deconvolution strategies. The first approach is a post-processing of a simple Tikhonov deconvolution with a closed-form solution, and the second is an iterative deconvolution framework based on the alternating direction method of multipliers (ADMM). Our numerical results, based on GREAT3 simulations with realistic galaxy images and point spread functions, show that both approaches outperform standard techniques based on convex optimization, whether assessed in terms of galaxy image reconstruction or shape recovery. The approach based on Tikhonov deconvolution leads to the most accurate results, except for ellipticity errors at high signal-to-noise ratio, where the ADMM approach performs slightly better. Considering that the Tikhonov approach is also more computationally efficient when processing a large number of galaxies, we recommend it in this scenario.
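The Tikhonov step that the first strategy starts from can be written in closed form in the Fourier domain. Below is a minimal sketch, assuming a scalar regularisation weight `lam` and a centred PSF of the same shape as the image (the paper's actual regularisation operator and the U-Net post-processing are not reproduced):

```python
import numpy as np

def tikhonov_deconvolve(image, psf, lam=1e-2):
    """Closed-form Tikhonov deconvolution (illustrative sketch).

    Solves argmin_x ||h * x - y||^2 + lam * ||x||^2, whose Fourier-domain
    solution is X = conj(H) * Y / (|H|^2 + lam). The PSF is assumed to be
    centred and of the same shape as the image.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))  # move the PSF centre to the origin
    Y = np.fft.fft2(image)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

A larger `lam` suppresses noise amplification at frequencies where |H| is small, at the price of extra smoothing; in the two-step strategy, the network is what removes the residual artefacts of this linear inverse.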
In the spirit of reproducible research, the codes will be made freely available on the CosmoStat website (http://www.cosmostat.org). The testing datasets will also be provided to repeat the experiments performed in this paper.
Euclid: Reconstruction of weak-lensing mass maps for non-Gaussianity studies
Authors: S. Pires, V. Vandenbussche, V. Kansal, R. Bender, L. Blot, D. Bonino, A. Boucaud, J. Brinchmann, V. Capobianco, J. Carretero, M. Castellano, S. Cavuoti, R. Clédassou, G. Congedo, L. Conversi, L. Corcione, F. Dubath, P. Fosalba, M. Frailis, E. Franceschi, M. Fumana, F. Grupp, F. Hormuth, S. Kermiche, M. Knabenhans, R. Kohley, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, E. Maiorano, O. Marggraf, R. Massey, G. Meylan, C. Padilla, S. Paltani, F. Pasian, M. Poncet, D. Potter, F. Raison, J. Rhodes, M. Roncarelli, R. Saglia, P. Schneider, A. Secroun, S. Serrano, J. Stadel, P. Tallada Crespí, I. Tereno, R. Toledo-Moreo, Y. Wang
Journal: Astronomy and Astrophysics
Weak lensing, namely the deflection of light by matter along the line of sight, has proven to be an efficient method for constraining models of structure formation and revealing the nature of dark energy. So far, most weak lensing studies have focused on the shear field, which can be measured directly from the ellipticities of background galaxies. However, within the context of forthcoming full-sky weak lensing surveys such as Euclid, convergence maps (mass maps) offer an important advantage over shear fields in terms of cosmological exploitation: while carrying the same information, the lensing signal is more compressed in convergence maps than in the shear field, simplifying otherwise computationally expensive analyses such as non-Gaussianity studies. The inversion of the non-local shear field, though, requires accurate control of systematic effects due to holes in the data field, field borders, noise, and the fact that the shear is not a direct observable (reduced shear). In this paper, we present the two mass inversion methods that are being included in the official Euclid data processing pipeline: the standard Kaiser & Squires (KS) method and a new mass inversion method (KS+) that aims to reduce the information loss during the mass inversion. This new method is based on the KS methodology and includes corrections for mass-mapping systematic effects. The results of the KS+ method are compared to the original implementation of the KS method in its simplest form, using the Euclid Flagship mock galaxy catalogue. In particular, we estimate the quality of the reconstruction by comparing the two-point correlation functions and the third- and fourth-order moments obtained from shear and convergence maps, and we analyse each systematic effect both independently and simultaneously. We show that the KS+ method substantially reduces the errors on the two-point correlation function and on the moments compared to the KS method.
In particular, we show that the errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5, while the errors on the third- and fourth-order moments are reduced by factors of about 2 and 10, respectively.
Deep learning is starting to offer promising results for reconstruction in Magnetic Resonance Imaging (MRI). Many networks are being developed, but comparisons remain difficult because studies use different frameworks and datasets, and the networks are not always properly re-trained. The recent release of fastMRI, a public dataset of raw k-space data, encouraged us to write a consistent benchmark of several deep neural networks for MR image reconstruction. This paper presents the results of this benchmark, allowing the networks to be compared, and links to the open-source Keras implementations of all of them. The main finding of this benchmark is that it is beneficial to perform more iterations between the image and the measurement spaces than to use a deeper per-space network.
Reference: Z. Ramzi, P. Ciuciu and J.-L. Starck. “Benchmarking MRI reconstruction neural networks on large public datasets”, Applied Sciences, 10, 1816, 2020. doi:10.3390/app10051816
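The finding about iterating between spaces can be illustrated with the skeleton that cross-domain reconstruction networks share. The sketch below is a generic illustration, not any of the benchmarked networks: the `denoise` callable (here a plain identity) stands in for the learned image-space sub-network, and each iteration re-imposes the acquired measurements in k-space:

```python
import numpy as np

def cross_domain_recon(kspace, mask, denoise=lambda x: x, n_iter=10):
    """Skeleton of an unrolled cross-domain MRI reconstruction (sketch).

    Each iteration (i) re-imposes the acquired k-space samples where the
    boolean sampling mask is True (data consistency) and (ii) applies an
    image-space correction, here a pluggable `denoise` callable standing
    in for a learned sub-network.
    """
    image = np.fft.ifft2(kspace * mask)  # zero-filled starting point
    for _ in range(n_iter):
        k = np.fft.fft2(image)
        k[mask] = kspace[mask]           # data consistency in measurement space
        image = denoise(np.fft.ifft2(k)) # correction in image space
    return image
```

In this skeleton, "more iterations between spaces" means increasing `n_iter` while keeping each `denoise` step shallow, which is the trade-off the benchmark found beneficial.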
Date: March 18th, 2020, 10:30 am
Speaker: Florent Mertens (LERMA / Kapteyn Astronomical Institute)
Title: The challenges of observing the Epoch of Reionization and Cosmic Dawn
Low-frequency observations of the redshifted 21-cm line promise to open a new window onto the first billion years of cosmic history, allowing us to directly study the astrophysical processes occurring during the Epoch of Reionization (EoR) and the Cosmic Dawn (CD). This exciting goal is challenged by the difficulty of extracting the feeble 21-cm signal, which is buried under astrophysical foregrounds orders of magnitude brighter and contaminated by numerous instrumental systematics. Several experiments, such as LOFAR, MWA, HERA, and NenuFAR, are currently underway, aiming at a statistical detection of the 21-cm brightness temperature fluctuations from the EoR and CD. While no detection is yet in sight, considerable progress has been made recently. In this talk, I will review the many challenges faced by these experiments and share the latest developments of the LOFAR Epoch of Reionization and NenuFAR Cosmic Dawn key science projects.
DeepMass: The first Deep Learning reconstruction of dark matter maps from weak lensing observational data (DES SV weak lensing data)
This is the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network (CNN) with a U-Net-based architecture on over 3.6 x 10^5 simulated data realisations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. Our DeepMass method is substantially more accurate than existing mass-mapping methods. On a validation set of 8000 simulated DES SV data realisations, the DeepMass method improves the mean squared error (MSE) by 11 per cent compared to Wiener filtering with a fixed power spectrum. With N-body simulated MICE mock data, we show that Wiener filtering with the optimal known power spectrum still gives a worse MSE than our generalised method, which takes no input cosmological parameters; we show that the improvement is driven by the non-linear structures in the convergence. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST.
Reference 1: N. Jeffrey, F. Lanusse, O. Lahav, J.-L. Starck, "Learning dark matter map reconstructions from DES SV weak lensing data", Monthly Notices of the Royal Astronomical Society, in press, 2019.
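The Wiener-filter baseline that DeepMass is compared against is, for a known diagonal signal and noise power per Fourier mode, a one-line operation. A minimal sketch (assuming the powers are supplied as arrays matching the map's FFT grid, with P_s + P_n > 0 everywhere; the DES analysis itself works with masked shear data rather than a clean map):

```python
import numpy as np

def wiener_filter_map(noisy_map, signal_power, noise_power):
    """Diagonal Fourier-space Wiener filter (illustrative sketch).

    Each Fourier mode of the noisy map is scaled by P_s / (P_s + P_n),
    the minimum-MSE linear estimate for Gaussian signal and noise.
    Assumes signal_power + noise_power is nonzero at every mode.
    """
    W = signal_power / (signal_power + noise_power)
    return np.real(np.fft.ifft2(W * np.fft.fft2(noisy_map)))
```

DeepMass's point is precisely that this linear, power-spectrum-only estimate discards the non-Gaussian information in the convergence field that a learned non-linear mapping can exploit.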