Radio Astronomical Images Restoration with Shape Constraint

 

Authors: F. NAMMOUR, M. A. SCHMITZ, F. M. NGOLÈ MBOULA, J.-L. STARCK, J. N. GIRARD
Journal: Proceedings of SPIE
Year: 2019
Download: DOI

Abstract

Weak gravitational lensing is a very promising probe for cosmology that relies on highly precise shape measurements. Several new instruments are being deployed and will allow for weak lensing studies on unprecedented scales and at new frequencies. In particular, some of these new instruments should enable radio weak lensing to flourish, especially the SKA with its many petabits per second of raw data. Great challenges therefore lie ahead, and processing methods should extract the highest possible precision while, ideally, remaining applicable to radio astronomy. At present, the two existing methods do not satisfy both conditions. In this paper, we present a new plug-and-play solution in which we add a shape constraint to deconvolution algorithms; the results show an improvement in shape measurements of at least 20%.
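To make the plug-and-play idea concrete, here is a minimal sketch of a deconvolution with an added shape penalty: proximal gradient descent on a data-fidelity term plus a quadratic penalty built from second-order image moments. The moment windows, the penalty form and all parameter values are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def moment_windows(shape):
    """Second-order moment windows (x^2, xy, y^2): a simple probe of the
    ellipticity content of an image. Illustrative choice only."""
    y, x = np.indices(shape).astype(float)
    x -= x.mean()
    y -= y.mean()
    return np.stack([x * x, x * y, y * y])

def deconvolve_shape_constrained(obs, psf, gamma=1e-3, lam=1e-2,
                                 step=1e-2, n_iter=200):
    """Proximal gradient descent on
    (1/2)||psf * x - obs||^2 + (gamma/2) sum_k <w_k, x>^2 + lam ||x||_1, x >= 0."""
    x = np.zeros_like(obs)
    psf_adj = psf[::-1, ::-1]                    # adjoint (flipped) kernel
    w = moment_windows(obs.shape)
    for _ in range(n_iter):
        resid = fftconvolve(x, psf, mode="same") - obs
        grad = fftconvolve(resid, psf_adj, mode="same")
        coeffs = np.tensordot(w, x, axes=2)      # <w_k, x> for each window
        grad += gamma * np.tensordot(coeffs, w, axes=1)
        x -= step * grad
        x = np.maximum(x - step * lam, 0.0)      # non-negative soft threshold
    return x
```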

Space test of the Equivalence Principle: first results of the MICROSCOPE mission

Authors: P. Touboul, G. Métris, M. Rodrigues, Y. André, Q. Baghi, J. Bergé, D. Boulanger, S. Bremer, R. Chhun, B. Christophe, V. Cipolla, T. Damour, P. Danto, H. Dittus, P. Fayet, B. Foulon, P.-Y. Guidotti, E. Hardy, P.-A. Huynh, C. Lämmerzahl, V. Lebat, F. Liorzou, M. List, I. Panet, S. Pires, B. Pouilloux, P. Prieur, S. Reynaud, B. Rievers, A. Robert, H. Selig, L. Serron, T. Sumner, P. Visser
Journal: Classical and Quantum Gravity
Year: 2019
Download: ADS | arXiv


Abstract

The Weak Equivalence Principle (WEP), stating that two bodies of different composition and/or mass fall at the same rate in a gravitational field (universality of free fall), is at the very foundation of General Relativity. The MICROSCOPE mission aims to test its validity to a precision of 10^-15, two orders of magnitude better than current on-ground tests, by using two masses of different compositions (titanium and platinum alloys) on a quasi-circular trajectory around the Earth. This is realised by measuring the accelerations inferred from the forces required to maintain the two masses exactly in the same orbit. Any significant difference between the measured accelerations, occurring at a defined frequency, would correspond to the detection of a violation of the WEP, or to the discovery of a tiny new type of force added to gravity. MICROSCOPE's first results show no hint of such a difference, expressed in terms of the Eötvös parameter δ = [-1 ± 9(stat) ± 9(syst)] x 10^-15 (both 1σ uncertainties) for a titanium and platinum pair of materials. This result was obtained from a session of 120 orbital revolutions, representing 7% of the data acquired over the whole mission. The quadratic combination of the 1σ uncertainties leads to a current limit on δ of about 1.3 x 10^-14.
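For concreteness, the quoted limit follows from adding the two 1σ uncertainties in quadrature (our reading of the numbers above):

```latex
\sigma_\delta = \sqrt{\sigma_{\rm stat}^2 + \sigma_{\rm syst}^2}
              = \sqrt{9^2 + 9^2} \times 10^{-15}
              \approx 1.3 \times 10^{-14}
```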


The impact of baryonic physics and massive neutrinos on weak lensing peak statistics

Authors: M. Fong, M. Choi, V. Catlett, B. Lee, A. Peel, R. Bowyer, L. J. King, I. G. McCarthy
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

We study the impact of baryonic processes and massive neutrinos on weak lensing peak statistics that can be used to constrain cosmological parameters. We use the BAHAMAS suite of cosmological simulations, which self-consistently include baryonic processes and the effect of massive neutrino free-streaming on the evolution of structure formation. We construct synthetic weak lensing catalogues by ray-tracing through light-cones, and use the aperture mass statistic for the analysis. The peaks detected on the maps reflect the cumulative signal from massive bound objects and general large-scale structure. We present the first study of weak lensing peaks in simulations that include both baryonic physics and massive neutrinos (summed neutrino mass Mν = 0.06, 0.12, 0.24, and 0.48 eV, assuming a normal hierarchy), so that the uncertainty due to physics beyond the gravity of dark matter can be factored into constraints on cosmological models. Assuming a fiducial model of baryonic physics, we also investigate the correlation between peaks and massive haloes over a range of summed neutrino mass values. Because higher neutrino mass suppresses the formation of massive structures in the Universe, the halo mass function and lensing peak counts are modified as a function of Mν. Over most of the S/N range, the impact of fiducial baryonic physics is greater (less) than that of neutrinos for the 0.06 and 0.12 (0.24 and 0.48) eV models. Both baryonic physics and massive neutrinos should be accounted for when deriving cosmological parameters from weak lensing observations.
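As a toy illustration of the peak statistic described above, the sketch below filters a convergence map with a compensated aperture and counts local maxima above a set of S/N thresholds. The filter shape and scale, the peak definition and the flat-sky setting are simplifying assumptions, not the paper's ray-tracing pipeline.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import fftconvolve

def aperture_mass_map(kappa, theta_ap=8):
    """Filter a convergence map with a compensated (zero-mean) aperture."""
    half = 2 * theta_ap
    yy, xx = np.indices((2 * half + 1, 2 * half + 1)) - half
    r = np.hypot(xx, yy) / theta_ap
    u = (1.0 - r**2) * np.exp(-0.5 * r**2)    # Mexican-hat-like stand-in
    u -= u.mean()                             # enforce compensation
    return fftconvolve(kappa, u, mode="same")

def peak_counts(snr_map, thresholds):
    """Count local maxima of the S/N map above each threshold."""
    is_peak = snr_map == maximum_filter(snr_map, size=3)
    return [int(np.sum(is_peak & (snr_map > t))) for t in thresholds]

# e.g. counts = peak_counts(aperture_mass_map(kappa) / sigma_noise, [2, 3, 4, 5])
```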

Euclid: Non-parametric point spread function field recovery through interpolation on a Graph Laplacian

 

Authors: M.A. Schmitz, J.-L. Starck, F. Ngole Mboula, N. Auricchio, J. Brinchmann, R.I. Vito Capobianco, R. Clédassou, L. Conversi, L. Corcione, N. Fourmanoit, M. Frailis, B. Garilli, F. Hormuth, D. Hu, H. Israel, S. Kermiche, T. D. Kitching, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, O. Mansutti, O. Marggraf, R.J. Massey, F. Pasian, V. Pettorino, F. Raison, J.D. Rhodes, M. Roncarelli, R.P. Saglia, P. Schneider, S. Serrano, A.N. Taylor, R. Toledo-Moreo, L. Valenziano, C. Vuerli, J. Zoubian
Journal: submitted to A&A
Year: 2019
Download: arXiv

Abstract

Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims. This paper's contributions are twofold. First, we take steps toward a non-parametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Second, we study the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods. We use the recently proposed Resolved Components Analysis approach to deal with the undersampling of observed star images. We then estimate the PSF at the positions of galaxies by interpolation on a set of graphs that contain information relative to its spatial variations. We compare our approach to PSFEx, then quantify the impact of PSF recovery errors on galaxy shape measurements through image simulations.
Results. Our approach yields an improvement over PSFEx in terms of both the PSF model and the resulting galaxy shape errors, though it is at present not sufficient to reach the required Euclid accuracy. We also find that different shape measurement approaches can react differently to the same PSF modelling errors.
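To illustrate the interpolation step, here is a generic graph-Laplacian (harmonic) interpolation of a scalar PSF feature from star positions to galaxy positions. The Gaussian edge weights and the harmonic-extension solve are textbook choices assumed for this sketch; the paper's actual graph construction differs in detail.

```python
import numpy as np
from scipy.spatial.distance import cdist

def graph_laplacian(pos, sigma=1.0):
    """Dense Gaussian-weighted graph Laplacian L = D - W over node positions."""
    w = np.exp(-cdist(pos, pos) ** 2 / (2.0 * sigma**2))
    np.fill_diagonal(w, 0.0)
    return np.diag(w.sum(axis=1)) - w

def harmonic_interpolate(star_pos, star_vals, gal_pos, sigma=1.0):
    """Extend a scalar feature known at the stars to the galaxies by solving
    the harmonic extension L_gg u_g = -L_gs u_s on the joint graph."""
    n_star = len(star_pos)
    lap = graph_laplacian(np.vstack([star_pos, gal_pos]), sigma)
    l_gg = lap[n_star:, n_star:]
    l_gs = lap[n_star:, :n_star]
    return np.linalg.solve(l_gg, -l_gs @ star_vals)
```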

Euclid preparation III. Galaxy cluster detection in the wide photometric survey, performance and algorithm selection

 

Authors: Euclid Collaboration, R. Adam, ..., S. Farrens, et al.
Journal: A&A
Year: 2019
Download: ADS | arXiv


Abstract

Galaxy cluster counts in bins of mass and redshift have been shown to be a competitive probe for testing cosmological models. This method requires an efficient blind detection of clusters from surveys with a well-known selection function and robust mass estimates. The Euclid wide survey will cover 15,000 deg² of the sky in the optical and near-infrared bands, down to magnitude 24 in the H-band. The resulting data will make it possible to detect a large number of galaxy clusters spanning a wide range of masses up to redshift ∼2. This paper presents the final results of the Euclid Cluster Finder Challenge (CFC). The objective of these challenges was to select the cluster detection algorithms that best meet the requirements of the Euclid mission. The final CFC included six independent detection algorithms based on different techniques, such as photometric redshift tomography, optimal filtering, hierarchical approaches, wavelets, and friends-of-friends algorithms. These algorithms were blindly applied to a mock galaxy catalog with representative Euclid-like properties. The relative performance of the algorithms was assessed by matching the resulting detections to known clusters in the simulations. Several matching procedures were tested, making it possible to estimate the associated systematic effects on completeness to <3%. All the tested algorithms are very competitive in terms of performance, with three of them reaching >80% completeness for a mean purity of 80% down to masses of 10^14 M⊙ and up to redshift z=2. Based on these results, two algorithms were selected for implementation in the Euclid pipeline: the AMICO code, based on matched filtering, and the PZWav code, based on an adaptive wavelet approach.
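As a rough sketch of how completeness and purity can be assessed once detections are matched to true clusters: the greedy one-to-one matching, the flat-sky separation and the tolerances below are illustrative assumptions, not the challenge's actual matching procedures.

```python
import numpy as np

def completeness_purity(det, true, max_sep=0.5, max_dz=0.1):
    """Greedy one-to-one match (nearest pairs first) between a detection
    catalogue and the true clusters; flat-sky separation in degrees."""
    pairs = sorted(
        (np.hypot(d["ra"] - t["ra"], d["dec"] - t["dec"]), i, j)
        for i, d in enumerate(det)
        for j, t in enumerate(true)
        if abs(d["z"] - t["z"]) < max_dz)
    matched_det, matched_true = set(), set()
    for sep, i, j in pairs:
        if sep < max_sep and i not in matched_det and j not in matched_true:
            matched_det.add(i)
            matched_true.add(j)
    return len(matched_true) / len(true), len(matched_det) / len(det)
```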

Measuring Gravity at Cosmological Scales

Authors: Luca Amendola, Dario Bettoni, Ana Marta Pinho, Santiago Casas
Journal: Review Paper
Year: 2019
Download: Inspire | arXiv


Abstract

This paper is a pedagogical introduction to models of gravity and how to constrain them through cosmological observations. We focus on the Horndeski scalar-tensor theory and on the quantities that can be measured with a minimum of assumptions. Alternatives or extensions of General Relativity have been proposed ever since its early years. Because of Lovelock's theorem, modifying gravity in four dimensions typically means adding new degrees of freedom. The simplest way is to include a scalar field coupled to the curvature tensor terms. The most general way of doing so without incurring the Ostrogradski instability is the Horndeski Lagrangian and its extensions. Testing gravity therefore means, in its simplest terms, testing the Horndeski Lagrangian. Since local gravity experiments can always be evaded by assuming some screening mechanism, or that baryons are decoupled, or even that the effects of modified gravity are visible only at early times, we need to test gravity with cosmological observations in the late universe (large-scale structure) and in the early universe (cosmic microwave background). In this work we review the basic tools to test gravity at cosmological scales, focusing on model-independent measurements.
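For reference, the Horndeski Lagrangian mentioned above reads, in one common convention (with X = -∂_μφ ∂^μφ / 2; signs and notation vary across the literature):

```latex
\begin{aligned}
\mathcal{L}_2 &= G_2(\phi, X), \qquad \mathcal{L}_3 = -G_3(\phi, X)\,\Box\phi,\\
\mathcal{L}_4 &= G_4(\phi, X)\,R
  + G_{4,X}\left[(\Box\phi)^2 - (\nabla_\mu\nabla_\nu\phi)(\nabla^\mu\nabla^\nu\phi)\right],\\
\mathcal{L}_5 &= G_5(\phi, X)\,G_{\mu\nu}\nabla^\mu\nabla^\nu\phi
  - \frac{G_{5,X}}{6}\left[(\Box\phi)^3
  - 3\,\Box\phi\,(\nabla_\mu\nabla_\nu\phi)^2
  + 2\,(\nabla_\mu\nabla_\nu\phi)^3\right]
\end{aligned}
```

where powers of ∇∇φ denote fully contracted products of the second-derivative tensor.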


Future constraints on the gravitational slip with the mass profiles of galaxy clusters


Abstract

The gravitational slip parameter is an important discriminator between large classes of gravity theories at cosmological and astrophysical scales. In this work we use simulated galaxy cluster mass profiles, inferred both from strong+weak lensing analyses and from the dynamics of the cluster member galaxies, to reconstruct the gravitational slip parameter η and predict the accuracy with which it can be constrained with current and future galaxy cluster surveys. Performing a full-likelihood statistical analysis, we show that galaxy cluster observations can constrain η down to the percent level with only a few tens of clusters. We discuss the significance of possible systematics, and show that the cluster masses and the number of member galaxies used to reconstruct the dynamical mass profile have only a mild effect on the predicted constraints.
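For concreteness: writing the perturbed metric in the Newtonian gauge as ds² = -(1+2Ψ)dt² + a²(1-2Φ)dx² (one common convention; the symbols and the orientation of the ratio vary), the gravitational slip is

```latex
\eta(k, a) \equiv \frac{\Phi}{\Psi}
```

which equals unity in General Relativity in the absence of anisotropic stress, so a robust measurement of η ≠ 1 would point to physics beyond GR.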

Determining thermal dust emission from Planck HFI data using a sparse, parametric technique

 

Authors: M. O. Irfan, J. Bobin, M.-A. Miville-Deschênes, I. Grenier
Journal: A&A
Year: 2018
Download: ADS | arXiv


Abstract

Context: The Planck data releases have provided the community with sub-millimetre and radio observations of the full sky at unprecedented resolutions. We make use of the Planck 353, 545 and 857 GHz maps alongside the IRAS 3000 GHz map. These maps contain information on the cosmic microwave background (CMB), cosmic infrared background (CIB), extragalactic point sources and diffuse thermal dust emission. Aims: We aim to determine the modified black body (MBB) model parameters of thermal dust emission in total intensity and produce all-sky maps of pure thermal dust, having separated this Galactic component from the CMB and CIB. Methods: This separation is achieved using a new, sparsity-based, parametric method which we refer to as premise. The method comprises three main stages: 1) filtering the raw data to reduce the effect of the CIB on the MBB fit; 2) fitting an MBB model to the filtered data across super-pixels of various sizes determined by the algorithm itself; and 3) refining these super-pixel estimates into full-resolution maps of the MBB parameters. Results: We present our maps of MBB temperature, spectral index and optical depth at 5 arcmin resolution and compare our estimates to those of GNILC as well as the two-step MBB fit presented by the Planck collaboration in 2013. Conclusions: By exploiting sparsity we avoid the need for smoothing, enabling us to produce the first full-resolution MBB parameter maps from intensity measurements of thermal dust emission. We consider the premise parameter estimates to be competitive with the existing state-of-the-art solutions, outperforming these methods in low signal-to-noise regions as we account for the CIB without removing thermal dust emission through over-smoothing.
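As an illustration of stage 2, the snippet below fits the three MBB parameters (optical depth, spectral index, temperature) of a single pixel to the four bands listed above with a generic least-squares routine. The reference frequency, the arbitrary intensity units and the use of scipy's curve_fit are assumptions for this sketch, not the premise implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

H_OVER_K = 4.799e-11   # Planck constant over Boltzmann constant [K/Hz]
NU0 = 857e9            # reference frequency [Hz]; an illustrative choice

def mbb(nu, tau, beta, temp):
    """Modified black body tau * (nu/nu0)^beta * B_nu(temp), with the Planck
    law normalised at nu0 so the fitted amplitudes stay O(1)."""
    planck = (nu / NU0) ** 3 / np.expm1(H_OVER_K * nu / temp)
    return tau * (nu / NU0) ** beta * planck

# single-pixel fit to the four bands used in the paper (353-3000 GHz)
nu = np.array([353e9, 545e9, 857e9, 3000e9])
intensity = mbb(nu, 1e-4, 1.6, 19.0)         # stand-in for one pixel's data
(tau, beta, temp), _ = curve_fit(mbb, nu, intensity, p0=[1e-4, 1.5, 20.0])
```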

Blind separation of a large number of sparse sources

 

Authors: C. Kervazo, J. Bobin, C. Chenot
Journal: Signal Processing
Year: 2018
Download: Paper


Abstract

Blind Source Separation (BSS) is one of the major tools for analyzing multispectral data, with applications ranging from astronomical to biomedical signal processing. Nevertheless, most BSS methods fail when the number of sources becomes large, typically exceeding a few tens. Since the ability to estimate a large number of sources is paramount in a very wide range of applications, we introduce a new algorithm, coined block-Generalized Morphological Component Analysis (bGMCA), to specifically tackle sparse BSS problems when a large number of sources needs to be estimated. Since sparse BSS is by nature a challenging nonconvex inverse problem, the algorithmic strategy plays a central role, especially when many sources have to be estimated. For that purpose, the bGMCA algorithm builds upon block-coordinate descent with intermediate-size blocks. Numerical experiments show the robustness of the bGMCA algorithm when the sources are numerous. Comparisons have been carried out on realistic simulations of spectroscopic data.
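A toy sketch of the block-coordinate strategy described above: at each iteration a small block of sources is re-estimated by a soft-thresholded least-squares update, followed by a least-squares update of the matching mixing columns. The fixed threshold and random block selection are simplifying assumptions, not the released algorithm.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def bgmca(X, n_src, block=5, lam=0.1, n_iter=200, seed=0):
    """Sparse BSS X ~ A @ S via block-coordinate, GMCA-style updates."""
    rng = np.random.default_rng(seed)
    m, t = X.shape
    A = rng.standard_normal((m, n_src))
    A /= np.linalg.norm(A, axis=0)
    S = np.zeros((n_src, t))
    for _ in range(n_iter):
        idx = rng.choice(n_src, size=min(block, n_src), replace=False)
        # residual once the other sources are removed
        R = X - np.delete(A, idx, axis=1) @ np.delete(S, idx, axis=0)
        # sparse update of the selected sources: least squares + threshold
        S[idx] = soft_threshold(np.linalg.pinv(A[:, idx]) @ R, lam)
        # least-squares update of the matching mixing columns, renormalised
        cols = R @ np.linalg.pinv(S[idx])
        cols /= np.maximum(np.linalg.norm(cols, axis=0), 1e-12)
        A[:, idx] = cols
    return A, S
```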