## Blind separation of a large number of sparse sources

 Authors: C. Kervazo, J. Bobin, C. Chenot Journal: Signal Processing Year: 2018 Download: Paper

## Abstract

Blind Source Separation (BSS) is one of the major tools used to analyze multispectral data, with applications ranging from astronomical to biomedical signal processing. Nevertheless, most BSS methods fail when the number of sources becomes large, typically exceeding a few tens. Since the ability to estimate a large number of sources is paramount in a very wide range of applications, we introduce a new algorithm, coined block-Generalized Morphological Component Analysis (bGMCA), to specifically tackle sparse BSS problems in which a large number of sources needs to be estimated. Since sparse BSS is by nature a challenging nonconvex inverse problem, the role played by the algorithmic strategy is central, especially when many sources have to be estimated. For that purpose, the bGMCA algorithm builds upon block-coordinate descent with intermediate-size blocks. Numerical experiments show the robustness of the bGMCA algorithm when the sources are numerous. Comparisons have been carried out on realistic simulations of spectroscopic data.
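The block-coordinate strategy at the heart of bGMCA can be illustrated with a toy sketch: alternate sparsity-promoting least-squares updates of the sources and least-squares updates of the mixing matrix, restricted at each iteration to a small block of sources. The function names, the soft-threshold penalty, and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator promoting sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def block_sparse_bss(X, n_sources, block_size=2, n_iter=100, lam=0.05, seed=0):
    """Toy GMCA-style sparse BSS with block-coordinate updates.

    At each iteration, only a random block of sources (and the matching
    mixing columns) is updated via thresholded least squares, mimicking
    the intermediate-block strategy described in the abstract.
    """
    rng = np.random.default_rng(seed)
    m, t = X.shape
    A = rng.standard_normal((m, n_sources))
    A /= np.linalg.norm(A, axis=0)
    S = np.zeros((n_sources, t))
    for _ in range(n_iter):
        blk = rng.choice(n_sources, size=min(block_size, n_sources), replace=False)
        # Residual seen by the current block of sources
        R = X - np.delete(A, blk, axis=1) @ np.delete(S, blk, axis=0)
        # Sparse update of the block's sources: least squares + soft threshold
        Sb = np.linalg.lstsq(A[:, blk], R, rcond=None)[0]
        S[blk] = soft_threshold(Sb, lam)
        # Least-squares update of the block's mixing columns, renormalized
        nz = np.linalg.norm(S[blk], axis=1) > 0
        if nz.any():
            Ab = np.linalg.lstsq(S[blk][nz].T, R.T, rcond=None)[0].T
            Ab /= np.maximum(np.linalg.norm(Ab, axis=0), 1e-12)
            A[:, blk[nz]] = Ab
    return A, S
```

This is only a sketch of the alternating scheme; the actual bGMCA algorithm also manages decreasing thresholds and the choice of block size, which the paper shows to be critical when the sources are numerous.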

## Abstract

Non-linear bias measurements require a great level of control of potential systematic effects in galaxy redshift surveys. Our goal is to demonstrate the viability of using Counts-in-Cells (CiC), a statistical measure of the galaxy distribution, as a competitive method to determine linear and higher-order galaxy bias and assess clustering systematics. We measure the galaxy bias by comparing the first four moments of the galaxy density distribution with those of the dark matter distribution. We use data from the MICE simulation to evaluate the performance of this method, and subsequently perform measurements on the public Science Verification (SV) data from the Dark Energy Survey (DES). We find that the linear bias obtained with CiC is consistent with measurements of the bias performed using galaxy-galaxy clustering, galaxy-galaxy lensing, CMB lensing, and shear+clustering measurements. Furthermore, we compute the projected (2D) non-linear bias using the expansion $\delta_{g} = \sum_{k=0}^{3} (b_{k}/k!) \delta^{k}$, finding a non-zero value for $b_2$ at the $3\sigma$ level. We also check a non-local bias model and show that the linear bias measurements are robust to the addition of new parameters. We compare our 2D results to the 3D prediction and find compatibility in the large-scale regime ($>30$ Mpc $h^{-1}$).
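The cubic local bias expansion quoted above can be written out numerically. The sketch below applies $\delta_{g} = \sum_{k=0}^{3} (b_{k}/k!) \delta^{k}$ to a toy Gaussian matter field; the bias values are arbitrary examples, not DES measurements.

```python
import numpy as np
from math import factorial

def biased_field(delta, b):
    """Apply the local bias expansion delta_g = sum_k (b_k/k!) delta^k."""
    return sum(b[k] / factorial(k) * delta**k for k in range(len(b)))

rng = np.random.default_rng(1)
delta = rng.normal(0.0, 0.3, size=100_000)   # toy Gaussian matter density contrast
b = [0.0, 1.4, 0.5, 0.0]                     # example b0..b3 (b1 = linear bias)
delta_g = biased_field(delta, b)

# For small fluctuations, var(delta_g)/var(delta) is close to b1^2,
# with a small correction from the quadratic (b2) term
print(np.var(delta_g) / np.var(delta))
```

Comparing moments of `delta_g` and `delta`, as in the CiC approach, then lets one solve for the bias coefficients.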

## The C-Band All-Sky Survey (C-BASS): Design and capabilities

 Authors: M.E. Jones, A.C. Taylor, M. Aich et al. Journal: MNRAS Year: 2018 Download: ADS | arXiv

## Abstract

The C-Band All-Sky Survey (C-BASS) is an all-sky full-polarization survey at a frequency of 5 GHz, designed to provide complementary data to the all-sky surveys of WMAP and Planck, and future CMB B-mode polarization imaging surveys. The observing frequency has been chosen to provide a signal that is dominated by Galactic synchrotron emission, but suffers little from Faraday rotation, so that the measured polarization directions provide a good template for higher frequency observations, and carry direct information about the Galactic magnetic field. Telescopes in both northern and southern hemispheres with matched optical performance are used to provide all-sky coverage from a ground-based experiment. A continuous-comparison radiometer and a correlation polarimeter on each telescope provide stable imaging properties such that all angular scales from the instrument resolution of 45 arcmin up to full sky are accurately measured. The northern instrument has completed its survey and the southern instrument has started observing. We expect that C-BASS data will significantly improve the component separation analysis of Planck and other CMB data, and will provide important constraints on the properties of anomalous Galactic dust and the Galactic magnetic field.

## Abstract

In this manuscript of the habilitation à diriger des recherches (HDR), the author presents some of his work over the last ten years. The main topic of this thesis is cosmic shear, the distortion of images of distant galaxies due to weak gravitational lensing by the large-scale structure in the Universe. Cosmic shear has become a powerful probe into the nature of dark matter and the origin of the current accelerated expansion of the Universe. Over the last years, cosmic shear has evolved into a reliable and robust cosmological probe, providing measurements of the expansion history of the Universe and the growth of its structure.
I review the principles of weak gravitational lensing and show how cosmic shear is interpreted in a cosmological context. Then I give an overview of weak-lensing measurements and present observational results from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), as well as their implications for cosmology. I conclude with an outlook on the various future surveys and missions for which cosmic shear is one of the main science drivers, and discuss promising new weak-lensing techniques for future cosmological observations.

## Abstract

We present a new method to estimate shear measurement bias in image simulations that significantly improves its precision with respect to state-of-the-art methods. The method is based on measuring the shear response of individual images. We generate sheared versions of the same image to measure how the shape measurement changes with the shear, so that we obtain a shear response for each original image, as well as its additive bias. Using exactly the same noise realization for each sheared version allows us to obtain an exact estimate of its shear response. The shear bias of a sample of galaxies is then estimated from the measured averages of the shear responses and the individual additive biases. This method improves in precision on previous methods because it is not affected by shape noise. As a consequence, it does not require shape-noise cancellation for a precise estimation of shear bias. The method can easily be applied to many tasks, such as shear measurement validation and calibration, reducing by a few orders of magnitude the number of simulated images needed to reach the same precision requirements.
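The key idea, that reusing the same noise realization in each sheared version cancels shape noise in the response, can be shown with a toy measurement model. The linear `measure` function and the bias values below are hypothetical stand-ins for a real shape estimator, not the paper's pipeline.

```python
import numpy as np

def measure(g, noise, m=0.02, c=1e-3):
    """Toy shape measurement with multiplicative bias m and additive bias c."""
    return (1.0 + m) * g + c + noise

rng = np.random.default_rng(2)
n_gal, dg = 1000, 0.01
noise = rng.normal(0.0, 0.2, size=n_gal)   # one noise realization per galaxy

# The same noise realization enters both sheared versions of each image,
# so it cancels exactly in the finite-difference response below.
e_plus = measure(+dg, noise)
e_minus = measure(-dg, noise)

response = (e_plus - e_minus) / (2.0 * dg)   # per-object shear response
additive = 0.5 * (e_plus + e_minus)          # per-object additive term

m_hat = response.mean() - 1.0   # multiplicative bias: shape-noise free
c_hat = additive.mean()         # additive bias: still carries measurement noise
print(m_hat, c_hat)
```

In this sketch the recovered multiplicative bias is exact for every galaxy, illustrating why no shape-noise cancellation scheme is needed for the response term.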

## Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics

 Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi Journal: A&A Year: 2018 Download: ADS | arXiv

## Abstract

General relativity (GR) has been well tested up to solar system scales, but it is much less certain that standard gravity remains an accurate description on the largest, that is, cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including by that of the recent direct gravitational wave detections. Degeneracies among the standard model (ΛCDM) and modified gravity (MG) models, as well as among different MG parameters, must be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics in the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We have tested our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We have explored a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic ΛCDM in terms of the 3D power spectrum of matter density fluctuations. We find that depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and ΛCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.

## Model-independent reconstruction of the linear anisotropic stress

 Authors: Ana Marta Pinho, Santiago Casas, Luca Amendola Journal: Accepted for JCAP Year: 05/2018 Download: Inspire | arXiv

## Abstract

In this work, we use recent data on the Hubble expansion rate $H(z)$, the quantity $f\sigma_8(z)$ from redshift-space distortions and the statistic $E_g$ from clustering and lensing observables to constrain in a model-independent way the linear anisotropic stress parameter $\eta$. This estimate is free of assumptions about initial conditions, bias, the abundance of dark matter and the background expansion. We denote this observable estimator as $\eta_{\rm obs}$. If $\eta_{\rm obs}$ turns out to be different from unity, it would imply either a modification of gravity or a non-perfect-fluid form of dark energy clustering at sub-horizon scales. Using three different methods to reconstruct the underlying model from data, we report the value of $\eta_{\rm obs}$ at three redshift values, $z=0.29, 0.58, 0.86$. Using the method of polynomial regression, we find $\eta_{\rm obs}=0.57\pm1.05$, $\eta_{\rm obs}=0.48\pm0.96$, and $\eta_{\rm obs}=0.11\pm3.21$, respectively. Assuming a constant $\eta_{\rm obs}$ in this range, we find $\eta_{\rm obs}=0.49\pm0.69$. We consider this method as our fiducial result, for reasons clarified in the text. The other two methods give, for a constant anisotropic stress, $\eta_{\rm obs}=0.15\pm0.27$ (binning) and $\eta_{\rm obs}=0.53\pm0.19$ (Gaussian process). We find that all three estimates are compatible with each other within their $1\sigma$ error bars. While the polynomial regression method is compatible with standard gravity, the other two methods are in tension with it.
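The polynomial-regression reconstruction step can be sketched as follows: fit a low-order polynomial to noisy data points and propagate the fit covariance to the three redshifts quoted above. The mock data values and the choice of a quadratic are assumptions for illustration; the actual estimator of the anisotropic stress combines several such reconstructed quantities.

```python
import numpy as np

# Mock measurements standing in for an observable such as f*sigma_8(z);
# the values and errors below are made up for illustration.
z = np.array([0.1, 0.2, 0.35, 0.5, 0.6, 0.75, 0.9, 1.0])
y = np.array([0.48, 0.47, 0.45, 0.44, 0.43, 0.41, 0.40, 0.38])
sigma = np.full_like(y, 0.03)

# Weighted quadratic fit with covariance of the coefficients
coeffs, cov = np.polyfit(z, y, deg=2, w=1.0 / sigma, cov=True)

# Evaluate the fit and its 1-sigma uncertainty at the quoted redshifts
z_eval = np.array([0.29, 0.58, 0.86])
V = np.vander(z_eval, 3)                  # columns [z^2, z, 1], polyfit order
y_fit = V @ coeffs
y_err = np.sqrt(np.einsum('ij,jk,ik->i', V, cov, V))
print(y_fit, y_err)
```

The growth of `y_err` toward the edges of the data range mirrors why the reconstructed value at the highest redshift carries the largest error bar in the abstract.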

## Testing (modified) gravity with 3D and tomographic cosmic shear

 Authors: A. Spurio Mancini, R. Reischke, V. Pettorino, B.M. Schäfer, M. Zumalacárregui Journal: Submitted to MNRAS Year: 2018 Download: ADS | arXiv

## Abstract

Cosmic shear, the weak gravitational lensing caused by the large-scale structure, is one of the primary probes to test gravity with current and future surveys. There are two main techniques to analyse a cosmic shear survey: a tomographic method, where correlations between the lensing signal in different redshift bins are used to recover redshift information, and a 3D approach, where the full redshift information is carried through the entire analysis. Here we compare the two methods by forecasting cosmological constraints for future surveys like Euclid. We extend the 3D formalism for the first time to theories beyond the standard model belonging to the Horndeski class. This includes the majority of universally coupled extensions to ΛCDM with one scalar degree of freedom in addition to the metric that are still in agreement with current observations. Given a fixed background, the evolution of linear perturbations in Horndeski gravity is described by a set of four functions of time only. We model their time evolution assuming proportionality to the dark energy density fraction and place Fisher matrix constraints on the proportionality coefficients. We find that a 3D analysis can constrain Horndeski theories better than a tomographic one, in particular with a decrease in the errors on the Horndeski parameters of the order of 20-30%. This paper shows for the first time a quantitative comparison on an equal footing between Fisher matrix forecasts for both a fully 3D and a tomographic analysis of cosmic shear surveys. The increased sensitivity of the 3D formalism comes from its ability to retain information on the source redshifts throughout the entire analysis.

## Summary

A new paper has been put on the arXiv, led by Alessio Spurio Mancini, PhD student of CosmoStat member Valeria Pettorino, in collaboration with R. Reischke, B.M. Schäfer (Heidelberg) and M. Zumalacárregui (Berkeley LBNL and Paris Saclay IPhT).
The authors investigate the performance of a 3D analysis of cosmic shear measurements versus a tomographic analysis as a probe of Horndeski theories of modified gravity, setting constraints by means of a Fisher matrix analysis on the parameters that describe the evolution of linear perturbations, using the specifications of a future Euclid-like experiment. Constraints are shown on both the modified gravity parameters and a set of standard cosmological parameters, including the sum of neutrino masses. The analysis is restricted to angular modes ℓ < 1000 and k < 1 h/Mpc to avoid the deeply non-linear regime of structure growth. The main results of the paper are listed below.

• The signal-to-noise ratios of the 3D and tomographic analyses are very similar.
• 3D cosmic shear provides tighter constraints than tomography for most cosmological parameters, with both methods showing very similar degeneracies.
• The gain of 3D versus tomography is particularly significant for the sum of the neutrino masses (a factor of 3). For the Horndeski parameters the gain is of the order of 20-30% in the errors.
• In Horndeski theories, the braiding and effective Newton coupling parameters ($\alpha_B$ and $\alpha_M$) are constrained better if the kineticity is higher.
• We investigated the impact of non-linear scales by introducing an artificial screening scale below which deviations from General Relativity are pushed to zero. The gain when including the non-linear signal calls for the development of analytic or semi-analytic prescriptions for the treatment of non-linear scales in ΛCDM and modified gravity.

## Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

 Authors: N. Jeffrey, F. B. Abdalla, O. Lahav, F. Lanusse, J.-L. Starck, et al. Year: 01/2018 Download: ADS | arXiv

## Abstract

Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement over the standard smoothed KS across a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved by 17% by GLIMPSE and by 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased by 18% by the Wiener filter and by 32% by GLIMPSE.