Euclid preparation. V. Predicted yield of redshift 7 < z < 9 quasars from the wide survey


Authors: Euclid Collaboration, R. Barnett, ..., S. Farrens, M. Kilbinger, V. Pettorino, F. Sureau, et al.
Journal: Astronomy and Astrophysics
Year: 2019
DOI: 10.1051/0004-6361/201936427
Download:

ADS | arXiv

 


Abstract

We provide predictions of the yield of 7<z<9 quasars from the Euclid wide survey, updating the calculation presented in the Euclid Red Book in several ways. We account for revisions to the Euclid near-infrared filter wavelengths; we adopt steeper rates of decline of the quasar luminosity function (QLF; Φ) with redshift, Φ∝10k(z−6), k=−0.72, and a further steeper rate of decline, k=−0.92; we use better models of the contaminating populations (MLT dwarfs and compact early-type galaxies); and we use an improved Bayesian selection method, compared to the colour cuts used for the Red Book calculation, allowing the identification of fainter quasars, down to JAB∼23. Quasars at z>8 may be selected from Euclid OYJH photometry alone, but selection over the redshift interval 7<z<8 is greatly improved by the addition of z-band data from, e.g., Pan-STARRS and LSST. We calculate predicted quasar yields for the assumed values of the rate of decline of the QLF beyond z=6. For the case that the decline of the QLF accelerates beyond z=6, with k=−0.92, Euclid should nevertheless find over 100 quasars with 7.0<z<7.5, and ∼25 quasars beyond the current record of z=7.5, including ∼8 beyond z=8.0. The first Euclid quasars at z>7.5 should be found in the DR1 data release, expected in 2024. It will be possible to determine the bright-end slope of the QLF, 7<z<8, M1450<−25, using 8m class telescopes to confirm candidates, but follow-up with JWST or E-ELT will be required to measure the faint-end slope. Contamination of the candidate lists is predicted to be modest even at JAB∼23. The precision with which k can be determined over 7<z<8 depends on the value of k, but assuming k=−0.72 it can be measured to a 1 sigma uncertainty of 0.07.
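The abstract parameterizes the decline of the QLF as Φ ∝ 10^(k(z−6)). A minimal sketch of what this factor implies for the relative space density of quasars beyond z = 6 (the function name is ours, not from the paper):

```python
# Decline of the quasar luminosity function (QLF) beyond z = 6,
# parameterized as Phi(z) proportional to 10^(k (z - 6)), as in the abstract.

def qlf_decline_factor(z, k):
    """Relative QLF normalisation at redshift z with respect to z = 6."""
    return 10.0 ** (k * (z - 6.0))

# The two rates of decline considered in the paper, evaluated at z = 8:
f_nominal = qlf_decline_factor(8.0, k=-0.72)  # roughly 3.6% of the z = 6 density
f_steep   = qlf_decline_factor(8.0, k=-0.92)  # roughly 1.4% of the z = 6 density
```

Even with the steeper decline (k = −0.92), the yields quoted in the abstract remain above 100 quasars at 7.0 < z < 7.5.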

Euclid: Non-parametric point spread function field recovery through interpolation on a Graph Laplacian

 

Authors: M.A. Schmitz, J.-L. Starck, F. Ngole Mboula, N. Auricchio, J. Brinchmann, R.I. Vito Capobianco, R. Clédassou, L. Conversi, L. Corcione, N. Fourmanoit, M. Frailis, B. Garilli, F. Hormuth, D. Hu, H. Israel, S. Kermiche, T. D. Kitching, B. Kubik, M. Kunz, S. Ligori, P.B. Lilje, I. Lloro, O. Mansutti, O. Marggraf, R.J. Massey, F. Pasian, V. Pettorino, F. Raison, J.D. Rhodes, M. Roncarelli, R.P. Saglia, P. Schneider, S. Serrano, A.N. Taylor, R. Toledo-Moreo, L. Valenziano, C. Vuerli, J. Zoubian
Journal: submitted to A&A
Year: 2019
Download:  arXiv

 


Abstract

Context. Future weak lensing surveys, such as the Euclid mission, will attempt to measure the shapes of billions of galaxies in order to derive cosmological information. These surveys will attain very low levels of statistical error and systematic errors must be extremely well controlled. In particular, the point spread function (PSF) must be estimated using stars in the field, and recovered with high accuracy.
Aims. This paper's contributions are twofold. First, we take steps toward a non-parametric method to address the issue of recovering the PSF field, namely that of finding the correct PSF at the position of any galaxy in the field, applicable to Euclid. Our approach relies solely on the data, as opposed to parametric methods that make use of our knowledge of the instrument. Second, we study the impact of imperfect PSF models on the shape measurement of galaxies themselves, and whether common assumptions about this impact hold true in a Euclid scenario.
Methods. We use the recently proposed Resolved Components Analysis approach to deal with the undersampling of observed star images. We then estimate the PSF at the positions of galaxies by interpolation on a set of graphs that contain information relative to its spatial variations. We compare our approach to PSFEx, then quantify the impact of PSF recovery errors on galaxy shape measurements through image simulations.
Results. Our approach yields an improvement over PSFEx in terms of PSF model and on observed galaxy shape errors, though it is at present not sufficient to reach the required Euclid accuracy. We also find that different shape measurement approaches can react differently to the same PSF modelling errors.
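The title's "interpolation on a Graph Laplacian" can be illustrated with a harmonic (Laplacian-based) interpolation toy: star nodes carry known PSF features and act as boundary conditions, and the features at galaxy nodes follow from solving the Laplacian system. This is only a schematic sketch of the idea, not the authors' RCA-based pipeline; the function name and the Gaussian affinity choice are our assumptions:

```python
import numpy as np

def harmonic_interpolate(star_pos, star_feat, gal_pos, sigma=0.2):
    """Interpolate PSF feature vectors from star positions to galaxy
    positions by solving a graph-Laplacian (harmonic) system: star
    nodes are fixed boundary values, galaxy nodes are free."""
    pos = np.vstack([star_pos, gal_pos])
    n_star = len(star_pos)
    # Gaussian affinity between all pairs of nodes
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W  # graph Laplacian
    # Partition L into star (s) and galaxy (g) blocks and solve
    # L_gg f_g = -L_gs f_s for the free (galaxy) node features.
    L_gg = L[n_star:, n_star:]
    L_gs = L[n_star:, :n_star]
    return np.linalg.solve(L_gg, -L_gs @ star_feat)
```

For features that are constant across the stars, harmonic interpolation returns the same constant at every galaxy position, which makes a convenient sanity check.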

Distinguishing standard and modified gravity cosmologies with machine learning


 

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten,  C. Giocoli, M. Meneghetti,  M. Baldi
Journal: PRD
Year: 2019
Download: ADS | arXiv


Abstract

We present a convolutional neural network to classify distinct cosmological scenarios based on the statistically similar weak-lensing maps they generate. Modified gravity (MG) models that include massive neutrinos can mimic the standard concordance model (ΛCDM) in terms of Gaussian weak-lensing observables. An inability to distinguish viable models that are based on different physics potentially limits a deeper understanding of the fundamental nature of cosmic acceleration. For a fixed redshift of sources, we demonstrate that a machine learning network trained on simulated convergence maps can discriminate between such models better than conventional higher-order statistics. Results improve further when multiple source redshifts are combined. To accelerate training, we implement a novel data compression strategy that incorporates our prior knowledge of the morphology of typical convergence map features. Our method fully distinguishes ΛCDM from its most similar MG model on noise-free data, and it correctly identifies among the MG models with at least 80% accuracy when using the full redshift information. Adding noise lowers the correct classification rate of all models, but the neural network still significantly outperforms the peak statistics used in a previous analysis.

On the dissection of degenerate cosmologies with machine learning


 

Authors: J. Merten,  C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: MNRAS
Year: 2019
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensional reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers; one based on a nearest-neighbour search and one that is based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to understand better why they perform so well.
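One of the two classifiers compared in the abstract is a nearest-neighbour search over feature vectors. A minimal 1-NN sketch (names ours; in the paper the joint feature vector combines power spectra, peak counts, and Minkowski functionals):

```python
import numpy as np

def nn_classify(train_feat, train_labels, test_feat):
    """Assign each test feature vector the label of its nearest
    training vector (Euclidean distance), i.e. a 1-NN search."""
    d2 = ((test_feat[:, None, :] - train_feat[None, :, :]) ** 2).sum(-1)
    return train_labels[np.argmin(d2, axis=1)]

# Toy example: two classes of 2D feature vectors
train = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
labels = np.array([0, 0, 1])
pred = nn_classify(train, labels, np.array([[3.5, 3.9]]))  # -> [1]
```

The paper finds a fully connected neural network to be considerably more robust than this kind of classifier, since a 1-NN decision is sensitive to noise in individual feature vectors.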

Breaking degeneracies in modified gravity with higher (than 2nd) order weak-lensing statistics


 

Authors: A. Peel, V. Pettorino, C. Giocoli, J.-L. Starck, M. Baldi
Journal: A&A
Year: 2018
Download: ADS | arXiv


Abstract

General relativity (GR) has been well tested up to solar system scales, but it is much less certain that standard gravity remains an accurate description on the largest, that is, cosmological, scales. Many extensions to GR have been studied that are not yet ruled out by the data, including by that of the recent direct gravitational wave detections. Degeneracies among the standard model (ΛCDM) and modified gravity (MG) models, as well as among different MG parameters, must be addressed in order to best exploit information from current and future surveys and to unveil the nature of dark energy. We propose various higher-order statistics in the weak-lensing signal as a new set of observables able to break degeneracies between massive neutrinos and MG parameters. We have tested our methodology on so-called f(R) models, which constitute a class of viable models that can explain the accelerated universal expansion by a modification of the fundamental gravitational interaction. We have explored a range of these models that still fit current observations at the background and linear level, and we show using numerical simulations that certain models which include massive neutrinos are able to mimic ΛCDM in terms of the 3D power spectrum of matter density fluctuations. We find that depending on the redshift and angular scale of observation, non-Gaussian information accessed by higher-order weak-lensing statistics can be used to break the degeneracy between f(R) models and ΛCDM. In particular, peak counts computed in aperture mass maps outperform third- and fourth-order moments.
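Peak counts, the statistic found here to outperform third- and fourth-order moments, can be computed by locating local maxima in a convergence or aperture-mass map and counting them above a set of thresholds. A minimal numpy sketch (the function name and the 8-neighbour definition of a peak are our assumptions):

```python
import numpy as np

def peak_counts(kappa, thresholds):
    """Count local maxima in a convergence (or aperture-mass) map
    above each threshold -- a simple version of the peak statistic."""
    c = kappa[1:-1, 1:-1]
    # A pixel is a peak if it strictly exceeds all 8 of its neighbours
    is_peak = np.ones_like(c, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_peak &= c > kappa[1 + dy:kappa.shape[0] - 1 + dy,
                                 1 + dx:kappa.shape[1] - 1 + dx]
    peak_vals = c[is_peak]
    return np.array([(peak_vals > t).sum() for t in thresholds])
```

In practice the thresholds are usually expressed in units of the signal-to-noise of the filtered map rather than in raw convergence.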

Testing (modified) gravity with 3D and tomographic cosmic shear

 

Authors: A. Spurio Mancini, R. Reischke, V. Pettorino, B.M. Schäfer, M. Zumalacárregui
Journal: Submitted to MNRAS
Year: 2018
Download: ADS | arXiv


Abstract

Cosmic shear, the weak gravitational lensing caused by the large-scale structure, is one of the primary probes to test gravity with current and future surveys. There are two main techniques to analyse a cosmic shear survey; a tomographic method, where correlations between the lensing signal in different redshift bins are used to recover redshift information, and a 3D approach, where the full redshift information is carried through the entire analysis. Here we compare the two methods, by forecasting cosmological constraints for future surveys like Euclid. We extend the 3D formalism for the first time to theories beyond the standard model, belonging to the Horndeski class. This includes the majority of universally coupled extensions to LCDM with one scalar degree of freedom in addition to the metric, which are still in agreement with current observations. Given a fixed background, the evolution of linear perturbations in Horndeski gravity is described by a set of four functions of time only. We model their time evolution assuming proportionality to the dark energy density fraction and place Fisher matrix constraints on the proportionality coefficients. We find that a 3D analysis can constrain Horndeski theories better than a tomographic one, in particular with a decrease in the errors on the Horndeski parameters of the order of 20 - 30%. This paper shows for the first time a quantitative comparison on an equal footing between Fisher matrix forecasts for both a fully 3D and a tomographic analysis of cosmic shear surveys. The increased sensitivity of the 3D formalism comes from its ability to retain information on the source redshifts along the entire analysis.


Summary

A new paper has been put on the arXiv, led by Alessio Spurio Mancini, PhD student of CosmoStat member Valeria Pettorino, in collaboration with R. Reischke, B.M. Schäfer (Heidelberg) and M. Zumalacárregui (Berkeley LBNL and Paris Saclay IPhT).
The authors investigate the performance of a 3D analysis of cosmic shear measurements versus a tomographic analysis as a probe of Horndeski theories of modified gravity, setting constraints by means of a Fisher matrix analysis on the parameters that describe the evolution of linear perturbations, using the specifications of a future Euclid-like experiment. Constraints are shown on both the modified gravity parameters and on a set of standard cosmological parameters, including the sum of neutrino masses. The analysis is restricted to angular modes ℓ < 1000 and k < 1 h/Mpc to avoid the deeply non-linear regime of structure growth. The main results of the paper are summarized below.

 
  • The signal-to-noise ratios of the 3D and tomographic analyses are very similar.
  • 3D cosmic shear provides tighter constraints than tomography for most cosmological parameters, with both methods showing very similar degeneracies.
  • The gain of 3D versus tomography is particularly significant for the sum of the neutrino masses (a factor of 3). For the Horndeski parameters the gain is of the order of 20-30% in the errors.
  • In Horndeski theories, the braiding and effective Newton coupling parameters (αB and αM) are constrained better if the kineticity is higher.
  • We investigated the impact on non-linear scales, and introduced an artificial screening scale, which pushes the deviations from General Relativity to zero below its value.  The gain when including the non-linear signal calls for the development of analytic or semi-analytic prescriptions for the treatment of non-linear scales in ΛCDM and modified gravity.
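The constraints above come from a Fisher matrix analysis, in which the marginalized 1σ error on parameter i is σ_i = sqrt((F⁻¹)_ii). A minimal sketch with a toy matrix (the numbers are illustrative, not taken from the paper):

```python
import numpy as np

def marginalized_errors(F):
    """Marginalised 1-sigma errors from a Fisher matrix F:
    sigma_i = sqrt((F^-1)_ii).  Note the unmarginalised (all other
    parameters fixed) error would instead be 1 / sqrt(F_ii)."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Toy 2-parameter Fisher matrix with an off-diagonal correlation term
F = np.array([[4.0, 1.0],
              [1.0, 2.0]])
sigmas = marginalized_errors(F)
```

The off-diagonal Fisher elements encode the parameter degeneracies discussed above: the stronger the correlation, the more the marginalized error exceeds 1/sqrt(F_ii).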

Linear and non-linear Modified Gravity forecasts with future surveys

 

Authors: S. Casas, M. Kunz, M. Martinelli, V. Pettorino
Journal: Physics Letters B
Year: 2017
Download: ADS | arXiv


Abstract

Modified Gravity theories generally affect the Poisson equation and the gravitational slip (effective anisotropic stress) in an observable way, that can be parameterized by two generic functions (η and μ) of time and space. We bin the time dependence of these functions in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, treated with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further decorrelate parameters with a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We also extend the analysis to two particular parameterizations of the time evolution of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 25% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.


Summary

A new paper has been put on the arXiv by new CosmoStat member Valeria Pettorino, her PhD student Santiago Casas, in collaboration with Martin Kunz (Geneva) and Matteo Martinelli (Leiden).
The authors discuss forecasts in Modified Gravity cosmologies, described by two generic functions of time and space [Planck Dark Energy and Modified Gravity 2015; Asaba et al. 2013; Bull 2015; Alonso et al. 2016]. Their amplitude is constrained in different redshift bins. The authors elaborate on the impact of non-linear scales, showing that their inclusion (via a non-linear semi-analytical prescription applied to Modified Gravity) greatly reduces the correlation among different redshift bins, even before any decorrelation procedure is applied. This is visible in the figure below (Fig. 4 of the arXiv version) for the case of Galaxy Clustering: the correlation matrix of the cosmological parameters (including the amplitudes of the Modified Gravity functions, binned in redshift) is much more diagonal in the non-linear case (right panel) than in the linear one (left panel).

[Fig. 4: correlation matrices for Galaxy Clustering, linear (left) vs non-linear (right)]

A decorrelation procedure (Zero-phase Component Analysis, ZCA) is nevertheless applied to extract those combinations that are best constrained by future surveys such as Euclid. In contrast to Principal Component Analysis, ZCA finds a new vector of uncorrelated variables that is as similar as possible to the original vector of variables.
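A minimal numpy sketch of ZCA whitening as described above (the eigendecomposition route and the eps regularizer are standard choices, not details taken from the paper):

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Zero-phase Component Analysis (ZCA) whitening: decorrelate the
    columns of X while staying as close as possible, in a least-squares
    sense, to the original variables."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    # ZCA transform: rotate into the PCA basis, rescale, rotate back
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Xc @ W
```

Among all whitening transforms (any rotation of the PCA solution also decorrelates the data), the symmetric ZCA choice minimizes the mean squared difference between the whitened and original variables, which keeps the decorrelated amplitudes interpretable.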

The authors further consider two smooth time parameterizations of the functions, allowed to depart from General Relativity either only at late times (late-time parameterization) or also at early times (early-time parameterization). The Fisher matrix forecasts for the standard and Modified Gravity parameters, for different surveys (Euclid, SKA1, SKA2), are shown in the plot below (extracted from Fig. 15 of the arXiv version), in which the Galaxy Clustering and Weak Lensing probes are combined. The left panel refers to the linear analysis; the right panel includes the non-linear treatment.

[Fig. 15: Fisher forecasts for Euclid, SKA1 and SKA2, linear (left) vs non-linear (right)]

Friction in Gravitational Waves: a test for early-time modified gravity

 

Authors: V. Pettorino, L. Amendola
Journal: Physics Letters B
Year: 2015
Download: ADS | arXiv


Abstract

Modified gravity theories predict in general a non standard equation for the propagation of gravitational waves. Here we discuss the impact of modified friction and speed of tensor modes on cosmic microwave polarization B modes. We show that the non standard friction term, parametrized by αM, is degenerate with the tensor-to-scalar ratio r, so that small values of r can be compensated by negative constant values of αM. We quantify this degeneracy and its dependence on the epoch at which αM is different from the standard, zero, value and on the speed of gravitational waves cT. In the particular case of scalar-tensor theories, αM is constant and strongly constrained by background and scalar perturbations, 0≤αM<0.01 and the degeneracy with r is removed. In more general cases however such tight bounds are weakened and the B modes can provide useful constraints on early-time modified gravity.
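In the notation used in the abstract, the non-standard propagation of tensor modes takes the schematic form below (a standard EFT-style transcription with ℋ the conformal Hubble rate and primes denoting conformal-time derivatives; this is our sketch, not an equation quoted from the paper):

```latex
h_{ij}'' + \left(2 + \alpha_M\right) \mathcal{H}\, h_{ij}' + c_T^2\, k^2\, h_{ij} = 0
```

For αM = 0 and cT = 1 this reduces to the standard GR equation h'' + 2ℋ h' + k² h = 0; a negative constant αM lowers the friction term and enhances the tensor amplitude, which is how it can compensate a small tensor-to-scalar ratio r in the B-mode spectrum.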