- Impact of Foregrounds on HI Intensity Mapping Cross-Correlations with Optical Surveys - Melis
- How to add your papers on the CosmoStat website - Santiago
Date: May 16th 2019, 4pm
Speaker: Alejandro Borlaff (Instituto de Astrofísica de Canarias)
Title: The missing light of the Hubble Ultra Deep Field
Date: April 25th 2019, 11am
Speaker: Niall Jeffrey (UCL, currently visiting CosmoStat)
Title: Deep Learning dark matter maps from Dark Energy Survey (DES) weak lensing data
Reconstructed density fields from weak lensing are rich in information about cosmological parameters and models of the Universe, including a large non-Gaussian component that cannot be accessed with traditional 2-point statistics. I will present a new method based on Deep Learning to reconstruct dark matter maps from weak lensing data with higher accuracy. Weak lensing map reconstruction is "ill-posed", troubled by survey masks and galaxy "shape noise". With DES SV data we showed that implementing physically motivated priors (a Gaussian field or a halo model) yields substantial improvements over standard approaches. Such methods are still limited by their prior distributions: non-linear density fields have no simple closed form that can be used as a prior. Deep Learning methods can instead learn the underlying structure of the signal, noise and mask directly from realistic simulations. By combining Deep Learning methods with a physically motivated closed-form prior, improved reconstruction is guaranteed.
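For intuition, the Gaussian-field prior mentioned above reduces, in the simplest flat-sky setting, to a Wiener filter applied mode by mode in Fourier space. The following is a minimal toy sketch of that baseline, not the DES analysis: the field, power spectrum and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Toy Gaussian "convergence" field with a power-law power spectrum (illustrative only).
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
k[0, 0] = np.inf                       # drop the zero mode
signal_power = k ** -2.0               # assumed prior power spectrum S(k)
kappa = np.fft.ifft2(np.sqrt(signal_power) * np.fft.fft2(rng.standard_normal((n, n)))).real

noise_sigma = 2.0                      # stand-in for galaxy shape noise
noisy = kappa + noise_sigma * rng.standard_normal((n, n))

# Wiener filter: scale each Fourier mode by S / (S + N); the common n^2 FFT
# normalisation of signal and noise power cancels in the ratio.
wiener = signal_power / (signal_power + noise_sigma ** 2)
recon = np.fft.ifft2(wiener * np.fft.fft2(noisy)).real

mse_noisy = np.mean((noisy - kappa) ** 2)
mse_recon = np.mean((recon - kappa) ** 2)
```

The filter keeps the signal-dominated low-k modes essentially untouched and damps the noise-dominated high-k modes, which is exactly where a learned, non-Gaussian prior can do better.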
We introduce a novel approach, requiring only mild assumptions, for the characterization of deep neural networks at initialization. Our approach applies both to fully-connected and convolutional networks and easily incorporates the commonly used techniques of batch normalization and skip-connections. Our key insight is to consider the evolution with depth of statistical moments of signal and noise, thereby characterizing the presence or the absence of pathologies in the hypothesis space encoded by the choice of hyperparameters. We establish: (i) for feedforward networks with and without batch normalization, depth multiplicativity inevitably leads to ill-behaved moments and pathologies; (ii) for residual networks with batch normalization, on the other hand, skip-connections induce power-law rather than exponential behaviour, leading to well-behaved moments and no pathology.
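The depth-multiplicative vs additive behaviour described above can be illustrated with a toy numerical experiment (linear layers and a crudely normalised skip branch, not the paper's exact setup): a plain deep stack multiplies the signal's second moment at every layer, while a residual connection with a normalised branch only adds to it.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 512, 50
gain = 1.2  # weight variance slightly above the critical value 1 (toy linear net)

# Plain feedforward: each layer multiplies the second moment by ~gain,
# so the variance of the propagated signal grows exponentially with depth.
x = rng.standard_normal(width)
for _ in range(depth):
    W = rng.standard_normal((width, width)) * np.sqrt(gain / width)
    x = W @ x
var_ff = x.var()

# Residual net with a normalised branch (a crude stand-in for batch norm):
# each layer *adds* roughly unit variance, so growth is linear in depth.
x = rng.standard_normal(width)
for _ in range(depth):
    W = rng.standard_normal((width, width)) * np.sqrt(gain / width)
    branch = W @ x
    x = x + branch / branch.std()
var_res = x.var()

print(var_ff, var_res)  # exponential vs roughly linear growth in depth
```

The exponential growth in the first case is the kind of ill-behaved moment the abstract refers to; the power-law (here linear) growth in the second is the well-behaved regime induced by skip-connections.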
Date: March 14th 2019, 11am
Speaker: Adrien Picquenot (CEA Saclay)
Title: Applying the GMCA to extended sources in X-ray Astronomy
The Galileon model is a scalar-tensor theory of gravity which offers a theoretically viable explanation for the late-time acceleration of the Universe's expansion and recovers General Relativity in the strong-field limit. The main goal is to establish the status of the model from cosmological observations. The multi-messenger observation of GW170817 and its consequences for the Galileon model will also be briefly discussed, since most allowed Galileon scenarios predict a gravitational-wave speed different from the speed of light.
Most constraints obtained so far on Galileon model parameters from cosmological data were derived for the limited subset of tracker solutions and reported tensions between the model and the data. We present here an exploration of the general solution of the Galileon model, which is confronted with recent cosmological data.
We find that, while the general solution provides a good fit to CMB spectra, it fails to reproduce cosmological data when the comparison is extended to BAO and SNIa data. Tensions remain even if the model is extended with an additional free parameter, such as the sum of active neutrino masses or the normalization of the CMB lensing spectrum.
Optimal transport has become a mathematical gem at the interface of probability, analysis and optimization. It is a theory with a long history in the mathematics community, initiated by Monge and later developed by Kantorovich, which has found applications in several fields such as differential geometry, PDEs and gradient flows, to name a few.
Lately, it has begun to make its way into the machine learning and data processing communities. Optimal transport can be used to define a distance that is very useful for comparing histograms or point clouds, a typical scenario in modern applications. Breakthrough contributions, such as entropic regularization, made the transport problem strictly convex and efficiently solvable, opening the door to many applications such as Wasserstein barycenters and dictionary learning.
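The entropic regularization mentioned above is typically solved with Sinkhorn-Knopp iterations. A minimal sketch on two toy 1-D histograms (the grid, cost and regularization strength here are illustrative choices, not from the talk):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropy-regularised OT between histograms a and b with cost matrix C.

    Sinkhorn-Knopp: alternately rescale the Gibbs kernel K = exp(-C / eps)
    until the transport plan's marginals match a and b.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)          # enforce column marginals
        u = a / (K @ v)            # enforce row marginals
    return u[:, None] * K * v[None, :]

# Two toy 1-D histograms with a squared-distance ground cost.
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-(x - 0.25) ** 2 / 0.02); a /= a.sum()
b = np.exp(-(x - 0.75) ** 2 / 0.02); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2

P = sinkhorn(a, b, C)
cost = np.sum(P * C)               # regularised transport cost
```

Here the two histograms are translates of each other, so the transport cost is close to the squared distance between their centres; the regularization only blurs the plan slightly.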
Nevertheless, Optimal Transport has not yet fully entered the signal processing community. One obstacle is that the theory is well developed in the space of non-negative measures, while very little work has been done to extend it to signed measures. From a machine learning point of view, this presentation will deal with some theoretical aspects of an Optimal Transport based "distance" for signed measures that can be useful for future applications such as Blind Source Separation. An algorithm for its efficient computation will be presented as well.