Blind separation of a large number of sparse sources

 

Authors: C. Kervazo, J. Bobin, C. Chenot
Journal: Signal Processing
Year: 2018
Download: Paper


Abstract

Blind Source Separation (BSS) is one of the major tools for analyzing multispectral data, with applications ranging from astronomical to biomedical signal processing. Nevertheless, most BSS methods fail when the number of sources becomes large, typically exceeding a few tens. Since the ability to estimate a large number of sources is paramount in a very wide range of applications, we introduce a new algorithm, coined block-Generalized Morphological Component Analysis (bGMCA), to specifically tackle sparse BSS problems when a large number of sources needs to be estimated. Since sparse BSS is by nature a challenging nonconvex inverse problem, the algorithmic strategy plays a central role, especially when many sources have to be estimated. For that purpose, the bGMCA algorithm builds upon block-coordinate descent with intermediate-size blocks. Numerical experiments show the robustness of the bGMCA algorithm when the sources are numerous. Comparisons have been carried out on realistic simulations of spectroscopic data.
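
As a rough illustration of the block-coordinate strategy described above, the sketch below alternates least-squares updates and soft thresholding over random blocks of sources. It is a toy Python rendering of the general idea, not the authors' bGMCA implementation; the update rules, threshold level and function names are assumptions.

    import numpy as np

    def soft_threshold(x, lam):
        # Proximal operator of the l1 norm: the sparsity-promoting step.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def bgmca_sketch(X, n_sources, block_size, n_iter=100, lam=0.1, seed=0):
        # Toy block-coordinate sparse BSS: X (m x t) ~ A (m x n) @ S (n x t).
        rng = np.random.default_rng(seed)
        m, t = X.shape
        A = rng.standard_normal((m, n_sources))
        A /= np.linalg.norm(A, axis=0)            # unit-norm mixing columns
        S = np.zeros((n_sources, t))
        for _ in range(n_iter):
            idx = rng.choice(n_sources, size=block_size, replace=False)
            rest = np.setdiff1d(np.arange(n_sources), idx)
            R = X - A[:, rest] @ S[rest]          # residual for this block
            S[idx] = soft_threshold(np.linalg.pinv(A[:, idx]) @ R, lam)
            A[:, idx] = R @ np.linalg.pinv(S[idx])
            A[:, idx] /= np.maximum(np.linalg.norm(A[:, idx], axis=0), 1e-12)
        return A, S

Intermediate block sizes trade off the speed of full joint updates against the robustness of one-source-at-a-time updates, which is the regime the paper explores.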

A Distributed Learning Architecture for Scientific Imaging Problems

 

Authors: A. Panousopoulou, S. Farrens, K. Fotiadou, A. Woiselle, G. Tsagkatakis, J.-L. Starck, P. Tsakalides
Journal: arXiv
Year: 2018
Download: ADS | arXiv


Abstract

Current trends in scientific imaging are challenged by the emerging need to integrate sophisticated machine learning with Big Data analytics platforms. This work proposes an in-memory distributed learning architecture for enabling sophisticated learning and optimization techniques on scientific imaging problems, which are characterized by the combination of varied information from different origins. We apply the resulting Spark-compliant architecture to two emerging use cases from the scientific imaging domain, namely: (a) the space-variant deconvolution of galaxy imaging surveys (astrophysics), and (b) super-resolution based on coupled dictionary training (remote sensing). We conduct evaluation studies on relevant datasets, and the results show at least a 60% improvement in response time over conventional computing solutions. Ultimately, the discussion offers useful practical insights into the impact of key Spark tuning parameters on the achieved speedup and on the memory/disk footprint.
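
The architecture itself is Spark-specific, but the core distribution pattern, partitioning a stack of image stamps across executors and mapping an expensive per-image solver over them, can be sketched in a few lines of PySpark. The stamp sizes, partition count and solver below are placeholders, not the paper's pipeline.

    import numpy as np
    from pyspark import SparkConf, SparkContext

    def deconvolve_stamp(stamp):
        # Stand-in for an expensive per-image deconvolution solver.
        return stamp / (np.abs(stamp).max() + 1e-12)

    conf = SparkConf().setAppName("distributed-deconvolution")
    sc = SparkContext(conf=conf)

    # Hypothetical stack of 10,000 small galaxy stamps, spread over 64
    # partitions so each executor processes its share in memory.
    stamps = [np.random.rand(41, 41) for _ in range(10000)]
    results = (sc.parallelize(stamps, numSlices=64)
                 .map(deconvolve_stamp)
                 .collect())
    sc.stop()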

Scale-invariant alternatives to general relativity. The inflation–dark-energy connection


Abstract

We discuss the cosmological phenomenology of biscalar-tensor models displaying a maximally symmetric Einstein-frame kinetic sector and constructed on the basis of scale symmetry and volume-preserving diffeomorphisms. These theories contain a single dimensionful parameter $\Lambda_0$, associated with the invariance under the aforementioned restricted coordinate transformations, and a massless dilaton field. At large field values these scenarios lead to inflation with no generation of isocurvature perturbations. The corresponding predictions depend only on two dimensionless parameters, which characterize the curvature of the field manifold and the leading-order behavior of the inflationary potential. For $\Lambda_0=0$ the scale symmetry is unbroken and the dilaton admits only derivative couplings to matter, evading all fifth-force constraints. For $\Lambda_0\neq 0$ the field acquires a run-away potential that can support a dark-energy dominated era at late times. We confront a minimalistic realization of this appealing framework with observations using a Markov Chain Monte Carlo approach, with likelihoods from present BAO, SNIa and CMB data. A Bayesian model comparison indicates a preference for the considered model over $\Lambda$CDM, under certain assumptions for the priors. The impact of possible consistency relations among the early and late Universe dynamics that can appear within this setting is discussed with the use of correlation matrices. The results indicate that a precise determination of the inflationary observables and the dark energy equation of state could significantly constrain the model parameters.
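
As a generic illustration of the Markov Chain Monte Carlo fit described above, the sketch below samples two stand-in dimensionless parameters against a mock Gaussian likelihood with emcee. The parameter ranges, priors and likelihood are placeholders; the paper's actual analysis uses BAO, SNIa and CMB likelihoods.

    import numpy as np
    import emcee

    def log_prior(theta):
        a, b = theta                        # two hypothetical parameters
        if 0.0 < a < 10.0 and -2.0 < b < 0.0:
            return 0.0
        return -np.inf

    def log_likelihood(theta):
        # Mock Gaussian likelihood standing in for BAO+SNIa+CMB data.
        a, b = theta
        return -0.5 * (((a - 3.0) / 0.5) ** 2 + ((b + 1.0) / 0.1) ** 2)

    def log_posterior(theta):
        lp = log_prior(theta)
        return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf

    ndim, nwalkers = 2, 32
    p0 = np.array([3.0, -1.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
    sampler.run_mcmc(p0, 5000)
    chain = sampler.get_chain(discard=1000, flat=True)  # posterior samples

Bayesian model comparison additionally requires the model evidence, which plain emcee does not return; nested-sampling tools are a common choice for that step.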

Distinguishing standard and modified gravity cosmologies with machine learning

 

Authors: A. Peel, F. Lalande, J.-L. Starck, V. Pettorino, J. Merten, C. Giocoli, M. Meneghetti, M. Baldi
Journal: Submitted to PRL
Year: 2018
Download: ADS | arXiv


Abstract

We present a convolutional neural network to identify distinct cosmological scenarios based on the weak-lensing maps they produce. Modified gravity models with massive neutrinos can mimic the standard concordance model in terms of Gaussian weak-lensing observables, limiting a deeper understanding of what causes cosmic acceleration. We demonstrate that a network trained on simulated clean convergence maps, condensed into a novel representation, can discriminate between such degenerate models with 83%-100% accuracy. Our method outperforms conventional statistics by up to 40% and is more robust to noise.
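
A minimal PyTorch sketch of a convergence-map classifier of this kind is given below. The architecture, map size and class count are placeholders; the paper's actual network and its condensed map representation are not reproduced here.

    import torch
    import torch.nn as nn

    class ConvMapClassifier(nn.Module):
        # Toy CNN mapping a single-channel convergence map to class scores.
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # One batch of 8 mock 128x128 convergence maps, two candidate models.
    model = ConvMapClassifier(n_classes=2)
    logits = model(torch.randn(8, 1, 128, 128))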

On the dissection of degenerate cosmologies with machine learning

 

Authors: J. Merten, C. Giocoli, M. Baldi, M. Meneghetti, A. Peel, F. Lalande, J.-L. Starck, V. Pettorino
Journal: Submitted to MNRAS
Year: 2018
Download: ADS | arXiv


Abstract

Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionally reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics common in the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers: one based on a nearest-neighbour search and one based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to better understand why they perform so well.
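
The feature-vector-plus-classifier half of the pipeline can be sketched with scikit-learn, as below. The two-component descriptor is a deliberately crude stand-in for the paper's joint vector of power spectrum, peak counts and Minkowski functionals, and the mock maps are pure noise.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def feature_vector(kappa_map):
        # Crude descriptor: mean log Fourier power plus a 3-sigma peak count.
        power = np.abs(np.fft.fft2(kappa_map)) ** 2
        peaks = np.sum(kappa_map > kappa_map.mean() + 3 * kappa_map.std())
        return np.array([np.log(power.mean() + 1e-12), peaks])

    # Mock labelled convergence maps from two competing models.
    rng = np.random.default_rng(0)
    maps = rng.standard_normal((200, 64, 64))
    labels = rng.integers(0, 2, size=200)
    X = np.array([feature_vector(m) for m in maps])
    clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
    predictions = clf.predict(X[:10])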

Cosmological evolution in DHOST theories

 

Authors: M. Crisostomi, K. Koyama, D. Langlois, K. Noui and D. A. Steer
Journal:  
Year: 2018
Download: arXiv


Abstract

In the context of Degenerate Higher-Order Scalar-Tensor (DHOST) theories, we study cosmological solutions and their stability properties. In particular, we explicitly illustrate the crucial role of degeneracy by showing how the higher-order homogeneous equations in the physical frame (where matter is minimally coupled) can be recast into a system of equations that do not involve higher-order derivatives. We study the fixed points of the dynamics, finding the conditions for having a de Sitter attractor at late times. Then we consider the coupling to a matter field (described for convenience by a k-essence Lagrangian) and find the conditions to avoid gradient and ghost instabilities at linear order in cosmological perturbations, extending previous work. Finally, we apply these results to a simple subclass of DHOST theories, showing that the de Sitter attractor conditions and the absence of ghost and gradient instabilities (both in the self-accelerating era and in the matter-dominated era) can be compatible.
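
For reference, a k-essence matter sector of the type invoked above is described by a Lagrangian that is an arbitrary function of the matter scalar and its kinetic term; with one common sign convention,

    $$L_m = P(\chi, X), \qquad X = -\tfrac{1}{2}\, g^{\mu\nu}\, \partial_\mu \chi\, \partial_\nu \chi,$$

which is general enough to model a range of cosmological fluids through the choice of $P$.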

The road ahead of Horndeski: cosmology of surviving scalar-tensor theories


Abstract

In the context of the effective field theory of dark energy (EFT) we perform agnostic explorations of Horndeski gravity. We choose two parametrizations for the free EFT functions, namely a power law and a dark-energy-density-like behaviour, on a non-trivial Chevallier-Polarski-Linder background. We restrict our analysis to those EFT functions which do not modify the speed of propagation of gravitational waves. Among those, we prove that one specific function cannot be constrained by data, since its contribution to the observables is below the cosmic variance, although we show it has a relevant role in defining the viable parameter space. We place constraints on the parameters of these models by combining measurements from present-day cosmological datasets, and we show that the next generation of galaxy surveys can improve such constraints by one order of magnitude. We then verify the validity of the quasi-static limit within the sound horizon of the dark field by looking at the phenomenological functions μ and Σ, associated respectively with the clustering and lensing potentials. Furthermore, we notice deviations of up to 5% in μ and Σ with respect to General Relativity at scales smaller than the Compton one. For the chosen parametrizations and in the quasi-static limit, future constraints on μ and Σ can reach the 1% level and will allow us to discriminate between certain models at more than 3σ, provided the present best-fit values hold.
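
For reference, the Chevallier-Polarski-Linder (CPL) background used here parametrizes the dark energy equation of state linearly in the scale factor $a$:

    $$w(a) = w_0 + w_a\,(1 - a),$$

so that $w_0$ is the present-day value and $w_0 + w_a$ the asymptotic early-time value ($a \to 0$).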

Measuring Linear and Non-linear Galaxy Bias Using Counts-in-Cells in the Dark Energy Survey Science Verification Data

 

Authors: A. I. Salvador, F. J. Sánchez, A. Pagul et al.
Journal:  
Year: 07/2018
Download: ADS | arXiv


Abstract

Non-linear bias measurements require a great level of control of potential systematic effects in galaxy redshift surveys. Our goal is to demonstrate the viability of using Counts-in-Cells (CiC), a statistical measure of the galaxy distribution, as a competitive method to determine linear and higher-order galaxy bias and to assess clustering systematics. We measure the galaxy bias by comparing the first four moments of the galaxy density distribution with those of the dark matter distribution. We use data from the MICE simulation to evaluate the performance of this method, and subsequently perform measurements on the public Science Verification (SV) data from the Dark Energy Survey (DES). We find that the linear bias obtained with CiC is consistent with measurements of the bias performed using galaxy-galaxy clustering, galaxy-galaxy lensing, CMB lensing, and shear+clustering measurements. Furthermore, we compute the projected (2D) non-linear bias using the expansion $\delta_{g} = \sum_{k=0}^{3} (b_{k}/k!) \delta^{k}$, finding a non-zero value for $b_2$ at the $3\sigma$ level. We also check a non-local bias model and show that the linear bias measurements are robust to the addition of new parameters. We compare our 2D results to the 3D prediction and find compatibility in the large-scale regime ($>30$ Mpc $h^{-1}$).
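
A minimal sketch of the counts-in-cells measurement is given below, under toy assumptions (uniform mock catalogues, no shot-noise correction, hypothetical box size and cell count); the leading-order linear bias then follows from the ratio of second moments.

    import numpy as np

    def cic_moments(positions, box_size, n_cells):
        # Grid the points, form the density contrast, return moments 2-4.
        counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                      bins=n_cells,
                                      range=[[0, box_size]] * 2)
        delta = counts / counts.mean() - 1.0
        return [np.mean(delta ** k) for k in (2, 3, 4)]

    rng = np.random.default_rng(1)
    galaxies = rng.uniform(0, 100.0, size=(50_000, 2))   # mock catalogue
    matter = rng.uniform(0, 100.0, size=(500_000, 2))    # mock dark matter
    m2_g, m3_g, m4_g = cic_moments(galaxies, 100.0, 32)
    m2_m, m3_m, m4_m = cic_moments(matter, 100.0, 32)
    b1 = np.sqrt(m2_g / m2_m)   # linear bias estimate, ignoring shot noise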

The C-Band All-Sky Survey (C-BASS): Design and capabilities

 

Authors: M.E. Jones, A.C. Taylor, M. Aich et al.
Journal: MNRAS
Year: 2018
Download: ADS | arXiv


Abstract

The C-Band All-Sky Survey (C-BASS) is an all-sky full-polarization survey at a frequency of 5 GHz, designed to provide complementary data to the all-sky surveys of WMAP and Planck, and future CMB B-mode polarization imaging surveys. The observing frequency has been chosen to provide a signal that is dominated by Galactic synchrotron emission, but suffers little from Faraday rotation, so that the measured polarization directions provide a good template for higher frequency observations, and carry direct information about the Galactic magnetic field. Telescopes in both northern and southern hemispheres with matched optical performance are used to provide all-sky coverage from a ground-based experiment. A continuous-comparison radiometer and a correlation polarimeter on each telescope provide stable imaging properties such that all angular scales from the instrument resolution of 45 arcmin up to full sky are accurately measured. The northern instrument has completed its survey and the southern instrument has started observing. We expect that C-BASS data will significantly improve the component separation analysis of Planck and other CMB data, and will provide important constraints on the properties of anomalous Galactic dust and the Galactic magnetic field.