## Oct 20, 2016

**Speaker**

Arnau Pujol (CosmoStat)

**Title**

Journal Club: A study of the sensitivity of shape measurements to the input parameters of weak lensing image simulations

## May 19, 2016

**Speaker**

Carmelita Carbone (INAF)

**Title**

Clustering, lensing, and ISW/Rees-Sciama from the DEMNUni neutrino simulations

**Abstract**

I will present the first set of cosmological simulations produced within the "Dark Energy and Massive Neutrino Universe" (DEMNUni) project. These simulations are characterized by $$L$$ = 2 Gpc$$/h$$, $$N_\mathrm{part}$$ = 2 $$\times$$ 2048$$^3$$, a baseline $$\Lambda$$CDM-Planck cosmology, and four different total neutrino masses, $$M_\nu$$ = 0, 0.17, 0.3, 0.53 eV, with a degenerate mass spectrum. They represent one of the largest sets of $$N$$-body simulations with a massive neutrino component treated as an additional particle type. I will present fully non-linear effects in the presence of massive neutrinos, extracted from the DEMNUni simulations, and show how neutrino free-streaming not only alters LSS clustering and lensing, but also introduces an excess of power in the ISW/RS signals, and related cross-correlations, at intermediate scales.

## May 17, 2016

**Speaker**

Ming Jiang (CosmoStat)

**Title**

Joint Multichannel Deconvolution and Blind Source Separation

## May 12, 2016

**Speaker**

Julian Adamek (Paris Observatory)

**Title**

General Relativity and Cosmic Structure Formation

**Abstract**

The Newtonian approximation usually invoked in $$N$$-body simulations of cosmic large-scale structure relies on the assumptions that gravitational fields are weak and that they are sourced only by non-relativistic matter. The latter constitutes an implicit assumption about the nature of the "dark" components of the Universe (dark matter and dark energy), thereby precluding a serious assessment of some potentially interesting models. I will present the first $$N$$-body simulations of cosmic structure formation based on a weak-field approximation to General Relativity, taking into account all six degrees of freedom of the metric. The geodesic equations are also solved consistently for relativistic particles, such as massive neutrinos. The approach is very general and can be applied to various settings where the Newtonian approximation would be unreliable or inconsistent, for instance some models of dynamical dark energy or modified gravity.

## Apr 28, 2016

**Speaker**

Aurélien Benoit-Lévy (IAP)

**Title**

News from the Dark Energy Survey (DES) [slides]

**Abstract**

The Dark Energy Survey (DES) is a galaxy photometric survey designed to study the properties of the Dark Energy using four main cosmological probes: galaxy clustering on large scales, weak gravitational lensing, galaxy-cluster abundance, and supernova distances. During the northern fall of 2012 the DES collaboration installed and commissioned DECam, a 570-megapixel optical and near-infrared camera with a large 3 sq. deg. field of view, mounted at the prime focus of the Víctor M. Blanco 4-meter telescope at CTIO, Chile. A "Science Verification" (SV) period of observations, lasting until late February 2013, followed the DECam commissioning phase, and provided science-quality images for almost 200 sq. deg. at the nominal depth of the survey. The survey is now finishing its third year of observation. At the end of the five seasons, DES will have mapped an entire octant of the southern sky to unprecedented depth, measuring the positions on the sky, redshifts, and shapes of almost 300 million galaxies, together with thousands of galaxy clusters and supernovae. In this talk, I will present the current status of the project and the first scientific results of the survey, based on the Science Verification data, with a focus on the cross-correlations between the DES data and the CMB observations by Planck and SPT.

## Apr 14, 2016

**Speaker**

Samuel Farrens (CosmoStat)

**Title**

Friends-of-friends algorithm for optical cluster detection

## Feb 3, 2016

**Speaker**

Jérémy Rapin (CardioLogs)

**Title**

Deep learning with electrocardiograms

**Abstract**

CardioLogs is a start-up which was created to facilitate the diagnosis and management of patients with cardiovascular diseases using artificial intelligence. This is made possible by the analysis of electrocardiograms (ECGs), which are similar to multispectral signals. After a quick description of what is observable on an ECG, we will show how deep learning, and more specifically deep convolutional neural networks, helped us design algorithms which are much more robust and efficient than state-of-the-art methods.

## Jan 28, 2016

**Speaker**

Philippe Ciuciu (NeuroSpin, CEA Saclay)

**Title**

Compressed Sensing along physically plausible $$k$$-space trajectories in MRI

**Abstract**

Magnetic Resonance Imaging (MRI) is a non-invasive and non-ionizing imaging technique that provides images of body tissues, using the contrast sensitivity coming from the magnetic parameters ($$T_1$$, $$T_2$$ and proton density). Data are acquired in $$k$$-space, corresponding to spatial Fourier frequencies. Because of physical constraints, displacement in $$k$$-space is subject to kinematic constraints: magnetic field gradients and their temporal derivatives are upper bounded. Hence, the scanning time increases with the image resolution.

Decreasing scanning time is critical to improve patient comfort, reduce exam costs, limit tissue heating and image distortion, and improve spatial or temporal resolution in anatomical or functional MRI, respectively. Reducing scanning time can be addressed by Compressed Sensing (CS) theory, which guarantees the perfect recovery of an image from under-sampled data in $$k$$-space, typically by assuming that the image is sparse in a wavelet basis.

Unfortunately, CS theory cannot be directly applied in the MRI setting, for two reasons: (i) the acquisition (Fourier) and representation (wavelet) bases are coherent, and (ii) the sampling schemes obtained from CS theorems are composed of isolated measurements and cannot realistically be implemented by magnetic field gradients: sampling is usually performed along continuous or otherwise regular curves. Nevertheless, heuristic applications of CS in MRI have provided promising results.

In this talk, I will present the theoretical tools we have developed in Nicolas Chauffert's PhD thesis to apply CS to MRI and other modalities. On the one hand, we theoretically justify why variable density sampling is the right answer to the first impediment: the more information a sample contains, the more likely it is to be drawn. On the other hand, we propose sampling schemes and design sampling trajectories that fulfill the acquisition constraints while traversing $$k$$-space with the sampling density advocated by the theory.

The reconstruction results obtained in simulations using this strategy outperform those from existing acquisition trajectories (spiral, radial) by about 3 dB. They make it possible to envision an implementation in the very near future on a real 7 T scanner at NeuroSpin, notably in the context of high-resolution anatomical imaging.
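As a toy illustration of the variable density sampling idea (not the constrained trajectories developed in the thesis), one can draw isolated $$k$$-space locations with probability proportional to a radial density profile; the $$1/(1+r^2)$$-type profile and the 20% sampling fraction below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # k-space grid size (n x n)

# Radial variable-density profile: low spatial frequencies, which carry
# most of the image energy, are sampled more often than high ones.
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(kx, ky) / (n / 2)          # normalized distance to k-space center
density = 1.0 / (1.0 + (4 * r) ** 2)    # illustrative profile, not from the talk
density /= density.sum()

# Draw 20% of the k-space locations without replacement,
# with probability proportional to the density profile.
m = int(0.2 * n * n)
idx = rng.choice(n * n, size=m, replace=False, p=density.ravel())
mask = np.zeros(n * n, dtype=bool)
mask[idx] = True
mask = mask.reshape(n, n)

# Sampling concentrates near the k-space center: the central 16x16 block
# is filled much more densely than the map as a whole.
center = mask[n // 2 - 8:n // 2 + 8, n // 2 - 8:n // 2 + 8]
print(center.mean(), mask.mean())
```

A realistic scheme would then connect such samples by gradient-feasible curves, which is exactly the constrained-trajectory design problem the talk addresses.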

## Jan 21, 2016

**Speaker**

Stéphane Plaszczynski (LAL, Orsay)

**Title**

Relieving tensions in Planck likelihoods [slides]

**Abstract**

The recently released Planck data show an intriguing feature: there is "too much" lensing in their power spectra. This is parametrized by the so-called "$$A_L$$" parameter, which should be compatible with 1.

After reviewing the path(s) from building a Planck likelihood function to estimating cosmological parameters, I will show that the $$A_L$$ deviation using the Planck default likelihoods is 2.6$$\sigma$$ and will connect this "tension" to another one on the reionization optical depth determined from Planck likelihoods in different multipole ($$\ell$$) ranges. Using another Planck high-$$\ell$$ based likelihood ("Hillipop") the tension on the reionization depth is reduced but $$A_L$$ is still discrepant with 1 by 2.2$$\sigma$$.

I will then show how using the "very-high-$$\ell$$" part of the CMB spectra from the ACT and SPT experiments makes it possible to resolve all the issues raised ($$A_L=1.03\pm 0.08$$), and will present various systematic checks before giving an update on $$\Lambda$$CDM cosmological parameters.

More details in [1510.07600].

## Dec 11, 2015

**Speaker**

Zoltán Haiman (Columbia University)

**Title**

Cosmological Information from Non-Linear Weak Lensing

**Abstract**

Weak gravitational lensing by large scale structure is a promising tool to probe cosmology. Several large astronomical surveys have either been proposed or are underway to measure weak lensing distortions of up to a billion galaxies. While cosmology is traditionally extracted from measuring two-point functions of the lensing maps, these statistics capture all the information only for a Gaussian field. The lensing field in the non-linear regime is strongly non-Gaussian, and non-Gaussian statistics could therefore provide significant additional information.

I will report on results from a large suite of ray-tracing $$N$$-body simulations in different cosmologies, and discuss the expected constraints from the number counts of lensing peaks (i.e. from the number of maxima as a function of their height), and from other statistics probing the non-linear regime. These statistics can tighten cosmological constraints by a factor of two, compared to using the power spectrum alone. A recent application of this approach to the CFHTLenS survey has confirmed this for the parameters ($$\Omega_\mathrm{m}$$, $$\sigma_8$$). I will also argue that the numerous low signal-to-noise peaks contain most of the cosmological information, and are relatively insensitive to baryon effects. Finally, I will comment on the possible use of non-Gaussianities in CMB lensing for cosmology, and on some theoretical simulation challenges for large future lensing surveys.
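As a toy illustration of peak counts (not the ray-tracing pipeline described in the talk), one can locate the local maxima of a mock noise-only convergence map and histogram their heights in signal-to-noise units; the map size and noise level below are illustrative assumptions:

```python
import numpy as np

def peak_heights(kappa):
    """Return the values of local maxima of a 2D map (8-neighbour rule),
    excluding the one-pixel border."""
    c = kappa[1:-1, 1:-1]
    is_peak = np.ones_like(c, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            # A peak must be strictly larger than each shifted neighbour.
            is_peak &= c > kappa[1 + dy:kappa.shape[0] - 1 + dy,
                                 1 + dx:kappa.shape[1] - 1 + dx]
    return c[is_peak]

rng = np.random.default_rng(1)
sigma = 0.02                                   # illustrative pixel noise level
kappa = rng.normal(0.0, sigma, size=(128, 128))  # mock noise-only map

peaks = peak_heights(kappa)
# Peak counts as a function of height, in signal-to-noise bins.
counts, edges = np.histogram(peaks / sigma, bins=np.arange(-2, 6))
```

A cosmological analysis would build such histograms from simulated maps in different cosmologies and compare them to the observed counts.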

## Nov 12, 2015

**Speaker**

Jean-Yves Ottmann (Université Paris-Dauphine)

**Title**

Well-being and ill-being at work in laboratory professions: the case of the CEA [slides]

**Abstract**

Well-being and ill-being at work are topical concepts that intersect with numerous models and theories from different disciplines and epistemologies. One may question the relevance of these models for understanding the relationship to work of intellectual professions, expert activities, or knowledge workers.

Starting from a synthesis of the existing literature on these subjects, this thesis studies the relationship to work in hard-science laboratory professions. The work takes a qualitative, comprehensive, and interpretivist approach, based on an embedded multiple-case study of four laboratories of the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA).

This research shows that the different staff categories present in the laboratories share the same sources of engagement and well-being but have different factors of ill-being. Moreover, they all exhibit "ambiguous relationships to work", a simultaneous conjunction of well-being and ill-being, which forces us to rethink the articulation between well-being and ill-being at work.

## Oct 29, 2015

**Speaker**

Fred Ngolè (CosmoStat)

**Title**

Matrix factorization for PSF field restoration (rehearsal for the Euclid WL Autumn Meeting in London)

## Oct 15, 2015

**Speaker**

Fabien Lacasa (Institut d'Astrophysique spatiale, IAS)

**Title**

Combining cluster counts and galaxy angular power spectrum

**Abstract**

Thanks to its five-year observation of the southern sky (2014-2019), the Dark Energy Survey will enable unprecedented studies of galaxy clustering and of cluster constraints on cosmology and galaxy evolution. I will show ongoing work to combine these two probes to constrain vanilla cosmology, dark energy, and Halo Occupation Distribution (HOD) parameters. The halo model can be used to model the cross-covariance between cluster counts and the galaxy power spectrum (or real-space correlation function), and I will introduce a diagrammatic method to easily compute the different terms involved and to represent them simply. I will show the importance of using a non-linear model for predictions, and that the cross-covariance is particularly important at low redshifts. A Fisher analysis will demonstrate the nice complementarity and synergy of the two probes for cosmological and HOD parameters. If time permits, I will also show the development of a joint likelihood using the Gram-Charlier series that respects the Poissonian character of cluster counts.
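As a minimal sketch of the Fisher-analysis step (with made-up numbers, not DES forecasts), two probes with complementary degeneracy directions can be combined by simply adding their Fisher matrices when the cross-covariance is neglected, and the marginalized errors shrink:

```python
import numpy as np

# Toy Fisher matrices for two probes over two parameters, e.g.
# (Omega_m, sigma_8). The numbers are purely illustrative: each probe
# alone has a strong degeneracy, but in opposite directions.
F_counts = np.array([[400.0, -380.0],
                     [-380.0, 400.0]])   # "cluster counts"
F_galaxy = np.array([[400.0,  380.0],
                     [380.0,  400.0]])   # "galaxy power spectrum"

def marginalized_errors(F):
    """1-sigma marginalized errors: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Ignoring the cross-covariance, a joint analysis adds Fisher matrices.
F_joint = F_counts + F_galaxy

err_counts = marginalized_errors(F_counts)
err_joint = marginalized_errors(F_joint)
print(err_counts, err_joint)  # joint errors are smaller than either probe's
```

The cross-covariance terms discussed in the talk would correct this naive addition; the sketch only shows why combining probes with different degeneracies is powerful.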

## Sep 29, 2015

**Speaker**

Pierre Weiss and Paul Escande (Institut de Mathématiques de Toulouse)

**Title**

On the decomposition of blur operators in wavelet bases

**Abstract**

In this talk, we will begin by establishing several properties of blur operators (convolutions or, more generally, regularizing integral operators) when they are expressed in wavelet bases. Results of this type seem to have been the initial motivation for Yves Meyer's introduction of wavelets. Surprisingly, they have been used very little in imaging, despite the success of wavelets in that field. We will then show that these theoretical results can have a strong impact on the practical solution of deblurring inverse problems. As an example, we will show that they yield gains of one to two orders of magnitude in deconvolution times with $$\ell_1$$ regularization on wavelet coefficients (bases or tight frames).
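A small numerical sketch of the underlying phenomenon (an illustrative construction, not the authors'): expressing a 1D circular Gaussian blur operator in an orthonormal Haar basis makes most of its entries negligibly small, which is what fast wavelet-domain deblurring exploits:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])              # scaling (averaging) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-detail rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
# Circular Gaussian blur operator as a dense matrix (illustrative sigma = 2).
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, n - d)                      # circular distance
H = np.exp(-0.5 * (d / 2.0) ** 2)
H /= H.sum(axis=1, keepdims=True)             # normalize rows to sum to 1

W = haar_matrix(n)
A = W @ H @ W.T  # the blur operator expressed in the Haar basis

# Most entries are tiny: the operator is approximately sparse in wavelets,
# while the dense matrix H itself has broad, non-negligible bands.
frac_small = np.mean(np.abs(A) < 1e-3 * np.abs(A).max())
print(frac_small)
```

Thresholding the small entries of such a representation gives a compressed operator that is much cheaper to apply inside iterative deconvolution schemes.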

## Sep 17, 2015

**Speaker**

Julien Girard (CosmoStat)

**Title**

Facing current and future calibration/imaging challenges of the Square Kilometre Array (SKA) [slides]

**Abstract**

After a brief introduction to the science drivers and techniques of the next-generation radio telescope, the Square Kilometre Array (SKA), I will attempt to give a glimpse of the current imaging and calibration issues that still need to be tackled to achieve full-polarization, multi-frequency, high dynamic range imaging in short exposure times over a wide field of view. Currently, a lot of effort is being devoted to developing economical approaches that will reduce the high computational demands of this giant facility. Hopefully, the development and use of convex optimization methods, sparsity (e.g. Garsden et al. 2015, Girard et al. 2015), and dictionary learning, combined with the "Measurement Equation" framework (Smirnov 2011, Tasse 2014) in radio interferometry, may help meet these challenges.

## Sep 10, 2015

**Speaker**

Austin Peel (CosmoStat)

**Title**

Investigating cosmological applications of the inhomogeneous Szekeres models [slides]

**Abstract**

Exact solutions of Einstein's field equations that can describe evolving complex structures in the universe provide complementary frameworks to standard perturbation theory in which to analyze cosmological and astrophysical phenomena. I will present an overview of my thesis work, in which I explored cosmological applications of a certain family of non-symmetric spacetimes called Szekeres models. The Szekeres metric, a well-studied exact solution of general relativity, is inhomogeneous and anisotropic in general but contains the smooth FLRW metric as a special limiting case. It is therefore useful in studies where structures evolve within a $$\Lambda$$CDM background but still retain the full nonlinearity of general relativity. My work focused on Szekeres models in two primary contexts that I will discuss: the growth rate of large-scale structures, and light propagation in a Swiss-cheese model of the universe.