Beyond self-acceleration: force- and fluid-acceleration

The notion of self-acceleration has been introduced as a convenient way to theoretically distinguish cosmological models in which acceleration is due to modified gravity from those in which it is due to the properties of matter or fields. In this paper we review the concept of self-acceleration as given, for example, by [1], and highlight two problems. First, it applies only to universal couplings; second, it is too narrow, i.e. it excludes models in which the acceleration can be shown to be induced by a genuine modification of gravity, for instance coupled dark energy with a universal coupling, the Hu-Sawicki f(R) model or, in the context of inflation, the Starobinsky model. We then propose two new, more general, concepts in its place: force-acceleration and fluid-acceleration, which are also applicable to non-universally coupled cosmologies. We illustrate their concrete application with two examples, drawn from the modified gravity classes still in agreement with current data, namely f(R) models and coupled dark energy.

As noted already, for example, in [35, 36], we further remark that at present non-universal couplings are among the (few) classes of models which survive gravitational wave detection and local constraints (see [12] for a review of models surviving with a universal coupling). This is because, by construction, baryonic interactions are standard and satisfy solar system constraints; furthermore, the speed of gravitational waves in these models is c_T = 1 and therefore in agreement with gravitational wave detection. It has also been noted (see for example [37-39] and the update in [33]) that models in which a non-universal coupling to dark matter particles is considered would also ease the tension in the measurement of the Hubble parameter [40], owing to the degeneracy between the coupling β and H0 first noted in Ref. [41].

Reference: L. Amendola, V. Pettorino, "Beyond self-acceleration: force- and fluid-acceleration", Physics Letters B, in press, 2020.

DeepMass: The first Deep Learning reconstruction of dark matter maps from weak lensing observational data (DES SV)

This is the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network (CNN) with a U-Net-based architecture on over 3.6 x 10^5 simulated data realisations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. Our DeepMass method is substantially more accurate than existing mass-mapping methods. On a validation set of 8000 simulated DES SV data realisations, the DeepMass method improved the mean-square error (MSE) by 11 per cent compared to Wiener filtering with a fixed power spectrum. With N-body simulated MICE mock data, we show that Wiener filtering with the optimal known power spectrum still gives a worse MSE than our generalised method with no input cosmological parameters; we show that the improvement is driven by the non-linear structures in the convergence. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST.
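For readers unfamiliar with the architecture family, the sketch below shows a minimal U-Net-style encoder-decoder in Keras, trained with an MSE loss as the abstract describes. It is not the authors' DeepMass network: the layer counts, filter sizes and the 128-pixel map size are illustrative assumptions.

```python
# Minimal U-Net-style encoder-decoder sketch for convergence-map reconstruction.
# NOT the actual DeepMass architecture; shapes and layer counts are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(map_size=128):
    inp = tf.keras.Input(shape=(map_size, map_size, 1))   # noisy input map

    # Encoder: two downsampling stages
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder with skip connections (the defining U-Net feature)
    u2 = layers.Conv2DTranspose(64, 3, strides=2, padding="same")(b)
    u2 = layers.Concatenate()([u2, c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(c3)
    u1 = layers.Concatenate()([u1, c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    out = layers.Conv2D(1, 1, padding="same")(c4)          # reconstructed convergence
    return Model(inp, out)

model = build_unet()
# Training on an MSE loss targets the posterior mean of the convergence.
model.compile(optimizer="adam", loss="mse")
# model.fit(noisy_input_maps, true_convergence_maps, ...)  # pairs from simulations
```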

Reference: N. Jeffrey, F. Lanusse, O. Lahav, J.-L. Starck, "Learning dark matter map reconstructions from DES SV weak lensing data", Monthly Notices of the Royal Astronomical Society, in press, 2019.

 

Euclid joint meeting: WL + GC + CG SWG + OU-LE3

Date

February 3 - 7, 2020


Venue

IAP - Institut d'Astrophysique de Paris, 98 bis, bd Arago, 75014 Paris


Program

The preliminary schedule can be found here:

https://docs.google.com/document/d/1XHDepk3W4897GMqxABpo4vgubhm2LFVYVOCgyTqGS_I/edit

Slides (password-protected) are on redmine.

The meeting starts on Monday 3 February at 9:30.

 


Participant list

Please add your name to the following list if you intend to participate. To access IAP, external people are required to indicate their name in advance of the meeting, and might have to show identification at the IAP front desk. There is no conference fee.  

https://docs.google.com/document/d/17Hn8Z6LH54fJDbDY2uQPtZPauZotm6IsnNC4LbBcmII/edit


Practical information

How to get to IAP.

Hotel list.

Restaurant list.


Contacts

Martin Kilbinger  <kilbinger@iap.fr>

Sandrine Codis <codis@iap.fr>

 

Cosmostat Day on Machine Learning in Astrophysics

Date: January 17th, 2020

Organizer:  Joana Frontera-Pons  <joana.frontera-pons@cea.fr>

Venue:

Local information

CEA Saclay is around 23 km south of Paris. The astrophysics division (DAp) is located at the CEA site at Orme des Merisiers, which is around 1 km south of the main CEA campus. See http://www.cosmostat.org/contact for detailed information on how to get there.


On January 17th, 2020, we are organizing the 5th Cosmostat Day on Machine Learning in Astrophysics at DAp, CEA Saclay.

Program:

All talks are taking place at DAp, Salle Galilée (Building 713)

10:00 - 10:15h. Welcome and coffee
10:15 - 10:45h. Parameter inference using neural networks - Tom Charnock (Institut d'Astrophysique de Paris)
10:45 - 11:15h. Detection and characterisation of solar-type stars with machine learning - Lisa Bugnet (DAp, CEA Paris-Saclay)
11:15 - 11:45h. DeepMass: Deep learning dark matter map reconstructions with Dark Energy Survey data - Niall Jeffrey (ENS)

12:00 - 13:30h. Lunch

13:30 - 14:00h. Hybrid physical-deep learning models for astronomical image processing - François Lanusse (Berkeley Center for Cosmological Physics and CosmoStat, CEA Paris-Saclay)
14:00 - 14:30h. A flexible EM-like clustering algorithm for noisy data - Violeta Roizman (L2S, CentraleSupélec)
14:30 - 15:00h. Regularizing Optimal Transport Using Regularity Theory - François-Pierre Paty (CREST, ENSAE)
15:00 - 15:30h. Deep Learning @ Safran for Image Processing - Arnaud Woiselle (Safran Electronics and Defense)

15:30 - 16:00h. End of the day


Parameter inference using neural networks

Tom Charnock (Institut d'Astrophysique de Paris)

Neural networks with large training sets are currently providing tighter constraints on cosmological and astrophysical parameters than ever before. However, in their current form, these neural networks are unable to give true Bayesian inference of such model parameters. I will describe why this is the case and present two methods by which the information-extracting power of neural networks can be built into the necessary robust statistical framework to perform trustworthy inference, whilst at the same time massively reducing the quantity of training data required.


Detection and characterisation of solar-type stars with machine learning

Lisa Bugnet (DAp, CEA Paris-Saclay)

Stellar astrophysics was strengthened in the 1970s by the discovery of stellar oscillations due to acoustic waves inside the Sun. These waves, evolving inside solar-type stars, carry information about the composition and dynamics of the surrounding plasma, and are thus very interesting for the understanding of stellar internal and surface physical processes. With classical asteroseismology we are able to extract very precise and accurate masses, radii, and ages of oscillating stars, which are key parameters for the understanding of stellar evolution.
However, classical methods of asteroseismology are time-consuming processes that can only be applied to stars showing a large enough oscillation signal. In the context of the hundreds of thousands of stars observed by the Transiting Exoplanet Survey Satellite (TESS), the stellar community has to adapt the methodologies previously built for the study of the few tens of thousands of stars observed with much better resolution by the Kepler satellite. Our method exploits Random Forest machine learning algorithms that aim at automatically 1) classifying and 2) characterising any stellar pulsator from global, non-seismic parameters. We also present a recent result, based on neural networks, on the automatic detection of peculiar solar-type pulsators that have a surprisingly low dipolar-oscillation amplitude, the signature of an unknown physical process affecting oscillation modes inside the core.
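As a rough illustration of the classification step described above (not the speaker's pipeline), a Random Forest can be trained on a table of global, non-seismic parameters; the feature names and the mock training data below are hypothetical placeholders.

```python
# Illustrative sketch: classifying stellar pulsators from global, non-seismic
# parameters with a Random Forest. Features and labels are mock placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(5500, 800, n),   # effective temperature [K] (assumed feature)
    rng.normal(0.0, 0.5, n),    # log surface gravity, relative (assumed feature)
    rng.normal(12.0, 2.0, n),   # apparent magnitude (assumed feature)
])
y = rng.integers(0, 2, n)       # 0 = non-pulsator, 1 = solar-type pulsator (mock labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```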


DeepMass: Deep learning dark matter map reconstructions with Dark Energy Survey data

Niall Jeffrey (ENS)

I will present the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network (CNN) with a U-Net-based architecture on over 3.6×10^5 simulated data realisations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. We interpret our newly created DES SV map as an approximation of the posterior mean of the convergence κ given the observed shear γ. The DeepMass method is substantially more accurate than existing mass-mapping methods on a validation set of 8000 simulated DES SV data realisations. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST.
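The accuracy comparison quoted here boils down to a pixel-wise mean-square error averaged over the validation maps; a minimal sketch of that evaluation is given below (array names are placeholders, not the paper's data products).

```python
# Sketch of the evaluation: pixel-wise MSE of a reconstruction against the true
# simulated convergence, averaged over a validation set of maps.
import numpy as np

def mse(reconstruction, truth):
    """Mean-square error between two stacks of maps, shape (n_maps, ny, nx)."""
    return np.mean((reconstruction - truth) ** 2)

def relative_improvement(mse_a, mse_b):
    """Percentage improvement of method A over method B (e.g. CNN vs Wiener)."""
    return 100.0 * (mse_b - mse_a) / mse_b
```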


Hybrid physical-deep learning models for astronomical image processing

François Lanusse (Berkeley Center for Cosmological Physics and CosmoStat CEA Paris Saclay)

The upcoming generation of wide-field optical surveys which includes LSST will aim to shed some much needed light on the physical nature of dark energy and dark matter by mapping the Universe in great detail and on an unprecedented scale. However, with the increase in data quality also comes a significant increase in  data complexity, bringing new and outstanding challenges at all levels of the scientific analysis.
In this talk, I will illustrate how deep generative models, combined with physical modeling, can be used to address some of these challenges at the image processing level, specifically by providing data-driven priors of galaxy morphology.
I will first describe how to build such generative models from corrupted and heterogeneous data, i.e. when the training set contains varying observing conditions (in terms of noise, seeing, or even instruments). This is a necessary step for practical applications, made possible by a hybrid modeling of the generation process, using deep neural networks to model the underlying distribution of galaxy morphologies, complemented by a physical model of the noise and instrumental response. Sampling from these models produces realistic galaxy light profiles, which can then be used in survey emulation, for the purpose of validating and/or calibrating data reduction pipelines.

Even more interestingly, these models can be used as priors on galaxy morphologies within standard Bayesian inference techniques to solve astronomical inverse problems ranging from deconvolution to deblending of galaxy images. I will present how combining these deep morphology priors with a physical forward model of observed blended scenes allows us to address the galaxy deblending problem in a physically motivated and interpretable way.
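The hybrid structure can be caricatured as a learned generator composed with a fixed physical forward model (instrumental response plus noise). The toy sketch below uses a Gaussian blob as a stand-in for the deep generative model; everything in it is an illustrative assumption, not the speaker's implementation.

```python
# Toy sketch of the hybrid idea: a "generator" proposes a clean galaxy light
# profile, and a fixed physical forward model (PSF convolution + pixel noise)
# maps it to an observed image. The generator here is a placeholder blob.
import numpy as np
from scipy.signal import fftconvolve

def toy_generator(rng, size=64):
    """Placeholder for a deep generative model of galaxy morphology."""
    y, x = np.mgrid[:size, :size] - size / 2
    r2 = (x / rng.uniform(2, 6)) ** 2 + (y / rng.uniform(2, 6)) ** 2
    return np.exp(-0.5 * r2)

def forward_model(clean_image, psf, noise_sigma, rng):
    """Physical part: instrumental response (PSF) plus additive Gaussian noise."""
    blurred = fftconvolve(clean_image, psf, mode="same")
    return blurred + rng.normal(0.0, noise_sigma, blurred.shape)

rng = np.random.default_rng(1)
psf = toy_generator(rng)            # reuse a blob as a stand-in PSF
psf /= psf.sum()
observed = forward_model(toy_generator(rng), psf, noise_sigma=0.01, rng=rng)
```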


A flexible EM-like clustering algorithm for noisy data

Violeta Roizman (L2S, CentraleSupélec)

Though very popular, the EM algorithm is well known to perform poorly in the presence of non-Gaussian distribution shapes and outliers. This talk will present a flexible EM-like clustering algorithm that can deal with noise and outliers in diverse data sets. This flexibility is due to extra scale parameters that allow us to accommodate heavier-tailed distributions and outliers without significantly losing efficiency in various classical scenarios. I will show experiments where we compare it to other clustering methods such as k-means, EM, and spectral clustering, applied to both synthetic and real data sets. I will conclude with an application example of our algorithm used for image segmentation.
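The snippet below only sets up the kind of comparison mentioned in the abstract: the standard baselines (k-means and Gaussian-mixture EM) run on data contaminated with outliers. It does not implement the speaker's robust EM-like algorithm, and the synthetic data are made up for the example.

```python
# Baselines on outlier-contaminated data: k-means and Gaussian-mixture EM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cluster_a = rng.normal([0, 0], 0.5, size=(200, 2))
cluster_b = rng.normal([4, 4], 0.5, size=(200, 2))
outliers = rng.uniform(-10, 14, size=(40, 2))          # heavy contamination
X = np.vstack([cluster_a, cluster_b, outliers])

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
# With outliers present, both partitions can shift; a heavy-tailed EM variant
# down-weights such points through per-point scale parameters.
```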


Regularizing Optimal Transport Using Regularity Theory

François-Pierre Paty (CREST, ENSAE)

Optimal transport (OT) dates back to the end of the 18th century, when the French mathematician Gaspard Monge proposed to solve the problem of déblais and remblais. In the last few years, OT has also found new applications in statistics and machine learning as a way to analyse and compare data. Both in practice and for statistical reasons, OT needs to be regularized. In this talk, I will present a new regularization of OT that leverages the regularity of the Monge map. Instead of considering regularity as a property that can be proved under suitable assumptions, we consider regularity as a condition that must be enforced when estimating OT. This further allows us to transport out-of-sample points, as well as define a new estimator of the 2-Wasserstein distance between arbitrary measures. (Based on joint work with Alexandre d'Aspremont and Marco Cuturi.)
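For context, the sketch below computes plain (unregularized) discrete OT between two empirical measures with the POT library; the talk's contribution, enforcing regularity of the Monge map, is not implemented here, and the sample data are arbitrary.

```python
# Unregularized discrete optimal transport between two empirical measures.
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(100, 2))       # samples from source measure
xt = rng.normal(3.0, 1.0, size=(100, 2))       # samples from target measure

a = np.full(100, 1.0 / 100)                    # uniform weights on source points
b = np.full(100, 1.0 / 100)                    # uniform weights on target points
M = ot.dist(xs, xt)                            # squared Euclidean cost matrix
w2_squared = ot.emd2(a, b, M)                  # squared 2-Wasserstein estimate
print("W2 estimate:", np.sqrt(w2_squared))
```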


Deep Learning @ Safran for Image Processing

Arnaud Woiselle (Safran Electronics and Defense)

Deep learning has for many years been the natural tool in computer vision for nearly all high-level tasks, such as object detection and classification, and is now state of the art in most image processing (restoration) tasks, such as deblurring or super-resolution. Safran has investigated these methods for a large variety of problems, focusing on a small number of network structures because of electronics constraints for future implementation, and has transferred them to real-life noisy and blurry data, both in the visible and the infrared. I will show results from many applications, and conclude with some tips and take-away messages on what seems important when applying deep learning to a given task.


Previous Cosmostat Days on Machine Learning in Astrophysics:

Space test of the Equivalence Principle: first results of the MICROSCOPE mission

Authors: P. Touboul, G. Métris, M. Rodrigues, Y. André, Q. Baghi, J. Bergé, D. Boulanger, S. Bremer, R. Chhun, B. Christophe, V. Cipolla, T. Damour, P. Danto, H. Dittus, P. Fayet, B. Foulon, P.-Y. Guidotti, E. Hardy, P.-A. Huynh, C. Lämmerzahl, V. Lebat, F. Liorzou, M. List, I. Panet, S. Pires, B. Pouilloux, P. Prieur, S. Reynaud, B. Rievers, A. Robert, H. Selig, L. Serron, T. Sumner, P. Visser
Journal: Classical and Quantum Gravity
Year: 2019
Download: ADS | arXiv


Abstract

The Weak Equivalence Principle (WEP), stating that two bodies of different compositions and/or masses fall at the same rate in a gravitational field (universality of free fall), is at the very foundation of General Relativity. The MICROSCOPE mission aims to test its validity to a precision of 10^-15, two orders of magnitude better than current on-ground tests, by using two masses of different compositions (titanium and platinum alloys) on a quasi-circular trajectory around the Earth. This is realised by measuring the accelerations inferred from the forces required to maintain the two masses exactly in the same orbit. Any significant difference between the measured accelerations, occurring at a defined frequency, would correspond to the detection of a violation of the WEP, or to the discovery of a tiny new type of force added to gravity. MICROSCOPE's first results show no hint of such a difference, expressed in terms of the Eötvös parameter δ = [-1 ± 9(stat) ± 9(syst)] × 10^-15 (both 1σ uncertainties) for the titanium and platinum pair of materials. This result was obtained from a session with 120 orbital revolutions, representing 7% of the data acquired so far during the whole mission. The quadratic combination of the 1σ uncertainties leads to a current limit on δ of about 1.3 × 10^-14.
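Reading the quoted limit as the statistical and systematic 1σ uncertainties added in quadrature (an interpretation of the sentence above, not an additional result), the arithmetic works out as follows:

```python
# Quadrature combination of the two 1-sigma uncertainties on delta.
import math

stat, syst = 9e-15, 9e-15
limit = math.sqrt(stat**2 + syst**2)
print(f"{limit:.1e}")   # ~1.3e-14
```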

GOLD : The Golden Cosmological Surveys Decade

This 10-week programme on the Golden Cosmological Surveys Decade will be held at the new Institut Pascal, in Paris Orsay, from 1st April 2020 to 5th June 2020. The Institut Pascal provides offices, seminar rooms, common areas and supports long-term scientific programmes. 
 
GOLD 2020 will include a summer school and three workshops (on Lensing, Galaxy Clustering, and Theory and Interpretation of the Data).
In between, an active training programme will be run. We plan to host around 40 people for the whole programme, plus around 30 scientists during the workshops.
Whether you are a PhD student, a postdoc, or a senior scientist, if you are interested in attending this programme you can now apply. Deadline for applications: 1st October 2019.

 

 

Euclid joint meeting: WL + GC + CG SWG + OU-LE3

Dates: February 3 - 7, 2020

Organisers:  Martin Kilbinger, ...

Venue: Institut d'Astrophysique de Paris (IAP),  98bis bd Arago, 75014 Paris.

Local information: http://www.iap.fr/accueil/acces/acces.php?langue=en

Contact: martin.kilbinger@cea.fr


Registration

Please add your name to the following google doc if you are planning to attend the meeting.

https://docs.google.com/document/d/17Hn8Z6LH54fJDbDY2uQPtZPauZotm6IsnNC4LbBcmII/edit?usp=sharing

There is no registration fee. Coffee and snacks will be provided for the breaks. For lunch, participants are invited to go to the nearby restaurants, shops, or snack stands
(see http://www.iap.fr/vie_scientifique/colloques/Colloque_IAP/2018/i-practicalinfo.html#lunch for some ideas).

 

 

Euclid preparation III. Galaxy cluster detection in the wide photometric survey, performance and algorithm selection

 

Authors: Euclid Collaboration, R. Adam, ..., S. Farrens, et al.
Journal: A&A
Year: 2019
Download: ADS | arXiv


Abstract

Galaxy cluster counts in bins of mass and redshift have been shown to be a competitive probe to test cosmological models. This method requires an efficient blind detection of clusters from surveys with a well-known selection function and robust mass estimates. The Euclid wide survey will cover 15,000 deg^2 of the sky in the optical and near-infrared bands, down to magnitude 24 in the H-band. The resulting data will make it possible to detect a large number of galaxy clusters spanning a wide range of masses up to redshift ∼2. This paper presents the final results of the Euclid Cluster Finder Challenge (CFC). The objective of these challenges was to select the cluster detection algorithms that best meet the requirements of the Euclid mission. The final CFC included six independent detection algorithms, based on different techniques such as photometric redshift tomography, optimal filtering, hierarchical approaches, wavelets, and friends-of-friends algorithms. These algorithms were blindly applied to a mock galaxy catalogue with representative Euclid-like properties. The relative performance of the algorithms was assessed by matching the resulting detections to known clusters in the simulations. Several matching procedures were tested, making it possible to estimate the associated systematic effects on completeness to <3%. All the tested algorithms are very competitive in terms of performance, with three of them reaching >80% completeness for a mean purity of 80% down to masses of 10^14 M⊙ and up to redshift z = 2. Based on these results, two algorithms were selected for implementation in the Euclid pipeline: the AMICO code, based on matched filtering, and the PZWav code, based on an adaptive wavelet approach.
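The two headline metrics used to rank the algorithms reduce to simple fractions once detections have been matched to true clusters; a minimal sketch is given below. The matching itself, done in the paper with several procedures, is reduced here to a boolean match flag, and the function names are illustrative.

```python
# Completeness: fraction of true clusters recovered by a detection algorithm.
# Purity: fraction of detections matched to a true cluster.
import numpy as np

def completeness(true_is_matched):
    """true_is_matched: bool array, one entry per true cluster in the mock."""
    return np.mean(true_is_matched)

def purity(detection_is_matched):
    """detection_is_matched: bool array, one entry per detection."""
    return np.mean(detection_is_matched)
```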