On January 27th, 2017, we are organizing a day on machine learning in astrophysics at SAp, CEA Saclay.
All talks take place at SAp, Salle Galilée (Building 713).
14:00 – 14:30h. Valeriya Naumova (Simula Research Laboratory)
14:30 – 15:00h. Stephane Paltani (University of Geneva)
15:00 – 15:30h. Ben Hoyle (Ludwig-Maximilians-Universität)
15:30 – 16:00h. Coffee break
16:00 – 16:30h. Alexandre Gramfort (Telecom ParisTech)
16:30 – 17:00h. Pierre Blanchart (CEA Saclay)
17:00 – 17:30h. Joana Frontera-Pons (CEA Saclay – CosmoStat)
Image separation using multi-penalty regularisation
Valeriya Naumova (Simula Research Laboratory)
Photometric-Redshift Computations with Machine-Learning Techniques in the Context of the Euclid Mission
Stephane Paltani (University of Geneva)
ESA’s Euclid mission aims at uncovering the nature of the mysterious dark energy through the determination of its equation of state. One of the two main probes of Euclid, weak-lensing tomography, requires approximate, but very accurate, knowledge of the redshifts of the galaxies. A group, the so-called Photometric-Redshift Organization Group, is in charge of determining how to address this question. A number of algorithms, most of them based on machine learning, have been compared in the Euclid Data Challenge 2. I will review the performance and shortcomings of these algorithms, and present several approaches that we are exploring to improve the accuracy and precision of the Euclid photometric redshifts.
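To give a flavour of how a machine-learning photometric-redshift estimator works, here is a minimal, hypothetical sketch: k-nearest-neighbours regression in colour space. The data are simulated with an invented colour–redshift relation; this is not Euclid data, nor any of the Data Challenge 2 algorithms.

```python
# Hypothetical sketch of an ML photometric-redshift estimator:
# k-nearest-neighbours regression in colour space, on SIMULATED data.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, k = 5000, 100, 15

# Simulate a smooth (invented) colour-redshift relation with photometric scatter.
z_train = rng.uniform(0.0, 2.0, n_train)
colors_train = np.column_stack([np.sin(z_train), z_train ** 2]) \
    + 0.05 * rng.normal(size=(n_train, 2))
z_test = rng.uniform(0.0, 2.0, n_test)
colors_test = np.column_stack([np.sin(z_test), z_test ** 2]) \
    + 0.05 * rng.normal(size=(n_test, 2))

# Predict each test redshift as the mean redshift of its k nearest
# training neighbours in colour space.
d2 = ((colors_test[:, None, :] - colors_train[None, :, :]) ** 2).sum(-1)
nn = np.argsort(d2, axis=1)[:, :k]
z_pred = z_train[nn].mean(axis=1)

# A standard photo-z quality metric: scatter of (z_pred - z) / (1 + z).
sigma = np.std((z_pred - z_test) / (1 + z_test))
print(f"scatter sigma = {sigma:.3f}")
```

Real pipelines differ in the features used, the regressor, and the treatment of outliers, but the train-on-spectroscopy, predict-from-photometry structure is the same.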
Machine learning methods for star/galaxy separation
Ben Hoyle (Ludwig-Maximilians-Universität)
Deciding which photometric properties, or features, to use in star/galaxy/quasar separation algorithms is often heavily influenced by the user and is often suboptimal. I highlight some data-driven methods, including feature generation and pre-selection, which provide guidance on the inputs to both simple algorithms and machine-learning algorithms for star/galaxy/quasar separation. I showcase these methods using data from the Dark Energy Survey, and from the SDSS and its follow-up programs.
Learning from neuroscience time series by minimizing objective functions
Alexandre Gramfort (Telecom ParisTech)
Understanding how the brain works in healthy and pathological conditions is considered one of the challenges of the 21st century. After the first electroencephalography (EEG) measurements in 1929, the 1990s saw the birth of modern functional brain imaging, with the first functional MRI (fMRI) and full-head magnetoencephalography (MEG) systems. By noninvasively offering unique insights into the living brain, imaging has revolutionized both clinical and cognitive neuroscience over the last twenty years. Over the last 10 to 15 years, driven by more open data and recent algorithmic progress, the field of brain imaging and electrophysiology has embraced a new set of tools to extract knowledge from data. Using statistical machine learning, new applications have emerged, ranging from brain-computer interaction systems and “mind reading” to cortical source imaging at a millisecond time scale. In this talk, I will focus on this last problem and detail some recent contributions at the interface of statistics and convex optimization under sparsity constraints.
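To make the “convex optimization under sparsity constraints” idea concrete, here is a minimal, hypothetical sketch (plain NumPy, fully simulated; not the speaker's code nor a real M/EEG forward model) of sparse source estimation with the Lasso, solved by ISTA (iterative soft-thresholding). In source imaging, G would be the forward (gain) matrix and m the sensor measurements.

```python
# Hypothetical sketch: sparse source estimation via the Lasso,
# min_x 0.5*||G x - m||^2 + lam*||x||_1, solved with ISTA.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 50, 200
G = rng.normal(size=(n_sensors, n_sources))    # simulated forward matrix
x_true = np.zeros(n_sources)
x_true[[10, 80, 150]] = [2.0, -1.5, 1.0]       # a few active sources
m = G @ x_true + 0.01 * rng.normal(size=n_sensors)

lam = 2.0                                      # sparsity level
L = np.linalg.norm(G, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n_sources)
for _ in range(500):
    grad = G.T @ (G @ x - m)                   # gradient of the smooth term
    z = x - grad / L                           # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

print("non-zero sources:", np.flatnonzero(np.abs(x) > 0.1))
```

The soft-thresholding step is what enforces sparsity: most coefficients are driven exactly to zero, leaving only a few active sources.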
Recognition of user context from audio-motion signals using “deep learning” methods
Pierre Blanchart (CEA Saclay)
In this presentation, we review the work carried out within our team on recognizing the user's context from the signals of audio and motion sensors embedded in phones. We present the different “deep” architectures implemented, as well as the underlying optimization problems linked to the variability of user contexts and the difficulty of freezing all these contexts once and for all in a single model that no longer evolves. We also present online learning settings designed to adapt to the user, which introduce network re-training constraints and raise problems of reactivity to new data while preserving the performance of models learned from previous data.
Unsupervised feature learning for galaxy SEDs with denoising autoencoders
Joana Frontera-Pons (CEA Saclay – CosmoStat)
With the increasing number of deep multi-wavelength galaxy surveys, the spectral energy distribution (SED) of galaxies has become an invaluable tool for studying the formation of their structures and their evolution. In this context, standard analyses rely on simple spectro-photometric selection criteria based on a few SED colors. While this fully supervised classification has already yielded clear achievements, it is not optimal for extracting relevant information from the data. In this work, we propose to employ very recent advances in machine learning, and more precisely in feature learning, to derive a data-driven diagram. We show that the proposed approach, based on denoising autoencoders, recovers the bi-modality in the galaxy population in an unsupervised manner, without using any prior knowledge of galaxy SED classification. Furthermore, preliminary results illustrate that it captures additional physically meaningful information, such as galaxy mass evolution.
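As an illustrative, hypothetical sketch of the denoising-autoencoder idea (plain NumPy, simulated “SED-like” vectors rather than real photometry), the network below is trained to reconstruct clean inputs from corrupted ones, so its hidden layer must learn the structure of the population rather than copy the input.

```python
# Toy denoising autoencoder (DAE) in plain NumPy on SIMULATED data:
# a bimodal population of 10-dimensional "SED-like" vectors.
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 500, 10, 3                       # samples, input dim, hidden dim

# Two clusters of vectors, mimicking a bimodal galaxy population.
centers = rng.normal(size=(2, d))
labels = rng.integers(0, 2, n)
X = centers[labels] + 0.1 * rng.normal(size=(n, d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = 0.1 * rng.normal(size=(d, h)); b1 = np.zeros(h)   # encoder
W2 = 0.1 * rng.normal(size=(h, d)); b2 = np.zeros(d)   # decoder
lr = 0.1
for epoch in range(2000):
    X_noisy = X + 0.2 * rng.normal(size=X.shape)       # corrupt the input
    H = sigmoid(X_noisy @ W1 + b1)                     # encode
    X_hat = H @ W2 + b2                                # decode
    err = X_hat - X                                    # target is the CLEAN input
    # Backpropagation of the (scaled) squared reconstruction error.
    gW2 = H.T @ err / n; gb2 = err.mean(0)
    dH = (err @ W2.T) * H * (1 - H)
    gW1 = X_noisy.T @ dH / n; gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Learned low-dimensional features (codes) for each "SED".
codes = sigmoid(X @ W1 + b1)
mse = np.mean((codes @ W2 + b2 - X) ** 2)
print(f"reconstruction MSE: {mse:.4f}")
```

The learned codes play the role of the data-driven diagram: a low-dimensional representation in which population structure such as bi-modality can emerge without labels.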
This event is funded by the European Research Council (ERC) via the Lena project.
CEA Saclay is around 23 km south of Paris. The astrophysics division (SAp) is located at the CEA site at Orme des Merisiers, around 1 km south of the main CEA campus. See http://www.cosmostat.org/link/how-to-get-to-sap/ for detailed information on how to get there.