Optimal transport has become a mathematical gem at the interface of probability, analysis, and optimization. The theory was developed over a long period by the mathematical community, starting with Monge and continued by Kantorovich, and has found applications in several fields such as differential geometry, PDEs, and gradient flows, to name just a few.
Lately, it has begun to make its way into the machine learning and data processing communities. Optimal transport can be used to define a distance that is very useful for comparing histograms or point clouds, a typical scenario in today's applications. Breakthrough contributions, such as entropic regularization, have made the transport problem strictly convex and efficiently solvable, opening the door to many applications such as Wasserstein barycenters and dictionary learning.
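The entropic regularization mentioned above is commonly solved with Sinkhorn's matrix-scaling iterations. The following is a minimal sketch of that classical scheme for two histograms with a given cost matrix; the function name and parameter values are illustrative choices, not taken from the talk:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=200):
    """Entropy-regularized OT between histograms a and b (cost matrix C).

    Returns the transport plan P and the (regularized) transport cost.
    eps is the entropic regularization strength; smaller values get
    closer to exact OT but require more iterations.
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)             # scale so columns sum to b
        u = a / (K @ v)               # scale so rows sum to a
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, np.sum(P * C)
```

Each iteration is just two matrix-vector products, which is what makes the regularized problem so much cheaper than solving the original linear program.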
Nevertheless, Optimal Transport has not yet fully entered the signal processing community. One obstacle is that the theory is well developed for nonnegative measures, while very little work has been done to extend it to signed measures. From a machine learning point of view, this presentation will address some theoretical aspects of an Optimal Transport based "distance" for signed measures, which could prove useful in future applications such as Blind Source Separation. An algorithm for its efficient computation will be presented as well.
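One construction studied in the literature for comparing signed measures (not necessarily the one presented in the talk) splits each measure into positive and negative parts, mu = mu+ - mu-, and compares mu and nu via the Wasserstein distance between the nonnegative measures mu+ + nu- and nu+ + mu-. A minimal sketch for 1D histograms on a common grid, assuming equal total signed mass so that both sides carry the same mass (function names are illustrative):

```python
import numpy as np

def signed_w1(mu, nu, x):
    """OT-based "distance" between signed 1D histograms mu, nu on grid x,
    via the splitting d(mu, nu) = W1(mu+ + nu-, nu+ + mu-).
    Assumes mu.sum() == nu.sum() so both sides have equal total mass."""
    mu_p, mu_m = np.maximum(mu, 0), np.maximum(-mu, 0)
    nu_p, nu_m = np.maximum(nu, 0), np.maximum(-nu, 0)
    a = mu_p + nu_m
    b = nu_p + mu_m
    # In 1D, W1 equals the L1 distance between cumulative distributions
    cdf_diff = np.cumsum(a - b)
    return np.sum(np.abs(cdf_diff[:-1]) * np.diff(x))
```

Note that a - b = mu - nu, so the result is symmetric in mu and nu, and it vanishes exactly when the two signed histograms coincide.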