In many modelling scenarios we have to deal with a significant number of observation streams in order to describe a specific phenomenon. These can be of a wide variety of types, both continuous and discrete. Modelling such data poses significant challenges, both in terms of algorithmic and computational complexity. In order to proceed, a common strategy is to exploit dependencies in the data to reach a reduced-dimensional representation, which can be interpreted as a factorisation.
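As a toy illustration of this idea (a sketch, not part of the talk's models), a truncated SVD shows how high-dimensional observations lying near a low-dimensional subspace can be compressed into a product of low-rank factors; the data matrix `Y` and rank `k` here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 observations in 20 dimensions that in fact
# lie near a 3-dimensional subspace, plus a little noise.
Z = rng.normal(size=(100, 3))          # latent coordinates
W = rng.normal(size=(3, 20))           # mapping to observation space
Y = Z @ W + 0.01 * rng.normal(size=(100, 20))

# Truncated SVD gives the best rank-k factorisation Y ≈ U_k S_k V_k^T.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
k = 3
Y_hat = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]

# The low-rank reconstruction captures almost all of the variation.
rel_err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
print(rel_err)
```

The relative reconstruction error stays small because the data really is (near) low rank; probabilistic models such as the GP-LVM pursue the same goal with a nonlinear mapping and an explicit noise model.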
In this talk I will present two different probabilistic modelling approaches: the Gaussian Process Latent Variable Model and the Bayesian Network. Both aim to learn models of the data, but they rest on different assumptions and therefore lead to different factorisations. Our target application is a human-robot grasping scenario, which is characterised by multiple high-dimensional observation spaces that pose significant challenges for the models. We will describe the benefits and drawbacks of each approach and show how they can be used in a complementary fashion.