Hello, I'm Emilia Magniani, and I'm going to talk about ODE inverse problems. I'm fairly new in the group, so I'm going to present work that my colleagues did some time ago, which you can see on the screen.

Suppose we have a dynamical system described by an ODE depending on a parameter theta, a setup you have probably seen multiple times by now. What you normally want to do is either infer the parameter theta from some data, or predict the solution given the parameter theta; the related mathematical concepts are inverse problems and forward problems. In an ODE inverse problem, we are given noisy observations of the solution and we want to estimate the parameter theta, whereas in the ODE forward problem, we are given the parameter theta and we want to estimate the solution of the ODE.

We observe that ODE inverse problems are commonly treated as likelihood-free. That means the forward map, which takes the parameter theta to the solution, cannot be evaluated exactly: it has to be approximated numerically by an ODE solver, and this comes with an unaccounted numerical error. But do we need to treat them as likelihood-free? We argue that we do not. This is not a fundamental constraint, but rather an artifact of the non-optimal choice of classic ODE solvers, which return only a point estimate and no likelihood. If we use Gaussian ODE solvers instead, we can construct a local Gaussian approximation of the likelihood and compute its gradient.

On the left of the slide you can see a plot with the uncertainty-aware version of the likelihood in the first row and the unaware version in the second row; the true parameter, marked by the black cross, lies in the region of high probability only in the aware version. On the right, we compare gradient-based sampling in orange with likelihood-free sampling in blue, and you can see that the orange dots move much faster toward the mode than the blue dots, so we can outperform standard likelihood-free approaches.

So if you think your problem doesn't have a likelihood: think again.
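[Editor's note: as a concrete reading of the likelihood claim, in my own notation rather than the slide's: if for a given theta the Gaussian ODE solver returns a posterior mean m_theta(t) and variance C_theta(t) of the solution, and the data are y_i = x(t_i; theta) + eps_i with eps_i ~ N(0, sigma^2), then the likelihood can be approximated locally as]

```latex
p(y \mid \theta) \;\approx\; \prod_i \mathcal{N}\!\big(y_i \;;\; m_\theta(t_i),\; C_\theta(t_i) + \sigma^2\big)
```

[Unlike the point estimate returned by a classic solver, this is an explicit, differentiable function of theta, so its gradient is available to a sampler.]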
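[Editor's note: to illustrate the mechanism end to end, here is a minimal, self-contained sketch in JAX. It is not the authors' code: the logistic-growth dynamics, the crude per-step variance used as a stand-in for a real probabilistic solver's calibrated error estimate, the function names, and the unadjusted Langevin sampler are all assumptions made for illustration.]

```python
import jax
import jax.numpy as jnp


def vector_field(x, theta):
    # Toy dynamics (an assumption, not from the talk): logistic growth
    # with unknown rate theta.
    return theta * x * (1.0 - x)


def gaussian_ode_solve(theta, x0, ts, h=0.05):
    """Return an approximate mean and variance of x(t) at the times ts.

    Stand-in for a proper Gaussian ODE solver (ODE filter): explicit Euler
    for the mean, plus a crude accumulated local-error variance so that the
    solver's own uncertainty shows up in the likelihood.
    """
    means, variances = [], []
    x, var, t = x0, 0.0, 0.0
    for t_obs in ts:
        while t < t_obs:
            f = vector_field(x, theta)
            x = x + h * f                        # Euler step for the mean
            var = var + (0.5 * h ** 2 * f) ** 2  # heuristic error variance (assumption)
            t = t + h
        means.append(x)
        variances.append(var)
    return jnp.stack(means), jnp.stack(variances)


def log_likelihood(theta, ts, y, x0=0.1, obs_noise=0.05):
    """Local Gaussian approximation of log p(y | theta):
    each observation is N(mean_i, solver_variance_i + obs_noise**2)."""
    mean, var = gaussian_ode_solve(theta, x0, ts)
    total_var = var + obs_noise ** 2
    return jnp.sum(
        -0.5 * jnp.log(2.0 * jnp.pi * total_var)
        - 0.5 * (y - mean) ** 2 / total_var
    )


# Because the approximate likelihood is an explicit function of theta,
# its gradient is one autodiff call away.
grad_log_lik = jax.grad(log_likelihood)


if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    ts = jnp.linspace(0.5, 5.0, 10)
    theta_true = 1.5
    clean, _ = gaussian_ode_solve(theta_true, 0.1, ts)
    y = clean + 0.05 * jax.random.normal(key, ts.shape)

    # Unadjusted Langevin steps: the gradient pulls theta toward the mode,
    # the injected noise keeps it exploring around it.
    theta, step = 0.5, 1e-3
    for _ in range(200):
        key, sub = jax.random.split(key)
        g = grad_log_lik(theta, ts, y)
        theta = theta + step * g + jnp.sqrt(2.0 * step) * jax.random.normal(sub)
    print("estimated theta:", theta)
```

[The point of the sketch is only the shape of the pipeline: the solver returns a mean and a variance, the variance enters the Gaussian likelihood, and jax.grad turns that likelihood into the gradient a Langevin- or HMC-style sampler needs, which is what a likelihood-free method has to do without.]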