Hi, I'm Philip. If you've made it this far through this sequence of videos, you've learned quite a lot about differential equations. Maybe you're wondering: why should I care about differential equations? I'm a machine learning engineer. Well, solving differential equations is a numerical problem, and arguably all computations that happen inside a learning machine are numerical problems, too. What happens inside a computer when you train, when you fit, when you learn a model based on data, is the solution of a numerical problem. It's an integration problem if it's a probabilistic machine learning model. It's an optimization problem if it's statistical estimation. It requires the solution of differential equations to predict the future, for example in reinforcement learning. And quite often it just requires the solution of a linear algebra problem, the base case for all of these tasks: Gaussian integrals, quadratic optimization, linear differential equations.

You've learned in the previous videos that you can think of solving differential equations as a machine learning problem, as an estimation problem, as an inference problem. And the same applies to all of these other numerical problems. You can think of a numerical computation as the estimation of, the inference on, a latent quantity, like the value of an integral, given observations of a tractable quantity, like the values of the integrand at various points. Estimation of a latent quantity based on data: that's machine learning. This means integration is a machine learning problem. Optimization is a machine learning problem. Solving differential equations is a machine learning problem. Linear algebra is a machine learning problem. Isn't that cool? And it means we can phrase computation as Bayesian inference: multiplying a prior with a likelihood and dividing by the evidence to return a posterior.

Why is this useful? It's useful because it allows us to use the toolbox of Bayesian machine learning within computation. We can use the prior to encode what we know about our numerical problem and thereby reduce its complexity. We can use the likelihood to tell the algorithm that some of our computations might be a bit unreliable: they might be stochastic, or run at low numerical precision. We can use the evidence as a generic framework to estimate the hyperparameters of numerical algorithms. And we can use the output of this process, the posterior, to quantify uncertainty about the result of the computation and hand it forward to whatever comes next in our computational pipeline.

This is the idea behind probabilistic numerical computation. It's what a large part of this research group works on, but also what some of our colleagues across the world are studying. If you'd like to know more about our work, please check out our publications, which you can find on our website. Check out the software library for probabilistic numerical computations that you've already heard about in previous videos. And subscribe to this YouTube channel, because together with our other machine learning colleagues here in Tübingen, we publish videos about our work on this channel every now and then. Thanks for watching. That's the end of this series.
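For reference, the Bayesian-inference framing above is just Bayes' rule, written here (my notation, not from the video) for a latent numerical quantity x, such as the value of an integral, and computed observations y, such as evaluations of the integrand:

    p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)}

The prior p(x) encodes what we know about the numerical problem, the likelihood p(y | x) models possibly unreliable or imprecise computations, the evidence p(y) is the quantity one can maximize to set hyperparameters, and the posterior p(x | y) quantifies uncertainty about the result.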
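And here is a minimal sketch of the "integration is a machine learning problem" idea: Bayesian quadrature with a Gaussian process prior on the integrand. This is a toy illustration under my own assumptions (RBF kernel, equispaced evaluation points, noise-free observations), not the interface of any particular library:

    import numpy as np
    from scipy.special import erf

    # Toy Bayesian quadrature: treat the integral F = int_a^b f(x) dx as a
    # latent quantity, put a GP prior on f, condition on evaluations of f,
    # and read off a Gaussian posterior over F. Names are illustrative.

    def rbf(x1, x2, ell):
        # RBF (squared-exponential) kernel matrix with unit output scale
        return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell**2)

    def kernel_mean(x, a, b, ell):
        # z_i = int_a^b k(t, x_i) dt, closed form for the RBF kernel
        c = ell * np.sqrt(np.pi / 2.0)
        return c * (erf((b - x) / (np.sqrt(2) * ell))
                    - erf((a - x) / (np.sqrt(2) * ell)))

    def kernel_double_integral(a, b, ell):
        # int_a^b int_a^b k(s, t) ds dt, closed form for the RBF kernel
        L = b - a
        return (np.sqrt(2 * np.pi) * ell * L * erf(L / (np.sqrt(2) * ell))
                + 2 * ell**2 * (np.exp(-L**2 / (2 * ell**2)) - 1.0))

    def bayes_quad(f, a, b, n=10, ell=0.3):
        x = np.linspace(a, b, n)      # "data": evaluate the integrand
        y = f(x)
        K = rbf(x, x, ell) + 1e-10 * np.eye(n)  # jitter for stability
        z = kernel_mean(x, a, b, ell)
        w = np.linalg.solve(K, z)
        mean = w @ y                  # posterior mean of the integral
        var = kernel_double_integral(a, b, ell) - z @ w  # posterior variance
        return mean, max(var, 0.0)    # clamp round-off negatives

    # Example: int_0^1 sin(3x) dx = (1 - cos(3)) / 3 ≈ 0.6633
    mean, var = bayes_quad(lambda x: np.sin(3 * x), 0.0, 1.0)
    print(f"{mean:.4f} ± {np.sqrt(var):.1e}")

Note how the pieces line up with the transcript: the posterior mean is a weighted quadrature rule, and the posterior variance is exactly the kind of uncertainty about the result of the computation that can be handed forward to the next stage of a pipeline.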