So now we have a potential function to calculate forces. The other part I'm going to need is some sort of algorithm or recipe for how to get from accelerations to velocities to positions. Purely mathematically speaking that is easy: I have a derivative and I need to do the opposite process, and that's called integration. So I'm going to need to come up with a way to do integration, and I'm going to use a particular method called the leapfrog algorithm, or the leapfrog integration scheme. The reason why this is a bit more complicated than pure mathematics is that I'm going to need to do this for finite time steps, whereas the mathematics would only be exact in the limit of infinitely small steps. To handle this I'm going to draw some sort of timeline here. We have a time t0, leave some space, t1 and t2. But I will also draw some half times here: t0.5 and t1.5. I know that looks strange. The idea behind these half steps is that if I want to move from t0 to t1, but I might want to add a contribution from another function, my acceleration or something, it makes sense to take that contribution at the half-time point. I'll show you why this ends up being nice. So assume that I know the positions at, say, a time t, and I also know the accelerations at that time t, because I got those from the force. I can then say that I would like to calculate the velocity at the time t plus delta t over 2, that is, half a time step ahead. That is the old velocity at the time t minus half a time step, plus the acceleration at time t multiplied by the full time step. So it's pretty much just a linear approximation. What that means is that if I have the acceleration here, I'll draw a small a there, and I have my velocity here, then the velocity at the previous half time step jumps over here to a new velocity at t plus half a time step.
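As a small sketch of that velocity update (the function name is my own, not from the lecture), it really is just one line of arithmetic:

```python
def kick(v_half_old, a_t, dt):
    """Leapfrog velocity update: v(t + dt/2) = v(t - dt/2) + a(t) * dt.
    The acceleration is evaluated at the full time step t, which sits
    exactly halfway between the two half-step velocities."""
    return v_half_old + a_t * dt
```

For example, `kick(1.0, 2.0, 0.5)` returns 2.0: the old half-step velocity 1.0 plus acceleration 2.0 acting over a time step of 0.5.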
And then it also makes sense, right, that if I'm doing this process, the acceleration contribution comes exactly halfway; at least it's neat and symmetric. And then I'm going to do almost the same thing for positions. But for positions, I'm now going to say that the position at t plus a full time step is the old position at time t plus the velocity at t plus half a time step, multiplied by the time step. And what that now means is that the position jumps over the velocity at the half time step, and I get a new position here, and of course also a new velocity. So the name here comes from the game where you're jumping over each other's backs, just like frogs. The positions and the accelerations, the forces, will always be at full time steps, while the velocities will be at half time steps. It's a really efficient, neat and accurate integrator for the way we use it. So why have you never heard of it? I bet you've heard of integrators in numerical analysis such as Runge-Kutta, right, or Beeman. Well, the difference is that in numerical analysis you're frequently integrating a simple function such as sine x. The cost of evaluating sine x on your computer is nil, so you can afford to evaluate it a billion times; it doesn't matter. All you're after is very high accuracy. But that's going to be different here. Every time I need to evaluate the acceleration, I need the force, right? But the way I got the force was to evaluate that very large potential function involving all the bonds, angles, torsions, non-bonded interactions and the Coulomb interactions. It's going to be very expensive for me to evaluate the value of my function here. My goal is not to get velocities or positions with 15 decimals; I don't really care, and I don't know the velocities accurately anyway. But it's going to be imperative for me that I can take long time steps.
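Putting the two updates together, one full leapfrog step can be sketched roughly like this (names are my own; I'm assuming the acceleration depends only on the positions, as it does for the potential described here):

```python
def leapfrog_step(x, v_half, acceleration, dt):
    """Advance one leapfrog step.
    x:            position at the full time step t
    v_half:       velocity at the half step t - dt/2
    acceleration: function x -> a(x); in a molecular simulation this is
                  the expensive force evaluation over all bonds, angles,
                  torsions and non-bonded interactions
    Returns the position at t + dt and the velocity at t + dt/2."""
    a = acceleration(x)          # one force evaluation per step
    v_new = v_half + a * dt      # velocity leaps from t - dt/2 to t + dt/2
    x_new = x + v_new * dt       # position leaps from t to t + dt
    return x_new, v_new
```

Note that each step costs exactly one evaluation of the force, which is the expensive part, so the total cost of a simulation is set by how many steps you need.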
And all those other integrators you know of, they end up being accurate because they take very small time steps. That's the opposite of what I want. Leapfrog, on the other hand, has the very nice property of small errors even at large time steps. So if I were to draw a function, say sine x, leapfrog is able to do a very good integration of such a function with roughly five points per period. There might be some short-term error here, because leapfrog might not exactly integrate this function, but it's going to integrate a pseudo-function that's close to it. And the point is it will not start to deviate; it will stay close to the true function as long as you have roughly five points per oscillation. And that means that leapfrog for us is really optimized to take long time steps. Why is that important? Well, remember that I said that we're limited by computing power. If my time step can be 50% longer, I effectively get 50% better total performance. That's a huge difference to me. And that's why molecular simulations tend to use leapfrog rather than super-accurate short-time-step integrators.
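To make the five-points-per-period claim concrete, here is a minimal experiment (my own sketch, not from the lecture): integrate a unit harmonic oscillator, whose exact solution is a cosine, with a time step of one fifth of the period, and watch that the trajectory stays bounded over many periods instead of drifting away.

```python
import math

def acceleration(x):
    return -x                    # unit harmonic oscillator; exact solution cos(t)

dt = 2.0 * math.pi / 5.0         # roughly five points per oscillation period
x = 1.0                          # position at t = 0
v = 0.5 * dt                     # velocity at t = -dt/2: v(0) = 0 kicked back half a step

max_abs_x = 0.0
for _ in range(1000):            # about 200 full periods
    v = v + acceleration(x) * dt
    x = x + v * dt
    max_abs_x = max(max_abs_x, abs(x))

print(max_abs_x)                 # stays close to 1 even after ~200 periods
```

The numerical trajectory oscillates at a slightly wrong frequency, which is the "pseudo-function close to it" mentioned above, but the amplitude does not grow, even at this very coarse time step.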