I'm going to use a simulation to help you visualize how errors can grow in a computer model weather forecast. This graph will show two model simulations, one with a blue trace and one with a red trace. The simulations are based on the same model with one key difference: the initialization. We'll pretend that simulation one, marked by the blue trace, has a perfect initialization, which in real life is impossible because we can't perfectly measure the atmosphere everywhere at all times. Meanwhile, simulation two, marked by the red trace, has a small error in its initialization of just 0.01%. These simulations are based on the Lorenz equations, which describe chaotic systems like the atmosphere, and we'll imagine that our simulations are predictions of atmospheric vertical motion. Computer models have to predict vertical motion in order to predict things like cloud formation and precipitation.

We see that the simulations' predictions for vertical motion are essentially identical at first. You can pick out small differences between the blue and red traces from time to time, but overall there's near-perfect agreement. As time goes on, let's see what happens. Eventually the disagreement grows larger, until the solutions become wildly different.

Now let's see what happens when the initialization error in simulation two is a bit larger: 1% this time. We'll start the simulations, and again there's near-perfect agreement initially, but bigger differences appear faster this time, and the solutions very quickly bear no resemblance to each other.

And so it is with computer models that predict the weather. Errors in a computer model's initialization grow with time. Errors tend to be small at first, but as they grow larger, eventually the model's forecast bears little resemblance to what will actually happen with the weather.