Hello, everyone. This is Alice Gao. In this video, I will discuss the derivations of the filtering formulas.

In the previous video, I showed you how to calculate the filtering probabilities using the recursive formulas. While performing the calculations, did you wonder how the formulas were derived in the first place? Do you believe that these formulas are correct? Let me show you the derivation of the recursive formula in this video.

The derivation requires several steps. This slide is full of formulas and may look intimidating at first. However, if we break the derivation down step by step, you will realize that every step can be justified by a rule that we are already familiar with. I will explain the derivation over six slides. On each slide, I will show you one step and ask you to pick the correct justification out of five options: Bayes' rule, rewriting the expression, the chain rule or the product rule, the Markov assumption, and the sum rule.

Step one. Pause the video and choose an answer. Then, keep watching. The correct answer is B: we simply rewrote the expression. Recall that O sub 0 to K is a sequence of observations. This step splits the term into two parts: one part for time K only, and one part for the sequence of observations from time 0 to time K minus 1.

Step two. Pause the video and choose an answer. Then, keep watching. The correct answer is A, Bayes' rule. It's easier to see this if you cross out O sub 0 to K minus 1, since it appears in all three terms. We effectively switch the places of S sub K and O sub K using Bayes' rule. The reason is that our model gives us the probability of an observation given a state, but it does not give us the probability of a state given an observation. So it is more convenient to have a probability in the form of the probability of the observation given a state.

Step three. Pause the video and choose an answer. Then, keep watching. The correct answer is D, the Markov assumption.
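Since the slides themselves are not reproduced in this transcript, here is a sketch of what steps one through three look like symbolically. The notation is an assumption: S sub K is written as $S_k$, the observation sequence as $o_{0:k}$, and $\alpha$ is the normalization constant introduced by Bayes' rule.

```latex
\begin{align*}
P(S_k \mid o_{0:k})
  &= P(S_k \mid o_k, o_{0:k-1})
     && \text{step 1: rewriting the expression} \\
  &= \alpha \, P(o_k \mid S_k, o_{0:k-1}) \, P(S_k \mid o_{0:k-1})
     && \text{step 2: Bayes' rule} \\
  &= \alpha \, P(o_k \mid S_k) \, P(S_k \mid o_{0:k-1})
     && \text{step 3: sensor Markov assumption}
\end{align*}
```

The remaining steps expand the second factor, $P(S_k \mid o_{0:k-1})$.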
This step removes the value O sub 0 to K minus 1 from the first term. We can remove this value because the observation at time K only depends on the state at time K. This is the sensor Markov assumption: given S sub K, O sub K is independent of any previous observations. You can also look at the Bayesian network and verify this independence relationship using d-separation. I'll leave this as a practice problem for you.

Step four. Pause the video and choose an answer. Then, keep watching. The correct answer is E, the sum rule. We use the sum rule in reverse to introduce S sub K minus 1 into the second term. We can see the reason for doing this from the Bayesian network: the second term contains S sub K and O sub 0 to K minus 1, and these two parts are not directly connected. Introducing S sub K minus 1 connects the two parts in the network.

Step five. Pause the video and choose an answer. Then, keep watching. The correct answer is C, the chain rule or the product rule. This is easier to see if we cross out the last value, O sub 0 to K minus 1, which appears in all three terms. We use the product rule to write the probability as a product of two probabilities.

Step six. You made it to the final step. Pause the video and choose an answer. Then, keep watching. The correct answer is D, the Markov assumption. This step removes the value O sub 0 to K minus 1 from the first term. We can remove this value because the state at time K only depends on the state at time K minus 1. This is the Markov assumption: given S sub K minus 1, S sub K is independent of any previous observations. Again, you can verify this using d-separation. I'll leave it as a practice problem for you.

That's everything on the derivations of the filtering formulas. Congratulations on making it through this video. Since you made it to this point, please post an emoji on the week 7 post on Piazza.

Let me summarize. After watching this video, you should be able to do the following.
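Steps four through six can also be sketched symbolically. As before, the notation is an assumption about the slides: $S_k$ for the state, $o_{0:k-1}$ for the earlier observations, and lowercase $s_{k-1}$ for the summed-out state.

```latex
\begin{align*}
P(S_k \mid o_{0:k-1})
  &= \sum_{s_{k-1}} P(S_k, s_{k-1} \mid o_{0:k-1})
     && \text{step 4: sum rule (in reverse)} \\
  &= \sum_{s_{k-1}} P(S_k \mid s_{k-1}, o_{0:k-1}) \, P(s_{k-1} \mid o_{0:k-1})
     && \text{step 5: product rule} \\
  &= \sum_{s_{k-1}} P(S_k \mid s_{k-1}) \, P(s_{k-1} \mid o_{0:k-1})
     && \text{step 6: Markov assumption}
\end{align*}
```

Substituting this back into the result of step three gives the full recursive update: the new filtering distribution is the previous one, pushed through the transition model, reweighted by the observation probability, and normalized.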
Describe the justification for every step of the derivation of the filtering formulas. Thank you very much for watching. I will see you in the next video. Bye for now.
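As a supplement to the derivation, here is a minimal sketch of the recursive filtering update in code, assuming a discrete hidden Markov model. The matrices and numbers below are hypothetical stand-ins (not from the slides); the comments map each part back to the steps of the derivation.

```python
def filter_step(f_prev, T, O_col):
    """One filtering update, following the derivation in the video.

    f_prev: list of P(S_{k-1} = s | o_{0:k-1}) for each state s.
    T[sp][s]: transition probability P(S_k = s | S_{k-1} = sp)
              (Markov assumption, step 6).
    O_col[s]: observation probability P(o_k | S_k = s) for the
              observed o_k (sensor Markov assumption, step 3).
    """
    n = len(f_prev)
    # Steps 4-5: sum out s_{k-1} to predict P(S_k | o_{0:k-1}).
    predict = [sum(f_prev[sp] * T[sp][s] for sp in range(n)) for s in range(n)]
    # Steps 2-3: reweight by the observation probability (Bayes' rule).
    unnorm = [O_col[s] * predict[s] for s in range(n)]
    # alpha: normalize so the result is a probability distribution.
    alpha = sum(unnorm)
    return [u / alpha for u in unnorm]

# Example with two states (hypothetical numbers):
T = [[0.7, 0.3], [0.3, 0.7]]   # transition model
O_col = [0.9, 0.2]             # P(o_k | S_k = s) for the observed o_k
f0 = [0.5, 0.5]                # prior filtering distribution
f1 = filter_step(f0, T, O_col)
print(f1)
```

With these example numbers, the prediction step leaves the uniform distribution unchanged, so the update is driven entirely by the observation reweighting and normalization.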