So, of course, math is great at finding more efficient ways to write things like that. We can use the summation sign and introduce an index k. k is going to be our counter that counts along the length of our kernel, so it'll run from minus P to plus P, which always covers the full kernel. To write this more concisely, for the first element of our result, y sub zero, we let k run from minus P all the way to plus P, and at each step we multiply x sub minus k times w sub k. So the very first term would be x sub minus minus P, or x sub P, times w sub minus P. You can go through element by element in the equations we just wrote out longhand and see how these line up.

The one thing that's different here is that if you keep going, by the time k is equal to P we're multiplying x sub minus P times w sub P, and x isn't defined at minus P; it's only defined at zero through m minus one. So we have to add a footnote: any time we try to access an x index outside that range, we just assume x equals zero. Really, then, x extends from minus infinity to plus infinity; it's zero almost everywhere, and only between indices zero and m minus one does it get interesting.

We can then repeat that for the next position of the result, y sub one, and y sub two, and so on, all the way up to the last position, y sub m minus one. And instead of explicitly counting through all the positions of our signal, we can use an index j and just say: for any position j between zero and m minus one, y sub j is equal to the sum of x sub j minus k times w sub k, where in each of those summations k goes from minus P to P. That's the beautifully short way to write everything we wrote out longhand the first time. So math is beautiful, exhibit 556.
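To make that last expression concrete, here is a minimal sketch in Python of y sub j equals the sum over k from minus P to plus P of x sub j minus k times w sub k, with x treated as zero outside indices zero through m minus one. The names here (conv_same, x_at, and so on) are just illustrative choices, not from the lecture, and the kernel is assumed to have odd length 2P plus 1 so it centers cleanly.

```python
def conv_same(x, w):
    """Convolve signal x with an odd-length kernel w (length 2P + 1),
    returning a result the same length as x."""
    m = len(x)
    P = len(w) // 2  # kernel indices run from -P to +P

    def x_at(i):
        # x is assumed to be zero everywhere outside 0..m-1
        return x[i] if 0 <= i < m else 0

    y = []
    for j in range(m):                       # one output position y_j at a time
        total = 0
        for k in range(-P, P + 1):           # k counts along the kernel
            total += x_at(j - k) * w[k + P]  # w[k + P] stores w sub k
        y.append(total)
    return y

# Example: a simple difference kernel; this matches
# numpy.convolve(x, w, mode="same") for an odd-length kernel.
print(conv_same([1, 2, 3, 4, 5], [1, 0, -1]))  # [2, 2, 2, 2, -4]
```

The helper x_at is exactly the footnote from above: rather than padding the signal explicitly, any out-of-range index just reads as zero, so x behaves as if it extended to plus and minus infinity.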