So far, we've been assuming that the number of outputs is the same as the number of inputs. That means we take our kernel, flip it (it has an odd number of elements), line up the central value of the kernel with the first value of our input, and then do our sliding dot product across, doing our last evaluation when the center value of the kernel lines up with the last element in our signal. What that does is give us an output result that's exactly the same length as the input. There's actually no reason why this has to be the case. We can do something called padding, which is adding reasonable values to the ends of our signal so that we can convolve the kernel across the whole length, so that even when only the far right tail of the kernel is touching the first value of the signal, it can still give us an output result. The process we were describing before is essentially padding with zeros: assuming that everything outside the signal is zero and treating it as such. When we pad with zeros, we get a reasonable set of outputs to go along with that. But there are other things we can do; there are lots of ways to pad. We could pad with a constant value, which might be more representative of the data we're using. The goal of padding is to extend the data set in a neutral way, so that it doesn't introduce new information or new artifacts any more than necessary. So it could be that for your particular data set, say you're estimating temperatures and you'd like to pad that out a little bit, an average temperature wouldn't be zero. An average, neutral temperature might be 50 degrees Fahrenheit, 20 degrees Celsius, and you can use that as a fairly neutral way to extend your data set. Notice, though, that it does give you a different result. Those padded values matter: they get folded into the convolution result, so you want to think about what you use there.
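To make the contrast concrete, here's a minimal sketch with NumPy. The specific numbers (a short run of Fahrenheit temperatures and a small smoothing kernel) are made up for illustration; the point is just that zero padding and constant padding produce different edge values while the interior stays the same.

```python
import numpy as np

signal = np.array([48.0, 51.0, 53.0, 50.0, 47.0, 49.0])  # hypothetical temperatures, °F
kernel = np.array([0.25, 0.5, 0.25])                     # small smoothing kernel
p = len(kernel) // 2                                     # pad width for same-length output

# Zero padding: pretend everything outside the signal is 0.
zero_padded = np.pad(signal, p, mode='constant', constant_values=0.0)

# Constant padding with a "neutral" value for this data, e.g. 50 °F.
const_padded = np.pad(signal, p, mode='constant', constant_values=50.0)

# 'valid' keeps only fully overlapping positions; after padding by p,
# that yields exactly one output per input sample.
out_zero = np.convolve(zero_padded, kernel, mode='valid')
out_const = np.convolve(const_padded, kernel, mode='valid')

print(out_zero)   # edge values get dragged down toward 0
print(out_const)  # edge values stay near the data's typical range
```

Only the outputs near the ends differ between the two runs; the padded values are folded into those edge computations, which is exactly why the choice matters.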
Another thing you can do that might be more representative of your data is mirror padding. If you want to pad with, say, five values, you take the last five values of your data set, flip them around, and tack them on to the end. Similarly, you take the first five, flip them around, and tack them on to the beginning. Mirror padding gets you a nice neutral extension of your data set for the most part, but you have to be careful of weird cases. For instance, if you're tracking weekly phenomena and your weekends are very different from your weekdays, mirroring might leave you with ten weekdays in a row or four weekend days in a row. So it's not entirely neutral all the time; you have to be aware of what your data means. Notice, again, when you look at the results, that they differ depending on how you pad. Another thing you can do that occasionally makes sense is circular padding. Now, instead of taking the elements off the end, flipping them, and tacking them back on the end as we do in mirror padding, we take our five elements off the end, reach back around, and tack them on to the beginning of the data set. Similarly, we take our five elements, or in this case our four elements, off the beginning and reach around to tack them on to the end of the data set. This can be particularly useful if you're working with data you know to be cyclical, changing on a regular basis; if you sampled it over a whole number of those cycles, circular padding can be a nice neutral way to extend the data set. Again, though, notice that it gives you a very different answer depending on how you pad. So the takeaway here is to pad mindfully, because when you're padding, you're actually adding data to your signal, and if you don't do it well, you can end up corrupting your signal.
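Both schemes are built into NumPy's `np.pad`, so a quick sketch makes the difference visible (the five-element signal here is arbitrary; note that the "flip the edge values back on" description above corresponds to NumPy's `'symmetric'` mode, which repeats the edge sample, not `'reflect'`, which skips it):

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.25, 0.5, 0.25])
p = len(kernel) // 2   # pad width for same-length output

# Mirror padding: edge values flipped back onto the ends.
mirrored = np.pad(signal, p, mode='symmetric')   # [1, 1, 2, 3, 4, 5, 5]

# Circular padding: values wrapped around from the opposite end.
circular = np.pad(signal, p, mode='wrap')        # [5, 1, 2, 3, 4, 5, 1]

out_mirror = np.convolve(mirrored, kernel, mode='valid')
out_wrap = np.convolve(circular, kernel, mode='valid')

print(out_mirror)  # edges blend with their own neighbors
print(out_wrap)    # edges blend with values from the far end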