This __str__ function, the "dunder str" method, is not essential to the block's computation, but it is something I've made an effort to include in every block in Cottonwood. It lets a nice text summary of whatever you create be generated and stored automatically each time you run it. Here, for the convolution block, it reports that this is a one-dimensional convolution block and lists the important parameters that define it: the initializer, the optimizer for the weights, and the optimizer for the biases. Those in turn have their own __str__ methods called, so their types and parameters get reported as well. All of this becomes part of the report that gets generated. I strongly recommend that any block you create include an explicit __str__ method so that it is automatically included in that report. We'll take a look at the result when we run this a little later.

What's left now is to implement the forward pass and the backward pass of one-dimensional convolution. We're going to kick the can down the road just a little for both of these. For the forward pass, we cheat: we assume we have a magic function, calculate_outputs. We pass it our inputs and our weights, it gives us the outputs, we add in the bias, and that's the result. This is a strategy we're going to apply several times in a row, each time shaving away at the computation that needs to be done until all that's left is a small nugget.

We'll do the same thing on the backward pass. We bring in the gradient, the partial derivative of the loss with respect to y, the output of this block. From our earlier analysis we know that the gradient of the bias is equal to the gradient of the output, because the bias is simply added onto the output, so we can set that aside. Then we use another magic function we haven't written yet, calculate_weight_gradient: we pass it the output gradient and x, and get the weight gradient back. Farther down, calculate_input_gradient takes the output gradient and the weights and returns the input gradient. We send the weight gradient and the bias gradient, together with the weights and the bias, to their respective optimizers so they can be updated appropriately, and then we pass the input gradient back to whatever block comes before this one in the structure. This is a high-level, cursory implementation of backpropagation for one-dimensional convolution. Our next step is to dig a little deeper into both of these.
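To make the __str__ idea concrete, here is a minimal sketch of what such a method might look like on a one-dimensional convolution block. The attribute names used here (n_kernels, kernel_size, initializer, weights_optimizer, bias_optimizer) are illustrative assumptions, not Cottonwood's actual attribute names.

    def __str__(self):
        # Assumed attribute names; the real block's attributes may differ.
        # Each optimizer and initializer supplies its own __str__,
        # so str() on them pulls their descriptions into this summary.
        lines = [
            "1D convolution block",
            f"  number of kernels: {self.n_kernels}",
            f"  kernel size: {self.kernel_size}",
            f"  initializer: {str(self.initializer)}",
            f"  weights optimizer: {str(self.weights_optimizer)}",
            f"  bias optimizer: {str(self.bias_optimizer)}",
        ]
        return "\n".join(lines)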
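Here is a sketch of the forward pass as described, under the assumption that the block stores its inputs for later use in the backward pass; the method and attribute names are assumptions for illustration.

    def forward_pass(self, x):
        # Keep the inputs around; the backward pass will need them
        # to compute the weight gradient.
        self.x = x
        # calculate_outputs() is the "magic function" we haven't
        # written yet -- it performs the actual 1D convolution.
        y = self.calculate_outputs(self.x, self.weights)
        # Adding the bias completes the forward pass.
        return y + self.bias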
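And a matching sketch of the backward pass. The optimizer call shown here, update(parameters, gradient) returning updated parameters, is an assumed interface, not necessarily how Cottonwood's optimizers are invoked.

    def backward_pass(self, de_dy):
        # From the earlier analysis: the bias is simply added to the
        # output, so the bias gradient equals the output gradient.
        de_db = de_dy
        # Two more magic functions we'll fill in later.
        de_dw = self.calculate_weight_gradient(de_dy, self.x)
        de_dx = self.calculate_input_gradient(de_dy, self.weights)
        # Hand each gradient and its parameters to the corresponding
        # optimizer so the weights and bias get updated.
        self.weights = self.weights_optimizer.update(self.weights, de_dw)
        self.bias = self.bias_optimizer.update(self.bias, de_db)
        # Pass the input gradient back to the previous block.
        return de_dx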