So as you saw, the code that we gave you allows playing multiple games in parallel. Why are we doing that? Well, the key to successful deep learning is having a lot of data, and time is what limits how much data we can generate. In our case, more games means more data, and hence we want to make the code fast. Keep in mind that the code that you're working with today is running on the GPU. We will pick up on that theme at a later point. But basically, in deep learning, we're often working primarily to make things fast.

Why is more data so important? Here's a great analysis from a rather old paper, Banko and Brill 2001, where they worked on a text task, varied the amount of training data available across several different algorithms, and then plotted test accuracy. And I should say, today we see the same effects if we look at transformers, which use far more words than the systems Banko and Brill were working with. What you see is that the quality of the algorithm, and let's not dwell on what exactly those algorithms are, makes a difference: a considerable difference in test accuracy. But the amount of words we use, the amount of training data, is much more important than all the other factors. So more data is the main thing when we use deep learning. In fact, one meaningful way of thinking about deep learning is that it's just a trick that allows us to deal with unbelievably large amounts of data.

So let's look at how the code handles simultaneous games. We'll look at the different parts of the code. We effectively have a 3D stack: lots of boards, where j and k are the positions on a board and i is the index of the game that we are talking about. A typical tensor, something that PyTorch is very good at handling. So have a look at the relevant parts of the code, and do not spend more than a few minutes on it.
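The 3D stack of boards described here can be sketched roughly as follows. This is a minimal illustration, not the course's actual code: the shapes and variable names (`num_games`, `H`, `W`, `boards`) are assumptions, but the indexing convention matches the text, with `i` for the game and `j`, `k` for the board position.

```python
import torch

# Assumed sizes for illustration: i indexes the game, (j, k) the board cell.
num_games, H, W = 4, 3, 3

# boards[i, j, k] is cell (j, k) of game i; 0 means empty in this sketch.
boards = torch.zeros(num_games, H, W, dtype=torch.int8)

# One vectorized operation updates every game at once, no Python loop:
# here we place a stone at position (0, 0) in all games simultaneously.
boards[:, 0, 0] = 1

# Per-game statistics also come out in a single call: summing over the
# board axes (j, k) leaves one number per game index i.
stones_per_game = boards.sum(dim=(1, 2))
print(stones_per_game.tolist())
```

The point of the layout is exactly this: because all games live in one tensor, every step of the simulation becomes a single batched operation that the GPU can run over all games at once.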