So this works for turn-based board games. The nice thing about this approach is that it learned the game completely from scratch. It beat the world champion, Lee Sedol, in 2016, and after that program, DeepMind generalized the algorithm and wrote a new version called AlphaZero, published as a preliminary paper, which was able to play not only Go very well but also chess and the Japanese game Shogi. When it turned out to play chess well, that got me and some others interested in writing a software program implementing the work from this preliminary paper. We called it Leela Chess Zero, and it was started in 2018. It was forked from Leela Zero, which was written for Go by Gian-Carlo Pascutto; our version works for chess. So, just a bit of game theory. For a turn-based game, the optimal way to compute a solution is to fully expand the game tree. Take tic-tac-toe: you see the empty state at the top, the empty board, the start position. From there you compute all positions after the first player's move, then all second-player positions from those, and so on, until you reach a terminal node, which is either a loss for you, a draw, or a win. For tic-tac-toe this is still feasible to fully compute on modern hardware, it is on the order of nine factorial states, but for chess or Go it is completely intractable. So what a traditional chess engine needs to do is determine, somewhere halfway down the tree, how good a certain position is. The way they do that is by encoding a lot of heuristics and rules: how many pieces you have versus the opponent, how much free space you have versus the opponent, and so on. From that you can compute an estimate of how good your current position is.
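To make the game-tree idea concrete, here is a minimal sketch (my own code, not from any engine) that fully expands the tic-tac-toe tree with plain minimax, exactly the "compute every position down to a terminal node" approach described above:

```python
# Exhaustive minimax over tic-tac-toe: the board is a tuple of 9 cells,
# each 'X', 'O', or None. Function and variable names are illustrative.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value from X's point of view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:               # board full, no winner: draw
        return 0
    values = []
    for i in moves:             # expand every child position
        child = board[:i] + (player,) + board[i + 1:]
        values.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

empty = (None,) * 9
print(minimax(empty, 'X'))      # perfect play from both sides: 0 (draw)
```

This brute-force search visits every reachable state, which is fine for tic-tac-toe but, as noted above, hopeless for chess or Go, where an evaluation function has to cut the tree off early.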
It is not perfect, but it is built upon centuries of work by many grandmasters, or in the case of Go, thousands of years of accumulated knowledge. So while still very impressive, it depends heavily on the human knowledge gained over the years. The beautiful thing about the AlphaZero project, and Leela Chess Zero as its open-source counterpart, is that it only requires the rules of the game. It needs to know how to transition from one state to the next: what the legal moves are in a given state, and how you get to the resulting state. Given only the rules and the win condition, checkmate, it plays against itself and iteratively improves its knowledge of how to play the game well. Essentially, instead of relying on centuries of human knowledge, it learns the evaluation function over this game tree by itself, using neural networks. And the nice thing about recent advances in neural networks is that they are very good at learning from image-like data. So all we need to do, more or less, is give it an image-like representation of the current chess or Go board, and it outputs an expected value: whether the position is winning, losing, or drawn. The beautiful thing about this visual interpretation, compared to the hard-coded rules that were usual before, is that it plays positionally very well, and this creates beautiful gameplay. If you like chess or Go and have played against traditional engines before, you know they can be rather dull opponents; an engine built on this neural-network and reinforcement-learning method is fascinating to play against.
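A minimal sketch of the "image representation" idea described above. This is not the actual Lc0 input format (which uses more planes, move history, and side-to-move information); it just shows the spirit of it: one binary 8x8 plane per piece type per side, fed into a value function squashed to (-1, 1). The random weights stand in for a network that would really be trained from self-play:

```python
import numpy as np

# One 8x8 binary plane per (piece type, colour): 6 piece types x 2 sides.
# Upper-case = white, lower-case = black, FEN-style.
PIECES = ['P', 'N', 'B', 'R', 'Q', 'K', 'p', 'n', 'b', 'r', 'q', 'k']

def encode(rank_strings):
    """Turn 8 FEN-style rank strings into a 12x8x8 stack of binary planes."""
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    for rank, row in enumerate(rank_strings):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)          # a digit means that many empty squares
            else:
                planes[PIECES.index(ch), rank, file] = 1.0
                file += 1
    return planes

def value(planes, weights):
    """Toy value head: one linear layer, tanh-squashed into (-1, 1)."""
    return float(np.tanh(planes.ravel() @ weights))

start = ['rnbqkbnr', 'pppppppp', '8', '8', '8', '8', 'PPPPPPPP', 'RNBQKBNR']
planes = encode(start)                    # shape (12, 8, 8), 32 pieces set
rng = np.random.default_rng(0)
w = rng.normal(scale=0.01, size=12 * 8 * 8)
print(planes.shape, value(planes, w))     # value is somewhere in (-1, 1)
```

In the real systems the value head sits on top of a deep convolutional network, alongside a policy head that assigns probabilities to the legal moves; both are used to guide the tree search.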
And if you watch the games AlphaZero played against itself, you can find various YouTube videos of them. I am not a good chess player, I don't even know my rating, but I could see it was really beautiful to observe: very positional, not a lot of brute calculation per se. Well, actually it can do both very well, of course; that's why it is such a good chess engine. So, how am I doing on time? We ran into a lot of technical challenges. The preliminary paper by DeepMind left out many technical details, such as the architecture of the neural network. Google had a massive amount of compute power to throw at it, whereas we relied on the community to lend us their GPUs. They would play games on their GPUs, and these games would be uploaded to a server. Initially they were uploaded to my homemade NAS and trained on my local desktop computer, but as the project grew that quickly became untenable. So we did a bit of crowdfunding and got some dedicated hardware. Still, clients come and go, so we had a very variable compute budget compared to Google, who had plenty of dedicated cloud compute and knew exactly how much their project had available. For us it varied: when we started, it was 10,000 games per day to train each new iteration of the neural network on. As the project grew, I thought, hopefully we'll get to 60,000 games per day one day, that would be great. Then people started activating their university supercomputers, and at some point someone donated 50,000 US dollars of cloud computing, at which point we were at 2 million games per day. That made for interesting backend work to feed all this gameplay data into training new neural networks on our GPUs. So yes, a lot of interesting challenges.
So, as somewhat of a plug: we are always looking for contributors. If you are a developer and want to help; if you are cold and want to warm up your room, use your GPU; if you know chess well, or are interested in chess, or know how to compute Elo estimates, you can help in those regards too. We have a Discord server, which is very active, so you can join there. The project itself is on GitHub. The top two links are for downloading the client, which you can also find on GitHub. It is all pre-compiled if you are a Windows user, and for Linux there is also a pre-compiled binary, I believe, but you can also just compile it yourself. It is C++14 and it works very well. So, thanks to a lot of people. One more funny thing I'd like to mention: in November 2018, the core developers got an email from the manager of Fabiano Caruana, who was challenging the world chess champion, Magnus Carlsen. We were invited to attend, which was a very nice experience. He was actually investigating Leela Chess Zero as a way to train and improve his skills, and it was a lot of fun to be there. So I guess that's about it. That's what I've got. If there are any questions, let me know. Okay, thank you very much. I think we have time for one or two questions. Anyone with a burning question here? Sometimes Leela Chess Zero in the endgame takes very convoluted paths to winning, for instance moving a knight around or something. Do you know why? So that's true, and we don't have a clear answer yet. At that event, Demis Hassabis from DeepMind was there as well, and he said that their version, AlphaZero, did not experience any problems with this. So we are still kind of struggling with it. What we have done now is, basically, act as soon as the game data is uploaded to our server.
We replay the endgame using tablebases in order to improve the final training data. It does help, but it is not very elegant; I would like it to be fully from scratch, without using this tablebase data. It sort of works, but it is still a mystery, and we are still looking into it. Any more questions? Let's give it to the guy in the volunteer T-shirt, he's been working hard for it. Ah yes, how strong is it? It is among the top four engines in the world right now. For the latest data, look in the Discord; many volunteers actively keep spreadsheets that try to estimate how strong it is. But that is measured on commodity hardware, so the Elo does not hold up when you put it against opponents on very powerful hardware. There are also various chess tournaments being run, such as TCEC and CCC. I think it is about 3600 Elo or something now, but I cannot give a very accurate answer; it really depends on the hardware. Okay, I think we're out of time. Thank you very much, Volker.
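The Elo estimates mentioned in that last answer come from the standard logistic rating model: a player rated d points above the opponent is expected to score 1 / (1 + 10^(-d/400)), and inverting that gives the rating difference implied by a measured score fraction. A minimal sketch (my own helper names, not the volunteers' actual spreadsheets):

```python
import math

def expected_score(elo_diff):
    """Expected score of a player rated elo_diff points above the opponent."""
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

def elo_difference(score):
    """Invert the model: rating gap implied by a score fraction, 0 < score < 1."""
    return -400.0 * math.log10(1.0 / score - 1.0)

print(expected_score(0))        # equal players: 0.5
print(round(elo_difference(0.64)))  # scoring ~64% of points is roughly +100 Elo
```

This is also why the measured Elo depends so strongly on hardware: the score fraction against a reference opponent changes with the compute available, and the rating estimate moves with it.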