I'm a little bit more of a science person. Bas, how would you describe yourself? I'm more of an engineering person. And what we typically do is our employer gives us these Fridays, about once a month, where we're allowed to do whatever we want. And our favorite hobby is to over-engineer a funny problem. So, Bas. Oh, sure. Yeah, we'll do that one day. I see that the laptop has now been moved upstairs, so that should be fine. Ah, OK, Grant. We'll now do the thing where we stand in the middle here.

What I'm about to tell you is a story about an algorithm. There's a very cool and highly applicable part of the algorithm, and there's a not-so-applicable part. But there's something very general that we discovered that we're actually pretty damn proud of. What I will say, though, is I wouldn't approach it the way that we did. OK, so there's a button at the top. Yes? Yeah, OK. All the way at the top. No, no — left, left. Again, this is what is messing up the timing. So what we'll do is: when we wave, that's when we want the next slide, right? Let's do that. OK, let's see if this works. You can press the space bar. Hey!

OK, welcome, everyone. We had a slight technical error, but we'll make it work. What we'll be talking about today is a silly problem that we're going to over-engineer. And when I say over-engineer, I mean over-engineer in all caps. This is a card game that I love playing. That's my lovely girlfriend, and my lovely girlfriend typically wins. Very typically. She wins quite often. And this got me thinking. I bought this book called Real World Algorithms, and as my girlfriend was counting up how many points she got and I didn't, I figured: maybe there's an algorithm for this. Can I use an algorithm to get better at this card game?

So the goal is: find a helpful algorithm. This seems like an easy problem; surely we can solve it with an algorithm. If need be, I'll borrow my boss's credit card for any cloud resources I might need. I'll learn from it, and then maybe I'll win from my girlfriend once in a while. That'd be totally nice. This game, Sushi Go, can get quite deep, though. And one small thing: we have to keep it simple. I want to get better at the card game myself, without the aid of a laptop. My girlfriend would really think it's cheating if, at every decision I need to make, I have to consult a terminal. I think that's fair, right? So none of that voodoo reinforcement learning, deep learning kerfuffle. I'm not going to do any of that. There's another reason to keep it simple: I know who I am. I'm going to over-engineer it anyway. I'll find help with the over-engineering. So we really want to keep the core simple, because we're going to over-engineer it regardless.

So again, this is the card game, and here's the gist of the computer science problem. These are the cards you can play, and I just want to figure out which cards are better than others. I have a list of all the card names, and the only thing I want is for that list to be in the right order: the best card at one end, the worst card at the other end, and everything in between ordered correctly. And if you want to do that, you need a way to simulate an order.
So I've got this function — the code's not too important — but you can put an order of cards in there, and I'll simulate a bunch of games against a random player. That'll be my score function: it's what says whether an order is better or worse, and it's what I want to optimize against. Unfortunately, I've got 14 cards, so that's quite a lot of combinations. And if you think about it, this is very similar to the traveling salesman problem — that's also about ordering, the cities you're going to visit. So I already knew that winning from my girlfriend was going to be hard, but this implies that winning a card game is NP-hard. OK, so you can smell the over-engineering from a distance at this stage.

However — and this is something I want to give you before you go home — knowing it's an NP-hard problem doesn't mean we should stop thinking. Sure, we've got lots and lots of combinations, but maybe a bit of thought can already make the problem easier. Yes, these are the cards, but I play this game well enough to know that some cards are inherently worth less than others. You can read that from the rule book. If you don't know the game you might not see this, but just from reading the rules I know that if there's ever an order in which an egg is worth more than a squid, that order is already wrong, so I shouldn't even consider it. If you take this into account, the number of combinations drops by quite a bit — a 36x speedup, if you look at it that way. And that's one lesson at least: we still have quite a few combinations left, but that small bit of thinking before writing any code — oops, the hardware's being patched at the moment — any bit of thinking I put in before I write code just saved me a factor of 36 in processing power. Never forget to think before you code. More people should do this.

Even with this reduction, it's still a lot of compute, not a small search space. And what makes it worse: it's not just that I have to search it, I also have to simulate a lot. For every one of those combinations, if I'm going to brute-force it, I need to simulate at least 5,000 games to get a reasonably accurate score. So instead, here's a 101 on evolutionary heuristics. I'm not going to brute-force it. I'm going to say: here are a couple of orders, I've simulated them, I've got scores. I throw away the bad ones and keep the good ones. Then I take the good ones and make more orders that are like them, but a little bit different. Repeat this a whole bunch of times — you might recognize this as the genetic way of doing it. And you come to a cool conclusion: the green parts of the diagram, the heavy stuff where all the simulation happens, are embarrassingly parallel. So that bit we can scale.
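To make that concrete, here's a minimal sketch of that evolutionary loop in Python. This is our reading of the approach, not the original code: the card list is abbreviated and hypothetical, and `simulate_score` is a stand-in (it rewards agreement with a hidden reference order, plus noise) so the sketch runs end to end — the real one plays thousands of Sushi Go games against a random player.

```python
import random

# Hypothetical, abbreviated card names -- the real list has all 14 cards.
CARDS = ["squid", "salmon", "egg", "tempura", "sashimi", "dumpling",
         "wasabi", "maki-3", "maki-2", "maki-1", "chopsticks", "pudding"]

_REFERENCE = {c: i for i, c in enumerate(CARDS)}

def simulate_score(order):
    """Stand-in score function so the sketch runs end to end.
    The real one simulates thousands of games against a random
    player and returns the average score; here we just reward
    agreement with a hidden reference order, plus noise."""
    agreement = -sum((i - _REFERENCE[c]) ** 2 for i, c in enumerate(order))
    return agreement + random.gauss(0, 1)

def mutate(order):
    """A child order: like the parent, but a little bit different."""
    child = list(order)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(pop_size=50, keep=10, generations=100):
    population = [random.sample(CARDS, len(CARDS)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulate_score, reverse=True)  # score, then rank
        parents = population[:keep]                        # keep the good ones
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return population[0]

print(evolve())  # the best order found
```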
And then it helps if you've got an engineer. We found a solution — it's a bit unconventional, it's marketed for a totally different purpose, but we actually got something to work here, and Bas will tell you a bit more about that. Okay, yeah, next slide.

All right, so how to get this working in a nice way. Vincent showed that with some simple mathematical tricks you can get the search space down from 87-something billion to about two billion possible combinations, but that's still quite a lot. If you were going to brute-force all the possible combinations to find the best order to win your card game, and a single simulation takes 0.1 seconds, you'd still be waiting over seven years. Not very optimal. So we needed a better way to do all these simulations.

There are a lot of ways to approach this problem. Before this little project we hadn't worked with Lambdas before, and we thought: hey, we need a lot of CPU power, and Amazon has this service called Lambda nowadays. Let's try to use it for this use case. It's a compute service that lets you run code without provisioning or managing servers. What that means: you upload your code — your function — to Amazon, and Amazon deals with all the managing and scaling and provisioning. It doesn't matter if you have one request every now and then or a million; Amazon does all the scaling of your Lambda function. It works like this: you have your Lambda function, at some point you make a request, and the request returns to you. The next moment you want to run a lot of simulations, and the nice thing about Lambda is that it scales almost instantly — straight away you have access to over a thousand Lambdas.

There are a lot of ways to trigger it. You upload your function into a Lambda function, but you need a way to call it, and Amazon provides a couple of options: SNS, SQS queues, and you can also call it over HTTP, for which Amazon has another service called the API Gateway. So let's say this is Vincent: he makes an API call to the API Gateway, we have an endpoint there called simulate, this makes the call to your Lambda function, runs your simulation, and returns your result. The API Gateway is basically an endpoint where you can put a REST API. Besides simulate you can add your own functionality behind it, and the API Gateway does a lot of management for you. Say your endpoint somehow leaks to the internet and you get DDoSed: the API Gateway will handle that and throttle the traffic.

Anybody who's ever worked with Amazon knows how much clicking around in the UI this takes — or, if you want to set it up in an automated way, you have to learn CloudFormation or some other deployment tool, which takes time you might not want to spend when you have a cool problem to work on. So there's a tool for that, called Chalice. Chalice is a CLI tool for creating applications using Lambdas and API Gateways, and it can do a few other things too. Let's start with a hello-world example. First you run pip install chalice, which gives you the chalice command-line tool. Then you create a hello-world application with chalice new-project helloworld. This creates a helloworld directory with a skeleton project in it: an app.py that contains your API, a requirements file, and a .chalice directory that holds the metadata of your Lambda function.
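For reference, the skeleton app.py that chalice new-project drops into that directory looks roughly like this (exact contents vary by Chalice version):

```python
from chalice import Chalice

app = Chalice(app_name="helloworld")

@app.route("/")
def index():
    # API Gateway forwards GET / to this function; the returned dict
    # is serialized as the JSON body of the HTTP response.
    return {"hello": "world"}
```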
Once you deploy your Lambda function with Chalice, it stores the metadata of the deployed function in that .chalice directory, which is nice because you can check it into Git and share the same Lambda function with your colleagues. When you run chalice deploy, a couple of things happen. It packs the directory into a zip and uploads it to Amazon — that's the deployment package. It creates IAM policies and roles for you, registers a Lambda function on the Amazon side, creates a REST API — that's the API Gateway — and then deploys the whole thing. On the last line you see the REST API URL; that's the public endpoint you can use to call your Lambda function.

This is what the app.py looks like. It has a bit of the feel of a Flask app. The most important part is the app.route decorator. That route is the magic thing in Chalice: it creates an API Gateway endpoint for you — in this case on the root — and maps every call to that endpoint to the function below it. You can define multiple routes, so you can very easily create a nice little API endpoint in the cloud. And you can deploy the whole thing from the command line, without worrying about managing servers, and it scales automatically.

One last tool to introduce. On the cloud side we have Lambdas and API Gateways via Chalice; on the local side we wanted to use a tool called Fire. Fire is a tool for creating very simple command-line interfaces. Say you have a script with some functions and you want to turn it into a command-line tool. With Fire it's super easy: under if __name__ == "__main__" you define the mapping between the commands of your command-line tool and the functions in your script. In this case we have two commands, hi and simulate, which map to hello and simulate, and the arguments of those functions automatically become flags of your command-line tool. So with just a few lines we have a command-line tool that can run simulations. Say we give it 10 workers — and then, well, let's scale up a little: just by changing the number of simulations to 900, we make 900 calls to our Lambda function. Very cool.
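A minimal sketch of that Fire setup — `hello` and `simulate` here are hypothetical stand-ins for the real functions, and the real simulate would fire its requests at the cloud endpoint:

```python
import fire

def hello(name="world"):
    return f"hello {name}"

def simulate(n_simulations=10):
    """Stand-in for the real command, which sends `n_simulations`
    requests to the /simulate endpoint in the cloud."""
    return f"would run {n_simulations} simulations"

if __name__ == "__main__":
    # Maps CLI commands to functions:
    #   python cli.py hi --name=PyData
    #   python cli.py simulate --n_simulations=900
    fire.Fire({"hi": hello, "simulate": simulate})
```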
So let's try to abuse these Lambdas. A plain for loop over a range — many, many iterations of call-simulate-and-wait — is not feasible, because each iteration calls your Lambda function and then waits, waits, waits until the simulation is done and comes back, and it would do that round trip over two billion times. Obviously that would take a very, very long time, and you don't want to be waiting that long. There are a couple of ways to do multiple jobs simultaneously — you could do some multi-threading — but today we're going to look into asyncio: we run on a single thread, but we run tasks concurrently.

Just to set the context, here's a synchronous example. We have a long task — say this is our simulation — in which we just sleep. We call it five times, and at the end we print how long it took. Very straightforward: we call the sleep function, it sleeps, it prints "processed task zero", this goes round five times, and at the end we see "completed in five and a bit seconds". Now the asynchronous version, with asyncio.

There are a couple of new magic keywords you get with asyncio, but the biggest thing, on line 12, is that you instantiate the event loop. It runs on a single thread and is basically a scheduler. You add tasks to the event loop, and the tasks indicate — with the async and await keywords — when the event loop may go and run other tasks. So take the sleep function: with the await keyword you tell the event loop, hey, I'm going to sleep now, but in the meantime you can go do other stuff. In this example the long task is started five times, and each one says: I've just started, but while I wait the event loop can process other things, and at some point execution will return to me. The nice thing is that you trigger these asynchronous tasks at essentially the same time, and they come back at almost the same time — you see they all complete in about a second.

One thing to note with asynchronous programming: you don't know when a task will return. Say you make an HTTP call: it could take one second, it could take three seconds, and your program has to account for the fact that the order of completion is not guaranteed. You make a call, and at some point in time the result comes back.

Last thing: if you're going to make HTTP requests with asyncio, use the aiohttp library, which gives you a session context. On line 12 you open an aiohttp ClientSession, so you have a single session, and from that session you run all your asynchronous tasks.

In our case we thought: we really want to brute-force all these simulations, so let's just start a million concurrent coroutines. Oh — I forgot to introduce the coroutine concept. If you prefix a function definition with async, you tell asyncio it can run outside the main flow of execution; such a function is called a coroutine. Now, if you run many, many asynchronous HTTP tasks, each open connection counts as an open file on your system, and the operating system has a limit on the number of open files. So if you open up too many coroutines at once, your machine won't be able to handle them, and you need a trick for that. One is included in asyncio: the Semaphore. A semaphore is basically a limit on the number of coroutines you can have open — a counter. Whenever a new coroutine tries to start, it checks with the counter: is there still room for me? If the counter has reached its maximum, the coroutine simply blocks until other coroutines have finished, and once that's happened, new coroutines can start. So those are the tools and libraries for asynchronous processing.
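Pulling those pieces together — the event loop, one shared aiohttp ClientSession, and a Semaphore to cap the number of live coroutines — a sketch might look like this. The URL is a placeholder, not our actual endpoint:

```python
import asyncio
import time

import aiohttp

# Placeholder endpoint -- the real URL comes out of `chalice deploy`.
URL = "https://example.execute-api.eu-west-1.amazonaws.com/api/simulate"

async def simulate_one(session, sem):
    async with sem:                        # blocks if too many coroutines are live
        async with session.get(URL) as resp:
            return await resp.json()

async def main(n_tasks=1000, max_open=100):
    sem = asyncio.Semaphore(max_open)      # cap on concurrent connections
    async with aiohttp.ClientSession() as session:   # one shared session
        tasks = [simulate_one(session, sem) for _ in range(n_tasks)]
        # gather returns results in task order, though they *finish* in any order
        return await asyncio.gather(*tasks)

start = time.time()
loop = asyncio.get_event_loop()            # the scheduler from the slides
results = loop.run_until_complete(main())
print(f"{len(results)} results in {time.time() - start:.1f}s")
```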
Let's see what we can do with this. At this stage I'm the evil scientist who just got a whole bunch of resources, so: yay. The idea is, before we do anything serious — we're still over-engineering, after all — let's benchmark some things. We deploy the Lambda fresh, starting out clean, and then: boom, send a thousand requests at it. We just make them sleep, so we can really measure the overhead and check how long it takes before everything comes back.

You're seeing two situations here. On the x-axis is time in seconds; on the y-axis is the order in which things were sent — the first thing we sent out, the second thing, and so on — and the blue dots are essentially when each request was sent. You notice you can't just send everything instantly; there's a little bit of overhead on the sending side. However, the lower chart is the situation where the Lambda function was just created and we're sending it traffic for the first time. In that case it's not used to getting a thousand concurrent requests at once, so it has to spin up a whole bunch of stuff — that's all the magic Amazon does for you. You could do cheeky things like recording the IP address and the thread ID to try to reverse-engineer how Amazon does it; it's hard. One thing you do notice, though: the moment you've done this once, if you send a second batch of lots of requests, everything comes back in a much more predictable pattern. So there's a notion of a cold state and a hot state in these functions, but you mostly only pay for it the first time. Which is fine — I'm willing to wait once.

However, we did notice something. It's great that I can say "let's go run Sushi Go" and send a thousand requests from my laptop to the cloud, but there's going to be a lot of network overhead in that, which is not necessarily great. What might be better is to start some SageMaker or EC2 instance and run the command line from there. We were curious how big this overhead could be — and it's massive. The weird thing is that we're basically waiting for the entire batch to come back, so essentially you're waiting for the slowest request. If you have a lot of network overhead, it's quite sizable: you're waiting more than you're actually computing. And this would be the point where I'd usually give you a live demo, but I'm not going to ask the kind engineer upstairs to do all the typing. So here's what we get back when we run the command. Sorry it's static, but I can assure you it works. When you run this, you can see in the AWS monitoring that you're hitting thousands upon thousands of Lambda functions at the same time. The cool thing is that we measure how long everything takes on the Amazon side, and we can calculate how long everything takes on our side.

So this is the point where I started thinking: ah yes, let's really get going. You can call AWS, you can get more CPUs, this'll be great. However, there's this law — Amdahl's law. The idea is that if you have a program where 1% of the time nothing can run in parallel, because it has to sync, then even with thousands of cores the speedup is not going to be thousands. Take the naive numbers: if 1% of the time everything is syncing and I use about 60 cores, then effectively the speedup will only be about 40.
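Amdahl's law is a one-liner if you want to check those numbers yourself: the speedup with a serial fraction s and N workers is 1 / (s + (1 - s) / N).

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Amdahl's law: the speedup ceiling when `serial_fraction`
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

print(amdahl_speedup(0.01, 60))      # ~37.7 -- the "about 40" from the slide
print(amdahl_speedup(0.01, 10_000))  # ~99   -- never above 1 / 0.01 = 100
```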
So what I did is look at the output I had before: all the time spent on the Amazon side versus all the time spent locally. That gives me the probability that I'm syncing, and when I chart it, even with 10,000 cores this suggests I'd effectively only get 4,000. That got me thinking: maybe I shouldn't be too optimistic about the speedup. And when I applied the math a bit further, I came to an even more shocking conclusion. Think back to that chart: I'm not waiting for everything to come back, I'm waiting for the slowest thing to come back. And the more things I send, the longer that wait becomes — the slowest out of 10 is probably not as slow as the slowest out of 10,000. You can do a little math there: you look at the quantiles of the long tail of the latency distribution. It's a boring formula; you can take my word for it if you don't appreciate the maths. But when I put those numbers in the same chart, it gets bad. We even reach a stage where adding more cores actually makes everything worse, which is somewhat counterintuitive, because parallelism is supposed to help. It seems there's some optimum: I shouldn't send too much traffic, or I'll be waiting on overhead more and more.

So then you wonder: how can I make this faster? And this is an interesting problem — this is the whole thing about scaling. It's an optimization algorithm, not even a machine learning algorithm, but I'd like a way, preferably somewhat general, to deal with this, because then I can finally stop using my laptop and live my life in the cloud. If you think about it, this was our setup, and the place where everything goes wrong is not the green part but that red bit. Everything has to come back to some central controller, some central process where all the state gets handled — and that's the bottleneck. The red part has to wait until all the thousands of green parts are done.

Then you start thinking: if that's the way to look at it, maybe the best performance boost is to stop looking at this in batches and start looking at it as streams. How about I have a queue where all the simulations happen. I start by putting in all the members I want to simulate — that's the yellow bit going into the blue bit. My trusty old Lambda function listens on the queue and starts simulating. And instead of waiting for thousands of things to come back, I just say: whenever a new member comes back, is it among the 100 best things I've seen so far? If so, update the list of best things, generate a new member, and put it into the simulation queue. I can still do micro-batching — there's still stuff to optimize — but this is why some people say that concurrency is not parallelism; it may actually be better. Because I'm no longer working in huge batches, I no longer have the Amdahl's law issue I had before, or at least it's a whole lot better. And that's the part where the algorithm started hurting a bit.
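Here's a toy, local-only sketch of that streaming idea using asyncio.Queue — not our actual implementation: `simulate` fakes the Lambda call with a random sleep and a made-up score, and each worker breeds a new member back into the stream whenever a result lands in the top 100.

```python
import asyncio
import random

CARDS = list("ABCDEFGHIJKLMN")  # 14 placeholder "cards"

async def simulate(order):
    """Stand-in for the Lambda call: random latency plus a toy score."""
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return -sum(abs(i - (ord(c) - ord("A"))) for i, c in enumerate(order))

def mutate(order):
    child = list(order)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

async def worker(queue, best, keep=100):
    while True:
        order = await queue.get()
        score = await simulate(order)
        best.append((score, order))                   # is it in the top 100?
        best.sort(key=lambda t: t[0], reverse=True)
        del best[keep:]
        await queue.put(mutate(random.choice(best)[1]))  # breed a new member
        queue.task_done()

async def main(n_workers=50, seconds=2.0):
    queue, best = asyncio.Queue(), []
    for _ in range(n_workers):                        # seed the stream
        await queue.put(random.sample(CARDS, len(CARDS)))
    workers = [asyncio.ensure_future(worker(queue, best))
               for _ in range(n_workers)]
    await asyncio.sleep(seconds)                      # fixed time budget
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)
    print("best so far:", best[0])

asyncio.get_event_loop().run_until_complete(main())
```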
Yes — space bar? All right. It's a very interesting way of doing a talk. So okay, on the algorithm side I've considered a bunch of things, and there's a lot I like about this flow, but Bas also had some discoveries on how to optimize this system.

So, Chalice: it takes a lot of work out of your hands. It allowed us to get a Lambda function and API Gateway up and running within a few minutes. At one point we started thinking: maybe the call via the API Gateway to the Lambdas is a bit of overhead, and we want to call the Lambda function directly, without the API Gateway. That turned out to be a rather hacky road. Our main issue with the API Gateway was its limit on execution time, which is 30 seconds; after that, your call just times out. So if you make your simulations larger and larger, at some point they're going to take over 30 seconds and you'll start running into timeout problems. The maximum run time for a Lambda function itself, however, is five minutes, and we wanted to be able to use that. The problem is that a Lambda has no URL that's public to the internet; you cannot just call it directly. You can, however, use Boto — a library for calling the Amazon APIs — and via Boto you can call your Lambda function. But Boto doesn't do asynchronous calls, which is an issue when you want to run this kind of algorithm. There is a project on GitHub called aiobotocore, a kind of asynchronous version of Boto, but not everything is supported: there's a list of Amazon services you can call asynchronously, they're sort of half-tested, and there are warnings saying "use at your own risk". Also, the nice thing about Chalice and the API Gateway is that you just use the app.route decorator, it creates an API Gateway endpoint for you, and it routes your calls to your Lambda function; if you move away from the API Gateway, you have to do that routing logic yourself. So it's a lot of work to get this going.

Another point: what about cost? The cloud is pay-for-what-you-use, and we wanted to know — even if it was our boss's credit card — how much this experiment costs. Lambda pricing consists of two parts. First, you pay per invocation of your Lambda function: $0.20 per one million requests. Second, you pay for the time your Lambda function runs, in blocks of 100 milliseconds; the price you see here is the cheapest tier you can get for a Lambda. There's a switch where you can allocate more memory, which gets you more CPU, but also makes it more expensive. With this formula, making one million requests to Lambda costs about $21. So if you were extremely naive and brute-forced all the possible combinations of your Sushi Go order, it would cost a lot of money, and in that case just getting a big fat machine to do all the hard work might turn out cheaper. But we also discovered that this algorithm converges very, very fast — you don't need to brute-force all two billion combinations. With a reasonable number of simulations, about half a million, you're pretty confident you have the optimal order of cards, and that ends up costing about $10.
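The arithmetic behind those numbers, with one assumption made explicit: the per-request and per-100 ms figures are the Lambda list prices mentioned above, and we take each call to run for about 10 seconds — an assumption on our part, but it's what makes the $21-per-million figure work out.

```python
PRICE_PER_REQUEST = 0.20 / 1_000_000   # $0.20 per million invocations
PRICE_PER_100MS = 0.000000208          # cheapest (128 MB) tier, per 100 ms block

def lambda_cost(n_requests, seconds_each):
    blocks = seconds_each * 10          # billed per 100 ms block
    return n_requests * (PRICE_PER_REQUEST + blocks * PRICE_PER_100MS)

print(lambda_cost(1_000_000, 10))  # ~$21.00 for a million 10-second calls
print(lambda_cost(500_000, 10))    # ~$10.50 -- the "about $10" run
```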
For comparison, say you get a big fat machine with 64 CPUs, somewhat equivalent to what you get with the Lambdas. Running that for 24 hours in Amazon costs a bit more: approximately $76.

About that switch in the Lambda user interface: you can allocate more memory, and with it you get more CPU power, which is nice because your simulations run faster — and this is a very CPU-bound task. Next slide, I think. Yeah: using the slowest possible Lambda, our simulations took about 20 seconds. If we pushed the switch to the maximum, it turned out our Lambda ran four times faster — somewhere between three and a half and five seconds. That's nice, but there's a trade-off: in this example you pay 10 times more for just a 4x speedup. We did find an optimum where our Lambdas ran twice as fast at double the per-block price — so you pay nothing extra overall, but you do get a 2x speedup, which was a nice discovery. So you have to do some trial and error with your Lambda functions to find the optimum between price and performance. Another thing: the default maximum number of concurrent Lambda functions is 1,000. You can't raise that beyond 1,000 yourself, but if you're nice to Amazon and willing to explain that you want to brute-force a card game, maybe they'll raise the limit for you.

So, some engineering conclusions. Lambdas: this isn't really their intended use case, but it's nice that you can basically outsource your CPU power to the cloud, and you can combine this with asyncio. Asyncio is a great fit for IO-intensive tasks; if your CPU is going to run hot doing heavy crunching, that's typically not a good use case for async — unless you can outsource the crunching to, for example, Amazon, and then run thousands of simulations concurrently and just wait until they come back. During our benchmarking we discovered there's a big difference between hot and cold functions. Lambdas have this hot and cold state: if you call a Lambda from a cold state, it has to spin up first, and then it remains hot for a couple of minutes. So if you rerun your benchmarks, you'll get different results depending on the state of your Lambda functions. Also, a generic point about these genetic algorithms: state is a hard thing to handle. If you run many, many simulations at the same time, you need some way to track your state, and with lots of processes running simultaneously you might run into bottlenecks in your algorithm. And programming with asyncio can be quite nice: the basics are easy to get right, but once you go a bit deeper, the details are hard. Sometimes you've missed an async or an await keyword somewhere, your code runs mysteriously slowly, and these issues are hard to debug.

There are also some scientific conclusions. Yes — so, my girlfriend is watching this over the stream, and I suppose you're all wondering: wow, you've spent so much time optimizing all of these things...
...we're doing concurrent things and algorithms, it's NP-hard, and we've totally scaled everything out — so obviously, Vincent won. Nah. In fact — kind engineer, if you could move your mouse cursor to the middle of that screen and click, you should see a play button. Yeah. My girlfriend made me show everyone that. Could you press it one more time, just for good measure, please? Ah — whoops, back, left arrow, left arrow. Press it one more time, please. Right. So good — my girlfriend, who's watching, is happy. Space bar, next slide, please.

Yes, so the truth is — I mean, we loved doing this, and we ran both the batching and the streaming version, we totally played around with lots of cores, and seeing the numbers go up was fun — but unfortunately we found out, after implementing everything and optimizing a fair bit, that the algorithm tends to converge after two iterations anyway. Still, we had a lot of fun, and we learned more from the road than from the destination, because I really do think we've got a nice little pattern here: if you want to do more grid-search kind of stuff, this is actually somewhat powerful, and the costs behind it are genuinely interesting.

So, the conclusion: the squid is the best card in Sushi Go. Note, though, that our approach is immensely naive. The game has a rock-paper-scissors element in it, and you're not going to capture that with a single ordered list of cards. Nevertheless, our approach scales well and it's fairly cheap. We might still have a bit of a problem with premature optimization — I think that's fair to mention. But once again, we really did have a lot of fun with this. We learned a lot about serverless, we gained a concurrent grid-search pattern, and we learned a lot about async. So if I can give you any advice before you leave today: spend some time with your colleagues, find a very silly problem, and optimize everything. I'd also like a round of applause for the kind engineers pressing all the buttons upstairs. And for those who weren't here earlier: someone was giving a talk here and things were completely failing for her. Do reach out to the previous speaker, because it was a bummer that so many things failed.

We're open for questions — I think we've got time for a couple. If you can come down to the microphone... do you have a question? Well, if you shout the question, we can also just repeat it.

Hi, that was super entertaining, thank you. Just a quick question, actually. I really love what you were saying about how you have a free Friday each month to work on stuff, and I just want to hear a bit more about that. — I want to emphasize that we agree. — Yeah, I was going to say I'm going to try to make the pitch when I get back to work. So do you mind expanding a bit on how it works, and what good things have come out of it? Obviously this is one, but anything else? — All right, I'll take that one. Once a month — every four weeks, actually — we have this Friday where you don't work for any clients. Everybody comes to the office, and you're basically allowed to do whatever you want, as long as it's somewhat work-related.
There are no mandatory requirements or anything — just go and hack away. Sometimes people want to pick up a new programming language and they read a book; sometimes people hack away on a project like this one. There are really no requirements whatsoever. Sometimes fun things come out of it, and sometimes you hack away for a day and think, whatever I've done today, nothing's working — that can also happen. But it's always fun to learn new things on these days. And if I can recommend anything for when you make the pitch to your employer: look at what we got out of this. We spent a fair amount of time on it, but we learned a new programming paradigm, we taught ourselves a bunch of new cloud tools, and we got a cool conference talk out of it. That's still a return on investment. And you allow yourself that extra bit of creativity, to work on things you wouldn't otherwise touch, which might actually save the day someday. And if my girlfriend ever wants to play another card game, I'm totally prepared this time.

Okay, I think we've got time for one more question. — Quick question: would you say that you failed because the modeling of the game was flawed? If you changed the implementation of the game in Python, would it work better? — That sure didn't help. But again, one thing to keep in mind: I'm not allowed to use a laptop while playing the card game, right? That's a hard limit. So the only thing the algorithm can do is teach me some heuristic, maybe something I don't know. And sure, you could go for the deep reinforcement learning thing, et cetera, but my girlfriend would no longer want to date me if that's how we spent our Sundays. But definitely, if you really want to do this properly, there's a bunch of stuff you could do. And by the way, if you move the algorithm to streaming, the whole genetic portion changes a bit: you no longer have a population, so there are some numerics you have to rework. — Okay, I think we're going to have to wind it up there. Thanks once again, Vincent and Bas. Thank you.