Thank you all for being here today. I'm really happy to be here, and today I would like to talk about a very specific and technical subject: how to train, deploy, and use deep learning on an edge device. But this title is a bit misleading, because I could have rephrased it as: why is it so hard to actually make money with, or even just use, an artificial intelligence application? First of all, what are we talking about when we talk about artificial intelligence? And I need to thank the previous speaker for that, because his talk is quite a good introduction to mine. The thing is, it's pretty hard to give a formal definition of artificial intelligence. Actually, there is none. But if you think of it as a context, there are a couple of words that are often associated with artificial intelligence. In the previous talk, one of them was deep learning. We hear that a lot: deep learning and artificial intelligence come up together all the time. You can also hear that it is used for recommendation systems, for example. And if you step back a couple of decades, artificial intelligence was all about data mining. That term is a bit outdated now, but it was fashionable at the time. Today we talk more about supervised and unsupervised learning, and also reinforcement learning, which was mentioned before. And you can apply them to many different fields, such as predictive maintenance, chatbots, computer vision, and so on. These sound like buzzwords, right? But if you look at them, you realize that you can split them into two families. On the left-hand side, you have the math: the tools that help you do the actual task. And on the right-hand side, you have the task itself. And when you look at the task, you realize that if you were to do it yourself, you would need to think. You would need to use your brain.
So basically, whenever you use math, and machine learning in particular, applied to a specific use case that requires cognitive load, that is artificial intelligence, at least in the scope of this presentation. In other words, artificial intelligence is just an application of machine learning. Knowing that, if you want to build your own machine learning application, you probably think you're going to spend your resources this way: a lot of time and a lot of money to hire top-notch data scientists to work on an excellent model, to make sure it has the greatest performance ever. And once the model is trained, you'll add some wires and pipes to make sure it runs on an information system, maybe a small web application to serve the model to the end user, and, if you feel like it, a bit of monitoring, just to know whether the model is doing well and whether it's being queried a lot. Well, this is quite a misconception. According to Google, this is what a machine learning system actually looks like, and it's quite striking that the teeny-tiny black tile in the middle is the only pure machine learning code, while the rest of it is not about machine learning at all. And, you guessed it, the sizes of the tiles are related to the complexity of the task each one represents. You can see three different types of tasks: yellow is machine management infrastructure, dark blue is data handling and data transformation, and light blue is ops, that is, how you run the actual application. So it's counter-intuitive, but a machine learning application is not solely about machine learning itself; it is very much about code. I would like to illustrate this with a use case, using a Raspberry Pi, which is this small computer here.
Raspberry Pis are really interesting computers because they're cheap, 35 euros, and their capabilities are roughly those of a desktop computer from the beginning of the century. So you can do a lot of stuff with them, and they come with a lot of accessories that you can plug in, such as this digital touch screen. The use case is this: I want to draw something on the screen, and I want the Pi to tell me whether it's a drawing of a car or not. And to do so, I want the Pi to use deep learning, a neural network. Pretty simple, and I'm not going to become a millionaire with this idea, but hopefully it's enough for you to understand what I'm trying to say. So first and foremost, since I said I want to use deep learning, I need to train a model. And I'm not going to lie to you: data availability is a real constraint in machine learning, so the data availability shaped the use case, and not the other way around. I use the data from Quick, Draw!. This is a Google application that asks you to draw something on your phone, and you have 20 seconds to do so: maybe a car, maybe the Mona Lisa, maybe the Great Wall of China. Google stores all the drawings together with the label, that is, what you were supposed to draw. They open-sourced the dataset a couple of years ago, and it is great-quality data if you want to play around, especially with drawings. So I do have the data, but now I need to train the model, and training a model is really expensive in terms of computing power. I tried to train the model on the Raspberry Pi itself, and the board basically burned. So you need computing power on demand, because you don't want to spend too much money on a machine you're only going to use for a couple of hours, so you just go to the cloud; that's how it goes today.
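To give an idea of what working with this dataset looks like, here is a minimal sketch. It assumes the published "numpy_bitmap" format of the Quick, Draw! dataset, where each category is a `.npy` file and each row is a 28x28 grayscale drawing flattened to 784 values; the file path and function names are illustrative, not the speaker's actual code.

```python
import numpy as np

def load_category(path, label):
    """Load one Quick, Draw! category file and return (images, labels).
    Assumes the numpy_bitmap format: one row per drawing, 784 values."""
    flat = np.load(path)                          # shape: (n_drawings, 784)
    images = flat.reshape(-1, 28, 28, 1) / 255.0  # rebuild 28x28, normalize
    labels = np.full(len(images), label, dtype="int64")
    return images, labels

# Stand-in for a real file, just to show the shapes involved:
fake_file = np.random.randint(0, 256, size=(10, 784)).astype("uint8")
images = fake_file.reshape(-1, 28, 28, 1) / 255.0
print(images.shape)  # (10, 28, 28, 1)
```

In practice you would load two such files, say the "car" category with label 1 and a mix of other categories with label 0, to build a binary training set.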
You go to AWS, or you can use any cloud provider you want. I chose AWS because the application is Google's, so it kind of evens the debate. It's pretty easy to get the right machine to train your model within minutes. Last but not least, I'm not going to code the entire training process by hand; it's really hard, it's really cumbersome, and so many people have done it before through frameworks. The one I use is Keras on top of TensorFlow. There are many of them: you can use PyTorch or MXNet if you want. They help you write the training process. And within 20 minutes and 50 to 60 lines of code, I have my model, trained and working well, telling me whether a drawing is a car or not. So it's pretty fast and pretty easy. But that's just the first phase, because I said I want to use the model on the Raspberry Pi, so somehow I need to put it there. You might think the model is easy to deploy, that I just need to download it onto the Pi, but that's not the case. Unfortunately, a neural network, and this is also true for other kinds of models, is not a completely abstract thing that can run anywhere. It carries a sort of footprint of where it was trained and what type of machine was used. So I can't just download the model onto the Pi. I decided to do something quite difficult, in fact: I decided to simulate a Raspberry Pi on my computer, and on that simulation, like the simulations we mentioned before, I downloaded all the stuff that I need: the languages, the libraries, the frameworks, and the model itself. And once I'm sure it runs on my computer, I package the whole image onto an SD card, plug it into the Raspberry Pi, and it fires up. The funny thing is, it took me 20 seconds to explain how I did that. But it took me two months, more than 500 lines of code, and three different pieces of software.
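The 50-to-60-line training step could look roughly like the following sketch with Keras on top of TensorFlow, the frameworks named above. Everything here (the network architecture, the input size, the stand-in random data) is an assumption for illustration; the speaker's actual model and data pipeline are not shown in the talk.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Small CNN for a binary "car drawing or not" classifier.
# The 28x28x1 input matches the Quick, Draw! bitmap format (an assumption).
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # P(drawing is a car)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stand-in data; in practice these would be Quick, Draw! bitmaps and labels.
x = np.random.rand(64, 28, 28, 1).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))

model.fit(x, y, epochs=1, batch_size=32, verbose=0)
probs = model.predict(x, verbose=0)  # one probability per drawing
```

On a cloud GPU machine, training something like this on the real dataset is indeed a matter of minutes, which is the point being made: this part is short and well supported by the frameworks.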
So, measured in lines of code, deploying was ten times more difficult than just training the model. But still, I managed to do it. And there's one last step, because if you want to use the model on the Pi, you need to build something around it. You need to take the screen into account, capture the drawing, run it through the model to tell whether the drawing is a car, and make a decision: if it's a car, you serve that answer to the end user, and if it's not, likewise. So you still need to write an application, a sort of wrapper around the model, and this application on the Pi is also a couple of hundred lines of code, which makes it five to six times more complicated than just training the model, if you measure difficulty in lines of code. But still, I made it work. And hopefully you can now see that it's not just about the model. So why is the application not just the model itself? Well, look at the value chain: every time I query my model, I want an answer, and from the end user's point of view, that value chain is quite long. The first step is actually quite simple, but it stands a bit apart. You need an environment with the data, the infrastructure, and the frameworks to train your model and produce this artifact, this object. This part is all about data collection and verification, feature extraction, and a bit of machine resource management too, because you'll be on the cloud and you need to operate machines there. And of course, there's a bit of machine learning code, the 50 lines I mentioned before. From this point on, I still need to deploy the model, and you will see that this has nothing to do with machine learning. I said I need to put it on the Raspberry Pi somehow, and this part is all about configuration and process management tools.
Configuration, because I need to make sure that the environment on the Raspberry Pi is the same as the training environment. And this is quite difficult, actually; those are the 500 lines of code. Once that's done, you can start the value chain from the user's point of view, where you want a decision about the drawing itself. The user makes a drawing on the screen, and you want the application to capture this drawing, transform it, and extract the features the model requires to make the actual prediction. This part has nothing to do with machine learning itself; it's all about data collection, verification, feature extraction, and a bit of resource management, because you need to drive the screen and maybe other applications, and make sure the different tools work together. Once that's done, there's a loop between the application and the model. The application queries the model with the features you want a prediction on, and you get back an output that, most of the time, is a probability. This part is really important: it's all about serving infrastructure, that is, how the application communicates with the machine learning model. Here on the Pi it's on the same machine, within the same process, but it could be on a different machine. Maybe your model is reachable through an API; maybe several applications query the same model on a single machine. So this part is really important, and there's also a teeny-tiny bit of machine learning here, because the actual prediction is made by machine learning code. In Python, and in most programming languages, it's just one line of code, a function called predict.
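The application-side loop described above can be sketched as follows. Every name here (StubModel, preprocess, decide) is an illustration of the steps in the value chain, not the speaker's actual code; the stub stands in for the trained network, and the 60%/30% thresholds are the example numbers used in the talk.

```python
import numpy as np

class StubModel:
    """Stands in for the trained network. The real call is the same
    one-liner the talk mentions: model.predict(features)."""
    def predict(self, features):
        return np.full((features.shape[0], 1), 0.72)  # fake probability

def preprocess(drawing):
    """Data transformation / feature extraction: turn the raw screen
    capture into the tensor the model expects."""
    canvas = np.asarray(drawing, dtype="float32") / 255.0
    return canvas.reshape(1, 28, 28, 1)  # add batch and channel axes

def decide(prob_car, high=0.60, low=0.30):
    """Turn a probability into something you can serve to the user."""
    if prob_car >= high:
        return "it's a car"
    if prob_car <= low:
        return "it's not a car"
    return "not sure, please draw again"

model = StubModel()
drawing = np.zeros((28, 28))                     # what the screen hands us
features = preprocess(drawing)                   # feature extraction
prob_car = float(model.predict(features)[0, 0])  # the ML one-liner
answer = decide(prob_car)                        # serve a decision, not a number
```

Notice how little of this is machine learning: one line queries the model, and everything else is capture, transformation, and serving, which mirrors the proportions in the value chain.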
So once you have the output, the probability, this is not the end, because you're not going to serve a raw probability; the user wouldn't know what to do with it. The application needs to make a decision from this probability. Say the model gives a 60% chance it's a car: I'm going to say it's a car. A 30% chance: I'm going to say it's not a car. And in between, I'm going to say: well, I don't know, make another drawing. Then you serve this information to the user. So you can see that all five of these steps are part of the value chain, and that this huge part of the value chain on the right-hand side has nothing to do with machine learning. And it's especially difficult because all those steps have different life cycles, yet they're all part of the same value chain, so you need to make sure you combine them the right way. And to do so, you need to automate those steps with code, and not only machine learning code. I would like to finish with that. I have two pieces of advice for you today if you want to build machine learning applications. First, you need code everywhere: automate the entire value chain, not only the training part of it. And second, if you want to build this kind of application, start with the right-hand side of the value chain. Even if the model is not good, even if it's a trivial model that always says it's a car, do it, so that you integrate the model into your production environment; then you can iterate on the model afterwards. If you do this, you're going to be successful at building machine learning applications. Lastly, I do have the Raspberry Pi here, so if you want to play around with it, I'm going to be sticking around; just feel free to ask me. Thank you.

Fascinating talk. Tell me more about the Raspberry Pi car races.
How do they come about, and how do you train the cars to compete?

That's great, actually. It started in the US, and in France we just copied the US. The goal of the competition is to get a small toy car to drive autonomously around a track: you need to do three laps as fast as you can, and to do so, you have to use deep learning. Those are pretty simple rules, so you can do a lot of stuff around them. And it's pretty cool; it's been running for two years now. You can have different strategies: you can train your model with real-world data, or, as I decided, go with simulation, and I'm also working a lot on reinforcement learning. What's cool is that the competition is just an excuse. It's a way of bringing people together, discussing these topics, and making sure we have fun working on machine learning.

Awesome. Sounds fun. Thanks, Council. Thank you. Great talk. Thank you.