So now that we have learned what AI is, what deep learning and neural networks are, and what we can do with the different kinds of machine learning, we'll dig a little deeper into artificial neural networks, the deep learning part, and explain the position of Cube.AI, what ST is proposing, and what you will have to learn, either by yourself or with partners. I think it's important to keep this in mind. These are the five defined steps, and the first one is the capture of the data. Of course, before that, and it was quite interesting from the presentation, you have to think about what you want to do; it's not just for fun. This is, I think, the most important part: to understand the target. Then you will have to verify whether you reached what you were aiming for, and otherwise you will have to change things. But once you have the idea, you have to capture the data, and capturing data sounds easy: you just record something and that's all. No, you have to take care about how you are recording the data and under which conditions. I can take the example we faced with the audio scene classification, the small demo you were playing with before. We had to record hours and hours of indoor, outdoor, and in-the-car sounds. At the beginning things worked well, because the room was a medium-sized room, but then we had to try in a bigger room, then in some specific rooms. That means we had to capture a lot of data and remember under which conditions we captured it, so that in case of bad training we could come back and modify or change all of this. So it was important: capturing is quite simple, but it is fundamental.
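The point about remembering the recording conditions can be sketched as a small metadata scheme attached to each clip. This is only an illustration with hypothetical field names, not the format the actual function pack uses:

```python
from dataclasses import dataclass

@dataclass
class Recording:
    """One captured clip plus the conditions it was recorded in."""
    path: str          # where the raw audio is stored
    label: str         # "indoor", "outdoor", "in_car"
    room_size: str     # "small", "medium", "large" - matters for acoustics
    device: str        # which microphone/board captured it
    notes: str = ""    # anything unusual (motor moved, window open, ...)

# Build a dataset log so a bad training run can be traced back
dataset = [
    Recording("rec_001.wav", "indoor", "medium", "SensorTile"),
    Recording("rec_002.wav", "indoor", "large", "SensorTile"),
    Recording("rec_003.wav", "in_car", "n/a", "phone_mic"),
]

# If large rooms turn out to be under-represented, we can see it immediately
large_rooms = [r for r in dataset if r.room_size == "large"]
print(len(large_rooms))  # -> 1
```

With such a log, "come back and change the capture" becomes a query over conditions instead of guesswork.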
Because if you are capturing bad or improper information, maybe because you subcontracted it and the person did not really take care, or you are recording a motor which is normally positioned one way, but someone cleaned the room and the motor is no longer in exactly the same position, then you may get into trouble. This is the first thing. Then, you are capturing the data using sensors, but in the final application you will maybe use other sensors, so there may be discrepancies between them. You have to think about this. It may be wise to use the same sensor for the capture and for the final application, or else to stay generic. These are things you have to think about, because maybe you will record using a normal microphone, and in the final application you will use a MEMS sensor, which has a different resonance. So really, it's not only recording the data, it's how you are recording the data. The next step is to clean the data, to be sure you are not keeping material you don't want, or adding unusual noise. Then, specifically for classification, you have to label the data, and then prepare and think about the neural network topology. The third step is to train the NN model using, as we have seen, TensorFlow or similar big frameworks. For this part there are no ST tools; I will show you, because there are plenty of tools on the web, and our big partners are creating their own tools that you can use. Point number four is then to convert it.
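The cleaning step can be sketched as a simple filter that rejects clips which are near-silent (nothing was actually recorded) or clipped (the microphone was overdriven). The thresholds below are illustrative, assuming samples normalized to [-1, 1]:

```python
import numpy as np

def is_usable(samples: np.ndarray,
              silence_rms: float = 0.01,
              clip_level: float = 0.99,
              max_clip_ratio: float = 0.01) -> bool:
    """Reject clips that are almost silent or heavily clipped."""
    rms = np.sqrt(np.mean(samples ** 2))
    if rms < silence_rms:            # nothing was actually recorded
        return False
    clipped = np.mean(np.abs(samples) >= clip_level)
    if clipped > max_clip_ratio:     # microphone was overdriven
        return False
    return True

good = np.sin(np.linspace(0, 100, 16000)) * 0.5            # healthy signal
silent = np.zeros(16000)                                   # dead recording
clipped = np.clip(np.sin(np.linspace(0, 100, 16000)) * 10, -1, 1)

print(is_usable(good), is_usable(silent), is_usable(clipped))
# -> True False False
```

Running every captured clip through such a gate before labeling avoids training on data you never meant to record.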
Once you have a trained neural network, you will convert it into C code in order to be able to run it on the microcontroller. The fifth step is to process and analyze: see if you have any discrepancy between the theoretical model and the real model, because don't forget that you are converting, which means the result may not be 100% identical to the theoretical model's result. That's the first thing. Also, while converting, you will maybe want to compress, because there are options to compress the model in order to fit in less memory, and then you will have to check whether you are still in line. This is also an important step. And the final one is, of course, to check whether you reached what you planned to reach, which was your initial target. So these are the five identified steps for implementing, or trying to implement, a neural network in a microcontroller application. Let's go a little deeper into what ST is proposing as tools. We have some tools on the table: the tools we are proposing so you can get started, to begin to walk, to begin to develop. They are maybe not the tools you will use at the end; you will maybe develop your own tools, or use tools from our partners. Again, if you're interested in tools, ask the partners, because they have also developed their own tools and they may help you with them. So ask them, please. To capture data, we have tools using the mobile phone, which also help to clean and label. At this step, you will also have to think about the NN topology; we are not helping you with the neural network topology, but you have to think about it.
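The conversion and compression discrepancy mentioned here can be illustrated numerically: quantize the weights of a tiny dense layer to 8 bits and measure how far the output drifts from the float version. This is a numpy-only sketch of the effect, not the actual Cube.AI validation procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4)).astype(np.float32)   # "trained" weights
x = rng.normal(size=(4,)).astype(np.float32)     # one input sample

# Symmetric 8-bit quantization of the weights
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)

y_float = W @ x
y_quant = (W_q.astype(np.float32) * scale) @ x   # dequantized compute

# The metric to watch: how far the converted model drifts from theory
max_err = np.abs(y_float - y_quant).max()
rel_err = max_err / (np.abs(y_float).max() + 1e-12)
print(rel_err)   # small but non-zero discrepancy
```

The error is bounded and usually tiny, but it is not zero, which is exactly why step five (validate against the theoretical model) exists.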
For the steps in the middle, as I said, there are no ST tools; you will have to go to the partners, learn Python, and use tools which exist on the web. For the fourth and fifth steps, it's Cube.AI. So finally, the Cube.AI we are proposing is there. Cube.AI, which is part of CubeMX, an option in CubeMX, is there to convert the neural network structure, with its weights, into C code, or let's say optimized code. We'll present it. So that means you get C code, and it also helps you verify whether the converted code is, in terms of reliability, at the same level as the original. Both in theoretical form and in practical form, directly on the target, you will be able, using these tools, to check whether you have a discrepancy between the theoretical and the practical model. I think it's quite important, because at the end you have to have a working model close to the theoretical one. Finally, to process and analyze, these are the boards we have. Let me present the different ones. We have a very small board called SensorTile, including an SD card slot, so you have the possibility to insert an SD card and record a lot of material. It's the one we used, for instance, for the audio scene classification, because it was small: we put it everywhere, a lot of ST people were wearing it and recording a lot of material. It's nice. The only missing point for us today in the workshop is that there is no ST-LINK on it, so you would normally have to add an extra component; that's the reason why we did not select this board, but it exists. Then we have the next generation of this board, including IoT features: the SensorTile.box.
It is also water resistant and includes Microsoft Azure access and so on, so much more IoT-oriented, but again with an SD card, so you have the possibility to record a lot of data. And finally, this board, the one we have today, which is quite good for IoT and quite good because it has an ST-LINK on board, so you can program it directly. The only drawback for artificial intelligence, for what we are doing today, is that it only has a quad-SPI flash of one megabyte, which is not much: for audio recording, for instance, that's roughly one minute of recording. So it's not the best board for this kind of work, but it's a good board to play with, to understand the principle, and this is what we will do today. If you really want to record a lot of material, either use a partner's board or use a board with an embedded SD card. This slide is about the combination of sensors and STM32. As I mentioned this morning, ST is also providing smart sensors: what we call smart sensors, or MEMS sensors with a machine learning core embedded in them. With decision trees and machine learning in several stages, you have algorithms you can train and a certain amount of memory, so you can do some of the work there, which can be good as pre-processing. For example, you have an accelerometer and you want to detect if something is falling, in order to switch off the system, protect it, or save everything. Maybe you can use this sensor, because it is good enough to detect that something is falling. And the good thing is the consumption: because everything is implemented in the sensor, you will have lower consumption, and it will be done on the fly. So this is something we are proposing. But if you want something more complicated, you will need more.
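The fall-detection example can be sketched with the usual free-fall signature: while an object falls, the magnitude of the measured acceleration drops toward zero g. The thresholds and sample data below are illustrative, not the sensor's actual machine-learning-core configuration:

```python
import numpy as np

def detect_free_fall(acc_xyz: np.ndarray,
                     threshold_g: float = 0.4,
                     min_samples: int = 5) -> bool:
    """acc_xyz: (N, 3) accelerometer samples in units of g.
    A fall shows up as |a| staying well below 1 g for several samples."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    below = magnitude < threshold_g
    run = 0
    for b in below:                 # look for a run of low-g samples
        run = run + 1 if b else 0
        if run >= min_samples:
            return True
    return False

resting = np.tile([0.0, 0.0, 1.0], (50, 1))        # 1 g on the z-axis
falling = np.vstack([resting[:20],
                     np.full((10, 3), 0.05),        # near 0 g: free fall
                     resting[:20]])

print(detect_free_fall(resting), detect_free_fall(falling))
# -> False True
```

A check this cheap is exactly the kind of decision-tree-style logic that fits inside the sensor itself.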
But then, if you combine it with an STM32, you are adding the deep learning or neural network part, and you can combine both: you can first detect in the sensor that something is falling, then switch on the STM32, start the neural network, and then determine whether it is really falling and what exactly is happening, under which conditions. So you can combine both. For collecting the data and architecting an NN topology, this is where we have selected partners. Of course, they can help with more, because they have experience with the steps before and after; it depends on the partner. We have a dedicated website for AI on STM32, and on the website you have selected partners, meaning that we were in communication with them, they know ST, and they know what they're talking about. To clean, label, and work on the topology, we have the Bluetooth sensor mobile phone application you have already installed and played with. The code is open, so you can find it on the website and modify it slightly; it is given to you as a starting example to define or develop your own application. For training the neural network model, the step in the middle, we already mentioned TensorFlow and Keras. We are also compatible with TensorFlow Lite for Microcontrollers, the TensorFlow dedicated to microcontrollers, which generates neural structures that fit better with microcontrollers and microprocessors. So we are compatible. And there are Lasagne and Caffe; TensorFlow and Keras are more the Google side, and the rest are the other big names that are well known.
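The combination described here, a cheap always-on detector in the sensor that wakes the MCU's heavier classifier, can be sketched as a two-stage pipeline. Both stages are stand-ins: on real hardware, stage one would run in the sensor's machine learning core and stage two would be the neural network on the STM32:

```python
import numpy as np

def mlc_stage(acc_norm: np.ndarray) -> bool:
    """Stage 1 (sensor side): cheap threshold check, always on."""
    return bool(np.any(acc_norm < 0.4))   # possible free fall

def nn_stage(acc_norm: np.ndarray) -> str:
    """Stage 2 (STM32 side): heavier analysis, only runs when woken.
    Stand-in for the real neural network inference."""
    low_g_ratio = np.mean(acc_norm < 0.4)
    return "fall" if low_g_ratio > 0.1 else "bump"

def pipeline(acc_norm: np.ndarray) -> str:
    if not mlc_stage(acc_norm):    # MCU stays asleep: saves power
        return "idle"
    return nn_stage(acc_norm)      # MCU wakes up and classifies

steady = np.ones(100)                    # |a| = 1 g throughout
drop = np.concatenate([np.ones(50), np.full(20, 0.05), np.ones(30)])

print(pipeline(steady), pipeline(drop))
# -> idle fall
```

The design point is that the expensive stage runs rarely, which is where the power saving comes from.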
And the conversion tool was already mentioned, and then the analysis for the deep learning. This is the Cube.AI roadmap, because for what we are proposing, as I said, on the STM32, and maybe I did not mention it to everybody: what we need for Cube.AI is at least a Cortex-M4, so Cortex-M4 or Cortex-M7, because they have an integrated floating-point unit. And this is one reason, of course, we have partners: there are already algorithms which also run on M0 and M3, because it's also possible; it's just a question of inference time, a trade-off between accuracy and speed. If that's enough for you and you have a simple structure, you can also do it on a small microcontroller. But Cube.AI at the moment is only supporting Cortex-M4 and Cortex-M7. It has supported floating point since the beginning of 2019, and supports, of course, the new boards and also quantization. TensorFlow for Microcontrollers, which is also called TensorFlow Lite: we are already supporting it. I think it's supported since the next version, not the version you have, because for this workshop we froze a version just to be sure that everything runs well. But if you take the next version, which is already available, the current one, it supports this. And we are adding layers: we are supporting more and more layers. The next step, in 2020, is to support the ONNX format. What is ONNX? It's everything except Google. Google is generating its own specific format, and all the other companies are also generating ONNX, a unified format for this kind of model. We will support it in 2020 because it will be finally defined, or finalized, in 2020. That's the only reason. This is the roadmap for Cube.AI. And again, what I want to explain is that what we are proposing is not only a toolbox.
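Quantization, mentioned in the roadmap, typically maps float values to 8-bit integers with a scale and a zero point. Here is a minimal sketch of the affine scheme used by TensorFlow Lite-style tooling (simplified; real toolchains also quantize activations and use per-channel scales):

```python
import numpy as np

def quantize(x: np.ndarray):
    """Affine uint8 quantization: x ≈ scale * (q - zero_point)."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale, zp = quantize(x)
x_back = dequantize(q, scale, zp)

# 8 bits instead of 32: 4x smaller weights, at the cost of a bounded error
print(np.abs(x - x_back).max())
```

This is why quantization matters on a microcontroller: it cuts flash and RAM usage by roughly 4x while keeping the error within one quantization step.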
So that means you have some tools, software tools and hardware tools. You have a community, and all the function packs; we are using one function pack today, the audio scene classification, or the sensing pack, let's say. We are providing the audio and the motion examples, so almost everything is inside the package you have. The goal is that if you want to start from scratch, you can reuse the package and then, step by step over the five steps, redo it by yourself. This is our goal: to give you a little confidence so that you can then develop your own neural network. We are providing everything in source code, and you have a dedicated community. This is the function pack, the AI sensing package we are providing. Inside this package, once you download it, you will find documentation and all the material you need in order to develop the neural network from scratch. This is the pack's scope: it's educational, so it's for you to learn from. It's based on a simple development board, and it's something you can do at home. It also includes power optimization, because if you are using a microcontroller, it may be because of the price, but it's also because of the optimized ultra-low-power capability; for a neural network this is quite important. These are the sensing parts. The only library which is delivered as a binary and not open is the AI NN library from ST. Why? Because, as you can imagine, it targets Cortex-M4 and M7, and there are a lot of players on the market we don't want simply reusing it. But this is the only thing which is in binary; normally you just have to use it, and we'll explain how. The rest is open, and this is all that the package offers to you: firmware over-the-air updates and so on. We are starting with hardware, supporting different boards, including the board you are using.
Then we have the whole hardware abstraction layer, like in all STM32 packages. In the middleware, we are proposing some pre-processing: for the audio, the log-mel audio pre-processing, and for the accelerometer, gravity removal and related processing. So we have some algorithms dedicated to pre-filtering or pre-processing the data. Then you have the standard package; again, this is the example you have, the example we are using. There are different paths. The first path is to record new data, because in your case you will have everything open: you have the whole package, you have all the recorded information, and maybe your idea will be to optimize it. If you want to optimize it, you can, no problem, because everything is open. You just have to use the embedded audio of the board, record using the apps, save it on the SD card or somewhere else, and then retrain the system. So you have this path. The second path is what we did this morning: the application itself. You are indoors, outdoors, or in the car, and it is detected automatically; this is the audio scene classification. With the same sensor, the same board, and the same application, we also have human activity recognition based on the accelerometer. What are we detecting? It's a five-class package: whether you are doing nothing (standing), walking, running, bicycling, or in a car, using the same setup. And you can have both running at the same time: with the new package, the board can detect both simultaneously. If you are in a car, we will detect "in the car" and what is happening inside the running car. Okay, so just to finish with Cube.AI: this is the tool, and again, it's doing five percent of the job.
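The log-mel pre-processing mentioned for the audio middleware can be sketched in a few lines: frame the signal, take an FFT power spectrum, pool it into mel-spaced bands, and take the log. This is a simplified stand-in for the actual library (real implementations use triangular overlapping filters; here each band just averages its FFT bins):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def log_mel(signal, sr=16000, n_fft=512, hop=256, n_mels=30):
    """Very simplified log-mel spectrogram."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2      # (T, n_fft//2+1)

    # mel-spaced band edges mapped back to FFT bin indices
    mel_edges = np.linspace(0, hz_to_mel(sr / 2), n_mels + 1)
    hz_edges = 700.0 * (10 ** (mel_edges / 2595.0) - 1.0)
    bins = np.floor(hz_edges / (sr / 2) * (n_fft // 2)).astype(int)

    feats = np.stack(
        [power[:, bins[m]:max(bins[m + 1], bins[m] + 1)].mean(axis=1)
         for m in range(n_mels)], axis=1)
    return np.log(feats + 1e-10)                          # (T, n_mels)

one_second = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
features = log_mel(one_second)
print(features.shape)   # time frames x mel bands
```

Pre-processing like this is what turns raw audio into the small, fixed-size input the neural network actually classifies.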
The biggest part of the job, out of the five steps, is in the first, second, and third steps. But nevertheless, with the tools and all of this, we are proposing a complete ecosystem in order to develop, properly, a neural network on the STM32.