Hi and welcome. My name is Rafael Taubinger, and I'm going to show you how a machine learning application can be created from a starter project in Imagimob AI to a final embedded application in IAR Embedded Workbench for Arm, and deployed to the SensorTile.box multi-sensor development kit from ST.

First, Imagimob, a partner of IAR Systems, has Imagimob AI, a development platform for machine learning on edge devices. It allows users to go from data collection to deployment on an edge device in minutes. Imagimob AI helps users convert machine learning models to optimized C code at the click of a button, which of course saves months of development time. Imagimob AI integrates with IAR Embedded Workbench and allows you to add TensorFlow models to your project and convert them to C source code that can be optimized with the IAR toolchain for the fastest and most compact output of the final application. More details about Imagimob AI, including supported TensorFlow layers and activation functions, can be found at the link displayed on the screen or in the description of the video. Imagimob Studio, the tool, covers the entire machine learning workflow optimized for embedded devices, with applications for audio, gesture recognition, signal classification, fall detection and material detection, as displayed on the screen.

In this video we will also make use of the SensorTile.box multi-sensor development kit from ST to run the machine learning model and process the sensor data in real time. More details about the board can be found at the link displayed on the screen, and again in the description of the video. The SensorTile.box is a ready-to-use kit with a wireless IoT and wearable sensor platform to help you use and develop applications based on remote motion and environmental sensor data. The board fits in a small plastic box with a long-life rechargeable battery for the ultra-low-power Arm Cortex-M4 microcontroller and includes many high-precision sensors, as displayed on the screen: a temperature sensor, an inertial measurement unit, an accelerometer, a magnetometer, an altimeter/pressure sensor, a microphone, and finally a humidity sensor. There is also a programming and debugging interface that connects smoothly to the IAR I-jet debug probe, and an adapter is included to make that all easy.

So now it's time for the demo; let me show you how this all comes together. I have Imagimob Studio here, which I have already downloaded and installed; you can get it from the links I mentioned before. The first time you start it, you get some tool tips to make it easier to know how to create a project and so on. I will just move forward here. The first thing I need to do is create a new project, and we will be using a starter project. As you can see, there are a few ready-made starter projects, from human activity recognition to radar gesture recognition. Then there is the keyword spotter, which is the one we will actually use here: we want the device at the edge to identify two commands, "up" and "down". And finally there is also an indoor/outdoor detector. So what I need to do here is just select the starter project. It will ask for a name, and let's name it just "AI demo". I also want to save it in a specific folder, which I have already done. It's also saying that it will download some additional content if I confirm here.
So from here I now have the starter project. First we have the data, and that's where you can collect additional data; I will mainly use the starter project the way it's provided. If I just click on one group here and play it, you can see how it works: it's recognizing the keyword "down", said in different ways, maybe with some noise and so on. That's one example, and if I look at the "up" command it's pretty similar, again said in different ways, in different scenarios, rooms, and so on. So this is where you do the whole step of collecting data. Then of course you can do some additional tuning here in the settings: there is some preprocessor information and some information related to training.

What I want to focus on here, though, is the model. If I click on the model, we see some additional properties we can change about the network, the evaluation procedures and so on, but what I really want to focus on is the C code generation for the edge device. Everything will be saved in this output directory, "Edge", using the predefined file prefix. Once I build the model, we will see that it generates all the information, mainly the .c and .h files, plus some resources that are required. If we look on the left side, we can see the two files, model_11.c and model_11.h, and these are the files that will now be used in IAR Embedded Workbench.

So if I move from here to IAR Embedded Workbench: I have my model, I have trained it and maybe optimized it as needed, so I can go to IAR Embedded Workbench. I am using a ready-made project here, but of course you could use STM32CubeMX to generate a new project with some additional settings; you can generate a project that is compatible with IAR Embedded Workbench, and if you use the board selector you will already get some examples you can use. Another option is to go to the ST website, st.com, and look for the SensorTile board; there are resources available there as well, and if I scroll down there are some out-of-the-box projects you can use.

Good, so let me move back to Embedded Workbench. There is a very good integration between Imagimob Studio and IAR Embedded Workbench: by using the .h5 model file, you can make sure that the .c and .h files are always up to date and regenerated directly from Embedded Workbench. The way this works is that if we look inside Options, Custom Build, it can recognize the .h5 file from the model; on the command line there is the Imagimob command line utility with some paths and so on, and finally the output files, .c and .h. So from here, what I need to do is add the .h5 file to the project. Since I know where it's located, that makes it easier; I look for the model, find the file, and when I open it you will see that it's added to the project, so we have model_11.h5. Every time I do a build on this project, we will see what happens: I will just run Make first, and the files should be regenerated on every build. This should be very straightforward and easy.

While the project is building, I can have a look at the generated files themselves. There are mainly three functions that need to be invoked: IMAI_init, IMAI_enqueue and IMAI_dequeue. These are the functions from the model, and their use is very straightforward. If we have a look inside main, we make the call to the init function right at the beginning, right after the hardware init. Then, if I go directly to the audio process function, what happens is that this function is called every millisecond, whenever data is available, and some processing happens there: we mainly have the enqueue first, and after that there is a dequeue. So this is all you need to do: have a routine that takes care of this.
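To make that call pattern concrete, here is a minimal sketch, assuming the generated header is model_11.h and that the generated API follows the init/enqueue/dequeue shape named above. The function names app_start, hardware_init and audio_process, the score buffer size and the return-code check are illustrative assumptions, not the demo project's actual code; the real signatures, data types and constants come from the generated model_11.h in your own project.

```c
/* Minimal sketch of the call pattern described above -- not the exact code
 * from the demo project. Assumptions: the generated header is model_11.h,
 * IMAI_enqueue() takes a pointer to the newest audio data, and
 * IMAI_dequeue() fills a score buffer and returns 0 when a new prediction
 * is ready. Check the generated model_11.h for the real signatures,
 * buffer sizes and return codes. */
#include "model_11.h"

extern void hardware_init(void);   /* project-specific board/audio setup (assumed name) */

static float scores[3];            /* e.g. "up", "down" and background; size is model specific */

void app_start(void)
{
    hardware_init();               /* set up clocks, microphone, UART, ... */
    IMAI_init();                   /* initialize the generated model once, right after hardware init */
}

/* Called roughly every millisecond, whenever new audio data is available. */
void audio_process(const float *samples)
{
    IMAI_enqueue(samples);             /* feed the new data into the model */

    if (IMAI_dequeue(scores) == 0)     /* 0: a new classification is available */
    {
        /* scores[] now holds the confidence per keyword; act on it here,
         * e.g. print "up"/"down" over the terminal or send it via BLE. */
    }
}
```

The important point is simply the order: call init once at startup, then enqueue every time new data arrives and dequeue to check whether a fresh prediction is available.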
Good, now I have my project fully built. We don't have time to go into much detail here, but what I can tell you, and you can probably reproduce this yourself using the IAR compiler, is that if we look at the code size, without any optimizations you need at least 34 to 36 KB more; the IAR compiler is of course very powerful. You might be wondering why the read-only data is a bit bigger here, but that's because the model is not fully trained, or not that optimized, I would say.

Good, now that I have everything in place, I can also connect to the target. What I have here is an I-jet from IAR connected; if I go to the options we will see I-jet, and the interface we are using is SWD. There is an adapter provided, as you can see in the picture. Once you have that in place, you can just download to the target and do the deployment, and from there we land at main, with all the debugging capabilities available: you can look inside memory, you have access to the registers, so it's all there. So as you can see: from Imagimob Studio, generating C code, building it, then connecting to the target, and you can even do some additional debugging here to make sure the application works fine.

If I want to show how it works in practice, I will just connect the board correctly, and if I start the terminal here we can see what's coming in; it's already printing some information. The way it works is that you need to connect with the ST BLE Sensor app from your smartphone, and once you connect, the microphone starts to be read, and that's what's happening now: up, down, up, down. As you can see, it's recognizing the commands I'm giving, "up" or "down", and this is how it works. Perfect, this is our first machine learning application running directly on the edge.

Great, so what are you waiting for? Just try out your first machine learning application based on TensorFlow models, with the complete flow from Imagimob AI, IAR Embedded Workbench for Arm, and a compatible development kit like the ST SensorTile.box multi-sensor board. Thank you so much for your time, and I hope you enjoyed this video.