Hello everyone, I'm Danilo Pau. I work at STMicroelectronics in Agrate. I am going to present STM32CubeMX.AI, which is a client-server tool developed within STMicroelectronics. What I particularly like about it is the automatic transformation of a pre-trained neural network into an optimized library, with memory optimizations and a reduced computational cost, so that the network can run on tiny STM32 MCU platforms. This is very convenient when something heavy has to fit into a small device, and the tool has been designed with simplicity in mind. It is also good that it supports the most popular deep learning frameworks: Lasagne, Keras and TensorFlow, CNTK, as well as Theano and, for sure, Caffe. Typical starting applications are, for example, audio scene classification and human activity recognition. The supported platforms include the STM32 Nucleo F4 and F7, as well as the SensorTile L4 core, with a variety of clock frequencies, flash and dynamic memory sizes.

So the demonstration I'm going to show is based on audio processing on the SensorTile L4. In particular, let's start from the client web page, which is the following one. I recall that the tool is called STM32CubeMX.AI. The first thing it shows is which interfaces to popular deep learning tools are available. In this case, the audio processing network has been developed with Lasagne, so we pick that choice, enter the name of the network, in this case "audio scene classification", and then select the model. We assume the model is pre-computed, so I select the file, which is typically provided by the customer, and upload it. This means the client sends all those options to the server, which will shortly show various pieces of information. First of all, a picture showing the pipeline with the different stages of the network: for example, 2D convolution, non-linearities, a dense layer and the final stages.
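To make the stage names concrete, here is a minimal sketch, in plain Python, of what a "2D convolution, non-linearity, dense layer" pipeline computes. The shapes, kernel and weights below are made-up illustrations, not the demo model's real parameters, and a production library would of course use optimized C, not Python lists.

```python
# Minimal sketch of the pipeline stages the tool visualises: a 2D
# convolution, a non-linearity (ReLU) and a dense layer, in plain Python.
# All shapes and weights here are illustrative, not the demo model's.

def conv2d(x, k):
    """Valid 2D convolution (really cross-correlation, as in deep learning)."""
    h, w = len(x), len(x[0])
    kh, kw = len(k), len(k[0])
    return [[sum(x[i + di][j + dj] * k[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def relu(x):
    """Element-wise non-linearity: clamp negatives to zero."""
    return [[max(0.0, v) for v in row] for row in x]

def dense(v, w, b):
    """Fully connected layer: one dot product plus bias per output."""
    return [sum(vi * wi for vi, wi in zip(v, row)) + bi
            for row, bi in zip(w, b)]

# Toy 4x4 "spectrogram" patch through conv -> ReLU -> flatten -> dense.
x = [[0.0, 1.0, 0.0, 1.0],
     [1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0],
     [1.0, 0.0, 1.0, 0.0]]
k = [[1.0, -1.0],
     [-1.0, 1.0]]
feat = relu(conv2d(x, k))                 # 3x3 feature map
flat = [v for row in feat for v in row]   # flatten before the dense stage
scores = dense(flat, [[0.1] * 9, [-0.1] * 9], [0.0, 0.0])
print(scores)                             # two illustrative class scores
```

The final stages mentioned in the demo (e.g. a softmax over the dense outputs) would turn these raw scores into class probabilities.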
Secondly, the tool also shows the computational estimation, that is, the complexity of the network: for example, the number of multiply-accumulate operations, since this is directly linked to the complexity of the network. Then the memory requirements in terms of RAM and ROM. Now we need to choose the platform onto which we would like to map the neural network. In this case we are going to select the SensorTile, which has an 80 MHz clock, 1 MB of flash and 131 KB of RAM. Then a range of compilers is available, the ones that the STM32 supports; in this case our choice is Keil. We will also run a validation step, which checks that the converted network is as accurate as the one the customer submitted.

We update those options, they are sent to the server, which processes them, and now the server confirms that the memory used by the neural network library fits into the available SensorTile resources, and that the validation has passed successfully, since the average error between the submitted model and the generated library is equal to zero. Now we are ready to download the library, which goes through a number of steps: for example, building the library with the compiler that, in this case, has been chosen as Keil. In a short while the library will be downloaded, which means that the customer gets access to the converted, optimized neural network. A number of optimizations are needed because the STM32 has a limited clock frequency and limited RAM, so we need to fit the network into limited resources, also for low-power reasons. The download is happening and, as you can see in the folder, we get a zip file that will be unzipped. Inside, for example in Lib, you will see the models built for the STM32 with the Keil compiler, as well as for GCC, because the customer may also need to run some validation on his own PC. Now these files have to be copied into the application project, which is this one. So let me copy the libraries.
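To give a feel for the two figures the server reports, here is a back-of-the-envelope sketch, in plain Python, of how a multiply-accumulate (MAC) count and the average-error validation metric can be computed. The layer shapes, the float32 assumption and the zero-error example values are illustrative assumptions, not the demo network's actual numbers or the tool's real formulas.

```python
# Illustrative complexity and validation arithmetic; all figures are
# assumptions, not the demo network's real numbers.

def conv2d_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
    # One multiply-accumulate per kernel tap, per input channel,
    # per output element.
    return out_h * out_w * out_ch * k_h * k_w * in_ch

def dense_macs(in_features, out_features):
    # One MAC per weight in the fully connected layer.
    return in_features * out_features

def weight_rom_bytes(n_params, bytes_per_weight=4):
    # Weights are constants, so they live in flash (ROM);
    # float32 weights take 4 bytes each.
    return n_params * bytes_per_weight

# Hypothetical network: one 3x3 conv (1 -> 16 channels, 30x30 output map)
# followed by a 10-way dense classifier.
macs = conv2d_macs(30, 30, 16, 3, 3, 1) + dense_macs(30 * 30 * 16, 10)
print(macs)  # MAC count, used as a complexity proxy

# The validation criterion: average error between the reference model's
# outputs and the generated library's outputs; 0.0 means they match.
ref = [0.1, 0.7, 0.2]
gen = [0.1, 0.7, 0.2]
avg_err = sum(abs(r - g) for r, g in zip(ref, gen)) / len(ref)
print(avg_err)
```

Numbers like these are why the platform choice matters: the MAC count has to fit the 80 MHz clock budget, and the weight ROM and activation RAM have to fit the flash and SRAM limits the tool checks against.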
These are the typical steps that the customer would need to reproduce in order to integrate the generated library into his own project. Then let me open the project itself and compile it (F7). In this part of the picture you will see... so, the compilation went perfectly. Now let's install the library on the SensorTile. The flashing is almost complete. OK, it has been done. Now I can run the library. The library is running, so let me play an audio file, for example outdoor noise. Let me detach the SensorTile from the board and reboot it. Why is it flashing two times? Because it is recognizing the outdoor noise: the neural network, at this moment, is recognizing that there is noise due to the ambience, and it flashes two times to notify that.

This concludes the demonstration of audio scene classification through the STM32CubeMX.AI tool chain, which supports any STM32 MCU platform. I'm very happy that you followed the whole presentation. Please contact me if you need any help or any further explanation. Thanks for your time.
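The LED notification at the end of the demo amounts to a class-to-blink-count table. Only the "two flashes means outdoor noise" pair comes from the talk; the other class names and counts below are made-up placeholders, and the real firmware is of course C on the MCU, not Python.

```python
# Hypothetical class-to-LED-flash mapping; only "outdoor" -> 2 is stated
# in the demo, the other entries are illustrative placeholders.
BLINKS = {"indoor": 1, "outdoor": 2, "in-vehicle": 3}

def blinks_for(label):
    # Unrecognised labels produce no flashes.
    return BLINKS.get(label, 0)

print(blinks_for("outdoor"))  # 2
```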