Okay, so that was a quick introduction to position Cube.AI, and now I will let Guillaume go a little more in depth on the technical side. So today we're going to talk about a function pack for AI called FP-AI-SENSING1. There are several function packs available here at ST to get you started building applications: there's an IoT cloud function pack, one for predictive maintenance, one for LoRa. But today we'll be focusing on the function pack called FP-AI-SENSING1.

This function pack has two example neural network implementations in it. The first one is acoustic scene classification: it listens to ambient noise and tells you whether that noise is an indoor noise, an outdoor noise, or an in-vehicle noise. The second one is human activity recognition: it looks at accelerometer data and tells you which type of activity you're doing, whether you're running, walking, jogging, and so on.

It's supported on several of our boards. You can use the Nucleo platform with the shields for the sensors, the SensorTile, the newly announced SensorTile.box, or the IoT node board that we'll be using today. The SensorTile is good for small-form-factor implementations on battery-powered devices, and the IoT node board is great for single-board development, with the embedded ST-LINK programmer and the UART connection.

On the software side, the ST sensors are all tied to the hardware using ST libraries and third-party libraries. We have a dedicated ST AI library for the neural networks and the audio pre-processing, and on top of that sits the user application that ties everything together.

Other features implemented in this function pack are BLE connectivity, which is used to configure the hardware and stream the neural network output to a screen so you can view the results; a low-power implementation using FreeRTOS and the STM32 advanced stop modes; and a FOTA feature for over-the-air firmware updates and neural network updates.
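An audio classifier like the acoustic scene classification example doesn't feed raw samples to the network; the audio pre-processing stage first turns each frame into spectral features. As a rough illustration of that kind of front end, here is a minimal numpy sketch of a log-mel spectrogram. All parameters (16 kHz sample rate, 512-point FFT, 30 mel bands) are illustrative assumptions, not the function pack's actual settings, and the real firmware uses an optimized ST audio library rather than numpy.

```python
# Sketch of a log-mel audio front end: FFT -> mel filter bank -> log.
# Parameter values are illustrative, not the function pack's real settings.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):            # rising edge
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):           # falling edge
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=30):
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2   # FFT -> power spectrum
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T   # apply mel filter bank
    return np.log(mel + 1e-10)                          # log compression
```

The resulting (frames x mel-bands) matrix is what the neural network actually classifies.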
And then there's the form-factor implementation shown here. The neural network runs on the STM32L4 microcontroller, connected either to the microphone for the audio input or to the accelerometer for human activity recognition.

For human activity recognition, this was done using an internally collected dataset that we gathered with a SensorTile, storing the samples on the SD card. We then used that data to train a neural network model on a powerful computer, and then used tools like Cube.AI to map it onto an STM32 microcontroller. There's also some embedded pre-processing running, for example to remove gravity from the accelerometer signal.

The other demo is the audio scene classification. We also recorded some data internally, and then it can listen to the audio and tell you whether you're indoor, outdoor, or in a vehicle. The pre-processing we use for this is the log-mel spectrogram: this is just an FFT followed by a mel filter bank, so that the signal looks more similar to what your ear can hear. If you were doing keyword spotting or trigger-word detection, you would use something like MFCCs instead.

Another demo we have is handwritten character recognition. This one is shown outside, where you can write with your finger on an LCD touchscreen: uppercase letters, or digits from 0 to 9. It has a very small footprint, only 26 kilobytes of RAM, for example.

Then there's a more resource-hungry application such as food recognition. This is a computer vision AI application: a camera looks at a dish and recognizes up to 18 different classes of food. The difference with this one is that it has been implemented in fixed-point arithmetic. Instead of using 32-bit floating-point numbers like we were using before, it uses 8-bit integer numbers for the calculations. This reduces your memory footprint by a factor of 4, and it also makes your inference time faster.
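The 8-bit idea just described can be sketched in a few lines. Below is a generic affine (scale plus zero-point) quantizer that stores float32 weights as int8, cutting weight storage by a factor of 4. This scheme is a common illustration of post-training quantization, not the exact scheme used by the food-recognition network.

```python
# Sketch of 8-bit affine quantization: float32 weights stored as int8,
# a 4x reduction in weight memory. Generic scheme, for illustration only.
import numpy as np

def quantize_int8(w):
    # Map the float range [lo, hi] onto the int8 range [-128, 127].
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float values for comparison.
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)
q, scale, zp = quantize_int8(weights)

print(weights.nbytes // q.nbytes)  # prints 4: int8 storage is 4x smaller
```

On top of the memory saving, integer multiply-accumulate operations are cheaper than floating-point ones on a microcontroller, which is why inference also gets faster.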