Presence detection can be used to turn on a light, open a door, or trigger any other personalized system. Such devices typically use a passive infrared (PIR) sensor to trigger an event when movement is detected. The problem is that you don't always want your system triggered every time there is movement. If a cat passes by, or a branch sways because there was some wind, you don't want to wake up the system: you want to be sure that a human is present. Too many false triggers waste resources and computing power, which means a lot of wasted energy.

Hi, I'm Guillaume, and today I'm going to show you how you can use AI to solve common detection problems on low-power, resource-constrained devices.

In version 2 of our function pack for computer vision, we introduce a new application called person presence detection. It is an image classifier that can detect the presence or absence of a person in an image. FP-AI-VISION1 is a great way to jump-start your STM32 project: it includes starter code to run computer vision applications on the STM32H7 Discovery board.

So let's jump right in. Our demo runs on the STM32H747 Discovery kit, and image capture is performed by a camera connected directly to the STM32 through the DCMI interface. For even lower-power operation, the demo is also available on demand for the STM32L4R9 Discovery kit.

The model for this application was taken directly from the TensorFlow Lite for Microcontrollers person detection example. The original .tflite file was imported into STM32Cube.AI to generate optimized C code for the STM32.

OK, so let's go and see the board in action. When a person is detected by the board, you will see a "person" label displayed on the screen along with its confidence score. When there is no person in the camera's field of view, the board will display a "not person" label along with the "not person" confidence score.
The embedded neural network can differentiate humans from other moving objects such as cars, bikes, or animals; the person does not even need to be moving to be detected. Here, for example, if a bicycle moves in front of the camera, the board says "not person". But when I start walking in front of the camera, the board detects my presence and says "person". For best performance, the video stream and inference results are displayed using the Chrom-ART hardware accelerator inside the STM32H7.

As you can see, Discovery boards are a great way to create and evaluate proofs of concept. You can load the binary onto the board and test the application in a real environment using form-factor hardware.

To create this demo, we downloaded a pre-trained model from the web. It is a pre-trained MobileNet V1 that was imported into STM32Cube.AI to generate optimized C code. The tool gives you a quick overview of the model's memory footprint, complexity, and parameters. In the embedded world, this kind of information is crucial for deciding whether a given model can run on a given STM32.

Inside the tool, you can see different tabs: the model topology of the imported file and the generated C code, and the SRAM memory usage layer by layer with the so-called activation buffer. You can also find additional generation options such as the allocate-inputs and split-weights options. The allocate-inputs option lets you reuse part of the activation buffer for the input buffer, without wasting valuable memory on a separate input buffer. The split-weights option gives you finer-grained memory placement of the different layers' weights and parameters.

Here, we can see in the neural network information view that the weights occupy around 200 kilobytes of flash, and that the model requires only 50 kilobytes of SRAM to run inference.
Once optimized for STM32, the model runs with an inference time of 37 milliseconds, or in other words up to 26 frames per second, on the STM32H7's Arm Cortex-M7 core running at 400 MHz. This is made possible because the STM32Cube.AI tool generates optimized code that fits in the MCU's internal memory.

For applications with a very tight power budget, the STM32L4 family of MCUs is a great choice. Running AI algorithms with a low inference time lets you spend less time computing results and more time in Stop or Standby mode. Person detection is a key AI component that will help reduce power consumption in many applications that would otherwise rely on PIR sensors. For example, you could place an L4 MCU in each lamppost to turn on the light when a person walks by, but not when it's an animal.

The STM32H7 lets you run even more complex solutions. For example, you can use the person presence detection algorithm to wake up a more complex pipeline: first use the camera to detect whether a person is present, then run face recognition only when a person is detected.

The FP-AI-VISION1 function pack also demonstrates other use cases, such as the food classification example. We've designed the firmware so that you can easily port it to your own use case: you simply regenerate a new set of network.c files using STM32Cube.AI. The tool can import TensorFlow Lite, Keras, or PyTorch files and generate optimized, memory-efficient code for the STM32.

FP-AI-VISION1 is already available and free to download on st.com. To learn more about our AI solutions for microcontrollers, please visit our website.