Hello to all the members of the Arduino community. I'm Federico Iaccarino, Product Marketing for the MEMS Sensors Ecosystem at STMicroelectronics. With this tutorial, I would like to answer a question that a lot of you have been asking for a while: how can you save battery power and MCU load in your system for scenario detection applications without compromising performance? Well, with the LSM6DSOX, our new inertial sensor with embedded AI, you can easily do it, and I'll explain how in just a few steps.

What's the best way for the Arduino community to get to know the LSM6DSOX and its functionalities? This is the Arduino Nano RP2040, an IoT board that offers onboard acceleration and rotation sensing thanks to this inertial module. Embedded in such a small package, you will also find an AI-at-the-edge core, called the Machine Learning Core, which can help you achieve some impressive results in scenario recognition.

Before showing you how this AI can be used, and I promise it won't be hard at all, even if you don't have any background in AI, let me briefly introduce what the Machine Learning Core is. The MLC (Machine Learning Core) in the LSM6DSOX is an in-sensor engine that provides dedicated hardware to implement a classification-based AI algorithm called a Decision Tree. In a traditional approach, the microcontroller unit directly receives the continuous flow of sensor data and allocates precious computing power to resolving the decision model. By using specialized hardware built close to the sensor outputs, you can completely offload your scenario detection model from the MCU. This yields great benefits for your system's overall performance. The specialized MLC hardware also runs the algorithm very fast, and at a fraction of the power consumption an MCU would require.
As I have just hinted, among the many machine learning models traditionally in use, the MLC found in ST MEMS sensors implements a supervised, classification-based AI algorithm called a Decision Tree. Let's explain what that means. The MLC is supervised because you will need to train it to generate the Decision Tree model. But, as you will soon see, the process is greatly automated and simplified thanks to our Unico GUI tool. With its highly optimized hardware, the MLC implements the Decision Tree model very efficiently.

But what is a Decision Tree model? If you are not familiar with the concept, let's look at a simple example to clarify it. Let's start from a real-estate problem: assuming we want to sell a house, will our customer be interested in buying it or not? And let's introduce three words that we will use frequently in this tutorial: the problem, i.e. understanding whether the customer will buy the house; the features, i.e. the inputs to the system, which we will use to take our decision; and the output of the algorithm, buy or don't buy.

On the right, you can see an example of a Decision Tree built to solve this problem. We have a first decision node where we check whether the customer's salary, one of the features, is above a given threshold. If it is above the threshold of $100k per year, we may think that the customer will also be interested in many rooms, another feature, so we have another decision node where we check another threshold. We proceed in this way until we reach a so-called leaf node, where we can take the final decision: the customer is going to buy the house, or not. In this structure you can see the key elements of a Decision Tree: the features, the decision nodes, the thresholds and the leaf nodes. Do you think that we are using a traditional if-then-else approach and not artificial intelligence?
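To make the example concrete, the house-buying tree can be written out as plain nested conditionals. This is a minimal hand-built sketch, not a tree generated by the MLC; the $100k salary threshold comes from the example, while the room thresholds are hypothetical values added for illustration.

```cpp
// Hand-built Decision Tree for the real-estate example.
// Features: yearly salary (USD) and number of rooms.
// The $100k threshold is from the example; the room thresholds
// are hypothetical, added only to complete the illustration.
bool willBuy(double salaryUsd, int rooms) {
    if (salaryUsd > 100000.0) {   // first decision node: salary feature
        return rooms >= 4;        // second decision node: rooms feature
    }
    return rooms <= 2;            // leaf path: smaller house fits a smaller budget
}
```

Each `if` is a decision node with its threshold, and each `return` is a leaf node carrying the final class. The point of machine learning here is that these thresholds and their ordering are learned from data rather than written by hand.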
Well, the key point here is that AI can be used to generate this tree. Here, for the sake of example, I built the Decision Tree myself, deciding a priori which features are reasonable, in which order to check them and which thresholds to use. But in the machine learning world, as we said before, we are going to generate this Decision Tree automatically, starting from the data.

So, you should now understand what a Decision Tree is and that the MLC in our sensors can run this type of model. Let's assume that you want to start developing a scenario detection application fully embedded in an ST sensor thanks to the MLC. What are the steps you should take? First, you need to acquire the sensor data that describes the scenarios you would like to identify, saving at least one log per scenario. Then, you need to label the data according to the log content. This means that data acquired while walking should be labeled as walking, and so on. Using the labeled data, it is possible to generate a Decision Tree with the tool provided by ST. The Decision Tree will take the decision on which scenario is identified. Once the Decision Tree is ready, it can be converted into the sensor's language and uploaded into its MLC. Finally, it is possible to appreciate the results of the MLC outcome.

And here we are in the Arduino IDE. I've already opened the first sketch, called RP2040 Data Logger. Before proceeding with the code description, let me highlight the prerequisites needed to compile the sketch. First, you need to install the board files for the Nano boards. You will find them in the Boards Manager as Arduino Mbed OS Nano Boards. Click on Install and wait for the process to complete. You also need to download two libraries through the Manage Libraries function: WiFiNINA and STM32duino LSM6DSOX. I've already installed all the prerequisites on my PC, so I can proceed. As you can see, we include the libraries that we will use here, at lines 14-17.
Here, at lines 19-25, we find the sensor settings. These are the standard settings that you will find in the sketch, used to power up the sensor and select the working full scales. The rest of the code is quite long to comment on, but if we jump to line 160, we can see the core of the sketch. The RP2040 is programmed as follows: if the user does not double-tap the board within 10 seconds of power-up, the board goes into Mass Storage mode, enabling the download of the flash content via USB. However, if the user double-taps within the first 10 seconds, the green LED turns on and the board goes into Acquisition mode. The board is now ready to acquire the sensor data and just waits for a second double tap to start logging. While logging, the green LED blinks. Another double-tap sequence stops the acquisition.

Now you are ready to upload the sketch to your board. Verify the code by clicking the check button and, once completed, select the correct COM port that points to your RP2040 before clicking Upload. Now the board is fully programmed and ready to log. To start logging, connect the USB cable and double-tap the board. If the green LED turns on, the board is in Acquisition mode. Double-tapping again starts the data logging and, when you have acquired a sufficient amount of data, you can double-tap again to stop the acquisition.

Let's acquire the dataset for the three scenarios that we would like to identify with the MLC: idle, medium intensity movement and high intensity movement. And let's keep the acquisition time at 5 seconds for every scenario; it's more than enough to train the decision tree. It is now time to take all the data from the board's flash and transfer it to the PC for the second step. With sketch number 1 still uploaded on the RP2040, we can reset the board, wait for the blue LED to come on, and then the PC will recognize the board's flash memory as mass storage.
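The mode logic just described can be summarized as a small state machine. This is a simplified model, not the actual sketch code: the type and function names are my own, and in the real sketch the events come from the LSM6DSOX double-tap interrupt and a 10-second power-up timer.

```cpp
// Hypothetical model of the data-logger modes described above.
enum class Mode { PowerUp, MassStorage, Acquisition, Logging };

// No double tap within 10 s of power-up: expose the flash over USB.
Mode onTimeout(Mode m) {
    return (m == Mode::PowerUp) ? Mode::MassStorage : m;
}

// Each double tap advances the board through its modes.
Mode onDoubleTap(Mode m) {
    switch (m) {
        case Mode::PowerUp:     return Mode::Acquisition;  // green LED turns on
        case Mode::Acquisition: return Mode::Logging;      // green LED blinks
        case Mode::Logging:     return Mode::Acquisition;  // stop the acquisition
        default:                return m;                  // Mass Storage ignores taps
    }
}
```

Modeling the flow this way makes it clear why the first 10 seconds matter: once the timeout fires, the board is committed to Mass Storage mode until the next reset.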
We will find here the three data logs that we just acquired, with an initial number expressing the order of acquisition. So, file number 0 is the static position dataset, number 1 is the medium intensity movement and number 2 is the high intensity movement. Let's move these files to the desktop and then open Unico GUI to proceed with the tutorial.

We can now open Unico GUI. This software tool lets the user import the data logs and create a decision tree based on their elaboration. Let's start Unico GUI and unselect the "communication with motherboard" option. Then, let's select LSM6DSOX in the iNemo module section. Once the software is initialized, let's click on the MLC button. Let's click Browse and find the log files that we just copied from the RP2040. For every data log that we import, it is necessary to specify the class label. In this case, I will use the name IDLE, since we are importing the data log acquired when the board was still on the table. Let's repeat the sequence with the other two data logs, using the medium intensity and high intensity labels.

Once all the logs have been correctly imported, we can click on the Configuration tab and proceed with the creation of the decision tree and the configuration file itself. Once the tab is selected, we will choose LSM6DSOX as the device, an MLC ODR of 104 Hz as the sensor ODR, and only the accelerometer as input because, for this particular application, data from the gyro is not necessary to identify the class. Please remember to correctly specify the full scale and the output data rate that were used during the acquisition. In this case, the full scale was 2 g and the ODR was 104 Hz. Proceed with the configuration by selecting 1 decision tree and a window length of 52 samples. This parameter can vary from 1 to 255 and specifies the number of samples the MLC elaborates every time it has to identify the class. Let's skip the filter configuration and go to the feature selection.
Here it is possible to select which features will be used to generate the decision tree. One way to proceed is to select as many features as we want and let the decision tree generation algorithm pick the most relevant ones. Unfortunately, this can also cause a common problem called overfitting, which will worsen the MLC behavior. The suggested way to proceed is instead based on data analysis. I plot the data that I acquired with the RP2040 and, with a simple visual comparison, we can see how the variance of the accelerometer signal changes across the three working modes. We will use this feature to detect the classes, so I will select it in the menu and click Next.

Now we need to save the ARFF file, which contains the extracted features, to any path. Let's save it and then click Next. Now we can decide which numeric value, the output of the MLC, will describe each class. By default, we will have value 0 for idle, value 4 for medium intensity and value 8 for high intensity. Let's change the value for idle from 0 to 1; it will be useful when programming sketch number 2, so keep note of this decision. It's now time to generate the decision tree. Leave all the parameters at their default values and click Generate. The decision tree will be generated. Click Next. Skip the Metaclassifier settings and, finally, we can generate the UCF file, which contains the sensor commands to be sent. Select a target file and click Next.

Once the UCF file is generated, we need to convert it into C code. Close the MLC tab and go to the Options tab in the main window. Here you will find the C code generation tool. Click on it, select the UCF file that you just generated, and you will obtain a .h file that we will use in sketch number 2. With the configuration file in our hands, we just need to send it to our sensor so that the decision tree will be up and running.
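Since variance is the only feature we selected, it may help to see what that feature amounts to over one 52-sample window. A minimal sketch, assuming the accelerometer samples of one axis are already available as floats; the actual computation happens inside the sensor's MLC hardware, not on the MCU.

```cpp
#include <vector>

// Variance of one accelerometer axis over an MLC window.
// With our configuration (52 samples at 104 Hz), each window
// covers half a second of motion.
float windowVariance(const std::vector<float>& samples) {
    if (samples.empty()) return 0.0f;
    float mean = 0.0f;
    for (float s : samples) mean += s;
    mean /= samples.size();
    float var = 0.0f;
    for (float s : samples) var += (s - mean) * (s - mean);
    return var / samples.size();
}
```

A board at rest produces a near-zero variance, while shaking produces a large one, which is why a simple threshold-based Decision Tree on this single feature can separate the three classes.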
We also want to visually check the outcome of the classifier, so we will tie the decision tree result to the RGB LED that we have on board, identifying the scenarios with different colors. Let's have a look at sketch number 2. This sketch is much easier to describe, since it basically loads the configuration file into the sensor and programs the board so that a particular LED color shows up according to the MLC output. As you can see at line 7, we include the .h file generated by Unico. Remember to place the .h file in the same folder as the .ino file before proceeding with the verification. At line 44, we recall this .h file and upload it into the sensor; it is a series of write commands. At line 98, we also find how the sketch uses the MLC output. As you can see, there is a switch-case structure that handles the MLC output and assigns an LED color to every output value. In this case, the sketch is pre-programmed to handle the output values 1, 4 and 8. If you remember what we did in Unico, we selected 1 as the output for the static scenario, 4 for the medium intensity scenario, and 8 for the high intensity motion scenario. Here you can also see that value 1 is associated with the blue LED, value 4 with the green LED, and value 8 with the red LED. Verify and upload this sketch for the final trial.

Now the board is programmed with the test sketch. As you can see, in the idle position we see the blue LED. If I start to slightly move the board, the LED turns green, and if I shake it, it becomes red. The decision of which LED should be turned on is taken entirely by the sensor's MLC. The MCU is doing absolutely nothing but checking the sensor interrupt for a new MLC value. Once the interrupt is received, the MCU checks the value and turns on the correct LED. Obviously, you can adapt this example to any of the applications that you have in mind. This is just one of many scenario identification applications that you can develop. But now, coming back to the question that you all asked.
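The mapping the sketch applies can be summarized as a small lookup function. This is an illustrative sketch only: the function name and the string return value are mine, and the real sketch drives the board's RGB LED pins instead of returning a color name.

```cpp
#include <string>

// Map the MLC output value to an LED color, following the class
// values chosen in Unico: 1 = idle, 4 = medium, 8 = high intensity.
std::string ledForMlcOutput(int mlcValue) {
    switch (mlcValue) {
        case 1:  return "blue";   // static scenario
        case 4:  return "green";  // medium intensity movement
        case 8:  return "red";    // high intensity movement
        default: return "off";    // unexpected value: keep the LED off
    }
}
```

Changing the class values back in Unico would only require updating the case labels here, which is why it was worth keeping note of the 0-to-1 change we made for the idle class.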
How much energy can we save with this feature? What's the consumption of the system in this configuration? Well, with an optimized power configuration of the accelerometer (low-power mode, 52 Hz ODR), we can reach a consumption of 25 microamperes, to which we should add 7 microamperes for the MLC. This means that with just 32 microamperes and no computational payload on the MCU, you get a fully working scenario identification application. Quite impressive, isn't it? With a few microamperes, you can enable high-end scenario recognition routines that help your application switch from hibernation to wake-up status and save battery when full operation is not necessary.

So, as a takeaway, I invite you to check our GitHub page for more MLC examples ready to be imported into your application or tested with your RP2040. If you have any questions, you can reach us on our community webpage, where you can share your ideas and look for help with your application. If you need help programming your RP2040 or have any questions related to the Arduino world, you can visit the Arduino Forum and the RP2040 subsection. Thanks for watching this tutorial, and I hope to see you again for more tutorials on MEMS sensors.