I would like to take this opportunity to introduce one of the most important industry trends, one that represents a huge opportunity for everybody on this call: leveraging smart sensors with machine learning capabilities to optimize IoT applications. First of all, I wanted to start with a quick overview of the smart system challenges that we face. The IoT market has been empowered by combining the Internet with its ecosystem of cloud computing. In traditional IoT applications leveraging cloud computing, data is acquired at the node or gateway and then stored and processed directly in the cloud. This centralized approach is very common and has been working well for many applications, but in today's market, several companies are noticing that this architecture requires huge amounts of data to be sent to the cloud and then stored and processed in those infrastructures. At the same time, there are time-sensitive applications where the constraints of this centralized processing approach are becoming even more evident. Network availability, bandwidth limitations, and privacy and security constraints are limiting factors affecting cloud-based products. And of course, low power consumption is one of the main characteristics of any IoT node, so optimizing that portion is extremely important. Our response to these challenges, and how we want to tackle this situation, is in line with the latest industry trends: provide distributed intelligence so that part of the application can be processed locally. With edge computing, data acquisition, storage, processing, and decision-making are done at the node level; only a subset of that data is processed at the gateway level; and, last but not least, data storage, training, and complex processing are done in the cloud. 
So you are really optimizing and taking the best of each piece of hardware involved in these ecosystems. And from an ecosystem perspective, ST can support companies on two fronts for AI at the edge. The first is that you can now map and run pre-trained artificial neural networks, or ANNs, across the broad STM32 microcontroller portfolio thanks to the STM32Cube.AI expansion pack, which enables AI on STM32 Cortex-M-based microcontrollers. At the same time, we also offer advanced MEMS sensors, which are the key topic of this presentation, part of our X-Factor family, that contain digital functions optimized to run machine learning algorithms and allow applications to distribute data processing between the IMU and the host processor. This is exactly in line with our vision of allowing complex applications to benefit from increased efficiency at the system level, leveraging machine learning techniques to provide high-performance processing directly at the edge, on the sensor node. And if we think about a typical application use case, we are allowing our customers to transition from the traditional approach, which has been working very well, by the way, of collecting raw sensor data and then running all the necessary processing inside the host microcontroller. As you can imagine, that is not an optimized implementation when it comes to system power consumption. So here is our response, and how we enable a better working principle. The machine learning core functionality of our latest sensors allows not only capturing sensor data, but also running a machine learning algorithm directly inside the sensor, in this case a six-axis IMU, an accelerometer and gyroscope combo. At the system level, this plays a key role by allowing the host microcontroller to focus only on the high-level processing that is triggered by the machine learning model implemented inside the sensor. 
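To put a rough number on that offloading benefit, here is a back-of-the-envelope sketch in Python. All of the figures in it (output data rate, event rate, result size) are illustrative assumptions for the sake of the comparison, not datasheet values:

```python
# Illustrative comparison of host-side data traffic; every number here is
# an assumption chosen for the sketch, not a datasheet specification.
ODR_HZ = 26                # assumed IMU output data rate in Hz
BYTES_PER_SAMPLE = 12      # 6 axes x 16-bit values

# Traditional approach: the host ingests every raw sample.
raw_bytes_per_hour = ODR_HZ * BYTES_PER_SAMPLE * 3600

# In-sensor classification: the host wakes only on a classification change
# and reads a small result; assume a few events per minute, 1 byte each.
EVENTS_PER_MINUTE = 5
mlc_bytes_per_hour = EVENTS_PER_MINUTE * 60 * 1

reduction = raw_bytes_per_hour / mlc_bytes_per_hour
print(raw_bytes_per_hour, mlc_bytes_per_hour, round(reduction))
```

Even with these modest assumptions, the host handles three orders of magnitude less data, which is where the system-level power saving comes from: the microcontroller can stay in a low-power state between classification events.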
And in short, what this really means is higher computational power at the sensor level and reduced system power consumption overall. Additionally, it is important to highlight that this architecture allows sensor nodes to be simplified and to rely on more power-efficient, optimized microcontrollers, bringing unprecedented value at the real application level. Now, diving into the machine learning core itself and what it is, I wanted to take a moment and walk through the key building blocks inside our latest sensors. As a starting point, our latest IMUs with the machine learning core benefit not only from sensor data coming from the accelerometer and gyroscope; they also allow the use of external sensors connected to the IMU through an auxiliary I²C bus. From there, the sensor data flows into the computational block of the device, benefiting from filters such as high-pass, band-pass, and second-order programmable digital IIR filters that allow you to customize and improve your data capture. Together with the filters, we also have features that can be selected to better describe the data pattern being captured. In this case, the available features are mean value, variance, energy, peak-to-peak, zero crossing, peak detector, and minimum and maximum values. After applying those filters and features to your data, the device then relies on a set of decision trees that run a machine learning model on that data, and it provides as an output the different results classified by the implemented model. In addition to the decision tree, you can also rely on the meta-classifier feature for even higher reliability when providing the decision tree results to the main host microcontroller. And to illustrate this concept, I would like to walk you through the key steps related to the machine learning core and how to embed a decision tree within our sensors. 
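To make that feature set tangible, here is a small Python sketch that computes the same kinds of window features the transcript lists (mean, variance, energy, peak-to-peak, zero crossing, minimum, maximum) over one window of samples. This is only an offline illustration of the concept; the on-chip implementation uses its own fixed-point arithmetic, and the window length and signal here are made-up examples:

```python
import numpy as np

def mlc_style_features(window):
    """Compute features analogous to the sensor's built-in feature set
    over one window of samples (e.g. the accelerometer norm).
    Illustrative only: the in-sensor arithmetic may differ."""
    w = np.asarray(window, dtype=float)
    return {
        "mean": w.mean(),
        "variance": w.var(),
        "energy": float(np.sum(w ** 2)),
        "peak_to_peak": w.max() - w.min(),
        # count sign changes of the mean-removed signal
        "zero_crossing": int(np.sum(np.diff(np.sign(w - w.mean())) != 0)),
        "min": w.min(),
        "max": w.max(),
    }

# Example: a slow sinusoid standing in for one window of sensor data
t = np.linspace(0, 1, 52)                      # assumed window of 52 samples
features = mlc_style_features(np.sin(2 * np.pi * 2 * t))
```

Picking which of these features to enable, and over which window, is exactly the data-science step described next: different activities separate well on different features, for example variance and peak-to-peak for motion intensity.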
So the first step consists of capturing sensor data for the different conditions you would like to model. Once enough data is captured, the next step is to label the data that represents the different classes to be identified. It is also at this labeling stage that the developer selects the filters and features to be used within the MLC. Once this step is completed, it is time to build your decision tree, which will provide the classification results of the model and the necessary outputs to your host microcontroller. Then, with the model ready, you can program the register settings of the sensor and test the created model. Of course, the last step consists of performing real-time tests, processing incoming sensor data with the algorithm that you created. For each one of these steps, our customers can rely on a series of software tools, such as AlgoBuilder, the Unicleo-GUI, and the Unico graphical user interface, that allow an easy yet effective development process. And just to give you a real example, let us take an activity recognition application as a reference, say, monitoring whether a user is stationary, walking, fast walking, or jogging. You collect multiple data logs for each of those classes, and then, based on data science knowledge, define which features best characterize those conditions. From there, you rely on our tools to generate a machine learning core model that can then be programmed into our sensors. In this case, let's consider the LSM6DSOX, which is our ultra-low-power six-axis IMU, ideal for wearable and IoT applications. Here at the bottom of the slide, I also included the confusion matrix, which is very important to illustrate the performance of this implementation. In this case, as you can see here, the confidence level of identifying the right condition is well over 95%. 
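The labeling, tree-building, and confusion-matrix steps above can be sketched offline in a few lines of Python. This is a minimal illustration using scikit-learn and synthetic feature values (the class names match the activity example, but the feature numbers are invented for the sketch); in the real flow, the ST tools generate the sensor configuration from your logged data rather than a scikit-learn model:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Toy stand-ins for labeled feature windows; in practice these values
# would be computed from data logs captured with the evaluation tools.
def make_class(level, n=100):
    return np.column_stack([
        rng.normal(level, 0.05, n),      # e.g. variance of the accel norm
        rng.normal(level * 4, 0.2, n),   # e.g. peak-to-peak of the accel norm
    ])

labels = ["stationary", "walking", "fast_walking", "jogging"]
X = np.vstack([make_class(v) for v in (0.1, 0.5, 1.0, 1.8)])
y = np.repeat(labels, 100)

# Keep the tree shallow: the in-sensor engine stores compact trees with a
# limited number of nodes, so small depths are the realistic regime.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
cm = confusion_matrix(y, clf.predict(X), labels=labels)
accuracy = cm.trace() / cm.sum()
```

Reading the confusion matrix row by row shows, for each true activity, how often each class was predicted, which is exactly how the over-95% figure on the slide should be interpreted.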
And then, again, you are offloading your host microcontroller and running all of this processing inside your sensor. Finally, just to give you the full picture of the X-Factor six-axis IMU family of devices featuring the machine learning core capability, we have key products targeting different industries. The first device is the one I just mentioned, the LSM6DSOX, which is focused on low-power applications such as wearables, asset trackers, and IoT nodes. The LSM6DSRX is focused on VR and AR navigation systems and high-performance sensor-fusion-based applications. And last but definitely not least, the ISM330DHCX features, on top of the machine learning core capability, an extended temperature range up to 105 degrees Celsius and a 10-year product longevity commitment on ST.com. I also wanted to take this opportunity to highlight the key hardware evaluation tools available for machine learning core evaluation and development. The first one is the STEVAL-MKI109V3, the professional MEMS motherboard. This board can be paired with a DIL24 socket adapter featuring any of the ST sensors to allow a deep level of evaluation through the Unico GUI. The GUI allows the user to explore all the steps of the machine learning core process and enables a deep level of customization by letting the user configure each and every register setting of the device. At the same time, if you want to work on a form-factor evaluation, you can rely on the SensorTile.box platform, which includes all the latest sensors from ST, and in particular the LSM6DSOX. The SensorTile.box is not only a very flexible platform, allowing data logging directly to the microSD card or through the ST BLE Sensor app, but it is also fully compatible with our AlgoBuilder framework and relies on the Unicleo-GUI for data visualization on a PC, if you want. 
If you are interested in a hands-on experience and evaluation of the machine learning core, we also have a series of videos and webinars available that walk you through, step by step, how to get started with the machine learning core technology.