Yes, I want to introduce myself. My name is Marcus Meyer. I'm Product Marketing Manager for the microcontroller, MPU, and AI group in the Americas, and today I will give you an introduction to how we bring AI to the edge with our STM32 microcontrollers. First I will explain what AI is, then we will go over the tools we offer to bring AI to the STM32, covering hardware, software, and function packs, and also some real-life use cases.

Now, let's have a look at the result of the poll. It seems that most of you have a basic understanding, we have very few experts, and some of you know nothing or very little about AI. That gives me a nice transition into our first topic: what is AI? This is the big question. Currently there is a lot of talk about AI, but also a lot of confusion, so let's clear some of it up. Some terms you might hear frequently are artificial intelligence, machine learning, and deep learning. AI is a very broad term; you can use it for any technique that enables computers to mimic human behavior. Machine learning is a subset of AI. It refers to algorithms and methodologies that improve over time by learning from data. Deep learning, in turn, is a subset of machine learning; deep learning algorithms typically use multiple layers that mimic the neural networks of the human brain.

The term artificial intelligence was first coined in the 1950s, and the study of AI has gone up and down over the decades since. Now we are in the most recent AI wave. This wave is not only the result of more advanced hardware and algorithms; most importantly, it is driven by big data. Data is the cornerstone of AI: the more data you have, the better the performance you will achieve. Probably all of you have heard about AI in one way or another, and it is already used very widely in our daily lives to improve certain tasks or even make them possible at all.
Some examples are face or voice recognition, autonomous driving, predictive maintenance, anomaly detection, medical diagnosis, disease detection or prevention, handwriting recognition, credit card fraud detection, translation engines, shopping suggestions, and many others. As you can see, AI is already playing a big role in our daily lives. What these examples have in common is that most, if not all, of the computation happens in the cloud. But we are currently seeing a trend of it moving from the cloud to the edge. What exactly does that mean, and why is it done?

As mentioned before, a lot of AI activity happens in the cloud. The way it works is that we use sensors to collect data, then transfer the data to the cloud via gateways and do the heavy lifting there, such as running neural networks, where there is almost unlimited computing power. The results, decisions, or actions are then pushed back to the endpoint, also called the edge device. There are several issues with this approach. It requires a constant internet connection, and it can consume a lot of bandwidth, which is a problem for applications with limited bandwidth, such as LoRa, or with no internet connection at all. There are also other issues, such as network reliability, latency, and security. To address these issues, some or all of the decision-making and AI capabilities are moved to the edge, which reduces bandwidth and latency and improves power consumption and data security. It allows mission-critical and time-sensitive decisions to be made faster, more reliably, and with greater security.

One good example of a distributed AI approach is voice assistants such as Alexa or Siri. The detection of the trigger word, for example "Alexa", happens on the edge device. Once it is detected, the connection to the cloud is established, the recognition of the following voice commands or questions is handled in the cloud, and the result is sent back to the edge device.
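To make that distributed pattern concrete, here is a minimal sketch in Python. It is purely my own illustration, not ST or Amazon code: the wake word, function names, and return strings are invented. The point is only the control flow, where a cheap local check runs on the edge device and the expensive cloud round-trip happens only after the wake word is detected.

```python
WAKE_WORD = "alexa"  # hypothetical wake word for this sketch

def edge_detects_wake_word(transcribed_audio: str) -> bool:
    """Cheap local check that runs entirely on the edge device."""
    return any(word.strip(",.!?") == WAKE_WORD
               for word in transcribed_audio.lower().split())

def handle_utterance(transcribed_audio: str) -> str:
    """Gate the expensive cloud round-trip behind the local detector."""
    if not edge_detects_wake_word(transcribed_audio):
        return "ignored"          # no network traffic at all
    # Only now would the device open a connection and stream the
    # request to the cloud for full speech recognition (simulated).
    return "sent to cloud"

print(handle_utterance("turn off the lights"))         # ignored
print(handle_utterance("Alexa, turn off the lights"))  # sent to cloud
```

Everything before the gate stays local, which is exactly why the bandwidth, latency, and privacy benefits described above apply even though the heavy recognition still runs in the cloud.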
And I'm also wondering how many of your Alexas have just gone off during my presentation. So, how does ST fit into this? Our broad portfolio of STM32 MCUs and MPUs, with over 1,500 different part numbers, can be used at the endpoint, at the gateway, and for communication. Our sensors, actuators, and other ST parts are also a good match for a broad range of edge applications that require AI capabilities.

Now, looking at it in more detail, the big question is: how do we get neural networks to run on an STM32 edge device? The solution is the STM32Cube.AI neural network conversion tool. First, we need to understand the different tasks involved in using neural networks. There are five key steps. First, we need to capture data, a lot of it; in some cases hundreds of thousands or even millions of data sets. The more, the better, as it improves the accuracy of the model. Second, we need to clean and label the collected data and select and build the network topology using any of the available frameworks, such as TensorFlow, Keras, ONNX, or others. As a third step, the neural network needs to be trained and optimized for the specific use case. These first three steps take up a large portion of the development time and are usually handled by data scientists, machine learning specialists, and mathematicians. With step number four, we dive back into the STM32 world: we import the trained neural network model into the STM32Cube.AI tool, where it is analyzed, optimized, and converted into optimized code for the STM32. And finally, in step number five, we run the generated neural network, also called the inference, on the STM32 MCU or MPU. Here are a few more details about the last three steps of the process.
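To give a feel for what step five, the inference, actually computes, here is a toy forward pass in Python. The weights and input values are invented for illustration; in the real flow these constants would come from the trained model that the conversion tool turns into optimized tables and loops for the STM32. This is a single dense neuron with a sigmoid activation, i.e. the kind of multiply-accumulate work the generated code performs for every layer.

```python
import math

# Hypothetical weights for one neuron; in practice these come from
# training (steps 1-3) and are baked into the generated code (step 4).
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = 0.1

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def inference(features):
    """One forward pass: a multiply-accumulate over the inputs,
    followed by a nonlinear activation."""
    acc = BIAS
    for w, x in zip(WEIGHTS, features):
        acc += w * x
    return sigmoid(acc)

score = inference([1.0, 0.5, 2.0])
print(round(score, 3))  # a confidence between 0 and 1
```

A real network stacks many such layers with thousands of weights, which is why the memory footprint and the optimizations discussed next matter so much on a microcontroller.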
As mentioned before, we support the output files of many popular neural network frameworks, such as Caffe, Keras, TensorFlow Lite, PyTorch, and ONNX, which means that any framework whose models can be exported to the ONNX open format is supported. We are continuously adding and updating frameworks. STM32Cube.AI imports and analyzes the model and, based on the result, offers a selection of possible part numbers thanks to the built-in product selector. It also offers several optimization and compression capabilities to reduce and optimize the memory footprint.

In addition to STM32Cube.AI, we offer an extensive toolbox and ecosystem for AI applications. It includes software examples, so-called function packs, for our ST development hardware; the STM32 community with a dedicated AI channel; trainings; hands-on videos; and MOOCs. It is rounded out by a dedicated AI partner program that provides engineering and design services specifically for machine learning and deep learning. Our partners here are companies with specific expertise in neural networks and machine learning, and they play a critical role. This is especially important when you don't have deep knowledge of AI, neural networks, or data science; those partners can assist with their specific expertise.

Now, I want to give you a brief overview of our function packs, which are the software examples, and the supported ST development boards. The four main examples we offer are acoustic scene classification and human activity recognition, which are both included in the FP-AI-SENSING1 package; food recognition, which is part of the FP-AI-VISION1 package; and the X-LINUX-AI-CV package, which supports image and object classification on the STM32MP1 MPU. The acoustic scene classification is an audio example in the FP-AI-SENSING1 package that can classify three scenes using a microphone: indoor, outdoor, and in-vehicle.
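To illustrate the kind of compression such a tool can apply to shrink the memory footprint, here is a toy sketch of uniform 8-bit weight quantization in Python. This is my own simplified illustration, not STM32Cube.AI's actual compression scheme: storing each 32-bit float weight as a single byte cuts the weight memory roughly by a factor of four, at the cost of a small, bounded rounding error.

```python
def quantize(weights, num_bits=8):
    """Map float weights onto evenly spaced integer levels (0..255)."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights at inference time."""
    return [lo + qi * scale for qi in q]

weights = [0.81, -0.52, 0.33, 0.07, -0.29]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err < scale)  # rounding error stays below one quantization step
```

The trade-off is exactly the one the tool exposes: smaller flash and RAM usage versus a small loss of numerical precision, which for many models barely affects classification accuracy.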
In the same FP-AI-SENSING1 package there is also a motion example, the human activity recognition, which can recognize five different activities using a MEMS motion sensor: stationary, walking, running, biking, and driving. This function pack supports several different ST boards. For example, the SensorTile board with its STM32L4 ultra-low-power MCU, BlueNRG BLE radio, pressure sensor, accelerometer, gyroscope, and magnetometer. The SensorTile.box includes a more powerful STM32L4+ ultra-low-power MCU and more advanced high-accuracy, low-power sensors, such as the LSM6DSOX inertial sensor with machine learning capabilities. The other board, the IoT node, offers, in addition to an STM32L4 ultra-low-power MCU and various sensors, more connectivity beyond BLE, such as Wi-Fi, sub-1 GHz, and NFC tag capability.

Another function pack is FP-AI-VISION1. It has a food recognition example that can recognize and classify 18 classes of common food, such as pizza, hamburger, or Caesar salad. It runs on the STM32H747 Discovery board with an add-on camera module. Soon we will also release the FP-AI-VISION2 package, which will add person presence detection, also on the STM32H747 Discovery kit. And for one of our latest additions to the STM32 portfolio, the STM32MP1, we created the X-LINUX-AI-CV package. It includes two computer vision examples: image classification with about 1,000 object classes, and multiple object detection with 90 different classes. The function pack supports the STM32MP1 Discovery kit and Evaluation board, or the Avenger96 board. Either a USB camera or the built-in camera module can be used.

To summarize, the different AI solutions, either from ST or from our partners, support our complete STM32 portfolio of standard MCUs and MPUs without any specific AI or neural network hardware. If you want to find out more about our AI solutions for the STM32 family, go to st.com.
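As a closing illustration of what such a classifier's output looks like on the device, here is a small Python sketch. The class labels are the five activities from the human activity recognition example above, but the score values and function names are invented: the model produces one confidence score per class, and the application simply picks the label with the highest score.

```python
# Labels from the human activity recognition example; the score
# vector below is invented for illustration only.
ACTIVITIES = ["stationary", "walking", "running", "biking", "driving"]

def classify(scores):
    """Return the label whose score is highest (an argmax)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return ACTIVITIES[best]

print(classify([0.02, 0.11, 0.75, 0.08, 0.04]))  # running
```

On the real board this argmax runs right after the inference pass, so the decision, for example switching a wearable into run-tracking mode, is made entirely at the edge with no cloud round-trip.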