Hi, my name is Alex and I'm the head of the intelligence team at Actonion. Actonion is a deep-tech software company that brings intelligence to industrial products. We offer customers three key technological assets developed in-house: an edge IoT platform, embedded AI, and secured connectivity for low-power devices. Today I will demonstrate our key expertise in embedded AI.

During the demo I will present AI inference at the microcontroller level for detection of industrial equipment states based on vibration sensor measurements; a federated learning principle, where multiple edge devices contribute to a common AI model; distribution of the AI model over the air, without a firmware update; and anomaly detection and decision-making at the edge.

So, how does it work? Here we have two demonstration kits. Each of them consists of a SensorTile Wireless Industrial Node (STWIN) mounted on a motor and a display connected to the node. Additionally, one of the kits is connected to a flashing light alarm. Both STWIN devices are securely connected to the portal in the cloud; the secured connectivity is based on a custom TCP-over-BLE protocol developed by Actonion.

A workspace in the portal unites these devices and corresponds to one shared AI model for all of them. The AI model is updated in real time based on vibration patterns coming from the devices. It is worth saying that vibration sensor measurements remain at the edge, and only vibration patterns are iteratively added to the shared AI model. When any of the devices learns a new pattern, it reports this pattern to the cloud for distribution to the other edge devices. User involvement is not required: the AI model is seamlessly distributed across multiple devices without a firmware update or device reboot.

So, what are the benefits? Well, the federated learning rounds can be done on a limited number of machines reporting to the cloud for training of a common model, and then the model is distributed across a fleet of devices that monitor the state of the same type of machine. Multiple machines in different states can collaborate on training a robust AI model without exchanging datasets, thus decreasing the time and effort spent on the transition from prototyping to the industrial stage.

Let's start the demonstration. Initially, the devices are connected to the cloud and there is no AI model applied to them. Via the portal, I initiate a learning round on the device placed on the first demo kit. I will also disconnect this device from the cloud in order to prove that patterns are recognized not in the cloud but at the edge. As we see on the LED display, the device has recognized a first vibration pattern, which corresponds to the motor off state. Some model parameters, like the number of patterns and the model size, are also shown on the display.

Now, let's switch the Internet connection back on to allow the device to distribute the newly learned pattern to the cloud. Once the device gets connected, the information about its patterns becomes available in the portal. Let's label this pattern as "motor off".

To demonstrate the federated learning principle, let's add the second device into the same workspace in the portal. As soon as the second device is added into the workspace, the already learned model is automatically distributed to it over the air. This procedure does not require building new firmware or any additional manipulations. Now you can also see the model parameters on the display of the second device. A minimal sketch of the edge-side flow demonstrated so far is shown below.
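To make this flow concrete, here is a minimal, host-runnable C sketch of the edge-side logic: classify vibration features against locally stored patterns, learn an unknown pattern at the edge, and report only that pattern (never the raw measurements) to the cloud for over-the-air distribution. All names and structures here (read_vibration_features, classify, report_pattern_to_cloud, model_t, and so on) are illustrative assumptions, not the actual Actonion firmware API.

```c
#include <stdbool.h>
#include <stdio.h>

#define FEATURE_DIM  8
#define MAX_PATTERNS 8

typedef struct {
    float centroid[FEATURE_DIM];  /* one learned vibration pattern */
} pattern_t;

typedef struct {
    pattern_t patterns[MAX_PATTERNS];
    int       count;              /* number of patterns learned so far */
} model_t;

static model_t model;             /* local copy of the shared AI model */

/* Stub: pull one feature vector from the vibration sensor. */
static void read_vibration_features(float f[FEATURE_DIM]) {
    for (int i = 0; i < FEATURE_DIM; i++) f[i] = 0.0f;
}

/* Stub: return the index of the closest known pattern, or -1 if nothing matches. */
static int classify(const model_t *m, const float f[FEATURE_DIM]) {
    (void)m; (void)f;
    return -1;
}

/* Stub: send only the learned pattern, never raw measurements, to the portal. */
static void report_pattern_to_cloud(const pattern_t *p) { (void)p; }

/* Store an unseen vibration pattern in the local model. */
static int learn_new_pattern(model_t *m, const float f[FEATURE_DIM]) {
    if (m->count >= MAX_PATTERNS) return -1;
    for (int i = 0; i < FEATURE_DIM; i++) m->patterns[m->count].centroid[i] = f[i];
    return m->count++;
}

/* One iteration of the edge loop: inference first, learning only when needed. */
static void edge_loop_step(bool cloud_connected) {
    float features[FEATURE_DIM];
    read_vibration_features(features);

    int idx = classify(&model, features);
    if (idx < 0) {
        idx = learn_new_pattern(&model, features);
        if (idx >= 0 && cloud_connected)
            report_pattern_to_cloud(&model.patterns[idx]);  /* distributed over the air later */
    }
    printf("known patterns: %d, matched: %d\n", model.count, idx);
}

int main(void) {
    edge_loop_step(false);  /* offline: pattern is still learned at the edge */
    edge_loop_step(true);   /* online: newly learned patterns get reported   */
    return 0;
}
```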
As the second motor is currently off, we expect the second device to also recognize the motor off pattern. Let's check it. And here we are. Now, let's turn on the second motor and wait until the second device learns a new vibration pattern and contributes this pattern to the common model. As you can see, the second device has learned a new vibration pattern. Let's label it appropriately. As soon as the second device reports the new pattern into the common model, this model is automatically distributed to the first device. You can see the corresponding model information on the display of the first device. Now, as the motor configurations are the same, we expect the first device to recognize the same motor on pattern that was learned on the second device. Let's check it. As you can see, the motor on pattern, which was learned on the second device, is detected as expected on the first device.

And as a cherry on top of the cake, let's put on a bit of a show and also demonstrate the power of edge decision-making. I will create an alert rule in the portal for new pattern detection on the second device. Once configured via the portal, rules are also processed directly at the edge (a sketch of this kind of local rule evaluation follows below). To show this, I will disconnect the device from the cloud. Now, I will emulate a new, unknown state of the machine, which we consider abnormal, and wait until the device recognizes it. So, the device has detected a new pattern, and the rule was triggered on the device, causing the flashing light alarm and stopping the motor. Once the motor has been stopped, the device recognizes the already known motor off pattern and stops alarming.

It is worth noting again that all these features, like AI inference, federated learning rounds, and decision-making, are backed by a low-power but powerful STM32 microcontroller. Thanks for your attention.
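As a rough illustration of how such a rule could be evaluated locally on the node, here is a small C sketch. The type and function names (edge_rule_t, on_pattern_detected, set_alarm, set_motor) are hypothetical stand-ins chosen for this example; the sketch only mirrors the behaviour shown in the demo, not the actual device firmware.

```c
#include <stdbool.h>
#include <stdio.h>

/* Rule as it might be synced from the portal: "when an unknown pattern is
 * detected, raise the alarm and stop the motor". Fields are illustrative. */
typedef struct {
    int  trigger_pattern;   /* pattern index that fires the rule; -1 = any new pattern */
    bool stop_motor;
    bool flash_alarm;
} edge_rule_t;

/* Stubs standing in for the GPIO / motor-control calls on the real node. */
static void set_alarm(bool on) { printf("alarm %s\n", on ? "ON" : "OFF"); }
static void set_motor(bool on) { printf("motor %s\n", on ? "ON" : "OFF"); }

/* Evaluate the rule locally on every detection; no cloud round-trip is needed,
 * so it keeps working while the device is disconnected. */
static void on_pattern_detected(const edge_rule_t *rule, int pattern,
                                bool is_new, int motor_off_pattern) {
    bool fires = (rule->trigger_pattern == pattern) ||
                 (rule->trigger_pattern < 0 && is_new);
    if (fires) {
        if (rule->flash_alarm) set_alarm(true);
        if (rule->stop_motor)  set_motor(false);
    } else if (pattern == motor_off_pattern) {
        set_alarm(false);   /* back in a known safe state: clear the alarm */
    }
}

int main(void) {
    edge_rule_t rule = { .trigger_pattern = -1, .stop_motor = true, .flash_alarm = true };
    on_pattern_detected(&rule, 2, true, 0);   /* abnormal pattern -> alarm on, motor stopped */
    on_pattern_detected(&rule, 0, false, 0);  /* motor off recognized -> alarm cleared       */
    return 0;
}
```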