My name is Mieszko Miruński, I'm from AVSystem. We are focused on device management and all the aspects of it. Today I wanted to talk about the management of IoT TinyML devices. First I'll give a short introduction to what TinyML actually is in this scenario. Then we will move on to the anomaly detection IoT device demo and showcase a real example of what can be done. After that, a short overview of Lightweight M2M (LwM2M) and how we can combine LwM2M with TinyML to manage the devices.

Starting with the introduction: TinyML stands for Tiny Machine Learning. It refers to the practice of running machine learning algorithms and models on low-power, small-scale devices, typically at the edge of the network. These devices include microcontrollers, sensors, wearables, and other embedded systems. The goal of TinyML is to bring intelligence and decision-making capabilities directly to those resource-constrained devices, enabling real-time data processing and autonomous functionality. Cloud-based analysis of device sensor data is inefficient due to the sheer volume of data that the device needs to transmit. A more efficient way is to process the sensor data directly on the device using TinyML, for example analyzing acceleration in three axes so that we can detect complex movements and vibrations, which can give valuable insights. This enables use cases such as predictive maintenance, monitoring utilization of valuable goods, or classifying movement of people or animals.

These days more and more smart sensors are being developed. In addition to the sensing capabilities, smart sensors come with an embedded MCU, which runs the TinyML model and communicates only the detected patterns to the main MCU of the device. These TinyML-integrated sensors are referred to as the Sensor 2.0 paradigm.
Smart sensors simplify the implementation of TinyML, as they allow for retrofitting existing devices with TinyML capabilities without having to redesign the whole solution.

The key features of TinyML: first, low latency. By running machine learning models on the edge, TinyML reduces the need for data transmission to the cloud, and real-time inference on the device results in low-latency decision making. Then power efficiency: the algorithms are designed to be highly efficient, requiring minimal power consumption. This enables deployment on battery-powered devices, extending their operational lifetime. And privacy and security: with TinyML the data remains on the device, reducing the need for transmitting sensitive information to external servers. This enhances privacy, mitigates potential security risks, and allows for localized processing.

As for the use cases: anomaly detection, so analyzing the sensor data in real time and detecting anomalies or abnormal patterns, for example in an industrial setting. Predictive maintenance is really similar to that, but here we check whether the data matches patterns of known issues that can happen with the machinery. Energy management can also benefit: if we see different behavior of the device under different energy conditions, we can use the sensor data to minimize energy wastage. Gesture recognition, enabling hands-free control of devices and triggering specific actions based on that. Environmental monitoring, where we can process data regarding air quality, noise levels, and humidity directly on the device instead of moving this data to the cloud. And of course health monitoring, for example early detection of symptoms in patients. So TinyML in IoT opens up the possibility of running machine learning algorithms directly on edge devices, and this in turn reduces the need for cloud connectivity.
One of the key challenges here is to keep the TinyML model reliable post-deployment, because oftentimes the data sets used for training the model differ from the real-world data and change over time, which leads to inaccurate models; in addition, the environmental context may change over time. So we get a deterioration of the model quality. Continual learning refers to the ability of TinyML models to adapt over time. This can be accomplished by learning from new data sets without the need to retrain the model from scratch. And though continual learning methods are well known, practical implementations are often missing when running the models on resource-constrained devices. What can be proposed as the missing link here is LwM2M, which allows us to securely store the data from the device and remotely update the model on the device.

So let's move to the anomaly detection demo and see how it looks from a high-level point of view. First, we have our machinery: in this case, just a simple fan with an accelerometer attached to it. So we have a vibration sensor and an IoT device with connectivity, which can run the model. First, we take care of the TinyML flow. In this case, we are using the Edge Impulse platform to collect the data from the accelerometer, train the machine learning classifier, and generate a standalone C library. This library, containing the TinyML model, can run on the device and provide signaling about pattern detection. After that, we add the connectivity part of the device. Most of the time, the device will be working in standard operation: gathering signals from the sensors, evaluating them, and providing the TinyML insight both to the outside world and to itself. Based on that, the server can manage the device and schedule operations on it, or operate on other devices.
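The device-side loop described here can be sketched roughly as follows. Note that `classify_window` is a toy stand-in for the classifier exported from Edge Impulse (in the real demo that is a generated C library, not Python), and the threshold, labels, and confidence values are purely illustrative assumptions:

```python
# Minimal sketch of the on-device TinyML loop: classify a window of
# accelerometer samples and report only the detected pattern, not the
# raw data. classify_window() is a toy stand-in for the Edge Impulse
# classifier; thresholds and labels here are illustrative assumptions.

def classify_window(samples):
    # Toy classifier: flag the window as an anomaly when the mean
    # vibration energy is high, otherwise report normal operation.
    energy = sum(x * x + y * y + z * z for (x, y, z) in samples) / len(samples)
    return ("anomaly", 0.9) if energy > 4.0 else ("nominal", 0.95)

def run_once(sensor_window, notify):
    label, confidence = classify_window(sensor_window)
    # The device acts on the insight itself *and* reports it upward,
    # but only when something noteworthy was detected.
    if label != "nominal":
        notify({"label": label, "confidence": confidence})
    return label

calm = [(0.1, 0.0, 1.0)] * 8      # roughly gravity plus small noise
shaking = [(1.5, 1.8, 2.2)] * 8   # strong vibration on all axes
events = []
run_once(calm, events.append)
run_once(shaking, events.append)
print(events)  # a single notification, for the shaking window only
```

The point of the sketch is the shape of the flow: raw samples stay on the device, and only the classification result crosses the network.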
Talking about the whole process, we can use the same flow to update the model. We can also read the raw data from the sensors, retrain the model on the Edge Impulse side, on the MLOps side, and after that perform a FOTA (firmware over-the-air) update on the device so that the model is reconfigured and updated.

So, we've talked about TinyML; let's move to Lightweight M2M: what it is and how it fits into all of that. LwM2M is an application-layer communication protocol. It was developed by OMA SpecWorks to simplify the messaging and device management of IoT devices. It simplifies the design and development of IoT solutions by standardizing the data format and taking care of the complicated processes of secure data authentication. The standard also defines the processes of sensor data collection, connectivity monitoring, and firmware updates. In the architecture of LwM2M, we have three components: the LwM2M client running on the end device; the LwM2M server, which manages the device, its data, and its firmware in the cloud; and the LwM2M bootstrap server, which is a cloud service to authenticate and provision the client. The client communicates with both servers. It ensures a secure, encrypted connection with those servers and sends the data in the right format as dictated by the standard. The server receives this data and manages the device by configuring different parameters. And the bootstrap server allows for changing which server the device should be connected to. The standard uses the so-called IPSO Smart Objects, or just LwM2M objects, to represent configuration, functionalities, and sensors, and to realize data interoperability. By structuring the objects in a specific way, a language is created that both the LwM2M client and server can comprehend. This language contains objects, object instances, and resources.
Each data message is sent using a data format defined in this way. The device contains different building blocks, and each of these blocks is represented by an object, identified by its object ID. The top level is the object, which contains different object instances. For example, we can have a thermometer object with three instances, because we have three different thermometers on our device. And each of these instances has different resources, which contain data like the minimum and maximum values and the current value of the sensor. When we look at how the data is standardized, we see that each object has a designated object ID, and the same goes for the object instance ID and the resource ID. The object ID is defined by the standard, as is the resource ID, while the object instances just represent the different sensors, objects, or pieces of machinery on the device. For example, when we have a temperature sensor, we have the path 3303/0, because 3303 is the Temperature object and 0 is the zeroth instance, and then /5700, which is the resource for the sensor value. In this way, we can efficiently encode the data without needing a very long string to describe what the value is and what it represents.

How does this connect with TinyML? We start with the standard data collection. We have the objects which represent the state of the device and its sensors, and they can be updated in two different ways: by a remote read from the LwM2M server, or by observation of changes in the state of the object. Each time the device detects a change in the state of an observed object, the appropriate notification is sent to the server. For example, we have the object representing the accelerometer, which has an assigned ID and represents the values measured by the accelerometer. This can be used to monitor power poles by reading the sensor values at regular intervals, which enables estimating the pole's current inclination.
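The addressing scheme described earlier can be made concrete with a small sketch. The IDs are real (3303 is the IPSO Temperature object and 5700 its Sensor Value resource), but the dictionary-based "data model" below is just an illustration, not a real LwM2M client:

```python
# Toy LwM2M-style data model addressed as /objectID/instanceID/resourceID.
# 3303 (Temperature) and 5700 (Sensor Value) come from the IPSO registry;
# the dictionary store itself is purely illustrative.

TEMPERATURE_OBJECT = 3303
SENSOR_VALUE = 5700
MIN_MEASURED = 5601   # Min Measured Value resource
MAX_MEASURED = 5602   # Max Measured Value resource

store = {}

def write(obj_id, inst_id, res_id, value):
    store[(obj_id, inst_id, res_id)] = value

def read(path):
    # Parse a "/3303/0/5700"-style path into the three numeric IDs.
    obj_id, inst_id, res_id = (int(p) for p in path.strip("/").split("/"))
    return store[(obj_id, inst_id, res_id)]

# Three thermometers on one device -> three instances (0, 1, 2)
# of the same Temperature object.
for instance, reading in enumerate([21.5, 22.0, 19.8]):
    write(TEMPERATURE_OBJECT, instance, SENSOR_VALUE, reading)

print(read("/3303/0/5700"))  # 21.5 -- the zeroth thermometer's value
```

The compact numeric path is the whole point: both sides know from the standard that 3303/x/5700 means "temperature value", so no descriptive string needs to travel over the constrained link.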
Moreover, observing a rapid change in the state of the accelerometer object may give information about the pole overturning. The device should only send an update when there is a significant change in the readings, because sending notifications all the time would consume cloud resources. If we apply this object paradigm to the whole IoT device, including TinyML, we can see how it can all be integrated together. At the bottom there is the simple output of the accelerometer, the raw values, which we don't want to send to the cloud all the time. Above that are the TinyML feature extractor and classification. So we have, for example, a pattern detector object, and additional objects like the firmware object and the device object, which can be used to manage the device.

Starting with the ML model object: this is used to describe the current model, so we have the model name and its version. It can have a single instance if we have a simple device, or multiple instances if there is more than one model on the device. This allows for upgrading or relearning the model and checking which model is actually on the device. We also have the pattern detector object. It can be used to represent the results of the pattern classifier applied to the sensor data. This object can have many instances, one for each classifier output class. It was assumed that for each time window in which sensor data is analyzed, the result of the classification is the class with the highest probability. Next, the anomaly detector object. This is used to report situations in which the anomaly detection module detected abnormal behavior. A high number of anomalies found in the sensor data means that the nature of the data is significantly different from the training data.
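The pattern detector idea above, combined with the notify-only-on-significant-change rule, can be sketched as follows. The class names are invented for the fan demo, and this is a sketch of the logic only, not of any standardized object:

```python
# Sketch of the pattern detector behaviour: for each analysed time
# window, keep only the class with the highest probability, and notify
# the server only when the winning class *changes*. Class labels are
# invented examples for the fan demo.

def top_class(probabilities):
    # Pick the most likely class for this time window.
    return max(probabilities, key=probabilities.get)

class PatternDetector:
    def __init__(self):
        self.last_winner = None
        self.notifications = []   # what actually gets sent upstream

    def update(self, probabilities):
        winner = top_class(probabilities)
        # Gate notifications on state changes, so steady operation
        # does not flood the server with identical updates.
        if winner != self.last_winner:
            self.notifications.append(winner)
            self.last_winner = winner

detector = PatternDetector()
for probs in (
    {"nominal": 0.90, "loose-mount": 0.08, "blocked-rotor": 0.02},
    {"nominal": 0.88, "loose-mount": 0.10, "blocked-rotor": 0.02},
    {"nominal": 0.10, "loose-mount": 0.75, "blocked-rotor": 0.15},
):
    detector.update(probs)

print(detector.notifications)  # ['nominal', 'loose-mount']
```

Three classified windows produce only two notifications, because the second window repeats the first window's result.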
From that we can know that the classifier running on the device needs to be updated or changed into something else if this state persists, or that the device is really malfunctioning. We also have the classifier object. This is used to report the detailed results of the anomaly detector and the classifier output, and it is updated with each iteration of the ML model's operation. Here, a lot of data could be sent to the server; alternatively, we can just check that a specific state occurred on the device and send only a notification about that. We also have the anomaly analyzer object. If an anomaly is detected on the device, this object will gather the data from the time of the anomaly, so we can do further inspection on it and check whether the readings are incorrect or the model needs relearning.

So, to summarize the objects: we have five different objects that can be used to manage the whole state of machine learning on the device. Thanks to the open standard and this standardized approach, we can do it for different devices, different objects, and different sensors, and have a single backend on the server side to manage all of those sensor devices, display them accordingly, and have a common language between the server and the device, as well as update the whole model during a full FOTA or execute some relearning on the device. Yeah, so that was pretty much all from me. As a summary, this is just a small glimpse of what can be done: how we can update the model and its instances, and how we can do such things on constrained devices in the structured way that LwM2M allows, and use it for our use case. OK, thank you for your attention. Are there any questions? Yes? Hold on a moment.
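The anomaly analyzer behaviour described above can be sketched as a rolling buffer of raw samples that is frozen whenever the detector fires. The buffer size and the interface are illustrative assumptions, not part of any standardized object:

```python
from collections import deque

# Sketch of the anomaly analyzer object: keep a rolling window of raw
# sensor samples, and when an anomaly is flagged, freeze a snapshot of
# that window so the server can later fetch it for inspection or
# relearning. Window size and API are illustrative assumptions.

class AnomalyAnalyzer:
    def __init__(self, window=16):
        self.recent = deque(maxlen=window)  # rolling raw-sample buffer
        self.captured = []                  # one frozen window per anomaly

    def feed(self, sample, is_anomaly):
        self.recent.append(sample)
        if is_anomaly:
            # Snapshot the data surrounding the anomaly, so we can check
            # whether the readings were bad or the model needs relearning.
            self.captured.append(list(self.recent))

analyzer = AnomalyAnalyzer(window=4)
for sample in [0.1, 0.2, 0.1, 5.0, 0.2]:
    analyzer.feed(sample, is_anomaly=(sample > 1.0))

print(analyzer.captured)  # [[0.1, 0.2, 0.1, 5.0]]
```

Only the window around the anomaly is retained, which keeps the raw-data cost bounded while still giving the server enough context for inspection.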
So yeah, my question is: I see that on the object for configuring the model, there is a resource with opaque data. Is there a format, a standard, for managing this kind of model, or is it implementation dependent? This is implementation dependent, so it really depends on what type of models you are going to be using. For example, it can store the parameters of the model, or even the model itself if you want to. It really just depends on what TinyML model you are going to be using. Thank you. Any other questions? Thank you very much for your presentation. Thank you.