Welcome, everyone, to my talk about interfacing sensors with Zephyr for IoT devices. Before getting into the topic, I have a quick question for everybody: how many of you already use Zephyr in one way or another, maybe for developing a custom driver or a sample application against Zephyr? Yeah, that's more than two-thirds of the crowd. Thank you. So let's get started.

I am Dinesh Kumar K, an embedded software engineer at Linumiz. At Linumiz we primarily focus on embedded Linux as well as Zephyr RTOS development for our customers and for various customer hardware. I myself primarily work with Zephyr on board support and drivers, and to some extent on SoCs, and we also support our customers with application development, for both IoT and non-IoT devices, technically edge computing, battery-powered devices, and various other cases. I also work on embedded Linux development in certain aspects, providing software services for board support packages and U-Boot, and we deliver product solutions with Yocto. I also work with OTA solutions like RAUC and SWUpdate, together with the set of backend UI applications we have at Linumiz. I myself live in Coimbatore, in the south of India.

So let's get started with the actual flow of today's talk. Before getting into how the sensor subsystem is laid out in Zephyr RTOS, we will briefly go through what a sensor is and how it can be classified, with some basic info, not much of it. Then we'll see what the sensor subsystem in Zephyr RTOS looks like and how it interfaces with the application, and so on.
Then we'll see how to add a new sensor in Zephyr. You might be interested in this particular topic, because you might already have a sensor with you and would like to add support for it to Zephyr, with a sample application, and play around with the sensor. Finally, we'll also speak about some key insights I gathered in our software development cycle for sensors in Zephyr, and some tips and tricks which can be useful for your application development or driver development for sensors in Zephyr.

So before going into anything else, what technically is a sensor? As Wikipedia says, a sensor is a physical device which senses the physical environment, producing data which can be responded to or recorded. It can either read data like temperature, humidity, pressure, or acceleration, reading information from the physical world, or it can provide events, like a reading crossing a threshold limit, say a pressure limit above a certain value, or detecting methane or carbon emissions above a certain level in a factory or a manufacturing unit. Those are the kinds of things we can interface with a microcontroller platform and observe.

Now that we know what a sensor is, how do we classify it into broader categories? A sensor can either be an analog sensor or a digital sensor. An analog sensor outputs an analog signal which varies over time; we need to capture that signal with an ADC block, an analog-to-digital converter, in the microcontroller, sample it at a certain acquisition rate, and turn it into useful information.
With a digital sensor, the value comes straight from the sensor in digital form, meaning we can get a temperature value or an acceleration value directly from the sensor. Technically this information is shared over a serial peripheral interface, which could be I2C, SPI, or any sort of serial communication; even a CAN bus can be considered here, as it is a serial interface too.

So what vital role can a sensor play in IoT devices? An IoT device in any form captures some parameter or data, processes it or sends it to the cloud or a server, where it is processed to some extent on the server side, with a data visualization platform in the back end. The captured data can also be used for control, using control logic like PID and so on. We could also speak about fleet management or manufacturing; there are any number of possibilities for IoT devices to use a sensor and make use of its functionality to achieve their end use cases.

Now that we know what a sensor is, we have to speak about how sensors are laid out in Zephyr. People who come from operating systems like Linux or FreeRTOS, or any operating system for that matter, might already know what a driver model or device model is in one way or another: devices are connected over a bus, which could be a physical bus, and the physical bus is virtually represented on the software side, and then devices are bound to drivers. In Linux that binding can happen at run time, whereas in the Zephyr case the devices are made at compile time.
We'll also see some details about the Zephyr sensor subsystem specifically, and how it interfaces with the internal or external IO bus, which we will see in a moment. We'll also speak about certain sensor subsystem APIs which will be useful in developing an application or your own drivers.

So what is the driver model, or device model, specifically in Zephyr? We have various devices in this particular block, from device A to device D: device A and device B connected over I2C, device C and device D connected over SPI, everything connected to a single line of bus. The devices are enumerated and placed there; from the device's perspective, device objects are created, and the user application communicates with the physical peripheral through these generated devices. The whole point of this device model representation is to show one insight: there is no separate bus for each independent group of peripherals like in Linux.

So as a first thing in Zephyr, what does a device look like? struct device is almost everything in terms of representing the device, as well as the whole operation of a driver. We use this struct device everywhere, on the driver side and on the application side for consuming this particular device. struct device is almost everything: when I say device model, there is only one single bus, and every device is treated uniformly as a struct device.
Every device is represented by this struct device. What exactly do we have inside it? We'll go into detail in the following slides, but in brief: each device has a name, and also a configuration instance, device-specific configuration details, and also data, the private data, which we would usually call the driver's private data in any other operating system. The configuration part is used for configuring certain aspects; say you want to control the flow control or the baud rate dynamically, all of that goes into the configuration part. The data part is where we keep data from the physical device; for a serial device, say, we get the serial data from the FIFO of the IP block and store it there. In some cases we need a synchronization mechanism or a logging mechanism; those initialization pieces also go into the data part.

Apart from the config part and the data part, there is one other important member, the API, which is the most important thing for any specific driver: it describes how the device operates, and it is how we differentiate one subsystem from another, as we will see in the upcoming slides. One more important aspect, which I should not miss, is power management. Nowadays IoT devices, edge devices, and node devices are often powered from very low-power sources like a 3-volt coin cell, so they need to be power efficient. In those cases we need device power management.
Such things need to be handled in the device-specific section; that is why we have pm_device. Anyway, we will have some explanation about power management in the upcoming slides.

As I said earlier, the device model is super generic: everything is represented by a struct device, and we deal with this struct device at every phase; we will see how the device is laid out and how the driver and other things bind together. But beyond that, what more do we have here? We have an init level and priority, since device instances only differ by a level and a priority when it comes to initialization ordering. Say you have a UART device with a sensor connected to the UART, or you have I2C as a bus with a sensor like an accelerometer connected to it; then the I2C device must be initialized before the accelerometer sensor itself is initialized. That is where init level and priority matter most.

There is also the device API, which defines the nature of a subsystem. We do not have a struct i2c_device, a struct spi_device, a struct fpga_device, and so on and so forth; there is no dedicated device type for each class, only one type for every category, which is struct device. So how do we really differentiate one device from another? Technically speaking, the device API is the only thing that differentiates each subsystem. If you take the UART subsystem, you will have UART FIFO read, FIFO write, and so on; if you take the I2C subsystem, you will have I2C transfer, I2C read and write. That is where the driver APIs are differentiated.
That is the basic differentiation, and each and every subsystem has its own representation of the API. If we go back to the previous slide, we see a const void *api member, meaning it is a type-independent pointer with which we represent each subsystem; it is not specific to any particular class of peripheral device. The devices themselves are statically reserved for every devicetree node at compile time, meaning all the device objects and devicetree nodes are put together at compile time, and later they can be taken out and used for various purposes on the driver side as well as the application side.

Now that we know what the device model looks like, we need to know how to define a device in a driver. How do we do that? We use the DEVICE_DT_INST_DEFINE API and pass the parameters it requires, like the instance number and the init function, where the device-specific initialization should happen. This function will be called by the initialization code of the kernel before execution even reaches main; it is done at kernel boot time. Say a temperature sensor needs to be initialized, or a set of registers needs to be put into a specific state; or suppose the sensor is connected on the SDA line but not on the SCL line — such sanity checks, as well as the initial configuration, need to be done in the init function. Then we have the pm_device, which I will come to later, and then we have a data pointer which stores the device-specific private data, where our device will keep things like
synchronization locks, device-specific buffers, and many other things; the definitions of that private data go behind this data pointer. And as I said earlier, there is a configuration pointer, where the configuration of the device is stored; the reference to that device-specific configuration is passed as the configuration pointer. Then we have a level and a priority, which are the most important things here. The level is a grouping across the whole of Zephyr: you have the PRE_KERNEL_1, PRE_KERNEL_2, POST_KERNEL, and APPLICATION levels. These are groupings where, say, part of a subsystem is grouped under PRE_KERNEL_1, meaning it needs to be initialized before the kernel; say the timer and clock need to be initialized before the scheduler, so that the scheduler and other things can function properly. Then I2C and SPI can be initialized at the PRE_KERNEL_2 level, because those subsystems are not as critical as the clock or timer. Then comes our sensor driver, initialized at the POST_KERNEL level, because it needs the configuration done at the PRE_KERNEL_1 and PRE_KERNEL_2 levels. So we have these multiple levels in Zephyr, and under each level the priority may vary: there is a priority for I2C, a priority for SPI, a priority for UART, and so on. If you want to connect an I2C sensor, you have to initialize the I2C subsystem driver before initializing your I2C sensor driver. This is how those parameters are passed to the DEVICE_DT_INST_DEFINE API. Then the most important thing is the API pointer, which, as I said earlier, is how every subsystem is differentiated: if we are using the sensor API, the device-specific APIs we created in the sensor driver are given as a
reference through this API pointer.

So, we have discussed how to define this particular device as an instance in a driver. As you can see in this code snippet, DEVICE_DT_INST_DEFINE is the macro to use, with all the parameters we discussed earlier; the instance index is an integer from 0 to n-1, based on the compatible property we are using and how many nodes with that compatible we have, in order. Then there is the device-specific init function, which looks like this: here you can see the init function where the UART bus device is obtained through DEVICE_DT_GET on the instance's bus node, and there is an API, device_is_ready, which is used as a sanity check to confirm that the bus device has actually been initialized. As I said earlier, the synchronization mechanisms have to be set up in the init function of a driver, so that is also happening at this level. Then, if the physical sensor provides an interrupt, that also needs to be initialized here rather than being deferred to the application level. This is how the device is initialized for this particular sensor; the compatible name here belongs to the Hangzhou Grow fingerprint sensor. So this is how we define a device, and this is how the init function of this particular device looks.

Then, coming back to this slide, we have the map file which shows the init levels. You can see a lot of addresses mentioned, and what it depicts is how the resources are initialized, and at what levels.
If you look at the top of the table, you can see the ESP32 clock and the kernel heap services; it says ESP32 because I used ESP32 as my development environment. The clock and the heap services are initialized with priority 30 under PRE_KERNEL_1, followed by GPIO, followed by serial, and then at the PRE_KERNEL_2 level you see a timer initialized. This shows that only after the clock is initialized can the timer be initialized. Then, coming to our device, it is initialized at the POST_KERNEL level with a priority of 90.

The next thing is the device itself. As I said before, device creation is a reservation done at compile time; there is no concept of creating a device on the fly at run time, whatever the mechanism might be. In Zephyr we don't create devices during run time; we create devices at compile time, by design. If you look at the same map file from the build, you can see a group of items listed from the device section start to the device section end. At the PRE_KERNEL_1 device section start there is a device named __device_dts_ord_27, which is the device created for the clock, and on the fifth line from the bottom you can see __device_dts_ord_64, which is the device created for our sensor driver and assigned that name. This is how the devices are created and how they look in the map file; you can see it when you build a sample.

So we define a device using that API in a device driver; how do we describe the device in a devicetree node? This is what our device looks like in the devicetree: there is a UART node, which is the parent node, and the sensor we are going to connect is UART-based, so it goes inside the UART node as a child.
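As a hedged illustration of that parent-child layout, an overlay for a UART-attached sensor might look roughly like this; the node name, label, compatible string, and baud rate are placeholders, not the exact ones from the talk.

```dts
/* Illustrative overlay: a UART-attached sensor as a child of the
 * UART bus node. All names here are placeholders. */
&uart1 {
	status = "okay";
	current-speed = <57600>;

	fps: fingerprint-sensor {
		compatible = "vendor,fingerprint-sensor";
		status = "okay";
	};
};
```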
As you can see, there is a node with the label fps, and you can see the compatible property, the reg property, and an integer property. Look at the compatible property: that is the main property which drives the instantiation. So this is what the device node looks like.

What happens to this devicetree is that all the DTS and overlay files are merged into zephyr.dts at build time, and then, using the YAML binding file, which is the description of the sensor we are going to use, the combination of the binding and the merged DTS is parsed into C headers. For this particular sensor the generated C headers look like this, and the process is as shown in this image. You can also find them in the build directory of your sample.

We have already seen how to define a device in a device driver and how to describe a device in a devicetree node. Now we are going to see how to obtain the device in the application. This code snippet shows how we can get a device from the application. DEVICE_DT_GET_ONE is the important API used here; it gives the reference to the struct device created for that particular compatible, and as you can see, it takes only the compatible property. That is not the only API available to get the device; there are several other ways to get devices.

Now for the instantiation part: this slide shows how multiple instantiation happens.
We have two physical sensors, sensor A and sensor B, connected to one single bus; sensor A and sensor B are the same kind of sensor, sharing the same peripheral, a UART. Those physical sensors are differentiated only through their struct device. You can see what the overlay looks like, and from that overlay the devices are created like this. As I said, struct device is everything: this is how the devices are created from the overlay.

So we have covered the device instantiation part; now let's talk about how the sensor subsystem in Zephyr is laid out. As I mentioned earlier, we do not have a virtual bus for each and every IO or serial peripheral; I2C, SPI, and UART all sit on a simple IO bus, which is consumed directly by the sensor subsystem. The one exception I can point out here is when there is no bus at all, when the sensor sits on the SoC itself; in certain cases, like Nordic parts, you have a temperature sensor connected directly to the CPU bus, exposing a set of registers to read a temperature value from. That is what an on-SoC sensor IP looks like, while the other peripherals are connected to the IO bus. From the perspective of Zephyr, the I2C master is on the SoC and the slave is connected externally, and likewise for SPI the slave is connected externally. Such things are handled by the I2C subsystem, the SPI subsystem, and the UART subsystem respectively. The actual physical peripherals communicate with the sensor subsystem through these respective individual subsystems.
The actual sensor driver gets its data through the higher-level device driver API provided by the sensor subsystem, and there is an optional layer, an external sensor SDK. There is the possibility, as with Würth Elektronik, of having an external sensor SDK, where all the logic for getting the sensor readings happens at the code level in the external SDK. We export that external SDK into Zephyr as a module, and then we use it to get the data through the peripheral subsystem API and the sensor subsystem API. That is how we can import an external SDK into Zephyr. So this is the overall view of what the sensor subsystem in Zephyr looks like.

Now, coming to the sensor subsystem itself, Zephyr provides various operations: sensor_sample_fetch, sensor_channel_get, sensor_attr_get, and so on; these are the APIs Zephyr provides for the sensor subsystem. They fall into two basic groups: reading sensor data, where we use those respective APIs, and asynchronous callbacks, say when you have interrupts and need data on a particular interrupt; that comes through another API called sensor_trigger_set.

So let's speak about each API individually, since that is what we need to implement for our own sensor. What is sample fetch? Basically, sample fetch needs to be implemented in a sensor driver to fetch the data from all the channels. And what is a channel? If you have an IMU sensor providing gyro, accel, and other quantities like angular velocity, every such parameter is considered a channel; an accelerometer has X, Y, and Z axes, and each axis is considered a channel.
This is how a sensor's channels are classified: a sensor can have multiple channels. For example, take a BME280, a sensor which measures pressure, ambient temperature, and humidity; pressure, ambient temperature, and humidity are its sensor channels, from which data flows from the sensor into our device driver. As I said earlier, the device driver has device private data; that is where the sensor readings are stored. Then, with the channel get API, we get those fetched readings from the driver's private data into our actual application.

Attributes are for things like this: if you have an accelerometer, you would like to have a threshold; say if the acceleration exceeds 20 g or 100 g, you need an alert, an event which gets triggered. For such configuration to be set, we have attributes; that is where attributes come into play. The sampling frequency and similar settings can likewise be set, and read back, through the attributes from the application's perspective.

The final part is the trigger, which is an event, physically an underlying interrupt. When we set a threshold, how does the device speak back to us? That is the callback functionality you can register: when an interrupt fires, you handle it accordingly. Say you acquire data from a FIFO; the whole data acquisition can happen here. Or it could be just an event, like fall detection or double-tap detection, forwarded to the application side. One thing is common across any operating system here: you do not sleep in an interrupt handler, and you do not want a large set of operations inside the interrupt handler.
These things can be offloaded using Zephyr APIs: Zephyr has a work queue, so the interrupt handling work can be submitted to another thread.

So these are the APIs provided for sensors in Zephyr. And these are the channels and sensor attributes already present in Zephyr for various devices: as I said, an accelerometer has a channel for the X axis and so on, and there are attributes like the lower and upper thresholds. These are the sensor channels and attributes available in Zephyr, and likewise these are the sensor triggers available for the user to communicate with the actual sensor.

Now that we have spoken about how the sensor subsystem and its APIs are arranged in Zephyr, we can move on to how to add a sensor to Zephyr. Compatibility is the first thing, and the most common situation. Say you have a driver, I am speaking about a sensor driver, but it applies to any kind of driver, for a part like a PCA96xx, but Zephyr already has PCA95xx support; then you have to add onto the existing driver. These are the possibilities for adding sensor support in Zephyr. If a sensor has multiple serial interfaces, like I2C and SPI, the bus-specific parts can be split out separately, with the common parameter handling kept in the main sensor driver. These are the ways you can add a sensor in Zephyr. In addition to the driver, you can add a board or a shield to contribute to Zephyr. When you develop a driver, the best practice is to have a sample and test cases, so that other people can use it and Zephyr CI has no issues with it. That is how you can add a device to Zephyr.

Then, tips and tricks I learned writing my own driver: always prefer event-based operation over polling.
If you have a sensor with an interrupt capability, use events rather than polling. And avoid memcpy within the driver in all possible ways. These are the tips and tricks I applied while developing my sensor driver.

Then, what does Zephyr offer in terms of sensor-related power management APIs? This is the state diagram of the power management support Zephyr provides. Zephyr actually provides two power management facilities: system power management and device power management. We use device power management for the sensor use cases, where device runtime power management can be driven from the user application, deciding when to put the sensor to sleep and when to bring it back to the active state. That can be done with the power management API together with a sensor.

So these are the ways sensors interface with IoT devices. Nowadays battery-operated edge nodes are very common, so sensors have to stay in low-power mode, and active sampling needs to be coordinated with the microcontroller's activity. These are the references I used; a lot of this was presented earlier at the Zephyr Developer Summit and in ELC presentations. Thank you.