Hi everyone, I'm Puneema. I work as a software engineer at Intel. Here is my co-presenter, Sharath Kumar, who is also a senior software engineer at Intel. Let me share my presentation. We are going to present on Secure Device Onboard (SDO) on StarlingX. Here is the agenda of the presentation: we will briefly cover the StarlingX architecture, go through what exactly SDO is, then talk about the integration and development steps we took to support SDO on StarlingX, and end the presentation with a demo. StarlingX is a complete cloud infrastructure software stack which runs on the edge and is used by the most demanding applications in fields such as industrial IoT, telecom, and video delivery. This is the architecture diagram of StarlingX. StarlingX can be deployed on bare metal, or it can be deployed as a VM over a host machine. Here you can see there is a base Linux host OS; on top of it we have CentOS, which supports Kubernetes, Ceph, collectd, and libvirt: Kubernetes is the container platform, Ceph is for storage, collectd is the logging and monitoring mechanism, and libvirt is for VMs. There are many components which we have not shown in this architecture diagram. On top of this, StarlingX has developed infrastructure services, which we call the flock services. The first one is configuration management, which is used for installation, node configuration, inventory management, and so on. Fault management is used for setting, getting and suppressing alarms for the customers; it also provides a framework for other services, basically for alarming and conveying that information to the end user. Host management is used for life-cycle management of the hosts, and service management provides the status of all active and passive services running on StarlingX. Software management is used, basically, for software updates.
One more thing to note is that StarlingX also supports open source projects; for example, OpenStack is deployed as a Kubernetes workload on StarlingX. That's how we have it here. StarlingX is also very easy to deploy, and it has low-touch manageability: we provide fast recovery and rapid response to events. That is all about StarlingX. Now, SDO. SDO stands for Secure Device Onboard. It is also an open source project, one which is in the process of becoming an industry standard through the FIDO Alliance. As I said before, SDO is the process by which a device securely onboards to its target platform cloud service. When I say onboard, I mean the first time the device establishes secure contact with the target platform service, which lies on the cloud. The first thing you'll notice is that on the manufacturing side, the ODM or the OEM does two things. The first is the root of trust, which is in silicon for Intel devices and can be ECDSA-based for ARM devices. The other thing it generates is an ownership voucher, a digital artifact which contains the details of the device and also the manufacturer details. The second point is that this ownership voucher travels with the device through the supply chain (which we haven't shown in this diagram) until it reaches the target platform; this voucher.json file is usually uploaded to the cloud service. The same information is synchronized with the rendezvous server. That is how the rendezvous server gets to know the information of the device and also about its owner's target platform. Now the rendezvous server waits for the device to come up. The device goes through its installation phases and gets powered on. The only information it knows is the IP address of the rendezvous server.
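The flow just described maps onto SDO's named protocol stages (the same names come up again in the demo); as a rough sketch:

```
DI  (Device Initialize):    manufacturer embeds the root of trust in the
                            device and produces the ownership voucher
TO0 (Transfer Ownership 0): the owner / target platform registers the
                            voucher with the rendezvous server
TO1 (Transfer Ownership 1): the device, knowing only the rendezvous
                            server's address, contacts it and learns
                            where its owner is
TO2 (Transfer Ownership 2): device and owner mutually authenticate,
                            establish a secure channel, and the
                            provisioning payload is transferred
```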
The device goes and contacts the rendezvous server, and the rendezvous server authenticates the device using the owner.json; the device authenticates the rendezvous server as well. After this authentication, the device contacts the target platform using the IP address provided by the RV service. Here also, mutual authentication takes place between these two agents, using the voucher information and the root of trust that is in the device. After a secure tunnel is established between these two agents, the payload required for provisioning the device is transferred. This payload can be as simple as a password, or it can be a bundle of software which the device requires to carry out its role after installation. So that is the overview of SDO. Let's move on to the integration. This is just an example supply chain use case that I have put together. You can see the rendezvous server, which for this presentation we will run on StarlingX and demonstrate. Our next step is for the IoT platform service to also run on StarlingX; that is in our future plans. Coming to the integration steps, the first thing we need to do is fetch the binary drop from the SDO project (the link is provided here), containerize the RV service, and deploy it on StarlingX. One more piece of work involved is maintenance: whenever there is a new drop, we need to redeploy it on StarlingX. When we started this work, we had many questions. The RV service was already working on Docker Swarm, so what exactly were we doing? Being beginners ourselves, we asked: what is Docker Swarm? What is Kubernetes? Why are we doing this conversion, and if we are doing this porting, how do we do it? During our journey, we learned that Docker Swarm is only suitable for smaller, simpler and more straightforward deployments.
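For context, the RV service's existing Swarm deployment is driven by a docker-compose file. The sketch below is purely illustrative of the shape such a file takes; the service, config, secret and network names are made up, not the actual SDO ones:

```yaml
version: "3.7"
services:
  rv-service:                 # the rendezvous service container
    image: rv-service:latest
    ports:
      - "8000:8000"
    configs:
      - rv-config             # plain configuration handed to the container
    secrets:
      - rv-keystore           # sensitive material (keys, passwords)
    networks:
      - rv-net
configs:
  rv-config:
    file: ./config/rv.properties
secrets:
  rv-keystore:
    file: ./secrets/keystore.p12
networks:
  rv-net:
    driver: overlay           # Swarm's cluster driver -- the part with no
                              # one-to-one Kubernetes equivalent
```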
Kubernetes, on the other hand, is very scalable, is used at the enterprise level, and supports many features that Swarm doesn't, even though Docker Swarm and the Docker Compose format it supports are very easy to understand. One method that comes in handy is to use Kompose, but with it you get a lot of deployment files, and it is easy to get overwhelmed and abandon the task because so many files get generated. To get started, what I recommend is to go through the docker-compose file and see what work is involved. This is the file we had initially. You can see that we have a server; it has its own secret files, it has its configuration, and we have the network information for the RV service. The configs and secrets were very straightforward to convert: we just used the volume support from Kubernetes, and the difference was only in syntax. I have put that in the backup slides; if you are interested, you can go through it. The most important part was the network. Our colleague Sharath Kumar will discuss the network part and how we did the porting work. Sharath, over to you. Thanks, Puneema. I would like to discuss the background of the networking drivers we used in StarlingX, coming from the existing Docker Swarm setup. By default, Docker Swarm uses the overlay network driver for its cluster, and it was a challenge for us to convert that to a Kubernetes-native driver. In StarlingX we use Calico, which is the default network driver available for deploying your pods, services, or deployments. This was a simple example where we compare apples to apples: the Swarm network drivers against the existing Kubernetes drivers on the StarlingX platform. When you look at this deployment file of a Service for Kubernetes, it starts with apiVersion: v1, which is the API version for deploying Services.
The kind field on the second line states that this is a Service deployment. Going further, we first need to understand what a Service is all about. When workloads are created in the form of pods for any Kubernetes application, they can be deployed on any node. So as an end user, how do you access them? You need some kind of endpoint through which you can reach those pods, or the respective IP addresses of the pods, and a Service is the solution. A Service is nothing but a combination of an IP address along with a port. This is how each individual pod can be accessed from anywhere; any number of pods can be reached within the Kubernetes cluster by using a Service together with labels. Going forward, under metadata you have to give the Service a name; otherwise how can we troubleshoot, or how can we access those Services? So the naming part is here, under metadata. Last but not least, the final piece is spec, the specification for the Kubernetes deployment. For access, a Service has two types of support that matter here. One is NodePort, which provides external access to the pod. The second is ClusterIP, where you can communicate inside the cluster and access the different pods. But here we want to access the pods of the SDO application across multiple nodes, so ClusterIP is not the solution; we use NodePort. Then we provide the port information: 8000 as the external and internal port where the communication happens, and a nodePort of 30007 where external users can access the pod from any node of the cluster. This is the overview of the network for the existing StarlingX integration with SDO. Thanks for your time; over to Puneema.
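Putting the pieces just described together (apiVersion, kind, metadata, spec, and NodePort 30007 forwarding to port 8000), the Service file looks roughly like this; the name and selector label are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rv-service            # the name used to address and troubleshoot it
spec:
  type: NodePort              # reachable from outside on every node's IP
  selector:
    app: rv-service           # must match the labels on the RV pods
  ports:
    - port: 8000              # cluster-internal service port
      targetPort: 8000        # container port inside the pod
      nodePort: 30007         # external port exposed on each node
```

External clients then reach the RV service at `http://<node-ip>:30007`.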
Thanks, Sharath, for that quick explanation; it was very good and helpful. This is how the diagram looks when we deploy these Kubernetes resource files on a StarlingX all-in-one setup. You can see how the external world sees it: the IP address of the node and the port it listens on, and then the RV service, which has Redis as its backend and is managed by a DB manager. One piece of troubleshooting we want to share from getting this running successfully on StarlingX: when we ran it for the first time, it was crashing while waiting for the DB manager. The problem was that the Redis container and the RV service container were part of a single pod, and this was causing a synchronization issue. So what we did was add an init step so that it waits for the Redis server to come up first, and only then does the RV service come up. These are the commands for deploying and for viewing the logs, and this is the resultant Kubernetes resource file that we have. This is a snapshot taken after running it successfully on StarlingX, showing the running RV service, and this is how the entire architecture looks after the deployment. So yes, we have successfully launched the RV service on StarlingX. Now come the questions: when you have many services and you want to add many environment variables, it is not easy with one kubectl apply, and a deployment file with so many environment variables becomes very cumbersome. So we also needed to increase our knowledge and use the facilities that open source provides. In that regard, we created a Helm chart. So what exactly is a Helm chart? Helm is a Kubernetes package manager, similar to the DNF that we see running on Fedora. For a quick comparison: just as DNF is an RPM package manager, Helm is a package manager for Kubernetes resources. In the case of Helm we come across the word "chart".
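As an aside, going back to the Redis startup race described a moment ago: because both containers share one pod, a plain initContainer cannot wait on a sibling container (init containers must finish before any main container starts), so one way to express the ordering is a wrapper command that polls Redis before launching the RV service. This is only a sketch; the image name and start script are hypothetical, and it assumes the RV image ships a shell and `nc`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rv-service
spec:
  containers:
    - name: redis
      image: redis:alpine
    - name: rv-service
      image: rv-service:latest        # hypothetical image name
      command: ["sh", "-c"]
      args:
        - |
          # Poll the sibling Redis container (same pod, so same network
          # namespace) until it accepts connections, then start the RV service.
          until nc -z 127.0.0.1 6379; do
            echo "waiting for redis..."; sleep 2;
          done
          exec /app/start-rv.sh        # hypothetical start script
```

Deployed with `kubectl apply -f rv-pod.yaml`; logs are viewed per container, e.g. `kubectl logs rv-service -c rv-service`. Now, back to the Helm chart.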
A chart is nothing but a Kubernetes package; there is nothing big here. This chart can be uploaded to a central repository, just as we do for Docker images. This is just a comparison table I have put together to help the user understand Helm, and this is the folder structure that gets created whenever we run helm create. The first item is Chart.yaml, then the templates directory, and then values.yaml; these three are very important. We are not concentrating on the rest, because they are not mandatory. One important thing to note is that as you grow your skill set, you can also use the charts directory, which captures dependencies between charts. To begin with, though, Chart.yaml, templates and values.yaml are what you need. What we have done: we took the server-service.yaml file we showed you and put it under templates; whatever environment variables, configuration and secrets there were, we put under values.yaml; and Chart.yaml is just the information about the chart and what it does. These are the important commands to keep in mind. Obviously we need Helm and Kubernetes on StarlingX, which it already provides. helm lint is very useful for checking the syntax, and for debugging you can use the particular command I have put here. You can package the chart and then install from that package, or you can just give the path of the chart directory, and you are done. This is how we successfully deployed the RV service, which is one of the entities needed for SDO to happen successfully. Now coming to the demo. This is the environment setup we have for development purposes. We use a simplex setup where the controller, compute and storage are on a single node, and we have the manufacturer supply chain toolkit on our development machine. The same goes for the virtual device, which comes as part of the customer reference implementation, and the IoT platform service, which is also on our development machine.
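For reference before the demo, the Helm commands mentioned above look roughly like this in Helm 3's CLI syntax (Helm 2 used `helm install --name` instead; the chart name and path here are illustrative):

```shell
helm lint ./rv-service                  # check chart syntax; useful for debugging
helm package ./rv-service               # produces rv-service-<version>.tgz
helm install rv-service ./rv-service    # install straight from the chart path
# or install from the packaged archive:
# helm install rv-service rv-service-<version>.tgz
```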
The important thing to note is that the RV service runs on StarlingX. Let's see how we do it; we are going to play a video of it in the interest of time. The first thing, as I said before, is device initialization. You can see the device there. Then I am going to use the manufacturer toolkit, which I have set up already. I need to give the IP address and port of the RV service to the devices, and this can be done using MySQL commands in the workbench. Let's see how this happens; I am going to fast-forward this a bit. This is the MySQL Workbench. You can see that I am going to add the IP address of the rendezvous server, which is the IP address of the StarlingX node, to the rendezvous server entry, and I am going to execute this command. This is how the virtual device will know where its RV service is. You can see that the command has executed successfully. Then this is the device I have, which comes as part of the customer reference implementation. At the end, the highlighted line is the IP address of the manufacturer; this is how the connection between the manufacturer and the virtual device happens. Now I am going to run this executable. I have run it, and you can see "device initialization successful" at the end. As part of it, you can see that the third line from the bottom is the GUID, a unique ID generated as the end result of device initialization, and the third line from the top is the serial ID of the device. These two are important for the next steps of the demo. The next thing: whenever we have a customer, as I said, the information about the customer is stored in the voucher; the yellow flag you can see is voucher.json. Then I need to assign this device to the customer.
Once again, I am going to use the manufacturer toolkit and the MySQL Workbench to do it. Let's do it; I am going to fast-forward a bit. You can see it here: I am assigning the device to the customer. As part of it, I have copied the serial ID of the device, along with the customer's public key (ECDSA), which I generated and kept ready earlier. When I execute this MySQL command, it assigns the device to the customer. So I am done with that. What next? The next thing that should happen is that I need to give the voucher information to the IoT platform device management service. Before that, I get this owner.json from the manufacturer toolkit; let's see how I do it. You can see in the link I have 10.66.244.106; this is where the manufacturer is running, and the browser shows the owner.json content. It basically has the device information and also the customer information. Now I am going to copy this and put it under the IoT platform SDK; let's see, I am going to fast-forward. Coming back to the terminal: you can see that under the SDO IoT platform, under devices, I create a folder having the GUID of the device as its name and put the owner information there. This is how the cloud service gets to know the details of the device. Now let's run our RV service on StarlingX. For that, I am going to copy the SDO on-prem package onto the all-in-one simplex setup. I have executed it successfully; you can see that the pods and services are already running. Now let's move on to our IoT platform; you can see that it has started. Coming to the RV service: what we should do now is run the TO0 scheduler, which synchronizes information between the RV service and the device's target platform; that is how the RV service gets to know the device and its owner. I am running it here; this is when the RV service and the IoT platform talk to each other.
You can see that it is a failure, because the RV service doesn't know that the IoT platform is there; we need to add it to the whitelist of the RV service. So I did that, and then we need to execute the TO0 scheduler again. You can see it failed, and then I executed the command again; sorry. You can see that the second time it executes. Let me fast-forward to the end result. Yes, this is the second run of the TO0 scheduler, and it is successful. I didn't fast-forward correctly; let me try again. There: you can see that TO0 is done successfully, and now we move on to executing the other protocols, TO1 and TO2. Now I have to power on the device, meaning I need to execute the device. You can see that I have come to the device, executed it, and it starts running the other services. This is the device: give it the GUID and then run the other services. You can see that device onboarding has begun now. There are components called OCS and OPS (the owner companion and owner protocol services) which handle it; these have run. This is the log from OPS, which is responsible for the TO2 protocol, that is, the conversation between the IoT platform and the device. Now let's see what happens on the OCS side. Yes, on the device log, if you look, you can see the device has onboarded successfully, and you can also see that I am listing the files that were downloaded: payload.bin, as I said before, comes down. That is all about the demo; this is how the process goes with the RV service running on StarlingX. And this is the reference that we have. Yes, this ends the demo. Please join the community, and if you still have questions you can ping me on IRC. Thank you for listening. Bye.