So, hey folks, welcome to Cloud Native Live, where we dive into the code behind cloud native. I am your host, Chaharyarabha Satyamindam, joining you from Amsterdam. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They build things, they create things, and they answer your questions. In today's session, I'm stoked to introduce Tom Quinn, who will be presenting "Introducing Shifu, a Kubernetes Native Industrial Edge." This is an official livestream of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would violate the code of conduct; basically, please be respectful to all of your fellow participants and presenters. With that, I will hand it over to Tom Quinn to kick off today's presentation. So, hey Tom, how are you? Hi, I'm good, thank you. How are you? Yeah, I'm fine as well. So I guess you are good to start your session, best of luck. Okay, yeah, let me just start and share my screen. Can you see it? Yes. All right, so hello everyone. I am Tom, a co-founder and chief SRE of Edgenesis. Today my topic will be introducing Shifu, a Kubernetes native industrial edge. A little bit of introduction to the company: we solve the IoT interoperability problem in a Kubernetes native way, across cloud and edge. So here's today's agenda. First, I'm going to introduce Shifu, and then the demo. Since this demo includes IoT devices, I'm going to introduce the demo architecture as well, then we'll go to the live demo, and at the end we'll have a Q&A session. All right, let's get started. So, first of all, what is Shifu?
So, Shifu is a lightweight, production-grade, protocol- and vendor-agnostic IoT development framework. Oftentimes, when we try to integrate IoT devices into our software systems, they introduce silos: the more devices we have, the more systems, the more protocols, the more ad hoc pieces we bring into the software. With Shifu, we abstract everything into Kubernetes resources. As shown here in the architecture, each IoT device is virtualized into a pod, which we call a deviceShifu. Within the pod there can be logic that is protocol-specific, driver-specific, or SDK-specific to the device. Each device corresponds to a pod, and that way we can deploy our whole stack just like deploying software. On top of that, we can easily integrate software into the same system, so we use the same IT infrastructure to manage both our software and our IoT devices. Shifu has two core components. One is the deviceShifu, which communicates with the device. The other, on top, is the Shifu controller, together with shifud for device discovery. The Shifu controller manages the whole lifecycle of the IoT devices: if I discover a device, or manually input one, the Shifu controller automatically deploys the corresponding deployments, services, and config maps. A little bit about Shifu's architecture: within a single deviceShifu, this is how Shifu works. At the bottom we have the actual device, and inside the deviceShifu we have a containerized driver for that device. We also have a layer-7 proxy using HTTP, MQTT, or even gRPC, and on top of that everything is exposed as REST APIs. That way your application can be developed easily, just like developing a web application. Just a quick note.
So, here we have a deviceShifu with the driver of an IoT device. But if the device uses a public standard protocol such as HTTP, MQTT, or OPC UA, we don't have to do that: we have deviceShifus for general open protocols. All right. So, before we get into the demo, here is the architecture itself. Can you hear me? Can you zoom in a bit, because people are asking? Oh, okay, I'll zoom in a bit. How about now? Yeah, I guess that's awesome. So this is today's architecture. I have an existing Kubernetes cluster running K3s with two nodes: the master node runs on a Raspberry Pi, and the worker node runs in a VM on my host. In today's demo, I am going to integrate two devices. One is an IP camera using RTSP. The other is called a displacement sensor; we'll get to that later. It communicates via the I2C interface on the Raspberry Pi. All right, let's get into the demo. First of all, I will cover the setup process. To set up the cluster, we have to set up the hosts individually. Here we have the VM host; I use Multipass, but if you have, say, a plain Linux host, you can just do this directly. We install WireGuard to ensure the interface has a fixed IP. We set up WireGuard and then bring up the K3s master node, making sure we advertise it on the WireGuard interface and use the WireGuard address as the node IP. Then on the worker node, we join the master with the K3s agent, also announcing ourselves with the WireGuard IP and the WireGuard interface, giving it the master URL. And at last, we label the K3s node as type worker so that worker resources get scheduled there, and then we set up Shifu. So what I have here is a cluster that has just been set up. If I do kubectl get nodes, we have two nodes here: one is k3s, one is raspberrypi.
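The setup steps just described could look roughly like this. This is a sketch, not the presenter's exact commands: it assumes wg0 is the WireGuard interface, uses standard k3s flags (--advertise-address, --node-ip, --flannel-iface), and the addresses, token, and node name are placeholders.

```shell
# Master (Raspberry Pi): bring up k3s over the WireGuard interface,
# advertising the fixed WireGuard address.
curl -sfL https://get.k3s.io | sh -s - server \
  --advertise-address 10.0.0.1 \
  --node-ip 10.0.0.1 \
  --flannel-iface wg0

# Worker (VM): join the master over WireGuard, announcing the
# worker's WireGuard IP and interface.
curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.1:6443 \
  K3S_TOKEN=<node-token-from-master> sh -s - agent \
  --node-ip 10.0.0.2 \
  --flannel-iface wg0

# Label the worker node so worker-only resources schedule there.
kubectl label node k3s type=worker
```

The node token lives on the master (typically under /var/lib/rancher/k3s/server/node-token); the placeholder is left as-is here.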
k3s is the host that I'm sharing right now, and raspberrypi is right next to it. So, let's begin by installing Shifu. Since Shifu is Kubernetes native, we can install it with a single command. As you can see here, most of the resources are unchanged, since I've already applied and pre-downloaded everything. Shifu introduces two custom resource definitions. One is EdgeDevice, a custom resource definition that allows us to control and manage IoT devices: if I connect a camera, there will be an EdgeDevice for that camera. The second custom resource definition is TelemetryService, which allows us to push data to our desired data store. It could be a time series database, a SQL Server or MySQL database, or even an object store. We also deploy a set of roles and namespaces, and, at last, a set of service accounts to go with them. After we install Shifu, we can do a kubectl get pods. You can see the Shifu controller is installed and up and running; this is what controls and manages the full lifecycle of the devices. All right, let's get into the first step of the demo. Here is its architecture: I'll connect the camera to my worker node, which has the IP 192.168.64.9, and then view the camera's functionality through a browser. The master node and the worker node are connected over WireGuard. All we have to do is deploy this camera YAML file. If you look into it, it's a mix of several YAML resources. The most important one is the EdgeDevice, which is Shifu's custom resource definition. It allows us to configure the SKU of the device, plus the connection and protocol of the device.
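A quick sanity check after the one-line install might look like this. It is a sketch: the grep-based checks avoid assuming exact resource names or namespaces, since those depend on the Shifu version installed.

```shell
# Confirm the two CRDs Shifu introduces are registered
# (expect an EdgeDevice and a TelemetryService definition).
kubectl get crd | grep -i shifu

# Confirm the Shifu controller pod is up and running.
kubectl get pods --all-namespaces | grep -i shifu

# At this point there are no devices yet, so this list is empty.
kubectl get edgedevices --all-namespaces
```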
So, the connection could be Ethernet, USB, or other interfaces, but here we're using Ethernet. The protocol can be, say, MQTT or OPC UA, and in that case you fill in the actual OPC UA server address or the MQTT broker address. That's the EdgeDevice. In order to deploy a deviceShifu, you also need a ConfigMap. The ConfigMap is the configuration for the digital twin, the deviceShifu. In it, you give the driver image for the device SKU, and also the instructions. Instructions are the abilities you would like to have from your device. Since I'm integrating an IP camera, I want capture, which returns the current frame, and stream, which returns a continuous stream from the camera. Telemetries allow us to periodically detect whether the device is connected; if it isn't, the EdgeDevice will switch to a failed state. The rest is just the deployment, which is basically the same as what I showed before: a proxy plus the driver. Within this deployment we have one replica, and in the pod we have a layer-7 proxy called deviceshifu-http-http plus the device driver, which is the Python camera driver. Here we specify the IP address of the camera and the RTSP port; if it's not the standard one, you can specify it here, along with the username and password. Since this is just for demo purposes, I use a plain-text username and password, but in a real-world deployment you would want to use Kubernetes Secrets or another secret store. That's it for the deployment manifest. Let's go ahead and apply it to our cluster: all I have to do is kubectl apply -f camera.yaml. This applies all four resources to the cluster. Then we check the NodePort of the deviceShifu service; since we're demoing, we use a NodePort service for this deviceShifu, at port 30196.
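Put together, the camera's EdgeDevice and its instruction ConfigMap might look roughly like this. This is a sketch, not the presenter's exact manifest: the apiVersion, field names, and structure follow the upstream Shifu conventions as I understand them, and the names and addresses are illustrative placeholders.

```yaml
apiVersion: shifu.edgenesis.io/v1alpha1
kind: EdgeDevice
metadata:
  name: edgedevice-camera
spec:
  sku: "IP Camera"            # the device SKU mentioned in the talk
  connection: Ethernet        # could also be USB or another interface
  protocol: HTTP              # the Python driver fronts RTSP with HTTP
  address: camera-driver:11112  # hypothetical driver address
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: camera-configmap
data:
  instructions: |
    instructions:
      capture:                # returns the current frame
      stream:                 # returns a continuous stream
      info:
  telemetries: |
    telemetries:
      device_health:          # periodic connectivity check; on failure
        properties:           # the EdgeDevice switches to a failed state
          instruction: capture
          initialDelayMs: 1000
          intervalMs: 1000
```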
We can also check the status of the pod and see which host it's running on. Okay, so it's running on the k3s node, it's up and running, and it has two containers. So, let's go to the browser: that's the node IP, and the port is 30196, and there it is. As you can see here, we get the error "device instructions does not exist." Remember, we configured the instructions in our ConfigMap; we have three instructions and none of them is slash, so the root path doesn't map to any instruction. But we can use /capture, which captures the current frame from the camera. We can try different angles and move around a bit. And if we want to see the stream, we can hit /stream, which gives us a continuous stream from the camera. That's it for the camera connection. We can also check out its resource with kubectl get edgedevices: here we have the camera, and if we describe it, we can see all its status and connectivity details. That concludes the first part of this demo, step one. Now let's continue to step two. As you can see, here's my current setup: I have a Raspberry Pi, which is our master node; a circuit board, a breakout board, connecting the blue one here, an analog-to-digital converter; and here I have a displacement sensor. For those of you not familiar with displacement sensors, this is basically a device you can use to measure distances: the amount of string pulled out corresponds to the output voltage, acting like a potentiometer. A use case could be, for example, placing this onto a larger machine; then, just by measuring the voltage, I can tell how much the machine has been lifted, or what its height is.
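The browser checks above can also be done with curl from any machine that can reach the node. A sketch using this demo's node IP and NodePort; the EdgeDevice name is assumed to be camera:

```shell
# Root path: no instruction is mapped to "/", so this returns the
# "device instructions does not exist" error shown in the demo.
curl http://192.168.64.9:30196/

# Grab the current frame and save it locally.
curl http://192.168.64.9:30196/capture -o frame.jpg

# Inspect the EdgeDevice resource and its connectivity status.
kubectl get edgedevices
kubectl describe edgedevice camera
```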
And as you can see, since the device is directly connected to the Raspberry Pi, we have to deploy its resources onto the Pi; we cannot deploy them onto my physical host here. So, here is the architecture for step two: I've added the displacement sensor, connected via I2C to the deviceShifu sensor on the master node, which is the Raspberry Pi. And within that, I have a driver; let's see how the driver works. The driver is just a general-purpose HTTP server that talks to the ADS1115 ADC inside. It exposes one endpoint, /sensor, on port 5000, and once called, it returns the displacement and also the voltage. It calculates the displacement from the voltage: 1250 is the maximum range of the displacement sensor, so I divide the voltage by five and multiply by 1250 to get the actual displacement value. For the sensor deployment, similar to the camera, we have the EdgeDevice resource. Here we have a different SKU, called SG20, and the connection is given as HTTP because the driver uses HTTP; the address is the driver's actual exposed port. We have a different instruction here, called sensor, and the driver image is a different one too. So, that's that. The deployment itself is similar to what we had before: a layer-7 proxy plus the displacement sensor driver. One thing to note is that we have to mark it as privileged: true, since we're accessing local devices on the Raspberry Pi, so we have to pass this flag to the container runtime. And for the node selector, we have to schedule it on the master, since the sensor is directly connected to the master; that's one nodeSelector we have to use. And then we're pretty much good; let's actually deploy the sensor. So I do kubectl apply -f sensor.yaml. All right, it creates all four resources.
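The driver's voltage-to-displacement conversion can be sketched like this. A minimal sketch, not the presenter's actual driver: the ADS1115 read over I2C is stubbed with a fake resting value since it needs real hardware, and the handler mirrors the single /sensor endpoint on port 5000 described above (5 V full scale over a 1250 mm range).

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

FULL_SCALE_VOLTS = 5.0   # ADC reading at maximum pull-out
MAX_RANGE_MM = 1250.0    # maximum displacement of the sensor


def to_displacement(voltage: float) -> float:
    """Convert an ADC voltage reading to millimeters of displacement."""
    return voltage / FULL_SCALE_VOLTS * MAX_RANGE_MM


def read_voltage() -> float:
    """Stub for the real ADS1115 read over I2C (hardware required)."""
    return 0.02  # the resting value seen in the demo


class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/sensor":
            self.send_error(404)
            return
        v = read_voltage()
        body = json.dumps({"displacement": to_displacement(v), "voltage": v})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())


if __name__ == "__main__":
    # Print one reading; to run as the actual driver, serve instead:
    # HTTPServer(("", 5000), SensorHandler).serve_forever()
    v = read_voltage()
    print(json.dumps({"displacement": to_displacement(v), "voltage": v}))
```

With the resting voltage of 0.02 V, this yields a displacement of 5 mm, which matches the roughly-5.7 reading shown a moment later in the demo.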
And if I do kubectl get edgedevices, it shows a different device popping up: the EdgeDevice sensor. Then kubectl get pods shows the deviceShifu is up and running, on the Raspberry Pi node, which is the intended one. Now let's check its abilities. First, we get the NodePort of the service: it's 31561. I'll copy it, paste it, and open the browser, and we have the sensor value. As you can see, we have displacement and voltage, and each time we refresh we get a somewhat different value, because it's analog and won't stay exactly the same. But it pretty much stays at 5.7 here, with 0.02 for the voltage, and that is because we haven't moved the sensor itself. So let's say we pull the string out a bit and hold it; when we refresh, it goes up to 119 now. And if we slowly retract it, we can see it in real time: retract, refresh, and it ends up back around 5 or 6, the resting value. All right, so that concludes the second part of our demo. Any questions so far? No questions yet, I guess. Yeah, you can continue. Cool. So let's continue to part three of our demo. One might ask: what can we do with these connected devices? One thing about Shifu, as I mentioned before, is that we abstract devices into a set of APIs. For example, for the IP camera we abstract it into /capture and /stream, and for the displacement sensor into /sensor. That makes app development really easy: whether I'm developing something for an industrial scenario or just for my own purposes, it's just like developing a web app. So in part three of the demo, I'm going to build an application on top of Shifu.
Since everything is containerized and everything is exposed through pods and services, I can just build on top of it. So here's what I'm going to build, and here's the requirement: when the displacement sensor detects displacement beyond a certain threshold, the app should log it and use the camera to take a picture of the object. This sort of mimics a surveillance or door-cam situation. For example, if you put this displacement sensor on your gate, then when the gate opens, you take a picture and see who's coming. That's basically what we're trying to do here. To do that, we have to write an application. I have a simple Go application using only standard Go packages, nothing more than that. First, I declare a few constants for the sensor URL and the camera URL. I'm using the service names directly, since this is Kubernetes native and Kubernetes takes care of all the DNS, so I can use the names directly. The sensor URL points to /sensor, and for the camera URL, since I'm going to capture a picture from it, I use the /capture endpoint. We also have a poll interval of 0.5 seconds. For the sensor data, we have a struct that corresponds directly to the JSON the driver returns, so we can unmarshal into the struct and check its value. In the main loop of this application we have startPolling, which is the main logic: every 0.5 seconds we fetch the sensor data, and if the displacement is greater than 200, that is 200 millimeters, we log it (we can see this in the pod logs), then capture an image and save it to disk. Getting the sensor data is just a simple HTTP GET request, nothing more than that; we unmarshal it into our data struct, and that's it.
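The core of the polling logic can be sketched like this. A self-contained sketch, not the presenter's code: the JSON field names and the 200 mm threshold come from the talk, and a local httptest server stands in for the deviceShifu service so the sketch runs anywhere.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// SensorData mirrors the JSON the displacement-sensor driver returns.
type SensorData struct {
	Displacement float64 `json:"displacement"`
	Voltage      float64 `json:"voltage"`
}

const threshold = 200.0 // millimeters, per the demo requirement

// fetchSensor is the plain HTTP GET + unmarshal described in the talk.
func fetchSensor(url string) (SensorData, error) {
	var d SensorData
	resp, err := http.Get(url)
	if err != nil {
		return d, err
	}
	defer resp.Body.Close()
	err = json.NewDecoder(resp.Body).Decode(&d)
	return d, err
}

// exceeds reports whether a reading should trigger a camera capture.
func exceeds(d SensorData) bool { return d.Displacement > threshold }

func main() {
	// Stand-in for http://deviceshifu-sensor/sensor inside the cluster.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"displacement": 250.0, "voltage": 1.0}`)
	}))
	defer srv.Close()

	d, err := fetchSensor(srv.URL)
	if err != nil {
		panic(err)
	}
	// A 250 mm reading is over the 200 mm threshold, so this would
	// log and trigger a /capture call in the real app.
	fmt.Println(exceeds(d))
}
```

In the real app, this check sits inside a loop ticking every 0.5 seconds against the in-cluster service name instead of the test server.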
Capturing an image is similar: we do an HTTP GET request to the camera URL and save the response as a file inside the container, for demo purposes; the filename is a timestamp, year, month, day, down to the second. That's it. The other two handler functions I have here are to give you a better visual. I have a /images endpoint that lists all the currently captured images, and an endpoint that serves each individual image, so if I want to download one, I can click on the image and download it. Both endpoints are just wrapped in basic HTML, nothing more. All right, so now let's continue to the deployment part. I'll go back to the host machine and build the app: docker build with a tag, v1, and off it goes. This builds the application, then we save it into a tar.gz, and since I'm running on Multipass, I do multipass transfer to move the app archive onto the host. Back on the host, I do an import, which imports the image into the local container registry. Lastly, we deploy the app: just kubectl apply -f app.yaml. To show you the app manifest, it's nothing more than a deployment and a service: the deployment uses the app image, and the service is a NodePort service. That's it. Okay, then let's check where the application got deployed; it landed on the k3s node, just like that. The NodePort of the service is 32320, so this host plus /images will be our URL. Let's check the app to see if we're correct: yes, /images it is. All right, let's go back to the browser.
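The build-and-ship loop just described might look like this. A sketch: the image name, tar filename, and VM name are placeholders, and the import step assumes k3s's bundled containerd.

```shell
# On the development machine: build and export the app image.
docker build -t shifu-demo-app:v1 .
docker save shifu-demo-app:v1 -o app.tar

# Copy the image archive into the Multipass VM running the worker node.
multipass transfer app.tar k3s-vm:

# Inside the VM: import the image into k3s's containerd image store.
sudo k3s ctr images import app.tar

# Finally, deploy the app (a Deployment plus a NodePort Service).
kubectl apply -f app.yaml
```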
All right, as you can see, we have a blank screen, since the application is running but hasn't captured anything, because I haven't moved the displacement sensor. So let's do a bunch of pulls here, well beyond the threshold, and we should get some images. Let me refresh: see, we captured four images here. We can view individual ones and check them out, see who's coming. And just to show this live: a thumbs up, pull it out, and it should capture your picture here too. All right, got it. Okay, so that is how you build an application with your physical devices. Any questions? Yes, we have a question, I guess. The question is: how does Shifu position itself in today's edge computing landscape, with things such as LF Edge, Azure IoT Edge, AWS Greengrass? Yes, so the difference between our solution, Shifu, and the others is that we are actually cloud agnostic, and we provide a framework itself; we don't provide anything cloud-provider specific. We are cloud agnostic, protocol agnostic, and it's all microservices. And speaking of LF Edge, we are actually under LF Edge: not only are we a CNCF landscape project, we are also an LF Edge project. So that's that. As for AWS Greengrass, AWS IoT Core, Azure IoT Hub, and the like, we can actually work in combination: remember the TelemetryService we talked about that works with the deviceShifu; it can push to Azure and AWS too. So we can work together. Okay, great. So I guess there is no question left, but something is mentioned in the chat, like, there is KubeEdge that resolves... yeah, go ahead. No, no, I have no question. Okay, so just a little bit: KubeEdge is more of another Kubernetes distribution, basically.
So it has some IoT integration, but like I said, if you want to integrate a device with a proprietary driver, you can't do that with existing solutions, or you have to do it the ad hoc way; there's nothing that lets you just deploy devices as Kubernetes resources. Okay, so I guess part of that question was missed. This is the question he actually asked: there is KubeEdge that resolves intermittent connectivity and pod eviction, so how does your solution tackle this? Yeah, so for that, since we're running on top of Kubernetes, that can be handled by Kubernetes; we don't really tamper with it. But we do have our own mechanism that moves pods to different hosts. Say, for example, I have a device connected to a Raspberry Pi, and the Raspberry Pi fails; we try to detect whether the device is reachable from other edge nodes, and if it is, we move the pod to that specific node. But pod eviction itself is all handled by Kubernetes; since we're Kubernetes native, we can take advantage of that. All right? Yeah, no questions left. Okay, cool. So we have four demos in this session, and the last part is something I would like to show you that is more towards AI, to show how Shifu relates to AI. We have an open source project called the Shifu plugin for ChatGPT. This basically allows you to control your devices from ChatGPT directly. I'm going to show you how we achieve that. All right, let's get into the demo. For step four, we have this architecture on our existing cluster: we have all the devices connected, and now I'm going to add a new service to it, the ChatGPT plugin service. Then I'm going to go back to ChatGPT and have GPT get data from those devices and render it, in real time. That is the demo.
And in order to do that, first we clone the repository, the one I'm showing here. Then we modify a few things. Since this demo repo was originally only for a camera, and now we have two devices, we update a couple of fields. First of all, we modify main.py with the actual IPs and ports: the RTSP camera is on 30196, so we update that, and the sensor is on 31561, so we update that. That's the first step. For the second part, we add the sensor API: since I'm adding a new API to the ChatGPT plugin service, I declare it here. The sensor handler basically routes to the sensor URL, /sensor, and returns the data in JSON format. Next, we update the OpenAPI YAML; that's this file here. We modify two parts. First is the title: previously it was just "IP camera control API," and now we add sensor reading to it. Then the description: previously it was camera-only, and now we also add that it allows us to read values from a displacement sensor. For the API specification, we add /sensor and give it the schema: if you go into the OpenAPI YAML file, we add the sensor path here, and at the bottom we have the SensorInfo schema. Then we can re-import the plugin in ChatGPT; I'm going to demonstrate this live. So let's start the Python server for the ChatGPT plugin service, run main.py, and the service is running on localhost:3333. I'm going to input this into ChatGPT. All right, let's open up ChatGPT, go into the plugin store, hit "Develop your own plugin," and give it the URL. It then makes a request for the manifest file.
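The /sensor addition to the plugin's OpenAPI YAML might look roughly like this. A sketch: the operationId, descriptions, and the SensorInfo field names are assumptions mirroring the driver's JSON, not the repo's exact contents.

```yaml
paths:
  /sensor:
    get:
      operationId: getSensorReading
      summary: Read the current value from the displacement sensor
      responses:
        "200":
          description: Current displacement and voltage
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/SensorInfo"
components:
  schemas:
    SensorInfo:
      type: object
      properties:
        displacement:
          type: number
          description: Displacement in millimeters
        voltage:
          type: number
          description: Raw ADC voltage
```

ChatGPT uses the summary and schema to decide when to call this endpoint, which is why the descriptions matter as much as the path itself.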
So here we have the plugin for Shifu and the OpenAPI URL. For the OpenAPI spec, it reads from the file we just updated: "IP camera and sensor reading," an API to access the IP camera that also allows us to read values from a displacement sensor. And we have the sensor API and the sensor schema at the bottom. Yep. Then we hit "Install localhost plugin." All right. To make this work, we have to enable not only the Shifu plugin but also WebPilot; the WebPilot plugin allows us to render, say, images right in the chat window. So that's it, that's all you have to do. After that, all we have to do is use natural language. For example, I can say, "What is the camera seeing right now?" ChatGPT is able to figure out on its own which device and which API to call; since I'm asking what it's seeing right now, it directly calls the camera API. And just to make sure this is in real time, we can regenerate the response. Okay, something happened; let's see if our server is still up. How about "What is the sensor reading?" It should read from the displacement sensor, and yeah, it shows the displacement with its unit, and the voltage. And if we go back to the camera: now, if I pull out the string, it should give us a new image. Maybe it's not working for the camera... oh, it is. As you can see, I pull out the string here, and if I ask, "What is the sensor reading right now?", we get a new reading with the latest value after retracting: a displacement and a new voltage. All right, so that is how you can bring AI into your IoT system, similar to what we call artificial intelligence embodiment. That concludes the last part of my demo; let's go back to the slides.
Any questions? Yes, we have a question: what LLM does your ChatGPT plugin use, and is there a cost for using ChatGPT plugins? I think ChatGPT plugins are available to general users right now, but in order to develop and install a plugin like the one for Shifu, you had to join the waitlist. The plugin itself, I think, is available. If we start a new chat with GPT-3.5... okay, I see. So for plugins you have to use GPT-4, and that is only for the Plus version of ChatGPT for now. So that answers the question. Okay, I would like to add a few questions, because I think that demo was the end of your session, right? Okay, so why did you actually build Shifu? What was the purpose of it? Yeah, so the reason we built Shifu is that, first of all, we are developers, so we understand that integrating IoT devices is such a pain; it's such a pain to build software around physical devices. The more devices you have, the more silos you introduce, because usually, for industrial or even home IoT devices, each device has its own ecosystem. For example, if I purchase a device from vendor A and then a device from vendor B, I end up with two systems, and I usually have to download two apps to control them. That's why we built Shifu: to solve this the edge and cloud native way. Okay, that was awesome. So, yeah, can you show us some case studies? Yeah, we actually have some on our website. For example, here's a case study of what Shifu actually looks like in production. This is one of our customers, one of our users. They have an Industry 4.0 lab; it's a biology and synthetic-biology lab, where they synthesize bacteria or enzymes, so they have different pieces of equipment.
So before, if they wanted to, say, add a dispenser to their software, they had to change the software, write the driver, and maybe reboot everything for it to work. But with Shifu, adding a device is just like adding another pod, and then you expose it as a service. All you have to do is update your business logic, and that's it: you don't have to change your IT infrastructure, and it allows you to focus on your own business logic instead of worrying about the infrastructure. Yeah, that was really cool, everything running as pods. Okay, so another question I would like to add is: how do you solve the IoT interoperability problem? Yeah, so as I mentioned before, we virtualize devices into pods, and this is actually what solves the IoT interoperability problem. Each device has a standalone service, so devices do not affect each other, and we can deploy devices just like deploying our applications. Once we have a device driver, or the deviceShifu, we can reuse it in the future. Say in my second lab I only have the robot arm and the automated guided vehicle and not the others: I can do that. And if I have a new device, I just develop a driver for it, integrate it with a deviceShifu, and deploy that for my new use case. That way we solve the interoperability problem once and for all. Okay, great, that's awesome. So I think there are some questions coming up from the audience as well. Like: can you talk about the scalability of Shifu? So the scalability of Shifu is that, since we're... sorry, what was the question? Can you talk about the scalability of Shifu? And you can see in the brackets he has mentioned: target number of devices. Let me go back to the view.
Let's see if I can go back to the question. Okay, so the question is which one? Where is it? Oh, I see. So the scalability of Shifu actually depends on your cluster. Since we virtualize everything into services, it just depends on your cluster; that is one advantage you get from Kubernetes. The more nodes you have, the more computing power you have, and the more pods you can handle in your cluster. The limitation is really just computing power. Yeah. Okay. So, yeah, there is another question coming up: he's interested to know, is Shifu an all-new kind of service that never existed before, or is it an advancement over something? Yeah, Shifu is a brand new architecture, a brand new framework, but the things Shifu uses are all existing: Kubernetes, edge computing, the cloud native ecosystem, that's all existing. The architecture itself, virtualizing devices into pods and abstracting a device's abilities into a set of APIs, is what Shifu introduces. Shifu doesn't add anything foreign to the Kubernetes cluster; you just install it with one command, that's it. Okay, yeah, great. So I guess there is no question in the chat right now. Okay, so something I would like to ask from a contributor's perspective: is there any scope to contribute to the repository, or any way to work with your team as a contributor? Yeah, we welcome contributors. We open sourced this project about a year ago, and we have over a thousand stars. If you want to see a feature implemented in Shifu, feel free to create an issue for it, or if you see open issues requesting help, you can comment and we can assign them to you. So, you're welcome to contribute. Oh, awesome. That's actually awesome, because open source is something we're looking for here. Right, yes. So, yeah, I think you also mentioned this, right?
Where can we start learning more about Shifu? I guess it would be awesome if you could just show us on the screen, right? Yeah, I'll show it on the screen. The first thing is to check out the repository; it's called Edgenesis/shifu. Then we have a documentation site called shifu.dev, and there we have all of our architecture docs and all of our development materials that can help you onboard. We also have a quick demo for you to try, so you don't have to have a Kubernetes cluster. All you need is Docker, and you can deploy it with just one line of command. You can start from there. Since we deal with physical devices, we actually provide eight demo devices for now. Those are mock devices using real protocols, say the MQTT protocol, or socket, or OPC UA, that give you a basic understanding of how Shifu works and how Shifu abstracts different protocols into HTTP APIs. Okay, so I guess this is what you mentioned. I think you said Edgenesis slash shifu; is that the right one? Yeah, Edgenesis/shifu is our GitHub repository for this project. Okay, great, yeah. I hope this query has been resolved as well. Okay, so if there's anything you'd like to add, you can add it here. Let's wait a minute or two to see whether the audience has any more questions. So if you like the project, please give it a star, and if you have any questions, feel free to email me or reply in an issue. If you have any questions regarding more use cases, feel free to visit our website; it's edgenesis.com. We have different case studies, and different use cases and tech blogs. So for anything tech related, go to shifu.dev; for anything use case related, you can go to edgenesis.com.
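Because every device ends up behind plain HTTP, application code stays protocol-agnostic. A minimal Python sketch of calling such an endpoint from inside the cluster, assuming a service named deviceshifu-thermometer exposing a read_value instruction (both names are illustrative, not real endpoints):

```python
# Sketch of calling a deviceShifu's HTTP API from inside the cluster.
# The service and instruction names used here are illustrative.
import urllib.request


def device_url(service: str, instruction: str,
               namespace: str = "deviceshifu") -> str:
    """Build the in-cluster URL for a deviceShifu instruction endpoint."""
    return f"http://{service}.{namespace}.svc.cluster.local/{instruction}"


def read_device(service: str, instruction: str, timeout: float = 5.0) -> str:
    """GET the instruction endpoint and return the raw response body."""
    with urllib.request.urlopen(device_url(service, instruction),
                                timeout=timeout) as resp:
        return resp.read().decode()


# Inside a cluster this would look like:
#   read_device("deviceshifu-thermometer", "read_value")
```

The application never sees MQTT, sockets, or OPC UA; swapping the underlying device driver leaves this calling code untouched.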
And then we have this Shifu plugin for ChatGPT repo here too. It's open sourced; feel free to leave comments or create issues for that as well. And we have detailed guides on how you can set this up after you integrate a device into Shifu. Okay, so there is a question popping up. The question is: can I install Shifu on any cluster, like AKS or OpenShift? Yeah, you can. We only require a Kubernetes cluster, whichever it is: MicroK8s, k3s, or minikube; we work on those too. And by the way, Shifu is an official add-on for MicroK8s, so you can just run microk8s enable shifu, and that installs Shifu right there with that one command. We're part of the Canonical official add-ons. It's been mentioned here, MicroShift, is it MicroShift? So we support the different distributions: MicroK8s is one of them; then we have k3s, the one from Rancher; and we also have minikube. And you can install it on kind as well. So you can install it on top of all of those distributions, or on a full Kubernetes cluster; that works too. Okay, great. So here's one that's not related to the tech: are the slides available to attendees? Yeah, I will be uploading everything, including the source code and all the setup guides, onto GitHub. We'll open source this repository too, yeah. Okay, great, so anyone looking for that can follow it there. Okay, so there's another question popping up: is there any support or plan for inter-cluster interoperability? Say, k3s at the edge and a twin Shifu in the cloud? Yeah, exactly. So what we have here is not inter-cluster but inter-node. But as long as you have network connectivity, you can have multiple clusters.
Let's say you're using Rancher for that: you can have a cluster in the cloud, and then you can have a standalone cluster at the edge. When network connectivity exists, you can actually pull data directly from the edge cluster. Okay, great. So I hope there are no questions left now, so we can end the session if there's nothing else to mention. Okay, I think we can end the session, right, Tom? Yeah. Okay, great. It was really nice having you on this session; hope to see you again. Thank you so much, Tom. Thank you. Let me take it to the backstage. Yeah, thank you so much. Okay, so thanks everyone for joining the latest episode of Cloud Native Live. We enjoyed the interaction and questions from the audience. That was it for today, and we hope to see you again in the next live session. Till then!