Good afternoon, everybody. I'm delighted to welcome you to this panel on edge IoT framework flavors, and we'd like you to walk away able to choose your flavor based on the merits of each of these projects. I'm a principal engineer at Intel who has worked on open source solutions for machine learning, IoT, edge, and cloud.

Our wonderful panelists today: Yin Ding (hi, everyone!), a senior architect at Pure Storage and one of the founders of KubeEdge, who brings cloud, container, and virtualization expertise. Next is Anya, who works on Kubernetes, kubeadm, and K3s, and who has been analyzing the resource utilization of these solutions using a benchmarking tool called KBench. We have Fei, a senior staff engineer at Alibaba on the container service team, focused on automation, multi-tenancy, and edge computing; he is an OpenYurt developer. Last but not least, we have Itohan, a software engineer at Salesforce with deep expertise in distributed systems design, development, and test. Today she'll be representing K3s.

Our first question to the panelists: what is the chief design principle of your project, when did you launch it, and what is the size of your community? Fei, would you please go first?

Sure. When we developed OpenYurt, our design principle was to reuse Kubernetes as much as possible to solve our edge use cases, using Kubernetes essentially as it is. It was open sourced in May last year, we donated it to CNCF, and it is now a CNCF sandbox project. Because we started the project not long ago, the community is still fairly small, about 40 contributors, but we are working very hard to expand it.

Yin, would you please go next?

Yeah, sure. I work on KubeEdge. The motivation behind this project is to simplify edge application deployment and lifecycle management using the native Kubernetes API. The design solves the networking problem between the cloud and the edge, manages edge nodes and applications on behalf of developers, and lets edge applications run autonomously on the edge node. We open sourced KubeEdge in November 2018 and donated it to CNCF in March 2019; we entered as a CNCF sandbox project in May 2019 and moved from sandbox into incubation in September 2020, so currently we are a CNCF incubating project. Since we launched KubeEdge we have built a good community: more than 4,000 GitHub stars, more than 1,000 forks, more than 300 contributors from many different organizations, and more than 20 industry adopters. It's a very organically healthy community now.

Thank you. Itohan?

Yes, K3s was launched in 2019, and the main design principle is to be a Kubernetes distribution. Just as a Linux distribution is still Linux, K3s is still Kubernetes, but it was designed for the edge. It's meant to be lighter weight: you can do everything you can do on regular Kubernetes, but it's quicker, it's not as heavy, and it's easy to run. You can spin up a cluster in minutes, less than ten minutes really.

Awesome. So Kubernetes was brought to life with the idea of declaratively managing containerized applications.
It's basically a container orchestrator, and it was open sourced in 2014. It's been hugely successful; I believe it's actually one of the most popular open source projects out there, with over 3,000 contributors on GitHub.

You have the largest clout here. Oh, and I missed the number of contributors for K3s earlier; there were about 40 contributors on K3s, so you can go head to head with Fei.

Okay, so for our next question, let's think about an edge IoT application and, in that context, how each of your solutions would work with it. Suppose I had a surveillance app: I just want to monitor traffic, cars coming and going, or which cars, and I want to deploy it at the level of every traffic intersection across a state as big as Texas. With that in mind, could each of you tell me how your solution would work? Fei, would you please go first?

Sure. First of all, because OpenYurt is just a Kubernetes extension, it doesn't break API compatibility, so for all the management you can leverage traditional Kubernetes workloads to manage the application, just like people usually do. But for cases like this, we introduce another level of abstraction, which we call node pools: you can group nodes that sit physically close together into a node pool and deploy your application to that node pool. That can solve some of the scalability problem. Other than that, users can just choose whatever way they prefer to run those applications on those edge nodes.

So Fei, do you see them having multiple clusters and orchestrating this traffic application across multiple edge clusters, or a single large cluster, especially given that Kubernetes has something like a 5,000-node limit?

Yeah, I think if you go to the scale you mentioned, multi-cluster is probably the right direction. But for now, OpenYurt doesn't target that; it's still a single Kubernetes control plane. We do intend to grow the number of nodes supported, but that is something we are trying to resolve in the future, not for now.

Thank you. And for KubeEdge?

It's easy to deploy an edge application. For your case, KubeEdge is fully compatible with native Kubernetes APIs, so the deployment is no different from a native cloud application: you just use kubectl to create or apply it. The only difference is that you enable the nodes you want to deploy to, and then you use the normal method, a node selector or other selectors, to choose where the edge application should run. From the developer's or the operations engineer's point of view, there is no difference between deploying a cloud application and an edge application. As for multi-cluster, one of our successful industry adopters actually has 50,000 nodes, which is way over the limit of a single cluster. What they do is leverage a commercial product derived from the KubeFed project, and you could use other tools as well, because KubeEdge doesn't touch the Kubernetes control plane; we just extend what runs on the edge node. So users can pick any multi-cluster Kubernetes control plane and KubeEdge should work with it. I cannot say there are zero changes, but the changes are very small; in most cases it should be seamlessly compatible with a multi-cluster Kubernetes control plane.
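Since both panelists stress that the native Kubernetes API is preserved, the surveillance workload above could be targeted at edge nodes with ordinary labels and a nodeSelector. Below is a minimal sketch using the official Kubernetes Python client; the node name, label key, image, and namespace are purely illustrative assumptions, not anything prescribed by KubeEdge or OpenYurt.

```python
# Sketch: label an edge node, then deploy a hypothetical traffic-monitoring
# workload only onto nodes carrying that label (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

# 1. Mark an edge node with an assumed, illustrative label.
core.patch_node(
    "intersection-node-01",
    {"metadata": {"labels": {"node-role.example.com/edge": "true"}}},
)

# 2. A plain Deployment whose nodeSelector pins pods to the labeled edge nodes.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="traffic-monitor"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "traffic-monitor"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "traffic-monitor"}),
            spec=client.V1PodSpec(
                node_selector={"node-role.example.com/edge": "true"},
                containers=[client.V1Container(
                    name="monitor",
                    image="example.com/traffic-monitor:latest",  # hypothetical image
                )],
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The same manifest could just as well be written as YAML and applied with kubectl; the point is simply that targeting edge nodes reuses the stock label-and-selector mechanics rather than a separate edge API.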
Cool. So it depends on scale. I know you said it's a large scale, but for a smaller scale you could use a single cluster and use something like DaemonSets and kubectl apply. For a larger scale, though, I could see myself using a single-node cluster at each location and a commercial multi-cloud orchestrator, maybe something like Cloudify, to manage the different single-node clusters.

Oh, excellent. Thank you. Anya?

It's going to be very similar to K3s. In this case it's Kubernetes, so for starters I'm going to bring up my cluster using kubeadm as opposed to a different solution. Then it depends: if the scale of the surveillance application is that of a data center, then yes, you can deploy it using existing Kubernetes components like DaemonSets. But obviously we're talking about a larger scale here, so you'll probably need something else to manage the control planes; you need an orchestrator for the orchestrator, which is Kubernetes, so you'll probably need something else on top.

Cool. We do want these apps running at scale, and that's the thing about edge: low latency and not having to send all your data everywhere. It looks like we have time to get a few more questions in, so I'm going to sneak some in. Anya, you had mentioned that you work on KBench. Could you please tell us what KBench is and what kind of resource utilization you see with K3s versus K8s, that kind of thing?

Yeah. KBench is an open source framework used to benchmark the performance of Kubernetes clusters. Out of the box you get metrics like pod API call latency, deployment API call latency, and service API call latency. It basically lets you know how long your API calls take, so you can compare one cluster to another with that information. It's a Kubernetes benchmarking tool that I actually enjoyed using.

So is there any difference in these latencies when you compare K3s against K8s, keeping all things equal: the same hardware, the same networking, the same number of nodes?

Yes, you do see differences, but they're not uniform across the board. With the pod API call latency metric inside KBench, when you create a pod directly using the pod operations afforded to you by kubectl, you could actually create the pod quicker on K3s, while on K8s the same operation is not as quick. But when you use a Deployment to create the pods, the reverse could potentially be the case. So it's not uniform.

Okay, so it's not consistent; it depends on the operation.
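To make the metric Anya describes concrete, the sketch below times a single pod creation by hand: it records when the create call is issued and how long the pod takes to report the Running phase. This is not KBench code, just a rough approximation of what a "pod API call latency" number captures; the pod name, image, and namespace are illustrative assumptions.

```python
# Rough illustration of measuring pod creation latency (not KBench itself).
import time
from kubernetes import client, config, watch

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="latency-probe"),
    spec=client.V1PodSpec(containers=[client.V1Container(
        name="probe", image="busybox", command=["sleep", "3600"])]),
)

start = time.monotonic()
core.create_namespaced_pod(namespace="default", body=pod)

# Watch pod events until the probe pod reports Running, then print the elapsed time.
w = watch.Watch()
for event in w.stream(core.list_namespaced_pod, namespace="default", timeout_seconds=120):
    obj = event["object"]
    if obj.metadata.name == "latency-probe" and obj.status.phase == "Running":
        print(f"pod Running after {time.monotonic() - start:.2f}s")
        w.stop()
```

Running the same probe against a K3s cluster and a kubeadm-built cluster on identical hardware is the spirit of the comparison discussed here, though a real benchmark repeats the operation many times and reports distributions rather than a single sample.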
So that also brings us to a different question of what matters to the potential user who is trying to choose, which is really the point of this panel: to help you choose your flavor. In all your experiences with IoT edge applications, do you see these pods coming and going, changing a lot, or are edge IoT pods typically stable?

For KubeEdge, in my use cases they are normally stable. When people deploy an application to an edge node, it typically connects to the sensors, say for temperature, and then it stays there, reports the status or does some data aggregation, and reports back to the central cloud. The main thing we worry about, and maybe you were going to ask this, is that the application may crash. One important thing is that these edge applications can run autonomously and independently: even if the worker node is not connected and you cannot report your data back to the central cloud, it should still work fine. That's our design principle, so we derived our edge component from the kubelet to make sure we can manage the edge application lifecycle on the edge side. We need to make sure pods auto-restart, and we also handle OTA updates and rolling updates the same way Kubernetes does. If the audience wants to learn more detail, they can contact me or the community, and they are welcome to look at our code.

Yeah, very similar on this point. OpenYurt solves the same classic edge autonomy problem that KubeEdge does: we cache data on the local node. In terms of churn rate, most of our use cases use Deployments, so you can treat them as long-running workloads. We do have use cases for AI-type and job-type workloads, but I would say the churn rate is still low. From the implementation perspective, the trick we play is a proxy on the node for the traffic between the kubelet and the API server, so you can imagine there being one or two more hops, which may influence latency, but I would say people normally won't be aware of that impact.

Yeah, and like both of you said, there's not much churn in the pods compared to a normal data center, so things are pretty stable and the latency to launch a pod is not that significant. Awesome. My next question is: what are you each excited about in terms of features or development happening on your project roadmap? Fei, would you please go first?

Oh yeah, sure. On the OpenYurt side, in short, we have two main ongoing efforts. The first is that OpenYurt doesn't come with native device management; instead of designing our own way of managing devices, we are integrating with another open source project called EdgeX Foundry, which specializes in managing devices at the edge. We are working on integrating the two systems: the design has been done, and we plan to have all the interfaces done this quarter. That's number one. The second problem we want to resolve is taking edge node autonomy to the next level: instead of only handling restarts on a single node, we want to handle the case where a node is dead. How do you restart the pods that were originally running on that node onto other nodes? That basically brings a certain kind of scheduling capability to the edge side. We're still working on the design, and I hope we'll have a solution ready in the coming year.

Awesome, because you really don't want to have to roll a truck out to that edge; it drives up the cost of IoT. Cool. And I'm super excited you're integrating with EdgeX Foundry, because I've worked on it.
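Both KubeEdge and OpenYurt made the same autonomy point above: the edge node keeps a local copy of the state it needs, so workloads keep running even when the link to the cloud drops. The snippet below is only a conceptual sketch of that fallback pattern under assumed names (cache path, deployment name, namespace); it is not how either project actually implements autonomy.

```python
# Conceptual sketch of edge autonomy: prefer fresh desired state from the API
# server, but fall back to the last locally cached copy when the cloud is
# unreachable.
import json
import pathlib

from kubernetes import client, config

CACHE_FILE = pathlib.Path("/var/lib/edge-cache/traffic-monitor.json")  # assumed path

def desired_replicas() -> int:
    try:
        config.load_kube_config()
        apps = client.AppsV1Api()
        dep = apps.read_namespaced_deployment("traffic-monitor", "default")
        state = {"replicas": dep.spec.replicas}
        CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
        CACHE_FILE.write_text(json.dumps(state))   # refresh the local cache
        return state["replicas"]
    except Exception:
        # Cloud unreachable: keep operating from the last known desired state.
        return json.loads(CACHE_FILE.read_text())["replicas"]

print("desired replicas:", desired_replicas())
```

In KubeEdge this kind of caching is handled by the on-node components derived from the kubelet, and in OpenYurt by the proxy that sits between the kubelet and the API server; the sketch only illustrates the general idea of serving cached state during a disconnection.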
Yin, would you please go next?

Yeah, it's a little different from OpenYurt. From the very beginning we fully integrated with IoT applications. We have a device controller, which is a CRD controller, so you can control a device from the central cloud using device shadows, or digital twins. We have built-in MQTT and other protocols, other community contributors have contributed more protocols, and we are also looking at potential collaboration with EdgeX so we can integrate with that community and get even more support. In our roadmap, one of the important features we are looking at is strengthening, I mean tightening, security at the edge. Currently we have registration certificates, but we think that in the future, because edge nodes are naturally distributed out in the world on the remote side, they are much easier to attack than a normal cloud node, so we need to look at stronger edge node protections. That's one. The other is that currently KubeEdge treats each edge node individually: each node runs as a worker node and only talks to the control plane. However, when a couple or a few edge nodes are grouped together, we are looking at how we can cluster them to get better resource utilization and HA solutions. Another thing I want to share is that our community is growing bigger and bigger. We have set up a few SIGs, an IoT SIG, an AI SIG, and so on, so they tackle different problems. As I think Fei mentioned, an AI application could be batch processing, and the common questions are how you handle the datasets and the models. So the SIGs have different focuses, but the IoT SIG is more focused on working with IoT devices, so more protocols and more use cases. We are really excited, so if you're interested, please join our community and take part in these discussions.

Also, you know, there's this project I was working with at Stanford; they're looking at distributed energy resources, and their security is so important, because if you have excess power from your photovoltaics you want to give it back to the grid, and you do not want to disrupt your power supply. So I'm excited that you're working on more security, and I'll connect you with them later. Thank you.

Yeah, so currently K3s is just for Linux, and I know there's some work planned to make it compatible with Windows, so it's possible to install it on Windows. Another thing: currently when you install K3s, the default CNI, the container network interface, is Flannel, and there's work being done to support other CNI plugins as well. And one more thing I want to add: K3s currently uses SQLite as the database, but some experimental work is being done to see if it's possible to migrate your data from SQLite to etcd.

Cool. For Kubernetes, since its purview is basically the cloud, it would be nice to see resource consumption reduction work done in the community, so that it better supports the edge use case; in particular, reducing resource consumption. So yeah, it's more that it would be nice to see that work actually being done.

Maybe we can drive that as we start looking at more edge applications. Thank you so much, everyone, for sharing what your roadmaps are bringing; I'm sure our audience is super excited to hear that.
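As a small aside to Yin's earlier point about built-in MQTT support and device twins: on the device side, reporting state usually just means publishing a message to a broker topic that the edge components subscribe to. The snippet below is a generic paho-mqtt sketch of that idea; the broker address, topic name, and payload shape are assumptions for illustration, not KubeEdge's actual device-twin protocol.

```python
# Generic sketch of a device reporting its state over MQTT (pip install paho-mqtt).
import json
import paho.mqtt.publish as publish

reading = {"device": "camera-01", "status": "online", "vehicles_last_minute": 42}

publish.single(
    topic="devices/camera-01/state",   # hypothetical topic layout
    payload=json.dumps(reading),
    hostname="127.0.0.1",              # assumed: MQTT broker running on the edge node
    port=1883,
    qos=1,
)
```

Whatever the exact topic scheme, the idea behind the digital-twin approach Yin describes is that the cloud works against a representation of the device, while the protocol details stay local to the edge.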
So to wrap up, I'd like to say that Kubernetes is successful, and that's why everybody is choosing its API. It's super nice to see that K3s, OpenYurt, and KubeEdge are all leveraging that popular API and bringing Kubernetes to the edge. I especially appreciate that KubeEdge and OpenYurt recognize that we can't assume strong, continuous network connectivity, so both of your designs take into consideration that we might lose connectivity and still need to operate autonomously. I'm also very excited that you have thoughts around how we handle device profiles at the edge, whether through digital twins or through another project. I hope all of you got a sense of which flavor you'd like to select. Thank you so much for attending, and we'll be open for a few questions. Thank you.