Hi, everyone. Thank you for coming. Let's get started; we are right on time. First, a bit about myself: my name is Yin Ding, from Google, and I am a maintainer of KubeEdge. Today we are talking about KubeEdge. I will give a few updates about the community, security, and the scalability test. This is our agenda. We have about 35 minutes; I will try to finish in 25 to 30 minutes and leave the rest for questions. I will give a brief introduction of the project, then the community update, the security update, and the performance and scalability test. Finally, we will talk a bit about the roadmap and community building.

First, edge computing. I will keep the introduction brief, because hopefully everybody here is already an expert. There are different kinds of edge. The near edge is very close to the devices and the users. Then we have the city edge and the regional edge, and finally we reach the central cloud. Each edge has a typical location and typical workloads. At the near edge we do a lot of AI inference and AR/VR. At the regional level the typical workloads are CDN and AR/VR, and not only rendering but also video transcoding. In the cloud we assume we have effectively unlimited computing power, so most of the training is done in the cloud, using the computing power of the central cloud. However, in KubeEdge we have our edge AI SIG and the Sedna sub-project, which do collaborative or federated learning. That means training does not only happen in the cloud but also on the edge side, as we will see.

Here is the KubeEdge architecture. The main part is the cloud part, which is based on the standard Kubernetes control plane. Our system supports mixed deployment of cloud nodes and edge nodes; the edge nodes are shown at the bottom. We have a special component called CloudCore, which acts as the cloud-side gateway to all the edge nodes. On the edge node, the counterpart is EdgeCore. Between them we use a WebSocket long-lived connection, and we also support QUIC, so we have a full-duplex communication channel between the edge and the cloud. Typically an edge node is deployed behind a firewall; through this connection the cloud can still reach and control the edge node directly, so you get a cloud-controlled edge. Within the edge we support CSI, CNI, and CRI; on the CRI side we support not only containerd but all standard container runtimes. We also use the pub/sub broker Mosquitto (MQTT) to connect all the devices.
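To make the cloud-edge channel concrete, here is a minimal Go sketch, not KubeEdge's actual CloudCore/EdgeCore code, of an edge agent that dials out to a cloud endpoint over WebSocket and redials when the link drops, which is how an outbound connection lets the cloud control a node behind a firewall. The endpoint URL, auth header, and message format are assumptions for illustration only.

```go
// Illustrative sketch only: an edge agent dialing out to a cloud hub over
// WebSocket, so the cloud can push messages to a node behind a firewall/NAT.
// The URL, token header, and message format are hypothetical, not KubeEdge's.
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	cloudURL := "wss://cloudcore.example.com:10000/edge" // hypothetical endpoint
	header := http.Header{"Authorization": {"Bearer <node-token>"}}

	for {
		conn, _, err := websocket.DefaultDialer.Dial(cloudURL, header)
		if err != nil {
			log.Printf("dial failed: %v, retrying in 5s", err)
			time.Sleep(5 * time.Second) // tolerate unstable cloud-edge networking
			continue
		}

		// Full-duplex: report edge -> cloud status on one goroutine,
		// read cloud -> edge instructions on the main loop below.
		go func() {
			t := time.NewTicker(15 * time.Second)
			defer t.Stop()
			for range t.C {
				if err := conn.WriteMessage(websocket.TextMessage, []byte(`{"status":"ok"}`)); err != nil {
					return // writer exits; the reader will also fail and trigger a redial
				}
			}
		}()

		for {
			_, msg, err := conn.ReadMessage()
			if err != nil {
				log.Printf("connection lost: %v, redialing", err)
				break // fall through to the outer loop and reconnect
			}
			log.Printf("received instruction from cloud: %s", msg)
		}
		conn.Close()
	}
}
```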
I will not spend too long on the architecture because we have covered it before; I want to spend more time on our updates. So, a quick update on our journey. We launched this project in 2018, donated it to CNCF, and became a sandbox project in March 2019. We graduated to incubation in September 2020. Currently we have more than 5,800 stars, almost 2,000 forks, and you can see the contributors are really diversified: more than 1,000 contributors from more than 80 organizations. We do quarterly releases, and we support not only x86 but also ARM32 and ARM64.

The new update is that after we moved into incubation, we recently created our TSC, the Technical Steering Committee. And on top of the six SIGs that already existed, the new one is SIG Node. The link here shows the proposal and the scope of the SIG. It is very similar to the Kubernetes SIG Node: it mainly focuses on lifecycle management for the edge node, the connections, and anything related to the runtime on the node. It also covers cloud nodes, because we support mixed deployment of cloud nodes and edge nodes. Here you can see that over the last year the contributors were very diversified; 36% were individual contributors. We have consistent code contributions; it is a very active project.

Here is our community. Our community governance follows the CNCF rules. We have set up the TSC, and we have different SIGs and working groups for the different topics and technical communities. We have a few ambassadors who help us with marketing and evangelism. And we have the user group, which is for industry partners to talk about their use cases and the problems they run into when they adopt KubeEdge.

Another good and important update is the training. Because we are a CNCF project, we collaborated with the CNCF and Linux Foundation Training, and we are going to start our training sessions: about 23 to 24 sessions, lasting two or three months. Here is the link describing the proposal and each topic. Thank you to CNCF for supporting us; we are going to have formal training.

For the TSC, we have our charter here; that is new. The current members serve until the next election, in the September-October timeframe of 2024; TSC members serve two-year terms. The members must come from different organizations, and no organization can have more than two TSC members.

Now, the important update: the security update. First is supply chain security. You can see, especially on the container side, that from the moment a developer develops a new feature until the software is released, there are a few stages: source, build, and packaging. And when you build your product, you pull in a lot of dependencies. You can see there are a lot of red marks there; each of these is a potential risk and a potential attack point. So, together with CNCF, we did our audit in June. We are one of the first CNCF projects to reach this level of assurance under SLSA. SLSA is an OpenSSF project; it measures your build, your dependencies, your source, all of those dimensions. It is a cross-industry, cross-organization, vendor-neutral group, and they did a really good audit. They provide an industry-standard, recognized, agreed-upon level of protection and compliance. In the source, build, and common requirement categories we reached level 4 (L4). For the full report, here is the QR code; you can get our full security audit report and see all the dimensions they measured, all the details.
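As one small, everyday example of supply-chain hygiene on the consumer side of that pipeline, here is a minimal Go sketch that verifies a downloaded release artifact against a published SHA-256 checksum before installing it. The artifact name and digest below are placeholders, and this is not part of KubeEdge's release tooling; it just illustrates the kind of integrity check that complements SLSA-style source and build assurances.

```go
// Illustrative sketch only: verify a downloaded release artifact against a
// published SHA-256 checksum before installing it. File name and expected
// digest are hypothetical placeholders.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	const artifact = "keadm-vX.Y.Z-linux-amd64.tar.gz" // hypothetical file name
	const published = "0123456789abcdef..."            // digest from the release page (placeholder)

	f, err := os.Open(artifact)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Hash the whole file and compare with the published digest.
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	if got != published {
		log.Fatalf("checksum mismatch: got %s, want %s -- do not install", got, published)
	}
	fmt.Println("checksum verified:", got)
}
```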
Also, there was one category, provenance, that was not available at that time, so we made improvements and followed up on the security findings after the audit.

Another one: we integrated fuzzing. You can see the news here; we are one of the CNCF projects being fuzzed continuously by OSS-Fuzz. The principle of fuzz testing is that you generate fuzzed inputs, feed them into your system, and see how it behaves, and we are continuously tested this way. I think currently 18 CNCF projects are being fuzzed, and we were one of the first batch.

Another one is the threat modeling and protection analysis. Based on our audit report, we identified the potential attacks; there are one, two, three, four, five different kinds of attack. We do not have time to go through and elaborate on everything, but you can still download the full report; the QR code is on the previous page. Our analysis is posted on GitHub.

Another part, also based on the audit report: we set up our policy and vulnerability management procedure. For the cloud part, we collaborate with the upstream Kubernetes Security Response Committee and listen to the SRC from Kubernetes. For the edge part, our SIG Security maintainers watch for all vulnerability reports and CVEs. When one comes in, we do the validation and assessment, then we work on the fix and apply it for the CVE, and after the embargo period we release the recommendation and publish the security advisory. Basically the process splits into five parts: reporting, confirming, remediation, embargoed (restricted) disclosure, and then public disclosure. During the embargo period our industry partners and user group can take the patch and apply the fix.

Now I am going to talk about the performance test. I covered this topic at KubeCon Europe. Basically what we do is test against our SLOs, service level objectives: we test latency, throughput, scalability, CPU usage, and memory usage. And because of the usage scenarios of KubeEdge, we especially covered the unstable cloud-edge networking part. We are a cloud native edge project, which means the edge-to-cloud connection is different from an intra-data-center connection: it is not stable, it can be disconnected or have long delays, and the bandwidth is restricted.

Before we go deep into that, I will briefly cover the dimensions; this is borrowed from the Kubernetes community. Scalability does not only mean how many nodes or how many pods you can support; there are several dimensions you need to think about. How many secrets can you have? I think it is currently around 3,000 secrets per cluster, and even in Google GKE that number may have been increased; not all vendors support a sufficient number of secrets. Also, how many services and how many backends do you support? There is pods per node, and there is the ingress you support, which is the incoming traffic on the networking side. So you can see there are multiple dimensions you need to measure to reach real scalability; you cannot look at just one, and if you miss one dimension, it will block you from scaling up.
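As a small illustration of how the SLO numbers in the next part are summarized, here is a minimal Go sketch that computes p50/p90/p99 values from a set of measured pod-startup latencies and checks them against a 5-second target. The sample values and the SLO constant are hypothetical stand-ins, not the actual test data or the actual test harness.

```go
// Illustrative sketch only: summarize pod-startup latencies as percentiles
// and check them against an SLO. Sample data is made up.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at percentile p (0-100) of a sorted slice,
// using a simple nearest-rank index.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted)-1) * p / 100.0)
	return sorted[idx]
}

func main() {
	// Hypothetical pod-startup latencies collected by a load-test tool.
	latencies := []time.Duration{
		800 * time.Millisecond, 950 * time.Millisecond, 1200 * time.Millisecond,
		1600 * time.Millisecond, 2100 * time.Millisecond, 4800 * time.Millisecond,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })

	slo := 5 * time.Second // e.g. "99th percentile pod startup under 5 seconds"
	p99 := percentile(latencies, 99)
	fmt.Printf("p50=%v p90=%v p99=%v (SLO %v, met: %v)\n",
		percentile(latencies, 50), percentile(latencies, 90), p99, slo, p99 <= slo)
}
```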
Here is how we did our scalability test. We have ClusterLoader, which generates the load, and load balancing in front to distribute that load. Here is our configuration: we tested 100,000 edge nodes and 1 million pods.

Here are the test results, and here is the visualization; let me show it here, it is more readable. For pod startup, the lower percentiles are in the range of one to two seconds, and the 99th percentile is less than 5 seconds, which meets our SLO of 5,000 milliseconds, basically 5 seconds. For create-to-schedule, the tool shows zero; there is actually a number there, but because the value is sub-second, or even sub-millisecond, the tool cannot display it, so it only shows zero. The 99th percentile is within one second. Schedule-to-run is about 1 second, 1,000 milliseconds. For run-to-watch, the 50th percentile is about one second and the 99th percentile is 2.2 seconds. For schedule-to-watch, it is 1.6 seconds, and the 99th percentile is 30 seconds.

From these test results we have this conclusion: KubeEdge can support 100,000 edge nodes, and at the same time we deployed 1 million pods on the cluster. At KubeCon Europe I only talked about the test setup; at that time we did not have the full report. Now the full report has been released, and you can download it from here.

Here are a few links: our website, GitHub page, Slack channel, meetings, and docs. You can ping us in the Slack channel, through our mailing list, or on Twitter; our maintainers are on those channels. We also have weekly meetings with two time slots: one is more US friendly, at 7 p.m. US Pacific time, and the other is more European friendly, in the European afternoon.

So, yeah, we have 20 minutes here. Cool. Thank you, everyone. Here is the QR code; you can post questions or feedback. And I am open to any questions.

[Audience question] You mean the image hub, the Harbor one, right? Sorry, that one we publish in the doc link, but feel free to let me know; we publish all the references in that documentation link. Yes, let me quickly show you. It is also published on our GitHub page. And something I did not talk about: in the Node group we are going to support Windows containers; that is planned for next year. We do have our hardware support list; that is also posted in our user group, and our industry partners have their user examples published there as well. Let me know if you cannot find it; you can ping me in the Slack channel or on Twitter and I will find it for you. Thank you.