So today we are talking about KubeEdge. KubeEdge is currently a CNCF incubating project, and I will mainly talk about the latest updates. My name is Yin Ding; Kevin cannot make it due to COVID. Both of us are maintainers and founders of this project. Today I'm going to cover the project history, the key features and architecture, the deployment cases and user cases, and mainly the performance and scalability tests, which show how impressive the project is. At the end I will talk a little bit about our future roadmap.

Our journey: we founded this project in 2018 and donated it to CNCF that same year. In March 2019 we entered the CNCF sandbox, and in September 2020 we became a CNCF incubating project. Currently we have more than 5,000 stars on GitHub, more than 1,300 forks, more than 800 contributors, and more than 240 code committers from more than 60 organizations. So it's a well-accepted project, and we really appreciate our contributors.

With this project we mainly try to resolve the cloud-edge connection issues. As people know, latency is one issue, and another is edge autonomy when the network is broken: the edge needs to keep running autonomously. Also, the many IoT devices connected at the edge generate a lot of data, and we don't want to push all of that data back to the cloud, so massive data is another issue. Related to that is data privacy: we generate a lot of data on the edge side, but we don't want to send all of it back to the cloud, especially a public cloud. That's why we need edge computing, and KubeEdge is built for this.

Now the key features. KubeEdge supports the native Kubernetes APIs, so when you deploy an app to the edge you can use kubectl directly.
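As a rough illustration of what "native Kubernetes APIs" means in practice, a plain Deployment manifest can target edge nodes with nothing more than a node label. This is a hypothetical sketch; the label name `node-role.kubernetes.io/edge` is an assumption, so check the KubeEdge documentation for the label your installation actually applies to edge nodes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      # Assumed label: schedule this pod only onto edge nodes.
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      containers:
      - name: edge-app
        image: nginx:stable
```

You would apply it with `kubectl apply -f` exactly as you would for a data-center workload.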
With the native Kubernetes APIs, as a developer you won't see any difference between deploying an app to an edge node and deploying it to a node in a data center; it's transparent to you. We also allow mixed deployments: you can have edge nodes and data-center nodes together.

Another important thing is seamless edge-cloud coordination. The framework itself handles all the communication between edge and cloud, so that, too, is transparent to developers.

Next is edge autonomy. When the network is broken, we preserve the data and state on the edge side and keep running autonomously. When the network is restored, we re-synchronize between cloud and edge. As you know, Kubernetes has the list-watch mechanism and a desired state, so when the network is restored we make sure the edge converges to the desired state you issued from the cloud.

We also have a low resource footprint for the IoT cases: even a Raspberry Pi or a very low-power edge device is supported.

For device communication, the framework has built-in support for MQTT and other protocols for edge device communication, and you can even control your devices from the cloud. Finally, you get a global view from the cloud of the metrics data.
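The edge-autonomy and re-synchronization behavior described above can be sketched as a toy reconciler. This is purely illustrative, not KubeEdge's actual code: while offline, the agent keeps driving toward its last cached desired state; on reconnect, the cloud's desired state replaces the local cache.

```python
# Illustrative sketch (not KubeEdge's real implementation) of edge autonomy:
# keep serving the last-known desired state while offline, re-sync on reconnect.

class EdgeAgent:
    def __init__(self):
        self.local_desired = {}   # last desired state persisted on the edge
        self.connected = True

    def receive_from_cloud(self, desired):
        """While the cloud channel is up, persist the new desired state."""
        if self.connected:
            self.local_desired = dict(desired)

    def disconnect(self):
        self.connected = False    # network broken: keep running on cached state

    def reconnect(self, cloud_desired):
        """On reconnect, the cloud's desired state wins and is re-applied."""
        self.connected = True
        self.local_desired = dict(cloud_desired)

    def reconcile(self, observed):
        """Return the apps whose observed replica count drifted from desired."""
        return {app: n for app, n in self.local_desired.items()
                if observed.get(app, 0) != n}


agent = EdgeAgent()
agent.receive_from_cloud({"web": 3})
agent.disconnect()
agent.receive_from_cloud({"web": 5})               # lost: the channel is down
assert agent.reconcile({"web": 2}) == {"web": 3}   # still drives toward 3
agent.reconnect({"web": 5})
assert agent.reconcile({"web": 3}) == {"web": 5}   # adopts the cloud's state
```

The point of the sketch is the ordering: the edge never blocks on the cloud, and the cloud's desired state only overwrites the local cache once the channel is actually restored.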
So what's new? From 2020 to 2021 we added a bunch of new things. First, an active-active, high-availability deployment for CloudCore; I will go a little deeper in the following slides. We have Mapper framework updates for device connections, and new HTTP request routing between cloud and edge. EdgeMesh, our data plane, got a complete architecture upgrade with much better cross-LAN communication. We also have new device CRDs for device management from our IoT SIG, so there is a new interface there. Most importantly, this year we tested 100,000 nodes and one million pods deployed; I will show the very impressive performance data in the following slides.

Now the architecture. The main thing is that from Kubernetes this looks like a generic deployment: you can mix cloud nodes and edge nodes together. The edge node appears virtually here, but it is actually running on the edge side. To make this work, the main components we created are CloudCore and EdgeCore. CloudCore runs in the cloud control plane; EdgeCore is derived from the kubelet. We set up a WebSocket long connection, or you can use QUIC as an alternative. This gives us a bi-directional channel between the cloud and the edge node. That's why, even when your edge node runs behind a firewall and the cloud cannot reach it directly, you can still control it and send commands from the cloud. Those are our connections. We also created CRDs for the EdgeController and the DeviceController, the latter for the IoT cases.

Let's see how we deploy an app onto an edge node. Internally, the list-watch mechanism from Kubernetes is still there.
etcd stores the deployment in Kubernetes, so when the desired state changes and a pod is created, the scheduler sees the state change and issues an update to bind the pod to a node, and etcd records that new state. The main KubeEdge-specific part happens in CloudCore: our EdgeController also watches the pods, sees the state change, and decides a new pod needs to be deployed. Over the communication channel I showed in the previous slide, the WebSocket connection between the cloud half and the edge half, we send the command to EdgeCore. EdgeD, which is basically our component derived from the kubelet, creates the pod there. We also maintain local storage to persist the state; that is a SQLite database running on the edge side. In this way, a user request coming from the cloud travels over the cloud-edge connection, and we create a pod on the edge side.

All of this is transparent to developers. You only need to update your deployment configuration with a label saying "I want this pod on an edge node." If the edge node has that label, the pod is created there for you; as a developer you don't need to do anything else.

Here is our HA deployment, which I mentioned in the previous slides. In the earliest edition we had only one CloudCore in the control plane, which became a bottleneck, and when that service crashed we basically lost the connection between cloud and edge. So last year we created an active-active deployment: multiple active CloudCore instances are running, and a load balancer distributes the connections from the different edge nodes across them.
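The local metadata store mentioned above, state persisted in SQLite on the edge node, can be sketched with Python's standard library. The table schema and function names here are made up for illustration; EdgeCore's real schema differs and uses an on-disk file rather than an in-memory database.

```python
import json
import sqlite3

# Illustrative sketch: persist the last desired state in a local SQLite
# store so the edge can restart pods even when the cloud is unreachable.
conn = sqlite3.connect(":memory:")   # a real edge node would use a file path
conn.execute("CREATE TABLE meta (key TEXT PRIMARY KEY, value TEXT)")

def save_pod_spec(name, spec):
    """Upsert a pod spec, keyed by pod name, as a JSON blob."""
    conn.execute("INSERT OR REPLACE INTO meta VALUES (?, ?)",
                 (name, json.dumps(spec)))
    conn.commit()

def load_pod_spec(name):
    """Read a cached pod spec back; None if we never received one."""
    row = conn.execute("SELECT value FROM meta WHERE key = ?",
                       (name,)).fetchone()
    return json.loads(row[0]) if row else None

save_pod_spec("nginx-edge", {"image": "nginx:stable", "replicas": 1})
# After an EdgeCore restart with no cloud connection, the spec is still there:
assert load_pod_spec("nginx-edge") == {"image": "nginx:stable", "replicas": 1}
```

Because the cache survives process restarts (when backed by a file), the edge node can recreate crashed pods from it without ever talking to the cloud.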
With this active-active setup, if one CloudCore crashes, the load balancer routes to the other CloudCores to make sure the connections keep running, so the service won't be broken.

Here is more of an IoT case: a deployment to robotics running ROS. Everything else is the same; the only difference is that each robot becomes an edge node. The robot is a TurtleBot 3, a standard industry robot. You can see we deploy the apps to the edge node, and once the app is running you can publish ROS topics from the command line, so you can issue commands and even control the robot.

Here are some recent user cases and adoptions of the KubeEdge project. One is a large-scale CDN deployment from a telecom customer. They deploy multiple Kubernetes clusters; each controls a region, and each region has multiple CDN nodes. Each CDN node is treated as an edge node: they deploy EdgeCore there and the CloudCore components in the cluster. This way, many CDN edge nodes, which are remote and sit outside the central data center, are controlled by the Kubernetes clusters.

Another adoption from last year is vehicle-cloud collaboration, where each vehicle becomes an edge node. Similar to the previous deployment, although we don't show it here, you may have multiple Kubernetes clusters. The important thing is that a moving vehicle may lose its connection; the KubeEdge architecture makes sure the edge keeps running autonomously and doesn't break when the connection is broken. It is a very stable architecture.

Now I'll talk about one of the most important tests we did last year: the performance test.
We tested the SLIs and SLOs for the KubeEdge project: latency, throughput, scalability, CPU usage, and memory usage. Let me talk about scalability, since it is the most impressive part. From SIG Scalability's point of view, you cannot treat the number of deployed nodes alone as "scalability"; there are multiple dimensions: the number of namespaces, nodes, services, secrets, pods, ingresses, load balancers, and so on. If you increase one dimension, the envelope in the other dimensions shrinks. The picture here is from Kubernetes SIG Scalability, which defines limits and thresholds for scalability along with the SLIs and SLOs; you can check them there.

Following the Kubernetes SIG Scalability guidance, we ran our scalability test. In this deployment, five CloudCores sit behind a load balancer connecting many edge nodes. We used ClusterLoader2 for the density test, with the official configuration that SIG Scalability provides. In our configuration parameters you can see we configured 100,000 nodes and 1,000,000 pods deployed, using the official test metrics.

Here are our test results, and they are very impressive. First, API responsiveness latency: for mutating API calls, against a threshold of one second, the 50th and 90th percentiles are almost flat.
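The percentile-versus-threshold evaluation behind these charts amounts to something like the following. This is an illustrative nearest-rank percentile check with made-up sample latencies, not the actual ClusterLoader2 measurement code.

```python
# Illustrative SLO check: p50/p90/p99 of API-call latencies (in seconds)
# must stay under the 1-second threshold described in the talk.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Made-up latencies: mostly fast calls with one slow outlier (a "spike").
latencies = [0.02, 0.03, 0.05, 0.04, 0.02, 0.06, 0.03, 0.05, 0.04, 0.90]

threshold = 1.0
p50 = percentile(latencies, 50)   # 0.04
p90 = percentile(latencies, 90)   # 0.06
p99 = percentile(latencies, 99)   # 0.90 -- the outlier lands here

assert p50 < threshold and p90 < threshold and p99 < threshold
```

This also shows why spikes surface only at p99: a single slow call out of many leaves p50 and p90 flat while dominating the tail.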
We only see some spikes at the 99th percentile. For the read-only API calls, the 50th and 90th percentiles are also steady, and although the read-only call latencies have a few spikes, they still satisfy the SLO.

For pod startup latencies, you can see it is really fast. There are a couple of zeros here because we don't support the RFC 3339Nano timestamp format; we only support roughly second-level precision. So any response faster than one second shows as zero in the percentiles; there are real numbers behind them, we just don't record enough precision to show them.

With these test results we can conclude that KubeEdge can currently support 100,000 edge nodes and manage one million pods deployed. There is a full test report, and we are going to publish it after this KubeCon EU 2022, with all the setup, latency numbers, and configuration files. It's almost ready.

For the future, we are going to support cross-edge collaboration and, even more, security, because security is getting more and more attention, especially in edge cases. We want to achieve strong edge security, and we are going to have decentralized security for applications running on the edge. We will also continue to improve our device Mappers to support more connections. And we are going to manage clusters at the edge: currently one edge node is an individual node, but in the future we will support a cluster of edge nodes as an edge site. Another important thing, for community collaboration: we are creating a TSC, a technical steering committee.
We have invited a few influential people from the industry, and we are going to create this TSC. We also have multiple cross-community collaborations going on; we have already talked to EdgeX Foundry and Eclipse. Another big thing going on is the broader edge ecosystem collaboration.

Here are some key links: our official website, our source code, and our Slack channel. We have community meetings every week, at two different times: one is more friendly for the United States, one is more suitable for Europe. Hopefully, if you're interested, you can hop in and join the meetings. And here are our documentation and mailing list. If you have questions, you can either ask in Slack, send them to the mailing list, or ping us on Twitter.

Thank you. Do we have any questions?

Audience question: I want to ask a question regarding 5G MEC deployments, multi-access edge computing. Do you have any use cases related to that?

The question is about 5G MEC. You know the Akraino community, right? LF Edge's Akraino, which covers 5G applications. KubeEdge has a blueprint project in the Akraino community, so we collaborate there with the telecoms on 5G MEC, and we are talking about how we can collaborate further. We actually contributed to the white paper for 5G MEC, and to another one, running in Thailand, for a telecom whose name I forget. The white papers were published, I think in 2020, when the pandemic had started. One of these is actually a telecom case, and we are talking about how we can improve this and apply it to 5G MEC situations, but currently that is more upcoming work. Let me rephrase it.
So your question is: if you have an app running on one edge node, how does it communicate with an application running on a different edge node? You mean network policy, and how we can apply it? Currently we don't have that. The EdgeMesh project is the data plane that connects cross-LAN communication between different nodes; your question is more of a security feature, which is why I put it on the roadmap. Under decentralized security and strong security protections, we are researching right now how we can apply network policy, similar to cloud network policy, to applications running on the edge. That's a really good question, thank you.

Sorry, can you repeat the question? For cloud-edge communication, you mean when this connection breaks? Yes, because that's the nature of the edge, right? In some edge deployments the operator doesn't even own the hardware. For example, we had a user case before: a water company that provides drinking-water devices. The company provides the EdgeCore deployment on the edge side, but the individual sites bought the edge devices, so the company doesn't own them; still, it controls them from the cloud and deploys applications to them. You can never prevent people from plugging in a USB drive and reloading the kernel or something like that. So on the security side, what we guarantee is that when the connection is restored, the sync controller makes sure the desired state from your cloud side always wins: you have a desired state, and we re-sync when the connection comes back.
But while the connection is broken, we lose control; we cannot do anything until the network is restored. What helps is that we can detect the issue: if one node got compromised, we can see that only that side is broken, and we prevent it from spreading to other edge nodes. If a node loses its connection, you can never prevent people from hacking it locally, but they can only hack the node they have physical access to, more like a computer running in your home.

Audience question: what happens when the latency is really high?

When the latency is high, there are two aspects. Over this control channel we transfer only a very limited amount of data. When you do a deployment, you need to pull the image, but you probably won't pull it from our control plane: you pull it either from your own image hub, or from a hub you have already prepared. Over our channel we only issue control commands or sync your state, so we have already reduced the cloud-edge communication a lot. In native Kubernetes there are a lot of pings going on, and if the latency is high, nodes look dead, eviction kicks in, and the apps get redeployed. In our case we prevent that from happening: CloudCore acts as the proxy and makes sure it won't happen, because we can distinguish a slow node from a dead node. In native Kubernetes, when the latency is high, the control plane will probably decide the node is dead or not functioning and evict the applications to other nodes; that is exactly what we are trying to prevent here. Actually, we have a user case from one of our adopters: they deploy this over a 2G wireless network, because they have some remote sites that only have 2G connections. So it's very slow.
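The slow-versus-dead distinction described above can be sketched with two silence windows instead of one. This is purely illustrative logic with made-up thresholds, not CloudCore's actual implementation: any message that eventually arrives, however late, resets the clock, and only prolonged total silence marks a node dead.

```python
# Illustrative sketch: distinguish a slow (high-latency) node from a dead one
# by tracking total silence, instead of evicting on one missed heartbeat.

class NodeHealth:
    HEARTBEAT_TIMEOUT = 30.0   # a plain heartbeat window (assumed value)
    DEAD_TIMEOUT = 300.0       # only this much total silence means dead

    def __init__(self, now):
        self.last_seen = now

    def on_message(self, now):
        self.last_seen = now   # any message counts, even a very late one

    def status(self, now):
        silence = now - self.last_seen
        if silence > self.DEAD_TIMEOUT:
            return "dead"      # safe to evict / reschedule workloads
        if silence > self.HEARTBEAT_TIMEOUT:
            return "slow"      # high-latency link: keep pods where they are
        return "healthy"


node = NodeHealth(now=0.0)
assert node.status(now=10.0) == "healthy"
assert node.status(now=60.0) == "slow"    # a naive policy might evict here
assert node.status(now=400.0) == "dead"
```

The design point is that on a 2G-class link, late-but-arriving traffic keeps the node in the "slow" band, so workloads pinned to that site are never evicted just because the network is sluggish.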
Even so, it functions really well. The thing is, you cannot serve image downloads over it: you have to download from a local image hub or something similar, because 2G cannot provide the bandwidth for image downloads; it's too low. So assume the link is very slow. In native Kubernetes, with the default policy, after about 30 seconds of broken connection the applications get evicted, right? But here we have a catch: we have the sync controller deployed as a component, and we make sure we always re-sync; we never evict applications from the edge node. What we do is assume the node can always come back, because the edge is not like the cloud case, where you can deploy your application anywhere. For an edge node, you want to deploy a particular application to a particular site. It doesn't make sense for an application meant for site A to run at site B; that's not the edge case, because your devices are connected locally: you don't want your sensors to transfer data to a remote site, you want them to transfer locally. That's why we never evict the application; we make sure that whenever the node comes back, it re-syncs the data and state and runs at the desired state.

Audience question: and if the link is super slow rather than broken?

The sync controller already reduces the required traffic: we only transfer the metrics data and the desired state. If the link is completely broken, then we cannot control the node and it runs autonomously. The edge keeps the applications running and monitored locally; if an application crashes, we restart it, because we have a local state cache.
Basically, the cache stores the latest desired state we received. So if your app crashes, we restart it: if you wanted three replicas and only two are running now, we restart one and get back to three. That's the best we can get, because if, while the connection is broken, you change the cloud-side spec to five replicas, the edge can never know; the local desired state still says three, so we keep it at three. We still keep edge autonomy. But whenever the connection is restored, we pass the new desired state of five down, replace the local desired state with five, and run five replicas.

Audience question: sorry, a very basic question, since I'm new to cloud-edge; we can talk later if it takes more time. Is the edge node built on top of the kubelet?

No, it's a different implementation derived from the kubelet. Sure, we can talk afterwards.

So, thank you everyone. I'll keep this channel open, so you can ask me anything in Slack or send an email. If you have further questions, you can either join the Slack channel, send me an email, or ping us on Twitter, and we can follow up.