Hi, everyone, and welcome to our session where we will talk about edge architectures. My name is Ildikó Váncsa. I work as Senior Manager of Community and Ecosystem at the Open Infrastructure Foundation. Among other things, I'm a big open source and edge computing enthusiast, and I have a co-presenter today. Hi, my name is Gergely Csatári. I'm working as a senior open source specialist at Nokia in the central open source program office, and I'm also participating in several open source and edge activities, mostly together with Ildikó. Hungarians rule the world. Yes. And now let's jump into the middle of all this. So, what is edge computing? There are a lot of debates out there about what edge computing and the edge are, so we will not join or deepen that debate here. I only want to emphasize that edge computing is part of an evolution path, as we are moving on from cloud computing only. Edge computing makes it possible to bring computing power out of the large data centers towards the edge and towards the end users, whether they are humans or machines. When it comes to edge computing, a lot of people and organizations focus on the edge part of it, because that is new and exciting. It brings a lot of new opportunities, and also a lot of new challenges to solve: a really small footprint, and locations and circumstances that you have to be aware of and plan for. But the edge is always the edge of something, the edge of the network, the edge of your system, so it is always part of a larger ecosystem and architecture. And that is what the OpenInfra Edge Computing Group is focusing on. The group is a top-level working group supported by the Open Infrastructure Foundation, and we have a broad industry outreach and a focus on infrastructure software.
What we are really looking into is the massively distributed systems behind edge computing use cases: what they look like from core to edge, edge to core, or, in some cases, edge to edge. The group focuses on collecting use cases in the edge computing area to understand their requirements and challenges. We use these learnings to build reference architecture models, to help anyone out there who is trying to put together a solution for an edge computing use case, regardless of industry segment or what kind of organization you're working for. These architecture models are there to help the industry, and also to identify gaps and see what we should all be working on in the open source ecosystem to fill them. You can check out the group on the wiki, the link is on the slide, and we have also published two white papers. There you can see all the exciting use cases that we've been working with, covering telecommunications, retail, and we even have a use case on how to modernize and automate shrimp farms. You can find the links to the white papers on the slide as well. Go and read them; they are not very long, but they are even more exciting. And we can move to the next slide to take a deeper look at what we mean when we talk about architecture models. Our learning is that there is no one-size-fits-all solution out there. There is no single tool or single configuration that will work for everyone, and I assume that doesn't come as a surprise to you either. So what do you do when you face challenges like that? You have to prepare for multiple options and for a lot of requirements that are similar, but still different. The approach the edge computing group has taken is to look into the crucial requirements that most of these use cases share, and our focus so far has been on connectivity.
So what happens when you lose the connection between the central data center and an edge site? How you want to handle that really depends on your use case. In our learning, there are two big groups we can sort the use cases into, based on how much autonomy you want your edge site to have. In that sense, we came up with two models so far: the centralized and the distributed control plane model. The big difference between the two is that with the centralized control plane option, which is the top diagram on the slide, the control functions and services all run in the central data center, and the edge sites run only the compute workloads. In this case, if you lose the connection, your workloads usually keep running on the edge site, but you will not be able to launch a new workload, and many other operations will be unavailable as well. In some use cases this is totally acceptable, and you may rather want to maximize the footprint available to workloads on your edge site. But when you need full autonomy on the edge, then you need to look into which control functions move to the edge as well. That reduces the footprint the workloads can use, but all operations remain available to you even during a connection loss. That is what we have been focusing on so far. When it comes to solutions, we started to build these architectures with OpenStack and also with Kubernetes components, to see what it looks like when you put this into action. We also collected a couple of projects, which Gergely will describe, that match one or the other architecture model, to give you some examples. So I'm giving the word to Gergely. Thank you. So as Ildikó mentioned, we collected some projects related to Kubernetes which implement either the centralized control plane or the distributed control plane architecture.
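The trade-off between the two models can be sketched in a few lines of code. This is a toy model, not from the talk: all class and attribute names here are hypothetical, chosen only to mirror the behavior described above, where a centralized site keeps its running workloads during a connection loss but cannot launch new ones, while a site with a local control plane retains full autonomy.

```python
# Toy model of the two control-plane placements during a WAN outage.
# Hypothetical names; only the behavioral difference is the point.

class EdgeSite:
    def __init__(self, local_control_plane: bool):
        self.local_control_plane = local_control_plane
        self.connected = True                # link to the central data center
        self.workloads = ["sensor-ingest"]   # workloads already running

    def launch(self, workload: str) -> bool:
        """Launching needs a reachable control plane: local, or central via the link."""
        if self.local_control_plane or self.connected:
            self.workloads.append(workload)
            return True
        # Centralized model with the link down: existing workloads keep
        # running, but no new ones can be scheduled.
        return False


centralized = EdgeSite(local_control_plane=False)
distributed = EdgeSite(local_control_plane=True)

for site in (centralized, distributed):
    site.connected = False  # simulate losing the connection to the core

print(centralized.launch("new-analytics"))  # False: no control plane reachable
print(centralized.workloads)                # ['sensor-ingest'] still runs
print(distributed.launch("new-analytics"))  # True: control plane is local
```

The cost of the `local_control_plane=True` case is exactly what the talk notes: the control functions consume edge-site resources that would otherwise be available to workloads.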
We also selected some projects to show you in this presentation which support edge use cases in a different way. For the centralized control plane implementation, where we have all the control functions in a central location, the two most notable projects are K3s and KubeEdge. K3s is a very small Kubernetes distribution packaged into a single binary, which you can run in all of your locations. It contains a full Kubernetes distribution with basic features for networking, storage, load balancing, and ingress controllers. It also has a component called the tunnel proxy, which makes it possible for the communication between the central location and the edge locations, everything the Kubernetes control plane needs, to happen over the WAN. So in this way, K3s is a complete solution for providing edge infrastructures, and it is famous for its slimness: it's a very slimmed-down Kubernetes distribution. The other project which implements the centralized control plane is KubeEdge. This is a bit more complete solution than K3s, because it also has some features specifically for IoT workloads. Similarly to K3s, all the control plane functions run in a central location and control the workloads running in the edge locations. On top of these control plane functions, KubeEdge also provides features for IoT, like a message broker, an event bus, and some device management features. Here as well, the control plane is able to communicate over the WAN via the cloud and edge components of the architecture. So these are complete implementations: they have all the different pieces needed for Kubernetes to run workloads, and they provide a single installer where you can download and install the solution. Both of them implement the centralized control plane architecture.
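The key idea behind the tunnel components in both projects is that the edge side dials out to the center, so control traffic works through NAT and firewalls without any inbound ports on the edge site. The sketch below is a toy model of that pattern, not the real K3s or KubeEdge protocol; the `CloudHub`/`EdgeHub` names loosely echo KubeEdge's component names, but the code is invented for illustration.

```python
# Toy model of an outbound-initiated control channel, as used (in far more
# robust form) by K3s's tunnel proxy and KubeEdge's cloud/edge hubs.
import queue


class CloudHub:
    """Central side: queues control-plane commands per edge site."""
    def __init__(self):
        self.channels = {}

    def register(self, site: str) -> queue.Queue:
        # The edge initiates registration; the cloud never dials in.
        self.channels[site] = queue.Queue()
        return self.channels[site]

    def send(self, site: str, command: str):
        self.channels[site].put(command)


class EdgeHub:
    """Edge side: drains its outbound-established channel for commands."""
    def __init__(self, cloud: CloudHub, site: str):
        self.channel = cloud.register(site)
        self.applied = []

    def sync(self):
        while not self.channel.empty():
            self.applied.append(self.channel.get())


cloud = CloudHub()
edge = EdgeHub(cloud, "store-42")       # edge connects out over the WAN
cloud.send("store-42", "deploy nginx")  # center enqueues a command
edge.sync()
print(edge.applied)  # ['deploy nginx']
```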
For implementing the distributed control plane, where, as we learned from Ildikó, we run the Kubernetes control plane in all the locations and have some kind of federation on top of that, a somewhat different approach is needed. This is implemented, for example, in StarlingX, which is an edge infrastructure solution providing both Kubernetes and OpenStack as options to run different workloads. StarlingX is a very good fusion of Kubernetes and OpenStack for edge cloud infrastructures. This is again an integrated stack, which has all the extensions and all the needed components to run workloads on these solutions. It has a central management function which controls the deployment and synchronization of the different edge sites. So each site has a complete control plane implementation, and there is one central function which manages all of these, and it can handle, for example, the cloud infrastructure software updates of the components, and so on. The other project we selected is not a complete solution, so in this sense it's different from the others I described in the previous minutes. It's KubeFed, which is part of the Kubernetes project. It's an implementation for federating the Kubernetes API. Basically, it runs the federation agent in one Kubernetes cluster, and from there it is capable of scheduling workloads to other Kubernetes clusters as well. So it really federates the Kubernetes API across clusters. It is a necessary component for building edge cloud infrastructures purely based on Kubernetes with a distributed control plane, if a single entry point to the infrastructure is required. This project provides that, but to have a complete solution, lots of other components are needed which are not part of KubeFed, so it's not a complete stack.
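The federation idea KubeFed implements, a resource template in one host cluster plus a placement policy, propagated by a controller into the selected member clusters, can be sketched like this. The template/placement split loosely mirrors KubeFed's federated resource types, but the dictionaries and the `propagate` function here are simplified illustrations, not the real API.

```python
# Toy sketch of template + placement propagation, the core KubeFed pattern.
# Member clusters are modeled as lists of the objects deployed into them.

member_clusters = {"edge-a": [], "edge-b": [], "edge-c": []}

federated_deployment = {
    "template": {"name": "web", "replicas": 2},
    "placement": {"clusters": ["edge-a", "edge-c"]},  # subset of members
}


def propagate(resource, clusters):
    """The federation controller: copy the template into each placed cluster."""
    for name in resource["placement"]["clusters"]:
        clusters[name].append(dict(resource["template"]))


propagate(federated_deployment, member_clusters)
print([c for c, objs in member_clusters.items() if objs])  # ['edge-a', 'edge-c']
```

The single entry point mentioned above is exactly this: you edit one federated object in the host cluster, and the controller fans it out, so operators never have to touch each edge cluster individually.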
There are two other projects I wanted to highlight, and we wanted to show these because they are not vertical, complete edge cloud infrastructure solutions, but rather horizontal projects which provide features for building edge cloud infrastructures. One of these is Metal3, a bare metal provisioning service for Kubernetes. It has the capability to provision nodes over layer 3 with the help of Redfish, which means that even worker nodes running in remote locations can be provisioned and attached to a cluster, or installed as a separate Kubernetes cluster, with the help of Metal3. Metal3 implements the hardware management layer, so again, it is only one part of an edge cloud infrastructure solution, but it is a very important part, because remote manageability of the hardware is key in edge cloud infrastructures. As we learned from Ildikó in the introduction, edge cloud infrastructures are massively distributed cloud infrastructures, and to be able to manage these in a scalable way we need automation on all layers of the stack, and hardware management is an important part of this.
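To make "provisioning over layer 3 with Redfish" concrete: Redfish is a DMTF-standardized REST API exposed by a server's BMC, so actions like powering a node on are plain HTTPS calls that work across a WAN. The sketch below only builds such a request rather than sending it; the endpoint path follows the Redfish specification, while the BMC address and system ID are placeholder values.

```python
# Build (but do not send) the Redfish ComputerSystem.Reset request that
# remote bare-metal provisioning tools like Metal3/Ironic rely on.
import json


def redfish_reset_request(bmc: str, system_id: str, reset_type: str = "On"):
    """Return (url, body) for a Redfish power action on one system."""
    url = (f"https://{bmc}/redfish/v1/Systems/{system_id}"
           f"/Actions/ComputerSystem.Reset")
    body = json.dumps({"ResetType": reset_type})
    return url, body


url, body = redfish_reset_request("10.0.0.5", "1")  # placeholder BMC and ID
print(url)   # https://10.0.0.5/redfish/v1/Systems/1/Actions/ComputerSystem.Reset
print(body)  # {"ResetType": "On"}
```

Because this is just authenticated HTTPS, no operator needs to be physically present at the edge site to reinstall or recover a node, which is the scalability point made above.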
The other project is related to networking and is called Submariner. What it does is make it possible for pods running in different Kubernetes clusters to communicate with each other. It is built in a way that it opens VPN tunnels between these Kubernetes clusters and channels the traffic via these VPN tunnels, and it also provides a service discovery feature across the clusters. So this is a very good baseline for implementing a distributed edge application which communicates edge to edge in a mesh kind of way; for that, Submariner is a great networking solution. These are the example projects we wanted to highlight. We know that it's not possible to list everything; we just collected the, let's say, most notable examples which implement the architectures we identified as the most prominent ones. But we are also now working on defining hybrid architectures, where Kubernetes and OpenStack components are part of the edge cloud infrastructure in different places and in different roles. But let's hear more about the future plans of the group on the next slide, from Ildikó.
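The cross-cluster service discovery piece can be illustrated with a toy resolver. Submariner implements the Kubernetes Multi-Cluster Services convention, where exported services become resolvable under a shared `clusterset.local` domain; the service names and IPs below are invented, but the name format follows that convention.

```python
# Toy resolver for exported multi-cluster services, as Submariner's
# service discovery provides. Records below are invented examples.

exported_services = {
    # (service, namespace) -> IP reachable through the inter-cluster tunnel
    ("db", "prod"): "242.1.0.7",
    ("cache", "prod"): "242.1.0.9",
}


def resolve(name: str):
    """Resolve '<svc>.<ns>.svc.clusterset.local' against exported services."""
    parts = name.split(".")
    if parts[2:] != ["svc", "clusterset", "local"]:
        return None  # not a clusterset name; fall back to local DNS
    return exported_services.get((parts[0], parts[1]))


print(resolve("db.prod.svc.clusterset.local"))  # 242.1.0.7
print(resolve("db.prod.svc.cluster.local"))     # None: single-cluster name
```

A pod in any connected cluster can then use the same `clusterset.local` name regardless of which edge site actually hosts the service, with the VPN tunnels carrying the traffic underneath.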
So we have a call to action for you, because as Gergely mentioned, in the interest of time we only had the possibility to bring a handful of examples to you. But I think it already showed very well that there are a lot of components, building blocks, and options out there that you can choose from to build your edge solution and edge infrastructure. We would like to understand better how the landscape is shaping up, how the edge solutions are shaping up, and how the edge requirements are evolving over time. So we are inviting you to come and collaborate with us, and give us feedback, for instance about the two edge architecture models that we have so far, the centralized and the distributed control plane model. Is that something your use case or your solution already fits into, or do you have a third architecture that doesn't really fit into either of these buckets? We would like to learn about all of that. Also, if you have a project that you're working on that we did not talk about here but that would fit into this work, please come and share the details with us. Or if you have a use case that you're trying to identify your edge architecture for and need some help or guidance, or would like to talk with someone about it, we would really be interested in learning about your use case and requirements too. The slide contains all the information about our weekly meetings, and also the mailing list and IRC channel where you can get in touch with the group. So come join us, and let's all work together on finding solutions for the various edge computing use cases that are out there. And with that, I believe we have arrived at the Q&A part. We are hanging out here at the event, so come and ask questions now, or start a discussion, or find us during KubeCon, or reach out to us on the email addresses that you saw at the beginning of the presentation. Or, well, find us anywhere; we are all over the place, participating in open source groups
and on social media, so you should be able to find a connection to us. Thank you from the edge. Yes, thank you.