Hi and welcome, everyone. I'm very happy to be here with you, even if only in a virtual format. My name is Ildiko Vancsa. I work for the OpenStack Foundation as Ecosystem Technical Lead, and among a lot of other things, I'm a co-leader of the OSF Edge Computing Group. Today I'm sitting here with active participants of this group, and in the interest of time, I'm asking the panelists to introduce themselves when it's their turn to speak. So today we will talk about this edge working group, give you some updates, and also explore the topic of edge computing a bit, because it's still a really hot topic these days. But before going into the questions part of the panel, let me quickly introduce what the OSF Edge Computing Group is, because you may not have heard of it before. It is a top-level working group supported by the OpenStack Foundation. Our vision and mission is really to understand the edge computing space better and to help all the industries and all the groups in the open source space fill gaps and provide solutions for edge computing use cases. What we started with a few years ago was to collect these use cases and analyze them for requirements, to understand what they really need in order to be successful. And once we had a better understanding, we started to work on reference architecture models, and also on strategies to test and evaluate these architectures. So you can see that this working group really operates at a higher abstraction level. We are trying to cover the infrastructure-as-a-service layer, but we are not exclusive to any technology or any industry segment. If you would like more details about the working group, you can check out the link to the wiki page that is listed on the slide. So take a screenshot, or access the slides after the session. I would also like to point you to the two white papers that the working group published.
The second one just came out a few months ago. If you read them in chronological order, you will get the basics of edge computing and cloud edge computing, as we call the area that we took a deeper dive into, and the second white paper goes into detail on the reference architecture models that I already mentioned. So what do these look like? As for the reference architecture models, what we found is that there is no one-size-fits-all solution when it comes to edge. The environments are growing organically, and the use cases are just different enough that your edge is always different from mine, which means that my architecture will probably be slightly different from yours as well. But we tried to analyze the options a bit, and we focused on a few core models. In that sense, we took a closer look at connectivity, which is crucial when it comes to geographically distributed systems. What we found is that if you look into what happens when the connection is lost between the central data center and the edge sites, the one big question that people usually ask is how much autonomy you need at the edge site. Do you need all functions to be available, or is it enough for you if your workloads are still running and the site resynchronizes once the connection is restored? Based on these circumstances as well as the requirements, we came up with two models, the centralized and the distributed control plane model, and you can find the diagrams on the slide as well as on the wiki page that is listed there. So the question is whether these models cover all the needs out there, or maybe you have a third model or something new that we haven't looked into yet. If that is the case, I would like to encourage you to reach out to the group, participate, and share your architecture model with us.
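To make the autonomy question above concrete, here is a minimal sketch of the behavior expected from a site with local autonomy: workloads keep running during a network partition, and state changes are queued and replayed when the connection is restored. All names here are illustrative, not taken from any real project.

```python
# Minimal sketch of edge-site autonomy during a network partition.
# Illustrative only; not based on any specific project's API.

SYNCED = []  # stand-in for the central control plane's view of events


def central_sync(events):
    """Deliver a batch of events to the (simulated) central site."""
    SYNCED.extend(events)


class EdgeSite:
    def __init__(self, name):
        self.name = name
        self.connected = True
        self.running_workloads = []  # workloads keep running regardless
        self.pending_events = []     # state changes queued while offline

    def launch_workload(self, workload):
        # Local autonomy: the site can act on its own, connected or not.
        self.running_workloads.append(workload)
        self._record(f"launched {workload}")

    def _record(self, event):
        if self.connected:
            central_sync([event])
        else:
            self.pending_events.append(event)

    def disconnect(self):
        self.connected = False

    def reconnect(self):
        # Resynchronize: replay everything that happened while offline.
        self.connected = True
        central_sync(self.pending_events)
        self.pending_events = []
```

In the centralized control plane model, losing the connection would mean losing most control-plane functions at the site; in the distributed model, each site carries enough of the control plane to keep operating and only has to replay its local history, as sketched above.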
After this short summary, I would like to turn to David and ask him what he has observed and thinks about the evolution of edge architectures. David, what do you think: do the two models that I just briefly described cover all the needs and solutions out there, or do we have more work to do?

I'm David Patterson with Dell, a senior principal software engineer. I've been working with OpenStack for about five years, the last year or so primarily focusing on the edge. As far as the two reference architectures that we've defined, I think they fit a large proportion of the use cases that I've seen. Where I think there may be some growth is around the areas of IoT, where we get into more of a mesh kind of situation. But I think what we've started with covers a broad majority of the use cases that we see rolling out right now. Does anyone have anything to add?

Yes, I'd like to add to Dave's point. I've been involved in another working group that's focused on telco edge, and we are running into MEC workloads and 5G workloads, of course, which are very hot right now. These architectures are close; there are some tweaks to them, but I would say they're close. Where we're seeing some interesting types of architectures is where the ownership of the network is shared between, let's say, the customer and the telco. That's where the control plane might actually be in two locations, one for the customer and one for the telco, but it follows the same pattern; it's just a little more complicated version with multiple control planes.

Yeah, and I would say the biggest change we're seeing is the demand at the edge. Like you mentioned, vRAN is now the prominent use case rolling out in the field, and it's also requiring a smaller footprint for hardware, and more powerful hardware, at the edge.
So it's asking a lot, right: what's a smaller footprint that is still capable of keeping cool and is very powerful, including accelerators and things like that that are demanded by use cases like vRAN? And think of it as many clouds at the edge, hundreds or thousands of them.

Yes, and I think this is a kind of new thing in thinking about edge, that we need to incorporate accelerators in the edge environment as well. This also somehow shows that the two clean architectures that we defined, the centralized control plane and the distributed control plane, don't really describe all the use cases; rather, some kind of mix of the two covers specific use cases. So I do really think that every use case will demand its own architecture, which will be some kind of mix of these centralized and distributed control plane architectures or strategies, based on what kind of data you need to synchronize between the different edge cloud locations, and on what kind of behavior is expected in case of a network breakage, the situation you just described.

Sounds great. Does anyone have anything more to add to this question? Then we can move on. As you could hear, our panelists mentioned a couple of use cases, and it clearly shows how important it is to understand these in order to come up with solutions, and the best way to do this is by collaboration, collaborating with other groups in the wider industry as well as in the open source ecosystem. Beth, two things: could you quickly introduce yourself, and then talk a bit about how the OSF Edge Computing Group is collaborating with other groups in the broader ecosystem?

Well, thank you, Ildiko. I am Beth Cohen, and I work as a software-defined networking product strategist for Verizon. I am involved in a number of working groups that intermesh with the OSF edge working group.
One of them is the soon-to-be-renamed CNTT, which stands for Cloud iNfrastructure Telco Taskforce, a task force that came out of the LF Networking organization and is focused in general on telco infrastructure to support telco workloads, such as containerized and OpenStack workloads, which are the most common ones used in telco infrastructure. But we have a specific work stream focused on the edge, which is of course working collaboratively with the OSF working group. And there's also the MEF working group, which is also defining the edge. So I think there are no longer 24 edge working groups floating around; I think there are fewer, which is a good thing. You know, it's really important to be collaborative about the edge. The edge is so complex; I like to think of it as a superset of the cloud. So instead of, say, 50 clouds, you have thousands of clouds, and of course the management of said clouds is exponentially more difficult. So we're relying much more heavily on orchestration and automation to allow us to manage those clouds. We really need those tools, and they are in fact being developed, which is great. We also have a connection to the Kubernetes IoT and Edge working group and to KubeEdge, which is a Kubernetes-based open source project that basically implements the centralized control plane architecture defined by the Edge Computing Group.

I was going to say, Beth, you bring up a very good point: touch-free enablement of these devices at the edge is of the utmost importance, because you can only get so far shipping boxes with bits on them. Touch-free provisioning from the get-go is a key point for success.

So you still need to ship them, in fairness. You still need to ship them and you still need to plug them in, but you really don't want to send a senior-level engineer out to do that work.
You really want somebody who is, what are they called, remote hands. Right. Any further comments on this question from anyone?

Not on this question, but on Dave's last comment. I would just join in on that, and I think this is where we need to work the most: the operability of the infrastructure itself, be it day zero, day one, or day two. We should build these systems in a way that all the operations are automated, because we are talking about, as Beth mentioned, hundreds or even thousands of edge cloud locations.

Thank you. Gergely, first a quick intro, and then maybe talk a bit about the container space, as you mentioned some of the Kubernetes groups. I think it would also be interesting if you could talk about why containers are such a good fit for edge and what our working group is doing in this area.

I'm Gergely Csatari; I'm working in the open source office of Nokia, and I'm responsible for cloud infrastructures there, including edge, OpenStack, and containers in general. I think containers are a good fit for edge clouds: they have a smaller footprint, both in terms of runtime and in terms of images, so they make it possible to have a more fine-grained distribution of load across edge cloud infrastructures. Also, there is no live migration with containers, which makes things a bit easier, because with virtual machines people tend to ask why we don't do live migration between these different data centers, which I think is not a very good idea. And the working group has started to investigate these different architectures with Kubernetes.
So now we are trying to map the two main architectures that we defined, the centralized control plane and the distributed control plane, to different Kubernetes-based solutions. In this space we have lots of solutions, like the KubeEdge project I just mentioned, but also K3s, which is another project implementing the centralized control plane, and the Kubernetes federation project, which basically implements an API federation for Kubernetes. So this is one direction where we are going. The other direction is that we are trying to build hybrid architectures where Kubernetes and OpenStack are just different layers in the stack, and we are trying to figure out the optimum layering of these different open source projects for different use cases.

That brings up Kata Containers, which is of course one attempt to layer OpenStack with containers, but it's by no means the only way you can do it. And I think it's very important, particularly in the telco use case, because telcos are heavily invested in OpenStack, and while we are actively working to containerize many of our applications, I know that the networking applications are currently still a struggle. And of course networking applications are extremely important at the edge.

Yeah, Kata Containers is a very good solution if you would like to have the kernel separation that we get with hypervisor-based virtualization in a Kubernetes-based environment, and I think that's a very important aspect when you have constraints in the physical infrastructure. For example, when there is only a single node where you need to run different workloads that cannot trust each other, Kata Containers provides a very good solution. And that's important at the edge again, because we have the constraint of a very small number of nodes, frequently one.
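The single-node isolation scenario described here maps to a standard Kubernetes mechanism: a RuntimeClass that points at the Kata runtime, which untrusted pods then opt into. Below is a minimal sketch that builds the two manifests as Python dictionaries; it assumes the cluster operator has already installed the Kata runtime under the handler name "kata", and the pod and image names are purely illustrative.

```python
# Sketch: pinning an untrusted workload to a hypervisor-isolated runtime
# (Kata Containers) in Kubernetes. Assumes a "kata" handler is configured
# in the node's container runtime (e.g. containerd); other names are
# illustrative.
import json

# RuntimeClass: advertises the Kata handler to the cluster.
runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata"},
    # "handler" must match the runtime name configured in the node's CRI.
    "handler": "kata",
}

# A pod that opts into the isolated runtime via runtimeClassName.
untrusted_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "untrusted-workload"},
    "spec": {
        # Containers in this pod get their own guest kernel through the
        # Kata hypervisor layer instead of sharing the host kernel.
        "runtimeClassName": "kata",
        "containers": [
            {"name": "app", "image": "example.org/tenant-app:latest"},
        ],
    },
}

if __name__ == "__main__":
    print(json.dumps(untrusted_pod, indent=2))
```

Trusted workloads on the same single-node site would simply omit `runtimeClassName` and keep the lighter shared-kernel runtime, so both isolation levels can coexist on one box.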
Okay, this brings up a good point: can you identify some of the gaps there are for deploying certain workloads in containers at the edge? I know there's been a lot of work in containers to enable acceleration and things like that. Where do you think we are right now, as far as deploying workloads like vRAN, and especially vDUs, at the edge in containers?

If you talk about the different runtimes that enable this hypervisor-based separation, like Kata Containers, I think there are areas which are not covered by the feature set of these container runtimes. I have to be honest that I haven't checked the last half year or so of evolution in Kata Containers, but, for example, there is, or was, no support for different accelerators and technologies like SR-IOV, which are needed for vRAN use cases. Kata Containers basically just introduces another layer into your container runtime stack: you have a hypervisor layer, a very optimized and very small one, but still a hypervisor layer, as part of the container runtime, and you have a full kernel and operating system running in your container. So you need the support for features like SR-IOV and accelerators throughout the host stack; all of the layers have to support these technologies.

And I've been finding that's been slow, because so many of those technologies are still proprietary, SR-IOV being one example. And David, you can probably speak more toward the hardware; at the intersection of edge and hardware, there's less disaggregation at the edge than you can achieve in a data center.

I mean, the demands of the edge are requiring things like a real-time operating system, offloading packet processing, things like that.
And we're getting there, but we're not seeing, at this point, operating systems having the drivers baked in just yet. So for some of these products that are just rolling out from Intel and NVIDIA and others, the support isn't fully baked into the lower-level operating system, and that's been a challenge to get those features working in any kind of VIM, be it OpenStack or Kubernetes. So that seems to be the challenge right now: the performance demands at the edge, and being able to use the available tools in our VIM layer or OS. I think it's a constant trade-off of abstraction versus raw performance; we just have to find the right balance.

Well, interestingly, the CNTT group has added a new orchestrator, a new VIM layer, which is focused on the hardware itself. So it's a HIM layer, a hardware infrastructure manager, that's meant to deal with that.

And with that, we are running out of time. Does anyone have, strictly one sentence, last thoughts on this question or the panel?

I just want to say thank you, Ildiko, and we'll keep on working on edge. And we invite everybody to join in as much as possible.

I think the conclusion of this panel is that there are some solutions out there, but even more challenges. And here I would like to remind you all that the Project Teams Gathering event is happening next week, and the Edge Computing Group is also meeting there. So hopefully we'll see you all there; come and work with us on all these challenges. And with that, hopefully we'll also see you in the Q&A part and talk to you about edge. Find us there, and also find us at the Project Teams Gathering or in some of the open source groups that we talked about. Thank you.