All right. Hi, everyone. Oh, I'm so loud. So thank you for joining us right after lunch. We will try to make sure that you don't fall asleep. If you also would like to make sure that no one falls asleep, there's a mic in the middle. So this is a panel discussion. Well, most of us are really good at talking, so 40 minutes should be all good. But if any of you have questions, please grab the mic and ask them, because we would really, really love to have a conversation with you, answer those questions that you have, and make sure that you get all the information that you would like to learn about. And before we dive in, we have just a very few overview slides to get into the topic, which I will let Beth talk about shortly. Thank you. So I'm gonna be giving a level set on what this is all about and what we've been doing, and then we'll get into it. I'll just skip this for the moment. So this is the edge computing working group update session. The edge computing working group has been furiously working away for about 18 months now. And you're saying, well, edge, everybody's been hearing about edge. So I just wanna talk a little bit about what that means to us, which is that we're applying cloud infrastructure type architectures not just within the data center, but we're blowing it out to the edge. And we'll talk a little bit more about what that edge means. It can be the far edge, it can be down to a cell phone, although usually not, or it can be a smaller data center, depending upon your perspective. So I just wanna touch a little bit on the history of the group. It really came out of September 2017, so just about two years ago: there was an OpenDev event where 200 people showed up. We were really surprised. And the topic was edge computing. And I'd say we spent the two days arguing about what the definition of edge computing was.
And so what came out of that is we then did a white paper, which was published in March of 2018. You can read it on the OpenStack website. And then we started diving into what that meant, and we started building use cases. And again, we'll talk a little bit more about that going forward. And then we started actually engaging some of the OpenStack projects to insert code that would support edge type use cases. And I wanted to say that Ildiko and I have been doing the tours, sort of spreading the word about edge computing, which has definitely become very hot. When we went to the Open Networking Summit in Amsterdam in September, we ran a session during that summit, and we were very gratified to hear from several people that they're actually using the use cases that we've been developing to develop ONAP and some of the other standards and open source services. So they were very grateful for the use cases we've been developing. So the working group is definitely a working group, and it's open to all; it's a community. And with that, I bring it up to Ildiko. We'll be fine over there. Thanks. Yeah, I have my own mic, and Beth has too. So thank you for the summary. I would like to just ask the panel: what would you like to highlight from the latest activities? What's new? What's exciting? Okay, so I think what's new is the work that was started on the MVP edge, let's say. This MVP edge is a minimum viable product solution for the edge, and the target is to define the requirements and do the implementations for an edge solution which is very close to what we currently have in OpenStack. This is something that we can start working on, and actually people are already working on it. So I think this is a good development; however, we should not stop working on the requirements and the use case clarification for the, let's say, ideal edge solution. So for me that's the highlight.
So I'm relatively new to the group, but what I've been getting out of it is identifying a lot of the gaps that are in OpenStack and what we need to do to support the edge, as far as federation of Keystone, federation of images, things like that. There are all kinds of gaps that we're identifying, and hopefully we can solve those issues down the road. So what I'd like to say is, one of the most exciting things that I've found about this group is how much the edge has really taken hold in the community, and also the intersection of containers and OpenStack has been very exciting to me, and I know there's gonna be a lot more discussion about that at this summit and continuing. So it's not either/or, it's both. We're doing this in order, so I guess I should add something as the last in the line. I think what's been most interesting for me is we started talking about this a little over two years ago, and there was a lot of commercial activity ongoing, and what we've seen in the last six months is companies bringing their actual implementations out into open source, with StarlingX coming out and starting to look at how it integrates with mainstream OpenStack. AT&T have come out with Airship, which isn't exclusively an edge technology, but it's built to accommodate that use case, and that's really what we're looking for. It isn't "let's build something for the edge"; it's how can we apply these technologies in an edge context, and I think that's something that Airship can really help with and bring to the table. But it's the code that's actually coming out and becoming a community asset that's really changed in the last six months, and that I think is really exciting. Thanks. And what do you think our group should focus on? I heard containers, for example; I heard Keystone. So what are the biggest challenges currently that you think that we and the community should work on?
Actually, what I'd like to see is, there are a lot of different groups working on edge related projects; I'd like to see our group pull some more of them in. Like, we're working with some of the StarlingX people, and we've gotten some Keystone and Glance people involved. I'd like to see more of that, get OPNFV people involved, because there seems to be a lot of fragmentation on how the edge is going to be implemented, and the more unification we can have, the better, because it's a very fuzzy picture, for me at least right now, on how it will work. So I think the more people that we get into a unified effort, the better off everybody will be. Yeah, I would agree with you. I discovered there are 24 separate projects currently working on some aspect of edge, and there's some cross-pollination, but obviously not enough. I do not hold us up as the center of the universe; we're not. But we definitely are contributing and have a lot to say, and can really contribute to the overall picture, but we're not boiling the ocean. We're gonna concentrate on OpenStack, because that's the name of the working group, but that does not mean we're gonna ignore the other groups, and we really wanna be more collaborative. Yeah, but I think it's very important that we go to other communities and we, let's say, synchronize with them, and we share what we have with the others. Are there any ongoing activities that any of you are following that you would highlight, that we are already doing, or what could our group do to be part of this ecosystem consisting of at least 24 groups right now? I'm gonna beat Gergely to the button. On that last topic, I think it's a good question, because it's not a question of "are there lots of different people talking about lots of different aspects of edge?" There are lots of aspects of edge to talk about, and you can't do that in one meeting, so it's actually quite healthy.
I think what we need to focus on is how do we bring (we went dark), how do we bring the OpenStack capabilities into those dialogues? How do we make sure that the topics that are of interest to us, and the values that we bring, actually address those needs? Some of the things which are interesting, while we try and sort this out, I think you know who you are, so just keep talking. Exactly. The work that's been going on in Keystone is really exciting. How we work with security in a distributed environment, how we make sure that whether I'm deploying towards two blades in a basement or towards a large data center, I have the same toolkit, I have the same way of authenticating, authorizing, and providing access to users in those environments, is really important. And I think bringing some of these data center capabilities out into these remote sites and allowing people to use them in a consistent manner is really the key to making this a success. If I have to do one thing here and one thing over there, then it's a nightmare, and that's a nightmare we've lived with in telecoms for decades that we have to move away from. Centuries. So I'll speak to that, because I'm living that pain right now, and I think as we're developing products, obviously I'm coming from the telecom world, having a single platform is an enormous savings in terms of operability, supportability, et cetera. So it's really important for us to continue to work with the groups that are continuing forward with the services that are centralized, but understanding that there are some specific requirements at the edge that are different from the center. I like to think of the edge as the superset and the data center as the subset of cloud computing. Okay. Thanks. I just thought that maybe Gergely also had a thought, before I make him stand up, if he still has it. Yeah, so what's happening currently is that we have a testing task and use cases that we're trying to solve.
We are not progressing very well, but we are doing something at least. In the MVP, let's say, domain, there is a spec up for changing Keystone, and there are, I think, two specs up for Glance for image synchronization. So those are, let's say, the current tasks that we can or could do. And this is where we need help from the project guys, to do this work together. So Gergely just mentioned MVP, which stands for minimum viable product. Could you elaborate on what that is? And also, what are the panel's thoughts about reference architectures? Is there a one size fits all solution? What does it look like today? No, there is no one size fits all. We can all go home now. I think there is a viable suite of products, or a viable suite of architectures, that are applicable. I think we need to come to what that common set is. That's something that we're really working towards, in that there is no one size fits all, but there is an ecosystem, or a general architecture, that we can apply. I think what that looks like is gonna bear out over time. It's not possible for us to say right now, but we do talk about really important capabilities that need to go all the way out to the edge. Security is obviously one of them. Tenancy is obviously one of them; not in all cases, I don't need tenancy on every edge, but I will need it on some. There are a number of key items that we need to be able to move out, and there are some projects which are very central to how we do that. Ironic, of course, becomes pivotal in how we're gonna do this, because the edge isn't about VMs versus containers or anything like that. Bare metal is part of the edge. So is a VM, so is a container. I mean, let's move away from that argument and let's look at what we're actually trying to achieve there, and that is to create a ubiquitous solution that we can work towards. Yeah, I mean, what we're trying to create here is infrastructure that's spread over a wide area network.
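As an aside for readers following along: the Glance image-synchronization specs mentioned above are still in review, so the following is only a minimal sketch of the underlying idea, not the actual spec or Glance code. It assumes each site can report a catalog of image name to checksum pairs, and that the central site is authoritative; an image gets pushed to an edge site only if it is missing there or its checksum is stale.

```python
# Sketch of the image-synchronization idea behind the Glance specs.
# NOT the actual spec implementation -- just the core diff logic, assuming
# each site can report {image_name: checksum} for its local image store.

def plan_sync(central, edge):
    """Return the image names that must be pushed to the edge site.

    An image needs pushing if the edge has never seen it, or if its
    checksum differs from the central (authoritative) copy.
    """
    return sorted(
        name for name, checksum in central.items()
        if edge.get(name) != checksum
    )

central_catalog = {"cirros-0.4": "a1b2", "ubuntu-18.04": "c3d4", "centos-7": "e5f6"}
edge_catalog = {"cirros-0.4": "a1b2", "ubuntu-18.04": "stale"}

print(plan_sync(central_catalog, edge_catalog))
# -> ['centos-7', 'ubuntu-18.04']
```

In a real deployment the checksums would come from the Glance API and the push would be an actual image upload over the WAN; the interesting policy question, which the working group keeps coming back to, is whether a given site should pre-seed, pull on demand, or not store images at all.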
I know some of the definitions are defined by latency, in terms of how fast you can get to the edge. I personally think that's just one aspect of it. I think the harder problem to solve is getting the federated tools out to the edge and making a decision: how centralized is it, how decentralized is it, and how autonomously does that edge component work if it's cut off from the network? Unlike in a data center, where you can assume that the network is probably up 100% of the time (and if it's not, you have a serious problem), at the edge you cannot assume that. So you need to develop tools, and that's part of what we're working on: defining the gaps, how the tools have to change to fit the circumstances of the edge. Something I'd like to see us start working on more is bare metal provisioning as well. You mentioned Ironic; I'm assuming you specifically mean bare metal nodes at the edge, right? But there's also provisioning, right? If you have a thousand nodes, are we gonna ship the nodes out with the bits on them, which is kind of the current model? I don't think that that's a sustainable way to go. There are legal and regulatory issues with it too. Right, so how are we gonna handle this problem? Airship, I haven't really looked enough at it to know if that's what's gonna do it. But I think that that's a critical, critical problem: how do we do touch-free provisioning across the network, right? Now, as you mentioned, there is a spec up for Ironic to provision over layer 3. So there is work ongoing. I don't know the details. We should probably be talking to those guys. They are not here, guys. So we're talking amongst ourselves. We would actually like a Q&A; we wanna hear some questions from the audience, if possible, if anyone has any. We may be talking about the wrong things and you wanna hear about something else, for instance. I have a question. There are commercial solutions out there on the market.
For example, Adva has one of them. How do you see the commercial vendors who are shipping solutions, either compared to what you guys are trying to address, or collaborating with you? I am very familiar with Adva. Yeah, they have a great product. It was actually started sort of before edge was a thing. And I see it as a good feedback loop; I would like to see more vendors participating in the working group, to hear the requirements and also to add to the work. And just to add on to that: I mean, there's no point having an open source reference if it's not making its way into commercial products. If we're not able to use it in a commercial environment, then we're just making it as a hobby, and that's not really why we're here. So the way I see it is, the more we can provide into that ecosystem, the more they can leverage the capabilities we produce, the better. And more importantly, we welcome the... We're not an echo chamber, or we try not to be, anyhow. So Adva's a good example, too, of having built a solution before OpenStack really supported the edge. So they're leveraging OpenStack, but every node is basically its own OpenStack, and they have their own layer of software on top of that that talks to each individual cloud. So if you have a thousand nodes, you have a thousand OpenStacks. And it works. But this is why we're trying to analyze the gaps in OpenStack itself and provide a better way of doing things. Yeah, guys, two questions. I see very different requirements in terms of high availability, specifically related to how you build applications in the distributed environment, and how we embrace distributed computation in that sense, right? So from your perspective, and it's back to the point of whether your site is isolated or your edge is isolated, the autonomy, et cetera, right? What does the solution look like, or what are the inputs you see, specifically from the telcos, right?
Because the edge's primary target is probably going to be a service provider, right? They own the infrastructure. So what are the requirements you see as input? That's the first question, and the second question is related, from the pure protocol stack, to the storage, right? When you start distributing the data center, you have a lot of I/O going in and out. So is there any work that you know of trying to solve specifically that problem? Two questions. Yeah, let me answer at least partially both. So the first one, HA: yeah, absolutely, you need to do it. There are a lot of different ways that can be done. From the telco perspective, of course, we're most interested in HA of the network, the routing and the security and all that stuff. And you can do it at a number of different layers, in a number of different ways. And yes, it has to be done. But I see it from the telco perspective, and the other dimension is the other applications, particularly the IoT applications. They need HA as well, and I don't think that's been addressed at all. I know that the telcos are definitely addressing the HA requirements for the network piece, but the other piece, I don't know. And I think that's something that we could work on as the working group; I think that would be something that would be of value. And the second question was kind of lost. The I/O part, right? So you could imagine, if you start actually doing initial placement for every site, and you have, let's say, the centralized data center where the images are stored, somewhere centralized, you have to copy the data, right, over the WAN to the edge at the initial placement. The storage, yeah. Yeah, so in my experiments with Glance, I mean, you can't really get decent performance out of it. So I'm having a little trouble here, because I'm half deaf, but you're talking about Glance caching in particular? Generally, right?
So at initial placement, let's say you have a 200-gigabyte image, I don't know, that you need to actually copy over the WAN to the edge site with the servers. Yeah, you can't. I mean, it takes hours, you're absolutely right. So there are two answers, I think. One is you can preload, or you can sort of limit the amount of data that you're pushing over the wire. And the other thing is, I know there's a lot of work around this; I don't know if you've had a chance to take a look at what AWS is doing with the S3 gateway, and some of the other work that they're doing around caching and mapping data out to the edge and then back into their data center. So there's definitely work happening right now around addressing that problem. But you're right, because the network, it's not 100-gig bandwidth like you have in a data center. Yeah, but on the initial backhaul you take the hit, and then you get the benefit that it's cached locally. So especially, take the Netflix use case: there's the initial hit you take on the backhaul, but then at the edge, the proximity to the user is now improved. Yes, I agree, but the question is about the protocol stack, right? Because right now, if you're using iSCSI, it's not really optimized over the WAN, right? So if you have a drop on an iSCSI write or read, et cetera, you have to accommodate that. So for the pure storage protocol stack that you use to communicate with your data center, you need a protocol that can actually solve this problem, right? One that can actually be resilient to the packet loss or packet reordering or whatever you have in your WAN, right? And iSCSI is not the best protocol for that. Oh, iSCSI isn't, I agree. I know that Amazon's been doing some work with it; they're doing some proprietary stuff around that, but you're absolutely right, that's a gap. Hi, David and Beth, I'll try to take this one and actually promote something.
We have a 3:20 session called Pushing OpenStack and Ceph to the Edge, where we're gonna tackle exactly these topics, not just with Ceph specifically, but with Ceph as an example of open source. David touched on one critical element, which is Glance images. And guess what? You need to make Glance images available; you need to have metadata available at the edge site. There's a great move on this, not in AWS, but actually here in this room: we have the PTL of Glance sitting here. This is one of the key areas that we're developing as part of the edge working group. That's one aspect. Second aspect: don't think about edge as one layer. The service is not going from the core side all the way to the far edge; we're not trying to push IO from core to edge, right? In fact, when you move with your mobile device, you're gonna move from place to place, and you need to get the service from the closest local point of presence, right? And that point of presence may be just two servers, maybe one, at that point. But this is where you're gonna get the low latency, the very short hop. That's the problem we're trying to solve, not the much bigger problem of going all the way from the main data center of the cloud to the far edge. So if you have more questions in this area specifically, we encourage you to join us at 3:20 for that session. Thanks. Thanks for your question. I was gonna chime in and say there's work going on in this. I think one of the things that builds on that, that we're looking at, is essentially a topological view, coupled to a security view, of the different sites that we have. And this is something that's come up a few times; we've dug into the details of when it is okay to store images on a site, not only from a storage perspective, but also from a security perspective. There may be sites where you don't want to be storing a lot of information, because they don't have the physical security and things that you might take as given in a data center.
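To put rough numbers on the backhaul cost raised a moment ago (the 200-gigabyte image going over the WAN), the dominant term is simply size over bandwidth. This back-of-the-envelope sketch ignores protocol overhead and packet loss, so real WAN throughput will be worse; the link speeds are arbitrary examples, not figures from the panel.

```python
# Back-of-the-envelope: how long does it take to push an image over the WAN?
# Ignores protocol overhead and retransmission -- real throughput is worse.

def transfer_hours(size_gb, link_mbps):
    """Hours to move size_gb gigabytes over a link_mbps link."""
    megabits = size_gb * 8 * 1000  # 1 GB = 8000 Mb (decimal units)
    return megabits / link_mbps / 3600

for mbps in (50, 1000, 10000):
    print(f"200 GB over {mbps} Mbps: {transfer_hours(200, mbps):.2f} h")
```

Even over a full gigabit the first 200 GB costs roughly half an hour, and at typical backhaul rates it's the better part of a working day, which is why pre-seeding and local caching keep coming up in this discussion.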
And while I don't think we're yet at the point where we're ready to actually inject capabilities on those topics, we certainly have discussed them; they're captured in most of the working group activities that we're doing. And at least on the Keystone side, I know we're working on the topology and the security aspects, but we haven't yet overlaid, let's say, the physical aspects of the different sites onto that. But that's an area of interest and an area of work. And if you do have an interest, jump on the calls; it's something that we're working through, I think. Yeah, I'm acutely aware, because with my product, actually, we have security applications, and there are certain countries we literally cannot ship these images to. So we have to solve that problem another way. Can I ask a question now? So I have one question. I think the performance of the edge side is inferior to that of the data center, so I think a lightweight version of OpenStack is needed for deploying on the edge side. Are there any plans or movements for this lightweight OpenStack? It's been a calling card of this working group from the very beginning. What do we need in different sites? Which services are required to fulfill which capabilities on different sites? That's been something that we've been looking at a lot. I think Beth has been one of the stalwarts of that message as it has come down from AT&T, the lightweight OpenStack: how do we move that forwards? And it's certainly the case. Well, I'll let others speak; I've been speaking a lot. I can just add, I mean, thank you to Adva, because they did kind of the first big consolidation of cramming OpenStack into a box. So thank you to them. Yeah, it's a collection of services, so it's not by nature heavyweight. One of the exercises that we have done, though, is sizing. So the edge isn't one size fits all.
So depending on what your use case is, you may have these small boxes on a telephone pole that are running your control plane and compute all in one. Or, if you have a large caching edge node, you may have HA with three controllers and a full mini OpenStack. So it really depends on the sizing of your use case as well. But what you're talking about is definitely in the scope of what we're working on. I think the challenge comes in not so much with what the control plane is, but with what applications you wanna run. If I wanna run AI applications or video processing applications, then I have a need for FPGA or GPU access through that control plane. And that starts to drive requirements on what the control plane can provide, and that starts to put a weight on what you need in each site. I think that's the link that helps drive what we're defining as the control plane in each site. And it's, I guess, a little bit slow going to bring that out into these discussions. It's not driving things, but it's on everyone's mind. And we have to sort of extract that type of information: what am I actually gonna do there? Then I can start to talk about the capabilities that I need, and thus the services, and thus the weight of the control plane I have to deploy. Thank you. Yeah, we have discussions ongoing about what services you run in the different sites and what interfaces to use between these different sites. These are ongoing discussions, and you can find the results in the Edge Computing Group's wiki, in the, how is that called? The architecture one? Which one? The one that we're looking at with the MVP. I don't know by heart, but it's linked on the main Edge Computing Group wiki. You will find it. I will find the links. Cool, yeah. I have one question.
Is there any coordination with the ETSI MEC ISG? There's also a suggested framework there where, if you have a stateful application, they are considering a framework that allows the relocation of this application from one MEC host to the other. So do you consider this use case or not? That's definitely a use case. That's a firewall security use case, because they're stateful. And they need to be highly available. Yeah, so what ETSI MEC is about is the application layer, and so far we are only, let's say, dealing with the infra part of things. Yeah, but again, the framework is about having this VM mobility, I mean, moving the VM with the user. As the user goes, it's close to the roaming framework, but you have to move the application itself. Do you believe this is something that is doable, or? You can't move the entire VM on the fly over a WAN, so you have to solve it a different way. But that is a requirement in ETSI MEC. Yeah, I mean, what to say? It's feasible, but is it the way that you would do it if you implemented an application? For me, I would look to try and move metadata between VMs: I would proactively spin up a VM in the right location and then transition state between them. But it's a question of implementation; maybe I can't do that with the applications I'm dealing with, and thus I need to move a VM. As a user experience, I can't imagine that would be ideal, but still, it's certainly a feasible approach. I think we look at things in this group from a "what does the performance look like in different situations" perspective. We're looking at, for instance, highly stateful applications at the edge in a distributed environment: what are the types of characteristics that you see, both from an application performance perspective, but also a user perspective, when you're doing that?
And then there are the other types of applications, such as highly concurrent applications, which may be sharing information more, where you may get a different dynamic in different deployment types. And these are some of the things that we're trying to draw out. Hopefully we get some white papers out next year which sort of say: when you're trying to do something like this, here is an architectural deployment model that is very good for that. And then we can actually start to move that needle a little bit. Any further questions in the room? We still have seven minutes left, if anyone comes up with questions. What I wanted to ask: we talked about bare metal. And I also heard that, I'm not sure it's part of the 24 groups yet, but there's a new project forming in Open Compute, so open hardware. And I know that this is a software conference, but we are basically talking about open infrastructure. So I just wanted to ask your view on what edge means for hardware. Will this be specific hardware, or will software remain hardware agnostic? How do you see basically the two layers and how they work together? Well, I'd be interested to see what comes out of Open Compute in the context of the edge. When I think of an edge deployment, I think of thousands of sites, and I think of never going to touch them again, because that's going to cost a fortune, whereas Open Compute has a slightly different philosophy with regards to hardware and how it works. Maybe in a home environment, that could work really, really well, where the home user could go and fix stuff. I'm actually not aware of it, but I'm actually kind of excited to understand how that fits into the ecosystem. But hardware is extremely important. I'm not a hardware guy, and there are probably hardware guys sitting here that should maybe address the question. I work for a hardware company, but I am a software guy.
But OCP looks very exciting from what I understand, and I can talk at a high level: it's a very generic kind of hardware that can fit a variety of use cases. And hopefully, if you had a very small edge situation, you could deploy one node, a single point of failure, or if you had a larger situation, you could deploy HA. Something I'm also interested in, even lighter than that, is where ARM may take a place in the future, which I don't know much about, but that's something that's in the back of my mind always. So in OCP, there are contributions for an edge form factor, which would fit onto a pole or something. Yeah, so those are some of the factors you need to consider. And I think the hardware vendors are starting to address that, and I suspect there'll be more edge-ready hardware coming out for the harsh environments. Obviously, I mean, those exist today; you can get boxes that can be stuck on the sides of buildings. And a lot of the edge use cases are in harsh environments, and also environments that you don't necessarily want to send a truck out to: climbing to the top of a steeple on a church, because that's where all the cell equipment is. So yeah, those are the types of things you need to think about with edge. Is there anyone here from Linaro, by any chance? Raise your hands. No? Well, Linaro is doing interesting stuff with ARM in OpenStack, and that's something that I would follow up on and see where they're going. I think it's interesting. There's the boring edge, which is my baseband processing unit, and running those things in harsh environments. And that's the boring edge, because it's not generally available; it's very much the domain of whoever runs it. Then there's the maybe more exciting edge, which is where you start to look at data processing and video analytics, which may be considered more generally available. This is where innovators will come in and deploy new things and do cool new stuff.
And that part of the edge is actually a little more challenging from a hardware perspective, because you need specific hardware to do it, and you need that hardware at scale. I think what we'll see in the near future, well, if it's not happening already, is edge companies coming out with proprietary hardware solutions that address some of these verticals, which are seen as some of the really important use cases for essentially driving edge and the multi-tenant edge, as we call it. But I mean, hardware will impact us. And as soon as those things come out, and as soon as that becomes mainstream, we'll have to be here supporting that and making sure that our platform exposes those capabilities in useful ways and in safe ways. I just want to note that right now, most of the development in edge is happening in the telecom world. But that's only, I think, from my perspective, the base layer platform. The really exciting stuff, the futuristic mobile gaming, and everybody talks about self-driving cars and that kind of stuff, that sits on the base platform that the telecoms are laying down now. So I think we're going to start seeing a lot of that stuff happening in the next two, three years. You would say it's NFV heavy currently as well? Yeah. First of all, I want to thank you guys. This is a very important topic. And I do agree with Beth: a lot of what I see today is in the NFV world, all the requirements that are coming through. I 100% agree with you. My take is a little bit different. Are we establishing some specifications in terms of jitter and latency, to say that if the jitter is above a particular threshold, or the latency is above a particular threshold, the edge paradigm dies? The reason I say this is, I see customers asking for 400-kilometer, 1,000-kilometer distances; that's the kind of requirements that are coming through.
And I'm wondering if at some point we say, you know what, at this point the current edge definition, with the controllers being in the data center and the compute being at the very extreme perimeter, is not viable. Particularly when you have VNFs that have to write to volumes back in the data center. I think what we'll see is that we will start to classify sites and deployments, and they will have a viability to support certain types of workloads. And it may be that there are really strict latency and jitter requirements to run some types of workloads. As a telco guy, baseband processing is critical on that front. And some sites will be viable for that, and some sites won't. Then it comes down to a question of deployment, and that's why we all get so excited about orchestration and how we're going to orchestrate and automate all of this sort of stuff, and that's a whole other dialogue. But I think it comes down to being able to classify the sites and understand the sites' behaviors, so that we can then ensure that we're deploying the right workload into that environment. And we're not there yet; that's really a gap that we have to begin to fill. But that really only begins to be filled when we move into that deployment, that rollout, phase, because we don't know yet how it's going to look and how these things are going to behave. But it's a really good point, and something that is, let's say, a next phase for us. Thank you. And with that, we are out of time. But before you all run out of the room, I just would like to draw your attention to the Edge Computing Group wiki. You will find all sorts of information there about reference architectures, use cases, and all sorts of work items that we are doing. And I would also like to draw your attention to the forum sessions. Forum sessions are working sessions.
So those of you who would like to get involved with this group, or any of the OpenStack projects, or StarlingX: all of these are having edge-related forum sessions in the next two and a half days. So please look up the summit schedule, and hopefully we'll see you there. And I would like to thank the panelists for the great work that they've done. And thank you all for coming. Thank you.