Good morning, folks. Thanks for coming. We know it's very early, so we appreciate you coming over here. We've been told we need to be less than six feet apart, so we're going to be hugging through the whole presentation. We do have an exciting presentation for you today, and we hope it's going to be a good use of your time. We're going to talk about edge architectures with OpenStack. I'm Chris Janiszewski, a senior principal solutions architect working with customers. And I'm Darren Sartino, a principal solutions architect on the same team as Chris. Thinking about how to make the best use of your time, we decided to start with the demo. Some aspects of the presentation are still work in progress, so please bear that in mind, but hopefully the demo gods are with us. A couple of the other demos you may have seen were pre-recorded; we're going to try to do a live one, though we do have a recording as a fallback.

We're going to show you a sample edge architecture we deployed in our labs. This is something we see a lot of our customers use. We only have three sites in this example, but from the customer perspective we see hundreds of sites deployed the same way; it's pretty much rinse and repeat, so this should give you a good representation of how it looks. In our case we start with the central location, where we run the majority of the control plane and control services. We also have a set of hypervisors there, so Nova computes, Ceph storage running in the central location, and a pool of bare metal nodes that are going to be consumed by Ironic. For the first edge site, we simulate up to 100 milliseconds of round-trip latency between the sites. That second site has compute, so detached hypervisors, and localized storage: an individual Ceph cluster running at that site. It also has a pool of Ironic bare metal nodes that can be consumed by any type of OS or workload that needs direct access to bare metal. And finally, for the use case where you don't have enough space to deploy another Ceph cluster, we created a third edge location with a single compute node and just local storage. So we're trying to cover as much ground as possible with this tiny environment, if you will.

I'm going to switch to the OpenStack environment. You can see we're using Red Hat OpenStack in this case, but it should translate to other distributions as well. I'm going to start with the instances. If it's a little bit slow, please forgive us; we're over Wi-Fi and the lab is actually in North Carolina. We're running three virtual machines here: one in central, one in the first edge location with the localized Ceph cluster, and one in Edge 2. You can distinguish them by looking at the Availability Zone column in the middle; you can see they're in different availability zones. We're also deploying bare metal instances: we have a pool of bare metal at each of these locations, and in this case we deployed RHEL straight onto the bare metal, one instance in central and one at the edge.
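As a rough illustration of what that inventory looks like from the API side, here's a minimal sketch using the openstacksdk. The cloud name "edge-lab" is a hypothetical clouds.yaml entry, not the actual lab configuration.

```python
import openstack

# Hypothetical clouds.yaml entry pointing at the central Keystone endpoint.
conn = openstack.connect(cloud="edge-lab")

# Each site surfaces as a Nova availability zone (central, DCN1, DCN2, ...).
for zone in conn.compute.availability_zones():
    print(zone.name)

# Instances across all sites, with the AZ each one landed in
# (all_projects requires admin credentials).
for server in conn.compute.servers(all_projects=True):
    print(server.name, server.availability_zone, server.status)
```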
So how would this look from the user's perspective, how would they consume it? We're doing things through the GUI here, but the same applies to the API and the CLI. I'm going to spawn another edge VM, and you'll see that the user has access to multiple availability zones, right? They can pick one, and if you had 100 availability zones, they could all be visible here. In my case, I'm going to deploy this next VM in DCN1, which is my bigger edge site. I'm going to hit "No" on creating a new volume and use ephemeral storage in this case, although both Cinder and ephemeral storage are supported. Then I have a set of flavors I can take advantage of. This is only a Cirros VM I'm deploying, so I'm going to pick a very tiny flavor, but you can see I also have special flavors for things like bare metal instances that I might want to deploy at the edge. Moving to the last tab I really care about: networking. I could deploy tenant networking that spans all of my availability zones, but in my case I'm using what we call routed provider networks. These are provider networks attached to a single Neutron network, and they're smart enough to select the right subnet based on which edge location I'm deploying to. So the segments definitely span Layer 3, but in my case I've bundled them under the same Neutron network. Pretty simple; it looks just like having a flat architecture, except we now have multiple availability zones and can take advantage of them.

I'm going to do this quickly for bare metal, just to show you it's exactly the same process. Instead of availability zone DCN1, I have a special availability zone called baremetal. I'm going to pick a RHEL image here, and in order to select which site I want to pull the bare metal node from, I have special flavors for it. I pick the one called baremetal-edge, leave the rest the same, and just hit Launch, and it should go and deploy another one. So you can see the simplicity: it's the same as running OpenStack in a single site, right? It's totally abstracted from the users.

Let me quickly show you what else is managed by this edge architecture. If I go to my images, scroll down, and pick one of the images I'm using, you can see that not only can I take advantage of the compute, but my Glance images are also propagated across these multiple edge sites. So every time I deploy a VM, my Ceph and Glance are located at the edge location, and I don't have to pull that image over a possibly slow network. That's one of the advantages of this architecture as well. In the case of the architecture where Edge 2 had no storage, that image would come from the central location. Yeah, absolutely. Another thing: we said it's not just ephemeral storage, and in this example I have volumes created in each of the availability zones, one in central and one in DCN1. Quickly, a couple more things I want to show. If we go to Admin and then System Information, we showed you the volumes; this is confirmation that we have individual Cinder volume services running in central and in DCN1.
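Since the same flow works against the API, here's a minimal sketch with the openstacksdk of roughly what was just done in the GUI: a routed provider network with one segment per site, a Cirros VM pinned to the DCN1 availability zone, and a bare metal instance selected through a dedicated flavor. All names, physnets, VLAN IDs, and CIDRs here are illustrative assumptions, not the demo's actual values.

```python
import openstack

conn = openstack.connect(cloud="edge-lab")  # hypothetical cloud entry

# Routed provider network: one Neutron network, one segment per site.
net = conn.network.create_network(
    name="edge-routed",
    provider_network_type="vlan",
    provider_physical_network="physnet-central",  # first segment: central
    provider_segmentation_id=101,
)
central = next(conn.network.segments(network_id=net.id))  # the initial segment
dcn1 = conn.network.create_segment(
    network_id=net.id,
    network_type="vlan",
    physical_network="physnet-dcn1",  # second segment: the edge site
    segmentation_id=102,
)
# One subnet per segment; Neutron picks the right one per site at boot time.
conn.network.create_subnet(network_id=net.id, segment_id=central.id,
                           ip_version=4, cidr="198.51.100.0/24")
conn.network.create_subnet(network_id=net.id, segment_id=dcn1.id,
                           ip_version=4, cidr="192.0.2.0/24")

# A VM at the edge: site selection is just the availability zone.
vm = conn.compute.create_server(
    name="edge-vm",
    image_id=conn.image.find_image("cirros").id,
    flavor_id=conn.compute.find_flavor("m1.tiny").id,
    networks=[{"uuid": net.id}],
    availability_zone="DCN1",
)
conn.compute.wait_for_server(vm)

# Bare metal is the same call: the "baremetal" AZ plus a flavor that maps
# to an Ironic resource class at the chosen site.
conn.compute.create_server(
    name="edge-bm",
    image_id=conn.image.find_image("rhel").id,
    flavor_id=conn.compute.find_flavor("baremetal-edge").id,
    networks=[{"uuid": net.id}],
    availability_zone="baremetal",
)
```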
Then if I go under Ironic bare metal provisioning, which is the section where you manage the bare metal nodes, you can see I have a pool of bare metal nodes, some of them in the central location and some at my edge sites. And this is the one I just spawned; it's provisioning. Another really good advantage of running OpenStack in an edge architecture is that OpenStack has very low overhead. If you look at how big these compute nodes and even my controllers actually are, they're super tiny. In my lab, I designated 12 gigs of RAM per compute node to run this. That's one of the reasons we see customers and users we work with pick OpenStack over something more heavyweight like Kubernetes: the overhead of OpenStack is way lower. And finally, one more thing I want to show you before I turn it over to Darren: observability. It's another very important aspect. Here we take advantage of Prometheus for gathering the data from all of the locations and shipping it to a single pane of glass, if you will. I can switch between the different sites and drill down into the resource utilization for each site. So that kind of concludes the demo; I hope you liked it.
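On the observability piece, here's a rough sketch of the kind of query that single pane of glass runs, assuming a central Prometheus that scrapes or federates each site and attaches a site label. The endpoint, metric, and label names are assumptions for illustration, not the lab's actual setup.

```python
import requests

PROM = "http://prometheus-central.example.com:9090"  # hypothetical endpoint

# Ask the central Prometheus for per-site memory usage; the "site" label
# is assumed to be attached by each site's scrape/federation config.
resp = requests.get(
    f"{PROM}/api/v1/query",
    params={"query": "sum by (site) (node_memory_Active_bytes)"},
    timeout=10,
)
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    print(sample["metric"].get("site", "unknown"), sample["value"][1])
```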
I'm going to switch back to the presentation, and we're going to tell you a little bit more about how the sausage is made and how we make this all happen.

Okay, so what is the edge? Edge can mean many different things to a lot of different customers. A lot of people hear edge and instantly think telcos, because telcos talk about their vRANs and radio access networks and so on. The goal with edge is basically to push your workloads out closer to your customers, closer to where it matters most. But it spans many different industries: anything from manufacturing to transportation. We've seen cruise ships, where they basically put little clusters out on the ships to do work there. Finance. It goes all the way out to handheld devices. Most of the time, when people think edge, they think geographic edge, pushing things out across geography, because the biggest gap there is the latency to the end users. I don't know if anyone has kids, but my kids' patience is like zero. It seems like with every generation that comes along, any additional millisecond bothers them more. So edge is about pushing things out and making delivery faster for the end customer.

When you start talking about edge, there are a bunch of different considerations, because now you have distributed sites, which come with distributed challenges and problems. From the hardware perspective, you now have multiple points of presence around the country or around the globe. For your workloads, you need to figure out what your risk assessments are at the edge, because if you're doing any kind of resiliency there, you need to account for a number of servers for failover situations. Also on the hardware side, you have to maintain those servers, so you need boots on the ground where those servers are. On the networking side, there's obviously the latency issue. It's a balance between getting things to run fast for your customer and getting data back to your central location, where you may want to process it and do analytics. And there's a ton of other routing to think about: no one owns all of their cables from the edge all the way back to central. Telcos are maybe the one exception to that, but you're always trusting someone else. So when troubleshooting problems and outages there, you need to account for things that are outside of your control, which can definitely have an impact on SLAs and on trying to make those kinds of calculations. Storage: we touched a little on that in the demo. You can have centralized storage or storage at the edge. If you don't have storage at the edge and you're running a workload and the node goes down, you don't have any kind of resiliency. That could be problematic, but depending on the type of workload, the risk-versus-cost trade-off may be worth it. The footprint at the edge also needs to be taken into account, because if you are doing resilient storage, you're going to need at least three nodes, unless you can afford that loss. And the last consideration here is: where is your edge? If you're a global enterprise, you could say your edge is global, but realistically it's not; you're going to break it down into regions due to latency and redundancy. And your edge may not even be geographically related; it could actually be in your own data center, which we'll mention a bit later. The other advantage of doing edge versus multiple independent deployments is observability: you can pull up a single pane of glass, view DCN1 and DCN2, and get good visibility into what your utilization is.

So now we're going to pass back to Chris to talk about various OpenStack deployment topologies. Yeah, thanks, Darren. We're going to go over four architectures that we see customers consider for edge deployments. That doesn't mean version one is better than version two; there are different use cases, and we just want to walk through the four of them and tell you the differences. Starting with a very traditional flat deployment: this is what the majority of our customers are running. They're not running any type of edge architecture; they're just running individual clusters, and instead of one, they have multiple smaller sites, either in the same data center or across multiple geographical locations. This is our starting point, and there are pros and cons to going this way. It's a proven architecture; we've seen hundreds of customers deploy it, so you get that peace of mind. You're not getting into something that hasn't been fully tested or adopted by the larger community of OpenStack users. The disadvantage is that these are totally disconnected environments, so you need to take care yourself of putting something in place that connects them and gives you observability across them. That can be accomplished with something like Prometheus and Grafana, but from the API perspective you have to handle it yourself, with, I don't know, maybe something like Terraform or Ansible to interact with all of these clouds individually.
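Here's a minimal sketch of what that client-side stitching can look like with the openstacksdk: each site is a fully independent cloud entry in clouds.yaml, and anything cross-site is a loop you own. The cloud names are hypothetical.

```python
import openstack

# Three disconnected clouds, one clouds.yaml entry each; there's no shared
# control plane, so any "global" view is assembled client-side.
for cloud in ("site-a", "site-b", "site-c"):
    conn = openstack.connect(cloud=cloud)
    for server in conn.compute.servers():
        print(cloud, server.name, server.status)
```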
And then version one, if you will, is what we presented in the demo a little earlier, where you have a centralized control plane in one of the locations. Whether that control-plane site has additional compute is entirely up to you. We see customers dedicating one site just for control and making it as resilient as possible. With that said, this one control plane is your single point of failure: if you lose that site, your users will no longer have access to the APIs. Your workloads at the other sites will stay active and work fine, especially because in this architecture we're also running an individual Ceph cluster at each of the locations, but that control plane is still pretty critical. On the requirements we mentioned in the lab: from the Red Hat perspective, as a company we support 100 milliseconds of round-trip latency between these sites. At the same time, we have tried to push this even beyond 100 milliseconds, with some really positive results. So if your use case needs wider distribution than that, it may well be possible from the OpenStack perspective.

Then version three, or version two, I guess, if we start counting from zero. This is almost the same as the one before, but imagine deploying the edge architecture inside a single data center, where the latencies between the sites are single digit or low double digit milliseconds. If you can do that, you take the edge architecture and deploy it to gain other benefits for running your workloads, and it opens up other possibilities: you can now stretch your control plane across multiple edge locations. Again, those edge locations are limited from the latency perspective, but they buy you much better resiliency. In this architecture, you can kill any of your sites and the entire cluster stays operational. So we see this architecture implemented by customers who are looking to improve their SLAs, et cetera. And finally, the last one is an improvement on the previous version. Instead of stretching the controllers over the same, let's say, Layer 2 network, now let's introduce BGP into the mix: instead of the virtual IPs being managed by your controllers over VRRP, a BGP agent runs at each of the locations and manages failover of the VIPs if one of those locations goes down.

OpenStack is a number of different services, so when you're considering an edge deployment, you need to consider the different services that exist within OpenStack. In a typical DCN deployment, your services are distributed as seen here on the screen. Actually, if we compare this to the demo we did, edge one and two are swapped: in the demo, edge two had the storage and everything within the one DCN. You also need to consider the traffic that flows between them. You have your edge workload traffic, but you also have the internal API traffic between the different services. Most of those timeouts can be adjusted, so as Chris mentioned, we support the 100 millisecond latency out of the box, and there are some configuration tweaks that can be made for the other services to ensure you meet whatever SLAs you have and don't hit any failures.
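One way to sanity-check that service distribution, sketched with the openstacksdk (admin credentials and the "edge-lab" cloud name are assumptions): list the Nova services and the hosts behind each availability zone. In a DCN layout you'd expect nova-compute entries for central plus each edge site.

```python
import openstack

conn = openstack.connect(cloud="edge-lab")  # hypothetical admin cloud entry

# Which Nova services run where (central vs. each DCN site).
for svc in conn.compute.services():
    print(svc.binary, svc.host, svc.state)

# Availability zones with the hosts behind them (details need admin).
for zone in conn.compute.availability_zones(details=True):
    print(zone.name, sorted((zone.hosts or {}).keys()))
```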
One more note here: there's a certain amount of flexibility as well. We're giving you the example of what we see the majority of customers doing, so three different types of edge where we push different services to different sites. But, like in the demo, for instance, we've been running Ironic, and those Ironic services have been running in both the central location and Edge 1 in our case. You can think of creative ways of pushing services closer to your end users based on your requirements. Yeah. Thanks, Chris.

So, OpenStack versus Kubernetes for the edge. We get this question a lot, and there's no single answer that solves everything; it's really dependent upon the types of workloads you're running at the edge. This just gives a rough example of the kind of thing where we would typically lean towards OpenStack: localized services like print servers, any kind of resource-intensive workloads, or multi-tenancy-specific requirements for traffic segmentation. When you're talking about workloads like web servers, database servers, or specific types of application servers at the edge, perhaps Kubernetes on bare metal at the edge is a better option for you. We talk to customers a lot about which one works for them. And then there's also the possibility that a DCN deployment with Kubernetes on top of it may be the best option. One thing a lot of customers don't look at until later is outside considerations, like vendor support. If you're working with telcos and with vendors that have VNFs, and you're running VNFs at the edge, those vendors may not have CNFs to run in your containerized deployment at the edge. And perhaps your application architecture has certain demands in the way it's deployed where OpenStack may be a better option than Kubernetes at the edge.

So, one of the options I mentioned was utilizing both OpenStack and Kubernetes. This has the advantage that you get scalability and flexibility at your infrastructure layer as well as your platform layer. It's really no different than deploying Kubernetes on VMs out in the public cloud, where you get that scalability and resiliency. And within the deployment itself, for instance, at this edge site here, I'm showing two different OpenShift cluster deployments, delineated by the dotted lines. You can do multi-tenancy at the OpenStack layer and then deploy multiple OpenShift clusters within the OpenStack tenants, to give you the multi-tenancy segmentation that you may need in OpenShift but can't necessarily get from the networking topology. Yeah, and just to add to that: the advantage of merging these two worlds, these two open source architectures, together is that they both have their strengths and weaknesses. OpenStack not only provides that multi-tenancy but allows you to mix, for example, bare metal workers with virtualized masters, which Kubernetes by itself cannot do; it cannot virtualize on itself, at least right now. Getting these two together has more benefits than running them separately. Absolutely.
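A minimal sketch of that tenant separation at the OpenStack layer, using the openstacksdk: one Keystone project per OpenShift or Kubernetes cluster, each with its own network. The project and network names are hypothetical, and this is just the isolation scaffolding, not a cluster installation.

```python
import openstack

conn = openstack.connect(cloud="edge-lab")  # hypothetical admin cloud entry

# One project per cluster: separate quotas, networks, security groups.
project = conn.identity.create_project(
    name="ocp-cluster-a",
    description="Tenant for OpenShift cluster A at the DCN1 edge site",
)

# A tenant network scoped to that project; the cluster's machines (VMs or
# Ironic bare metal nodes) attach here, isolated from other tenants.
net = conn.network.create_network(name="ocp-a-machines", project_id=project.id)
conn.network.create_subnet(
    network_id=net.id, project_id=project.id,
    ip_version=4, cidr="10.0.0.0/16",
)
```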
So, just to go back to where we started: what is edge? By show of hands, who thinks edge is a geographic topology? Good, because it is definitely an architecture. And this is one of the things that was listed in Tech Preview: the BGP support we're working on replaces the VRRP requirement on Layer 2. What we had was a customer that wanted OpenStack, and within their single data center they had three separate network fabrics. They wanted to be able to deploy their controllers in the same data center and have resiliency in case of a single fabric failure. So we're putting a lot of engineering into replacing VRRP with a BGP agent, which removes that Layer 2 requirement from the control plane layer. So with 17.1, is it Tech Preview or GA? I think it's GA in 17.1. It's not released yet, but it's supposed to be GA. But there are still some caveats with it. There is still a 10 millisecond latency requirement between the controllers, but that's easily achievable within a single data center. And I think a lot of the requirements around that deployment topology come down to what we have or haven't tested before we actually release, so those requirements may change between now and then. Anything you want to add to that, Chris? No, I think that's it, yeah.

So I think the bottom line is that the edge architecture can be distributed geographically, and there's no single architecture to solve them all. If you need to push workloads out and can accept the connection issues between these different sites, there are architectures for that; but you can also twist this architecture a little, deploy it in a single data center, and get the additional benefit of having your control plane and compute distributed across fabrics. All right, thank you. Do we have some questions? We have about two more minutes for questions, but otherwise, thanks so much for being here. If someone wants to come up to the microphone and ask questions in front of everybody, go ahead; we're also going to be at the booth after this session if you want to come and talk to us. Can you walk to the microphone if you don't mind? Thanks so much.

Actually, I joined late, but if I interpret correctly, you're suggesting that if you have to deploy edge use cases, is it recommended, or beneficial, to deploy Kubernetes on OpenStack or bare metal Kubernetes? Yeah, so there's no one-or-the-other; there's no answer for that without knowing what the workloads are, what your goals are, what your business processes are. It's not an easy call to say just do bare metal Kubernetes versus OpenStack with Kubernetes on top of it. Was that the question, or no? So I think you're asking for the differences between deploying Kubernetes straight on bare metal versus bare metal with OpenStack, right? As Darren mentioned, there are pros and cons. I actually had a meeting with a customer yesterday, and they selected the route of deploying Kubernetes with OpenStack Ironic. The reason they selected that is they wanted their cloud infrastructure to be independent of the Kubernetes distribution. They not only want to deploy Kubernetes at the edge; they have other workloads that would be handled by OpenStack Ironic, whereas something like Metal3, which gets bundled with Kubernetes, is really focused just on Kubernetes and that's it. So, for them, they want an agnostic bare metal platform that can deploy anything at the edge, whether that's Kubernetes or not, and to stay vendor agnostic for that Kubernetes: they may want Kubernetes from Red Hat and maybe from Google and someone else. If you use Ironic for this use case, you have that ability. If you stick to a vendor, that vendor will typically give you a solution that only deploys Kubernetes of their flavor, if that makes sense. So that's one of the reasons, but there may be other use cases for it as well.
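To illustrate that "agnostic bare metal pool" point, a small openstacksdk sketch (the cloud name is hypothetical again): the Ironic inventory is just nodes with provision states, regardless of whether what lands on them is RHEL, a Kubernetes node image, or something else.

```python
import openstack

conn = openstack.connect(cloud="edge-lab")  # hypothetical cloud entry

# The Ironic pool, independent of any Kubernetes distribution. Nodes that
# Nova has claimed for an instance carry that instance's ID.
for node in conn.baremetal.nodes():
    print(node.name, node.provision_state, node.instance_id)
```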
Thank you. Thanks for the question. Do you mind coming to the microphone? Sorry. I think so, yeah.

So for the demo you showed, you have a central location and two edges. Between central and the edge, are they routed, or are there Layer 2 network requirements? For example, in OpenStack you have a private network, right? Is this network Layer 2 stretched between central and edge, or are they routed? So in the demo here, no, because our entire control stack is in central; it's just Layer 3 connectivity between the edge and central for what we did in the demo. Okay, so basically the data centers do not have a Layer 2 interconnect requirement, right? They're purely Layer 3 routed. Correct, because the control plane is all deployed in central. If we wanted to deploy the control plane across the different data centers in the current deployment, then because of the VRRP requirements we would need Layer 2 connectivity between them. But since we're centralizing all of the control functions in a single data center, there's no Layer 2 requirement between the two data centers. Okay, and between the two edges, the different nodes can be in different subnets, right? There's no requirement for them to be in the same Layer 2? No, they don't have to be the same. In our demo they were actually not in the same subnet; I wanted to keep them relatively close just to take advantage of what we call the routed provider network. That's the concept of a provider network that's smart enough to pick the right subnet for the right edge site. But if you just want totally independent Layer 2 networks in each of these sites, that's perfectly fine as well. That was in the demo, when he showed the networking and it showed the different subnets within the same network: each one is a separate subnet, so a separate Layer 3 network.

All right, I think we're one minute over. Thank you so much. We're going to be at the booth; please see us if you have any more questions. Otherwise, thanks so much for coming. Thank you.