All right. Hello. This is Kelsey Hightower here, and I'm going to be moderating this CNCF on-demand webinar. Today we're going to be talking about VPC networking beyond the cloud. Now, why is VPC so important? For the last seven years, as you all may know, I've been at Google Cloud, and I've had a lot of jobs before then, and I know what it's like to work inside of a data center. You rack and stack servers, you create your leaf-and-spine network, you've got routers, you've got top-of-rack switches, and I've made my fair share of Cat5 cables. In that architecture, automation is key, I get it. We need to provision all of those ports, maybe set up VLANs, but for the most part a lot of our automation tools are doing a one-to-one mapping of the standard configuration. Now, if you're like me and you have some experience in the cloud, things are very different when you get there. There is no concept of ports and VLANs. Those things are hidden from you, and instead you get a new abstraction, and for years we've just been thinking about that abstraction as the VPC. When you create a VM, you just attach it to a VPC. You create a Kubernetes cluster, again, you just attach it to a VPC, and typically that VPC is going to give you an IP address and deal with any other routing concerns you may have. When I think back to people coming from on-prem to the cloud, one of the big differences we actually don't talk about enough, we always talk about the differences in compute and VMs, maybe even the differences in load balancers and security things like IAM, but we hardly ever talk about what I think is probably one of the most important components: that VPC abstraction that's just let us focus on our apps and our workloads. So to help me dive into whether this is even possible, I want to introduce Alex from Netris to give us a deep dive on his company, his product, and their ambition to bring VPC anywhere, including possibly your data center. With that, we welcome Alex to the stage. Hi, Kelsey. Hi, everyone. Thanks for this nice introduction. My background is many years in traditional network engineering, almost 20 years designing and architecting large-scale data center networks. Before we get into the environment and start configuring things, I'd like to give a little bit of context, starting with the evolution of networking. Networking started with the CLI, and then over time we've seen SDN, software-defined networking, trying to make networking more programmatic for engineers to consume. The next step, pretty recently, was intent-based networking. The thing is, all of these technologies are great for traditional data center and telco environments, but when you need to build cloud-native applications with DevOps methodologies, you need a VPC. Those technologies were not fundamentally made for this. VPC is what we're seeing in the cloud, and VPC is what we need. And if we look at compute infrastructure market growth, we can see that not only is public cloud growing, but the bare metal cloud market is growing, edge is growing like crazy, and even the traditional data center market is growing. Why is this happening? Because we need lots of apps. We have lots of data, and we want apps for everything and anything. And although most apps go to public cloud, public cloud is not a one-size-fits-all solution.
Because in some cases regulations require us to take some apps elsewhere. In some cases it's the high cost, especially at scale. And some applications have technical requirements like latency, like machine learning applications, or applications dealing with transient data. So my main point is that applications are highly distributed. We have lots of edge use cases. This is how things are today, and this is how things will always be. Some applications will be in public cloud, some on bare metal cloud, some at the edge, and some in the traditional data center. Now, this means that engineers have to deploy, maintain, and scale applications in all four types of environments. In public cloud, it's fairly convenient to do things programmatically because of the VPC. It's declarative, it's quick and safe, it's designed to help engineers be insanely great. I even think that cloud popularity is very much attributable to the VPC; the VPC is what enabled public cloud. When we think about the other environments, bare metal cloud, edge compute, and the traditional data center, those environments are based on the traditional network operations model. There's a lot of complexity at best, maybe some homegrown solution that's different from organization to organization, and we still see lots of silos: DevOps engineers, NetOps engineers, network engineers. Things take a lot of time and it's hard. So we started Netris to address this problem: to create software that brings VPC networking everywhere, to bare metal cloud, to edge compute, and to the traditional data center; to make those environments look like a VPC and give engineers a similar operational model everywhere. Look at this. This is what networking looks like in AWS, and it's very similar in any public cloud provider. And this is what networking looks like in a physical data center. DevOps engineers don't need another API. DevOps engineers need VPC-type networking for on-prem. So basically, this is what we're trying to do: take that physical network and make it look very much like a VPC. Here's the concept. We have this thing called SoftGate, which you can think of as a VPC gateway. It's a Linux machine running FRR, WireGuard, and different Linux networking tools and software, mostly open source. A Netris SoftGate sits on top of any physical network. It's just software, just a machine. You can have that machine in any Cisco or Juniper network environment, or in a bare metal cloud like Equinix; it doesn't really matter. That's the machine which does packet forwarding. And then there's the Netris controller, which has a web console very similar to a public cloud's, with a declarative mindset, and of course Terraform integration, Kubernetes integration, and a REST API. The idea is that you deal with this controller, and the controller automatically programs the VPC gateways, the SoftGate nodes, to make the network work. And we'll see how this works in action soon. All right, this is a perfect overview. And I love that image of the telco closet where all the wires are hanging. I'm pretty sure people listening to or watching this right now are probably responsible for creating that mess. And we all know the value of abstractions. And I think you really dialed it in when we start to think about taking all those components.
I'm going to remind everyone: all of those components are necessary if you want a real working network. But what isn't necessary is leaking that complexity to everyone else. So having that VPC abstraction layer gets us back to simple primitives that we can actually use with our networks. Now, one thing I asked before we get into this demo was: let's not just show the VPC and a bunch of IP addresses. I really want to call out very common infrastructures. When we think about VPCs these days, especially for this audience, one common architecture, and I think if you go to the next slide you have a nice diagram of it, is this idea that, hey, I want to stand up something like a Kubernetes cluster in my own environment. And I know how challenging that can be, especially from my days at CoreOS, where we were really trying to give people a bunch of nodes and they had to go figure out how to integrate them into their networking. Kubernetes-based networking has always been a challenge in most of those traditional environments. So maybe you can walk me through this diagram of the setup, and let's try to educate those coming from a traditional networking background, and maybe people who are unfamiliar with most of the networking architectures out there. Yeah, sure. So for this session we've created this environment where we took a traditional network, basically a Cisco switch in the middle. We have connected three physical machines where we will run our Rancher Harvester hypervisor nodes. We have connected two other physical machines for SoftGate 1 and SoftGate 2; those will be highly available VPC gateways. We have one more machine where we run two controllers, the Netris controller and the Rancher controller. Each controller is just a K3s cluster, so you can easily run two controllers on the same machine. And we have this internet connectivity. Now, this internet connectivity can be a physical cable coming from the ISP with a range of IP addresses, physically plugged into that Cisco switch. Or, if it's a brownfield environment, it can be a cable coming from traditional enterprise border routers. It doesn't really matter. We just need some sort of internet connectivity to peer our VPC network with the rest of the world. Now, this whole thing is entirely based on standard protocols, so we could have connected a bunch of VMware ESXi nodes, we could have connected bare metal nodes. Just to keep it simple, we will stick to three nodes for compute, two nodes for SoftGate, and one node for the controller. But this can scale to basically any size. Yeah, so I think if I were to summarize, and correct me if I'm wrong here: if I were to walk into an existing data center and look to my left, there's some NetApp storage, some VMware on HP blade chassis, and all of that is working fine, right? And I look at the top of that rack, there's probably a Cisco or Juniper switch or router combination. So everything is working. That's what we mean by brownfield: things are already there, you're not starting from scratch. And then I decide, you know what, I want to give this whole cloud native thing a try. Maybe I'm going to go buy some new hardware, and let's walk through this diagram. I get a rack right next to this other rack. And instead of going out and buying very expensive networking gear, I go get some standard, maybe network-ready commodity hardware, and I rack two of them in the rack.
So at the top, you have the two. I get that uplink cable from my network team that's ready to go, it's live, and I plug it into one of the SoftGates. The next thing I'm doing is racking maybe a couple of those bare metal or whatever servers that I have. And then maybe I add one more just to host that Rancher and Netris controller set. So if I step back and look at this rack, I have this clean setup of dedicated worker nodes, a controller that will be the orchestration layer for the other ones, and two commodity network devices. And at this point, I guess, I just need to get the software installed to make all of this come alive. Is that an accurate description of someone rolling this together with their own machines in an on-premises data center? Yeah, that's very close to how people usually test this. In some cases you can have a rack of new equipment, and you can always peer that with the existing network. That's one way to go. Another way, even if it's just a trial and you don't really want to make a lot of changes, if it's just an experiment, "do I want this solution? I don't know, I want to try," right? And like you described, there are a bunch of racks with HP storage, Dell storage, whatever. You can walk to your infrastructure team and ask for five or six blank servers where you will install operating systems. And you will need to ask your networking team to allocate you a range of VLAN IDs, VLAN IDs that will not conflict or overlap with other things on the network. And I can actually show that. So this is the Netris controller graphical user interface. When you install it for the first time, you see this site; a site is like a region. And this is the default configuration, I didn't change anything here. You can see it says VLAN range 700 to 900. This means that in this case Netris will stick to using only these VLAN IDs. But if your network team is happy with a different range, that's totally fine; just edit and type in the right range. So that's it. Basically, VPC services will stick to using these VLAN IDs. Whenever a traffic exchange happens between SoftGate 1, for example, and Harvester 3, in the middle it will be encapsulated using one of these VLAN IDs. And that's how traffic will pass through the existing enterprise switch without requiring the network team to make big changes. That's the whole idea. For every service, you only deal with the SoftGates, with Netris, with components that are plugged into the VPC. But the moment VPC traffic needs to travel across the traditional network, it gets encapsulated into one of those VLAN IDs, goes through that network, and then gets decapsulated on the other side. So look, we already jumped into the demo; I think we should just pop over there and really go through this. We just talked about that base install: the hardware is now mounted, we've got our configuration in place. So now, when I go to a cloud provider, and you had a nice screenshot in the presentation, typically when I create a VPC I tend to be presented with a list of subnets that I can choose from, either for VMs or for various clusters that I want to attach to them. Maybe you can walk me through what that looks like from this perspective. Yeah, that's a great question. So there's this IPAM section where all the IP addresses are registered. You can see this list of private IP addresses, /20s; those are configured there by default, I didn't create them.
And whenever I want to create a new virtual network, I can use one of these /20 addresses. Now, I have created a few networks already to make this whole thing work, but we will create a couple of new ones soon. The very first network is this network called hypervisor, a VNet. We call them VNets, virtual networks. And we can see it is using this /20 subnet from that list. This is where the Rancher Harvester hypervisors live, first, second, third, and you can see that those machines have received IP addresses from that network. Sorry, these three machines. But they also need to have some kind of a gateway, which is SoftGate 1 and SoftGate 2 physically. And we can see in our inventory that SoftGate 1 and SoftGate 2 are green, and we can see traffic, memory, and other utilization. But the nice thing is that all of this is happening behind the scenes. I didn't configure these things. What I did was only create this VNet and attach that IP range, and that's it. My Rancher Harvester hypervisors got this minimal connectivity so the nodes can talk to each other, can reach the controller, and can make the cluster work. So it looks pretty straightforward. I create a VPC, I get a range of subnets, then I can come over here and allocate any of those ranges. Like you showed earlier, those hypervisors are the low-level nodes, but you were in Harvester. So do we have to tell Harvester anything about this network topology? Like, how does it know that these networks are something it should be using? Is there any configuration that needs to be done on the Harvester side? I'm assuming Harvester is this thing that helps us provision things, so is there anything I have to plug in on the Harvester side to tell it about these subnets at all? So I have one service, one Kubernetes cluster called Kubernetes Prod1, that's up and running there. But let's create one more, a new one, and I will describe the process and what's happening behind the scenes. Let's say we want to create a Kubernetes test cluster. I select the site, we only have a single region in this case. We attribute this to the DevOps team, and we pick the next available range of IP addresses, the next /20 that's available. The VLAN ID we leave assigned automatically, so Netris will choose one of the available VLAN IDs. We click add, and now this is provisioning; it will finish soon. So everything starts from creating the VNet here. Now, when this is provisioned, we need to create a network inside Harvester, and we'll walk through it step by step. I go to the virtualization manager, then the Harvester cluster, then advanced networks. You can see the network that I added before, but now I'm going to add a new one. I paste the name here, and I need the VLAN ID that Netris has generated, so I type the same VLAN ID here, 703. Now both endpoints know which VLAN ID to use, so traffic reaches the right place. We click create. If I'm looking at this correctly, this feels like the same flow you typically do with a VMware setup, right? Like, if I bring in ESXi, I have to bring in this networking information. So it sounds like Harvester is this true hypervisor layer that's going to let us create VMs or whatever, and this is where we plug in those network settings. Yeah, absolutely. The flow would be the same if you were doing this with VMware.
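For those curious what that Harvester step creates under the hood: a Harvester VLAN network is backed by a Multus NetworkAttachmentDefinition that uses the bridge CNI plugin with the VLAN ID you typed in. Here is a minimal sketch, assuming the demo's VLAN 703; the object name and bridge interface are illustrative and will vary by Harvester version:

```yaml
# Rough sketch of the NetworkAttachmentDefinition behind a Harvester VLAN network.
# Only the VLAN ID (703, allocated by Netris) is taken from the demo; the object
# name and bridge interface are illustrative.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: kubernetes-test
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "harvester-br0",
      "promiscMode": true,
      "vlan": 703,
      "ipam": {}
    }
```

The only detail that has to line up is the VLAN ID: as long as it matches what Netris assigned to the VNet, tagged traffic from the VMs lands in the right virtual network on the other side of the switch.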
Okay, now at this point, when we have the network ready, we can go to cluster management and try to create a new cluster: Kubernetes test. Let's ask for three machines, default namespace. Now I'm going to tell Rancher which network to use, because now both networks are available, and here's the name of that new network. Okay. Now, while that's creating, this feels very similar to an EKS, AKS, or GKE workflow, right? You come here, and I'm assuming Harvester is doing the hard work of creating the necessary VMs, programming them to be on the right network, in this case this VLAN ID, and tagging all of the network traffic. So as it flows through, all the intermediate components will do the right thing in terms of handling those network packets, and it gives me this feeling of, hey, as long as you have a network in place, ideally you'll have the ability to create clusters and just attach them to the network you've assigned. True. Why are we getting this error? Let's refresh the interface, maybe. Seems like some kind of front-end error, but it's actually creating; it's actually starting the process. Well, we'll just leave that as feedback for the Harvester team. And like all good demos, it looks like we already have a cluster that's going. So I'm pretty sure I can imagine how that one will be provisioned: there'll be some etcd, there'll be some worker nodes, and then ideally you'll probably be able to pull down a kubeconfig. Is there a way to get a kubeconfig from that Prod1 cluster up there while we're waiting for this other one to finish? Absolutely. We can just quickly check one thing first. We can go to Harvester for one moment and see what's happening underneath. If we click on virtual machines, all right, we can see that these three virtual machines were created about two minutes ago. We can see that they have even received IP addresses automatically, and those are the right IP addresses, the ones we selected here. We didn't tell Harvester this information; it just happens automatically. Okay, so going back to the cluster. Now, I'm assuming VMs are being installed and Kubernetes is being installed, but that takes some time. So rather than wait, like you suggested, let's go look at this other cluster, Kubernetes Prod1. And yes, we can download the kubeconfig for the first cluster here. And I can do... So I guess at this point, you're going to configure your kubeconfig with those credentials you've just downloaded so that you can actually interact with the cluster via kubectl. So for those watching, this is an easy way to deal with multiple clusters, especially ones you've just created, and to make sure that kubectl is pointing to the right one. Yep. And it is responding; six nodes are up and running. Now, what does this look like on the network side? So Harvester is using that VLAN, or that VPC that we carved out, or at least a subnet from that VPC that's represented as that VLAN. Harvester knows all these things, Kubernetes knows all these things, but I'm interested to see what it looks like from the Netris console. For the people managing the network, what do they see? That's a good question. And to your point, for the people managing the traditional network, from their perspective these are just some packets going back and forth. They don't know any of these details. We're not disturbing them, they are not disturbing this VPC, and everyone is happy.
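Before moving to the Netris side, here is the kubeconfig step from a moment ago in shell form; the filename is illustrative for the file downloaded from Rancher:

```bash
# Point kubectl at the kubeconfig downloaded from Rancher for the Prod1 cluster
# (the filename is illustrative).
export KUBECONFIG=~/Downloads/kubernetes-prod1.yaml

# Confirm the cluster answers and all six nodes are Ready
kubectl get nodes
```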
Now, from the Netris web console perspective: well, this cluster is up and running, so obviously the cluster at least has connectivity with the Rancher controller. That's how Rancher works; the cluster needs to reach its controller. But there's also this other component, the Netris operator, which, to save everyone's time, I have already installed. Here it is, the Netris operator running. This operator talks to the Netris controller, authenticated with credentials. And now, if I deploy an application in Kubernetes that uses a service of type LoadBalancer, for example, the Netris controller will be able to understand this. Let me actually show this. This is the Layer 4 load balancer, kind of like Elastic Load Balancer functionality, in Netris, and we can see that nothing is there yet. We could create one using the web console: pick a protocol, enable health checks, and type in backend IP addresses. In that case it would work as just a regular load balancer. But in this case, we can actually deploy an application. I have this basic application here, podinfo, that uses a service of type LoadBalancer. Let's try to deploy it. And before you do that, I think it's important to go back to that podinfo manifest and explain something to folks really quickly. There are going to be a lot of people new to Kubernetes coming to this, and there are going to be people experienced with Kubernetes. Maybe we go back to that config really quick; I can see at the top things will be provisioning, but let's look at the podinfo manifest for one more second. I think what you're highlighting here is that, given that the operator is there, when you create this service of type LoadBalancer, it feels like you're doing the integration work on the Netris side, meaning you're dealing with whatever's required to provide the IP address, the load balancer service, the service discovery integration, pulling that IP information up, there you go, you can see it there, pulling the IP information from Kubernetes. So for a lot of people, if you've ever used any of the cloud services before, this would be the equivalent of a network-layer load balancer, right? You can send traffic directly to these pods, but I think it also shows off the first-class integration here. You don't have to come to this console and then pop back to Kubernetes. You can keep your normal Kubernetes workflow, put everything in the manifest, use native Kubernetes ways of articulating things like type LoadBalancer, and that's enough information for the Netris controller to take over, pull in that config, and be responsible for creating the network services required to make it work, just like people would experience in other fully managed Kubernetes environments. So I think that part is really important to call out, to show people that the goal is, once you have these abstractions, people can just ignore the underlying plumbing and focus on their normal Kubernetes workflows. Yeah, absolutely. And actually, to show that further, I can edit the number of replicas from three to, let's say, ten, and save. And if I apply this again, we can see that more pods are now starting, containers creating, those are still in progress. And on the Netris side, we can see that Netris has now detected more IP addresses, because we have six nodes in this Kubernetes cluster and originally podinfo was running on only three of them.
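For anyone following along, the podinfo deployment being applied here boils down to a Deployment plus a Service of type LoadBalancer; the type: LoadBalancer field is what the Netris operator reacts to. A minimal sketch, using the upstream podinfo image and illustrative names rather than the demo's exact manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 3                          # bumped to 10 later in the demo
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo  # the demo app; 9898 is podinfo's default HTTP port
          ports:
            - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  type: LoadBalancer                   # this is what the Netris operator picks up
  selector:
    app: podinfo
  ports:
    - port: 80
      targetPort: 9898
```

Scaling is then just a change to spec.replicas, or the equivalent kubectl scale deployment podinfo --replicas=10, and the operator updates the L4 load balancer backends as pods come and go.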
That's why we only had three IP addresses at first. Now Netris understood, okay, it is running on more nodes, so Netris has populated more node IP addresses into the load balancer service. And when you say nodes here, I'm assuming you're referring to pods? I mean, there's probably still the same number of nodes in the cluster, but these pods are the ones that are pulling in these IP addresses, right? So these are natively assigned to the pods. So we're honoring the true Kubernetes networking model, where pods are assigned first-class IPs from the subnet, right? So when things are communicating with them, you're going point to point, and I think that's the best way to think about the Kubernetes model. The other thing I want to say is a shout out to anyone working on this console at Netris: the fact that you're giving people real-time feedback, where the console automatically updates while I'm running commands at the bottom, showing that live preview, respecting the health checks, and letting people know what's going on, that is a beautiful touch. So I just wanted to quickly shout out whoever did that. Golden. Great, great job there. Thank you so much. Our guys will be super excited. So I think we showed a lot here. Maybe there's something we've missed, but for me, when I think about the lots of people trying to go on-prem and what it takes, at least from a networking perspective, to be ready for the dynamic environments people want: the thing that puts the most stress on your network, whether you're the best network engineer in the world or you have great automation tools, is that when people bring in Kubernetes, they have this expectation of being able to grab dynamic IPs. What we're not really showing here is that if you restart one of those pods, it can land on a different node and get a different IP. And the expectation of developers and platform teams is that those IPs will be updated dynamically, IPAM will be fully automatic, and the load balancer will follow things like health checks and service discovery to pre-populate everything. I think this is the problem most people have with the Kubernetes networking model: it will challenge whatever you have in place today. And if you don't have the ability to carve out VPCs and dynamic subnets and then do the full integration with the Kubernetes networking model, that's when people start complaining and that's when you feel like you're falling short. I mean, did we capture everything? It looks like we went through everything, but if we've missed something, now would be a good time to bring it up and show it. So, to add a little bit to that in terms of IP addresses of pods and nodes and the network engineering challenges: we're very much neutral, and we understand that different people have different preferences. Someone may want to use one CNI, Cilium or Calico, and someone may want to use one mode or another. For us, it's not much different; we support all kinds of configurations. If someone is happy using Calico, totally fine. Our operator is even able to understand if someone switches into Calico's BGP mode, which is their highly advanced mode.
We understand even that, and we know how to respond on our side, how to configure things so that everything works together automatically and seamlessly. And if it's just a simple setup with a few nodes and no complicated CNI, that's fine with us too. We're just trying to support all the common use cases. Awesome. Oh yeah, go ahead, proceed. Yeah, let's see if this even works. So this service received this IP address. What happened, basically, is that Netris assigned the next available public IP address to the service and pushed that information to Kubernetes. Same IP address here and there in the console. But let's see if this even works. Okay, so it works. We can even use curl to make sure that it is being served by different pods. So load balancing is actually working. I can do... oh, so it's changing. All right, perfect. Look, I think we're at the end of the demo, but I do have a few questions, and I know I sourced a few questions before we started. One of the questions I think a lot of people want answered is: why haven't people thought about VPC or similar abstractions on-prem before? I mean, in your slide deck you went through SDN, you went through network automation. Why is that not enough? Well, physical networks are designed to move packets and traffic really well, which is fine; that is their specialization, if you will, and they do it really well. In my opinion, public cloud became possible not because you put your server somewhere else, but because it was relatively easy and fast to use. People in this world want to iterate a lot, right? We're not in the age where designing something takes ten years. We're in the software age, where we want to ship something, make changes, and ship again; we want to ship many times a day. So basically it's iteration that is important for the business, and the VPC makes this possible. I don't need to go and communicate internally with multiple teams; we don't need five meetings to decide how to add or remove one VLAN or something. In my opinion, that's why the VPC is extremely powerful: it makes this iterative, DevOps-type mindset possible. And before starting Netris, when I was looking at public cloud, being a person coming from the traditional data center myself, I was like, oh my god, this VPC thing is insanely great in public cloud. I wish it were available everywhere: in every network, every data center, every edge, every bare metal cloud, even my little home network. Everything should have this. That's how I thought about it. All right, so the next question, if I look at the list, one of the most common things is: we talked about how VPCs and abstractions are great in the cloud, and we clicked around a lot in the console. So the next question is, are there things like APIs or Terraform support? Yeah, that's a great question. So yes, I showed everything in the web console. There's also a REST API, which, for example, someone who's potentially embedding Netris can use; there's Swagger built in. But what's really designed for DevOps engineers is, of course, Terraform, and I have a little example of that here. To save everyone's time, we'll just show the resources. So in this Terraform file, I have summarized the Netris resources. Like in the previous example, we created this Kubernetes cluster.
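To give a feel for the shape of that Terraform file: roughly, it declares the Netris VNet alongside the Rancher cluster definition. The resource types and arguments below are an illustrative sketch, not the providers' exact schemas; check the Netris and Rancher provider documentation for the real attribute names:

```hcl
# Illustrative only: approximates what the demo's file does (a Netris VNet plus a
# Rancher-managed cluster on Harvester). Consult the provider docs for exact schemas.
terraform {
  required_providers {
    netris   = { source = "netrisai/netris" }
    rancher2 = { source = "rancher/rancher2" }
  }
}

resource "netris_vnet" "k8s_staging" {
  name = "k8s-staging"
  # site, owner/tenant, and the /20 subnet to attach would be declared here
}

resource "rancher2_cluster_v2" "k8s_staging" {
  name = "k8s-staging"
  # Harvester machine pools, attached to the VNet's network, would be declared here
}
```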
Let's say we want to do the same, but with Terraform. So this is the Netris resource, this is the VNet that we created. This is the Harvester resource, and this is the Rancher resource. So basically, in this example I'm using two Terraform providers, the Netris provider and the Rancher provider. And if I run this, let's say we go to the VNet here, we can see what Terraform is trying to do: it's trying to create the Netris resource, the same VNet, basically what I did from inside the web console, but through Terraform. So the process has started, and it actually takes some time. I can see that it already created the K8s staging network. That one looks new, pretty recent, it has the latest date, and I can see it in the logs right there. So it seems like the API is super responsive. Yeah, this has been created. And if we look into Rancher, the K8s staging cluster is actually creating here too. They're very responsive too. What is this thing waiting on here? It is waiting until the cluster is up and running, and only then will Terraform finish the work, which takes some time, a few minutes, five or ten. But yeah, of course there's Terraform; how could we do without Terraform? And I also want to call out that it looks like your demo healed itself, and that other cluster you were making earlier, the Kubernetes test cluster, looks like it's up and running. So yeah, it may have been a UI issue; it's nice to see that get resolved. So don't tempt the demo gods, no need to connect to it, just let it shine bright, nice and green, good to go. All right, so I think we've got time for maybe one or two more questions. When people are looking at this and asking themselves how to get started, is it best for them to start with a small POC and get new hardware? What's the best way for people who are watching this, maybe they're super excited, maybe they look at this and say, yes, this is the missing piece, to get started? Good question. I would recommend a basic setup similar to this one: two machines for SoftGates, one machine for the controller, and two or three, just a handful of, machines for compute nodes. On our documentation page there is a VPC Anywhere getting started guide. It describes the concept and walks step by step through how you install the controller. Controller installation is basically a one-liner that you copy and paste on Ubuntu or any Linux; it will stand up a Kubernetes cluster and the Netris controller. Then you install the SoftGate nodes, which is again very basic. When you install the Netris controller, you'll see these two SoftGate nodes already defined there, but everything will show red, the hardware health will be red, because you still need to actually provision those SoftGate machines, which is, again, a one-liner command that you copy, paste on the machine, and wait a few minutes for it to install. That's pretty much it. There are examples in the docs, and we're also on Slack, so people are welcome to join our community channel, where other users and our engineers are happy to answer all kinds of questions. Awesome. And look, I think that was super dope. We got a nice demo. The big takeaway for me is that we can bring the cloud-like abstractions, the VPC, on-prem.
Also, it doesn't seem like we have to throw away everything we have, so this works well in brownfield scenarios too, where we already have a setup: we can work with our team to give us a range of VLANs, hand that to Netris, and then we have a great way of partitioning that network. This should sound really familiar to people coming from the VMware world, where you did the same thing for your hypervisors, but now, instead of just getting network automation, we're actually getting back to that world where you get the VPC, you have subnets to choose from. I don't know about y'all, but I remember the days of people trying to track all this stuff in a spreadsheet, and now we have a full console, we have visibility, and we have something that actually understands the Kubernetes networking model, not just at the node level, not just at the pod level, but even things like L4 load balancers and health checks, making sure that you can scale and remove things dynamically. So this is really dope, really wonderful. I want to say a big shout out to Alex and the whole Netris team who helped put this together. I know your company is pretty new, and this is just the beginning for them. I know we'll be at KubeCon, probably talking more about this later. Any final words for people in the audience, and where can people catch up with you later if they have more questions? Yeah, absolutely. Thanks, Kelsey, and thanks everyone who's watching this. Hey, give Netris a try. It's really easy. We are looking forward to your feedback, positive feedback, negative feedback; all feedback is helpful. We have the Slack channel, so if you go to netris.ai/slack, that's how you can join our Slack community channel. I'm on Twitter, Alex Saroyan, that's my handle. It's also on the CNCF registration page for this webinar. If you're visiting KubeCon, please come visit our booth. We will have a demo and multiple environments where you'll be able to play around with the product, talk to the people who built it, give them feedback, learn from them, and exchange experiences. Yeah, looking forward to that. All right, awesome. Thank you everyone for attending. I'll be around, and I'm going to keep my eye on this space in general, because I do think we need new abstractions. Kubernetes was a great abstraction for compute, and it's nice to see that we're pushing past automation to new abstractions, and maybe, just maybe, this VPC anywhere concept will catch on. We'll catch y'all next time. Later.