Eric Raul, if you could join us in the attendee chat, that'd be awesome. And thank you for joining us. Really appreciate it. All right, Christy, let me know. All right, hitting the bumper. Hello, everyone. Welcome again to another OpenShift Commons. Today we have a very popular topic, and I know we all love networking. This continues our series on what's new in OpenShift 4.8, and we have lots of great new networking enhancements. We have Mark Curry here to give us the update, so please get your questions ready. Put them in the Q&A, put them in chat. If you're joining us through OpenShift TV, we have people watching those channels, so put your questions there as well. And let's go ahead and get started. Mark Curry is one of our consulting product managers, and if you know networking and OpenShift, you've already heard his name, so he doesn't need much of an introduction. Mark, so excited. Thank you for joining us today. Do you want to go ahead and dive right in?

Okay, thank you. Let me go ahead and share my screen. All right, hopefully, if that has not already popped up on your screens, it will very shortly. So first, thank you, Karina. And also, thank you, everybody, for your time today to talk a little bit about some of the new enhancements and features that you're going to find in our forthcoming OpenShift 4.8 release. Let's first take a quick, big-picture look at what I'll cover today. Some of the things captured here: you can see IPv6. We're going to talk a little bit about new router configuration settings for HAProxy. Some of our observability effort in the form of NetFlow support. Some upgrades to some of our core components, that is, CoreDNS and HAProxy, and some of the advantages brought forth by those upgrades. We're going to talk a little bit about the OVN migration tooling and some enhancements to that.
We'll talk a little bit about SR-IOV NIC support, and one big thing that's not technically 4.8, it's 4.9, but still worthwhile knowing for those of you that do SR-IOV networking. And then we'll talk a little bit about network policy, audit logging of events, and network policy on macvlan interfaces. A couple of things that are not in this topic list, which I threw in fairly last minute: I'm going to talk a little bit about a couple of new CNI plugins that we support as well. So there are many networking developments that we did during the 4.8 timeframe, but these are probably the ones that are most requested or are the most visible changes to our customers, and so that's where we'll focus the topics of the presentation. Everything is fair game for questions. So first up, let's talk about one of the bigger things to have happened to Kubernetes networking recently, and that is IPv6 single and dual stack enablement. We've supported IPv6 for secondary, data plane interfaces for a while now. But as of 4.8, which is built on Kubernetes 1.21, where IPv6 dual stack first went beta and the API finally stabilized, OpenShift now fully supports IPv6 end to end on the control plane of a bare metal deployment. We'll add other IPv6-capable platforms as we can in the future. Backing up a little bit to make sure we're all on the same page: single stack, of course, means that the pod interface has either a v4 or a v6 address on it. You choose one of those, and all of OpenShift networking is going to be 100% aligned to that choice. Dual stack, on the other hand, means that the pod interface has both v4 and v6 addresses assigned to it, as depicted in the graphic on this slide. What that means is that the cluster can communicate with any internal or external endpoints that are using either v4 or v6.
The latter configuration, dual stack, represents the vast majority of our customer use cases, mostly because single stack is, as I said, 100% single stack: 100% v4 or 100% v6. If you need to connect even one host of the other IP family, then you need dual stack, and we've had several customers affected by that scenario. So most times, unless there's a very strong need for single stack, most customers will choose dual stack. So how do you enable v6? It's actually very simple. At install time, you do that typical pause of the install to modify the install-config.yaml file, you specify IPv6 subnets in addition to the IPv4 subnets, and then just continue with your install. If you're trying to do this post-install, you can still do it, and it's still pretty straightforward. You simply edit the network config resources to add secondary values; you're going to need machine network, cluster network, and service network values, and then those will get rolled out by the cluster operator correctly. So it's pretty straightforward to do. The common question I get is: what about other platforms? As I mentioned, it's bare metal today, but we're definitely working on some other on-prem platforms that support IPv6, for example OpenStack. But there are no near-term plans to support v6 on most, if not all, of the cloud providers, at least not end to end. Because of some current issues on those various platforms, we're blocked. For example, some of them only offer their APIs and their DNS over v4, so you can't do dual stack. At least one of them has a load balancer that accepts v6 but translates it to v4 internally, so it's not really doing v6, and so forth. So until and unless some of those issues are resolved, this will likely be the status quo for a while in terms of the platforms we support today. Next thing I want to talk about, and there's a lot of content on this one slide.
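For reference, the dual-stack change Mark describes amounts to listing both an IPv4 and an IPv6 subnet in each of the networking fields of install-config.yaml. This is a rough sketch; the CIDR values below are illustrative placeholders, not values from the talk:

```yaml
# install-config.yaml (excerpt): dual-stack bare metal sketch.
# Each list carries one IPv4 entry and one IPv6 entry.
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.168.10.0/24    # IPv4 host subnet (placeholder)
  - cidr: fd00:10::/64       # IPv6 host subnet (placeholder)
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd01::/48
    hostPrefix: 64
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
```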
So there are many new features and enhancements to OpenShift networking in 4.8. As I mentioned, I'm going to cover some of them here, and then there's even more I'll cover on the next slide, a very similar slide, as a sort of representative cross section roughly divided into ingress and egress enhancements and general networking enhancements. On this first slide, roughly covering ingress and egress enhancements, one big thing is that in 4.8 we upgraded to HAProxy 2.2, the long-term support version. Our next update is targeting 4.10, where we'll go to HAProxy 2.4. So we get all the new features that are detailed in a link that I believe is in the speaker notes section of this slide, which will be shared with you, including some of the major ones listed here in this bulleted section. There's increased performance. There are all kinds of enhancements for security, hardening, and options. There are health checks that were added. There's new and improved observability into the traffic and the workloads so that you can do better debugging. Another big one: you can do syslog over TCP. There are also new SSL/TLS capabilities: stronger 2048-bit encryption, dynamic SSL certificate storage, and so on. If you want to see the full list rather than me highlighting them here, definitely go and check out the HAProxy 2.2 release link. Along with the HAProxy upgrade, we added several new supported customizations for it. One of the first ones we did was ROUTER_USE_PROXY_PROTOCOL, which basically allows the source IP address to pass through a load balancer if the load balancer supports the PROXY protocol. One example of that is Amazon's ELB. We have a number of customers that have asked for that functionality. Another one: router backend process endpoints.
So this is critical to shuffling endpoints for proper distribution of requests when you're running multiple routers with a load balancer in front, for example an F5. This will help balance out that load internally on those endpoints a little bit better. A couple more options: tune.maxrewrite and tune.bufsize. Some customers have the use case of very large header data, and I'm talking on the order of about 48K, somewhere in that neighborhood, 48K to 64K, which is very large and maybe was a bit unexpected on our part. But if HAProxy's buffer for that header data is not large enough, then that data gets dropped. So we now support the configurability of these parameters to allow for that. We don't limit the value that can be used; however, keep in mind that the cluster is going to consume more memory as that configured value goes up. Another one: a customizable number of router threads, also known as the nbthread parameter. Since 4.1, we've supported the nbthread parameter that defines the number of threads used by HAProxy. We did an engineering analysis and determined a best-practices value for that to be 4 threads for many, but not all, workloads. So right out of the gate we assigned a fixed value of 4 threads to nbthread. However, customers with large cluster nodes have asked us to make that configurable, so that they can accommodate the larger host nodes they happen to be using in their deployments. So we've made this customizable in 4.8. Switching gears a little bit: IP failover, keepalived support. The keepalived image is interesting. It's been in the product for a long while. We dropped the documentation for it, but we never dropped the keepalived image, and so it was a bit of a hidden secret within the product. Customers still wanted to use it, and they wanted our formal support for that.
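As a sketch of what those knobs look like, the buffer sizes and thread count Mark mentions surface as tuning options on the IngressController resource; the field names here are to the best of our knowledge, and the values are just examples:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tuningOptions:
    headerBufferBytes: 65536            # roughly HAProxy's tune.bufsize
    headerBufferMaxRewriteBytes: 8192   # roughly HAProxy's tune.maxrewrite
    threadCount: 8                      # roughly HAProxy's nbthread
```

As Mark notes, larger buffers mean more memory per connection, so size these to the actual headers you see rather than a worst case.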
So of course, what we did in OpenShift was formalize support for the use of that image to provide high availability in OpenShift. And as part of that support, we of course needed to introduce documentation and a procedure for implementing it on cluster nodes; not brand new, but rather a current procedure for implementing it. That's what we've done in 4.8. The other thing: Gateway API. In OpenShift 4.8, we will present a developer preview of Gateway API. This was formerly known by several names. It was known as Ingress v2 in the very beginning, it changed names to Service APIs, and now it's changed to Gateway API. Hopefully that sticks; I think it's the most appropriate name for it. Gateway API represents a unifying technology for ingress. We're targeting integration of it with another project, likely an Envoy-based one such as Contour, as the primary ingress controller for traffic alongside HAProxy. That would represent an enhanced integration with, for example, OpenShift Service Mesh, with its Envoy project involvement. Of course, there's going to have to be a great deal of performance and scale testing and enterprise hardening to make sure that it's going to serve our customers in a positive way, but that is currently underway. Another thing: a global access option for GCP's ingress internal load balancer. Without this particular option, traffic originating between projects in a shared VPC network must be in the same region as the load balancer being used. So this facilitates cross-region communication for shared VPC deployments. And finally, one more: an egress IP load balancing enhancement. This is for our current OpenShift SDN customers, to spread traffic more evenly across cluster nodes.
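For the GCP global access option just mentioned, the setting lives under the internal load balancer configuration of the IngressController; a hedged sketch, with the controller name as a placeholder:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal-example   # hypothetical name
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
      providerParameters:
        type: GCP
        gcp:
          clientAccess: Global   # allow clients from any region of the shared VPC
```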
What this does is remove that single-node choke point of egress IP, where the IP address is assigned to one node in the cluster and therefore all traffic must go out that node to be assigned that egress IP. So you remove that choke point, and then you can spread the load across multiple nodes and multiple IP assignments across cluster hosts. A similar OVN enhancement is forthcoming. All right, let's switch gears a bit and talk a little more about some general networking enhancements. Again, this is a small sliver of a much larger effort that's going on, but these are some often-requested or asked-about things. A big one is network observability. We have a rather large effort underway across all of OpenShift to increase observability of all the things. Specifically for networking: networking complexities make cluster administration really difficult, even for experienced administrators. So the goal of network observability is to improve the quality and visibility of OpenShift networking metrics, and some of the important networking correlations that are inside the UI or that can be drawn for you, for an improved operational experience. One of the big things we've done specifically in 4.8 towards this goal is to provide network flows tracking and monitoring for network analytics. This will give us a supported way to monitor traffic into and out of the cluster. This specific enablement is a first step in preparation for adding NetFlow, sFlow, and IPFIX collector support to OVN-Kubernetes in a follow-on release. So this is going to be really helpful for troubleshooting performance issues, capacity planning, security audits, and similar. Look for much more from our observability efforts in follow-on releases. Also in 4.8, we've added some key SR-IOV-capable NIC hardware support for our customers. A lot of these requests came from customers and partner organizations.
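The network flows tracking Mark describes is configured on the cluster Network operator; roughly, you point OVN-Kubernetes at one or more external collectors. The collector address below is a placeholder:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  exportNetworkFlows:
    netFlow:
      collectors:
      - 192.168.1.99:2056   # placeholder NetFlow collector address
    # sFlow and IPFIX collectors can be listed under sFlow: / ipfix: similarly
```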
Represented here is the short list of what we did specifically in 4.8. We've added the Mellanox ConnectX-6 family, the Intel Columbiaville family, and a specific HPE Ethernet adapter. And I know this presentation is really about what's new specifically in 4.8, but I do have to mention that in our follow-on release, 4.9, we've integrated with RHEL's fast datapath team and their test harnesses. So from 4.9 onward, any NIC that RHEL supports for SR-IOV, so will OpenShift. So for this list of three different adapters I presented to you, hopefully I won't have to keep presenting adapter lists going forward. In 4.9, we do have some specific ones we added in parallel to this harness integration with RHEL fast datapath, but hopefully in the future this will just be something you look up in our documentation to see what RHEL supports. Look for more on that in a future release. We've also added network policy support for macvlan interfaces for our customers. When you're using macvlan, as a quick background, a virtual network interface gets created and then gets exposed to the local switch with a new MAC address. This interface is in turn bound to the container. The advantage is that from a networking perspective, it looks like the host has two network interfaces, with one of those assigned directly and exclusively to the container. The upside is that since the pod is exposed externally and directly, bypassing the SDN, users can realize greater performance and can also use more protocols. The downside is that there is no routing layer in front of it to protect it. You could use something like iptables or another firewalling technique inside the container, but you'd have to do that for every application, which can be operationally difficult. So we enabled network policy for macvlan interfaces to protect that traffic, and we're doing it in a globally configurable and accountable way.
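The macvlan policy support surfaces as MultiNetworkPolicy objects that target a secondary network attachment. A rough sketch, assuming a network attachment named macvlan-net (a hypothetical name):

```yaml
# First, multi-network policy has to be enabled on the cluster network config:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  useMultiNetworkPolicy: true
---
# Then a deny-by-default ingress policy scoped to the macvlan attachment:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-by-default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: macvlan-net   # hypothetical attachment name
spec:
  podSelector: {}   # all pods on this attachment
  ingress: []       # no ingress allowed
```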
So as network policy gains more global, more cluster-wide configurability options, as well as multi-cluster support, this will be just one part of the larger plan and vision for that. This next one I'm going to mention quickly here because I'll talk about it in more depth on a following slide: we've enhanced the OpenShift SDN to OVN-Kubernetes CNI migration that we support, and it now includes support for UPI deployments as well, which pretty much completes our picture. I'll come back to that in a moment. For security and compliance reasons, our customers have also asked us to provide and enable audit logging of network policy events. What I mean by that is, for example, when a packet is either accepted or denied, that went entirely missing from any sort of logging or auditing events. So we've added that back into the product. This information is presented to the built-in logging stack and custom Kibana dashboards, and it's useful to support our customers in their compliance and security policies, where activity on the firewall needs to be inspected at runtime to monitor suspicious behaviors, basically acting in the role of an IDS for intrusion detection, or even for post-mortem analysis. Lastly, there have been a couple of product updates within OpenShift networking. CoreDNS: we updated it to version 1.8, and this includes a number of feature enhancements and bug fixes. With 1.8, we also provide the ability to control OpenShift DNS pod placement. This is for those customers with extreme workloads that need to control exactly where DNS runs, either so that it's locally available where it's needed for optimal functionality, or maybe to isolate it so that DNS isn't bothered by the workloads or vice versa. So this is a big part of the CoreDNS 1.8 work and some of the functionality we've added.
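With OVN-Kubernetes, the network policy audit logging just described is toggled per namespace via an annotation; a sketch, with the namespace name as a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns   # placeholder namespace
  annotations:
    # log denied packets at "alert" severity and allowed ones at "notice"
    k8s.ovn.org/acl-logging: '{"deny": "alert", "allow": "notice"}'
```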
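And the DNS pod placement control mentioned above is exposed on the DNS operator resource; roughly, you can supply a node selector and tolerations. The label and toleration below are illustrative, not prescriptive:

```yaml
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  nodePlacement:
    nodeSelector:
      node-role.kubernetes.io/worker: ""   # pin DNS pods to workers (example)
    tolerations:
    - key: dns-only        # hypothetical taint
      operator: Exists
      effect: NoExecute
```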
All right, I mentioned that I was going to come back and talk about the OVN migration tooling. As I previously mentioned, the migration tool is now supported on all platforms that we support, and this includes platforms in both UPI and IPI installation modes. UPI and IPI, and I don't want to presume too much here: user-provisioned infrastructure and installer-provisioned infrastructure. So whichever mode of installation you chose, no matter the platform, we support the OVN migration tool on it. OVN-Kubernetes has been supported since OpenShift 4.6, but it's not the default out-of-the-box networking. Customers that want to take advantage of the latest feature enablements and enhancements that are really being done in OVN might consider switching from the OpenShift SDN networking plug-in to the OVN-Kubernetes plug-in, and we want to make that as easy as possible for them. The easy option to get to OVN is obviously to do a greenfield install of a new OpenShift cluster and just choose OVN at that point. However, for customers where that is not an option and who still want to convert their existing clusters to OVN, we want to make that option as painless as possible, but understand that swapping the networking on a cluster is a non-trivial process, and there is likely going to be some amount of service disruption. We try to minimize that wherever possible, and typically what people will also do to mitigate it is schedule this during a maintenance window. The graphic on the right outlines the procedure for the migration in a little more detail, and this is also going to be in the 4.8 documentation. Also recognize that this is for going from OpenShift SDN to OVN-Kubernetes, but there is a similar, different but similar, process to roll back if you ever need to do so. And I should point out, this is not migration from just any CNI plug-in to another.
This is specifically supported and fully tested and vetted between OpenShift SDN and OVN-Kubernetes. This particular migration is logically split into two phases. In the first phase, the Machine Config Operator prepares the nodes for the new networking, and this involves a serial reboot of all the nodes in the cluster, so larger clusters will take longer to migrate. That's the current way it's done; we're looking for a way to optimize it so the reboots aren't done serially by the Machine Config Operator, so look for that in a forthcoming release. In the second phase, after the Machine Config Operator prepares the nodes, the Cluster Network Operator deploys the new control plane to all the nodes and does the actual flipping of the configuration over officially to OVN. Once again, you're going to have to reboot all the nodes, but this time they can be rebooted all at once, and in fact that's desirable, so they come up with a cohesive understanding of the new networking. If there's a timing issue with one of the nodes and the networking didn't quite come up properly, just reboot it again and it'll come up fine. And so that's the process. We've tried to make this as painless as possible, but keep in mind that, like I said before, you're basically doing open-heart surgery on yourself when you're doing a migration, so plan accordingly. The last slide I want to include here talks a little bit about some new plugins. This is not really tied to an actual development effort in the 4.8 release, but these certifications did happen during the development cycles of 4.8, so I think it's worthwhile to bring them up here and discuss.
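The two phases roughly correspond to two patches against the cluster network configuration. This is a sketch of the documented flow, with the oc commands abbreviated as comments:

```yaml
# Phase 1: ask the operator to prepare the nodes for OVN-Kubernetes
# (applied with something like:
#   oc patch Network.operator.openshift.io cluster --type=merge -p '...')
spec:
  migration:
    networkType: OVNKubernetes
---
# Phase 2: once the Machine Config Operator finishes its serial reboots,
# flip the default CNI on Network.config.openshift.io,
# then reboot all nodes together:
spec:
  networkType: OVNKubernetes
```

Consult the 4.8 migration documentation for the full procedure, including the rollback path Mark mentions.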
So the two new Kubernetes CNI plugins for OpenShift that are certified and supported are Isovalent's Cilium, as well as VMware's Antrea plugin. Understand that Red Hat does not provide support directly for these plugins; that's never been the case. The only ones we provide direct support for are, obviously, OpenShift SDN, our current default; OVN-Kubernetes, our next-generation networking; and, on the far right, Kuryr-Kubernetes, for those people looking for the equivalent of a pass-through of networking down to the underlying OpenStack Neutron networking layer. We don't provide support directly for any of the other ones, but we partner with the organizations responsible for those CNI plugins to provide support. And because of that, just recognize that different third-party plugins have different levels of testing and support for our other layered products, such as OpenShift Virtualization and OpenShift Service Mesh. So consult your account team for more information if you have any questions about that. But we welcome these new plugins on board, and hopefully they will be useful to our customers going forward. And so that brings us to the end of what's new, at a high level, in networking in OpenShift 4.8. I will open the floor to any questions that people might have.

Thank you, Mark. You have this timing down. Perfect. Okay, we do have a few questions. First of all, can you maybe stop presenting? Sure. I love the thank-you slide, but... Yeah. I mean, I might have you go back to a couple of slides if it makes sense. So, first question: do OpenShift routes support non-HTTP protocols like RTMP? I tried to set up my own restream.io-esque service and had some issues with it.

So the protocols that are supported with routes are HTTP, HTTPS, and TLS with SNI. If you have something other than that, we do have several other mechanisms to get that traffic in, including node ports and exposing external IPs.
Those are probably the two most popular ways. One of the things I talked about during this presentation, you may recall, was Gateway API and how we're trying to unify ingress. I fully recognize that I'm telling you to do something in different ways simply because you're using a different protocol. One of the goals that I have for a longer-term, actually maybe a mid-term, vision for OpenShift networking is to unify the way we do ingress so that you shouldn't have to care what protocol you're using. If you were using SIP, for example, doing voice-over-IP applications, it really shouldn't matter what you're doing; you should be able to configure your ingress the same way. That is our goal. It's not the reality today, but you can do it, and our documentation covers that as a starting point: as I mentioned, it's about exposing node ports and external IPs with services. In the future, we want to bring that together. And in fact, as part of that unification, you may say, hey Mark, I don't want you to interact with my traffic at all; I want to bypass your router and your stack altogether and go directly to, for example, a pod, maybe directly via some SR-IOV mechanism to the pod. So we want to provide some of those high-performance pathways in networking and completely get out of your way altogether. You, of course, would need to handle that traffic yourself in your application, but we want to be able to provide that option.

Awesome, thank you. And if you have a follow-up, please put it in the Q&A, or any new questions, because, you know, we have Mark here, and he's so in demand. It's not often that we get time with Mark. All right: is it possible to turn off the network observability features? I assume they will introduce additional CPU cost to the cluster, and some people may be sensitive to that. What do you think? Great question.
Yes. So we've gone through several iterations, and this is not specifically limited to network observability, but we've gone through several iterations of how we would deploy this. In the beginning, we had several different levels of verbosity that could be configured. Ultimately, it turns out that our users are really asking for one of three settings on some of this observability, and in particular on any observability elements that are going to add overhead, like additional logging, to the cluster. So the three settings are: first, just turn it off altogether, I don't need that information. And you can do this on a per-observability-function basis; we're setting this up so you don't have to turn it off globally for all things, but you can target the individual things you'd like to turn off. The second setting is: give me a reasonable amount, and reasonable of course is very subjective, but a reasonable amount covering the things that customers generally ask for, or that we, in our own debugging and triaging of customer issues, encounter and find very useful. So turn that on. And the third option is just turn it on full blast. This is definitely going to incur the most overhead, so why on earth would anybody want it? Generally speaking, this is for really, really in-depth debugging sessions, but probably more likely for those customers who have regulatory compliance requirements that say you need to capture everything that happens, so that if, for example, there was ever a break-in, we can go back and do some retrospective analysis to understand exactly what happened, how they were potentially able to break in, what they did once they were in, and so forth. So I expect that one of those three settings will be applicable to most, if not all, of our users.
I'm happy to entertain additional options, but that's the target for these kinds of things that might incur overhead.

Awesome, thank you. All right: if not using a load balancer in front of the OpenShift cluster, is it possible, in a supported fashion, to load balance the API ingress, which runs via keepalived on the master nodes? If so, are there any documentation links?

I can go get those. Yes, great question. So load balancing the API today, it's kind of load balancing in double quotes, and that's with the keepalived IP failover mechanism; it's really more of a high availability kind of thing. What our customers do today, and one of the things that we support with our partners, is the ability to use something like a commercial product, say an F5. You put that in front and use it to do the load balancing, and if you consult our knowledge base articles, there are documented procedures for doing that. The support for something like that, as you might imagine, falls onto the external load balancer vendor, like F5. But we do provide a procedure for doing it, and if you have F5s, presumably you have support for those, so work with that provider. In the future, one thing that's really interesting is coming in 4.9. I know this talk was about 4.8, but I don't think I can get away with not mentioning it: starting in 4.9, we're going to be fully supporting MetalLB. So if you have a bare metal deployment, in 4.9 we're going to be supporting MetalLB in layer 2 mode, and you can use that. Also, in 4.10 we're targeting MetalLB with BGP support. The purpose of the BGP support is mostly to advertise routes to Kubernetes services using BGP, but I think some form of MetalLB may be useful here, and we will provide more information about how to do that as we get closer to the release.
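As a taste of the forthcoming MetalLB support (it lands in 4.9, so the exact API may shift), a layer 2 address pool would look roughly like this; the pool name and address range are placeholders:

```yaml
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: layer2-pool        # hypothetical pool name
  namespace: metallb-system
spec:
  protocol: layer2
  addresses:
  - 192.168.100.200-192.168.100.250   # placeholder range handed out to LoadBalancer services
```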
And for that question, he did provide additional context: it was for VMware using an IPI install, if that helps.

Yeah, with those specific details I probably wouldn't answer it any differently than I already have. The advanced, or rather the custom, networking configurations that you can do for VMware installs are fully documented in our product documentation. Anything you're doing beyond what's specified there, like I said before, would essentially involve integration with a third-party product, and so some of the support for that would fall onto that provider.

All right, thanks for that additional information. Another question: are there any plans to support external load balancers that do full re-encrypt for all inbound routes? My understanding is that currently OAuth does not work with re-encrypt due to mTLS.

That is correct, it does not work, and we are discussing that right now. That's a pretty difficult problem to solve in a universal way for our customers, but this is on our backlog for feature enhancements, and to date we are in the design and architecture review of that possibility.

All right, please add any more questions into the Q&A or the chat. So, Mark, what are some of the common use cases you're seeing, that customers are asking you for? Give us an idea of the strategy going forward.

Yeah, you know, really top of mind lately: not only has the product matured, but our customers' workloads, and how they use the Kubernetes platform to deliver on those workloads, that sophistication is growing at a pretty rapid pace. And so the sophistication of the networking, some of the advanced networking features, these are things that three years ago would have been unheard of to ask for, or would have raised a few eyebrows.
But with the addition of the telco market to the scene of Kubernetes platforms, and all of the requests they have, it turns out that a lot of those advanced networking features that we always thought were pretty much limited to telco organizations are actually super useful to some of our higher-end enterprise organizations as well. So I'm seeing increased adoption of, and requests for enhancements to, some of those features: hey, I saw you did this, can you also support this or make this configurable for us? So look for a lot more of that; it's not going away, and it's only expanding. Another really big thing is scale and performance. Again, as our customers' deployments grow in sophistication and size and scale, a really big thing I'm seeing is people essentially redlining the capabilities of networking. So we're trying to approach that from several angles. The first is better documentation. A very common thing I see is people who have completely overwhelmed their ingress, and come to find out they essentially have a choke point, in the sense that they're only running one router and not doing any sort of sharding whatsoever to split up the workload across different nodes in the cluster. And when I look at our documentation, I somewhat agree with some of the sentiments I've heard recently: it doesn't really fully explain that. It doesn't explain best practices, and it maybe doesn't provide really great guidance on how to do that. So we're improving that for sure right now. But more importantly, to a certain extent these customers shouldn't have to worry about that. There should be some built-in intelligence and automation that says it's time to start sharding, time to break up this workload into multiple shards spread out across nodes, and have this all done automatically for our customers.
So a very recent request for an enhancement that we got really brought that to a head, and there are many customers out there that would benefit from this and have now said, yes, please, we would love to see that. So look for more around that in a future release. Those are really the two biggest things I hear about; at least those are the two most top-of-mind things for me. Thanks. Is there anything that you've seen that has surprised you recently, like you thought it was going a certain direction? I guess a very recent thing: you heard me talk earlier about some of the new CNI plugins we have. One of those was Cilium. So one thing, and maybe surprise is the wrong word for this, but one thing that I'm paying close attention to here is the adoption and the asks around eBPF for OpenShift networking. There are many customers out there with use cases that would benefit from eBPF being introduced into OpenShift networking. I am fully aware of that, and we are absolutely going down that pathway. In the short term, there's Cilium. Cilium has been certified now for one, maybe two months, probably closer to two months. I would have expected to see greater adoption, but I have a feeling customers might be waiting for us to more natively support eBPF in our networking deployments. Please understand that that is forthcoming. When you provide the greater control of eBPF to users, along with that come some additional questions: if we give you so much more control and you now have the ability to do a lot more things, how do we support that? There are questions around that that still need to be resolved, but this is something to look forward to. That sounds like we have a lot to look forward to, at least from the last 45 minutes of talking networking and OpenShift. There are lots of great new features and enhancements.
Is there one particular thing that you're especially excited about? I know networking is really exciting, but what about you, Mark? I'm very excited about the forthcoming BGP capabilities. We have a number of customers that have been asking for BGP, and in fact they've chosen a different CNI plug-in specifically and only because it happens to provide BGP capabilities, and our default out-of-the-box networking has not supported that in the past. Look for that starting in 4.10, as I maybe mentioned earlier. This BGP capability is really going to open up a lot of doors for some very large customers we have that prefer a flat network with routable service IP addresses and the advertisement of those routable service IP addresses. They are very, very much looking forward to that, and so I'm pretty excited about the future of that. I'm excited about some of the new avenues we have, as I mentioned, with eBPF, and how eBPF will provide enhanced observability in the product. There's the integration we have with new layered products like our Advanced Cluster Security, ACS, and multi-cluster networking being provided, viewed, and managed by something along the lines of Advanced Cluster Management, ACM. I'm also really encouraged by a big effort that we have underway right now to, like I mentioned before, unify ingress and provide a global API for networking. The vision here is that you define how your network is configured on a particular platform, using, for example, the particular cloud-native load balancers of that platform, and you have the ability to make a few simple modifications to that API to pick up your deployment and place it onto another platform.
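For context on what BGP-advertised service IPs look like in practice, here is a rough sketch of MetalLB's BGP mode. The peer address, ASNs, and address range are illustrative placeholders, and the exact CRD names and versions have evolved across MetalLB releases:

```yaml
# Illustrative only: peer the cluster with a top-of-rack router and
# advertise a pool of routable service IPs over BGP.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: tor-router
  namespace: metallb-system
spec:
  peerAddress: 10.0.0.1    # placeholder router address
  peerASN: 64500           # placeholder ASNs
  myASN: 64501
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: routable-pool
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.0/24         # placeholder routable range
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: routable-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - routable-pool
```

A Service of type LoadBalancer then gets an address from the pool, and that address is advertised to the BGP peer, which is the flat, routable-service-IP model described above.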
So if you wanted to move from one cloud provider to another cloud provider, or to bare metal, whichever way you're moving, you change that outer layer that defines the API surface for that provider, and underneath that it sort of, quote-unquote, unlocks additional API capabilities to configure some of the specifics of that particular cloud provider. For example, if you were on AWS, you would pick one of three load balancers: ELB, NLB, or ALB. If you happened to choose ALB, that would unlock additional configurations and capabilities around the ALB's WAF, the web application firewall functionality, and you could configure that. So having this global API to do all of those things is absolutely the direction we're going for improved operations and continual management, especially as our customers continue to grow into multi-cluster deployments. Don't get me started, Karina, or I'll talk all day. I was just thinking, I mean, we'll talk offline about all the other exciting stuff, but since you mention the different workloads, we have a question on whether there are plans to segregate traffic on different uplinks based on the types of workloads and make OpenShift workloads aware of that. Yeah, well, that's a really great question. There is a huge amount of momentum within OpenShift engineering and on the business side of things to add enhanced integration with host-level networking. We're an operating system company as well, right? We have this phenomenal enterprise operating system, Red Hat Enterprise Linux, RHEL. And it has an amazing array of capabilities for how you can configure networking: how you want to configure the different NICs in your hosts, how you assign interfaces on those hosts, how you do bonding across those interfaces, all the things. Our customers are asking us to bring that level of configurability that RHEL networking has into OpenShift.
So today, you install OpenShift and, by default, it uses whatever the primary NIC is in the box and binds the primary gateway interface to that NIC. So all your Kubernetes control plane traffic flows in and out of that primary interface. But customers don't necessarily want that. Going back to my earlier point about how some of our advanced customers are now asking for advanced capabilities: what they're asking for is, well, we don't necessarily want all of our control plane traffic going out that primary interface. We'd like to instead carve it up and send it out a secondary interface, and use that primary interface for data plane traffic or a specific application's traffic. And maybe we want to tie that to a very specific NIC, or a set of NICs, maybe two NICs that are tied to a couple of top-of-rack switches with bonding between them, and all those kinds of settings. It's not just interface configuration but host-level networking in a broader sense: some of the telco use cases, especially those with real-time component requirements, need things like Precision Time Protocol. That's a host-level thing; that's in the kernel. So how do we enable that in a meaningful way for OpenShift workloads? By the way, we do support PTP with OpenShift today. But that's an example of something where we needed to increase or enhance the integration, involvement, and manageability of host-level networking from within OpenShift to service the workloads that people have. So that's a pretty long answer, and I hope I answered your question, but maybe the TL;DR, too long, didn't read, is: we are trying to bring host-level networking into a tighter integration with OpenShift networking, so that you can carve up your networking at the host level any way you want, just like you can with RHEL.
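Some of that host-level configurability is already reachable from inside the cluster via the kubernetes-nmstate operator. A sketch of the bonding scenario just described; the NIC names, bond mode, and node selector are purely illustrative:

```yaml
# Illustrative NodeNetworkConfigurationPolicy: bond two NICs on the
# worker nodes (interface names are placeholders) for workload traffic.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-workers
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: bond0
      type: bond
      state: up
      link-aggregation:
        mode: active-backup
        port:
        - ens3f0   # placeholder NIC names
        - ens3f1
      ipv4:
        dhcp: true
        enabled: true
```

The operator renders the desired state through NMState on each matching host, which is one concrete form of the RHEL-level network configurability being pulled into OpenShift.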
You're adding all these new, well, improving on all these enterprise-level features, because bringing host-level networking in is really about maturing networking in OpenShift, right? And this becomes especially important in bare-metal deployments, right? Because in some cloud providers, I don't necessarily have access to some of the hardware-level capabilities that these instances are running on. But for bare-metal deployments, I presumably have full access, or the user has full access or knows somebody who has full access to the host, to be able to configure things the way they want. And so providing that for something like bare-metal deployments or OpenStack deployments is super important at that level. Sounds like you're really setting it up for the MetalLB support in 4.9. Absolutely. I was just on a phone call with our bare-metal product manager before this to talk about how we're going to align. Awesome. All right, another question, since we're getting down to the wire: considering networking inside OCP being more mature than before, do we have native network diagnostic tools and route topology, exposed in some sort of operator dashboard, with log analysis, packet drop details, and information on flows? What do you think? Yes, good question. As part of our much larger OpenShift observability project, and specific to networking observability, as I mentioned before, this was really catalyzed by our telco customer base. For telco developers, networking is their business. Whereas some developers don't want to have to think about networking, they just want their application to work, for a lot of telco developers networking is their business. They need visibility into exactly what's happening. So some of the things you're going to see forthcoming, that are in the works: for example, today with OpenShift Service Mesh, we incorporated the Jaeger project to provide tracing capability for traffic.
There's no reason why we can't add that to OpenShift at large, so we're looking to incorporate some packet tracing capabilities. OVS flows: today, we have users that want to see exactly what's happening with those flows. So starting in OpenShift 4.9, we have a very simple tool that will present these flows to you in a very simplified manner. And I just saw a wireframe, targeting 4.10, that will further enhance some of these flow views and bring a really nice interface into the actual OpenShift console to present much deeper inspection of packets, packets that were dropped, traffic, throughput, flows that are active, and so forth. As a starting point, this will help provide the kind of information and metrics that developers, and even cluster administrators or project administrators, require to understand what's happening with their particular projects. And I can just go on and on. There's also improved visibility into network policy, being able to view that better. For example, today if I said, hey, how do you have network policy configured on your cluster? You can spit out some YAML for me. I've been doing this for a long time, and when I look at that YAML my eyes are going to glaze over a little bit, because it's not that straightforward to read. It is readable, but that's not the way a security question should be answered. So what we are doing is making some of the network policy configurations more digestible by the people who need to digest them. For example, network administrators and security administrators are used to talking in the language of ACLs and zone rules. They work on these things on a daily basis on their switches and everything else they interact with. So why not present network policy in that format?
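To illustrate the kind of YAML in question, a common deny-all-then-allow NetworkPolicy pair looks like this; the `app=frontend`/`app=backend` labels and port are placeholders:

```yaml
# Deny all ingress to pods in this namespace by default...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
---
# ...then allow only pods labeled app=frontend to reach
# pods labeled app=backend on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

The pair is correct and deliberate, but it takes careful reading of selectors and policy types to see which traffic is actually permitted, which is exactly the readability gap being described.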
Maybe present it in a graphical format, so that a picture truly is worth a thousand words, where you say, hey, that red line, or that line that doesn't exist between these two objects in my project, that's a problem, or maybe that's by design. And you use that to confirm that you established proper network policy to limit who can see what within your project. So all of these things are coming to bear. I would say 4.8 was really where the gears started to turn to produce some of these enhancements to observability, but now we're up to speed. So look forward to 4.9 and 4.10 to see some big jumps in that capability. Awesome. There are not many people who can make networking sound as exciting as you do. All right, we have a final question. Is it possible for a CSI driver to mark traffic with IP DSCP values? These DSCP settings would be used on ACI switches for delivering quality of service; storage traffic should be prioritized and not dropped if marked with the correct DSCP settings. So when I hear CSI, I immediately think storage, and I think what I heard was IP storage traffic, and specifically the QoS of that traffic. Today we have some QoS capabilities on SR-IOV traffic, but we do not have it upstream in the larger Kubernetes ecosystem. So one of the things that we have started is to upstream some end-to-end QoS capabilities within Kubernetes to allow for the kinds of rate limiting that you would expect from QoS. And it's not just storage IP traffic. There are a lot of customers that want to be able to put caps on basically any kind of labeled traffic from their applications. Maybe they want to rate limit backups. Maybe they want to rate limit a specific application.
Or maybe, another one I've had a lot lately, the ability to rate limit incoming traffic, for two reasons: to mitigate distributed denial-of-service attacks, and because some administrators have asked me for the ability to rate limit the streaming of images from, say, a Quay registry into new projects, because technically somebody could denial-of-service you from within. So they've asked for these kinds of capabilities. There are different ways to attack this. Maybe we build it into the registry itself for that kind of use case. But still, there are many other use cases in the larger scheme of things that could benefit from end-to-end QoS within Kubernetes. And of course, that's how we always work: we do upstream first, then we enterprise-harden it and bring it down into OpenShift. And I love that we are ending on you saying we do upstream first. So if you're wondering what's happening with networking as well, just go look upstream, right? That's correct, that's where all the action is. All right. So go get involved if you aren't already, and if you really want a feature pushed, that's where you can do it. And thank you, Mark, for giving us this deep dive into OpenShift networking and what's happening in 4.8 and some futures. Is there anything you would like to leave us with as we're closing out? We are customer-centric in our roadmap. So if you heard something today that you would like to see done differently, or if you didn't hear something today that you wish were in Kubernetes or OpenShift, please reach out. We are customer-driven, and your requests of today could be product realities tomorrow. That's a nice sound bite. I love it. Thank you, Mark. And thank you, everyone, for joining us. And we will all see you soon on the next OpenShift Commons briefing. Thank you. Thank you.