All right, everybody, my name's Joe Fernandes. I run the product management team for the OpenShift core platforms. And Reza, who you heard from earlier, is my peer and manages platform services. So we have a lot of our product managers here at KubeCon and at the OpenShift Commons gathering. Before we start, I'd like to thank all of our customer presenters. When we started this event, I never expected to see a room this large, and the customer presentations today were phenomenal. So thank you to all our presenters. Thank you, once again, to Diane for organizing. This session is for you. I know we introduced a lot of new stuff today, and there's more to come. We're here to answer your questions, so anything that you want to ask is fair game. If you want to just raise your hand, we will do that. All right. Here we go.

Hi, my name is Anil. I'm from Visa. I'm a chief systems architect for Visa on the container platform. So we use OpenShift, and we are in production. But I'm going to ask you some tough questions now, so let's get to that. OpenShift is great, right? A lot of features and a lot of things it does underneath. But your upgrade path, as well as installation, sucks. So how are you going to address it? Because this is becoming a serious pain for us.

All right, I can start, Mike. So look, installing OpenShift, installing Kubernetes, we identified that. That's been our biggest challenge, right? The releases are happening every three months. The installation is complex, and then keeping up with it is more complex. We did a lot around automation with Ansible. That's where we started, and that's what drives the OpenShift install and upgrade path in 3.x. The thing that we're challenged with is just the environment being too dynamic, too mutable, and so forth. So a couple of things you saw today reflect the investments we've made over the last year and a half, actually beyond that if you think about CoreOS's history with it: really moving to a fully immutable environment based on the Red Hat CoreOS operating system, and essentially moving to Kubernetes as the way we're installing and upgrading Kubernetes. And we're taking the installation, which is really three installations when you think about it. You have to set up your provider infrastructure, whether that's Amazon, Azure, Google, OpenStack, VMware, it doesn't matter. You have to set that up first. Then you have to install the operating system, which is RHEL, and manage that separately. And then you have to basically run the OpenShift installer to install OpenShift on top. With OpenShift 4, we're combining that into one process. So you basically describe the cluster that you want and where you want it to run, and then the installer takes it from there. We didn't talk about this today, but we're also automating the upgrade process through something called over-the-air updates. And again, this is something that the CoreOS guys pioneered for Container Linux and for Tectonic. But essentially it means that if you have your clusters connected to Red Hat, we can drive automated updates for you through your connected clusters. We also have an offline mode for disconnected or air-gapped environments. But all these things, which you'll start to see coming out in beta now and in the new year, reflect where we see the future of installation and upgrades, and really where we've put the bulk of our investment over the past year.
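To make "describe the cluster that you want" concrete, here is a minimal sketch of that kind of declarative cluster description. The Go type and field names below are simplified illustrations, not the actual install-config schema the installer consumes.

```go
package main

// Illustrative sketch only: roughly the kind of declarative input the
// OpenShift 4 installer works from. These field names are simplified
// assumptions, not the real install-config schema.

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// ClusterDescription captures "the cluster that you want and where you
// want it to run"; the installer derives the infrastructure, operating
// system, and OpenShift itself from a description like this.
type ClusterDescription struct {
	Name        string `json:"name"`
	BaseDomain  string `json:"baseDomain"`
	Platform    string `json:"platform"` // e.g. "aws", "azure", "openstack", "vsphere", "baremetal"
	Region      string `json:"region"`
	WorkerCount int    `json:"workerCount"`
}

func main() {
	desired := ClusterDescription{
		Name:        "demo",
		BaseDomain:  "example.com",
		Platform:    "aws",
		Region:      "us-east-1",
		WorkerCount: 3,
	}

	out, err := yaml.Marshal(desired)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```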
So I don't know if anybody else wants to add anything? Mike?

Yeah, so in terms of, did you want to talk about the upgrade from 3.x to 4.x, or just the upgrade in general?

Yeah, so what's happening? We chose Atomic as the base OS when we started, eight or nine months back, or a year back. We worked with your team. They said everything's great, go with Atomic, because we get a lot of benefits, especially around the attack surface, and then fewer binaries and a smaller footprint. So it does help in that way. But now it looks like what we are hearing is that you're not going to support an Atomic upgrade path. We've got to convert back to RHEL, which means we've got to rebuild the clusters. And then sometime later in the year you'll bring in the Container Linux side. So there is a gap. I don't know how your product team is trying to address that, but that is a serious gap.

Yeah, so the upgrade is an interesting problem. What we're investing in right now is we're telling all of our customers they need to get to a serial upgrade path. In upstream Kubernetes above us, just the way the APIs are written, you need to do the upgrades serially as you go through them. And a lot of our customers are on 3.7 and 3.9. They aren't necessarily on 3.11, which is the last 3.x release. So instead of just giving them a serial update, we wanted to do a pause and tell our engineering teams to spike until February to try to figure out: is there a way to capture one cluster and deploy a lot of the content, the secrets, the service accounts, the role-based access, the role bindings, the stateful applications, the PVs, over to a new cluster? That's a huge engineering task, but we want them to at least examine it. And then further, for those customers that can't have another cluster up at the same time, is there a way to back up an existing cluster, put it to a file system, reinstall using the same equipment, and restore back and move forward that way? That's all investigative R&D work right now. If it doesn't pan out, or if that's just a massive task that we would put in front of our customers, we'll just continue down the 3.11 upgrade path, which is what you're referring to. And that entails moving to what's known as an unmanaged installation of OpenShift. So there are some customers that are coming back to us and saying, look, I love what you're doing, and it's going to take me longer to get there than the first half of next year. I need to have my own RHEL operating system. I've had it for the last eight years. It's not even RHEL anymore, it's been tweaked so much. Or the infrastructure is so complex, maybe it's on a submarine, or a Boeing, or whatever the case may be, that it won't necessarily be something that you'll automate for me. For those customers, we want them to bring up their operating system, and then we will lay down the operator-enabled OpenShift and still upgrade it over the air for you. That is probably what you're referring to. We can't do that with Atomic. We need to get you to RHEL to have that unmanaged path, and then, as you said, we'll switch later.

I think it would be great if, when some of these decisions were made, you worked with customers to understand what we are going through. For us, when we're in production, it doesn't matter what it is, we've got to make it happen, right? So if you're not following that, you're seriously creating some kind of a gap in the production clusters. So I would like your team to look into that.

OK. I mean, as Mike said, we're doing two things, right? We're extending the lifecycle of 3.x.
That's going to be around for quite some time, in addition to 4.x. But then what Mike mentioned is, in addition to automated in-place upgrades, we're investing now in this research into automated application migration. Because in-place implies that you go release to release, because that's how Kubernetes works. For customers that are on older versions of Kubernetes or 3.x, that might not be what they want to do. They want to get from a 3.4, 3.5, 3.7, 3.9 all the way up to a 4.0. And so being able to do automated migration of content from one cluster to the other would enable us to do that in a way that you couldn't do with in-place, serial upgrades.

Can I just add one thing, Joe? Yep, go ahead. So one thing that I want to add is that a big part of OpenShift 4 is leveraging the CoreOS technology to simplify upgrades. So that is based on customer feedback. Yes, there is going to be an initial, manual upgrade to get to OpenShift 4. Once you're on OpenShift 4, you're going to be able to leverage technology that was proven for over two years, where we pushed, I think, over 60 updates and upgraded over 6,000 clusters over that time. So that technology is going to greatly simplify upgrades and other operations.

With the IBM announcement and ICP plus OCP, does it become something that's all together? I don't know about the 4.x. Really quick. Thank you so much. Hey, really quick, I'm the PM for Atomic and Red Hat CoreOS. Come find me afterwards, let's talk over beers. Go ahead.

Hi. Whoa, that's loud. I'm Jason Kinsel from Oak Ridge National Lab. So since, I don't know, 3.4, when I started looking at OpenShift, most of the development discussions and all that stuff were open. They were on Trello. You could kind of follow what you guys were thinking and where the product was going over time. It seems like after CoreOS, a lot of that has been moved, depending on the product, behind a Jira wall of some type. Are there plans to move that back? Or are all these products staying behind closed doors?

So this is a great question. Internally, as part of our development and agile process, we've made a transition from using Trello, which we've loved and used for years, to Jira. That is in the process of being opened up right now. We use Jira really because, if you've used both products, Trello is great. It's very easy to use. You can have public boards, you can have private boards. So it's great for collaboration. It's not great for traceability, for connecting epics to user stories, for getting reports. And we did all sorts of unnatural acts in extending Trello itself to try to get what we needed. It just turns out, for where we are now at the size of our team, that we're over thirty-something scrum teams at this point, and it was just too much to manage. So we've shifted, around the 4.0 release, all of the work into Jira, and we're in the process now of making all those boards public. Our intent is to have the entire thing public, and that should happen over the upcoming weeks and months. That's what we're doing now. And thank you for following us.

Hi. Wow, that's loud. Ron Parker from Affirm Networks. Earlier, we had quite a bit of discussion about bare metal Kubernetes, and what was implied is that the cloud provider environment was Amazon. So I'm wondering what options exist for bare metal in a pure private, on-prem kind of environment? Yeah, so good question.
So again, one of the things you notice about the OpenShift 4 installer that's very different from OpenShift 3 is that we're actually taking care of everything from the infrastructure to the operating system and up to Kubernetes itself. We're using a technology that came out of SIG Cluster Lifecycle called machine controllers, and it's part of the cluster API and so forth. So machine controllers are essentially just as they're described: basically a Kubernetes controller, but for machines. Everything that you can think of for pods in terms of declaring desired state and having it reconciled, you can now apply to machines. We just chose Amazon as our first target because it's prevalent and it's easy to access. So Derek can talk more about that. We're adding that now for all of our targets. So there's work going on around OpenStack, you probably saw that, around bare metal, around Azure, Google, vSphere, and so forth. So we expect to have that. I think bare metal presents interesting challenges because you don't have nice APIs to talk to. So we sort of need to build an API, a cloud-like API, around bare metal so that you can ask for machines to scale up your cluster, to scale back, and so forth. But we are very excited about that work just because we're seeing tons of demand for Kubernetes on bare metal, or what we've started to describe as Kubernetes-native infrastructure, versus where the industry's been, which is Kubernetes as an abstraction on top of somebody else's infrastructure. So yeah, you should expect the bare metal provider work to yield results. I don't know if anybody else wants to add anything? Good. Okay.

Yeah, if I could jump in for a second. I figure the metal-as-a-service kind of layer will get sorted out. I was specifically wondering about the higher-level abstractions, like load balancers as a service, persistent volume mapping, autoscaling of the cluster, that kind of stuff. Am I on? Oh, nice.

You're absolutely right. When you start looking at an infrastructure that isn't being provided to you by a cloud, there are a lot more components. We're leveraging a lot of the skill set in our OpenStack area. OpenStack has been cracking at this for quite some time with Director and by partnering with a lot of the ecosystem of on-premise hardware providers. Ironic has extensible APIs, so you can bring up a larger network at the same time. Our Ansible networking plugins are talking to a lot of network providers to extend that even further. All those pieces of prior art, if you will, are going to be used by OpenShift. So we're in the process of taking that and seeing if we can use it holistically, or whether we can extract components of it and run it on Kubernetes, and then use those investments as our API to attack the machines.

Yeah, and that's just the assets that we have at Red Hat, stuff that we worked on like Ansible networking, like the different components of OpenStack. We're also doing work in the community, and there's a lot of work with partners. We were just talking with the guys from Dell a little bit earlier today. A lot of our hardware OEM partners are also really interested in this topic, as well as our networking and storage partners. So if this is an area that you're interested in (we're seeing demand for it), we'd like to talk to you more about it, because I think there are a lot of interesting challenges ahead but also a huge opportunity for customers who want to operate in this manner.
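To picture the "Kubernetes controller, but for machines" idea, here is a toy reconcile loop in plain Go: it compares a declared machine count against what a stand-in provider reports and creates or deletes machines to converge. The provider and its methods are hypothetical placeholders for a real cloud or bare-metal API, not part of the actual cluster API types.

```go
package main

// Toy sketch of the machine-controller idea from the cluster API work:
// declare desired machines, observe actual machines, reconcile the
// difference. The "provider" is an in-memory stand-in for a real cloud
// or bare-metal API (the thing you have to build for bare metal).

import "fmt"

// MachineSetSpec is the declared state: how many machines of a given
// flavor we want, analogous to declaring replicas for pods.
type MachineSetSpec struct {
	Name     string
	Replicas int
	Flavor   string // instance type / hardware profile (illustrative)
}

// Provider is whatever actually creates and destroys machines.
type Provider struct {
	machines []string
}

func (p *Provider) List() []string { return p.machines }

func (p *Provider) Create(name string) {
	p.machines = append(p.machines, name)
	fmt.Println("created", name)
}

func (p *Provider) Delete(name string) {
	for i, m := range p.machines {
		if m == name {
			p.machines = append(p.machines[:i], p.machines[i+1:]...)
			fmt.Println("deleted", name)
			return
		}
	}
}

// reconcile drives actual state toward desired state, the same loop
// shape a pod controller uses, applied to machines.
func reconcile(spec MachineSetSpec, p *Provider) {
	actual := p.List()
	for i := len(actual); i < spec.Replicas; i++ {
		p.Create(fmt.Sprintf("%s-%d-%s", spec.Name, i, spec.Flavor))
	}
	for i := len(actual); i > spec.Replicas; i-- {
		p.Delete(actual[i-1])
	}
}

func main() {
	p := &Provider{}
	reconcile(MachineSetSpec{Name: "workers", Replicas: 3, Flavor: "m5.xlarge"}, p)
	// Scale down by re-declaring the desired state and reconciling again.
	reconcile(MachineSetSpec{Name: "workers", Replicas: 2, Flavor: "m5.xlarge"}, p)
}
```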
Hello, I've got a couple of questions about OpenShift 4. Could you make some comments or talk a little bit about roadmap items around the router versus typical, classic Kubernetes ingress, and also support for VMware?

So VMware as an infrastructure provider, right? VMware as an infrastructure provider is something that we're working on. Again, in terms of our top six most commonly deployed environments, in the data center it's VMware, OpenStack, and bare metal, and obviously Amazon, Azure, and Google in the public cloud. So we will have OpenShift 4 installers for all of those, as well as additional targets beyond that. Then in terms of router versus ingress, you can deploy both. I don't know, Mark, do you want to talk a little bit about how that works?

Yeah, so today you can use Kubernetes Ingress objects. Our default router is HAProxy, and we also have an integration with NGINX, so you can swap out HAProxy for that if you prefer. We also have integrations with F5, Avi Networks, et cetera, other vendors. But in terms of ingress, prior to Kubernetes ingress, I don't know if you know the history, there were routes, which we created. And at this point we are all in on eventually flipping over to Kubernetes ingress, but it has to hit feature parity with routes first; there are a number of things that you can still do with routes that you can't do with ingress. So we are on the path, we're monitoring it heavily, we're contributing to it. You can use Ingress objects, or Route objects, excuse me. It's just that we haven't hit the point where we're going to flip over yet.

Yeah, I'll actually put in a plug for my blog. I just wrote a blog today. It's up on openshift.com, and part two is coming out tonight. It's called "OpenShift and Kubernetes: Where We've Been and Where We're Going." The "where we've been" post talks about some of the things that were in OpenShift going all the way back to Kubernetes 1.0 but weren't yet in the Kubernetes upstream. Our enterprise customers needed them, but Kubernetes didn't have them, so we built on top of Kubernetes. That includes routes. The concept of ingress didn't come, I think, until Kube 1.3. That includes the concept of deployment configs; a lot of that functionality was later taken into deployments. RBAC, which people take for granted in Kubernetes today, didn't even show up in beta until Kubernetes 1.6. We had it on top of Kubernetes 1.0. Pod security policies, which came from security context constraints. We were focused on a different market, right? Google and Red Hat were the first two companies to bring Kubernetes to market, but we were bringing it to enterprise customers in hybrid cloud environments, with different concerns around security and different requirements for how they'd manage their clusters. So we did a lot of work in upstream Kube, and then we built around it, but our goal is always to merge as far upstream as we can. In some cases, like RBAC, the implementations were at parity. In fact, that's because most of the implementation was Red Hat's, like 100% of it. And so we just basically switched to the upstream. In some cases, like deployment configurations and routes, there are still features in our original implementation that aren't available yet in the upstream implementation, and customers are relying on those. So we committed for those customers to continue supporting both, and then you can choose if you want to use one or the other.
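For anyone who has not seen the two side by side, here is a minimal sketch of a Route built with the published openshift/api Go types; TLS termination on the route itself is one of the capabilities that predates Ingress and is part of the feature-parity gap mentioned above. The host, service, and namespace values are placeholders, and the exact fields available depend on the openshift/api version you vendor.

```go
package main

// Minimal sketch of an OpenShift Route with edge TLS termination, built
// from the published openshift/api Go types. Names, host, and namespace
// are placeholders; check the openshift/api version you vendor for the
// exact fields available.

import (
	"fmt"

	routev1 "github.com/openshift/api/route/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	route := routev1.Route{
		TypeMeta:   metav1.TypeMeta{APIVersion: "route.openshift.io/v1", Kind: "Route"},
		ObjectMeta: metav1.ObjectMeta{Name: "frontend", Namespace: "demo"},
		Spec: routev1.RouteSpec{
			Host: "frontend.apps.example.com",
			To:   routev1.RouteTargetReference{Kind: "Service", Name: "frontend"},
			Port: &routev1.RoutePort{TargetPort: intstr.FromString("8080-tcp")},
			// TLS termination at the router is one of the Route features
			// that Kubernetes Ingress had to catch up on.
			TLS: &routev1.TLSConfig{Termination: routev1.TLSTerminationEdge},
		},
	}

	out, err := yaml.Marshal(route)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // apply the resulting manifest with oc or kubectl
}
```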
So I don't know if you want to add anything else?

Oh, hey, we get this question a lot, about whether OpenShift is going to adopt this or that Kube resource. And I think sometimes people have a misperception that because it's a resource in Kube, it's graduated to version one and it's forever stable in the upstream. A good example right now: OpenShift 3 ships security context constraints out of the box, and upstream has pod security policy. It's not clear it's ever going to reach version one status. So people say, well, why aren't you adopting this yet? And it's still a beta API upstream, and the roadmap for it isn't clear. For other objects, like Ingress, it's sometimes hard to get uniform agreement in such a broad community for a set of use cases, and we want to be able to still deliver value to end users and customers who have demands to meet. So there's not necessarily a one-size-fits-all solution to every problem. And in the same way, a lot of Kubernetes now is extensible. This morning when we demoed operators, those are just more intelligent workload controllers, right? And no one's going to ask us, when are you going to adopt Deployments to manage a Kube API server, right? It makes sense for us to use the platform to build the right thing for the right use case we need, and anybody else can go and extend the platform, given all the work we've done upstream to enable that. So I wouldn't be surprised if you see multiple ingress implementations in the upstream. I wouldn't be surprised if you see multiple security policy implementations. There's a marketplace of ideas, and people can go and pursue them.

Yeah, actually, this is a good point. I mean, I think Ingress itself is still beta upstream in Kubernetes, right? Ingress has not yet been declared stable, even though a lot of people are relying on it. And then there are different ingress implementations, which you get from different vendors, right? But again, we'll continue to support both. A good point that Derek made that I wanted to comment on: we had to build a lot of this stuff around Kubernetes, but the Kubernetes API wasn't particularly extensible, right? So over the years, initially with third-party resources and then ultimately now with CRDs, that's how we made Kubernetes extensible. The Red Hat team, the OpenShift team, helped drive a lot of the work on CRDs because of all the things that we wanted to integrate and build on top of and around Kubernetes. Now CRDs are allowing customers, through the operator framework, to build applications as an extension of the same Kubernetes API, right? So it's a really powerful concept, which Seb described and Rob's going to be demoing tomorrow in the keynote. But yeah, this is, I think, going to unleash a whole new set of innovations on top of Kubernetes and so forth. So, other questions?

Hi, I wasn't able to make it to the entire event today, so I apologize if this has been discussed at length. But I read the blog post announcing OpenShift Container Engine. I'm just wondering if there's anyone on stage who can speak to the goals of that product in contrast to OCP?

Yeah, so first of all, OpenShift Container Engine and OpenShift Container Platform are not separate products. They're the same product, just different configurations. We've heard from a lot of customers that said, well, I like OCP, but I don't want all the stuff that comes with it, right? I use my own Jenkins service, or I use my own SDN. And that's kind of the point of OpenShift, right?
The whole batteries-included-but-optional idea, with everything being pluggable. That's how we architected OpenShift, and that's how Kubernetes itself is architected. So all OpenShift Container Engine is, is a subscription to OpenShift that removes some of those components that people commonly swap out. Specifically, the advanced networking capabilities, the SDN that we provide; logging (we provide an ELK stack, or an EFK stack, with OpenShift, and a lot of people use it with Splunk or with their own EFK stack); and then some of the CI/CD capabilities, the Jenkins services, the build services, and so forth. But otherwise, there aren't two separate products, because all that content is just containerized. Essentially you're just not entitled to those additional containers and so forth. And then with operators, there's going to be an even more convenient way for you to determine which content you want in each cluster, because operators are driving our whole install mechanism and so forth. It's going to be an even more convenient way to enable or disable different features in different clusters.

I had a question over here. Hi, this is Jitendra. In the talks in the morning and the use cases and case studies that were presented, most of them were around making the environment available to the developer quickly, or around greenfield applications and performance-oriented applications. But I haven't really seen any brownfield legacy applications migrating to OpenShift. I don't know if you want to talk about a few examples of how people have moved monolithic applications onto OpenShift.

Sure, I just want to answer that one. It's interesting, the customers that we talk to always fall into these four buckets, right? There's the bucket for the CIO, the transformation of the culture of the business. There's the bucket for greenfield, for next-generation runtimes and applications. There's the bucket for brownfield, the lift and shift, if you will. And there's the bucket for infrastructure ops, where they're being told they've got to shut down machines and move to public clouds, and how do you do that, right? We have feature sets in each one of them. And you're absolutely right that the easiest place to start is in a greenfield area. But very rapidly we find our customers move towards the lift-and-shift area, the brownfield, because it is larger and it is more revenue-impacting to them. I would say most of our humongous companies, our global financial companies, are doing brownfield. And they're attacking Java applications, fat Java applications, if you will, on legacy WebLogic and WebSphere, and moving those runtimes over. We have a lot of popular ones; KeyBank, I believe, wrote a case study out there on their move. So if you go to openshift.com/customers, I would say probably 60% of them are stories about lifting and shifting legacy applications. There are even some that get into message busing, if you will, those ESBs and those frameworks, and how they containerized them and how they moved them over. So there's a mix of fat Java and ESB architectures out there.

Yeah, and the partnership we announced with IBM earlier this year was driven by demand from customers who wanted to run WebSphere in Kubernetes on OpenShift, and we have customers that have spoken about that, but also customers who are doing WebLogic, JBoss, the traditional app server stacks.
So it's not all just Spring Boot and Node.js and microservices architectures; there are a lot of those traditional architectures. And then you get into things like database workloads, which are now being further enabled with our work on the operator framework. We have a lot of customers now who are running extensive data services, whether it's databases, analytics, or messaging solutions, on top of Kubernetes. We're getting a lot of questions on HPC, on grid. So again, a lot of the Kubernetes talks tend to be around greenfield and cloud-native, microservice-style architectures, but that's by no means all we see and all we're hearing about. We're hearing a lot of traditional, a lot of...

And really quickly, a year ago we had a competitor in the market that had an alternative orchestrator, and they were not good at doing brownfield applications. And they weren't good because they didn't have the storage infrastructure components, the PV backbone framework, right? They didn't have real IP addresses. They didn't have real routing. They didn't allow you to do multicast. They didn't allow you to do UDP. These are all just qualities of those brownfield applications that the Kubernetes platform always catered to.

Yeah, sorry, back here. James, did you want to ask anything? So my question is not about the upgrade path, but it's close to that. As OpenShift has matured and we've kind of gone along with the platform, a number of new features have been released that are really intended to be consumed by development teams, things like Prometheus, the operator framework in general, but also platform components, right? The SDN, DaemonSets. One of the things that I've found challenging is understanding these new things that are coming into the platform and how to consume them as a platform operator. Because some of these things are coming out of GitHub, I just don't have a unified view of what each upgrade is doing in terms of new things that it's installing and new products that are becoming a part of the cluster. Can you talk to how you're bringing products into the platform and managing those things, and kind of making that information available to customers?

Sure. I think what you're referring to is: I just tried to install, and Ansible did 500 things, but I had no idea what it was going to do to me, right? We're not doing a great job of documenting all the changes that Ansible performs between each one of the versions, and a big part of the 4.0 architecture is to move to the operator SDK. So all those teams across the ecosystem now have a consistent way to get onto the platform, right? There's not going to be that variety where you need the extensibility of Ansible to do it, because you're going to write your content to that operator SDK, and that's how you're going to get onto the platform. So that's the ultimate fix. Take comfort in knowing that 3.11 was the last time you would have had to learn about this stuff on the fly; it all changes moving forward.

And also, just to add to it, if you saw the slides that Reza showed in the keynote, with the unified hybrid cloud and the Kubernetes marketplace, there were some data services in the demo, so you can actually pick and choose what services you want to run, right? So that will become much more visible. You could see all the operators: the cluster operators the CVO was showing, all the platform operators, and all the other operators. But yeah, in OpenShift 4 you saw some of that in the demo.
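As a rough picture of what "writing your content to the operator SDK" means in practice, here is a toy sketch of the pattern: a component exposes a custom resource type as its desired-state interface, and a reconcile loop does the work and reports status. The LoggingConfig type and its fields are hypothetical, not a real OpenShift API, and a real operator would be scaffolded with the SDK rather than written as a standalone program.

```go
package main

// Rough sketch of the operator pattern referred to above: a component
// publishes a custom resource type (its desired-state API) and runs a
// reconcile loop that reports what it actually did in status. The
// LoggingConfig type and fields are hypothetical, not a real OpenShift
// API; the real thing is scaffolded with the operator SDK.

import "fmt"

// LoggingConfigSpec is the operator's public interface: instead of
// digging through Ansible playbooks, an admin declares this object.
type LoggingConfigSpec struct {
	Enabled       bool
	RetentionDays int
	StorageClass  string
}

// LoggingConfigStatus is how the operator documents what it has done,
// so an upgrade is observable rather than a pile of opaque changes.
type LoggingConfigStatus struct {
	DeployedVersion string
	Message         string
}

type LoggingConfig struct {
	Spec   LoggingConfigSpec
	Status LoggingConfigStatus
}

// reconcile is the loop the operator SDK wires up to run whenever the
// object (or anything it owns) changes.
func reconcile(lc *LoggingConfig, targetVersion string) {
	if !lc.Spec.Enabled {
		lc.Status.Message = "logging disabled; components removed"
		return
	}
	// A real operator would create or update Deployments, ConfigMaps,
	// and PVCs sized from RetentionDays and StorageClass here.
	lc.Status.DeployedVersion = targetVersion
	lc.Status.Message = fmt.Sprintf("logging stack deployed, %d day retention on %s",
		lc.Spec.RetentionDays, lc.Spec.StorageClass)
}

func main() {
	lc := &LoggingConfig{Spec: LoggingConfigSpec{Enabled: true, RetentionDays: 7, StorageClass: "gp2"}}
	reconcile(lc, "4.0.0")
	fmt.Println(lc.Status.DeployedVersion, lc.Status.Message)
}
```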
The only thing I would add is that some of the changes that were made in 3.10 and 3.11, like you talked about, the SDN, DNS, static pods, a lot of that has fed into enabling us to decompose OpenShift for 4.0 like you saw today. The ability to be very lean in the default distribution of OpenShift's Kubernetes allows us to, for example, not include the SDN alongside our kubelet by default, or not include the DNS services there by default, make it clear to you that you can swap them out, and better separate the architecture, I guess. So I think, as Mike said, the changes you did see in 3.10 and 3.11 were us getting set up technically to enable what you saw this morning, hopefully, in 4.0. But I don't think you should see major wholesale restructuring after 4.0, because at that point we start to look like any other Kube distribution with respect to the core control plane.

Yeah, actually, the OpenShift 4 deployment is a very small, immutable core and then something like 40 to 50-plus operators. The effect it's had on how we develop OpenShift is that essentially every single development team has been working on install and upgrade, because every team is working on install and upgrade for their component by building an operator for that component. So whether you go to our team that works on logging, or on the SDN, or on Jenkins, or on Prometheus, whatever, what they've been working on over this past year is: how do I install my component? How should it be upgraded? And how do I, over time, expand on lifecycle management? That's really the power of the whole operator pattern and how we're taking advantage of it, back in installation and upgrades.

And then the only thing we're after is to make it more consumable to you as an operator. Today, you're tweaking master configs and node configs and having to work at a very low level. Some of the stuff I couldn't touch on very much this morning is that in 4.0 you'll see API groups in your server that literally say config.openshift.io, and the API definitions of those objects should ideally be self-documenting. So when you look at the ingress.config.openshift.io object and you see it has a router host name field, it should be very obvious to you what it is and what the interface is that you're allowed to touch. And so where you might have had points of variability about how you set up particular Ansible variables or what you configure in master config and all that stuff, for 4.0 it just gets very standardized, in that your interface is Kube objects in a well-defined API group.
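The self-documenting config object idea can be pictured as follows. This Go type mirrors the speaker's ingress.config.openshift.io example with its router host name field, but the type and field names here are illustrative assumptions rather than the final 4.0 API.

```go
package main

// Sketch of the "self-documenting config object" idea described above,
// modeled on the speaker's ingress.config.openshift.io example. The
// type and field names are illustrative, not the final 4.0 API.

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// IngressConfigSpec is the kind of well-defined interface that replaces
// hand-edited master-config/node-config files and Ansible variables.
type IngressConfigSpec struct {
	// RouterHostname is the DNS name under which application routes are
	// exposed; the field's doc comment is the documentation an admin reads.
	RouterHostname string `json:"routerHostname"`
}

func main() {
	cfg := IngressConfigSpec{RouterHostname: "apps.example.com"}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```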
Kyle from Arctic, we do a lot of consulting for enterprise customers. I'm asking this question so I can answer it next week. It goes along with the 4.0 demo I saw around autoscaling. We've had customers asking about autoscaling probably since 3.3, to be honest, and I don't think any of them have really been ready for it. Things like Knative, I think, are going to make it a lot easier for people to start to extend these applications, and I saw you spun up another 10 hosts in AWS. So how are we going to deal with the subscription challenges we've had up until today and be able to rent OpenShift by the hour? Because a lot of our customers have been going to upstream to do this.

So this morning's demo was on Red Hat CoreOS, which, great job there. I hope you guys liked it. It was pretty great. There's no subscription manager on Red Hat CoreOS, so that's one thing that makes it a heck of a lot easier.

Yeah, so a couple of things, though. Every time we talk about autoscaling, the common question is, can it autoscale the nodes? And it's like, no, Kubernetes can only autoscale pods; it can't manage infrastructure. Well, now that's changing, and you kind of saw that here: with the cluster API growing and the machine controllers, Kubernetes can now scale its own infrastructure. In addition to kind of eliminating subscription manager, which limited you from the RHEL perspective, we are also using the metering project, which you heard about, to enable a consumption-based pricing model, which we'll be introducing this year. So this would essentially allow you to, on top of some base subscription, burst up node capacity, or, for stuff that runs on top of OpenShift like our middleware and storage, pay for that by consumption. That's kind of where we're going. The challenge with consumption-based pricing is that you need good metering first; you need to be able to meter consumption before you can charge for it. The metering framework that we're introducing does that, and that gives benefits to you as administrators, because it means you can also leverage that same subsystem for doing internal chargebacks, internal showbacks, charging different departments for utilizing a shared cluster, and so forth. So we're building it for customer usage, but we're also using it ourselves, and I think that's going to enable some of the stuff you discussed. Thanks. I didn't take it to mean free. No subscription? Yeah. No.

The only thing, and I don't know if it was clear this morning, is that in 3.11 we shipped the cluster autoscaler, but what you saw in 4.0 is that you're not having to do any AMI-style management, right? Because all configuration is delivered from the central control plane, you're just booting a vanilla Red Hat CoreOS box, and then the cluster itself turns it into a node. So with respect to the management overhead, with a lot of solutions in the ecosystem, just vanilla cluster autoscaler Kubernetes, you have to manage that machine image on the various platforms you want to configure and tell the autoscaler to go spin up. Whereas what was really uniquely interesting about the Red Hat CoreOS stuff was that you're not managing that AMI at all. You just get your config from the central control plane, and it's lifecycled like any other node in the cluster.

Yeah, I think we've talked about that. Clayton mentioned that Terraform piece this morning. That Terraform step is basically to spin up that initial boot node. Once that node is up, it's a Kubernetes node that builds out the entire cluster. So it's really Kubernetes installing the Kubernetes cluster, using the APIs that Derek talked about. Any other questions?

Hi, my name is Saran, I work at Visa. I have a question about the Operator Framework SDK. Can you talk a little bit more about how financial applications can take advantage of the Operator Framework?

Sure. There's usually nothing special about the application, other than if you have a complex distributed system, which I imagine credit card processing or any kind of financial application is, it helps you spin that up in a very consistent way. So you're not just handing off a distributed system between maybe two engineers that need to set up a development environment; your CI process also needs to be handed off: hey, here's version 1.1.2 of our entire distributed system.
And if you can hand that off and you know it's tested well, you hand that same operator off to a staging environment, scale testing, production, whatever it is. So it's really all about: can you model your application inside of an operator? And the answer is, if you can run it on Kubernetes, then yes, you can. The nice thing about using an operator is that you're not reinventing any of the core concepts of Kubernetes. You're not inventing how secret handling works, or service discovery, scaling out horizontally or vertically, or mounting storage. You're just bringing your unique logic to the table. And if you can express that in an operator, which I know you can, then you can build kind of anything under the sun. I'm sure your production workload is really difficult, but a lot of the database vendors and other infrastructure dependencies that are building operators today also have super complex needs: when I install this thing, wait for this thing to warm up and then prime this cache; when you upgrade, restart this thing first, make sure it passes its health check, rebalance the storage, migrate this data. All that type of stuff can be done through an operator.

Hello, my name is Michael. I have a question about the router. Is there any plan to add layer 3, or pure TCP, support on top of the existing HTTP and HTTPS? I'm sorry, I missed it, what was the support you were asking for? Layer 3. I need to expose stateful services over a socket. Through the router?

Yeah, so we don't have plans specifically around that, but we are looking at a number of different mesh topologies that may support it. Network Service Mesh is one technology we're looking at. We're looking at Istio multi-cluster. We're looking at Envoy as one of those options. There are a number of different technologies we're looking at to be able to provide that functionality, but at the moment we don't have anything specific. Can you tell me more about your use case for that?

Yes, so basically I'd like to expose, for example, a database outside of Kubernetes. My consumers are outside of the Kubernetes cluster. In OpenShift, for example, there are floating IPs that can be used for that. Okay, so we do have the ability to say, for all traffic coming from a particular project, we can filter on a destination service. We can say that if they're looking for a particular database, it can be redirected to a particular service at an external location. But there's nothing specific about the stateful part of it yet that we do; that's something we'll definitely take into consideration, and I'd like to talk to you more about that. Yeah, we're doing a lot of work on egress and ingress policy-based stuff, and I think there's probably more we can follow up on with you all.
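To make that last exchange concrete: the usual way to expose a stateful TCP service such as a database to consumers outside the cluster today is a plain Kubernetes Service of type NodePort or LoadBalancer rather than the router. Below is a minimal sketch using the upstream core/v1 Go types; the selector, port numbers, and names are placeholders.

```go
package main

// Minimal sketch of exposing a stateful TCP service (e.g. a database)
// to clients outside the cluster with a plain Kubernetes Service, the
// common approach while the router itself is HTTP/HTTPS (and TLS SNI)
// focused. Selector, ports, and names are placeholders.

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "postgres-external", Namespace: "demo"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer, // or ServiceTypeNodePort on bare metal
			Selector: map[string]string{"app": "postgres"},
			Ports: []corev1.ServicePort{{
				Name:       "pg",
				Protocol:   corev1.ProtocolTCP,
				Port:       5432,
				TargetPort: intstr.FromInt(5432),
			}},
		},
	}

	out, err := yaml.Marshal(svc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```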