And I know nobody ever wants to ask the first question, but there is a prize. Who would like to ask the first question? There's one over there. All right, hold that. Hello.

Simple question on the surface, probably not so easy to answer. How does the future look with IBM now?

The IBM question. The one thing we probably can't answer. Yes. Yeah, we can't talk about IBM or anything like that. So nope, he doesn't get the prize. We need a better question than that. Who's got the first question here? All right, way up front. I know it's late. Oops, turn it on. There you go.

Hi, guys. We really love OKD, which we installed at our company. And we're looking forward to OpenShift 4. And so we wanted to ask, are all those operator and auto-updating features also going to be available for the OKD project, for those of us who can't quite go for the subscriptions yet?

So I'm going to take this one. The basic product functionality will be there, you're right. But as you know, we actually run a certification program for our partners, for their certified operators and certified workloads on OpenShift. One of the results of that is that you can actually call Red Hat and get support, like L1 support, to deal with your problems with the operator, right? You can't do that with OKD, obviously, and you won't even see those operators there. But all the community operators are there, and the functionality is the same. So you'll get the updates, and you can trigger the update of the platform itself from the UI. That's all going to be there. But it's not supported, right?

All right. Who's got the next question? Way in the back. Taro's got them.

So hi. My name is Tufan. I'm from Agrico. In my company, we have been facing a unique challenge with respect to OpenShift. We are developing battery systems, which are standalone cubes, where we have just one big industrial PC. And we tried to implement a standalone OpenShift cluster there. We can't call it a cluster, it's standalone. But the real idea of implementing OpenShift was that later on, customers will have multiple such cubes in their power plants or wherever they put them. But there are customers who start with one cube, and they just want the standalone setup there. So the challenge we have been facing is the storage for our databases when you run standalone, because it's very tricky. GlusterFS, by default, requires three nodes. We want to have flexibility with the storage. So what are your suggestions for storage in such cases, where we have to take care of a multi-node scenario and a single-node scenario as well?

Just to make sure we got the question correct, you're talking about running OpenShift on a single node, like an all-in-one, is that right? And then how does storage work in that environment, basically? What platform are you on? Is this a cloud, like a single VM, or bare metal, or?

It's not a VM, it's a single bare metal server. So we have this industrial PC, and we install it as an all-in-one server. And at a later point in time, if there is demand, we scale it into a cluster where all the nodes are all-in-one servers.

Got it. So you want to move from a single node to multi-node later on, basically. So the great thing about OpenShift 4 is we're using all the Kube primitives to run the cluster itself, including things like taints and tolerations and node selectors for actually scheduling the master components out to the masters and that type of thing.
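To make the scheduling pattern being described concrete, here is a minimal, hypothetical sketch of how taints, tolerations, and node selectors keep a control-plane component on the masters; the names and image are illustrative assumptions, not the exact manifests OpenShift 4 ships.

```yaml
# A control-plane node carries a taint so ordinary workloads stay off it.
apiVersion: v1
kind: Node
metadata:
  name: master-0
  labels:
    node-role.kubernetes.io/master: ""
spec:
  taints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
---
# A platform component tolerates that taint and selects master nodes,
# so the scheduler places it only on the control plane.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-platform-component   # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-platform-component
  template:
    metadata:
      labels:
        app: example-platform-component
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      containers:
        - name: component
          image: registry.example.com/component:latest   # placeholder image
```

The same primitives are what make the dedicated node pools mentioned next possible: label a set of workers and point the relevant workloads at them.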
And what this lets you do in day-two operations is change that environment around all the time. So if you wanted to add dedicated node pools for doing ingress or monitoring or whatever, including scaling up your control plane, you can do that via those systems. So going from a single node to multi-node is supported. I will throw out the caveat that we basically only support an HA control plane by default. We don't really want you running a single node; there are use cases for it, but it's not very common, basically. But you have a lot of flexibility going from single to multi with OpenShift 4.

Yeah. So in terms of the control plane, it has been absolutely smooth. But when we talk about the storage part, it becomes tricky. Like GlusterFS, which is sort of the default right now for OpenShift, becomes tricky when you have fewer than three nodes, because generally it's not supported. You can modify the replica count and try to make it work, but when you scale the storage up, the redistribution and rebalancing is quite tricky and brings down the servers. So any suggestions for the storage part when we have a single node?

So some of the storage operators that Witsa talked about at the panel, for example, like Rook, should be able to handle that. I don't know if they handle single-node use cases either, but it's basically looking at node labels and things like that to know what it can divvy up storage for. And then the other thing we do also depends on the platform that you're running on, so integrating with EBS volumes on Amazon, et cetera, where we have that. But sometimes it's just local disk, or NFS, or whatever you have available.
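For the single-node, local-disk case just mentioned, here is a minimal sketch using plain Kubernetes primitives; the path, size, storage class, and node name are illustrative assumptions, not a recommendation from the panel.

```yaml
# Hypothetical local PersistentVolume for a database on a single node.
# The node affinity pins the volume to the one machine that owns the disk.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/db          # pre-provisioned local disk path (assumed)
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-0         # the single all-in-one node (assumed name)
---
# A claim the database workload can bind to on that node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 50Gi
```

When more nodes show up later, an operator like Rook can take over provisioning across them by looking at node labels, which is the behavior described above.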
So our use case is mostly limited to very remote parts of the world. Like, sometimes we have to put a container, it's literally a shipping container as a whole, the cube is a container. So we have to put a container somewhere in African countries where you have very little chance of getting internet connectivity, or it's very slow. Things are improving, but the backbone there is limited. So we have to be offline all the time, mostly, and plan accordingly. So the operators are very interesting, because they can offload some of the work so we don't need to be connected to deal with it. So it's really exciting to look into this. Thank you.

Let's chat afterwards about this, because I think I want to understand your use case a little bit more. OK, thank you. And you can create a ticket when you use OpenShift and use the support.

So, Taro, behind you, there's one.

I have a quick question on OpenShift 4.0. What's your plan for migrating or upgrading customers on 3.x? I'm specifically interested in on-prem environments, like running bare metal, or KVM, or vSphere.

Well, a couple of things. One is we are working on a migration tool, which will let you migrate applications; this is an application migration tool. We did a demo of this tool at Red Hat Summit, and I think the video should be on YouTube, so you can take a look at that. But that's the idea: to be able to migrate applications from a 3 cluster to a 4 cluster. And there are some details behind it. But I really liked what we saw with Macquarie Bank earlier today. I mean, to me, that was a great way in which they handle updates without even requiring a tool like that, because they're already constantly updating and creating a fresh new cluster every 90 days. I don't know if you saw that. I mean, that's another option.

But yes, we are working on an application migration tool. The one thing to add to that is the rationale behind it: our thinking is that doing in-place upgrades from 3 to 4 is extremely risky, because at some point you're going to pivot a lot of the services over, and if you run into an issue, you have no good cluster on 3 or 4. So the thinking was to make a general-purpose migration tool, which is useful to go from 3 to 4, but also from 4 to 4 clusters or whatever, and also lets you do that on a per-namespace basis. So if you have application teams that are ready to move sooner versus later, you can handle all those use cases.

So just a quick follow-up on that. Does it mean that you expect app downtime? Because when you talk about migration, you are looking at potentially having a small downtime, especially if you're creating a new cluster and so on. Just wondering what you're thinking here.

Yeah, I think a lot of it depends on the application. If the application can tolerate, for example, the fact that there is data duplication, you can copy the data: one of the things the migration tool does is copy your data over from this PV to another PV on that new cluster. But it all depends on whether the application can tolerate that kind of thing. If it doesn't, then you'll have to move the volume, if you will, the PV. And at that time you'd have to, quote unquote, quiesce the application, so you will incur some downtime in that period. Now, how much it is and what it is will depend very much on the application. And so the details can be worked out.

All right, Steve, did the volume come on? All right, there we go.

So I've got a question in regards to GitOps patterns in relation to OpenShift. Do you see a future for GitOps patterns in regards to operators, deployment of operators, and Knative as well?

Did you get that? About working with GitOps and OpenShift in general. I'll share one thing that we do internally: we have a product called OpenShift Dedicated, where we're hosting OpenShift for customers. And we have a little bit more of a locked-down environment so we can generally support that environment, and we have an operator that does that. So it basically writes out a bunch of RBAC rules, turns this on or off, or whatever. So an operator is a way to get a standard environment: you boot up a cluster, throw this operator on it, and it transforms it into an XYZ internal company cluster, possibly. And so that's one way, on the cluster level. And then I think there are plenty of solutions in general in the Kubernetes community for taking manifests from Git and applying them. So if it's a blind apply, that works. If you need scripting around it, people use Jenkins and all kinds of other stuff for that. I don't think we're extremely opinionated on that, other than it's generally a good practice. I don't know if anyone has comments.

I think you asked for the relationship to operators. So you definitely want to be able to apply the same pattern to the artifacts that are managed by the operator, right? So instead of having your base Kubernetes manifests, like your Deployments, your ConfigMaps, Secrets, PVCs, all stored in Git and basically getting applied by a GitOps process, you want to use an operator, which then only exposes a single API. And that remains a single file in your Git repository which you can continuously apply and update from.
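As a rough sketch of that pattern, the single file kept in Git might look like the custom resource below; the API group, kind, and fields are made up for illustration and are not an actual OpenShift or operator API.

```yaml
# Hypothetical custom resource stored in Git and applied by a GitOps tool.
# The operator on the cluster reconciles the Deployments, Services, Secrets,
# and PVCs behind it, so this one file is the whole declared state.
apiVersion: apps.example.com/v1alpha1
kind: AppStack
metadata:
  name: payments
  namespace: payments-prod
spec:
  version: "2.4.1"      # desired application version; bump it via a Git commit
  replicas: 3           # the operator scales the underlying workloads
  database:
    storageSize: 20Gi   # the operator creates or resizes the backing PVC
```

Updating the application then amounts to changing this file in Git and letting your pipeline re-apply it, for example with `oc apply -f appstack.yaml`.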
You basically want to move away from managing all these Kubernetes primitive artifacts that make up your application stack, and instead have a piece of software that runs on the cluster and understands the application. Then you just apply GitOps to its configuration: you put the custom resource in Git and let the operator on the cluster worry about the rest.

Next question. Hi. It's a follow-up about what you mentioned about the application migration. So Anthos has a migration of VMs and other things into GKE. Does the migration also do something along those lines? The migration you mentioned before.

I suppose, was it on the VM side? I mean, there's an echo in the room and I can't hear.

Sorry, I'll repeat. Project Anthos says that they can migrate VMs from AWS, for example, into GKE and also other clusters. So does our migration for OpenShift do something similar?

So let me see if I got your question right. Are you saying, can you migrate an application or a set of applications from OpenShift running on one cloud to another cloud using that tool?

Not necessarily, but the VMs.

VMs, oh, the VMs, OK. No, this is actually application migration, so this is not at the VM level. We are also looking at how to migrate the control plane itself, if that was your question. But we are not moving the VM itself; we are just moving the application, the pods and the PVs. Maybe list out everything that the migration tool migrates. Yeah, right. So I mean, at a basic level, it is really migrating all the Kubernetes objects in that namespace and all the PVs that are in that namespace. So that includes the Kubernetes objects. So that's how the application migration works.

Right over there. Yeah, Wilks.

Hi. Our customers mostly use OKD on-premise in their data centers. And they are a bit afraid about OKD: what operating system can we use for installation? Is it just Atomic, or CentOS, or RHEL, or what would be the possibilities?

Well, I mean, OKD, I think it's going to be on RHEL CoreOS. I don't know if anybody knows this answer. I don't. I think in the short term, it's going to be CentOS based, and then eventually Fedora CoreOS based. But I believe right now the Fedora CoreOS bits exist but are not shipping yet in OKD, if that makes sense.

Hi, guys. Just a question on a potential GA for federation in OpenShift 4. Is that something on the roadmap?

It's something we're looking at. It was going to be in the 4.1 release, but we had to shove that out for various reasons around resourcing. I think it will still probably just be tech preview in the 4.2 release, as current planning stands. So we're doing our best to pull it in as quickly as possible. I'd love to chat with you about your use case after the session, though, if that's OK. And there's an operator on OperatorHub for it, so you can try it out. And it's tech preview.

Any more questions? Right down there. OK, I'll make Taro run.

Hi. Are there plans for supporting IPI for vSphere?

There are plans. I don't know if maybe it's beyond 4.2; maybe it's 4.3 or 4.4. So there are plans, for sure. Yeah, the guiding principle there is we want to have full-service provisioning for any platform that we can. So if there are APIs we can hit, so all the cloud providers, OpenStack, certain flavors of vSphere installs, we can. And so we want to do that for everything that we can.

Yes, OK, thanks. You said that OCP 4.2 allows disconnected installations. What are the requirements then for 4.1? What is a connected installation? Does it require full internet access, or just access from a certain Red Hat service to the installation? Because we are running on-premise, so it's locked down.

Yeah, I can take this at least at a high level, and I don't know if anyone has more technical details. So the two things that we don't have for 4.1 are fully disconnected installs, as well as understanding proxies for ingress and egress. And it's one of those things where it's been architected in, but it's not supported in 4.1. So a connected cluster is pulling container images from quay.io as well as Red Hat registries, and connecting with our update service hosted by Red Hat. And I think that's basically it. It's designed to be container image-based, so that a disconnected environment is just pulling from your registry instead of our registry. And then instead of connecting to a Red Hat update service, you would tell it: this is the exact thing that I want to upgrade to. Stay tuned.
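As a rough sketch of what "pulling from your registry instead of our registry" can look like in the later 4.x disconnected setups being hinted at here (the mirror hostname and repository path are assumptions for illustration), release content is mirrored into a local registry and the cluster is pointed at that mirror:

```yaml
# Hypothetical mirror configuration for a disconnected cluster: nodes pull
# OpenShift release images from a local registry instead of quay.io.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: local-mirror
spec:
  repositoryDigestMirrors:
    - source: quay.io/openshift-release-dev/ocp-release
      mirrors:
        - registry.example.internal:5000/ocp4/openshift4   # assumed mirror
    - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
      mirrors:
        - registry.example.internal:5000/ocp4/openshift4   # assumed mirror
```

The upgrade target is then given explicitly, as a specific mirrored release image, rather than discovered from the hosted update service.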
Hi, Mark from 6 Again. I have a question about Tekton. I think Daniel talked about it, that it would be included in OpenShift 4. Can I also use it in 3.x?

Tekton Pipelines? We're targeting 4. It's only 4. We're targeting 4.

OK. Yeah. Thank you. It's the end of Jenkins.

All right. Any more questions? Going once? Well, in the interest of time, because I think they kick us out of the room at 5 on the dot, I'm going to let Brian come up and close us out with the road ahead and thank the wonderful group of Red Hat product managers and tech folks here. And this is where you get the facial imprinting and recognition of those people. So for the next week, they'll be haunting the halls and giving presentations, so you can track them down and corner them and ask the questions you wouldn't ask in public today. So please do find us. We'll be in the booth. We'll be doing lots of things like that. So we're here to answer your questions. So thanks. And with that, thank you guys. Great work.