Good morning. Good afternoon. Good evening. Wherever you're hailing from, welcome to another edition of Ask an OpenShift Admin office hour. I am Chris Short, executive producer of this thing we call OpenShift TV. I'm very happy to have you all here with us today. I am joined by the one and only Andrew Sullivan. Andrew, how are you today? I'm hot. It's officially almost summertime, right? You know, Memorial Day is coming up. It's like 94 today here in Raleigh or something. Oh, you can keep that. It's supposed to tie the record high. So it's getting to be that time of year. I will say Detroit hit 90 degrees first this year over Raleigh, so we got that on you. But now it's 77, so I'm not gonna complain. So yes, hello, everyone. Welcome to this week's episode of Ask an OpenShift Admin office hour. With all of the office hours series of streams, our goal here is to interact with you, with our audience. We want you to ask questions, and whatever it is that's on your mind, we are happy to talk about it and address it to the best of our abilities. Chris and I both have an administrator background across multiple different technologies. I've flirted with being a developer at one point in my life, right, I stayed at a Holiday Inn Express the night before, and it didn't work out so great. So we can try and help with those questions, but we may have to phone a friend if that happens. And between the three of us, we can figure some of them out. In the absence of, or in addition to, your questions, we try to have a topic every week, where we talk about something that is important to our audience of primarily administrators, at least I think y'all are primarily administrators. So this week, we are joined by Tushar Katarki on the product management team for OpenShift to talk about subscription entitlements, right, all of those happy, fun things that are a necessary evil.
Chris and I laughed as we were setting up this episode because it's like, yeah, nobody wants to have to worry about those things, right? But it is the thing you have to worry about. Yeah, and it's important. So Tushar, I'll hand it over to you, please introduce yourself. Hi, thank you, Andrew. Hi, everyone. I'm Tushar Katarki. Since we're talking about weather, I'm in the Boston area and we're going to hit 90 here today. But the Memorial Day weekend looks nice and pleasant, like 60. And, you know, so it's all good. I'm glad to be here, Andrew, to walk through this topic. As you said, this is a very important detail that in some ways is also kind of exciting: anything about capacity planning and sizing, and how many clusters versus one giant cluster, all of those things. I don't know if we'll get into every detail, but there are certainly exciting topics, so ask the questions and, you know, let's make it interesting, and we can always follow up. Yeah. And I know, like all of the product managers, you're not responsible for just this one thing. You have a broad range of responsibilities. I see your name on more things than just about anybody when I look across the email list. So do you mind sharing any of the other things that you happen to touch? Yeah, I mean, you know, I have been on this team for many years now, and as a result, I have kind of dabbled in everything. One of the things that I do is I'm the AI and machine learning lead, as a workload on top of OpenShift. How do we make Kubernetes, you know, and OpenShift better, to make AI and machine learning a first-class citizen, is one way to look at it. And that brings me in contact with everything from GPUs and hardware accelerators to how MLOps works and how that fits into our DevSecOps story, et cetera. That's one example.
You know, in the past, I've been the product manager for storage and logging and compute and a few other things in the middle. So in general, I'm familiar with a number of topics, including things like this and our upgrades and updates and stuff like that. So happy to talk about a wide range of topics. Yeah, and I know with that experience and history, you also get involved in a lot of customer conversations. You are probably the number one most customer-facing product manager I know on the team, at least certainly from what I see. Yeah, no, I mean, that's definitely something that we love, talking to customers. Our rule of thumb is, if you have a meeting conflict, which one are you going to take? You're always going to take a customer meeting over an internal meeting or any other meeting. So we are lucky. I mean, as you know, Andrew and Chris, we are lucky that we have 3,000-plus customers, and they're already, you know, advanced users of containers, Kubernetes, and OpenShift, and they have a lot of interesting questions and they pull us in, and it's always very interesting to talk to them. Yeah, I'll definitely echo that same sentiment. You know, looking at the internal G chat and Slack and email, every single day I see emails going, wow, that's an interesting question, usually followed by, I didn't know that was possible. You know, so, every day. Yeah, the things that people are doing are just incredible and amazing, and sometimes it's one of those, like, I genuinely didn't know that it was even possible to do that. Never mind that it's actually supported, like somebody thought ahead and actually did all this stuff. So yeah, no, in fact, I was reviewing something just recently and they had a very interesting H.A.
architecture of how to deploy OpenShift, and I'm like, oh, wow, yeah, it is possible, obviously, but I never thought about deploying OpenShift in this way, you know. So you kind of live and learn every day. So that makes it definitely interesting, I'm sure. All right, well, I do see we have at least one question here. I think the one from Adil is maybe from the previous show. Yeah. So I do see Nick asking, is it possible that we have transcripts for the shows, for non-native English speakers? Sometimes it's hard to understand what we're talking about. Not today. This is a topic that I know Chris has been working quite a bit on behind the scenes, so I'll let you take it. So here's the thing, getting transcripts of videos is often highly inaccurate. I'll just leave it at that. There's a lot of great software out there trying to make that better. It's finally gotten to the point where I think we could start doing that. Now the question is, great, we've got all these transcripts, where do we put them? And we're actually talking about it right now in Slack, Eric and I and Andrew, you know, about like, hey, I wish you could do a speaker-deck kind of thing where we share our slides, a similar thing, right? Like I wish there was a site for transcript sharing. And yeah, so Adil also had... okay, yeah, I see your comment there. Yeah. So as a consolation, Nick, we do have the blog posts, at least for this show, I don't know about the others. But for this show, every Friday we have a blog post that comes out that summarizes the topic, and I put links in, so basically, you know, at timestamp X we talked about topic Y, in order to make it a bit of a reference, to make it easier to find. And Chris and I have had this conversation multiple times about content discovery.
It's easy for folks to go and search openshift.com and find content that is relevant to them that happened to be in a livestream, because, very much to your point, without the transcripts it's hard to discover that. So Adil, your question: OpenShift 4.7 Service Mesh is based off of Istio 1.6, which is quite old. Any idea which Istio version will be used by the OpenShift 4.8 Service Mesh operator? So I don't know the answer to that. And the reason why I thought it might be from the previous show is because Langdon is actually our Service Mesh expert on the tech marketing team. Tushar, I don't know if you have any insight on that. I don't know off the top of my head, but I can quickly take a look at our roadmap and see if we have mention of it. Otherwise, we'll get back to you with that. I mean, I definitely don't want to mess up specific version numbers, for sure. Yeah, so I think what we can do, while you all are researching that, I'll actually do my weekly top-of-mind stuff and give you a couple of minutes to see if you can find the answer. Sounds good. So for those who have seen the show, one of the long-running, I'm going to call it a segment, even though we don't have like a special segment thing, is top of mind. These are things that I have seen either internally or externally that I think are interesting or important or relevant to you all, to our audience. So with that, let's see, where's my list? First I'm going to share my screen. Oops, as soon as I remember how to use Zoom. The Zoom fun, I know. Let's see. Hopefully you are now seeing an OpenShift console. Yep. So the first thing, just to reinforce that whole blog post thing. This is the blog post from last week. You can see it's just openshift.com/blog, and then you can look for the Ask an OpenShift Admin hour content. We also have a category. Usually they all fall into the OpenShift TV category regardless of the other topics. So you can go through here.
You can see all of those summary blogs of all of the topics that we've covered and find that content at any point in time. So the reason why I had that blog open was actually because of one of the things we talked about last week, which is a CVE. So last week... not now, Siri. Siri hears what it likes to hear. I've got to move the phone out of here. I put it on silent and everything, but every time... no, that doesn't work. So the CVE, which is the symlink exchange attack with runC, you can see it's CVE-2021-30465. The CVE continues to evolve, continues to be worked on. You see here it was, let me refresh the page, it'll probably say updated two hours ago now, three hours ago. So the important thing here is, if we scroll down, where is it? With OpenShift 4, so 4.5, 4.6, and 4.7 are all affected. If I look at the advisory update that's inside of here, you'll note that it says that there are specific versions that you want to update to in order to address the CVE. So 4.7.12 if you're on the 4.7 line, and if we look at 4.6, 4.6.30, right? So whichever version of OpenShift you happen to be using, just pay attention to the CVE. I'll post a link to that in the chat if you haven't already, Chris. I have not. No, thank you. So I'll drop that link in there. Whichever version you happen to be using, be sure to pay attention to that advisory and then deploy that update at your earliest convenience. So the next thing I wanted to talk about, let's see, oh, this. So I was, and it seems like I do this regularly, we've had folks on the show a number of times talking about the installation process, how to troubleshoot the installation process, how to look at it and all that other stuff. A field person actually pointed out to me, and you can see that this is not new, it was just new to me, that we actually have a GitHub page.
The engineering folks created this GitHub page a year ago that describes how to dig in and how to troubleshoot bootstrap failures when you're deploying clusters. So I will share this link as well. It's super helpful information if you happen to be deploying, encounter some issues, and want to dig into that yourself. I was helping some folks on the Kubernetes Slack the other day troubleshoot an OKD deployment, which is how I discovered this page. Nice. Another thing that came up, and we've talked about this also on the show before: I had an account team reach out to me asking about the cluster network ranges. So let's look at the docs here. I'm gonna go to the docs, I'm gonna go to the install, we'll choose installing on any platform, and I want the install-config.yaml. What they were asking me about is this clusterNetwork here, where the default is 10.128.0.0/14. That is a massive swath of IP space, right? A /14 is, yeah, it's a lot of IPs. And their question was, why do we need so many? Why does that need to be so large? And I tried to find where we talked about this before and I couldn't, so I'm gonna talk about it again, and this time hopefully I'll remember to actually mark it. So this is broken up into, and the way that you look at this: there is one large network, this /14, which is then broken up into, in this instance, the host prefix is 23, so multiple /23s, with one assigned to each node in the cluster. With OpenShift SDN, what happens is when the SDN is deployed, each node is assigned a /23, and then it does NAT for the pods that are on that node. And it pulls those /23s out of this larger /14. So how do I know if I want to use a /23 or a /24 or a /25 or whatever that happens to be for each node? It depends on the number of pods that you intend to have on the node. Remember, with OpenShift we support a maximum of 500 pods per node. A /23 is 512 IP addresses.
So if you want to have up to 500 pods on your nodes, use a /23. If you only intend to have, say, 250 pods per node, use a /24, and so on and so forth. What I don't know is whether or not you can adjust that day two, after deployment. I think you can adjust the host prefix; I don't know if you can adjust the network CIDR here. So how do you know how large you want the cluster network CIDR address space to be? That depends on the number of nodes that you have in the cluster. By default, we have 512 /23s in a /14. And the way that you get that math is, remember this is two to the power of the prefix difference. You subtract 14 from 23, which is nine, you raise two to the ninth power, and you get 512. So that's how I very quickly calculate that I can have 512 hosts, effectively, inside of a /14 if each one is given a /23. I think the maximum tested number of nodes on a cluster is 2,000. Am I remembering that correctly, Chris? I wanna say 2,000? I'm not too sure, you might know that better than us. 2,000, okay. So if you intend to have 500 or fewer nodes in your cluster, a /14 is great. If you intend to have between 500 and 2,000 nodes, then you will need to make this a larger subnet, as the case may be. And then the service network is, as the name implies, the IP space that is used for services. Anytime somebody creates a service object inside of the cluster, it pulls out of this IP space. So that is entirely dependent on the number of services you expect in the cluster. Hopefully that makes sense. I will be sure to put this one in the blog post so that we can share it and make it accessible for future generations. For all of posterity, as it were. All of posterity.
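As a quick sanity check, the subnet math walked through above can be written down in a few lines of Python. This is just an illustration: the function names are ours, and the defaults shown are the install-config.yaml values discussed in the show (clusterNetwork 10.128.0.0/14, hostPrefix 23).

```python
import ipaddress

def pods_per_node(host_prefix: int) -> int:
    """IP addresses available in each node's /hostPrefix subnet (IPv4)."""
    return 2 ** (32 - host_prefix)

def max_nodes(cluster_cidr: str, host_prefix: int) -> int:
    """How many /hostPrefix subnets fit inside the clusterNetwork CIDR."""
    prefix = ipaddress.ip_network(cluster_cidr).prefixlen
    return 2 ** (host_prefix - prefix)

# Defaults: a /23 per node gives 512 addresses (covers the 500-pod max),
# and a /14 cluster network holds 512 such /23s, i.e. up to 512 nodes.
print(pods_per_node(23))               # 512
print(max_nodes("10.128.0.0/14", 23))  # 512
```

Widening the cluster network by one bit, say to a /13, doubles the node count to 1,024, which is how you would cover the 2,000-node range mentioned above (a /12 in that case).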
So a couple of weeks ago, maybe three weeks ago, we had Mark Russell on to talk about CoreOS. And one of the things that we talked about with Mark and Derek was being able to use Butane to create Ignition configs for configuring the host. Specifically he highlighted, I think it was creating a mirrored boot disk, but it's also fully supported to do things like partition the disk. So I can use a second partition, or I can use an entirely separate disk, for /var if I so choose. And some people like to do this because it means I can have a small operating system disk that is on maybe a lower tier of storage, and then a very large, very high-performance disk for /var, which is where all of my container images and everything else is gonna land. So I bring this up because I was asked, well, what if I want to further add more disks for other parts underneath /var? So I want a disk for /var/lib/containers, I want a disk for, and now it's escaping me where the emptyDir storage comes from. Yeah, it's somewhere underneath /var. And, you know, essentially having two or three or four disks attached to the node in order to separate that out. So there's nothing that technically prevents you from doing this. You can see here, Kubernetes only supports two file system partitions. It will technically work; you would definitely want to confirm with the support folks whether it will be supported. But really importantly, if you do this, it could potentially affect Kubernetes' ability to detect and resolve out-of-space conditions. So let me grab this link that I thought I already had open in here. We'll post that there and I'll also post it over here. Effectively, this KCS talks about how there are two partitions that it is able to monitor. And I guess I should open it in a browser that I am logged in to. I guess I can log in real quick.
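For reference, the kind of Butane config being described here, a separate partition for /var, generally looks something like the sketch below. Treat it as a sketch only: the variant/version pair, device path, and partition sizes are placeholders, so check the OpenShift docs for your release before using anything like it.

```yaml
# Butane sketch: carve a /var partition out of the install disk (worker role).
# The device path and start/size values below are placeholders.
variant: openshift
version: 4.8.0
metadata:
  name: 98-var-partition
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  disks:
    - device: /dev/disk/by-id/<device-id>
      partitions:
        - label: var
          start_mib: 25000   # leave room for the root filesystem
          size_mib: 0        # 0 = use the remainder of the disk
  filesystems:
    - device: /dev/disk/by-partlabel/var
      path: /var
      format: xfs
      mount_options: [defaults, prjquota]
      with_mount_unit: true
```

You would feed a file like this through the `butane` CLI to produce the Ignition/MachineConfig that gets passed in at install time.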
So when we partition that drive, or when we add a separate partition for /var, we run the risk of... see, there's nodefs and imagefs, which are the two that the kubelet monitors. Effectively, it loses visibility beyond those. And that can be detrimental for a couple of different reasons, the biggest one being: if it fills up, normally the kubelet takes action. It begins evicting pods, asking the scheduler to reschedule them onto other nodes until it can free up enough capacity so that things are not interrupted and operations continue as expected. Without that ability to monitor imagefs in particular, effectively it can fill up. And when that happens, much like any storage system, effectively everything becomes read-only, and you, the administrator, would have to take some sort of action to, say, reschedule these nodes, or excuse me, these pods, right? Free up some of that space through garbage collection so that it can resume normal operations. So again, fully supported, you can absolutely create multiple partitions, just be cognizant of some of the issues that can occur if you happen to do that. Make sure to take advantage of the monitoring capabilities and, importantly, configure Alertmanager, right? You want to make sure that you are aware when something starts going sideways. And what I always say is, you see here the default eviction thresholds; the same thing goes for alerting thresholds, not everybody should have the same thresholds. I'll pick numbers at random here. If I have a 100-gigabyte disk and my eviction threshold is 10%, that only gives me 10 gigabytes of spare space. If something is writing at a gigabyte a minute, that's 10 minutes of reaction time before things go really bad. If it's a terabyte disk, well, that's now a hundred gigabytes of space, so at that same one gigabyte per minute, I now have over an hour and a half to do something.
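That back-of-the-envelope reaction-time arithmetic generalizes into a tiny helper. A sketch, using the same made-up numbers from the example (100 GB and 1 TB disks, a 10% threshold, writes at 1 GB per minute):

```python
def reaction_minutes(disk_gb: float, threshold_pct: float,
                     write_gb_per_min: float) -> float:
    """Minutes between crossing the eviction threshold and the disk filling.

    The threshold reserves threshold_pct of the disk as spare space; divide
    that by the write rate to see how long you have to react.
    """
    spare_gb = disk_gb * (threshold_pct / 100)
    return spare_gb / write_gb_per_min

# 100 GB disk, 10% threshold, 1 GB/min of writes: 10 minutes to react.
print(reaction_minutes(100, 10, 1))    # 10.0
# 1 TB disk, same threshold and write rate: 100 minutes.
print(reaction_minutes(1000, 10, 1))   # 100.0
```

The point of the exercise: the right threshold depends on disk size and expected write rate, not a one-size-fits-all percentage.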
So just be, again, aware, cognizant of what those thresholds should be, how quickly your organization can react to an issue, and of course how quickly you get the alerts. If it's on your work phone that you turn off after hours, that might be bad. Okay, I'm done. So that's all I've got for this week. I will unshare. Unshare, and we have a question. One question about subscription calculation: is there a tool to gather the subscription information based on the current cluster configuration in real time? It is annoying to change the node labels and taints and wait for several hours to find out if the changes are successful, particularly when OCS and infra nodes come into play. So it sounds like they're kind of asking for, like, look at my cluster and tell me what my subscriptions are and how can I fix them, kind of thing? I mean, you could do that with cloud.redhat.com, which we call OpenShift Cluster Manager, OCM. You could do it also with ACM if you're using ACM, which is the multi-cluster manager. So hopefully, I mean, in fact, I was going to show you a page later about OCM, because that's a great question. It's a great seed question for my content, because one of the things that you can do with OCM is you can go there and see what your subscriptions are and how they've been used, et cetera. Wonderful. Yeah, and I think for cloud.redhat.com and OCM, I think that that's based off of how frequently it reports the metrics, or, sorry, telemetry. Right. Assuming you have telemetry enabled, first of all. Yeah, so I don't know if there's a way you can manually trigger a telemetry report or if it just happens periodically. That would be an interesting question. I'm not sure who to ask. We'll have to dig around and see if we can find that out. Yeah. Okay. But there is a cost explorer in there. There's also cost models, and your subscriptions are all laid out there as well.
So that'll kind of give you an idea, but as far as how to trigger, like, I want the latest and greatest data right now, I'm not sure, yeah. Yeah, and Tushar, your insights and your help here will be very, very valued: if you, you know, bork that configuration and you fall out of compliance for a couple of hours, it's not like we're gonna send the Red Hat license police to your business or anything like that. I don't want to say we're lax about it, but we're understanding of how these things work. Yeah, absolutely, exactly. In fact, I was gonna say, yes, we could potentially see if there's enough demand for that, but to answer the question, it's not a big concern as far as we are concerned from a Red Hat point of view. And obviously, yeah. The frequency of telemetry data is the problem. It would help if we could trigger the upload. Yeah. So I'll tell you what, Frank, I'm going to ping a couple of people that might know the answer to that while I hand off to Tushar to kick off the topic today. Yeah, let's do that. And let's see, the share screen is here. Let's do this. Okay, share screen, and we'll just do this. Okay, let's see. All right, let's go into the presenter view. There you go. Chris, you can see it. Okay. We'll keep it, I mean, we've discussed whether we need slides or not. I think slides just provide a little bit of an anchor, but obviously, feel free to stop me and ask questions, Chris or Andrew. I guess I'll interject. Yeah, me too, yeah. So anyways, you all know this, but just as a level set, and I'm not going to belabor it: OpenShift and Linux are kind of our container platform, Red Hat Enterprise Linux and Red Hat OpenShift, and they form the basis for our open hybrid cloud, to run basically any application on any infrastructure or cloud in any location, right? So that's effectively what we are saying.
And hang on, I might not have it, I might have it, okay. So what I want to do, yeah, this is the one, I don't know, I want to skip that one. So this one is really about how you, as a customer, as a user, can consume OpenShift. And you can see, fundamentally, you can consume OpenShift as a managed Red Hat OpenShift offering or as self-managed. Managed really means that somebody other than you is managing OpenShift for you, and in the case of self-managed, you as the admin manage OpenShift. Either way, you can run OpenShift on many different clouds, and we have offerings on AWS, Azure, IBM Cloud, and Google Cloud, whether you want it as a managed service or you are self-managed. And then furthermore, if you're self-managed, you have a choice of OpenShift Platform Plus, OpenShift Container Platform, or OpenShift Kubernetes Engine, and we'll talk about what that means in a later slide, right? At the heart and core of it, it is Kubernetes running on Red Hat Enterprise Linux, or RHEL CoreOS specifically, which is really, as you all know, container-optimized Red Hat Enterprise Linux. And then Kubernetes Engine, the way I look at it, if you want to purely just run containers and not use anything else, then you'd use Kubernetes Engine, and you'd be able to use, obviously, Kubernetes, but also be able to install the platform, get updates, and networking and storage, monitoring and log forwarding, some of those services come with that. If you want to then say, hey, you know, I want to do more stuff than that. I want to actually use service mesh or serverless, I want to use GitOps, I want to do full-fledged log management, I don't want to just forward logs,
I want to actually host the logs, or I want to build cloud-native applications and I need things such as process automation, different languages and runtimes, or API management, or databases, especially if you're looking at AI and machine learning, or you want to increase your developer productivity with things like the developer CLI or IDE. If you want to do all those things, then you would go with OpenShift Container Platform, which really has some combination of all those different tools. And many of you are already familiar with that, so I won't go into details, but just to give you a sense of how OKE, or OpenShift Kubernetes Engine, is different from OpenShift Container Platform: you can think about OpenShift Container Platform as a developer plus CI/CD platform, whereas the Kubernetes Engine is kind of a pure container platform, you know? And then, as I said, OpenShift Container Platform and Kubernetes Engine are all the same product bits. So... Tushar, if you don't mind, I have a couple of questions. So one, as someone who has never entitled an OpenShift 4 cluster, I should say, it's been since, gosh, the 3.10 days, 3.11 days, the last time that I did, before 4 was released anyway. When I go in and entitle a specific cluster, what does that look like? I go to cloud.redhat.com, right? And essentially I'm associating the cluster with a particular pool of entitlements. And where I'm going with this is, I can't mix, like I can't have some nodes that are OKE and some nodes that are OCP in the same cluster. That's correct, yeah. So I mean, I think a good way to look at it is that, first of all, unlike OpenShift 3, one of the things that we did with OpenShift 4 is what's called core-based pricing. In other words, you buy a bunch of subscriptions.
When you buy a bunch of subscriptions, N units, you get, let's say, N entitlements, and then you can use the N entitlements to run N worker nodes. I'm kind of simplifying it. And the N worker nodes can be in one cluster or can be distributed across X other clusters. So that's kind of what we are saying. You don't entitle per node anymore. You entitle per cluster or group of clusters. So that's number one. But in doing that, obviously, to answer your specific question: a cluster is either an OpenShift Container Platform cluster or an OKE cluster. It cannot be a mix. There is nothing called mixing, you know, and so that's kind of how I would answer that. And the way to look at it is really, and then, yeah, let's just leave it at that. It all depends on, on top of that, what services would you like to use? If you want to use some of these additional services, as I showed here, then you would use OCP. If you don't need those services, then you'd use OKE. And moving between them is literally the click of a button in the subscription manager. You said this before, and I'm going to reinforce it: there is no difference in the bits and in the deployment between an OKE cluster and an OCP cluster. And if you choose OKE first and decide, you know what, I do need to be able to do all these things, it's just changing that entitlement. And then, you know, you go in and deploy the relevant operators and you do your thing. Exactly. And that's actually an important detail, Andrew, that we can double-click on. Because the beauty of it is, I don't want to say 100%, but like 99% of what you install is effectively OKE at day zero. So in some ways, think about it: it doesn't matter whether you install OCP or OKE, you have installed an OKE cluster. Like, that's day zero.
And then on day two, through that workflow that you just mentioned, Andrew, you go to your embedded OperatorHub and you start installing these different services. All of these services are individual operators, so you then pick and choose which ones you want. So that is a nice little distinction right there. A good rule of thumb, the thing to think about, is that your OpenShift installer installs a quote-unquote OKE cluster, even though you have an OCP subscription, and then you can add things to it as you go. I actually never noticed that until you just pointed it out. Like, yeah, when I've finished deploying a cluster with openshift-install, it's effectively an OKE cluster, and then I choose to go in and deploy logging and all of the other stuff, and that's what turns it into an OCP cluster. Exactly. So that makes it kind of easy to do that kind of movement, as you said before. Yeah. Nice. We have a question here from Christian: when will consumption-based pricing be available? Consumption-based pricing is a great question. Consumption-based pricing is available under what we call a limited availability program right now. So if you're interested, talk to your account team and they will guide you through that process. So it's available, but on a limited basis; it's not generally available to everybody yet. We are getting some more critical feedback from customers, et cetera, and then we're going to announce it more formally in the second half of this year. But it's available now. I'll ask an extension to that question: with the managed services, how is that done? That's a great question. That's a great nuance. In fact, managed services are on demand already. So thanks for noticing that, Andrew.
I mean, the answer that I gave about on-demand pricing is really more for the self-managed SKUs, which are the Kubernetes Engine, Container Platform, and Platform Plus. But on the other hand, if you are thinking about the managed service, that's already on demand, because it's hourly and we have a billing system in place for that. Got it. So if you're using Azure Red Hat OpenShift, Red Hat OpenShift on AWS, and let's see, the other one, it used to be called ROKS, and it's now something else. The IBM one. IBM, yeah. Yeah, they're all already consumption-based. Yeah, that's good to know. Okay, that's all the questions I had. I am curious about OpenShift Platform Plus. Right. Because, Tushar, one of the reasons I had asked you to come on the show is because you and I collaborated a lot, you did most of the work, but we collaborated a lot on the update to the subscription guide, which was mainly driven by the announcement of Platform Plus. You're being very modest, Andrew, but we'll leave it there. No, but yeah, a lot of this OpenShift Platform Plus stuff is very exciting. As you know, somebody asked about how many nodes and how many clusters, and in fact, when you think about that... but anyways, what I'm saying is, without belaboring it too much, customers need management, especially with more than one cluster, and they need security. I mean, everybody needs that. We see vulnerabilities, security threats, all the time now. And when you think about multiple clusters, you're thinking about multiple locations, you know, you're thinking about multiple teams, all those things. And for that, you need a global container registry. So what we said was, hey, you know, our customers are getting sophisticated. They're going beyond one cluster or two clusters or five clusters. They're going beyond geographical boundaries. They're going beyond, you know, perhaps lines of business within their organizations. As admins,
You might be servicing more than one line of business. You may have started small, and now, years in, because of the success, others are coming, and we are seeing that. And so what we said was that we would put that all together into what is effectively a nicely integrated bundle, which is OpenShift Platform Plus. It includes, as shown here, Red Hat Advanced Cluster Management for Kubernetes, ACM, which provides observability, discovery, and policy compliance for both the cluster itself as well as for workloads and configuration. Then you have cluster security, which comes through the StackRox acquisition and is now branded as Red Hat Advanced Cluster Security for Kubernetes, ACS. That provides vulnerability management, network segmentation, and threat detection and response. And then you have the global registry with Quay. Some people pronounce it "kway," some say "key," I can never tell. In the UK and Australia, it's "key." Everywhere else, it's "kway." And the product manager for it, oh yeah, he's in Germany, so I guess he says "kway," so I'll say "kway." So we have the global registry there, which has already been around. So we put this all together. This really is a nice, what I would call a multi-cluster story, but I don't want to just leave it there. Even if you have one cluster, you want to manage it. Even if you have one cluster, you want to do security and scanning. Even if you have one cluster, you've got to think about DevSecOps and how workloads are getting onto it. And you've got to think about a global registry. So I think that's what OpenShift Platform Plus is. So, Tushar, you have a pretty broad view across the customer base.
Are you seeing any patterns in, "cluster count" is the phrase I'm going to use? I feel like in the OpenShift 3 days, it seemed more common to have a small number of very large clusters. Are we continuing to see that, or are we starting to see a shift to more, smaller clusters? Yeah, that's a great question. The TL;DR is smaller clusters, but everything is relative, right? Like, the smaller clusters of today are still quite comparable to the larger clusters of three years ago, because the equipment is much more powerful. But the trend is definitely more clusters versus bigger clusters. That, I would say, is definitely the trend. And there are reasons for that. One is simple to understand, which is that everybody understands the blast radius. If the clusters get too big, there is a chance, no matter how much we have all planned, Kubernetes is highly available, you might have your infrastructure highly available, you might have your data center highly available, you have recovery scenarios, disaster scenarios, but still things can go wrong. And when they go wrong, how do you isolate? Maybe there is a security threat; how do you isolate? So that consideration of the blast radius is one point, and that's kind of the defensive move. The other point is, I think, more positive. I don't know if I want to use the word offensive, but the offensive move, or the proactive move, is to say, hey, more and more developers, more and more devs and ops and agile practitioners within your organization want to spin up clusters on demand. And sometimes they use them as an individual user, or maybe in a small departmental setting, et cetera. And so that is also creating this demand for more clusters.
So in fact, one of the tenets of OpenShift 4 really was the ability to create a cluster quickly and to provide full-stack automation. So when we think about the full-stack automation, the IPI installer and things like that, which you guys have already seen, and in fact the fact that you can go to OCM, click create, and use the UI to create clusters, that's what it uses. But anyway, that allows you to create clusters on demand and potentially take them down on demand. And interestingly enough, Andrew, as you know, you can also add capacity, with auto-scaling of nodes, et cetera. We made all of that very easy with OpenShift 4. So long story short, the long-winded answer to your question is that the trend is definitely more clusters, for sure, more smaller ones. It's like thinking about vertical scaling versus horizontal scaling: the trend is definitely towards horizontal scaling at the cluster level, if you want to call it that. And Christian also has a question that I think is related, which is: if we're seeing more clusters, most likely there are a lot of folks doing automated deployments of those. And I think ACM certainly falls into that category of, I can click a button in ACM and get a new cluster. Same thing with Helm, et cetera. So is there an automated way to entitle those clusters, or is that a manual action that has to be done after the fact? That's a great question. I haven't tried it myself, so I can't answer off the top of my head. But I'm sure that when ACM creates clusters, or when you import clusters, there is a way to entitle them automatically. There might be a workflow for that. In fact, I remember talking about it at one point, but we can clarify that later, Andrew, right? So I think there is, but I'm not 100% sure.
Okay, and that brings up a good point: ACM is something that I think is really interesting and that I know very little about. So I'm hoping to bring that team on to a future stream to talk about the capabilities there. And my point being, I am doing show planning, topic planning, for next quarter, for Q3. So if anybody has any thoughts, suggestions, or ideas, let me know. Happy to incorporate those at any point in time so that we can cover the topics that are interesting and relevant to you all. I should point out the next Advanced Cluster Management office hour is on the 8th at 2 p.m. Eastern. That's a good point. I always forget that they have a show as well. They have a show, and they always bring everybody on board to talk up their new features and everything. It's a wide-ranging show. So if you have questions about ACM, please feel free to show up to that and we'll get them answered. So, Tushar, we've got 13-ish, 15-ish minutes before we reach the top of the hour, so I want to make sure that you have time to cover anything that... Yeah, I know. What I'll do, Andrew, is I'll just quickly run through this, and ask me questions or stop me at any point. This has been really good; I don't have to cover all these slides at all. And in fact, for everything that I covered in these slides, Chris has said that he'll take the slides and upload them somewhere. I've put specific URLs on most of the pages, so you can follow along and everybody can get it. Everything that I show here is available publicly from one of our Red Hat sites, so you can go and access it. Yeah, and for the audience, please, if you have any questions, anything that comes to mind, just let us know. And that includes after the show; at the close I'll reiterate, but feel free to reach out to us on social media. I'm @practicalandrew on Twitter, or you can email me, andrew.sullivan@redhat.com. But Tushar, please. Yeah, yeah.
So I think most of you are familiar with what a subscription is: it basically entitles customers and users to download Red Hat tested and certified software, in this case OpenShift. And it provides access to guidance, stability, security, confidentiality, and other things, which we'll cover on the next slide. So effectively that's what it is: when you buy a subscription from Red Hat, you are entitled to use the software and get our, quote unquote, support and services associated with it, things such as ongoing delivery, technical support, commitments on certifications, software assurance, and expertise. And I think that last one is really very important. Those tons of customer interactions that Andrew and I talked about earlier, a lot of them happen because we get in front of customers and customers have questions. And then we have other things; Andrew was working through some knowledge base articles, et cetera. We all collaborate, and it's really kind of unique. You don't have to dig through Google searches; you can potentially find this information much more readily. I'm not saying don't do Google searches; hopefully they'll land you on the knowledge base article anyway. But more importantly, you know that it is trusted, et cetera. And then training and other additional things are also part of that. So we offer basically three types of commercial subscriptions, and again, the details are there, pretty obvious: standard, premium, and self-support. Premium is typically used for production because you have access to 24/7 support. Standard could be used for anything that requires standard business hours. And self-support is really, you can't call us; you only have access to knowledge base articles, et cetera.
The one thing to remember is that we don't really have self-support for OpenShift; we only have standard and premium for commercial subscriptions. The other subscription type, and this is actually perhaps more interesting and probably less known, is the individual developer subscription. It's for the individual, and we define that as a natural person, not an organization. It allows you to use certain Red Hat subscription services in connection with the software, including OpenShift, for individual development use or even individual production use, at no cost. So for, quote unquote, developer kinds of things, you could use that. We have evaluation subscriptions; you can go and get 60-day evaluations, and you can do that anytime for both self-managed and hosted managed services. And then you have partner subscriptions. This is for the partners among you: if you want access to the software, you can do that through the partner program. And, I didn't know this until we were chatting before the show, you can use a developer subscription with OpenShift. Yes, that's correct. And how many nodes is it? It is 16 nodes, yeah. That's a good-sized cluster, you know. So as an individual, if you want to try out the advantages of OpenShift, you can do that. Yeah, I mean, I know we've been adding a lot to the developer subscriptions, like Red Hat in general, the number of RHEL instances you can deploy, and there's no cost to a developer subscription, or some of the developer subscriptions, I should say. But yeah, that's really interesting. I can get up and running, deploy OpenShift, do all of these things, and have a fully entitled cluster to build my application with.
And, you know, if CodeReady Containers, sorry, CodeReady Workspaces, you know, CodeReady Containers, yeah. Containers. Yeah, I'll get it right eventually. CRC. Yeah, if CRC isn't enough, right, on my developer laptop, if my laptop can't run it, now I can deploy a full, real OpenShift cluster and keep it up and running for longer than that 60-day eval. Yeah. That's really awesome. Absolutely, yeah. No, it's really good, so that's all right. And so anyways, this we talked about, so I'm going to skip it. These are the different, I think we talked about this also, I'm going to skip it, it's there. For self-managed, you can get Platform Plus, Container Platform, or Kubernetes Engine, and as I said, you run software built by Red Hat, managed by you, the customer. Then this is managed OpenShift, which is kind of very exciting, because many of you want to go to that OCM page and create a cluster on AWS or IBM Cloud or Azure and get that going. That's also something you can do, and there are different ways, but basically, as I said, somebody other than you is managing it for you. So, if you want to try it, now, if you're not an individual but part of an organization and you want to try it, you can always do the trial subscriptions. And I think the first one is the most exciting one. Obviously the managed service has a nice trial period, which is quite exciting. Self-managed also has one. I think the managed service trial is new; self-managed has been there for quite some time, so you might be familiar with it. The developer sandbox, I don't know, Andrew or Chris, if you've had a session on that, but the developer sandbox is really cool. It basically gives you access to an OpenShift environment for development and testing.
And it's made for developers, and it gives you access to an Eclipse Che-based IDE environment, Helm charts, build images, Git access, the S2I build tool, and you can basically get going using OpenShift. This is all for 30 days, in the web browser, no infrastructure needed; it's running on AWS basically. So it's quite interesting if you want to take advantage of that. The other one, which we sometimes don't talk about, and I love this one: if you don't want to do any of that, you just want to learn containers, Kubernetes, and OpenShift, or you want to learn GitOps, or you want to learn event-driven architecture with Apache Kafka and how that looks on OpenShift, or you want to develop with Quarkus and how that looks on OpenShift, or you want to do AI/ML, this is a screen capture, but if you scroll down, there's AI/ML, if you want to run Jupyter notebooks. The nice thing is you can go to learn.openshift.com, and there are all these self-paced, Katacoda-based tutorials, which is great; I use it all the time. And one of them is actually a playground, and I love that, because you can go in and, if you want a 4.6 environment, you can do 4.6; if you want a 4.7 environment, you can go into 4.7 and do many, many things. You can even deploy operators from there, you know what I'm saying? I was going to specifically mention that, because I use the sandboxes all the time. Yeah. It's, I'm pretty sure this is going to break the cluster, let me go try it. Yep. And back in the before times, when we would physically go to conferences and work in a booth, I would use those sandboxes all the time to give demos from the booth. Yeah. So yeah, it's an awesome resource. Yeah. So the subscription types, I think we talked about this a little bit.
You know, so basically we have two fundamental types of subscriptions: you can do core-based, which is the normal one, or you can do socket-based, for bare metal only. And then there's standard and premium. And then you have other things, like middleware and other bundles that you can run on top of OpenShift, that are part of that price book. So that's what I wanted to say there. These are some details; I'll see what I can cover in the next few minutes, but the subscription guide that I have linked here has all the details. So I'm going to interrupt you real quick. Don asked a question about how we entitle clusters on disconnected networks. Okay. So on disconnected networks, you can still go to OCM, and then you basically report and say, these are the number of subscriptions that I'm consuming. We know that those clusters are not reporting, and that's okay. You just offline inform us of how much you are using. That's basically how you do it. Yeah, so it's effectively an honor system? It's effectively an honor system. Okay. So yeah, we rely on you, the administrator, to manually go and say, I'm using this many entitlements on my cluster that is disconnected. Yeah. Okay. I mean, it's an honor system anyway. One of the lessons that we learned in OpenShift 3 is that we introduced auto-scaling of nodes in 3.11, but then we added much harsher subscription requirements, and it actually prevented people from auto-scaling in a production environment. And we said, no, we don't want to do that anymore. So this is always, quote unquote, in some ways, like, we don't prevent you from running anything.
Because if you want to auto-scale, we don't want to be like, oh, you don't have something, so we won't auto-scale for you when you need it. So that's kind of, anyways. Don, I see your question there about how support tracks tickets. I'll reach out to a couple of support folks and see if we can get an answer to that. We'll follow up next week on that one, unless Tushar knows something off the top of his head. Support tickets, the only thing I know is that you go to access.redhat.com, and there you can create support tickets. I myself have not done it, but that's effectively what it is. And you get to see tickets through that same portal. That's what I think it is: you go to access.redhat.com. Yeah, and I would suspect that if they ask for a must-gather, which, if you're on a disconnected network, would be hard to get out, right, that'll have things like the cluster ID in it, which they can then use to compare to what's in cloud manager to determine entitlement status and all that other stuff. So, Tushar, there were two things that I wanted to cover. One of them I've completely forgotten, so I'll ask you about the other one. It was one of the most, I'll say not controversial, but one of the most edited parts of the updated subscription guide, and that is infrastructure nodes. So can you talk a little bit about what is and what isn't an infrastructure workload and the ramifications of that? Yeah, and Andrew, just as a preamble: this is important because the reason Andrew is asking is that the control plane nodes, the masters, and the infrastructure nodes, we don't charge for those. You don't have to pay for that subscription.
So therefore the question comes up: okay, there are things that I can run on these infrastructure nodes and therefore don't have to pay for, so what can you run on them? I think that's where Andrew is coming from. So effectively, I think I have a slide on that. These are all the things you can run on infrastructure nodes, right? The OpenShift registry, the ingress router. So let me step back. Anything that you would consider required to run the cluster, but that is not an application, not a workload, not something you're doing for your customer or your user, is how I would think about infrastructure. So examples would be the registry, the ingress router, monitoring, log management, HAProxy for cluster ingress, Red Hat Quay, OpenShift Data Foundation, which is what used to be OpenShift Container Storage, Advanced Cluster Management, Advanced Cluster Security, GitOps, Pipelines. Any of these you are running in service of either the platform itself, which is OpenShift Container Platform, or in service of monitoring some of the things that you are doing on the platform, but they are not applications. This includes, for example, custom third-party monitoring agents, CNI and CSI drivers, that type of thing, and hardware or virtualization enablement like accelerators. All these things, even though they're third party, are used in support of your main workloads and applications, but they're not workloads or applications unto themselves. I mean, nobody is running an OpenShift cluster just for logging, for example. Although, I mean, you could be tempted to run Elasticsearch. In which case, right?
Like, that's when you've crossed that boundary. If you ran Elasticsearch and now the whole purpose of that entire cluster is to run Elasticsearch, and you're sending logs to it from other clusters, from other places, then in that case I would say, hey, that's not in support of that cluster; you're actually running it as a workload, in which case you need to run it on entitled worker nodes. I want to be cognizant of time; it is the top of the hour. So the exception to that, I think, is ACM. If you have an OpenShift cluster that you deploy ACM to, and it's only running ACM, and you use it to manage and deploy other clusters, that's still considered a wholly infrastructure cluster. That's correct, yeah. And so is ACS, for that matter, to your point. Yeah, okay. And the other thing that I just got asked about, and I think I confirmed with you yesterday, Tushar, in chat, was that when you deploy an operator, even a third-party operator, the controller pods are considered infrastructure workload. It's the application it deploys that still requires entitlement; you still have to entitle those nodes. So if Chris Short writes his own operator to automatically record and upload streams, the controller pod that deploys the application is infrastructure; the pods actually doing the work are application. Yeah, that's this little item right here, yeah. Oh, and Fahad, apologies if I butchered your name, actually just asked the same question: yes, you can run third-party controllers on infrastructure nodes. So sealed-secrets, a custom controller or operator, all of that is considered infrastructure workload. It's just the operand, right, the thing, the application that it's deploying and managing, that is not. Yeah, that's good, yeah. Cool. Did we follow up: can we use infrastructure nodes for builds? Did you ask that, did you cover it or not?
No, I did not ask that. So Paul asked: can you use infrastructure nodes for builds? I'm assuming builds like Tekton pipeline type stuff? I'm assuming no here. Yeah, I mean, basically, because one of the values that OpenShift Container Platform brings is that it is a CI/CD developer platform. So in those cases we would consider it a workload, you know, builds. Yeah, I mean, CI/CD is a workload, correct? Yeah, exactly. Okay, yeah, just wanted to make sure. Did you see Don Weeks' follow-up? Did you address that, Andrew? I forget; I've been doing so much behind the scenes here. Yeah, about how entitlement for support tickets works. I'll follow up with support to make sure that we get a good answer on that. Yeah, so Paul says S2I builds. So no, that's an application, right, not an infrastructure thing? Like, the S2I process can run on an infrastructure node, but the output makes it not, I'm assuming. My assumption would be that the build config and the registry, right, where the image streams and all that stuff are kept, are infrastructure workloads. But when it triggers a rebuild of the pod, that action takes place on an entitled node, and then it pushes the image back in. And then if there's a rollout strategy or whatever, it is rolling out to an application that is on entitled nodes. Cool, okay. And Tekton, right, Pipelines, should be the same. That's good, yeah, awesome. All right, all questions covered. Yep, yeah, thank you. I actually missed one of those, I think it was Paul's, so thank you for catching that. So Tushar, anything else that you wanted to address? No, I mean, I think we covered the important points. I definitely encourage you to look at the slides. Chris has put them up; there are more details, everything is covered in detail. So feel free to go through that.
And if you guys have specific questions, then tweet at Andrew, or you can send me an email, tkatarki@redhat.com. Yeah, and short@redhat.com; I can route it to anybody that needs it. Do I get to choose between core and socket licensing? Is that a thing now? No, socket licensing is only available for bare metal. So if you have bare metal, yes, you can choose. But if you don't have a bare metal environment, or if you have bare metal and you're going to run VMs on top of it, then you cannot; you have to use cores. And how many cores are associated with a socket license? I believe it's up to 64 cores. It's either 32 or 64; I think the subscription guide has that. Yeah, I dropped the link to the sizing guide, so scroll up a little bit in the chat and you should have it there. So thank you. If you're using a physical server, you know, socket-based licensing is a thing. Yeah. Real bare metal, as Andrew likes to call it. Yeah, yeah. And to be clear, when Tushar says bare metal has socket licensing as an option, we're referring to physical servers, not the bare metal installation method, which is the non-integrated, platform-agnostic one. Okay, yeah, 128 cores per socket. That's a lot. Yeah, interesting. I wonder how we would handle that. It would be two subscriptions, I mean, assuming it's 64 cores each. Yeah, okay. Okay, so as Chris mentioned, you can reach out to us at any point in time. So, let me reset there, let me organize my brain a moment. First, thank you to Tushar for joining us today. Really appreciate your help with this. Like I said, we collaborated on updating the subscription and sizing guide, and I still learned stuff today, because it can be nuanced, right? There's a lot of stuff to keep track of there. So thank you so much for joining us today. And for our audience, thank you also; really great questions today. Please keep an eye out Friday for the blog post where we'll summarize, right?
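(To make the core-versus-socket arithmetic from that exchange concrete, here is a rough sketch in Python. The increments used, 2 cores or 4 vCPUs per core-based subscription and a 2-socket/64-core cap per socket-based subscription, are assumptions for illustration only; the subscription guide linked in chat is the authority.)

```python
import math

# Hedged sketch of the sizing arithmetic discussed above. The increments here
# are assumptions, not official numbers: a core-based subscription is assumed
# to cover 2 cores (4 vCPUs), and a socket-based subscription (bare metal
# only) up to 2 sockets or 64 cores, whichever limit is hit first.

def core_based_subs(total_vcpus: int, vcpus_per_sub: int = 4) -> int:
    """Core-based count: round the worker vCPU total up to whole subscriptions."""
    return math.ceil(total_vcpus / vcpus_per_sub)

def socket_based_subs(sockets: int, cores: int,
                      max_sockets: int = 2, max_cores: int = 64) -> int:
    """Socket-based count: constrained by both the socket pair and the core cap."""
    return max(math.ceil(sockets / max_sockets), math.ceil(cores / max_cores))

# 10 worker nodes x 16 vCPUs = 160 vCPUs -> 40 core-based subscriptions.
print(core_based_subs(10 * 16))                  # 40
# The "128 cores per socket" case from chat: one such socket would need
# two subscriptions at an assumed 64-core cap, matching "two licenses" above.
print(socket_based_subs(sockets=1, cores=128))   # 2
```

The point of the `max()` in the socket-based function is that a subscription is exhausted by whichever runs out first, sockets or cores, under the assumed caps.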
We'll create an index of those so that anybody can search and find those answers in the future. If you have any questions, if anything didn't get answered, or if you didn't have time to ask today, please don't hesitate to reach out. You can reach me on Twitter at @practicalandrew. You can email me directly, andrew.sullivan@redhat.com. Chris is short@redhat.com; you can remember that one. He's also @ChrisShort on Twitter. So please don't hesitate to reach out to us. Again, I'll make a plea: if there are any topics, any show subjects, that you are interested in, let me know. While I try to come up with all of these ideas, and Chris and I collaborate on occasion, I can only guess; if it's interesting to me, hopefully it's interesting to the audience. So again, please, if there's anything you're interested in, don't hesitate to reach out and let me know. Yeah, like, there's no topic that's too simple or too advanced, right? Feel free, whatever it may be, ping us, because it might be something that we need to do a better job of explaining, too. Absolutely. What's that XKCD about today's lucky 10,000? Oh, yeah, yeah, I forget the number on that one, but I remember it. 1053, yeah, I'll post a link to it. So yeah, very much: there is no wrong question. There is nothing where you should think, oh, everybody should know this, everybody already does, I'm the only one who doesn't. No, absolutely not. That XKCD is a great example of: let's take the opportunity, let's learn together. And literally every week I learn something on these shows. Yeah, the streaming channel has just been a wealth of knowledge for me about everything across the entire platform, the company's offerings, right? There's a lot of knowledge to be gleaned from these videos. So yeah. And don't hesitate to send your sales folks our way; we encourage them to watch the streams too. Yeah, point them at this.
And they'll get a little educated, and you can be a happier customer as a result. So, Sai Krishna, we're getting ready to close the stream, but please move over to Discord; we're all on Discord as well, and we're happy to help answer your question there about cluster operators degrading. Yeah, please join us on Discord, or if you can't do Discord, feel free to send me an email, short@redhat.com. All right, so thank you, everybody. Have a great rest of your day, a great rest of your week, and for those in the US, have a safe holiday weekend. Yes. And enjoy the extra time off. Yes. Stay safe out there, folks. Talk to you soon.