Let's get into the big topic of open source, something that we actually live. This is so awesome. We are an open culture, one that's actually part of the sort of process that a developer, or let's say the Kubernetes ecosystem, really brings. Welcome to this week's Ask an OpenShift Admin office hours live stream here on Red Hat Live Streaming. My name is Andrew Sullivan and I am, as always, joined by my lovely and effervescent co-host, Mr. Johnny Ricard. How are you today, sir? I'm good. How are you? It's a beautiful Red Hat day, Johnny. It always is. I've watched that intro video, I don't know, 50 times or something. How many times have we been doing this? So how many times have we had that video? And I was just noticing how much older I look at the very end of it. I'm like, man, has the last year really been that rough on me? Geez, I'm getting old. Over 50 is real. So anyways, yeah. Hello, everyone. Welcome to the live stream. So this is an office hours live stream here with Red Hat Live Streaming, which means that we are here to answer any questions that you happen to have, anything that's on your mind, anything that you'd like to talk about that is hopefully OpenShift related. We can't promise help with other things. We might try, but no guarantees. So for any questions that you happen to have, we're happy to do our best to help with the panel of experts that we have here. And I have to say that I have two of the smartest people at Red Hat that I know joining me today. If we can't help you, then we will reach back into the appropriate organizations, whether it's product management or engineering or wherever it happens to be in Red Hat, and we'll find those answers for you. And with that being said, I would like to introduce today's topic and our guest. So, this one came together over time.
It's the culmination of several weeks of coordination with our guest, Christina, but it's a topic that is near and dear to our hearts and something that is interesting and exciting to me in particular, as somebody who has held many roles myself. So we are joined today by Christina Kira Cadu. Nope, I messed it up. Kira Akidu. Close. Yeah, that's correct. She is a senior architect with the Red Hat Consulting group. So Christina, I'll let you introduce yourself, please. So as Andrew said, I'm Christina Kira Cadu. I'm a Red Hat senior architect. I've been in open source for about 15 years, since my undergrad. And I've been working with OpenShift since cartridges were a thing, way back in version 2. Yeah. Nice. Yeah, I remember that. I don't miss it. So I like technology, but the main thing for me is solving problems and helping people and teams. Technology is a means for me to do that. So if technology doesn't solve the problem, then we find solutions with a good conversation or a change of process. I like people, which is why I'm in consulting rather than anywhere else. And one of the things that I'm passionate about is bringing them together. It's, one, great news, right? You're in the right role with the right skills, and doing something that you enjoy. But two, it's something that's really important. And I think a lot of us IT folks — I would classify myself as an introvert, and that doesn't mean that I dislike people or anything like that, it just means that sometimes it's harder to get that started. And Christina, I have no idea if you're an introvert or an extrovert, but just wanting to help those processes, the people, the technology, all of those things work together is super important. And it's sort of the key, or one of the things, that we're going to talk about today during our conversation. And yeah, I used to work with a group and we used to say: technology doesn't fix broken people and processes.
You can't just throw more technology at the problem and hope that it sorts itself out. Usually it was in the days of CMDB, right? Christina, you're in the UK. You're probably familiar with the... oh gosh, what is... Johnny, what is the certification we all had? ITIL. Yeah. So I think that was a UK government thing originally, and then the US government adopted it and it went from there. All right. Sorry, go ahead. No, no, no. So as always at the beginning of our stream, we like to talk about the top of mind topics. These are things that we've identified, things that we think might be important or interesting to you all in our audience. And please don't be afraid to chime in if you've got any thoughts. So the first one of these that we want to talk about is actually a follow-up from last week. Not our stream necessarily, but the What's Next session that happened last Thursday. In case you missed it, you can find the recording of the What's Next at, quite simply, redhat.com slash next and new — all one word, no spaces. That'll redirect you to a landing page that has the recordings and the slides for all of the What's New and What's Next presentations for OpenShift, so you can go and see all of those. But we wanted to pull out a couple of things that Johnny and I both thought were particularly interesting. And Christina, you're of course welcome to weigh in as well if there's something that was interesting to you. So the first one that I want to poke at and spend just a couple of seconds talking about is CoreOS layering. And Johnny, as soon as they said this, we both sent messages to each other saying, that's really cool. Yeah. So there was a blog post — or I guess it wasn't a blog post, it was a tweet thread — that Mark Russell, who has been on the stream with us here, posted and I retweeted. So if anybody is curious, you can go and look at my Twitter, @practicalAndrew.
It's in my name there on the screen. So, talking about what Red Hat Enterprise Linux CoreOS layering is and the purpose that it serves. Effectively, if you're familiar with rpm-ostree and the way that works, we're taking that principle and putting it into a container that we can now use to add additional layers, for lack of a better term, to CoreOS. So this is really useful for all those use cases, all those times where, you know, oh, my hardware needs this special driver, right? I need to add a driver into the host, or my security team is insisting that we have this package installed on the host, and all of those kinds of things. I think a number of our partners are very interested in this as well, on how to add things and configure things inside of the host using a much more — I'm going to say flexible, but really a much lower level — mechanism, right? Because sometimes machine config, while it's powerful, can be a little bit difficult for some things. Johnny, Christina, thoughts? Yeah, I personally think that this is a great feature. Just where CoreOS may have been unapproachable to some organizations before, because you couldn't actually modify the underlying host to put whatever agent you had to have on your host, this kind of solves that problem. And I think it's a big step forward, and I think it shows that Red Hat's listening to some of the pain points that our customers have. So I think it's pretty awesome. At the same time, I think we shouldn't forget that when we use Red Hat CoreOS, we're using it for the containers that are running on top of it. And we should aim to design and run processes on top of CoreOS with that in mind. Yeah, it's funny you say that, and that's a really important point to remember, because I have seen a lot of conversation from our field folks, not since this announcement but previously, around "do we really have to support RHEL worker nodes?"
It opens up so much possibility for people to just accidentally break things. So yes, that is a very good reminder, and thank you very much, Christina: don't forget that you can break things, and you can make it more difficult than it needs to be. You're designing for cloud native workloads, so please bear that in mind. Yeah. And I should also remind everybody that CoreOS layering and the next couple of things that we'll be talking about are roadmap items. So remember that they're not shipping today, and technically there's no guarantee that they will ship at all. It is roadmap. Just keep that in mind — I feel like I always have to remind folks of that. Johnny, there's a couple other things here. What was one that you wanted to highlight? Yeah, I think the big one for me was just the overall theme of everything. If you look at What's Next from last time, they were really focused on security across the entire platform. And this time it's about consistency and security, so now it's a lot more holistic. Just from a messaging standpoint, that was the big thing that stood out to me: everything they talked about was like, okay, how are we consistently doing this across the platform? How are we integrating security? All the things that us ops people and security people have in mind — it was kind of like checking those boxes for us. And then the other thing I thought was just the ACS stuff was pretty awesome, where ACS is going to be a cloud service, or could potentially be a cloud service. And then the dynamic network policies, I thought that's going to be super awesome, because I think that's just a tough hurdle for people to get over. And I think once they have something that helps them get there, and maybe gives them a little push, then it'll be a lot easier for folks to implement.
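Purely as a sketch of the layering idea discussed above — and remember, this is roadmap, so the details could change — a derived CoreOS image might look like a very ordinary Containerfile. The base image pullspec and the package name below are placeholders, not real values; in practice the base would come from your cluster's release payload:

```dockerfile
# Hypothetical CoreOS layering sketch. The FROM line is a placeholder --
# the real pullspec would be your cluster's own RHEL CoreOS image.
FROM quay.io/example/rhel-coreos:latest

# Layer an extra package (say, an agent your security team requires) on top
# of the base with rpm-ostree, then commit the result as a container layer.
# The exact commit step may differ in the final workflow.
RUN rpm-ostree install usbguard && \
    ostree container commit
```

The expectation, per the roadmap discussion, is that an image like this would then be rolled out to nodes through the normal machine config machinery rather than by hand-editing hosts.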
I'm actually quite excited to see the integration between the — what do you call it? I don't think it's tech preview, I think it's actually been released — the security operator with ACS, because then you can see more things in that UI. And it caters to that consistency and security that you've just mentioned. It's all building blocks. Exactly. Legos. Yeah. Yeah. Interestingly, there are several efforts underway to create some solution patterns with security solutions and OpenShift. So, Johnny, I think your team is involved in one of those, and my team is involved in one of those. So there's lots of stuff that's coming out, or will be coming out in the not too distant future, on the security aspects. Keep an eye on all the various Red Hat properties, and of course here, we'll be talking about it as well. So, last one really quick before we move on, though. Yeah. What would be super awesome is that KCP thing. You brought it up last time, on the last stream. The more I look into that, the more ready for it I am — I cannot wait for that to happen. I'm super excited for that. Yeah, that one, it's one of those, I think, once people understand what it is and the purpose it fills, it's going to be a huge one. Because at first it's like, oh, this doesn't make sense, this is very confusing. And it's abstracting the underlying OpenShift and Kubernetes clusters away from the user, from the developer, the application team. So that they see a Kubernetes API endpoint that they have full control over. They can go in and they can deploy their own operators; they can do all the things that they need to do inside of there. But behind that API could be one or 10 or 50 or 5,000 different clusters, and it just transparently works for their workloads. So it's some really cool stuff that's coming down the pipeline. I'm excited to see what that turns into and when it becomes available.
We'll have to invite that team on at some point, Johnny. Yeah, that's gonna be awesome. Okay, so just a couple of additional quick things that I wanted to hit on here. Let me share my screen as soon as I find the right window. Nope, not that source. We're going to share this guy. All right, a couple of quick things. So, one: last week we quickly talked about how there was an upgrade-blocking issue with etcd in 4.8 to 4.9 upgrades. That has been resolved with 4.9.28, which means that upgrades in the stable channel are now available again. So if we come out here to the upgrade path tool and we go to stable-4.8 — I am currently on, we'll go with 4.8 dot... what is the current stable? 4.8.36. So if I say 4.8.36, we can see here that I now can go to 4.9.28. So for anybody who was waiting for that issue to be resolved, you can now update your clusters as part of the stable release channel. So, yay — I guess plan downtime, maintenance windows, whatever that happens to be, and then we can continue along those updates. That does not mean that 4.9 to 4.10 is in stable yet, however. If we come down here to stable-4.9 and I want to go to my target version, you notice that there's still no stable for 4.10. So if you're comfortable using fast, you can do those 4.9 to 4.10 updates. And remember that 4.8 to 4.10 is the first update that will have the accelerated updates associated with the new EUS policy. When you're doing that upgrade, basically it pauses the machine config pool rollouts while the control plane does the 4.8, 4.9, 4.10 upgrade. And then once the control plane is at 4.10, it unpauses the MCPs, and the nodes go directly from 4.8 to 4.10. We've got a blog post about that somewhere; I need to dig that up. The other thing that I wanted to touch on here just very quickly is this blog post. So let me post this into Twitch. Oh, I don't have Twitch open. Here we go.
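The pause/unpause flow described above looks roughly like this from the CLI. This is a sketch only — it assumes a connected cluster, that an `eus-4.10` channel exists for your cluster, and that the update graph actually offers the target releases; check `oc adm upgrade` output before each step:

```shell
# 1. Pause the worker MachineConfigPool so worker nodes reboot only once
oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'

# 2. Step the control plane through 4.9 and on to 4.10
oc adm upgrade channel eus-4.10
oc adm upgrade --to-latest=true   # run again once the 4.9 hop completes

# 3. Unpause so workers roll straight from 4.8-era to 4.10-era content
oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'
```

The whole point of the pause is that workers skip the intermediate 4.9 reboot entirely; only unpause once the control plane has settled at 4.10.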
So this blog post talks about bringing your own scheduler into OpenShift. This is something that has been asked about — I won't say frequently, but it has been asked about a number of times — with OpenShift 4. With OpenShift 4, we removed the ability to really deeply customize, and even replace, the scheduler associated with OpenShift, for a number of reasons, not the least of which is that it can negatively impact the control plane pods. And this blog post kind of walks through and talks about all of that. But the good news is, now with the secondary scheduler operator being generally available, you can go in and add your own scheduler if you so choose. The example that they use here is load-aware scheduling. If we scroll down here, they link to Trimaran — I think that's the right pronunciation; I'm terrible at all languages, so my apologies. So basically this explains how to add the secondary scheduler into the cluster so that it will apply to any of your workload pods. The reason why it's a secondary scheduler is because the primary scheduler — the normal OpenShift scheduler — still applies to control plane workloads, but the secondary one applies to your application workloads. But yeah, lots of cool stuff that's available here. If you actually go to this link here, you'll notice that you'll get dropped into the Kubernetes SIGs scheduler-plugins repo. And from here, there is a whole list of these that are available inside of here, right: capacity scheduling, co-scheduling, node resource topology. So, Trimaran — there's lots of these inside of here. If it's something that's interesting to you, check it out; maybe it can be useful for your particular workload. All right. So with that, that's all the top of mind topics that we'll hit on today. Let's see, a comment from the chat: unless you've hit the bug — the fragmented etcd one — updating is not going to happen.
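As a rough illustration of the two pieces involved — the operator's custom resource and a pod opting in by scheduler name — here's a hedged YAML sketch. The CRD fields, namespace, and image shown are from my reading of the secondary scheduler operator docs at the time of writing and may differ in your version; treat every name here as an assumption to verify against your cluster's CRD:

```yaml
# Hypothetical sketch -- verify field names against your operator version.
apiVersion: operator.openshift.io/v1
kind: SecondaryScheduler
metadata:
  name: secondary-scheduler
  namespace: openshift-secondary-scheduler-operator
spec:
  # Image of the scheduler to run, e.g. an upstream scheduler-plugins build
  schedulerImage: "registry.k8s.io/scheduler-plugins/kube-scheduler:v0.22.6"
  # ConfigMap holding a KubeSchedulerConfiguration with your plugin profile
  schedulerConfig: secondary-scheduler-config
---
# Workload pods opt in by scheduler name; control plane pods are untouched
apiVersion: v1
kind: Pod
metadata:
  name: load-aware-demo
spec:
  schedulerName: secondary-scheduler   # must match the profile name in the config
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
```

The key design point from the blog post survives intact here: anything that doesn't set `schedulerName` keeps using the default OpenShift scheduler.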
Yeah, I understand if people want to wait a little bit, you know, past the first release, for a fix to a rather significant bug that caused a midstream blockage, an update block. I get that. Yeah, the fear of updating. Yeah, it's funny, right? Because we always talk about, and we always remind folks, that the fast channel is GA, it's fully supported. There's no reason why you can't or shouldn't, in many cases, use the fast channel to apply updates, especially if it's something that is important or relevant to you. At the same time, I was an administrator for a long time, right? I get wanting to wait for stable and wanting to make sure that it has been thoroughly tested and vetted, to make sure it's not going to accidentally break something. The engineering folks — and Johnny, I think you've seen it, Christina maybe as well — wrote this whole big long internal blog post about why you shouldn't be afraid of fast and why you should be using fast. And I think internally, folks recognize that. Like I said, I recognize the value of fast, but I also very much empathize with you all, with our audience, with the administrators who say: no, I'm good, I'll wait, thank you, I'll sit this one out. Yeah. Yeah, for sure. Because you don't want to implement something that's broken and then have to try and roll it back. You don't want to be there. At the very least you want to test it, right? You want to see what's going on. And if the only place that you can test it with live workload is your production, I can see why maybe this is not something you want to try straight away. Yeah. Chris Branch asks: is there a specific CoreOS ISO to use depending on cluster version? I just recently upgraded my cluster and need to add three more nodes. Will my current ISO be compatible? So technically, yes. And Johnny, Christina, please add on any thoughts that you happen to have. So technically, yes.
What happens when you deploy a new CoreOS node is you point it at its Ignition config, and it goes to the machine config server and pulls down the full set of machine configs. One of the first things it does is download the new CoreOS version, which is in a container, and put it onto the host. So technically, it'll update itself to the current version. Generally speaking, though, the most common recommendation is to use a CoreOS version that is within the same minor release as your cluster. So if your cluster was originally deployed at, say, 4.6, and that's the ISO you've got, but you've since updated it to, say, 4.9, you would probably want to deploy your new nodes using 4.9 rather than 4.6, just to simplify things a bit. And just to key in there a little bit — we talk about it a lot — for 4.9.15, you will not have a 4.9.15 CoreOS ISO, most likely. There are only going to be one or two releases per minor release for the CoreOS image itself. So they're not married to each other, the OCP cluster version versus the CoreOS version. And if you want to find out which version you're actually using, there's always the release.txt you can look at to find: okay, is this the same as before? Have we got a new version, et cetera? Yeah. And the release.txt is available via the mirror, right? Yes, mirror.openshift.com. Go to the mirror and go to openshift-v4. And I think we have to go to dependencies for that one. Yeah. And then CoreOS, and 4.10, and 4.10.3. And am I in here? No — it's in clients. Sorry. I pasted a link. No, you're good. Let's see. OCP. I very rarely look at it, can you tell? So here's the changelog, and there's an HTML version of that. And then down here's the release.txt.
And very much to your point, it has all of the different versions, everything that's inside of here — including, if I remember the name of it... yeah. And to be clear, there won't be one with every single release, but there should be a container image inside of here that has that CoreOS release inside of it. So not every release has a CoreOS update. Yeah. I was just looking — one of the things... So it says, under the component versions, machine-os — Red Hat Enterprise Linux CoreOS — it says the version. Yeah. So you can see the last time it was updated was the fifth. So probably, since we do weekly releases, it's probably two releases ago. And in case anybody didn't know, here's the way to read these releases. So 4.10 — that's the OpenShift version, 4.10. Then 8.4, or 84, is the RHEL version that it's based off of, so 8.4 in this instance. And then this is the date that it was actually built: 2022-04-05 at 05:41. Yep. Let's see, I see chat: can you teach Ansible Tower? I can spell Ansible Tower — that's about the extent of my knowledge on it. So, Johnny, I think you have far more familiarity with it than I do. I was just going to be a smart aleck and say it's the Ansible Automation Platform now. It is now. In fact, it's the Automation Platform, version 2.1. Yeah, we had to lose the Tower, so we changed the name. So. So, yeah, I think, Christina, you have quite a bit of experience with AAP, don't you? I have some experience with AAP, yes, and a bunch of other Red Hat products. I don't teach, what do you call it, courses on it, but of the ones that I have attended, we have some very interesting content on it. Yeah, I know. Can we, the three of us, teach Ansible? No, not effectively. I don't think we have an Ansible livestream. I think there have been some instances where Ansible has been talked about on various livestreams. I want to say the Ansible folks have done at least one.
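Since the version string packs all three of those facts together — OpenShift version, RHEL base, build timestamp — here's a small, illustrative Python snippet (not an official tool; the function name and output layout are my own) that decodes an RHCOS version like `410.84.202204050541-0` into the pieces just described:

```python
import re
from datetime import datetime

def parse_rhcos_version(version: str) -> dict:
    """Decode an RHCOS version string, e.g. '410.84.202204050541-0'.

    Layout, as described on the stream: OCP major+minor digits, then the
    RHEL base version, then a YYYYMMDDHHMM build timestamp.
    """
    m = re.match(r"^(\d)(\d+)\.(\d)(\d)\.(\d{12})", version)
    if not m:
        raise ValueError(f"unrecognized RHCOS version: {version}")
    ocp_major, ocp_minor, rhel_major, rhel_minor, stamp = m.groups()
    built = datetime.strptime(stamp, "%Y%m%d%H%M")
    return {
        "ocp": f"{ocp_major}.{ocp_minor}",
        "rhel": f"{rhel_major}.{rhel_minor}",
        "built": built.strftime("%Y-%m-%d %H:%M"),
    }

print(parse_rhcos_version("410.84.202204050541-0"))
# -> {'ocp': '4.10', 'rhel': '8.4', 'built': '2022-04-05 05:41'}
```

So the example from the screen share decodes to OpenShift 4.10, on a RHEL 8.4 base, built on 2022-04-05 at 05:41 — exactly the reading Andrew walks through above.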
We've talked about it briefly for things like using Ansible Automation Platform and OpenShift together, so basically having Ansible jobs running from inside of an OpenShift cluster. But yeah, it's not my forte. And you can use it in combination with Advanced Cluster Management to call automation — orchestration, I suppose — and in fact it can be deployed on OpenShift itself. But because of the vastness of the topic, I don't think this is the right forum. There's lots of resources; we'll dig those up. I'll include some links to Ansible resources in the blog post for the stream. All right. So that's all the top of mind topics we have for today. For the audience, please don't hesitate to continue to ask questions and make comments. We love it when y'all chat — it's what we're here for, so please don't hesitate. And with that being said, let's talk about today's topic. We mentioned it, gosh, six weeks ago or so. Let me see here. Where's my window? I think I brought it up. Yeah. So here, Christina, your blog post was published just over a month ago, and I think we mentioned it when it was published, just a couple of days after that. And it triggered a conversation between us of, like, hey, this is really great information. This is stuff that we've talked about in a number of different ways, maybe not as directly. But we did bring it up during the top of mind topics that week. And it just so happened, I think you heard that we had talked about it and you reached out to me, and then we had this conversation about, hey, it would be really great if you could join us on the stream to talk about not only what's in the blog post, but all of the other stuff that you've got going on — because you are very busy, right? And as you said, you've had a number of, I don't know about roles, but certainly a number of different areas of expertise in your time at Red Hat.
So I thought it would be a phenomenal opportunity to bring you on and talk about that. Yeah. So writing that was the result of my customers asking multiple times about, you know, how do we do this? What's the best way of engaging with our developers? We have this new platform going and we want to make sure that we ask the right things. Surely Red Hat has done this before — what do you have, when it comes to resources, to make that happen? And it's not been the first time that that question has been asked. And it kind of goes to show that if you want to be lazy — in the good sense — you should write the blog post the first time it happens. What's that thing about, you know, automation? It's the same way: instead of doing a task that takes 10 minutes, you spend four days writing automation to do the task once. Yeah. And this time, I think it was the third or fourth time that this had been asked, and at that point I'd had enough, and it's like: okay, let's write it down, let's collaborate, let's find out how we should approach the subject of introducing new workloads to the platform. And my customers, the ones who've asked, have got it right. They want to make sure that they increase the utilization of the platform. They want to make sure that the product that they've been paying money for actually gets a fair bit of use from their developers, and is being used to create business value further down the line. Yeah. I've said before, and it's worth repeating, I think, that you pay for 100% of the platform. Especially, you know, in a hyperscaler — you're paying for those resources, right, those AWS AMIs or Azure instances or whatever it happens to be; you're paying for those OpenShift entitlements and all that. And if it sits at two or five or 10% utilized, that's effectively resources that you're not using effectively.
So, of course, it makes sense that we want to bring that utilization up. And I think sometimes as administrators — and I certainly had this problem a decade ago, 15 years ago, when I was a storage admin, when I was a virtualization admin, and we were just kicking off back in 2007, 2008, when virtualization was just starting to be a big thing — like, how do you talk to those application teams? How do you talk to those folks? How do you have that conversation of, one: hey, it's okay, it's safe to bring your workload here, it's safe to bring your stuff on here. But two: how do we do that smartly? How do we identify, right, and help to understand at the platform level what's about to happen? And I really like — Stephanie, thank you for posting the link into the chat there — I really like some of the questions that you have listed here. As I was reading your blog post, for probably the fifth or sixth time, just before this, I always chuckle at the core platform requirements section. Because as a storage admin, it was one of — I won't say a favorite — but it was always this hilarious thing that we did: yeah, so tell us how many gigabytes you want, and what's the latency that you need, and how many IOPS you need. Yeah, and you always get this blank, deer-in-the-headlights look from the application developers who are like: what's an IOP? So sometimes with resources, it's a little bit of a guessing game. And fortunately, modern platforms are flexible enough that we can make mistakes and it's okay.
But less about the resource requirements — I really liked the latter sections around application design, CI/CD, code and image promotion, and developer onboarding. For anybody who watched last week's stream: developer onboarding, application onboarding, was one of the things that, when we polled our field folks, they almost unanimously said is really important. So I'll pause there and I'll leave you with a question, Christina, which is: from your perspective, what does that mean? How critical is that, as somebody who is practicing this day to day, interacting with customers and bringing those applications on board? So, developer onboarding is paramount, actually. You want to make sure that you have the process from, I don't know, maybe even from costing, all the way to getting something on the platform. If you're an enterprise running OpenShift, you're not going to your system administrator and saying, I need a project on OpenShift. That's not usually how it works. It works with: I have a cost code, and I have budget, and I have three developers that want to work on containerization, on this particular business problem. How do I onboard them onto the platform? And there's quite a lot of things that you want to do. And it doesn't necessarily mean that you have all the answers. But what you should start with is automating the bits that you know the integrations for. And I see one of the comments there, from Cleanscast 74: he likes the questions in the blog, but loads of application developers can't answer many of them. Yes, that is entirely correct.
They cannot answer many of them, but that's the whole point of having those questions: to see how far along your developers are during the design phase, during their implementation phase. Have they considered any of these things, or are they things that you can help with? So if they don't have the resource consumption numbers for these things, maybe this is the point where your organization wants to consider a sandbox, to say: okay, developers can onboard here and can test, and they can see — okay, we've got these resource requirements that we want to take further down the line. And there's loads of things that you can do for that. So yes, it's not for real production workloads, but you have to start somewhere. So if you don't have anywhere for your developers to play and find out these things, then maybe you should consider that. If you do, then you have half the problem solved. And then this is why you have progressive environments, to learn a bit more, and a bit more, and a bit more, before you end up in production. Yeah. When we were chatting yesterday about the stream, we talked briefly about how having day one requirements is one thing, but day 365 is likely to be entirely different than day one. And so the set of questions is one of the tools as you go through and do that onboarding process, and that growth process, and that learning process with the application and the application team. But really, it should be an ongoing conversation as both sides, application and operations, learn more. Exactly, exactly. And you say application; I would say applications. So the first thing they're going to start with is one use case and one set of requirements. They'll say: I would like to make this an example of what I want to run further down the line in production, and I've got 15 requirements that are core and I want to satisfy them. And then on day 180, you say: okay, well, what about application number two?
And then, by the end of the year, you have improved your application — your platform — features by taking into account what your application teams require. So if I say, as application one: I want stateless applications running on OpenShift, and I want a little bit of scale up and down — that's all you're going to get. There's nothing particularly sophisticated. And I would say start with that, and then add more and more complex features onto it. So further down the line, you might want to say: I would like to add storage replication, or storage clusters attached to this, so that I can have some kind of disaster recovery across the board for applications that are not stateless. Or: I would like to use some kind of backup and recovery method for applications that require it — again, those that are stateful. So, as you said, the requirements that you have on day one are different from the requirements that you're going to have on day 365. Yeah. It's interesting to me that you mention some of those things — and I see that there's a bunch of questions; we'll get to those in just a moment, thank you, everybody, for asking — because, as a storage admin who has watched and participated in things like CSI, and seen how storage-level features like snapshots and replication have evolved — you know, we had Annette Clewett on a few weeks ago to talk about using the Ramen DR project to set up storage replication in a Kubernetes-native way — what's interesting to me is that, anecdotally, I don't see a lot of application teams, developer folks, who really utilize those features. And I think it's because they have never really had access to them. When I was a storage admin, we had features like deduplication and compression and snapshots and all of these other things.
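Christina's day-one requirement — stateless, with a little bit of scale up and down — is small enough that a plain HorizontalPodAutoscaler usually covers it. A deliberately minimal sketch, with all names made up for illustration:

```yaml
# Minimal expression of "stateless, scale up and down" for application one.
# Assumes a Deployment named app-one whose containers set CPU requests
# (the utilization target below is meaningless without them).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-one
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-one
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The progressive-requirements point then plays out naturally: replication, backup, and DR get layered onto the platform later, for the stateful applications that actually need them, without complicating this first stateless case.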
And, you know, VMware integration with the storage arrays to do offloaded snapshots and all of this, and it very rarely got used. And I think it was largely because they didn't know about it or they didn't understand it. So it's interesting that you talk about those types of features, because I think that we need to work on, again, that whole bi-directional communication. You know, Johnny, when we worked together at the same customer, right, I know you saw the same thing I did, where we have to have those conversations, and it has to be on an ongoing basis. It does. And it doesn't always work with... I don't know, what I've seen work well is having discussion forums. So having a two-hour slot a week, perhaps, as part of your team, saying, okay, one or two people are going to staff this and we're going to have an open call with developers to solve some of their problems. For the people that are running on the platform and for the people that want to onboard onto the platform, you have different discussions. You open a conversation around some of those requirements that I mentioned in that blog post, understanding the application, understanding if the architecture of the underlying platform would satisfy those requirements, and then feed those into your product owner and say, I would like to add these features, or, we have demand for these features. Hopefully the product owner would be there in those discussions as well, and then they realize, we have demand for these features, and then you'd add it into your backlog to make sure that they happen. But as you develop them, you would want to make sure that you're doing the right thing for your developers. Because, you know, a new feature might come down the line that would make your life either easier or much more difficult, depending on what's going on.
So you'd want to keep the developers as well as your product owner in the loop as to what's going on in your platform development. And I think having interactivity, so having sandboxes, having test beds for your developers for the platform, is a good way to go. No, I like this concept that you're kind of hinting around the edges of: an office hours, if you will. That sounds familiar. Good idea. Yeah. Okay, let's talk a few questions here. Let me scroll back down to where I last left off. Let's see. So, Clinkies74. One, I love that you're posting code into the chat box here. That's awesome. So for anybody on YouTube, I'm sure that got cut off because YouTube has a 200 character limit. So I'll copy that out and we can paste it into a gist or something that we'll put into the blog post. But yeah, that little snippet, I was just looking at something similar today of how to extract packages, so you can get the precise package versions that are in a specific CoreOS release. Any way to provision a PVC with local storage? And thank you, John, for responding: the hostPath provisioner would be one way that you can do that. So there is the, gosh, what's it called now? I think it's tech preview, or maybe even just upstream available. I think it's the Local Storage Operator. Well, there's the Local Storage Operator, and then there's another one that is related to what the ODF folks are doing with single-node OpenShift, where it'll provision local LVM volumes. It's the LVM operator or something like that. So there's a number of different ways. One thing that I always remind folks is that with the hostPath provisioner, it doesn't manage the underlying paths. So in your example here, /app/app, if that doesn't already exist, the hostPath provisioner won't create it.
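As a hedged sketch of the static flavor of this (all names hypothetical, and the path /mnt/app has to exist on the node already, exactly as described above), a local PersistentVolume plus a claim against it might look like:

```yaml
# Hedged sketch: statically provisioned local storage.
# /mnt/app must already exist on worker-1, created by something
# outside Kubernetes (kickstart, MachineConfig, Ansible, etc.);
# nothing here creates the path for you.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-manual
  local:
    path: /mnt/app
  nodeAffinity:                 # local PVs must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["worker-1"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-local-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-manual
  resources:
    requests:
      storage: 10Gi
```

The Local Storage Operator automates roughly this pattern, discovering devices and minting the PVs, so you don't have to hand-write them.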
So you have to have something that creates that partition and makes it available. And then the other one that you mentioned there, Johnny, the local path provisioner, I think that one will create those paths, but it's not straightforward. So yes, it's possible. We should probably do a little bit of research and organization of our thoughts around that, so that way we can give a better answer. I forgot about the LVM one. Yeah, that's the new one, the latest hotness. Yeah. Let's see. How does OpenShift manage the Istio routing concepts? So I don't know the answer to that. Unless either one of you is familiar with service mesh, that's probably an Ortwin question, which we need to get him back on for a part two. So I think that Istio has its own control plane, and it will create those routing rules inside of the service mesh in order to bring traffic to and from the various pods, according to its configuration. And it would replicate them across clusters, which is one of the reasons why we found replication across clusters to become a bit performance intensive. So. Yeah. Yeah. And Ortwin, very rightfully, right, we talked about that when he was on the stream: hey, there are resource requirements. It does add some latency. It does add CPU and RAM and network throughput requirements when you deploy a service mesh. So take that into account. Lewis: new to OpenShift, I don't know much about it, where do I start? So great question, one that we love getting. Welcome to OpenShift. We hope that you have fun here. So I always encourage folks to first and foremost try out learn.openshift.com. So let me share the window here. Oh, I posted the link, but it got stripped out. That's why I was like, where is it? I know, put that link in there. Okay. Yeah. So let's see. Where did my window go?
Stephanie, if we can share my window here, there we go. So if you just go to learn.openshift.com, I cannot type and talk at the same time. So learn.openshift.com will redirect you over to what is now developers.redhat.com. And if you scroll down here, there is a whole bunch of resources that are available. A lot of them are developer-centric. But if you get down here to this interactive lessons section, there's a bunch of stuff inside of here. So a lot of these you can go through, and, you know, literally like the OpenShift 4.9 playground, you can click the start button, and it'll take you to the screen here. I click start, and oh, there must have been one already ready. Usually it waits to provision one, but you see, just like this, I'm in a real cluster. It's a CodeReady Containers cluster, but it's a real cluster that I have access to, where I can do a lot of different things in order to learn in a completely safe, harmless environment. The other thing that I always recommend to folks is go to try.openshift.com, which also redirects now to redhat.com. But from here, you can see it has a bunch of different jump points over to get more information. That being said, there is also the DO080 learning course. Let's see what old DuckDuckGo comes up with. I am in the U.S., thank you. So this is a free course, you can see here, cost is $0, that anybody can go and register for on the Red Hat Learning site, and it'll kind of give you that high-level introduction into OpenShift and deploying applications inside of there. I would be very surprised if that is true: OpenShift Container Platform 4.1. It appears that that course description needs to be updated. But yeah, that is another way that, again, at no cost, you can get started there. And last, and I realize that I am banging this drum quite a bit, our communities. So you have, among other things, the OpenShift Commons.
So if we go to openshift.org, which now redirects over to okd.io, our upstream community, OKD, they have a great community of folks to help. You can also go to the Kubernetes Slack. So in the Kubernetes Slack, there is an OpenShift users channel that you can participate in. I think several of the folks who regularly watch the stream are active in there. So lots of different ways that you can reach out. And of course, here on the channel, you're always welcome to ask questions to the livestream hosts. I hope not Discord. Yeah, there is Discord. I will be honest and point out that I'm not very active on Discord because too many chats. I'm also not on Discord. No, not Discord. Well, I'm on Discord for gaming. Yeah. Let's see. Did I miss anything? There's another question from Jamie Gonzalez. Man, I'm sorry if I ruined that. But it's asking if we can speak to the reasons for developing on the platform versus developing locally and then running on the platform. So why should I develop in my cluster versus locally and then push? It entirely depends on what you're developing, right? Yeah. I'm thinking... so Andrew's getting out over his skis here. So there are concepts in the developer world like inner loop and outer loop. So the inner loop is being able to quickly write and test and iterate on your code using whatever resources. So a lot of times in the Red Hat portfolio, we talk about things like CodeReady Workspaces or CodeReady Containers. One runs on your laptop, one runs in the cluster, and they accelerate that inner-loop activity so that as a developer, I can fix the bug, I can add the feature, do whatever it is. And then when it's done, I commit it to Git, I do my push, and that kicks off, depending on a number of factors, the outer loop: things like the CI/CD process going and doing the builds and tests and code reviews and all that other stuff. So there are pros and cons to each approach.
I will say I am woefully unqualified to talk about them, aside from: I think whether it happens locally or on the platform, a lot depends, very much to Christina's point, on what you're developing. If your application needs multiple gigabytes or tens of gigabytes of RAM to spin up in order to do a test, that seems like a bad fit for your developer's laptop. And also, when you develop on a platform, when you develop away from your machine, then you don't have the "it works in my environment" problem, the "it works in my cluster" problem, the "it works on my CodeReady Containers infrastructure" problem, because you may have permissions locally that you wouldn't necessarily have on the actual platform. That's one of the things that I've noticed for people that come from an application background: when they move to a more secure platform, a little bit more restrictive platform, things don't quite work the same way. So I would say start developing in an environment that is similarly restrictive to production, so that when you reach a point where you are in production, you don't have to change many things. You only change the environment variables that your application depends on, rather than the permissions or the underlying infrastructure. That was the first place my mind went when I saw that question. Like, a lot of times, especially me, I'll have the libraries already installed on my laptop, right? Like if I'm running something with Ansible, I already have the k8s module installed. But if I'm trying to blast it out to a CI/CD pipeline or something like that, the container, wherever I'm trying to deploy to, may not have that, and then I'm going to hit that out in my deployment cycle, where I would never hit it locally. Or I'm giving a thumbs up the first time I got it working, and then it could break on its face pretty bad.
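One practical way to "develop in an environment as restrictive as production" is to write the workload against a restricted security profile from day one. A hedged sketch, with a hypothetical image reference, of a pod spec that should run under OpenShift's restricted defaults:

```yaml
# Hedged sketch: a pod that doesn't assume root or extra privileges,
# so behavior matches a locked-down production cluster.
# The image reference is hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: dev-test
spec:
  securityContext:
    runAsNonRoot: true            # fail fast if the image expects root
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/team/app:dev
    ports:
    - containerPort: 8080         # unprivileged port, not 80
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```

If it runs like this on a laptop cluster, it's far less likely to break when a restrictive security policy is enforced for real.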
And it's not just that, it's all the integration points. Like, on a platform you can do multiple integration tests, should you have access to them. So if you want to hit some kind of authentication backend and front end, etc., then you can say, well, I have this server I can point things to, rather than saying, I'll put a stub here and work on it later. That's a to-do block. Yeah. Not related to what you were just talking about: I just posted the link for the Kubernetes Slack, for the channel. I think that will work. I don't ever use the browser client, I always use the desktop client, so I don't know how to extract links out of that. So I think it'll work. The other thing: I posted a link to a series of blog posts from the Lyft engineering team about how they scaled their application. And it talks a lot about that inner loop, outer loop, where development happens, and how to maximize the efficiency and effectiveness of developers and developer time. So if you're familiar with Kubernetes, if you're familiar with OpenShift and the developer capabilities, I think you'll see a lot of places where, I won't say optimizations can happen, but platform integrations can happen, and where certain tasks can be helped. As much as we try and abstract things away, and the whole thing with CoreOS is that there are no snowflakes and all that other stuff, every application is a snowflake. And the way it's developed and the way the developer teams operate is unique to each one of those. But yeah, that blog post, or series of blog posts, is a great read if you happen to have the time. Oh, I'm sorry. Yeah, I was just going to say there's one more question down from Abbas about, you know, they're building their private cloud with OpenShift in the lower environment but worrying about production with a DR setup. So basically they're building low, deploying high.
And so their concern is, when they get to the production cluster, how do they set up DR in an SRM-type fashion, a VMware SRM-type fashion? So is SRM the capability that moves VMs across sites? So I'm not sure I would use that. I quite like the ability to have active-active deployments. I quite like the ability to deploy an application here and an application there, and say, well, you can hit one or the other and it'll be the same. And if one goes down, then the other one will scale to accommodate the load. And when it comes to data, then you would have, potentially, the ACM sync capability, whatever it is, with ODF that you can use. So what I want to understand a little bit more about that question is: do you have a requirement to use VMware SRM, or is it just, we must have a way of recovering our applications in the event of a disaster? Yeah, I'll add in that the SRM comparison for OpenShift DR is always an interesting one to me, because SRM is effectively taking those virtual machines and reinstating them in a new location, right? In a new VMware environment. We're not moving the VMware environment, we're moving the VMs. And if you think about it, the analogy to that in OpenShift is kind of what you just talked about: hey, I've got my application definitions, I've got my data, I need to recreate that, redeploy my applications at the destination. Well, it is, right? And certainly with something like OpenShift, where there is some rigidity in the expectations that it has of the environment. So the analogy I used the other day is, and again I'll pick on VMware because of SRM: hey, I'm backing up the operating system disk in every one of my ESX servers. Okay, disaster happens at the primary site. Do you go to the secondary site and basically restore the operating system disk to a whole slew of new servers, turn it on, and expect everything to just work the same way it did before?
Well, no, right? Because the network's going to be different. There are different DNS servers; maybe they aren't exactly the same physical servers, so the network ports are in a different order, right? It's eth0 and eth3 instead of eth0 and eth1. And all of this other stuff is going to change underneath, so there's a very high likelihood that even if it does come back, it's still going to be largely broken. Whereas SRM, again, is moving the workload, not the environment; the workload moves to a new location. So we have to approach the same thing with OpenShift. So I would say having that active-active, having potentially the ability to deploy to an environment that already exists, kind of solves some of those problems, because you're sure that your DNS is working in environment two, because you've pre-built it and you've tested it, surely. And you're sure that your authentication works, because it has to. I feel like there's a story behind that rather innocent "surely." Of course you did. Yeah, it's working, right? You made sure, right? Yeah. That's gold. It's funny. What's like the number one troubleshooting thing? Like when somebody says, oh, my internet isn't working. I just had this literally last Sunday: my son says, my internet's not working, I don't know what's going on. What did you do to the internet? Why did you break the internet? And I walked upstairs, and I can see the back of his computer, and the network cable's unplugged. Yep. He kicked it, did he? Yeah, that's exactly what happened. So, yeah. Anyways, yeah, very much to your points of active-active, or even active-passive, but somewhere where you have that environment available, where it's ready, it's been validated, is super important to that disaster recovery.
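That "move the workload, not the environment" idea is what declarative, GitOps-style redeployment buys you: if the application definition lives in Git, recovery is largely pointing the standby cluster at the same repository. A hedged sketch using an Argo CD Application, where the repo URL, path, and names are all hypothetical:

```yaml
# Hedged sketch: the same Application manifest can be applied to the
# standby cluster's Argo CD to recreate the workload there.
# Repo URL, path, and names are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: storefront
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/team/storefront-config.git
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: storefront
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The data still has to arrive separately, via storage replication or backup and restore, which is exactly the split being discussed here.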
It doesn't even have to be a full copy; you can use the scalability component that OpenShift has and say, okay, well, we're going to keep the minimum here, and once we have a bit more workload, it can scale up. But alternatively, I have customers that are running applications active-active on both clusters, and the thing that keeps them together is a GTM at the top. Yeah. Yeah. And as somebody who is currently doing revisions on the OpenShift subscription guide, a reminder that for DR purposes, we only require entitlements for quote-unquote hot DR. So if it's a warm DR environment, where you have resources deployed, maybe turned off, not running all the time, but importantly, no workload deployed to it, we don't require entitlements for that. So you can turn it on, you can apply updates, right? Keep it in sync, version-wise and such, with the primary. But until you deploy workload to it, we don't require entitlements. And should disaster occur, you can basically take the entitlements from the original and apply them to the destination. So yes, Christian, if you're listening, the subscription guide is the gift that keeps on giving. It's my quarterly curse. All right. So Christina, I know we're two minutes after the top of the hour, and it is late in the day for you. I want to be respectful of your time. So I wanted to ask you one more question for our audience. And I say one more; it'll probably lead into a couple of others. But for our audience, please go ahead and send in any questions, any comments that you happen to have. Yes, Cluster Autoscaler is our friend. Thank you, our hope nine. So any questions that you have, any comments, please go ahead and submit those, and we'll do our best to address them.
But so, one of the things that we talked about yesterday leading up to this is: how do we, as administrators, who sometimes struggle to understand how our clusters are being used by application teams, provide that business value justification and rationalization up to our own management chain? And this is, again, something that when I was a storage administrator came up every year: why do you need two petabytes more storage? What are you doing with the other 30 that you've got already? Why is it only at 60% utilization? You've got way more than two petabytes; why can't you just do this for free? Why can't you just use that? So I'd be curious about your thoughts and anything that you have on that. That's a good question. So, you know, in previous OpenShift versions, we've had reporting mechanisms that showed a little bit more about the utilization. Now, what we have is the cost management software-as-a-service offering that Red Hat provides. And I find that it's quite useful to use that to report on what you have currently. Of course, you have to do the background work of, okay, how much does my infrastructure actually cost me? What is the management cost? What is the server cost? What is the actual storage cost, etc., to run my underlying infrastructure? Once you have that set down, you can say, okay, now I have OpenShift running on this infrastructure, I would like to know what an application uses. Is it worth me running 10 clusters when each cluster is running one application each, when each can have an overhead of, I don't know, six servers running control plane and infrastructure services for the platform? So, I would say at that point, you would be able to justify your, I want to say, cluster density. So how many applications you're running on each cluster, given how much each cluster actually costs you to run?
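Christina's cluster-density argument is easy to put numbers on. As a hedged, back-of-the-envelope sketch, where every figure is hypothetical (a flat per-server cost, a fixed control-plane and infra overhead per cluster):

```python
# Hypothetical numbers, purely to illustrate the cluster-density math.
NODE_COST = 300           # assumed monthly cost per server, any currency
CONTROL_PLANE_NODES = 3   # control plane servers per cluster
INFRA_NODES = 3           # infrastructure-services servers per cluster
WORKERS_PER_APP = 2       # worker servers each application needs

def monthly_cost(apps: int, clusters: int) -> int:
    """Total monthly cost: fixed per-cluster overhead plus per-app workers."""
    overhead = clusters * (CONTROL_PLANE_NODES + INFRA_NODES) * NODE_COST
    workers = apps * WORKERS_PER_APP * NODE_COST
    return overhead + workers

# One application per cluster vs. consolidating onto two clusters:
print(monthly_cost(10, 10))   # 10 * 6 * 300 + 10 * 2 * 300 = 24000
print(monthly_cost(10, 2))    #  2 * 6 * 300 + 10 * 2 * 300 =  9600
```

The absolute figures matter less than the ratio between the two scenarios, which is the "relative cost" point that comes up next.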
I quite like that. Yeah, in my mind I map it to the same thing that we do with performance. Oh, the performance isn't what I expect it to be? Here's a list of 10 things that I can do to improve performance, and you want to check that before and after. And what you just described is effectively: hey, I can look at it and see, from a cost perspective, what's the most efficient way of running my clusters, of hosting the applications in my clusters. And I think there are two things that are important there. One, the actual cost often doesn't matter. I mean, yes, I'm sure somebody somewhere cares about how much it costs, but rather it's: it costs $100 to do it this way, but I can prove that it costs $90 to do it this other way, or $110 to do it that other way. It's the relative cost. Which brings me to my other point, and I'll say that this is a controversial one. Like you were talking about at the very beginning, identify how much it costs in infrastructure and in manpower; I don't think you actually need to know the real numbers there, so long as they are correct in their relationship to each other. Well, it depends entirely on whether you're doing cross-charging on your underlying platform, but you're absolutely right, because a large percentage of companies using OpenShift and Kubernetes-like environments don't necessarily monitor their underlying usage and cost, and you can learn quite a lot from it. So from it, you can learn that, I don't know, if you're doing your tagging right, your developers prefer using JBoss applications, or they prefer using Python applications, for whatever reason, over something that your organization is trying to promote. Or you can learn, if you do your tagging across cloud providers, that your application teams like using one cloud provider over the other, when you have a clear directive to use something else.
So you can learn quite a lot by tagging your applications and using cost management to understand a little bit more about, okay, where does your cost lie? How do you distribute cost across cloud providers? And if you do your homework right, you can do your cross-charging effectively. Or, even if you don't care about the prices, you can learn quite a lot about your utilization over the long term. Well, it's all about getting value, right? At the end of the day, it's getting value back to the customer, and that value could be in any form: it could be financial, it could be performance, it could be availability. There are all these different things that can feed into that value factor. I think that Delta Solo made a really good point, too, about how a lot of people think that on-prem is free, and that's just not the case, right? There are so many variable things that go into it: power, space and cooling, data center costs, all these things that make it completely the opposite of free. And so it really makes a good argument towards cloud versus on-prem, versus, like you said, Azure versus AWS; I prefer this over that because of the little nuanced things that devs and engineers have. Yeah. Yeah, how many of us have those spreadsheets of how much each RU in the data center costs, and how much an administrator's time costs, and how do I aggregate all of these things into, you know, the OpenShift entitlements? Yeah, it's one of those things that's never going to be right. And you end up in this pedantic conversation where it doesn't matter. So long as they're relatively close and the ratios are right, or relatively close, that's the important factor. Sorry, I interrupted you, Christina. So I was going to say, at that point where you say bringing value to customers.
I know organizations that use strict quota to charge their customers for what they might use, up to the point that they're allowed to use, versus the things that they're actually using. And that's the point where you actually bring value to your customer, saying, okay, you've used the entire cluster and we're very happy for you to pay us; and you've used only a tiny bit of this cluster, so here is the respective cost of what we're doing. Or: you're our biggest customer, and therefore we're going to cater to your feature requests now, because you're using this much and we want to make sure that you're successful. Managing and understanding where your costs are and what your utilization is has quite a lot of other implications. Yeah, I think that's something that I don't know that I ever learned while I was an active practicing administrator: understanding what your customer, the customer being the application teams, the developer teams, what they're actually doing with the platform, is super important. I was always one of those super nerdy guys who's all like, yeah, let's add this feature, this is really cool, we can do this, and then nobody actually uses it. That was a big thing when I was in consulting. We would go out and deploy these clusters, and there's this big idea like, we're going to get to Kubernetes, we're going to containerize our application. And then what happens is the barrier to entry was a little high. And if they didn't have somebody kind of hand-holding, somebody like Christina really there to help guide them, like, all right, here's how you have to do this, here's what you need to be thinking about, then they might get the Hello World application on and show, hey, we did it, right? And then it would just go flat. My friend KO, he would sit here; he talks about empty clusters all the time. That's like his main mantra, getting capability on these clusters. And that's really the thing, right?
We don't want them to buy something as awesome as OpenShift and then not ever use it because it's too hard, or they just don't know what to do, or whatever the reason might be. So yeah, it's awesome, but we just have to make sure that we do a good job as architects and consultants to get everybody up to speed so that they know how to use it. It's absolutely important to get your developers on board versus introducing a new feature that nobody uses. And I've held some customers back from it, saying, do you actually need service mesh when nothing does mutual TLS in your cluster? Or do you want this other feature here that your developers are going to use, because you're going to have 30% more utilization further down the line? Yeah. Why do you think you need this feature? That's what I always like to ask. What do you think you're going to get out of this? What exactly is the outcome that you're expecting? And a lot of times, what they think is going to happen versus the reality are two totally different things. And so yeah, it's definitely important. Yeah. And apologies to service mesh; it was just a recent example. I think it's a valid example. Service mesh is complex. It does add value, though. I like your comment there of understanding the cost of something versus the value of something. It does add value, but you want to add it because you're going to maximize, utilize, capitalize on that value, not just add something for the sake of having it there, right? "If you build it, they will come" only works in the movies. Thank you. Absolutely agree with that. Let's see. A couple of comments here. Please share your contact details on LinkedIn. I think we're all available, at least Johnny and I; just search for our names as they show up in our, what does Stephanie call those, lower thirds, the things that show up with our name and our contact information. Mine too. Yep. What about OpenShift and telco cloud workloads? In what context? Yeah.
If you can expand on that a little bit and give us a little more information, that would be helpful. OpenShift is used by telcos. We have many telco customers that are utilizing OpenShift pretty heavily, both for containerized network, you know, containerized network functions, CNFs, there we go, I'd get there eventually, as well as VNFs through tools like OpenShift Virtualization. Again, it's the use case thing. Abis asks if OpenShift can autoscale the pods when MariaDB CPU has reached a threshold limit. Yes, it sure can. That's the horizontal pod autoscaler; the vertical pod autoscaler can do it as well, depending on how you want to scale those pods. Horizontal basically says, hey, CPU utilization is at 80%, I'm going to add three more instances of the pod. The vertical autoscaler would say, CPU utilization is at 80%, I'm going to now add an extra 500 millicores. Although do be aware that with the vertical pod autoscaler, it does bounce the pods in order to do that. Yeah. Thank you. And, Johnny, you mentioned at the beginning, in the top of mind, KEDA. So KEDA will allow you to do that based off of other arbitrary metrics. So not just memory and CPU, but it can tie into things like Prometheus: oh, the web server, the database, whatever, is serving X number of clients or X number of records per second, I need to scale that up or scale that out. One thing that's really cool about KEDA, the thing that caught my eye about it, is that it can scale to zero. So it's kind of got that serverless aspect to it, where it can go and do its thing and then scale back down to zero. I think that's going to be awesome. Christina, I think, go ahead. We have a question there saying: how do we determine the split between helping the app teams and restricting their ability to do their work as platform administrators?
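To put the autoscaling answer above in concrete terms, here's a hedged sketch of a horizontal pod autoscaler that scales out when average CPU crosses 80%; the Deployment name app is hypothetical:

```yaml
# Hedged sketch: scale a hypothetical Deployment named "app"
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

KEDA's ScaledObject plays the same role for external metrics like Prometheus queries or queue depth, and can take the minimum replica count all the way to zero.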
If you're talking to your development teams, you will know that something is not working for them. So if you have those forums, you will understand when something is particularly cumbersome or they're finding ways around it. And in fact, you will be able to teach them about the things that you've implemented in the platform; you'll be able to show the things that are important to the organization, like security or compliance requirements that your organization has, how they've been implemented, and you'll be able to say, these are here for this reason. But if you have objections around this particular item, or requirements that aren't being met because of it, there are things that we can work around. And you should be able to do that if your development teams and your business applications have demonstrable business value. I love that response, especially that communication and finding out where they're having problems and what they're trying to work around themselves. Things like shadow IT. We used to hear a lot about shadow IT and how it's such a problem: the users, the apps teams, are going around and working around all of the rules that we have in place. But they're using a corporate credit card that is approved all the way up the management chain to go out and pay for these shadow IT resources. Which means that it's really only shadow IT because it's only considered that by the ops team or the infrastructure team. And it's only because they're not providing the service that the business needs. So today's theme, right? Having that open, honest conversation, being able to talk about what the problems are, how can we bring value to what you're doing, is core to that. I was going to say, there's a comment there: we're constantly getting, we can do this on AWS, why not on premises, etc.
That's a conversation you can have with them, and then say to your management chain that this is something that is a feature, a requirement, something that you want, that you feel is necessary, because you have these customers that are requiring it. So it could be a "not at the moment, maybe later down the line, because we have these other features to implement right now." It could be a "it doesn't bring business value and therefore we're not implementing it." It could be a "let's get on and do it right now, because it's really important." Yeah, I'll even add a different aspect of that, particularly with containers, I think, and Kubernetes-based applications. There has been a cloud-native shift, where some of the responsibility for things that previously happened at the infrastructure layer now happens at the application layer. And I think DR is a perfect example of that. Yes, we can make OpenShift behave exactly like Red Hat Virtualization or vSphere or something like that in how it does DR. But the complexity, and therefore the expense, associated with that is going to be substantially larger than if we just go to the app team and say, hey, if we do this, can you make the application do this? Let's meet in the middle. And then it balances out that complexity, that cost. It's a shared responsibility. Everybody has some vested interest in it. So I think it's something that gets overlooked a lot. I don't know if you see it that way, Christina. I missed part of it, sorry, my connection is not great, I think. That's okay. Basically I was just saying that we need to have conversations around what the infrastructure and the platform are capable of providing versus what the application is responsible for. Whereas previously, with things like DR, the application basically absolved itself of it. SRM, right? I put it in my VMware VM and SRM just makes sure it's at the new site.
And we can make OpenShift behave that way, but it's complex and it's expensive. So let's meet in the middle, where the application takes on some of that responsibility. And I don't think it should, because there's quite a lot of stuff that you don't want your platform to take over. Why on earth would your platform want to back up and restore your database? You should know how to do that, because it's your database. The platform shouldn't be managing data to that extent. So there are definitely use cases where you want to say your disaster recovery is absolutely a mutual understanding between developers and sysadmins. Yep, that whole DevOps thing, just open the communication lines. We love to throw around the term DevOps, right? So I'm just reading through any of the comments that we have here. It is 12:22 my time, which means that it is 5:22 for you, Christina. So I'm going to say we'll give it another three minutes for anybody who has any questions, comments, etc., to weigh in. Please don't hesitate to do that. If we don't get to your question, or you feel like we haven't answered it adequately, please don't hesitate to reach out. You can reach me on social media at Practical Andrew, on Twitter, on Reddit, all the various places. And you can also send me an email directly, andrew.sullivan at redhat.com. I don't mind anybody reaching out. Johnny loves it too. And if you send me an email, I usually include Johnny as well, just for good measure. So yeah, please don't hesitate to reach out to us if there's anything that we can do to help. And again, if we misunderstood or didn't get to answer your questions. So Chandler is very slyly reminding us that next week we are going to be hosting the OpenShift Virtualization team. So they're going to be showing us some really cool integrations between OpenShift and virtual machines via OpenShift Virtualization.
So I'm not going to spoil it, other than to say there's some really cool stuff that we can do with things like GitOps and Pipelines and lots of other stuff that we might be showing there. And on the comment on GDPR, I've dealt with it, I'm so sorry, it's caused so many headaches. And yes, it's a complex problem. And yes, it manifests when you have shadow IT. And there are a lot of gates to make things not just policy approved, but law approved. I don't think the next few minutes are going to cover it. I have an inkling of what you're talking about, but I cannot speak authoritatively about it. And I know just from, you know, we of course all have mandatory training and all that other stuff about GDPR awareness, I can only imagine how complex it is to actually be practicing underneath something like that. Delta Solo: moving away from DR and towards application HA, deploying across multiple sites, global load balancing. Yeah, very much the whole cloud-native application thing. It doesn't mean that we don't want our infrastructure to be resilient or available. It just means that we don't want to take down a whole site because one network adapter died or something like that. But it means finding that balance of how everybody can have a vested interest in order to have the best outcome. Let's see, do a quick overview of volume snapshot options in OpenShift. I'm not quite sure what you mean by that, Abbas. Volume snapshots generally, if we're talking PVCs, rely on your CSI provisioner to do those snapshots. And the specific implementation will be up to the CSI driver and the storage vendor that you happen to be using. So yeah, hope that helps. Thank you. If you have anything that you want to clarify about that, or want to ask more about it, please don't hesitate to reach out. Again, andrew.sullivan at redhat.com. We'll clarify that. Storage is near and dear to my heart, so I always like to answer those questions.
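For anyone following along on the snapshot question, here's a minimal sketch of what CSI volume snapshots look like in practice: a VolumeSnapshotClass tied to your storage vendor's CSI driver, and a VolumeSnapshot that captures a point-in-time copy of an existing PVC. The driver name and PVC name here are hypothetical placeholders; your actual driver string comes from your storage vendor.

```yaml
# VolumeSnapshotClass: references the CSI driver that will perform snapshots.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: example.csi.vendor.com   # hypothetical; use your storage vendor's CSI driver
deletionPolicy: Delete
---
# VolumeSnapshot: takes a point-in-time snapshot of an existing PVC.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: my-data-pvc
```

The snapshot can later be used as the `dataSource` of a new PVC to restore or clone the volume, but exactly what happens under the covers, and how fast it is, is up to the CSI driver and the backing storage, as mentioned above.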
And just so we're making Stephanie happy, and to key in on Delta Solo adding this as a reminder: go ahead and hit the like or subscribe button, whatever it is on YouTube, and make sure that you tune in every week, because hopefully we're going to have something interesting to talk about. You know, shameless plug for our show. Hopefully you come out and watch. All right. Well, Christina, thank you so much for joining us today. It's been a great conversation. It's been a pleasure having you. I haven't had to demo anything today, which means that there's been no chance for things to go horribly wrong. You're welcome. Thank you for having me. It's been a pleasure being here. Yeah, you're welcome back anytime. So, Johnny, as always, thank you for joining me today. Appreciate it as well, Stephanie, behind the scenes. And with that, I will hand it to you, Johnny, for the last words. Yeah, Christina, thank you. I'm glad you came on. This was really great. Next time I'll email you, and I'll put "not spam" in the subject line so it doesn't get filtered out. But no, thank you for coming out. This was awesome. And I'm looking forward to having you on again. My pleasure. See you soon.