Appropriate greetings and salutations, everyone. Welcome to the Level Up Hour. I am your host today, Scott McBrian. My co-host Randy Russell, International Man of Mystery, is out being internationally mysterious. And Jafar will hopefully be joining us next week. These are our Twitter handles where you can find us. And before I get into our topic today, which is Red Hat on Red Hat, and introduce our esteemed guests, I would first like to take a moment to talk about episode 47. You'll recall that it was quite embarrassing when my live demo, what could possibly go wrong, went completely pear-shaped. And thanks to a little bit of troubleshooting after the show, I've got some additional information to clarify what went on there. So the first thing is that when I was doing my containerized application, I said, you know what? I've done this as root, but I'm going to go ahead and do it rootless, live, without any practice or validation. And that was my undoing. So what happened originally was we were getting this weird error. And that was caused by my rootless container trying to attach to the systemd of the system and get cgroup stuff and other things going. So one of the things I attempted as a fix was pulling a multi-service image from UBI. And that actually fixed the problem, although we didn't know it. The other solution was from our audience. Jonathan Tucker had suggested that instead of making a call to init as the initial process for the container, we should directly call Apache. And it turned out that also fixed the problem, but we didn't know it. Because the other thing that was happening was firewalld. So without punching a hole through the firewall to allow port 80, not a whole lot of web application traffic is going to be getting there. So what did we learn? One, test your demos before you go live with them. Two, always check other components like firewalling to make sure that your traffic can pass through.

All right. So our esteemed guests today for Red Hat on Red Hat are Mr. Brian Ackison and Mr. Ben Pritchett from Red Hat IT. And Brian, why don't I let you introduce yourself and then we'll pass it over to Ben. So what do you do here?

So I am the infrastructure architect for Red Hat IT. I cover pretty much all aspects of infrastructure in Red Hat, including compute, network, storage, and really work across those groups to build a unified platform for our internal applications as well as customer-facing applications.

Great. And Ben?

Hey, I'm a platform architect within Red Hat IT. I focus on containers and platform as a service. And I work a lot with Brian on producing internal platforms for our developers to use to build and deploy their applications on, and ultimately host a lot of the Red Hat applications that we use both internally and externally for our customers.

And so you guys both colloquially work on a group or a project known as Red Hat on Red Hat. Can you tell us a little bit about what that is?

Sure. So Red Hat on Red Hat is a program where we run Red Hat solutions internally to meet our own IT needs. And this really consists of virtually every Red Hat product: middleware, OpenShift, RHEL, RHV, OpenStack, really the entire suite of Red Hat products, which we run internally to serve our customers. And obviously our IT shop is mostly RHEL. About 95 percent of our OS images or containers are RHEL-based. And so we just have an enormous platform for our users to actually utilize and develop on Red Hat products. And obviously OpenShift is very critical to that strategy.
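For anyone who wants to retrace the episode 47 fixes at home, here is a minimal command sketch of the two changes that got the rootless demo working. It assumes the Red Hat-provided ubi8/httpd-24 image and a host running firewalld; the image name and port are illustrative and may differ from what was used on the show.

```
# Pull an Apache httpd image built on UBI (UBI content is freely pullable, no subscription needed).
podman pull registry.access.redhat.com/ubi8/httpd-24

# Fix #1: run the web server process directly rather than an init process,
# so the rootless container never tries to talk to the host's systemd/cgroups.
# Rootless containers can't bind ports below 1024 by default, so publish 8080 here
# (or lower net.ipv4.ip_unprivileged_port_start if you really need port 80).
podman run -d --name demo -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24

# Fix #2: punch the hole in firewalld so outside traffic can reach the published port.
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload

# Lesson one from the show: test before going live.
curl -s http://localhost:8080/ | head
```

The other fix mentioned, switching to the UBI multi-service (ubi-init based) image, works for the opposite reason: that image is built specifically to run systemd inside the container.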
And I'll let Ben tell you a little bit about that.

Yeah, absolutely. So as part of Red Hat on Red Hat, we talk a lot about our internal container and OpenShift usage. And this is really useful for other IT organizations across other companies to really just see, in practice, Red Hat products as we use them day to day. How we design them, how we implement them, how we maintain them, long-term operations, and ultimately the full life cycle of how we run Red Hat products internally.

Yeah. And so I've been at Red Hat for a minute. And I think this all kind of started way back when we started the Red Hat corporate standard build for laptops. That's kind of the genesis of what eventually became running Red Hat software everywhere, which was we decided internally that we were going to use Red Hat products wherever we could, including things like workstations. We had already been using it in IT for things like file servers and web servers and other things that Red Hat was already good at. So we continue that tradition today of using Red Hat products wherever possible. So I noticed that you guys said that RHEL and OpenShift are in that product mix. Where and how do you deploy those?

For RHEL and OpenShift, what we offer within IT is several platforms around OpenShift that internal Red Hat developers can come to and build their applications on, with a preference for RHEL, mainly for the security and trust that RHEL has around it when building and deploying applications. So yeah, we offer a few platforms that we try to make as self-service as possible, while also meeting our IT security and resiliency requirements. And we've been offering production OpenShift platforms for a while. Let's see, I joined Red Hat IT back in 2015, so it was around the 2016 timeframe, which, if you think about the OpenShift release cycle, was right around the time OpenShift 3 was released. Actually, our history with OpenShift goes way back as well; when I joined IT, we were running production OpenShift 2 platforms too. So yeah, that's kind of our history and how we offer OpenShift as an internal cloud platform for, again, our Red Hat developers to come and build and deploy mainly RHEL applications. But yeah, we also interface with other types of container images.

And to answer the question of kind of where, I'd say everywhere. At this point, we obviously manage our own data centers, but we also increasingly use a lot of cloud-based resources. So we use virtually all the major cloud vendors, and we run RHEL there, and more recently OpenShift, across those environments. So really everywhere from, hey, a random file server in some office someplace, to the most cloud-native application you could think of. RHEL gives us that very nice abstraction layer that we can build consistent applications on top of.

And Ben, when you're talking about building RHEL for deployment, you're talking about container images, right? Correct. Yes. Okay. And do you guys build your own? Do you recommend that people build their own container images off of the RHEL sources, or do you start with something like the Universal Base Image? Like where's the entry point there?

Kind of depends on the use case. It depends is the answer. But for a lot of our custom applications, absolutely, we recommend they use the Red Hat registry services to effectively get the great work that's already being done maintaining those base RHEL images.
We don't want to have all of our developers start from scratch where they don't need to. And there's a lot of security updates and vulnerability fixes that are maintained in the images that are hosted at the Red Hat registry services. So we recommend developers start where they need to, and if there's great work already being done maintaining some of those base images, feel free to use that. With that, we also offer security tools that let us know that the runtimes and the build processes for the applications that our developers are building are in fact secure. And so that's how we can also kind of trust but verify how we're pulling in that great base content, whether it be UBI or other types of base images. But we also have, you know, continual scanning we do to make sure that we're keeping things as secure as we can as we increasingly move apps into the container world.

Excellent. And just to level set the audience on the concepts of containers and what we offer from Red Hat's registries a little bit. So there are basically two piles of container images that Red Hat maintains. There's the RHEL ones, which require a RHEL subscription and have access to the full repository of software that is distributed with RHEL. And so if you look in the container repositories you may see, I don't know, a MariaDB-included image. That's a RHEL image. The UBI images, and we talked about those a little bit in episode 47 last week, come in four starting points. There's micro, minimal, standard, and multi-service. And those have a subset of packages from RHEL. So they're still based on RHEL packages, but the complete RHEL distribution is not available for them. The difference between them is that, as I mentioned, for the RHEL ones you're required to have a Red Hat subscription. For the UBI ones, you do not need a Red Hat subscription. And with the UBI ones, all the packages in there are freely redistributable. So you can build your container application and then take that resulting container and distribute it wherever you want, to whomever you want, and it also does not require any kind of Red Hat subscription. And then Ben mentioned the security work and other stuff. So we rebuild all the UBI images every six weeks, period. Unless there is a critical or important security vulnerability in one of the pieces of software distributed with that image, in which case it can be rebuilt sooner than the six-week standard rebuild process. So that's what we're kind of angling towards there.

Now Ben, do you guys actually do rebases every so often on those images? Like automatically, as part of your build process?

Yeah, absolutely. So what we strive for internally within Red Hat IT is to have automated life cycle management of our applications as we build, deploy, monitor, test, and validate. Effectively, all that stuff we want a machine to do. We don't want a person to necessarily have to go in and do all those activities. And build is an important part of that too. So as developers are continually building and deploying their application, and as developers are coding, this can be many times a day, continually rebuilding that image and re-releasing it to dev environments, to ultimately move that further to the right and into our production environments. Policies we have internally, we like app teams to rebuild weekly, actually. This also just makes sure we have good grooming of the application.
We can successfully and healthily redeploy it and everything's working as intended. And so really just that practice of life cycle management is a good thing to be doing regularly. And again, we like our app teams to be doing that weekly. We can also have app teams hook into event driven updates as well. So if, say, they detect that an image has been updated, there's the potential to effectively hook that into your life cycle management, so you're automatically rebuilding based on detected vulnerability fixes that are coming down the pipeline, or anything like that.

So what does that process look like for a dev team? Do they just have an automated rebuild every week and it magically goes to production? Or do they have inspection points, unit testing? Like what's that workflow?

Ideally it's as hands off as possible, although, you know, in practice there are still some manual steps like UAT. Sometimes you have to jump in and manually test some things. But for the most part, as you're coding and you're committing that code to, say, your mainline branch, you want automation to pick that up, automatically build and redeploy your application, and do the necessary unit testing as well. Any kind of monitoring that you've attached to your dev environment, you want to make sure those are all green as you redeploy your app. And so really we initiate that from the code check-in, if you will.

Cool. And I know that you guys offer basically two methods of app, well, more than two, but for our purposes today the methods of app hosting are virtual machine hosting versus container hosting. So what are your decision criteria, or suggested things to look at, for developers to decide whether they should go VMs or containers?

I think that's something that's constantly evolving, certainly in our environment. Even up to, let's say, last year, we would just push the larger applications, 8 gigs of RAM and so on, to virtual machines, just because we didn't want to overload the OpenShift environments and we didn't want to have to provision very large VMs underneath OpenShift. However, with recent OpenShift versions, that's obviously changing a lot and lets us support much larger containers. So right now, a lot of it is developer preference, although on the infrastructure side we certainly try to push as much as we can to OpenShift. But there are still some, frankly, still some older apps that still require VMs.

Is that because they require more complex dependencies for the application layer? Like they're doing a ton of libraries plus database plus some other stuff?

Yeah, first of all, for sure, the single node applications where, say, you have one large database server and everything needs to hit that database server. We don't necessarily want to put that in OpenShift and have a single point of failure within OpenShift for that database. With the VM level hosting, we're able to more easily do a live migration of that VM across hypervisors as we take down a hypervisor, versus in the container world having to spin down that container and restart it on different nodes. So those legacy applications that can't go down, because they run some business critical resource and they just have a legacy architecture, those still very much live in the VM realm. But obviously, as newer applications come along, we want those to adopt more of a container native approach.
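To make the event-driven rebuild idea Ben describes a bit more concrete, here is a rough sketch using OpenShift build triggers from the command line. The app name, Git URL, builder image, and image stream tag are all hypothetical placeholders, and the exact stream names depend on how the build was created; this is just the shape of the workflow, not Red Hat IT's actual pipeline.

```
# All names here are placeholders (my-app, the Git URL, the builder image).

# Create a source-to-image build on top of a RHEL-based builder image.
oc new-app registry.access.redhat.com/ubi8/httpd-24~https://example.com/my-team/my-app.git \
  --name=my-app

# Image-change trigger: rebuild whenever the tracked builder image stream updates,
# for example when a security fix lands in a new image.
oc set triggers bc/my-app --from-image=httpd-24:latest

# Webhook trigger: every code push to the mainline branch also kicks off a build.
oc set triggers bc/my-app --from-github

# The weekly "grooming" rebuild can be as simple as a scheduled job running:
oc start-build my-app --follow
```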
And yesterday in our pre-show meeting, we were talking a little bit about automation, and I know we were just talking about removing people from the equation for deployments and things like that. But there's a difference between automation and orchestration. So what's the difference there and how do you utilize it?

So I usually think of orchestration as orchestrating a release: I need to pull some nodes from a load balancer, basically write logic that can systematically roll out an application without any real user facing impact. Whereas automation, let's just say I want to automate this process that goes and creates an LDAP user. So I kind of think of those very much in different realms. Obviously, there's tooling that kind of spans that gamut; things like Ansible on the VM side are very handy at doing both. Although on the container side, we have much better tooling for that. I'll let Ben speak to that.

Yeah, absolutely. So I think we start to see automation realized in OpenShift, especially recently with operators. And this is something I'm really excited about as well. How can we start to automate external and internal dependencies of an application, via a piece of automation in an operator that can, say, reach out to an external service, and when you create an OpenShift route, create an analogous synthetic monitor along with it. And that's a great example of how we can automate particular activities from our cloud platform in OpenShift. How an application takes that piece of automation and realizes it in their lifecycle management is more aligned with orchestration. So using our pipeline tooling, our continuous deployment tools as well, to hook into that piece of automation, in this example an operator driven piece of automation, and build their whole application on top of that from start to finish. That's where I would start to say it's more aligned with the orchestration piece of the puzzle.

And the premise of this show, when Langdon started it a year ago, was more like taking the people who do containers on RHEL natively, or pods on RHEL, and being like, hey, have you thought about moving to a platform that better suits your needs? And I think that automation-versus-orchestration line is kind of when you make that switch. So given enough time, you can write anything you want in shell scripts and Ansible, right? But I guess the question is, should you write stuff to monitor your deployment status and container running status in shell automation, or use something like OpenShift where that's just built into the product? And that's a conversation that I think we're still having with container-practicing folks. Do you guys have any kind of guidelines or thoughts on when that kind of switch should or could happen? When you advise people, hey, let's do it this way instead?

So if the platform offers it, certainly consume that. Obviously in Red Hat IT, we've been building a lot of custom solutions for ourselves for a very long time, and we do have the scars to prove it, trying to keep up with the various automation and configuration management tooling out there over the eons at this point. Between CFEngine, Puppet, and now Ansible, right? Those are very dramatic changes, and it takes a long time to migrate from one kind of automation platform to another.
Whereas if you can just consume what the platform offers in OpenShift, you're able to quickly adopt the great work other people have done for you and just leverage that. And then you also get an upgrade path for free when the next generation tooling comes out as part of that platform.

I would add to that that it can be useful just to get something quickly solved for your particular use case. And that's where the shell scripts, the more tactical Ansible playbooks, can be really helpful. Like, I have a problem to solve and I can do it quickly using Ansible or scripting. But once you need that to be solved across many applications or across a platform, that's where we start to see the need for those types of tools to be realized more in a Kubernetes or OpenShift centric world.

Yeah. So you guys have been at this for a while. So what is some advice that you would go back in time and give your younger self that would make your now self much happier?

That's a good question that I'll have to think about. So to this day, I'm still cleaning up Perl scripts that I wrote in 2007. So I guess my lesson learned there would be: although the solution worked, it's not the most maintainable and forward thinking automation on the planet. If I had adopted more modern languages even back then, I think we would have been in a better place to migrate that automation to something like OpenShift now, whereas now it requires an extensive effort to port. And also, to that end, obviously don't architect around people organizations. Like every other company, we have a lot of org changes that happen from time to time. And tying a specific application to a property of that organization is just a really bad idea that quickly turns into a maintenance nightmare; as organizations change and evolve and adopt new tooling, you're stuck with these older, team specific practices. I'd definitely avoid those if you can.

Yeah, I'd say one of the pieces of advice I'd give myself, and there's a lot of great work we do within Red Hat IT, goes back to the scripting versus platform orchestration discussion we just had. If there's a team or something that could be solved using a very quick tool or process, let's do that. And as we need to scale, let's figure out how that scales and how we build that into a platform concept. Sometimes when you are maintaining a platform with many, many applications that deploy on it, you start to see everything as, oh, it needs to be solved in this particular way. And I think being open to other solutions is really important. I don't know how much that aligns with what Brian just said in terms of thinking long term about the scripts and the tooling automation that you're creating. But that's something I would say to my earlier self: let's solve things quickly, and then let's figure out how to scale them over time, and know when to hop in at particular points of, say, an automation life cycle.

Yeah. Yeah, I would also add that just because you can do something doesn't mean you should do it. And we all get how smart you are, so you don't need to prove it in the solutions you build. Obviously, simple is best.

So Brian, I have to say I'm feeling a bit attacked, because I found out 17 years later that a piece of Perl that I had written in a weekend was still in production.
And I found that out because somebody was like, hey, this needs to be updated, but you were the last one to work on it. Can you update it? I was like, what? Yeah, so it's not just you guys that have the not-invented-here and the, oh, I'll just fix this one thing real quick with this thing. And it turns into a forever thing. Right. No solution is ever temporary. Right, right. And I think that, I mean, at Red Hat, there's a fair amount of not-invented-here. Like everybody wants to do their own thing and invent the thing. And I think that over the years, we've gotten a lot better at determining when to build versus when to buy or partner. Because it turns out that you don't want to maintain that piece of Perl for 17 years. Because that was four jobs ago. Now you're like a manager or director and somebody is asking you to write Perl. So, cool.

So I've seen a lot of organizations around that don't always think about maintenance or longevity of their applications. And this is tying into what we were just talking about, where it's like, I just want to get it done. I just want to do it. And I find that a lot for application development as well. It's like, I have to have this thing deployed by Thursday, and then after Thursday, I never have to think about it again. And in reality, that's not true. And in fact, I think you guys are the ones that probably see the dangers of that most acutely. So when thinking about longer term maintenance and management, because I know that you talk to application teams before they start their projects, what's some of the advice that you give them for looking to the future?

I think the first thing that comes to mind is, generally, we're talking in the onboarding process, or, like you said, as the application is being developed. I would say to app teams, think about the day when this application is turned off. And that might be really hard to think about because, like we just said, something could be running for 17 years. So how do you think about that date 17 years from now? But really, it's about the full lifecycle. Like, when this application has a particular monitor alert that goes off, who's going to handle it? Which piece of the stack do you want particular teams to be involved in when, say, there's a storage issue, network issue, application runtime issue? And then again, how is this application going to change when it ultimately gets decommissioned? Or how is it going to change over time? So thinking past that onboarding process to the long-term maintenance, and then ultimately every application gets turned off at some point, whether it be one year or 20 years. That's what I would say.

Yeah, for sure. We obviously strive to convince teams to think about day one and day two support of their application, and to decouple it wherever possible from the other layers that they need to access, so the typical microservice patterns. So that each individual service can evolve separately and really maintain its own lifecycle, without the app being tightly coupled to the middleware, tightly coupled to the database, which just becomes a very rigid and brittle thing to have to migrate later on. Because it will need to be migrated at some point, or we'll kill it. But either one of those activities will require a substantial amount of work, and how much work depends on your initial architecture.
Yeah, I remember working on a project where the vice president of that department was really frustrated, because he expected that after Thursday we would never have to work on this thing again, and he's like, but wait, we have to keep doing work on it? You're going to want new things in it and we're going to have to revisit it over and over again. And that's probably also tied into, Ben, what you were saying earlier about rebuilding and redeploying every week to get that consistent maintenance and feature updates and security things in your container image. I've seen people pitch containers as a way to not have to worry about their thing anymore. Like, oh, you've got this old RHEL 5 thing, shove it in a container, problem solved. Have you guys seen anything like that? Absolutely. And after you recoil in horror from that statement, what do you respond with?

So I will say, from a platform perspective, we always want applications to think about rearchitecture, about driving cloud native practices, as they deploy to container platforms and within a hybrid cloud. I will say there are programs and projects that sometimes mean we need to make choices about what we rearchitect at certain points in the life cycle. So there are some applications where we will say, okay, we'll do more of a lift and shift model with this application at this point. We'll simply take the bits, put it in a container, and put it on, say, this particular OpenShift deployment. But we also need to track, again, the life cycle of how we're rethinking, say, our network dependencies, or how we're breaking this out into more of a microservice model over time, given that the initial onboarding was driven more by a lift and shift practice. So sometimes we've got to pick and choose where we change an application at particular points in its life cycle. But it's important that we, like you said, consider that just because it's deployed in a container on OpenShift doesn't mean we're done. There are always improvements, breaking things down into more of a microservice architecture, that we can be thinking about.

Right. And, you know, at the end of the day, most applications use at least some third-party libraries, and app teams need to be considering what happens when there's a vulnerability in Apache Struts, for example. How do you handle that, and who gets woken up at 3 a.m. to actually fix it? And so having that management level support, and having teams own the complete application and the life cycle for it in something like OpenShift, is just extremely beneficial, because it makes it very clear that this team owns the app top to bottom, and they're the ones who need to validate when one of their dependencies changes and to verify that the application still works correctly, be it through automation and pipeline builds or manually. Having the application team both empowered and responsible to maintain their application for the entire life cycle is just a huge change that we've only seen recently. And really, as things evolve to OpenShift, to be able to say, hey, I know the infrastructure teams have always taken care of your VMs in the past, but now this is your app and you need to maintain it. So that's been a real mindset change for us.

I do want to add, we've talked a lot about app team responsibilities and that type of thing.
But part of what we also offer from a platform perspective is the tooling that should be helping with a lot of this life cycle management. So we don't expect that an app team coming on to our internal platforms figures everything out on their own. We're constantly working to build standardized tooling, life cycle management, even migration tooling. Because one of the things we've had to do a lot in Red Hat IT, as we redesign our data centers and as we move to different public clouds, is shift those apps across to the latest and greatest based on, you know, it could be cost, performance, resiliency, whatever requirement we're trying to meet with, say, a new public cloud or a new data center. And so part of what we offer from a platform is that support and help in managing those migrations, alongside the great tooling that's already built into the product. I know there are a lot of OpenShift based add-ons and tools that can be used for how do we move an application from, say, one OpenShift cluster to another, because that is something that we see a lot in practice within Red Hat IT.

And I know that in the last two and a half years or so, on the RHEL side of things, I think we made some decisions at the product layer to react to some of the things that you guys are seeing in the operations field. So with RHEL 8, we started the predictable release cycle cadence, which means every six months we have a new dot release, and every three years we have a new major. So, you know, at this point we should expect RHEL 9 coming out next year, because that's three years after RHEL 8 was released. And so if somebody is coming along and just starting up their project, you can have discussions with them now like, well, should you run it on RHEL 8? Or should you run it on RHEL 9, because it's imminently coming out in the next year? And then the other piece of that is we've now forecast the life cycle. It used to be you didn't really know when things were going to change, and now we have it documented: here's when it ends, here's when it goes into maintenance mode or the maintenance phase. So that you guys can have those better conversations with people as they get on to the platform. Right. And I don't know if you've had those yet.

Actually, that's enabled us to provide our customers, the internal app teams, with very clear OS matrix guidelines of what we internally support, when they should be thinking of getting off RHEL 7 for RHEL 8, when they need to be adopting RHEL 9. And so we take the official product life cycle that we publish, and what we expect coming down the pike, and combine our security standards with that. And the end result is we can give our customers a very predictable set of standards, where they can say, okay, well, RHEL 8 will officially be desupported by IT at this date, so I need to be on RHEL 9 by then. Obviously, with containers that makes the transition a lot easier as well; I don't have to rebuild the VM and so on. So yeah, I definitely really appreciate the work Red Hat's done to be more predictable in releases and how long they're maintained for.

Now, one of the things I'd like to see happen is, especially in those places where it's a set it and forget it type of deployment model, you at least now know when you have to not forget it anymore, right?
Because in seven or eight years, you need to think about where your next step is, which means you may have to get budget for bringing developers on to do your updates and migration to get to that next release. But anyway, that was a little bit of an aside, sorry. So since you guys are the production infrastructure team, what are things you see the development teams doing that you wish they would do differently?

I'll let Ben take that one. I may have some developers watching this. I need to think a little bit about how I answer that, maybe. I would say, when we talk about a platform that's run internally within Red Hat IT, such as what we do with OpenShift, I would say: read the documentation, understand what we're offering from a platform perspective, see how you can use the tooling that's already there, and then give the feedback where it may not meet your use cases. Because one of the things that we sometimes see when we offer a platform, and for context, of the OpenShift platforms we run internally, our main production hybrid cloud platform supports, I think, 180 different applications. And these are all, you know, business critical applications we're talking about. So an outage really affects the ability for the Red Hat business to run. But going back, sometimes we have app teams that get drawn to building their own solution, or using a tool that we haven't quite figured out how it integrates, say, with our OpenShift platform. And we can certainly take those features and requests on and say, okay, let's think about how we support this for 180 different applications. But my request would be, sometimes we need to think more critically about when we introduce a new tool. One, it can't just be for one team; we've got to think about how we provide it from a platform. And two, once you introduce a new tool that, say, has lifecycle management of your app, that now becomes critical to the runtime of your app. So you also need to think about, say, for a pipeline technology: how do you monitor that the pipeline is running successfully? Who's going to be on call when the pipeline breaks down, that type of thing. As we introduce new tools, we need to think about how we support them. And so that's why, from a platform perspective, when we have standardized, integrated tooling that we know we can support, my request would be: try those out first, and where they don't work, let's talk about the updates or the RFEs that need to be associated with those.

So would that fall into the vein of, like, partner earlier? Yes. Absolutely. Yeah, there's always going to be the 20% use case and the 20% of the app teams who can't take advantage of whatever tooling we're providing because of X, and being able to communicate that clearly, and what success looks like, is enormously helpful to us. And at the end of the day, most apps don't fall into that 20% category, although they think they do. And usually we have solutions to meet whatever need they have. So it's important for app teams to be willing to collaborate with the platform owners and infrastructure teams on what their application really needs, versus what maybe somebody assumes. We do see developers coming in and saying, I've always done it this way, so I want to keep doing it this way. And it's like, well, actually, we have better tooling for that now. So how about you look over here?
And that usually goes pretty well. So yeah. So one of the things that we see a lot, especially from the operations side: teams are really adept at standing up OpenShift, or standing up RHEL container hosts, or whatever the case may be. But then we lack onboarding to help the dev teams get onto those platforms. And so what we end up with is an infrastructure that's running without anything running on it. I know that you guys do a tremendous amount of coaching and helping onboard people and teams to your platforms. What are some things that you would suggest other folks do to help with that onboarding?

Yeah, this is something I think our team spends a lot of, probably underestimated, time on: how do you successfully drive applications to adopting container platforms? So some of the things we've done internally, we try to make it clear when you're a good fit for, say, our internal OpenShift platforms versus where you may want to go to, say, a virtualized workload, kind of like the decision-making process we were talking about earlier. Having this upfront in the conversation means we can avoid an application team ending up on one platform when they may actually be a better fit for another type of platform. Another thing we do, and we've done this for years around our internal OpenShift offerings within Red Hat IT, is we try to develop a community around the platform offering. So one of the things, we develop internal knowledge bases, internal chat rooms, where we want the great solutions that developers are building around our platform offering to be shared amongst the other tenants of the platform. And this also helps because our platform teams are limited; we don't hire a multitude of engineers and operations support to maintain this. So this means that we sometimes get developers helping to solve other developers' problems, all in the context of the community we've developed around our internal offering. So empowering your developers to be excited, be interested, and help other developers solve problems, and giving them a space in which to share their solutions, I think is really important when it comes to adoption of hybrid cloud or container platforms or any kind of new model you're moving application teams to.

Yeah, and I think also providing teams with clear patterns of, hey, your application looks like this, okay, here are the set of tools and patterns and even templates and YAML files that they can quickly adopt for how their application is architected. It goes a long way towards helping them get on the platform. There's very little they have to do other than just pick the pattern that most closely resembles their application, tweak a few variables, and they're good to go. So being able to provide that kind of standards-based approach really goes a long way, at least for the 80% use case.

So we're approaching the top of the hour. I told you I was going to throw you a curve ball. So, Brian, what was your first Red Hat distro?

So that was Red Hat Linux 4.1 or 4.2, not Red Hat Enterprise Linux, Red Hat Linux 4.2, I think it was, in 1997-ish, in the dorm room in college. And I've been hooked on Red Hat products since then.

Excellent, excellent. What about you, Ben?

I think it was Enterprise Linux 3, Red Hat Enterprise Linux 3. It would have been the same circumstance, college, 2004, which I think was either RHEL 3 or RHEL 4 at that point. I'm going to go with RHEL 3. Because I think... Was it RHEL 3? That looks very familiar from 17, 16 years ago, 17.
I would have been really impressed if you had the Red Hat Linux disk around, but I'll let you off the hook. I actually have a whole collection. I have a whole collection. You predate my collection, which starts at Red Hat Linux 6.0. Interesting. I don't have media kits for that. I used 5.2 as my first, but it was the heady mid-90s and I was like, boxes, who needs boxes? There you go.

All right. It looks like we do have a question in the chat. In simple terms, I would say Kubernetes is simpler than OpenShift, but to convince a customer to migrate to OpenShift, what points could be put up in order to convince them that OpenShift is preferable to building their own Kubernetes?

I would say, and this is from experience, although Kubernetes has come a long way since then, I think we ran production vanilla Kubernetes within Red Hat IT going back to about 2015. And a lot of the challenges we were trying to solve with a vanilla Kubernetes distribution were authentication and authorization; RBAC was a big part of it. And Ingress, so how do you get client connections coming into your applications? This was also around the same time OpenShift 3 was being released, where we started to see a lot of these things solved inherently in the product. From an operations perspective around a container platform, we were having to solve for particular things in Kubernetes, specifically around RBAC and Ingress, whereas we could spin up OpenShift and we had APIs with which to work and manage this stuff more natively with container namespaces. As I've said, Kubernetes has come a long way since, but I think what OpenShift offers is still aligned with what we saw six years ago, which is thinking more about day two operations and how we can solve those natively in the product, so there's less time spent by our platform engineers solving these things directly in Kubernetes.

I would also add that we run OpenShift everywhere, and increasingly that's running OpenShift in public cloud providers, but also running it on three node bare metal servers in the closet in some office someplace. Being able to leverage a consistent platform really everywhere, and give developers build tools and pipeline deployment tools as well as, obviously, the container runtime, is critical for that seamless experience, and for letting customers run their workloads really anywhere it makes sense. OpenShift has become our abstraction layer, and we don't want app teams running on the underlying infrastructure itself; we want them to run on OpenShift. Obviously, Kubernetes, as has been mentioned, has come a long way as well, and it depends on what Kubernetes solution you look at; it supports different kinds of footprints. But with OpenShift you really have that seamless platform across infrastructure, really anywhere you need to deploy it.

Long, long ago, someone once told me that open source is free if your time is worthless. And so products like Red Hat Enterprise Linux, OpenShift, Red Hat Virtualization, they're all based on open source projects, but Red Hat puts a ton of effort and development not only into the actual open source community for that project, but also into the packaging, maintenance, longevity, and management of that thing. And so OpenShift is not just Kubernetes; it's source-to-image, all the management infrastructure that goes along with it, all the monitoring stuff that goes along with it. There's a ton in there besides just Kubernetes.
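As one small, concrete illustration of the "solved inherently in the product" point, exposing an application outside the cluster with an OpenShift Route is a couple of CLI commands, where a hand-rolled Kubernetes setup has historically left you to stand up and wire your own ingress controller and Ingress objects. A rough sketch, with a hypothetical service name:

```
# Assumes a Service called my-app already exists in the current project.

# Create a Route for the service; the platform's router and a generated
# hostname come with OpenShift out of the box.
oc expose service/my-app

# Add edge TLS termination with one more command if you need HTTPS.
oc create route edge my-app-tls --service=my-app

# See the hostname clients will actually use.
oc get route my-app
```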
And I think this ties into some of the stuff that we were talking about earlier, right, where you could choose to build it all yourself, but then who's going to fix it when it goes belly up? And are you then tied to that thing for the rest of your life, because you're the only one that knows it, you're the only one with the skills for it?

Yeah, I would say that we are a big customer of Red Hat support and leverage them very heavily, and they've helped us through some extremely interesting and sticky issues in the past across a variety of products. So having that access to support has just been invaluable.

Absolutely. And one thing I do hear customers say is, we didn't even open a support case, so what is this for? And I would say that even if you're not opening support cases, that tells me that we're doing a good job, right? Because that means that all of the engineering work that we're putting into the product, all the QE work that we're putting in before release, that's all paying dividends, which is why you're not opening support cases. But if you do need support cases, we of course have that too.

All right, so guys, any last words of wisdom?

I'm just thinking about how I got introduced to containers, and maybe that's a piece to leave on. I was curious and I played around with tools back in the day, and I got really interested in containers. So be curious, play around with stuff, figure it out, break it apart, put it back together. Interestingly enough, that's kind of how I ended up in my role today. And so sometimes taking on those passion projects and getting really motivated and interested in a particular tool or technology can ultimately land you in the kind of job or role that you want to be in.

Yeah, very much. And you really don't learn a tool until you break it and learn how to fix it. So try breaking as many things as you possibly can, right? Not in production. Not with your BGP advertisements, but try. Try breaking things and seeing how far you can push products and solutions.

All right, so now is the point of the show where we do sweet, sweet internet points. So, Brian and Ben, you're new, so you've not seen this before. And just to recap, this is episode 48. We had Brian Ackison and Ben Pritchett on to talk about Red Hat on Red Hat, running Red Hat tools at Red Hat. All right, so sweet, sweet internet points. And I'm sure Stephanie, our show producer, will shortly paste this into the chat. So it looks like Norenda has woken back up from his slumber and is continuing to accumulate sweet, sweet internet points to extend his lead on the other folks out there. Bacon Fork Store, a respectable midfielder at 2900 points. And we'll see. Can anyone catch Norenda? That's the real question. Don't forget to like and subscribe if you enjoy the Level Up Hour programming. And we look forward to bringing you more soon. Guys, thanks a lot for joining us today. Thank you for having us. Yeah, thank you.