Let's get into the big topic of open source, something that we actually have in mind. This is so awesome. We are an open culture that is actually able to fix the processes that a developer, or, let's say, the Kubernetes ecosystem, really brings. Welcome to this week's Ask an OpenShift Administrator office hour livestream. That is a mouthful that I have to, like, concentrate on every week. Johnny, how many times have I borked that? 100% of the time. What's the Napoleon Dynamite rate? 60% of the time, it works every time? Every time. That's right. Hello, everyone. Welcome to February. Today is the unofficial holiday, or pseudo-official holiday, I guess, here in the U.S.: Groundhog Day. It is also my youngest daughter's birthday. It's a great day to turn eight, I guess. Today is something that I think Johnny has been looking forward to, at least since we put it on the schedule about a month ago. It's something that is interesting to me as I learn more and more about it. Johnny, what are we talking about today? Today, Anthony and I are going to talk about the validated patterns that our engineering group is working on. We're really excited about what we're doing. I think we've seen a lot of great output from our team, and there's a lot of excitement building around it. Hopefully you guys can pick up on it and see it, too, and find a way that you can use it. Really excited to talk about it. Yeah, I am, too, because there's... Let's just say I have questions. Johnny, you know my background, right? Administrator and all that other stuff. We focused a lot on what we used to call reference architectures. I'm excited. I'm interested to find out what all of that means from a validated pattern. I keep wanting to say platform. I think I've said validated platform probably more times than I have pattern. I don't know why. It just makes sense. I get it. I'm excited to learn what that means for you all, for our audience.
With that being said, this is one of our office hours series of livestreams. What that means here on Red Hat livestreaming is that we are here to answer your questions. Just like if you ever had a manager or a professor who had open office hours where you could come in and talk about or discuss anything that you need to discuss: we can't improve your grades, but we can certainly help answer some questions about OpenShift. Regardless of what today's topic happens to be, you're more than welcome to ask any of those questions in chat, regardless of which platform you happen to be watching us on. They all get rebroadcast and integrated, so we can see both YouTube channels, Twitch, all that other stuff here in our interface. Please don't hesitate to ask those questions. Before we get into today's topic, we will have what I'm going to call a recurring segment, which is the top-of-mind topics. Let me bring up my notes document here. The first thing, before I forget, and I want to make sure to get it out of the way: if you are not aware, from a stream perspective, next week we will have Frank Baldwin and the Performance Addon Operator on the stream. If you're interested in the Performance Addon Operator, how to maximize the performance of your nodes for things like DPDK (actually, I don't remember if DPDK is supported), definitely SR-IOV, all that other stuff, be sure to join us next week for that. We'll be talking about a myriad of other things as well. Be sure to get subscribed, and if you don't subscribe on one of the channels, I would definitely encourage you to go to red.ht/livestream. There's a Google calendar embedded in that page that you can subscribe to, and it has all of the streams that are inside of there. Next week we'll be doing the Performance Addon Operator. The week after that, the 16th, we will not be streaming this stream. Instead, during our time slot, actually starting a little bit earlier, I think it starts at 10 a.m. and goes until 11:30 a.m.
Eastern, is the What's New in OpenShift 4.10. That is the product management team's opportunity to give you all of the details of what's happening inside of 4.10. And then Johnny and I typically follow those up with another episode where we dive a little bit deeper into some of the things that we think are interesting to you. So if you have the opportunity to watch that long stream, great; maybe take some notes, suggest some things that you want to learn more about to us. If you miss that, no worries, we'll talk about it a little bit afterwards. Then you can always reach out to us, too. Yeah, the roadmap is a great way to get an idea of what's on the horizon. There's always a feature that somebody's looking for that they may not know is even being thought of right now. Go out there and look at it. I'm really happy that we opened that up to everybody to see. So it's exciting. And yeah, go find out what OpenShift is doing to make your lives easier. Yeah, although I will say that this is the What's New; the What's Next is the roadmap one. I think that one is happening in March. March or April, I don't remember. Gotcha. Yeah. I get them confused all the time. My bad. Yeah, that's right. I'm all amped up. All right, let me find my share button here. And we're going to share a window, and we're going to share this one. All right. So the next thing I wanted to highlight is the mirror registry. This is something we briefly mentioned a while ago. Effectively, what it is, and if we scroll down in the page here you can see, is a command-line tool that will go and effectively deploy a single-node instance of Quay that you can then use to mirror images.
So this is really helpful for disconnected installs or, you know, even if you're pseudo-connected, right, where you don't want to have your images pulled directly from the internet; you want to cache them locally. So super useful for that. It's a very quick and easy way to get a registry up and running inside of your infrastructure. And of course it's Quay, so it's super powerful to boot. But if we scroll all the way down here, a couple of things to note. The mirror registry can be downloaded at no additional cost from the downloads page. So it is included with OpenShift. It is supported as a part of OpenShift. Just be aware that you are not intended to use it as a replacement for the internal registry or a replacement for a standalone, full-on Quay registry instance for all of your applications and stuff. It's specifically targeted at, you know, mirroring OpenShift images for deploying clusters and stuff like that. Oh, I see Mark's on. Hey, Mark. Let's see, Microsoft, or MicroShift rather: CRC, SNO, three-node OpenShift, in order of increasing features. Yeah, don't forget about single node or remote worker nodes and all that other stuff. So MicroShift, just a bit of a plug: Johnny and I have been talking with some of the MicroShift engineering folks. We're going to have them on in March. So we'll be diving deep into what that looks like. Again, be sure to subscribe if you haven't already. And there's a lot of banging going on outside today. My dog is trying to get in the door. If it sounds like a wild animal is getting in here, it's my puppy trying to get in. I don't know what it is about this hour. All week long, every other hour of the day, it's mostly quiet, even, like, the delivery guys. Surprisingly, they'll, like, tiptoe up. I don't know that I have a package until I get the alert.
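As a rough sketch of the mirror registry workflow described above (the hostname, version, repository paths, and install directory here are illustrative placeholders, not prescriptive values; see the official disconnected-install documentation for the exact steps), deploying the registry and mirroring a release into it looks something like this:

```shell
# Deploy the single-node Quay mirror registry on the current host.
# The tarball comes from the OpenShift downloads page.
tar -xzf mirror-registry.tar.gz
./mirror-registry install \
    --quayHostname registry.example.internal \
    --quayRoot /opt/quay

# Then mirror an OpenShift release into it. Version, target repository,
# and credentials are placeholders for this sketch.
oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:4.9.19-x86_64 \
    --to=registry.example.internal:8443/ocp4/openshift4 \
    --to-release-image=registry.example.internal:8443/ocp4/openshift4:4.9.19-x86_64
```

The `oc adm release mirror` output also prints the `imageContentSources` snippet you would feed into the install config so the cluster pulls from the local registry instead of the internet.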
And, like, Wednesdays, all of a sudden it's, I'm going to take this and I'm going to slam it down and kick it around and see if I can set off the dogs. That's right. Following up from last week, and I think I've got it up over here somewhere: last week we talked about CVE-2021-4034. This is the polkit privilege escalation. I talked with a bunch of different folks, product management, engineering, et cetera, and have a follow-up to that. One of the things that came out of that, and if we scroll all the way down, I won't say it was just me; it was field folks and account teams and all of that, working with them as well. In the Red Hat security bulletin... oh, I forgot to publish this link. Let me do that real quick. Nope, got to find the right window. Here we go. So that is the mirror registry link, but we want this RHSB link. In this Red Hat security bulletin, way down at the bottom under the FAQ, they added this question: what is the impact to OpenShift? The abbreviated version of that is, essentially, the exploit is a quote-unquote local exploit. Man, I cannot talk today. I cannot say my P's. It is a local privilege escalation. What that means is that effectively, for someone to take advantage of it, they would need to have root access or the ability to log in to the host anyways. With OpenShift, that's limited to the core user. And if you can log in with the core user, well, you already have sudo, so you're root anyways. On the other side, there are some container images that are shipped by or with OpenShift that include the polkit vulnerability as well. Similarly, those container images are effectively privileged, so the only people who can deploy those are cluster administrators. Again, if they have cluster administrator, well, they already have root across all of the nodes as well. So while, yes, OpenShift and Red Hat Enterprise Linux CoreOS are vulnerable,
it is mitigated somewhat by some other circumstances. That being said, they are working on a fix. I don't know the precise version of OpenShift that it will be in, because it depends on release cycles and how they do all that stuff. So hypothetically, and I'm sure anybody who's watched the stream long-term knows this, let's say next Monday is 4.8.703. That might not get promoted into fast or stable; it might stay in candidate for other reasons. So it could be the next version, 704, that would be the one that gets promoted into stable. So it's hard to say, on this day, with this version, you will have access to fixes for that. Rather, we just have to wait for those promotions to happen. All right. Tiger: can a Raspberry Pi running MicroShift be a worker node for a normal OpenShift cluster? I don't believe so, but we'll check with the MicroShift folks when they join us. I'll collect your questions on that, Tiger, and we'll put them in the notes document that we use so that we can be sure to address them when they join us. And Tiger, I know you're a Red Hatter, so you can also send me messages internally if you'd like. Since you mentioned 4.10, curious to know what the release date is. So we don't... let me rephrase that: Andrew doesn't ever discuss specific dates, because they're always prone to change. As in, I have seen an OpenShift release slip hours, like two hours, from the originally intended shipping time. And it slipped by two weeks in that instance. So we are expecting it to ship soon, but I would say ideally in Q1, so calendar Q1. You can kind of project that out. But remember that times are always subject to change depending on whether or not they find any last-minute bugs and stuff like that. But the good news is, I think January 31 was code freeze. So we're close. Let's see, looking through some of these others. Oh, kind of a... not super technical, but interesting. I don't know if anybody else saw this: there's a Kubernetes documentary. Documentary.
Words are hard today. Part two was released earlier this week, or maybe Friday last week, which for me was a lot of fun. Not the least of which because, like, this flyover in the introduction, it says Raleigh in, like, 2014, and it actually has the old logo here on the building. And then I think the next scene, it cuts to, yeah, Red Hat Tower, and now we have the new logo. So it was kind of funny for me to see them piece together the multiple clips to form a full video. Anyways, we'll post that there. I did watch through it. Lots of really interesting stuff. A bit of my own history: I was actually at the very first CNCF meeting in New York City, when they had it at the Times Building. So it was super fun and super interesting to see the very beginning, or I guess that was just after the very beginning, of Kubernetes. And of course, Red Hat was very involved from the very beginning. Yeah, I haven't had a chance to watch it, but it's definitely on the list. I should have caught up on it this weekend, but just got busy doing stuff around the house. How dare you? I know. I felt out of work. I know. I know. I shouldn't even talk about it. We should talk to your manager. I know. Yeah, Sachin, it was good to see Clayton, yeah. So Clayton actually lives here in Raleigh, I think. At my previous employer, we did a podcast, and we actually had him on at one point. We haven't had him on here, though. Maybe we should do that, Johnny. Yeah, that'd be awesome. Last but not least, one that is kind of late-breaking news, if you will. They just did the press release for this. I don't think it's even... like, it's public, but it hasn't actually been linked from the press releases page. Let's check real quick. Yeah, so it's not even on the press releases page, but it has been released, and that is: Red Hat OpenShift Platform Plus entitlements now include OpenShift Data Foundation Essentials.
So essentially this is a, I won't say an add-on, but basically it's an addition to that entitlement, at no additional cost, where you can go in and add ODF Essentials, which entitles you to, I think it's 256 terabytes of capacity inside of there. So if you're using OpenShift Platform Plus entitlements today, you can now deploy ODF. You can take advantage of that shared storage, right, the local storage on your hosts, for all of your PVC needs, up to 256 terabytes. No, that's really good news. And if you're not familiar with Platform Plus, it comes with a lot of the big things that Red Hat wants you to use, right? ACM, I think ACS is in there now, ODF. That's really great. So I think that it simplifies the subscription purchasing and all that and helps people get up and running without worrying about, you know, licensing or whatever. Yeah. I know our, or my, team, there's several folks who are working on, like, environments and stuff like that for Red Hatters to be able to get hands-on with all of the products, all of the things in OpenShift Platform Plus. That way it'll be an easier onboarding thing. So for any Red Hatters who are watching, very soon we'll have those environments available to you. I'm just looking; I don't think that my links are coming through. I'm pasting them on to YouTube, the OpenShift YouTube channel, but I'm not seeing that. I don't see them. All right, I will put them into Twitch then. But rather than me trying to do that and talk at the same time: I'm out of top-of-mind topics. So Johnny, let's talk about some validated patterns. Yep, for sure. But real quick, I'm going to answer Sasha's question. The inclusion, it's to simplify the purchasing of subscriptions, really. It's so that you don't have to worry about buying, like, eight different SKUs for the products that you want to use. It helps give you a single SKU to order to get what you want, right? Like a single shot.
So it helps simplify that workflow. That's really most likely why they're doing it. But yeah, validated patterns. So I've got with me the man, the myth, the legend, Anthony Herr. He's a product manager for the validated patterns team. So I'll go ahead and have Anthony introduce himself and kick us off. Thanks, Johnny. My name is Anthony Herr. I'm a product manager in the office of the CTO, focused on edge and these validated patterns. Our validated patterns initiative started about a year and a half ago, and we've been working towards bringing a number of use cases into the library. I'm extremely happy to have Johnny and the team. I've been with Red Hat for about five years now, but this is a fantastic initiative to showcase real-world solutions, or at least the framework for those solutions, in a reproducible architecture. And the framework is what we're working on. So I'm happy to talk about it and happy to support you on this, Johnny, and happy to be with you, Andrew. Awesome. We're happy to have you. So Johnny, I don't want to steal your thunder, but I do have a question up front. Let's do it. Yeah. So we've used the term validated platform, right? And I think in the social media posts, I was making some pokes about, you know, that sounds an awful lot like a reference architecture. So we have reference architectures, things that we actually call reference architectures, from our partners. I'll dig up the blog post; it has a whole list of those from primarily our hardware partners. And we have portfolio architectures, right? I don't know if they're publicly accessible, but certainly Red Hatters are aware of the portfolio architecture team. And then we have validated patterns. So can you describe what the difference between those is? Like, when would I use a reference architecture versus a validated pattern, right? That type of stuff. Yep.
So a reference architecture is essentially just an architecture that you would use to make your environment, you know, production-ready. It's something where we say, hey, here are the best practices to get your solution in place in a production environment. A validated pattern takes that reference architecture, and it essentially runs it: we build the GitOps framework around that reference architecture. So we're building out the Argo applications, we're building Helm charts, and we're building all of the automation through a GitOps framework for that reference architecture. And then we're also running it through a CI pipeline. So a reference architecture is a moment... I actually got this from Anthony earlier: it's a point-in-time snap of that design, right? And the Red Hat validated pattern is the lifecycle of that reference architecture. So, you know, you get the whole GitOps framework, and then you've got a CI pipeline that's kind of like a control in that lifecycle. So when you go from 4.8 to 4.9, or 4.9.4 to 4.9.704 or whatever, right? It's running through the CI pipeline, and on the other end comes out this validated pattern that you know you can take from one environment to another. Obviously, the key here is you'd have to have your own OpenShift cluster, and then you would just take your 80% Legos, and then you'd have your 20%, right? And you'd bolt that in to our validated pattern, and that's really what you get at the end. So I think that's the biggest difference: the reference architecture is the point in time, and the validated pattern is the lifecycle of that reference architecture from, you know, now until it's dead. Got it. Got it. And so you and your team are responsible for, effectively, the continuous update and validation, revalidation, of those. Yeah.
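To make that GitOps framework concrete: the pattern wires Argo CD Application resources to Helm charts carried in the pattern's Git repository. A minimal sketch of what one of those Applications might look like follows; the repository URL, chart path, and namespaces are hypothetical placeholders, not the actual layout of any validated pattern repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: medical-diagnosis            # hypothetical application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/medical-diagnosis-pattern  # placeholder repo
    targetRevision: main
    path: charts/medical-diagnosis   # Helm chart carried in the pattern repo
    helm:
      valueFiles:
        - ../../values-global.yaml   # the pattern's shared values file
  destination:
    server: https://kubernetes.default.svc
    namespace: xraylab               # placeholder workload namespace
  syncPolicy:
    automated:
      prune: true                    # keep the cluster converged on the repo
      selfHeal: true
```

Because sync is automated, pushing a change to the chart or the values file is what drives the cluster, which is exactly the "lifecycle, not point in time" behavior described above.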
So what we do is, there's a CI aspect, or a QE aspect, to our team. Once we've got it in Git and we've said, hey, thumbs up, it runs through the QE team, and they run it through a CI pipeline. That's really the 8,000-foot view of the oversimplified process, right? We build it, deploy it, send it over to QE; QE runs it through the CI, the continuous integration, to make sure that, you know, it works. Or if it breaks, then it comes back to us, we figure out what happened, and then, you know, rinse and repeat. So I don't know the answer to this one. You know what they say about being a lawyer: never ask a question you don't know the answer to. I know you and the team are on the engineering side. Yep. And I hear you mentioning, you know, CI and QE and all that. Does that mean that these are supported? Like, if you deploy using this pattern, is it a supported pattern? I'm going to defer to Anthony on that question. Yeah. The Red Hat products themselves are supported. But as far as the pattern, there is an aspect of it that is community, right? It's a community project. So, you know, it kind of comes with those restraints, or constraints, if you want to call it that. But Anthony, I'll let you answer that better. I don't know if it's better in this case. Yeah. So, you know, that's validation, and that's what we're including in here. When you had asked earlier what's the difference between all these different offerings: a lot of the things are the same. Ultimately, they're just a reference of a solution. When we're looking at our side:
if you have a problem with OpenShift inside a validated pattern, OpenShift support will take the call. So you shouldn't worry that you're going to get issues thrown back your way when going from OpenShift 4.9 to 4.10 as you upgrade the stack, as well as all the minor components. And the things that don't always get tested are connecting from Kafka through the API chain to all the different services in these patterns. So that's what we're trying to do within this validation stage. Will it be supported? All the products inside a validated pattern right now are all Red Hat products, because we're only delivering, as Johnny said, 80% of the solution. The top 20% is going to be partners, system integrators, consultants, and the hardware is obviously not Red Hat either. So we're relying on external forces to surround the validated patterns to make them work for any customer. So all the Red Hat products, everything inside that we support, we will support, whether it's in a validated pattern or not. We're just going that extra step to make sure that it's a nicer solution for you. Yeah. And I like thinking about it that way. The reason I say that is because, you know, we see fairly frequently, and I'm sure you see them as well, emails internally where it's: I have this matrix of things that I've added to my OpenShift. How do I know that service mesh version X and, you know, OpenShift Virtualization version Y and, you know, like, all of these things all work together? And it sounds like the validated patterns team is effectively doing that, right? Making sure that everything plays nicely together, that it's going to continue to function as good as or better after the upgrade than it did before the upgrade. Right. And a good example of that, right, is when the API extensions were deprecating, right?
So we would have caught that. A really great example, actually, is last week: OpenShift GitOps. The operator went out, 1.4.0, and within, I'd say, minutes of the release, we identified quickly that there was a parameter that was removed from the CRD, right? And so our team, like three out of the five engineers on our team, caught it within like 10 minutes. And we reported that, submitted the Jira ticket and all that stuff. So I mean, I'm not saying everything's going to be that quick to turn around, but that's, you know, same day, maybe next day. We're going to catch that stuff pretty quickly and find out before the entire world finds out, you know, right in the face, that something broke and they can't explain why. And so I think that's really a lot of the value, the unspoken value, that you're going to get from a validated pattern: that continuous testing, that continuous, you know, validation that we're going to be providing. So, Orwin asked a question here, and I'm going to expand on it a little bit, and I think the answer is kind of in the flow. And that is, you know, what is that validation process? What does that look like from kind of inception to delivery? And Orwin's specific question is, you know, is performance benchmarking a part of that? Performance benchmarking: I do not think so. On the medical diagnosis one specifically, it's definitely not part of it. There might be some of it built into the industrial edge pattern, but if it's not part of the original architecture, then, you know, it's probably not going to be part of the initial pattern build-up. But the way it goes from inception to deployment, and Anthony, you know, please chime in if there's something else that I'm missing on that, but from inception to production, really, it's like: we get this reference architecture, or this POC that was delivered to a customer, or a solution.
And we take that, and then we decouple it. You know, if it's written in Ansible, or if it's written in shell, or if there's, you know, this hodgepodge of both, right? What we'll do is we'll decouple that from the original repo, and then we'll start converting it into Helm charts, right? We'll create Kubernetes manifests, and then we'll start applying through ACM, if it makes sense to apply through ACM, or we'll definitely apply through an Argo application, and really start testing it out piece by piece, right? And then automate piece by piece, and then, essentially, you know, we'll just tack on each of those Legos, right? We just keep building onto that, block by block, a workflow that goes from: okay, I've just pulled this from, you know, stable, and then here's your route, and you go and you click on the route, and it's got your data and it's got the app and everything. So it sounds like, and it may be easiest if we... I don't know if you have, like, an overview or a view of what one of the patterns looks like, but it sounds like you're not telling people, you know, here you have to deploy an OpenShift cluster that has three control plane nodes with these resources and six infrastructure nodes with these resources and then n number of compute nodes to run these workloads. Rather, it's: deploy your cluster sized according to your workload, and once the cluster is deployed, here is the GitOps repo, here's the implementation, to go through and configure everything. That's 100% correct. We're not in the business of telling you how to build your cluster, right? Like, if you need ODF, there are restrictions that come with ODF: X amount of compute. ODF is a big operator, see; you have to have a lot of CPU and a lot of RAM, obviously a lot of disks and stuff like that. So there are those aspects where, like you said, you have to be ready.
Your environment has to be ready, and then we just lay down the pipeline on top of that, and then it goes. And we do get into that a little bit. It's more of a high level from an application standpoint, but not necessarily on the OpenShift side from an architecture standpoint. But like I said, it doesn't matter; as long as you have the resources that you need to run that workload, we don't care. Got it. So, kind of to Orwin's question, circling back and answering that indirectly: the performance is whatever you configure the cluster to be, right? And, you know, you mentioned the medical imaging one, you mentioned the edge validated pattern. So I'll pick on the medical imaging one for a moment, right? I would suspect that your cluster deployment is going to look very different if you're doing one image an hour versus, you know, 100,000 images an hour. Right. So basically the patterns team doesn't address... maybe there are some basic requirements of, hey, you need this much to achieve this goal, or this many nodes of this type, or, I don't know. I'm curious to see; I know you said you had a demo. So I'm curious to see. Yeah. So what I'll do is I'll start the demo, because it's going to take a minute. Okay. My dog is barking; all right. So I don't know if you can see this. Is that any better? It's a little small, but that could be because my... yeah, that's better. Okay, cool. So I'll make it... All right. So essentially, this isn't, like, a requirement, but the way that we've done this to help kind of simplify the delivery is we created a Makefile, right? And inside the Makefile, it has everything. We have a readme that has all of the directions. But if you've done Helm or anything like that before, you know, we have a values-global, which is essentially, like, you know, your global values file.
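For a sense of what that kind of shared values file could contain, here is a sketch; every key name below is illustrative, and the actual schema used by the validated patterns framework may differ:

```yaml
# values-global.yaml -- hypothetical sketch of a pattern's shared values.
global:
  pattern: medical-diagnosis        # which pattern this repository deploys
  targetRevision: main              # branch the GitOps tooling tracks
  options:
    syncPolicy: Automatic           # let Argo CD auto-sync applications
  storageClass: ocs-storagecluster-ceph-rbd  # example ODF storage class name
```

The point of a single global values file is that the individual Helm charts in the pattern all read from it, so site-specific settings live in one place instead of being scattered across charts.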
So all we have to do is run a make install, and this pattern in particular is the medical diagnosis one. It uses a combination of ODF, Grafana; there's a bunch of operators that get deployed. And it's also, right now, a combination of Helm and Argo, and then there's a little bit of Ansible for some of the imperative commands, which we're working on completely converting into OpenShift jobs, so that it's completely declarative and we don't have any imperative things going on. So I'm just going to run this. Actually, I'm going to export, make sure I'm good to go, and then I'm going to run it. So while Johnny's typing, Anthony: how does an architecture, and implementation, I guess, right, become a validated pattern? Does that mean that a customer has adopted it? Does that mean that we're, for lack of a better term, like, inventing this as an example of how to do something? I'm curious about the kind of before, before Johnny starts work on this. How did it come to be? It's a great question. We hear this all the time. The patterns themselves, we think of things in terms of kind of scaling up. Anyone can create a pattern. You don't have to be at Red Hat to generate a pattern. This team has done this along with help from other teams within Red Hat: our portfolio architecture team, the data engineering team. They've created these initial examples from customers. So everything that we've done thus far has been based on a customer's architecture, without the secret sauce that sits on top. All of that is proprietary, and we don't really want to showcase anything that's going to put that customer in jeopardy. But what we can show is how to configure things with some level of best practices inside of the Red Hat team. These are not science projects. They're not something that's just off the shelf and, oh, we want to highlight this. Our goal was to highlight what customers have been deploying.
So when we started out with the medical diagnosis, the industrial edge, all these projects started out from a customer first. And when they start out, they're generated as a GitOps deployment. And what Johnny's going to go through is how this was created for the medical diagnosis solution. I'm afraid to ask this, because I know there are never enough humans behind the scenes to execute, but how many validated patterns are there? The follow-on to that is: I'm a customer. I did something that I think is really cool. Can I submit that as a validated pattern to your team? You can submit that as a pattern. And depending on our ability to take it in-house, then we can make that a validated pattern. So the distinction is, we have different sites, and I know that I talked to Johnny about this earlier: we have different online locations. One is our Red Hat GitOps patterns site, which is specifically around everything that we have validated. And then there's the hybrid cloud GitOps patterns site, hybrid-cloud-patterns.io, and we'll put the references in here. And on that location, anyone can put a pattern. We're also working with partners to be able to deploy our patterns and their third-party solutions at the same time, in the same GitOps workflow. So we can talk about all of this; there are lots of areas to get into. I want to make sure that we have time for Johnny's presentation. Sorry, I'm sitting on my bed. So what we're seeing here is just the initial build-up. This is the OpenShift GitOps operator deploying. What we do is we deploy an OpenShift GitOps instance, which is cluster-wide, and then we deploy a data-center-level GitOps instance, and that deploys everything at the data center level. And so what you'll see is you'll start seeing all of these operators popping up here in just a second. You'll see, like, OpenShift Data Foundation, Open Data Hub, Grafana, OpenShift Serverless, AMQ Streams; there are going to be all of these operators that start popping up.
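The operator installs appearing in the demo are themselves declarative: each one is just an OLM Subscription manifest that the GitOps framework applies. As a sketch, a Subscription for the OpenShift GitOps operator itself looks roughly like this (the channel and catalog names shown are typical values, not necessarily what a given pattern pins):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: stable                   # update channel to follow
  name: openshift-gitops-operator   # package name in the catalog
  source: redhat-operators          # catalog source shipping the operator
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic    # let OLM apply upgrades automatically
```

Because these are plain manifests in the pattern repo, "deploying a bunch of operators" is just Argo CD syncing a directory of Subscriptions, which is why they all pop up on their own after a single bootstrap command.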
And once they do, I'll kick over to Argo, and then we'll look at Argo from the application side so we can see it going through. OpenShift Data Foundation takes so long to come up because NooBaa has to come up and do some validations, and the OCS operator doesn't actually complete until everything completes. By the way, the cool part about this is we've just been sitting here shooting the breeze, and all I typed was make install. That's something to remember when we talk about simplicity and ease of use: I typed make install and we're getting all of these operators popping up automatically. All the routes are being created; all the things we need to make this application operational, it's all automated. So what this means is, when you come in and you have a cluster, and you take this framework, you take your 20% or your 25%, your last mile, you plug your nuts and bolts into this framework, and now you've got a full solution that has meaningful impact to your organization.
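To give a feel for why a single make install can drive all of this: the entry point essentially just hands your fork and branch to the GitOps bootstrap chart, and everything else is pulled from Git by Argo. This is a hypothetical sketch, not the actual validated-patterns Makefile; the target, variable, and chart path names are assumptions:

```makefile
# Hypothetical pattern entry point: one target bootstraps GitOps,
# which then reconciles every other component from the Git repo.
TARGET_REPO   ?= $(shell git remote get-url origin)
TARGET_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)

install:
	helm upgrade --install pattern-bootstrap common/install \
	  --set main.git.repoURL=$(TARGET_REPO) \
	  --set main.git.revision=$(TARGET_BRANCH)
```

The point of the design is that the Makefile stays tiny: your 20-25% lives in your fork of the repo, and the same install command works unchanged.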
And I think that's where, at least for me, I get excited; that's when I'm getting amped up thinking about this, because I was in consulting for five years, and there are so many times that we deployed OpenShift, from 3.3 and 3.4 up to 4.9 or 4.10, and people get overwhelmed; they can't find a way to meaningfully use the cluster. I think this really helps solve that problem, and it provides that toolchain people need to get over that hump, to "okay, I've got this," you know what I mean? Now they've deployed something meaningful, it shows value back to them and back to their organization, and it gives them that base layer they can build on, a legit framework that they can truly build on, and then be part of a community they can contribute back to, and everybody can build on it. It's the idea of helping everyone, but it's not just charity help; it's legitimate help that's going to bring you value. Yeah, and I was just looking, so I posted a link to hybrid-cloud-patterns.io, and there are hyphens between hybrid and cloud and between cloud and patterns. I was looking at the site, and there's basically documentation around each one of these, including a very nice architecture diagram, logical diagrams, and all of this other stuff that goes into a lot of detail. And I do see kind of what we were talking about before: there are cluster sizing suggestions for the different platforms it's been tested on, and different node counts and instance sizes, instance types rather. So yeah, I was cognizant of this website, but I had not actually looked at it in depth, and there is just a wealth of information here. I keep going to the wrong Argo; I'm getting all excited, I'm sorry, I've got myself all pumped up. Don't forget that hybrid cloud patterns is the
upstream pattern site that anyone can go to and submit queries to. There's also the redhat-validated, no, it's easy to get confused on these sites; I'll pop it into the chat as well. That's where our validation comes from. Yeah, please do, if you have the link. Oh, sorry. And so if you're familiar with Argo CD, you know this is what we're showing, right? Each one of these represents an Argo application, and you can see the repo: it's going out to GitHub, out to our repo and our branch, and it's pulling in the Helm charts and deploying them, all through Argo. So it's going to loop through and deploy all of these resources, and when it's all done, when my little script is done running in the background, we'll go hit the links for Grafana, because that's really the dashboard for the medical diagnosis. What's going on, from an architecture standpoint, is we're going to scale up a pod, and that's going to sync extra images from an S3 bucket via NooBaa, because it's S3 but really it's NooBaa backed by ODF, and push them through an AMQ pipeline. So AMQ and then Serverless and all that stuff is going to happen, and it's going to display the pictures of the X-ray, but it's also going to run some statistics and give you the risk assessment, the standard deviation and all that. So you get a bunch of information about images coming in and images being processed. This particular pattern is using X-rays to detect pneumonia, but this is just one sample; there are other ones where you'll hopefully be able to take a CT scan or X-rays of something else and detect other anomalies within the body, so
it helps take some of the human error out of X-ray visualization, or X-ray reporting, and makes some of these medical diagnoses much easier and much more accurate. And just out of curiosity, the validated platform, or pattern rather, and I really have to stop saying platform, the validated pattern: it doesn't have to be used with this specific application? It's kind of a, yes, we're showing it, and it's tested and validated using this application, but really it's all the parts and pieces that go into it, and it could be a different application doing maybe something similar using the same bits. Yeah, 100%, so the idea is that it's LEGOs, right? It's modular. Each one of these things: you have your ACM, that's one component of it; you have ODF, that's another component. The 20% in this regard would be the image generator and the image server and the demo, where everything else is a Red Hat product. So those three things, there might actually be four, I might be misremembering, but those three or four things, that's going to be that 20-25% that you're bolting on. That's our nuts and bolts that we're putting into this pattern to make it meaningful to us. But everything else is modular; they're built to be individual repos, so they're reusable, they're composable. It's all the good stuff about automation and OpenShift and GitOps kind of all wrapped into one. Yeah, Christian yet again owes us a check for all this GitOps.
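Each of the modular components described here maps onto its own Argo CD Application pointing at a repo, branch, and Helm chart path. A sketch of what one of those Applications looks like; the repo URL, chart path, and namespaces are placeholders, not the actual medical diagnosis repo:

```yaml
# One component of the pattern = one Argo CD Application.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kafka                      # illustrative component name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/medical-diagnosis  # placeholder
    targetRevision: main
    path: charts/kafka             # Helm chart pulled in for this piece
  destination:
    server: https://kubernetes.default.svc
    namespace: xraylab
  syncPolicy:
    automated:
      prune: true                  # keep the cluster in sync with Git
      selfHeal: true
```

Swapping your own nuts and bolts into the pattern amounts to pointing Applications like this at your fork, which is what makes the components composable.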
I know, man, I know, we should get free Argo shirts or something like that, Argo tattoos, I don't know. I do have an Argo shirt. So I see Jacob asks: is it possible to have two network adapters on control plane nodes, in terms of network redundancy? So yes, you can. Basically, what I would suggest in that case is to create a bond with those two network adapters. If you're doing UPI, a non-integrated install, or a bare metal install, you can either do that through kernel parameters, so kind of the Afterburn kernel parameters, to set that up, or you can boot to the live ISO and configure it using nmcli or nmtui, and then when you install CoreOS, run coreos-installer with --copy-network, excuse me, and that will apply it. IPI is going to be harder: with IPI you can't preconfigure the network for those nodes, so essentially, if you're using OpenShift SDN, on day two you could use something like the NMState operator, which is Tech Preview right now, to reconfigure that interface, to go from a single interface to a bond or something like that. If you're using OVN-Kubernetes, then you can't do that; it won't allow you to, basically, it'll break if you change that first interface. So IPI would not be ideal there. I don't think having two network adapters on the same network, both with an IP address, would work, because etcd is configured for a specific IP address on each one of the nodes, and it would take whichever one comes up first. So Johnny, I know you're watching your Argo there, I don't know if you had any thoughts on that. No, I feel like that's right; I think etcd truly will bind to the first address it gets. So yes, it's possible, it's just complex, or maybe not possible if you're doing IPI, but UPI or non-integrated, definitely. We're really only waiting for the OCS operator to essentially do its health check, but one thing I just wanted to kind of key in
on here is, I'm just going to click on whatever, but the cool part about Argo, if you're not familiar with Argo, is that it literally gives you a visual representation of the resources that are deployed in your cluster. Like this Kafka application: this is our Argo application, and it's deploying a Kafka resource, the Kafka topic, the routes, and each one of those is pointing to a service, which is pointing to a pod. There's just all this cool stuff happening, and to me, being an infrastructure guy who then tried to help deploy applications and stuff like that, this is probably the coolest thing ever to see when it just starts popping stuff out. It's just amazing, so hopefully you guys are getting excited about this too, because it's pretty awesome. I will say that GitOps is on my list of things to learn a lot more about this calendar year. Yeah, man, and not just because Christian's excitement about it is infectious; it's a legitimately powerful tool. It is, and if you like automation, I think a lot of people like automation, but I think a lot of people love visualization. There are a lot of things you don't really get from writing Ansible Core or writing a shell script; you might put in some green feedback where you're like, oh okay, or red stops or whatever, all the old-school stuff. But with Ansible Tower you can do workflows, a representation of your workflow going through, and I think that was a big hit, and with this in Argo, this is just awesome to see. It's cool to see it spider-web out to all the resources and really help people understand just how awesome this thing is. It's not just an app; it's an app with all of its buddies. So with the patterns, does it extend beyond just the deployment and connection of all these various pieces to include things like, using this X-ray imaging example, does it explain how to do HA maybe, or DR across multiple clusters, that type of stuff? No, I mean,
honestly, not yet. If that was something that was, let's say, on the roadmap, then yeah, it definitely would, but initially it's really just taking that initial repo, decoupling it, and making it a pattern, and I don't think that was built into it or even considered when it was written. I think the biggest thing from an HA or DR aspect would be that it's all blasted out to S3 and backed by NooBaa, or backed by ODF rather, so there's some of that, but it's not multi-site or anything as far as ODF is concerned. Okay, that kind of makes sense, because each application is going to be different and have different requirements. We've talked about this before here on the stream: with each one being different, there are going to be different methods of recovery; each storage provider is going to have a different method of replicating storage or protecting storage, and so on. So speaking of which, for anybody in the audience, ODF is a topic, I know we talked about it some with Annette last week, and we have an ODF topic coming up in March as well. So I'm learning how to share a screen. I told you, it is funny to me: we're on episode what, 56, and I still am like, every week, how do I share, what does this button do? Yeah, so I just wanted to share this so that way you know that I'm not cheating. The route for Grafana is here, and what I did is I just increased the replica count from zero to one, so that kicks off the pipeline we were kind of talking about. So if we look here, we can see that, oh man, this is going to be awesome. All right, so let's see. You know, it's funny, I ran this like four times back to back and it went through every time, and so naturally, as I'm doing this... So Michael's asking about looking into the OpenShift platform for the first time, thank you Michael, I hope it's everything you want it to be: is the monitoring aspect comprehensive enough
to execute an Ansible command if a criterion is met? So yes, effectively you would use Alertmanager, which is part of Prometheus, to trigger some sort of alert to go out, and if that includes going to a third party or an external system to trigger some other automation, that can absolutely be part of it. So Terence is lauding, this is awesome; he's saying it's a great resource, and I agree. Anthony, did you by chance, sort of looking at the chat here, happen to have that link for the validated patterns instead of the upstream patterns? So I included both in the chat a little bit earlier up; you might be able to see it there. Just to point out a couple of things: we're increasing the framework over time, so there are small things we're starting out with now to make this more consistent, as Terence is saying, consistent for people to deploy if you're deploying the same thing over and over again across multiple sites. But Johnny and I were talking earlier on, there are several parts of the framework that, as we move forward, will be increasing and adding value: dealing with sealed secrets, dealing with Vault, those sorts of things may not be in the framework to start, but those will add value to others who want to take that same base framework. The way we're doing this from the validated pattern side is that when we created the industrial manufacturing edge or this X-ray, the medical diagnosis solution, they're using a common framework that we started with, and we're including that in our CI so that it's being tested. But there are changes happening in that common framework, so that we can add in Vault or sealed secrets over time, and they'll be backported into the older patterns, the validated patterns, to bring them up to speed over time. We're trying to do things in a more methodical way, so that we're not constantly updating and causing disruption at customer sites if they decide to deploy these. You're kind of
flirting with the edge of something there which I really, really like, which is, I know most people don't see this, but internally, because I'm adjacent to the OpenShift product management team, we see your team driving a lot of roadmap stuff that's happening across OpenShift. To me it's really cool that you're creating these things that are effectively, to quote the name, validated: yes, you can do this, and checking and making sure, and finding all of those little errors and all of these little incompatibilities and inconsistencies that normally, unfortunately, we'd have to wait until a customer tries it to find, and now y'all are finding those way earlier in the process. That's really cool to me. We appreciate that. I mean, we are a customer in this regard, taking customers' configurations from outside, making them validated solutions inside the company, and using those to constantly test all our products working together, because we're focused on business solutions and not just individual product solutions, as much as they are incorporated. We're looking at what the business value is for a customer to deploy this: from a hospital's perspective, to reduce the number of doctors they need to review a patient's X-ray, because they can do some AI on that X-ray and see the likelihood that the patient has pneumonia or sepsis or other associated diseases. So there's some really cool stuff here. That's a really interesting way you put it, that you are effectively a customer for all intents and purposes, and very much to the core point: the product management team as a whole, they quote-unquote own OpenShift and all of the features and functions on top, but ultimately each of them is only responsible for a small piece, and even me, as a tech marketing person, I only own or am responsible for a couple of those pieces. So it's hard to get that confirmation, that validation, across you
know, the big, large swath of things happening inside of there. So yeah, I make no secret about it, internally or externally: I love the work y'all are doing, and not just y'all but also the portfolio architecture team and all that other stuff, and that's certainly not to discredit the work our partners do with reference architectures or anything like that, but it's super valuable. There was a slide, I know we were looking at presenting some things, and one of the slides we had was that we're part of a three-legged stool: the portfolio architecture team and the data services team that put together several of these examples, and our customers and our partners are all contributing to this to make all our products better. The reason this came about is that it was a project on the Edge initiative leadership team, the ILT, which is across the entire company, breaking down silos so that our products work across the different barriers that we usually just look at individually. So it's our approach, and it's why I'm so excited to be part of this. Well, that and working with rock stars like Johnny and Wes and Martin and Andrew, I mean, all the folks on this team, I can't say enough about this team. But anyway, I'm batting a thousand on demos, I mean, my luck with live demos has been outstanding. So what's going on is there's a job running in the background, essentially doing this initialization, and there might have been a little bit of a race condition, so I'm trying to reset it to get this thing up. But if you look at the output here, this is the visual representation of the pipeline that's going on. You can't really see it, which kind of sucks, but right in the middle of the screen, under the risk distribution, that would be the images being processed that are normal versus with pneumonia or whatever, and then over on the far right would be
where the X-ray itself would be presented, and then these images just show the images that are going through. So there's all of this data that just gets plugged in, and of course the one time I try to do it live and show everybody, it fails brilliantly. That's what you would get, though: this awesome tool with a bunch of data that you can use. And there is, on the site for the validated platform, or pattern, man, I really did it again, did you see me grimace, a screenshot of that with all the data filled out. Yeah, there you go, and that's what you would see. I'll just have to record one where it's actually working, because it's being silly right now. But the big thing here is that there's all the integration between all the different components, and it shows that all you have to have is OpenShift: it's going to go out and install OCS, or OpenShift Data Foundation, it's going to install AMQ Streams, it's going to configure all of those, it's going to create the Kafka resources, and then it's going to go through OpenShift Serverless and create those jobs, or those functions, so Serverless can do its thing, and at the end you're going to have this tool that is usable and provides value. So Terence highlights something here: it's a great teaching tool to show off best practices for deploying complex applications. And Anthony, you had used the phrase best practices before as well, so is that part of the deliverable? Like, if we go browse through the websites, you know, hybrid-cloud-patterns.io and that stuff, is there a "here are the best practices for integrating these components," or is that rather reflected in how the GitOps repo, or repos as the case may be, deploys those things together? Well, a couple of things to consider
though: best practices change over time. The reason we're looking at this sort of configuration as code is that it's a way to solve a specific business problem as outlined by the customer. So when we look at this, it is still a point-in-time review from our architects and the engineers and people within Red Hat, asking, is this the way I want to deploy it? If something were to change in one of our patterns and we were no longer able to use the API as defined, we may have to change things. Does that mean the best practices changed? Maybe, on how you solve this problem, because this is the only way to do it, but there are also alternatives using AMQ or Apache; there are lots of ways to solve the same thing in open source. So we don't want to take the more declarative approach that this is the only way to solve the problem; this is the way that we've documented to solve this specific business problem for a customer. Yeah, that makes sense, and I say that as, you know, I was a VMware and a storage admin for a long time, and of course both of those companies would publish best practices on here's how to connect our things together. The problem I found when I eventually went to work for that storage vendor was that a lot of people treated it as, it's not a best practice, it's a rule that you have to follow. You still need to be aware of your environment and what you're doing, and kind of make that decision: is this really a recommended practice for me? So G Kamar, a little off topic, but when is the next release of 3.11? The last one was December 8. I don't know off the top of my head; my initial response is that 3.11 is in the maintenance phase, so I would expect it will only get releases when there are things like security issues and bugs. I also want to highlight, Johnny, I know you're still bouncing between different things, that we are at the top of the hour, so I don't want to consume too much of your audience's time, nor, Anthony, I know you
have many other things to do. Johnny, I'll consume your time all day, you know that. That's right. So if you have any questions, comments, concerns, or issues, please go ahead and get those into the chat, and we'll address them as they come in. You're also welcome to reach out to me at any point in time; you can reach me via email at andrew.sullivan@redhat.com, and you can also reach out on social media. If you've seen me chatting there on Twitch, Practical Andrew is my Twitter handle as well, so you can of course reach out on there. I just saw a couple of messages come in: you know, just think of consulting, creating the framework of the operators, putting the plumbing together. Yeah, that's absolutely right. And Johnny, I know you're busy, so Johnny is J-O-N-N-Y, no H, at redhat.com. Yep, that's right, hit me up if you have any questions, or if you want to talk about anything patterns related, or even OpenShift troubleshooting or whatever. So with our last couple of minutes here, I'll again throw out a reminder and a plug: two weeks from today we will have the What's New in OpenShift 4.10 session with the product management team, so we will not be having the OpenShift admin livestream. That being said, I, along with several of the other folks here on Red Hat live streaming, will be on that stream, and we will be there to answer questions. So if you have any questions about features, functionality, that type of stuff, you can ask them there; if we don't know the answer, and even if we do know the answer, we always shuttle those over to the product management side as well, so that they get a full capture of all the things y'all are interested in, whether it's internal or external folks. So definitely, if you are interested, there's a lot of stuff going on in 4.10, ranging from, you know, MetalLB has some stuff going on, I know there's a lot happening with the scheduler and kind of other low-lying
components, right, things that I tend to interact with. OpenShift Virtualization 4.10, which is not part of the OpenShift 4.10 release, it's decoupled, but they usually follow within a few days or maybe a week, and they've got a bunch of stuff coming out in there, so 4.10 is going to be exciting. CSI providers, there are a number of CSI providers that are going to be affected in there. So yeah, I would definitely encourage anybody and everybody to at least check out one of the recaps you'll find around What's New in 4.10, because there's a bunch of stuff there. Yeah, so Johnny, any progress? No, no, it's something that I've obviously screwed up, so I need to figure it out. We'll send you a bill. Maybe you should send me a bill, I don't know. I'll relinquish my GitOps shirt. We'll have to talk to Christian, see if maybe we can get one of the Argo shirts sent to you. All right, to our audience, thank you so much for joining us today, we really appreciate it. Again, if there are any questions, anything that comes to mind after you're viewing this, I know we get a lot of views on the various YouTube and Twitch, you know, people watching not live, so feel free to reach out to us via email or social media. Anthony, so much, thank you so much, my mouth got ahead of my brain there. Thank you so much for joining us today, we really appreciate it. Like I said, this is one I was looking forward to since we put it on the schedule a month or so ago, so thank you again, it's been a pleasure. Yeah, you know, we're a friendly group here, we're pretty laid back. That's right, have a good time with it. So, you know, I'm glad it went down, because stuff does always go wrong, especially when it feels like I'm driving, but it just goes to show that there are little things you can tweak, and once we get it perfect, it'll be in this tool and it'll be perfect all the time. Yep, so Johnny, I'll give you the last word. All right, Anthony, thank
you. Lester, Martin, McKaylee, the validated patterns team, everybody, you know, Michael St. John, thank you guys for showing up and supporting the chat and answering some questions too. Those guys are rock stars; I just get to hang out with them. Glad that they were here, and glad that it mostly went well. All right, well, we will see you next week then, everyone. Have a great week, a great weekend, and stay safe out there. See ya.