Good morning, good afternoon, good evening, and welcome to another special edition of Ask an OpenShift Admin. I am Chris Short, host with the most, showrunner of the stars of this thing we call Red Hat livestreaming. I'm joined by the one and only Andrew Sullivan and my good friend John Spinks today. Andrew, what are we talking about, sir? Yes, sir. Yeah. So, quick aside here: do you know where that whole "good morning, good afternoon, good evening" thing originated? The first time I heard it was John Troyer, back with the VMware Community Podcast and all that. For me, it's just that I know we're streaming to multiple time zones, and the "good morning" part was definitely a shout-out to Robin Williams and Good Morning, Vietnam. I didn't realize John Troyer did that. Yeah, that was how he used to open; I used to listen to him all the time, way back when. Anyways, hello everyone, and welcome to the Ask an OpenShift Admin office hour. Today's topic, if you're familiar with John Spinks, is probably no surprise: Insights for OpenShift. Red Hat Insights is of course our metrics, monitoring, advisor kind of platform; I'll let John go into the details here because he knows all the right words to use. But yeah, today we want to focus on something that is relatively new to OpenShift. And Insights itself is something that I personally don't have experience with as an administrator; it launched after I came to work at Red Hat, after my role was no longer being an administrator, so I only have kind of tangential experience with it. All of that being said, my rambling aside, John, if you don't mind introducing yourself. Absolutely, thanks everybody. My name is John Spinks, and I'm a technical marketing manager for Red Hat Insights.
I've also historically handled Smart Management and Satellite in our suite of management products for RHEL. Insights is a suite of hosted services that are available on our Hybrid Cloud Console at cloud.redhat.com. They're included with your existing subscription, so if you've already got a subscription, there's nothing more to buy; you just have to activate and use them, and we'll go into a lot more detail shortly about some of the benefits and capabilities we're offering. Indeed, and a little bit of a trivia tidbit: the three of us have known each other for a long time. John, I know you and Chris have known each other for, like, decades, and John, you and I literally sat next to each other at our previous employer for several years. Yeah, John, I've known you the longest of anybody here at Red Hat, and you were actually instrumental to me coming to Red Hat. We used to pass each other in the halls, both on headsets at the same time, just pacing around the cubicles. Nice, I know that feeling. Yeah, and John and I go back; we used to work together at College Foundation of North Carolina. Yeah, if you ever have an opportunity to work for a small nonprofit, you have a chance to learn all sorts of technology, more than you potentially ever wanted to touch. I like to joke that at that time I wore almost every hat imaginable, all at the same time: I was the email admin, the actual physical space admin — making sure how many Us were available in the rack, how many PDUs — a Windows admin for a while, a SAN admin, a Tivoli admin; it goes on and on, all at the same time. Yeah, and I can't just say email admin; I was the Domino admin. That was rough. Oh yeah, fun history. It was fun, when I was setting up this show, to think back to all the years that you and I shared that cubicle space over there in RTP.
Anyway, today is one of the office-hours series of livestreams here on Red Hat livestreaming, and what that really means is that we are here for you all, for our audience. So John is here to talk about Insights and all of the great things it has to offer and the ways it can help us be better administrators, hopefully. But ultimately, whatever questions happen to come to mind, whatever issues you may be having, whatever things may be standing out in your mind, please don't hesitate to ask those at any time. We'll be happy to address them to the best of our ability. And if we can't answer them here on the stream, then we'll dig up those answers and follow up, either in the blog post — just a reminder, there is a blog post that comes out after every one of these episodes, with answers to questions, links to specific times in the video, and all that other stuff — or, if necessary, we'll follow up on a following stream. So don't ever hesitate to ask us any questions. I think it was last week or two weeks ago, Chris, somebody asked us, "can you give life-experience recommendations?" And, well, we can, but there's no guarantee those opinions are going to be any good. So yeah, don't hesitate to ask any questions whatsoever. And we already have some questions, by the way. All right. First question: do I still need the hostname cloud.openshift.com — isn't it all routed now — to enable Insights?
So it is cloud.redhat.com nowadays, and yeah, I asked Will for clarification there on whether he was having issues. But it's a recent change, and that might be part of the question: services like Insights are now available at console.redhat.com. So if you do have strict firewall rules, as in "I only allow this specific URL," you may need to do an update here, because we did make a slight shift. It used to be that everything was at cloud.redhat.com. That's now more of a landing page for the Hybrid Cloud Console; if you log in, it's going to automatically redirect you over to console.redhat.com. The only way you should really be impacted by that change is if you do have strict firewall rules in place. Your communications happen over port 443, so if you've just got that port open, you're fine; but if you do have strict URL allowances, you may need to add console.redhat.com to your list. This change does not affect the APIs. It happened just this week — late last week, the 29th. Yeah, it's very recent. It should automatically redirect. And we talked about this last week: openshift.com/blog, and really all of openshift.com, moved to cloud.redhat.com, and all the cloud.redhat.com things moved to console.redhat.com. So if you see different URLs, or if your firewalls are having issues, or your proxies need those added, please be aware of that change; I think there were several blog posts and other releases about that as well. So I see a couple of questions in here, and not all of them are technical, but that's okay. So, Parag: "I am doing the EX180 certification, how to prepare for the exam?" I feel like all of Red Hat's exams are hands-on. The interesting thing is — and to be clear, I haven't sat for that exam yet; I should get around to doing that at some point. Yeah, I should too.
But we don't limit things like documentation, right? You have access to all of the documentation in all the normal places, so really it comes down to time. If you're having to go to the documentation to look up how to do all of the things, it's not that you won't be able to find it and make it happen; it's that you're going to run out of time to do all of the tasks in there. So my recommendation is, and always has been: look at the list of things it tells you. There's the DO180 companion course; the syllabus for that will have all of the potential tasks in there, and you can map those over to the documentation. And it's just one of those "just do it" things: be hands-on, trying, testing, experimenting, breaking. Erik Jacobs on our team, who also has a livestream that'll be on this afternoon — it's so funny that the icon he uses for internal stuff is Wreck-It Ralph, because he does. He very clearly states, "yeah, I'm really good at breaking things, not on purpose." So yeah, that's really the best I can offer: the Red Hat exams are really good because they're hands-on, but that can sometimes make them difficult, because they're hands-on. Ashish, I see you asking, "I'm new to OpenShift, where to start?" So, I always, always, always encourage people to go to learn.openshift.com. Let me share my browser window here — not that one, this one. So if we go to learn.openshift.com, we have this interactive learning portal that has a huge number of different scenarios in it. Just for example, I tend to use these OpenShift playgrounds a lot when I just need a disposable cluster and don't want to have to wait for something to start up.
You can see I haven't logged in, I haven't done anything other than click on these "start scenario" buttons, and I'm immediately dropped into, effectively, a hands-on course. You all saw me browse to this, so it's available to anyone. These clusters only last for an hour or two — they don't last long — but all of these have a set of steps you can go through where you can learn different things, and you can use that as a jumping-off point. So if you have familiarity with Kubernetes already and you're trying to make the leap over to OpenShift, you saw there's a huge number of those scenarios in there. If you're not familiar with Kubernetes already, we have Kube by Example — and Chris, I don't know if you have a link for all that stuff. Oh yeah, you know I do. Johnny-on-the-spot. So the Kube by Example stuff is a great way to kind of get started along that whole path. And there is some getting-started stuff inside of here — OpenShift basics, over here — and you can see there's a number of scenarios around how to get started with various aspects of OpenShift. So this is always the first place I tell folks to go. You also have openshift.com/try, which will redirect over to redhat.com now. This is a good way to look at all of the things you have available; you can see "cost: free, free, free," right? So, different ways you can get up and running. And do remember that with a developer account you can access cloud or console.redhat.com, you can get a pull secret, you can deploy clusters. So if you saw my cluster over here, this is using a developer account; I can deploy full clusters, and they're good for — I need to follow up on that.
Because, I think, if you look in appendix one, technically OpenShift is eligible for the developer entitlements and all of that, but for some reason it's not letting me entitle it, so I need to follow up on why that is. When I talked to the product management team, there was an error in whatever back-end system that's linked to, but it's been a month or more since I talked to them about it, so I should find out what's going on there. But yeah, developer entitlements you can absolutely use with OpenShift if you want to deploy a longer-term cluster to get hands-on and experiment with. I'm trying to scroll through questions here. Will's issue with StackRox and a restricted install — that I'm not familiar with; we'd have to look into it. Yeah, he's got a pull secret issue; looks like the old URL, cloud.openshift.com, is still in his pull secret, so that's weird. "Can you join Red Hat after completing your OpenShift certification?" It definitely increases your chances. Yeah, it can't hurt, right? And it largely depends on where in Red Hat you're wanting to join. Engineering, I think, has less of a focus on those types of things; you don't need to be OpenShift certified to be an engineer, to write code. But if you're looking at a field position — a sales position or something like that — it tends to be much more visible, and they do tend to favor it; not necessarily required, but they do tend to favor it. As usual, go to redhat.com/jobs — or is it careers now? I forget. I don't remember. Yeah, it's redhat.com/jobs, and you can apply there. And we publish all of the jobs on there; we don't have any special access or anything like that. There are no hidden jobs; we publish all of them. People ask us, "is there a secret posting?" No, there's no secret. If we want to move internally, we go through the exact same process.
"Oh, jobs is deprecated, it's careers now in 1.22." Oh gosh. Thanks, Welly. So I want to quickly go through, as is tradition, the top-of-mind topics for this week. For everybody new to the stream, or just as a reminder: I try to cover some things that have happened since the last stream, or otherwise in the industry, that are important or that I feel might be important for our audience. That's also a way we can address real-time things. So the first one I wanted to talk about — this is a question that came up internally — is how much resources a node should have reserved for those kind of system-level things that are going on. In the docs we have some sizing information; if you come down to "Scalability and performance," we have recommended host practices and some other things in here. So if you look through there, there is some sizing guidance, but none of it talks about how much resources I need to set aside for the kubelet or things like that. The good news is you don't necessarily have to know that off the top of your head. You don't have to know on day one, "well, I know I need to set aside one CPU's worth of resources," and the reason for that is that the cluster can automatically allocate those resources for you. To me this is interesting because I can turn on this autoSizingReserved parameter, let it run for a little while, and see what it sets them to; and then, if I want, I can take positive control over it at that point and say, "no, I want you to always reserve this much." Not everybody runs their cluster nodes as close to full as possible. To be clear, that probably makes sense at a cloud provider — I want to deploy as few instances as possible and use the ones I have as much as possible, because cost savings — but on-prem, maybe you don't load them as much. So, I'll post this link into the chat here.
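For anyone who wants to see what that looks like, the auto-sizing setting just described is applied as a KubeletConfig. This is a sketch based on the docs; the object name is hypothetical, so double-check the documentation for your version before applying it:

```yaml
# Sketch: let OpenShift size system-reserved resources automatically.
# The name "dynamic-node" is just an example; apply with: oc apply -f <file>.yaml
# Note: applying a KubeletConfig may trigger a rolling restart/reboot of the pool's nodes.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: dynamic-node
spec:
  autoSizingReserved: true
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
```

Once it has run for a while, you can inspect what it chose (oc debug node/&lt;name&gt;, then ps -ef | grep kubelet and look for the system-reserved flag) and, if you prefer, pin those values explicitly instead.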
Thank you. So, the documentation here goes through how to turn that on. It's a KubeletConfig that gets applied, so it may result in the nodes being rebooted when you do it — I don't remember if KubeletConfigs just restart the kubelet or if they reboot the nodes — just be aware of that. That will result in it turning on, and you can then view and monitor it by simply doing what it tells you here: debug into the node and then do a ps -ef | grep kubelet, and it'll tell you what that value is set to. I think it will also show up if you describe the node: oc describe node. I can actually show this. Come on. Yeah, that opens the thing, yes. Anyways, if you do a describe on the node, there will be a resources blurb, and it'll have capacity and allocatable — total and allocatable, something like that. So if you subtract the allocatable from the total, it will tell you how much is reserved. Yeah. We'll revisit that. So the second thing I wanted to talk about today is cluster updates. I got asked, in the context of DevOps and programmatically doing things like cluster updates: how can I use either the CLI or YAML — which is not documented — in order to update the cluster? So in the docs here we have "Updating a cluster within a minor version." We'll post this link into the chat here. If we come all the way to the bottom, we've got step four, how to apply an update: oc adm upgrade — and yes, I said "Adam" and not "admin," and I will switch between all of these in order to equally offend everyone. So, you can see the cluster version operator. This polls, among other things, this channel ID, and it tells you — I think it's down here a little further, or maybe up a little higher — how to update that update channel. You can change that update channel by simply doing a merge patch against this particular object.
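As a concrete sketch of that merge patch — the channel names here are only examples; use whichever channel matches your target version:

```yaml
# Change the update channel with a merge patch, for example:
#   oc patch clusterversion/version --type merge -p '{"spec":{"channel":"stable-4.8"}}'
# which edits the spec of the ClusterVersion object named "version":
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.8   # e.g. moving from stable-4.7
```

After the channel change, oc adm upgrade lists the available versions, and oc adm upgrade --to-latest=true (or --to=&lt;version&gt;) starts the update.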
I thought it was in the docs here, but I know for a fact it's in this KCS. Nice. So we will post that KCS, and if we scroll down through all of this information up here, we see we have right here this oc patch clusterversion/version, and I can change whatever channel I happen to be using. So through this mechanism, entirely through the CLI, I can say I want to go from stable-4.7 to stable-4.8, and then the next command would be oc adm upgrade, either to latest or to a specific version. So all of that is documented inside of here. You can also do this via YAML. So hopefully this thing has pulled by now — oc get clusterversion — I cannot type and talk. If I pull this object, we can see up here, this is our spec. I can edit this object, I can apply it — oc apply, whatever it is that I so choose — and essentially set both a channel as well as a desiredUpdate version in this spec object. So if I wanted to update to a different version — and I need to dig it up; I thought I had it; where is it? Here we go. Minimize, minimize. We'll just bring this up to look at the YAML. You can see here desiredUpdate and then the specific version that I want to use. So you can apply this YAML object to the cluster, either literally through an oc apply or through the GitOps paradigm, and this is how you do cluster updates via that mechanism. So, just FYI. The last one we'll talk about today is machine set deletion priority. So if we come out here: when you are using IPI or UPI and a machine set, you scale up the machine set, and now I have five, 10, 15 nodes, however many I happen to have. And okay, I used them; my capacity now needs to be reduced; my application has died down; I want to delete some of those machines. By default, when you scale that machine set down, it will randomly select machines to be removed. Right.
It doesn't have any preference or priority; it basically randomly selects them. You can change that policy — and you can see here — to be Newest or Oldest as well. So you can say delete the oldest machines in the cluster first, or delete the newest first. You can also — and this was new to me — give a specific machine a priority. Say I have a machine that is in a specific AZ, or a machine that has been tainted somehow because somebody SSH'd into it, or whatever reason: if you have a specific machine that you want the Machine API operator — let me make sure I'm saying the right one — to remove first, all you have to do is add an annotation to that machine for this particular item. Let me paste this into the chat here. At that point, it will use those as the first ones, right? That one or more as the preference. So that was new to me; I didn't know you could do that. Kind of neat, if you ask me. So I think that's all I've got. The other minor one we can cover in like 10 seconds: if you happen to be looking at the OpenShift 4.7 nightlies, we are in the process of transitioning, or rebasing, that from RHEL 8.3 to RHEL 8.4. It's not available in candidate or fast or stable at this point, just the nightlies, but at some point in the future, expect to see OpenShift 4.7 rebased to RHEL 8.4. Just FYI. All right, I'm done. I know we took a little longer than normal for those, so my apologies to John, but I did want to cover those as much as we could. All good stuff, no worries. Yeah. So John, Insights. As I said at the start, Insights is something that I am not super familiar with; it came out at Red Hat Summit, I think two years ago, when we announced Insights for RHEL. Yeah, Insights for RHEL. Insights has actually been around for around six years.
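Going back to the machine set deletion priority for a second, here's a rough sketch of both knobs. The deletePolicy values are Random (the default), Newest, and Oldest; the machine set name is hypothetical, and the annotation key is written from memory, so verify both against the docs for your release:

```yaml
# Sketch: control which machines a scale-down removes first (names are hypothetical).
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  replicas: 3
  deletePolicy: Oldest   # Random (default) | Newest | Oldest
  # selector/template omitted for brevity; only deletePolicy changes here
```

And to mark one specific machine to go ahead of the policy's ordering, you annotate it, something like: oc annotate machine/worker-us-east-1a-abc12 machine.openshift.io/delete-machine=true -n openshift-machine-api (again, confirm the exact annotation key in the documentation for your version).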
It's been a long-term product, but we had a major reimagining, a re-envisioning of it, a couple of years ago, and I'm sure that's what you're thinking of. Insights actually started out as a $99-a-seat paid thing with the idea of: hey, we work a lot with customers at Red Hat, we know a lot about what's going on; how do we take some of that "let me show you"-type data and give you information about what's going on in your systems? So think about any of your other vendors that have some sort of a health check — we used to call it AutoSupport for storage devices — where it's essentially, "hey, we're detecting something odd in your environment; you may want to fix this before you get that 2 a.m. phone call." That was kind of the original idea of Insights, and it was just for RHEL at the time. It had an essential mission: we're going to identify availability, performance, stability, or security risks. But six years ago it was fairly simplistic in what it gathered. If you think about Red Hat and the value of your subscription, one of the biggest values you get is that support experience. And that's not always "crap, I've got a problem, let me pick up the phone and have Red Hat help me fix something." It's those knowledge base entries, it's that documentation, it's that entire experience. We have, I believe, somewhere around 115,000 knowledge base entries and over a million support cases. So we've got a lot of experience at Red Hat fixing Red Hat stuff and keeping it running well. The whole goal of Insights is: let's take some of that information and give it to you in a consumable fashion where, in some ways, you can self-support, and you can have some predictive, prescriptive information for making sure your stuff is running correctly before you have to pick up that phone and call support. Nice. Yeah, what's that? There are reactive, proactive, and preemptive actions.
And I'm trying to remember who said that — there was an SVP at a previous company, I think, who coined that term for the support organization — and I always liked it. The thing that I'm most familiar with from Insights is that it will generate Ansible playbooks to apply settings or fixes that it finds, stuff like that. Correct. So the big thing we announced this last Summit, just a couple months ago now, was that we expanded the Insights brand. It had traditionally only supported Red Hat Enterprise Linux; we now support OpenShift as well as Ansible Automation Platform with Insights. That's essentially a growing area where we're taking more of those learnings and spinning them out. We've had it in OpenShift for a while, but it's not really been branded or marketed in that kind of way. If you've used anything on cloud.redhat.com, you've already seen the presence of Advisor, as it was called, within the UI. That's the Advisor service of Insights. It's very similar to the one we started out with in RHEL six years ago, in that it's identifying issues on your cluster that may arise. This could be something really simple like, "hey, you're running an out-of-support version, you may want to upgrade and get into a supported train," or it could be "we've detected a vulnerability and you should address this before you have a bad day; let's go ahead and fix this," or even "you're running on this hypervisor and the hypervisor configuration is not correct; if you go in and change this buffer setting, you're going to have a better experience: more performant, more secure." Awesome. I'm just responding to questions in chat here. Yep. Insights for RHEL — I'm making sure I'm ordering the words correctly here, because "RHEL for Insights"... Yeah, that would not be the same thing.
So Insights for RHEL is focused around taking all of that data, all of those metrics, and feeding it up into the cloud platform, where it uses whatever analytics it has and says: here are these conditions, here are some things you should know about, here are some potential fixes for those. Insights for OpenShift is effectively the same. Insights for RHEL started the way you explained it, and that's roughly where OpenShift is today, with some more availability. Insights for RHEL has expanded a lot from that original charter, and that's what you were remembering from about two years ago. We actually show you all the CVEs that are on a RHEL host. If you have regulatory compliance needs, it can use OpenSCAP to compare against PCI rules and say, "hey, your system is or is not in compliance, and here's what's missing." We can tell you about all the patches that are available on a RHEL box. We can even compare some of the metadata we're collecting and look at things like the kernel version and say, "hey, we've got a baseline that says we expect you to run kernel X, but we've actually detected you're running kernel Y." And for the majority of those things, as you mentioned earlier, we can create an Ansible Playbook and simplify the process of you fixing it. Insights for RHEL actually takes it another step further: if you have an additional subscription, Smart Management, you can click a button and fix all that stuff. It will actually fix it for you, which is a huge advantage and time saver. Insights for OpenShift, which is really what we're here talking about today, is in its early stages, because we just announced it a couple months ago. But we've had that Advisor service for a while. It's comparable to that same Advisor service in RHEL that focuses on the prescriptive, predictive type of information: hey, you've got this problem, you should go fix it, and here's how.
It's not just — well, if it's an unsupported version, the answer is "get to a supported version"; that's not very prescriptive. But if you have this particular vulnerability, take these remediation steps to go ahead and fix it. There's another aspect as well that's really important, and that is subscriptions and cost management. Talking to the business folks: you pay Red Hat to use these subscriptions. Are you consuming the proper number of cores? Are you over-subscribed, under-subscribed? We have a Subscriptions tool that will tell you that. And if you're running OpenShift Dedicated, or you're running on AWS, we have some cost-management-type capabilities where we can detect what you're spending, and we'll forecast a little bit on what we expect you to be spending by the end of the month. Just — I don't know if we'll kick — I don't know what the first letter is. I will kick you. There you go. Yeah. Thank you, Chris. You're welcome. Reading where I can't, so I don't know. So they were asking about how long it takes to go from fast to stable with the updates from 4.7 to 4.8. Engineering bases that entirely off of metrics. And remember, both fast and stable are fully supported. So today you can, fully supported, switch to the fast channel, update to 4.8.2 — I think that's the current release — and then switch back to the stable channel; that whole process, everything about it, is absolutely fully supported. When they move an upgrade edge from fast to stable is based off of metrics. With z-streams, it's usually like three or four days; it doesn't take too long. With y-stream updates — minor upgrades; a major upgrade would be like 3.x to 4.x — it's based off of metrics, and it can take a couple of weeks sometimes. It can take even longer; sometimes they'll discover bugs. We saw this with 4.7. Remember, it was...
...4.7.6 or something before the upgrades from 4.6 were enabled, as a result of a couple of bugs. So it's not necessarily the number of bugs, it's not necessarily the severity of bugs or anything like that — I don't know what those metrics are; I wish I did. That actually ties into this a little bit, because the telemetry that we collect helps gather information on some of these features, and if we determine that a particular feature has a lot of bugs, or doesn't have any bugs, that can affect how it's moved from channel to channel. Yeah, and I'm glad you brought that up, because that was kind of going to be the segue into my next thought, which is: OpenShift has the telemetry, right? When you go to the docs and it shows you how to deploy, one of the first blurbs up at the top is "hey, there's telemetry being collected; you can go in and turn it off, and here's how"-type of thing. So, with Insights, I can only assume that it's hooked into that same data. Is there anything else you need to do to enable it, turn it on, begin to benefit from it? And I don't know if there is any extra data that Insights collects associated with that. Yeah, there are a couple of pieces. We leverage that telemetry as part of what we're collecting; that's very valuable in the health checks where we're telling you, "hey, your system's running correctly," or it's not. There's also an Insights Operator piece, and that is what gathers configuration-type data and sends it up to Red Hat for analysis. All of the data we're collecting — you'll hear me refer to it time and time again as metadata. We do not target any kind of personally identifiable information. We don't want it, we don't look at it, we've got no desire for any of your personal data. And notice my use of words was very careful: we don't target it.
I use it that way because I have a particular example in the RHEL space, where we had a customer call us on the mat. We get called into a meeting and they're like, "you're collecting information that we don't want you collecting." Well, that's interesting; tell us more. And they're like, "we have this particular project name that's a sensitive project name, and when we went and evaluated the collection, that project name was present in the collection, and this is a big issue for us." We're like, okay, let's look into this. Turns out they had created a service on the box and given it the name of that project. And when we go and collect running services, that name was collected. It's nothing unusual — same way we're collecting, say, NTP: we're collecting that name, "hey, this service is running," because one of the cool things we can do is actually set up rules: is this particular service running or enabled? And if it is or isn't, you can create an alert. Like, "I expect the firewall to be running at all times; if firewalld is disabled, send me an alert." That's something we can set up, so we do collect the names of running services. And they had created a service with this name. So we showed them that on RHEL you can go in and create a blocklist. You can do this by pattern, you can do it by keyword, you can do it by command. Just put it in that list, and we'll never touch that term ever again. So if you're sensitive about something like your company name, put it in that list — or, in this case, a particular project name. So we don't target personally identifiable information, but there are scenarios where, if you do something on the box that's unexpected, we might collect it. And we do give you ways to check what is collected and redact anything that concerns you. The caveat there is, the more you redact,
the less valuable the findings are. So if you say, hey, not only do I not want to send you my IP address, I don't want to send you anything about my networking stack, well, then we can't tell you if you have a network bond that's misconfigured, or LACP that's misconfigured. We don't know anything about it because we're not collecting any of those details. That brings up a good point, because those two things you used as examples are operating-system-level, RHEL-level things. So there's CoreOS/RHEL, there's Kubernetes, and there are all of the OpenShift components. Does Insights for OpenShift apply at all three of those levels? Does it have rules for each? That interests me particularly because I do a lot with OpenShift on bare metal, and knowing if there's a misconfiguration at the hardware level is valuable. I always assumed it was just at the OpenShift services and operators level. We're definitely at that cluster level, where you've got that Insights Operator running. We're not down at a particular pod level or node level at this point. I do expect some growth in that area now that this is all part of the Insights family, but right now we're definitely looking at the cluster health and the cluster findings more than anything else. And we saw last week with Jimmy that ACM, which does have pod-level monitoring and integration and all that other stuff, does hook in with Insights as well. And I can only imagine, with ACS being probably tangential at this point, it's another way to get at that pod-level information that's going on inside the cluster. So across the portfolio, the suite of things.
What do we call it now, OpenShift Platform Plus? ACS we're not involved with at the moment, but as you mentioned ACM, we're going to have those integrations, and it's similar to what we do with Red Hat Satellite: you can see all the Insights findings within the view you're used to going to today. So right now, for Insights for OpenShift, you primarily see that from the hybrid cloud console, cloud.redhat.com, console.redhat.com, however you want to refer to it, but we do want to bring those findings into the places you're already working, because that's more valuable to you. We wouldn't want to force you to bounce around a bunch of different places. So, did we talk about disconnected? Not yet. That is the number one question I always get about Insights. And I'm going to answer your question slightly differently. The Insights client and the operator are 100% open source; all of that is available to you. However, what is not open source is the rules, the recommendations, that we run through, because those are all derived from the closest thing Red Hat has to intellectual property, which is our experience in support. So if you take a collection, that collection is generated on the cluster and sent, as we mentioned earlier, over port 443 out to our APIs for analysis, and all of that analysis happens in the cloud. There's not an on-premise version of that analysis today; it has to be analyzed up in the cloud.
We evaluate that metadata and then we send back the results, which are either, okay, we found issues and here are the issues or risks that we found, or, hey, you're all good to go. You may get, and I saw it on a screen you showed earlier, zero issues found from Insights; that may just be the result of everything running well, and that's the best result. Or you might get, hey, we found something kind of questionable here, you might want to come take a look at this. So, John, you've known me for the better part of a decade; you know I'm full of stupid questions. You said that that agent is open source. So if I really wanted to, on a disconnected network, could I deploy that and have it forward into my own custom system or something like that, and basically recreate the magic of Insights? If you're that determined, I don't see any reason why you couldn't. In fact, for Insights for RHEL there was a DevConf session, maybe two years back, time still doesn't have much meaning for me so I've lost track, the last time we were able to travel, in the before times. Yvonne, who did one of The Level Up Hours with you, Chris, on the connected customer experience, gave a presentation there at DevConf about Insights and how you can create your own rules. But it was just that: you would be redoing that package to point at some sort of local rules engine that you would have to create, and you can't make use of any of the rules that we create. Another common question we get is, hey, can I create my own rules and upload them? We don't allow that, because we don't want injection of anything that we haven't created that could potentially do something questionable. That data that we do collect, because we take data and send it up to the cloud, we're overly protective of that data and how it's treated.
So we don't allow you to create your own rules, because that means you could potentially write something, send it up to the cloud, and run something malicious against our data sets, and that's no bueno, not going to happen. There's always an xkcd, right? Little Bobby Tables. Yeah. Let me see if I can dig that one up, I'll post the link. I mean, I guess you can indirectly cause rules to be created, basically, if you open a support case. Absolutely. Those rules are generated from our experiences in support, so if we know that somebody's hit the same issue over and over, we'll create something to detect that issue and go, hey, we're starting to see this pop up. At one point there was a top-25 list of the issues in support, and the goal was to make sure that everything on that list was always covered by an Insights rule. That way we can say, hey, these are the most common things other customers are hitting; you don't have to hit them, you can just check with Insights and we'll tell you if there's anything wonky going on. Yeah, so as always, support cases. It's not just raw data being fed up; support cases play an important role in all of that. Just this morning, I think in two separate conversations, my first question was, did you open a support case? All the time. And it's internal folks asking questions on behalf of their customer, trying to get stuff fixed. And yeah, I get it, you want to help as much as you can and get things back on track as quickly as possible, but even if it's just a follow-up, open a support case. That helps us identify, as you pointed out, trends and things like that that can be fed into the larger systems.
So, John, I'm going to put you a little bit on the spot here. I deploy a lot of clusters; my clusters don't live for very long and they're almost always horribly broken by the time I'm done with them, just beyond-repair broken. So do you have an example? Can I see what Insights for OpenShift looks like? Yeah, absolutely. Give me a second here, let me shuffle around to share my screen. All right. Where I am right now, I've just switched to a screen where I have the hybrid cloud console up. This is available for you at cloud.redhat.com. You do need a Red Hat account; you sign in, and if you're already using any of our support articles at access.redhat.com, it's that exact same login, so there's nothing new for you to sign up for. And if you already have an OpenShift subscription, you have this today, or a RHEL subscription, or an Ansible Automation Platform subscription. This is that central landing page. In this environment I've got a whole bunch of stuff connected; this is a developer environment that I use. You can see on this left-hand bar: Application Services, OpenShift, RHEL, and Ansible. I do want to pause for a second, I think I saw an earlier question about managed Kafka. If you want to spin that up as a developer instance, click that Application Services link and you can create an instance right from this page. But we're going to focus on the OpenShift piece of things. So if I click into OpenShift, I can see all of the clusters that I have registered. I've got a number of them, and again, this is a development environment, so there are a lot of broken things in this developer world. You're using an employee account too, so you'll see everything, all of our abandoned, broken clusters in there. Yeah, this one's part of our developer instance; nobody properly shut them down.
So I've clicked on the overview screen, and I'm showing this because today, on this left-hand nav, you'll see I have Subscriptions and Cost Management; these are also features of Insights. There's not an Advisor link here on the left today; that is a future growth area for us, so I suspect at some point in the next quarter or two you'll start to see an Advisor item, which will give us a better view here, but for now you can see it within this overview. What I'm going to do is go to this cluster with a number of issues; it was an eval. Within this view you've got Overview, Monitoring, and Insights Advisor. It used to just be called Advisor; we've updated that to say Insights. In this particular example there are two issues that we found. If I scroll down and look at them, the first one is simple: this version is estimated to reach its full end of life in less than a month. It's a low-risk issue right now, because we haven't hit it yet, but we're letting you know that you're going to come to a time when this version is no longer supported. If I go to the view details, it gives a little more detail about the issue and how to remediate it, and in this case how to remediate it is: update your cluster. This was a 4.5 cluster, so that makes sense. Yep. If I go look at this other one, though, this is a little more interesting. We've actually got a CVE here that's popped up: runc is vulnerable. So this is something we would want to fix by moving to one of these potential versions. We can see a little more detail about that CVE, or if we go into the list here, we've got recommendations, and this recommendation, unfortunately, is again: update the cluster.
The reason we've hit the issue is that it's running a vulnerable version, and then for any additional info, this links us out to that CVE page, to that security bulletin, where you can get more information about what's going on and why this is impactful for you. So this is just the first one that I clicked, because there are so many clusters in this environment; I'm glad I hit one that had a couple of things showing. Sometimes you'll go in there and it's just, hey, welcome to Insights, because there are a couple of those recommendations out there as well. As for the importance, you'll see this is one moderate and one low, so it's not super impactful. If there were anything critical in there, that's when you'd want to address it pretty much ASAP. So, this is interesting, and I like it, because I don't think there's another component in the portfolio that does these operating-system-level notifications. ACS I think does pod level, and Quay as well; they'll do analysis on the container images that are in use and things like that, but I don't know if they do operating-system-level stuff. I've never looked; I should ask Foster or somebody and see whether ACS is capable of that. So this is interesting to me because it gives you that level of visibility. And where I hope we're going is kind of like what we do on the RHEL side, where you can see the full list of recommendations that you could hit, and a view where I can see every cluster that's hitting a particular issue. That will hopefully be available once we get Advisor listed on the left-hand side here. We don't have that view today, but we've got a similar view in RHEL, so I would expect it in the future to look more like that, and I can show that here in a few minutes before we bounce out. But there are a couple of things I want to touch on in this console first.
I'm curious if there's a list of the rules that it checks for, because things that come to my mind are, like, etcd latency. Right, it's a well-known issue that's usually something we hit. Yeah. So today there's not that list for OpenShift. I just bounced into RHEL, if you look at the top tab. What I can see within the RHEL space, and this is what I hope is coming in the near future to OpenShift, is: here are all the recommendations, and I can just clear the "systems impacted, one or more" filter and see every one that we have; for RHEL there are over a thousand. I can't see that number quickly for OpenShift today in the same way I can for RHEL, but I hope this is the view that's coming in that left-hand nav on the OpenShift side, where I can see, hey, here's everything that I could possibly check for. Because for RHEL, for years, the biggest question we've gotten is, can I see a list of the recommendations? And I'm like, yeah, it's right there in the product. And for RHEL we look not only at the OS level, we actually look at the hypervisor level, cloud provider level, workload level, so you see here we've got recommendations for SAP, or for SQL Server, or any of those. So Advisor is capable of more than just that OS layer. I'm going to switch back to OpenShift. Christian, I see you trolling. Is Christian trolling? Oh, damn it. Yes, he is. The other piece I want to make sure we include: if you look, there's this little Insights bar, and underneath that we have Subscriptions and Cost Management. The Subscriptions piece is pretty impactful to anybody who ever has to deal with: what have I bought, how much have I used, and how much do I have left? And this is still collected by that Insights Operator, where we're looking at how many cores am I using. Remember, this is a dev box, so the subscription counts are ridiculous. So what I'm going to do is, you see this subscription threshold line,
I'm going to hide it by clicking on it, and now I can see what I've actually consumed. OpenShift subscription utilization is by cores. I can hover over any day and see how many cores I have used, so I can get a sort of trend line, and then I can scroll down a little and say, hey, here's master one on OCP using X cores, so I get an idea of what I'm utilizing. Andrew mentioned earlier it's an employee account, so it has a ridiculous subscription threshold; we have 866,000 cores available in this account, which is why usage looks like a flat line from here. So I'm just going to hide that, and now I can see what I'm actually consuming. But in a more realistic environment, which I do have, it's just not this one, you can see the comparison of what you've consumed to what you've paid for within your account. And then as you consume cores it tracks that trend up, so you can either say, okay, I'm trending up to the point where next quarter I might need to buy some more subs, or, I've used more than what I purchased, so next time I talk to my account rep I need to have a conversation about getting more subscriptions. I do want to point out that this is something for you to use; this is not a tool for Red Hat to go say, oh, our customer's using more than they paid for, let's go after them. That is not the intention. This used to be called Subscription Watch; the idea behind that name was that you could watch how many subscriptions you're consuming. Some people thought it meant that we, Red Hat, could watch how many subscriptions you were consuming, and that is not the goal, so we changed the name back to just Subscriptions. If you are using other means, like on-demand dedicated,
I don't have any here, but you could see that, or annual dedicated; you could see all of those represented right here alongside how many you've purchased, so we make all of that information available to you. It's a good service, really easy to use. I'll pause there to see if you've got any questions about this piece of the puzzle. I think you've covered everything I would have asked, because, and you used to hear me say this all the time, people ask me how much it costs, and my answer is, I have no idea, because as an employee we just ask and there it is. So this is helpful for me to see, because it gives me perspective. What we were finding from talking to customers is that just dealing with the general subscription pain of working with Red Hat was consuming about a quarter of an employee's time each month. That's not a small deal, and most of our customers are handling this with complex spreadsheets. And the easier we make it for developers and admins to spin up OpenShift clusters, the harder it becomes to track how much you're consuming. The idea of this Subscriptions service is to make it really easy to get quick visuals on what you're consuming and what the trends are. It's not down to the SKU level, so we're not saying you're using X number of SKUs; it's, you've got the ability to deploy a certain number of cores, that's the threshold, that green line, and you're actually consuming Y number of cores. So in this case, on July 30, I was consuming 5,505 cores.
And it's on a daily increment; it's not any more granular than that today, and we can switch that over to weekly, monthly, quarterly, whatever increment. I think we only keep this data around for, this one might be a year; it's not super long term, but it's enough to get a good trend, and we can see we had a dip in October, we peaked out somewhere in April, and then we're trending down a little again. So I'll address a few things in chat here, not necessarily answering questions but more commenting around them. I believe I see you're asking about doing upgrades with GitOps, right, Argo CD, the operator, et cetera. As Christian pointed out, and I don't know that we've publicly talked about it, our product management team has a cluster that they keep running long term. They are responsible for administering it, deploying applications to it, all the other stuff, and they have fully adopted the GitOps philosophy for that management. So it's really interesting to me that our product management team has fully embraced not only our own product but also a specific management style associated with it. And it's resulted in them gaining a lot of experience with what it's like to be a hands-on administrator; they've found issues and filed BZs and RFEs as a result of that experience. So I personally think it's really cool that, rather than focusing on their individual pieces, they're looking at the entire cluster. And the way they do it is cool, too: each PM is responsible for a two-week span, and whatever goes on during those two weeks, you're wholly responsible for it, not, oh, you're the PM for this component, so you take care of that inside the cluster. I think that's beneficial to us all overall. Christian asked about it when I said he was trolling; he asked, can you explain why you don't need to run DHCP in a pod?
So, you technically can, and I think Christian and I had a conversation about this with the helpernode v2, because he's trying to containerize everything, so I think it's technically possible. But in Kubernetes and in OpenShift, if you're trying to use, and we had a conversation about this with somebody in Slack, something like Multus with a second CNI where you're doing DHCP IPAM, so macvlan or something like that, where you want a second interface for the pods to connect to, there's an internal DHCP server that handles that. You don't need to deploy or manage anything; it's all done magically. Magic. So it shouldn't be necessary, unless you're trying to do something unnatural like Christian, which is deploying external services inside of containers. Let's see. I see you saying here there's a cluster reporting 78 cores when it only has 52. That is probably the result of hyperthreads, and you can oversubscribe those; remember, hyperthreads don't need a subscription entitlement. Generally, if you're concerned about that, I'd say open a support case, and the team that built the service can explain the exact business rules they're using. They will very happily take in any RFEs or fix any issues they may determine, and they can explain very clearly the business rules and the reasons why you're consuming one core count versus another, in a way that I'm just not yet educated enough to do. Yeah, it's an odd one. It's definitely cores, not sockets, for OpenShift; for RHEL it's sockets and not cores, and that has to do with the way the individual products handle their subscriptions. Tiger asks, when is OpenShift Dedicated coming to IBM? It's already there.
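The second-interface scenario John describes, a macvlan attachment whose pod addresses come from DHCP on the attached network, is typically expressed as a NetworkAttachmentDefinition. The sketch below is illustrative rather than from the show; the name, namespace, and master interface are made up, and the DHCP handling for this IPAM type is provided by the platform rather than anything you run in a pod:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-dhcp      # hypothetical attachment name
  namespace: demo         # hypothetical namespace
spec:
  # CNI config: a bridged macvlan on a host NIC, with addresses
  # leased via DHCP instead of a static or whereabouts IPAM.
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }
```

A pod then requests the secondary interface with the annotation `k8s.v1.cni.cncf.io/networks: macvlan-dhcp`, and the address on that interface is leased from the external network's DHCP server, with nothing extra for you to deploy, which is the point John is making.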
Yeah, there's a managed service on IBM Cloud for OpenShift. It's not called Dedicated; it used to be called ROKS, Red Hat OpenShift Kubernetes Service, and they changed the name. I don't remember the new one; I think it's OpenShift on IBM Cloud, but I could be wrong. Yeah, I had a chart for that but I'm not seeing it. And Tiger, welcome back. Tiger was our intern two years ago, so good to see you, and welcome back to Red Hat. So, I know we're at the top of the hour, we're actually two minutes over. Chris, I'm assuming, because you haven't harassed me, that we don't have a hard stop. I wanted to be respectful of everybody's time. John, I want to make sure you've gotten through all the things you wanted to say. There's one last thing I want to mention real quick, and I'll go to it on the screen: Cost Management is a feature we have specifically for OpenShift, and what it's doing for you is calculating all of your OpenShift costs, whether that's in AWS or Azure, so we can actually break it out that way. You can see in this particular cluster, I'm four days into the month and I've consumed about $5,000. And we provide you some trend lines to say, based off of that so far, here's what we're forecasting your expenditures to be by the end of the month. There are a lot of other capabilities in here around that calculation; there's an entire document somewhere on the model we're using and the confidence of the model. You can see below we've got usage for CPU, memory, all the information we're tracking toward that cost, and then we can even break that down per core. Hmm, OpenShift's not available, AWS isn't available on this one, none of them are loading. This is lovely. I'd say your internet is broken, but you're still here. We'll do a lovely refresh.
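The forecast John shows comes from cost management's own documented model. Purely as an illustration of the idea, using the figures from the demo ($5,000 consumed four days into the month), a naive constant-burn-rate projection would look like this; the function name is made up and this is not the service's actual model:

```python
# Naive month-end cost projection from month-to-date spend.
# Illustrative only: the real cost management service uses its own
# documented forecasting model, not this constant-burn-rate assumption.

def forecast_month_end(spend_to_date: float, days_elapsed: int,
                       days_in_month: int) -> float:
    """Project month-end cost assuming the daily burn rate stays constant."""
    daily_rate = spend_to_date / days_elapsed
    return daily_rate * days_in_month

# Four days into a 31-day month with ~$5,000 consumed, as in the demo:
print(forecast_month_end(5000, 4, 31))  # prints 38750.0
```

A real model would weight recent usage, seasonality, and confidence intervals, which is why the console's trend line is more useful than a straight-line extrapolation like this.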
I mean, I get into about five different accounts at the same time, and sometimes one of the cookies just breaks it. This happened to me the other day too, and it was on Chrome, no less. I switched to Firefox and it was fine, so I don't know, that could be it. Yeah, but that was the last one I wanted to show; luckily it happened at the end, so I don't have to spend much more time on it. You got to see a little something, but yeah, the Cost Management capability is pretty cool, especially if you're cloud-dependent. Even though I'm not running a whole lot of OpenShift in my AWS account, it shows me my expenditures overall, RHEL things that aren't even OpenShift, so I actually use Cost Management to look at my AWS expenditures. I get those same figures from AWS directly as well, but I do like to compare them just to see what's going on. So, thank you, John, really appreciate you coming on. Absolutely. It's been a while since you and I did any kind of presentation together, so it's just like old times. Thank you very much, really appreciate you being here. For our audience, thank you for joining us today. Hopefully we addressed all of your questions; I tried to scroll through chat here in case I missed anything. If we didn't answer anything to your satisfaction, or if anything comes to mind after the show ends, please don't hesitate to reach out. You're always welcome to send me an email, andrew dot sullivan at redhat.com, or you can reach out on social media, PracticalAndrew, all one word, just like you've seen me posting in the chat here, which should get rebroadcast across all the others. And I saw your coffee cup there, John; you're on just water. Yeah. Did you know that you can use emoji in URLs? Yes. It's awesome. You can buy domain names with emoji.
Yes, so there is a "bear metal" one, right: a bear and then the horns, dot ws. If you happen to browse to http colon slash slash, bear emoji, horns emoji, dot ws, that's a fun little... what do they call those hidden things? Easter egg! Easter egg, that's the phrase I was looking for. It's just my kind of humor, you know, I try to keep it around. It's past my bedtime, what can I say? So thank you, everyone. Again, don't hesitate to reach out, andrew dot sullivan at redhat.com. Also keep an eye out for a blog post on cloud.redhat.com/blog that will come later this week or early next week; some of the publishing is still a little off as a result of the transition, so keep an eye out for that, and thank you so much for joining us today. Yeah, we really appreciate you. Take it easy out there, and tune in at two o'clock for the scalable multiplayer game design show.