Hello. Good morning. Good afternoon. Good evening. Possibly all of those things again if you joined us at our earlier start of the Level-Up Hour. So we're back. The Level-Up Hour might be more like a level-up half hour today, though. I'm your host, Randy Russell. I'm joined by my co-hosts, Jafar Traibi and Scott McBrien, and we have what we hope will still be an interesting show after a few little technical hiccups this morning. So we're looking back at the season since we began hosting the show, and we have a few episodes that are our particular favorites, so we thought we would run through some of those again. When we first started at our nominal start time, we had already talked about UBI Micro and the visit we had with Scott McCarty. So now for number two, I think we want to try again, crossing our fingers, because that's what we do when we're technical people. We rely on luck, right? We're going to go to episode two, which is migrating Kubernetes workloads and state with Crane.

Konveyor is kind of an umbrella, a collective of projects that all have a shared objective: to move people to cloud-native environments, specifically Kubernetes. There are a lot of paths there, and a lot of problems involved, so there are a lot of projects that seek to solve a bunch of different problems. You can see here there's re-hosting. So maybe you've got one cluster in one place and you want to move it around. Crane concerns itself with container workloads: traditional Kubernetes migrations, where maybe you want to go from one cluster to another and it's already been containerized. Then there's a project called Forklift, which concerns itself with actually picking up VMs and moving them to Kubernetes clusters, which is pretty cool; it uses KubeVirt. So Forklift and Crane are kind of sister projects in that respect. Then you've got re-platform.
So Move2Kube expects a source that's not a Kubernetes cluster: other platforms such as Docker Swarm or Cloud Foundry. It will pick stuff up, analyze it, and spit out Kubernetes objects, and that'll help you convert your workloads to Kubernetes. Tackle, I think, was the one that was actually on here, if I'm correct. This one will help you actually refactor your applications. I think we'll probably talk a little bit about this, but applications are not all created cloud-native out of the box. There's a lot that goes into architecting, designing, and writing software so that it expects to be running inside of a cloud environment and can take advantage of all the benefits associated with that. So Tackle helps you look at your application and makes some recommendations about how you can adapt it.

All right. So, Jafar, what are your thoughts on that episode and some of these tools that are being used to help migrate workloads? Yeah, I think what's interesting with this episode is that it shows you where you can go once you are further along in your container adoption. The first clip spoke about UBI, which helps you build your container images, get familiar with them, and maybe, as a developer, experiment with how to containerize your applications. But as you evolve into a more sophisticated adoption, you have maybe tens or hundreds of applications on a cluster that you would like to migrate to a different one. And Red Hat has actually built a tool that allows you to easily do that. Basically, you select all the containerized applications that you have in an existing project on an OpenShift 3 cluster, and then you can migrate to a different cluster on OpenShift 4, or actually from one OpenShift 4 cluster to another. So it basically allows you to do this mass migration.
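As an editor's aside, the Move2Kube re-platform flow described above comes down to roughly a two-command run. This is only a sketch: the source path is a placeholder, and flag spellings may differ between Move2Kube releases, so check `move2kube --help` before relying on it.

```shell
# Sketch of the Move2Kube re-platform flow. ./my-app is a placeholder for
# a non-Kubernetes source tree (e.g. a docker-compose or Cloud Foundry app).
move2kube plan -s ./my-app   # analyze the source and write out a plan file
move2kube transform          # answer the prompts, emit Kubernetes objects
```

The output directory then holds generated Kubernetes YAML that you can review and apply to a cluster.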
So there are basically tools for anything, whether you are a developer or, I would say, a more advanced adopter who wants to easily migrate applications. So I thought it was nice to have that sneak peek into what you can do once you go further ahead in container adoption. All right. Well, that's one of the things we want to do with the Level-Up Hour: sometimes we start with more foundational things to get people started, and sometimes we dive deep into things that are really more leading edge that people are doing with their clusters.

So let's take a look at our next episode. And no pressure here, Jafar, but this one is getting started with an OpenShift single node cluster, which, if I'm not mistaken, hey, actually, was that Scott, or was that Jafar? Who was it? Yeah, it was me.

In order to make it easier for you to install single node OpenShift, we came up with what we call the Assisted Installer, which supports the single node OpenShift installation. And let me go ahead and share that with you. Stand back. Okay. So can you guys see my screen now? Indeed, we can. Okay, perfect. So what we're going to do: we are now at console.redhat.com, which is our hybrid console for managing your OpenShift clusters, for managing your subscriptions, et cetera. And as you can see here, it says create cluster. It's something that we made to make it easier for you to create your OpenShift clusters. Basically, here we are going to enter a cluster name, we are going to specify the base domain that we are going to use, and here we are going to indicate that we want to install single node OpenShift. Then afterwards, this is going to generate an ISO for us. And this ISO needs to be loaded on the machine that is going to be hosting the SNO. You can boot it via USB drive or PXE booting, so whatever suits you would work.
So for the sake of the demonstration, I'm going to be using a virtual machine. But keep in mind that this is not the production use case. So what did I say earlier? Yeah, exactly. We're going to start with an exception. Exactly. So just for the sake of time, I had already prepared an ISO that I have loaded into my virtual environment, and this is the one here. As you can see in the UI, it says waiting for host. And what I'm going to do now is create a machine on which I'm going to load the ISO, and we'll see how the installation goes.

All right. So let's go ahead and create the VM. Let's call it SNO. And here I'm going to choose the local installation and choose the ISO that has been generated. There's a minimum disk of 120 gigs. And for the memory here, we're going to go and put 64 gigs of RAM, although, again, with 32 gigs you should be just fine. I'm going to change the CPU here and put it at the minimum. And now I can start the installation process.

So Jafar, what's it like to watch yourself presenting? It's actually very funny, because it appears like you are knowledgeable, although we know that it's not true. Well, you did a pretty good job of bringing it off there. But that was really a topic where I think we had some of the highest levels of interest of the shows that we've done. And I think there's a growing amount of interest in single node, because it can address a lot of use cases where you don't need to go all in with the hugeness of a full-on OpenShift installation. Right? Yeah, exactly. Especially when we're speaking about Edge and 5G deployments and these types of very specific use cases. But what I also liked about that show is that it was the first time that we introduced a graphical installer for OpenShift, whereas before that, it was more of a command line approach.
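For reference, the VM creation from that demo can be approximated with virt-install. The name, ISO filename, and os-variant below are placeholders mirroring the demo, not an exact reproduction of it; memory is in MiB and disk size in GiB.

```shell
# Sketch of the demo VM: 64 GiB of RAM (32 GiB also works, per the demo)
# and the 120 GiB minimum disk, booting the Assisted Installer discovery ISO.
# The ISO filename and os-variant here are assumed for this sketch.
virt-install \
  --name sno \
  --memory 65536 \
  --vcpus 8 \
  --disk size=120 \
  --cdrom ./discovery_image_sno.iso \
  --os-variant rhel8.4
```

Once the VM boots the discovery ISO, the host shows up in the Assisted Installer UI and the installation proceeds from console.redhat.com.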
So yeah, I also liked that we had that addition to the way we are helping our customers with their installations of an OpenShift cluster. Of course, we do have helpers if you are using ACM, and that's probably something that we would like to show next year. But yeah, that's also something that I liked from that episode. Yeah, so that's a good one. And I would strongly suggest, if you're working with anything like Edge or 5G or other use cases where you need a lighter, smaller way to get into OpenShift, I think that's a really good episode.

So continuing on, our fourth pick, which was Episode 44. This was administering CI/CD services on OpenShift. And I believe we talked about some capabilities that are associated with Tekton and that are in OpenShift for being able to really do more with CI/CD. So let's go to the clip.

Vincent, maybe why don't you share with us: what is CI/CD? Essentially, from the infrastructure perspective, I think the easiest thing will be to use an example. Imagine you have a repo that controls your infrastructure, right? As an infrastructure person, I have multiple such repos. And every time I want to make a change to any of my infrastructure, I will submit a PR to that repo, maybe get a peer to review it. And essentially, I want to see that the outcome is what I expect it to be, even before I merge; and after I merge is even more important but harder to predict. The integration part comes when different team members collaborate on the same source code to produce a single result, which is the artifact. And that is also an overloaded word, but I guess we can pick it up from here.

So, some thoughts on that particular episode and CI/CD, Jafar? Yeah, I remember that I was sick for that episode, and I was very sad not to be able to attend, because there was a lot of interesting content.
And I think it spoke about some of the non-technical sides of CI/CD: the types of administrative or organizational tasks that you need to take care of when you are putting in place enterprise-wide CI/CD services. So yeah, I think it was a very nice show with lots of interesting content. And I believe it also introduced the Kubernetes-native CI/CD on OpenShift, which is OpenShift Pipelines and Tekton. And maybe Scott would like to tell us more about those things and how they all play together.

Well, as much as I am an expert on CI/CD and developer tools, which is not at all, I am starting to notice a theme to our show. Just like the Level-Up Hour takes folks that are experienced in Linux and open source, introduces them to container topics, and then helps them grow into more complex container specialty skills like Kubernetes, I am seeing the same trajectory here in our clip show. We started off talking about base containers with UBI Micro, then migrating containers with tools like Crane, and now we are talking about doing better development on your OpenShift instances with CI/CD built in.

Yeah, exactly. What is funny is, if you look at UBI, that's like your own single instance running on your laptop, and SNO is sort of the equivalent of that: a very tiny cluster running somewhere on the edge, versus your hundreds of clusters. And if you compare that to Crane, there you have your hundreds of applications. So with SNO we went maybe the other way around: from the typical deployment with many control planes and many nodes, back to something simpler, a single node installation. But yeah, there is a sort of pattern in there. So, nice spot. Well, a natural evolution of that pattern is, at some point, to be a bit concerned about security.
And so actually, number five of the clips we wanted to cover: we had a visit and talked about Keylime and cloud-native security, and about some of the things that are unique to trying to secure a Kubernetes or OpenShift type environment. So have we got that clip?

The Keylime architecture. We mentioned this before, but we have the Keylime agents running on some untrusted hardware somewhere, talking to a TPM or a virtual TPM. And then over the network, which can be trusted or untrusted, it doesn't matter, it talks to the Keylime server side. The Keylime server side is basically split into two parts: the verifier and the registrar. When the agent comes up, it contacts the registrar to say, hey, I'm ready to go. Here are the details of my TPM. Here are the certificates for my TPM. Let me know what you want me to do. And then, after a node is registered, somebody can tell the verifier: okay, these are the details of how I want this machine to be tested. So then the verifier starts talking with the agent, getting quotes and then periodic quotes, and it just does this periodic attestation. At any point, if a node fails, it can run revocation actions.

And so in this demo, we have a setup on AWS where we have a Keylime server, with the registrar and the verifier, which is talking to several different web applications. It's just a generic web application; it doesn't really matter what it is. But it's all running behind a load balancer. So we have the Keylime agent on each of these web apps talking to a software TPM, because, again, AWS doesn't expose hardware TPMs yet, and sending quotes periodically to the verifier. Then we're going to fail a node by running some evil script, and the verifier is going to kick off a revocation action to remove that node from the load balancer. So this is a real-world scenario where you have a normal web application behind a load balancer.
If any of those nodes gets compromised, you would like to remove it from the load balancer, and this will do that automatically. So let's see if this works. All right. I have some terminals here, and I'll explain what's going on. In the top here, I have the output from the Keylime registrar. Right now it doesn't do anything until new nodes come online. Bottom left, I have the first application running; this is the output from the Keylime agent running in that application. And this is the second one. So I'm going to start the Keylime agent here on the third application, and we can see it starting up. It talks to the registrar, gets everything returned, and so now it's just waiting. You can see here: waiting for revocation messages.

Then we can go to the next screen. In the top here, I have the Keylime verifier running, and it's continually checking these two nodes; these are the EC2 IDs of those nodes in Amazon. And now I'm going to tell the verifier about this new node, this third node, and what it should do and how it should verify it. I do that with this keylime_tenant command, and I'll walk through these options real quick just so you can understand what's happening. So with keylime_tenant, we're saying that the verifier is at this IP address. The new host that we're monitoring is at this IP address. This is its ID, which corresponds to its EC2 ID, but it could be anything: if it's EC2, it's an EC2 ID; if it's just random hardware, you can use hardware-specific keys or IDs, whatever you want. It could even be a randomly generated UUID. It doesn't really matter; it's just to track this machine over time. Then we give it an allow list: a list of all the files and hashes that we are allowing to run on this system. And we can say what we're going to exclude from monitoring.
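Pulling those options together, the keylime_tenant invocation being walked through looks roughly like the sketch below. The IPs, ID, and file names are placeholders, and exact flag spellings vary between Keylime versions, so treat this as illustrative and check `keylime_tenant --help`.

```shell
# Sketch of the keylime_tenant command from the demo (all values are
# placeholders; flag names depend on your Keylime version).
#   -c add        register the new node with the verifier
#   -v / -t       verifier IP / the new host (agent) to monitor
#   -u            node ID, e.g. its EC2 instance ID (any unique ID works)
#   --allowlist   files and hashes allowed to run on the system
#   --exclude     paths to skip monitoring, e.g. logs and database dirs
#   --cert        CA certificates used for communication
#   -f            payload delivered to the agent when attestation passes
keylime_tenant -c add -v 10.0.0.10 -t 10.0.0.23 -u i-0abc123de456f7890 \
  --allowlist allowlist.txt --exclude excludes.txt --cert ca/ -f payload.zip
```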
It's very common to exclude things like log files and anything in a database directory: things that are going to be changing all the time. Right, right. You can't know ahead of time what they're going to be. Then these are the CA certificates that are used for that communication. And this is a payload that's sent over when attestation passes, and this payload will also include revocation actions.

So security is obviously a very important consideration in our new containerized OpenShift world, right? And I think this was a really, really great demo about some of the kinds of things we have to think about in that space. Gentlemen, any further thoughts on that demo? And perhaps, Steph, if we could share a link for people who want to learn more? So I know that Michael said in the episode that they were targeting OpenShift first. But I have to tell you, Randy, I was looking at the demo and thinking, wow, there are a lot of places, especially for Edge workloads, even on RHEL, where this could be really, really handy, by either removing something from being able to communicate, or alerting, as you're putting things outside the data center and need to have more attestation. Or attestation at the edge? Attestation. I like saying that word, attestation.

So we're actually new hosts to the Level-Up Hour, relatively speaking. We started a few months ago, and we actually have some additional highlights from our early days, and also from when demos go wrong. So maybe we can run those.

Now to the announcement, which is: I sadly am planning to leave the show. And not just the show. What? Yes. Really? I'm going to be leaving Red Hat. And I blame Chris Short, because he regularly refers to me as the professor on the show. So I am going to go and become a professor, over at Boston University, teaching in a new group, which is all about computing and data science.
And teaching people by approaching how we teach computing a little bit differently. Also reaching out into other organizations, or other majors, so that we can leverage data science in things like journalism and give them that kind of formal teaching. Yeah. So I am going forward, and I am leaving you in the capable hands of our current guests. And Randy, I'm sorry, I couldn't find your Twitter handle. But these are their Twitter handles, so you should find them on Twitter. You can also still talk to me on Twitter; I will still be there. I will also still be around, as I expect to be heavily involved with Fedora. And because Red Hat and BU, Boston University, sorry, have a very strong relationship, I hope to continue to work with Red Hat on a regular basis.

I also find all of this quite confusing, because my son is also starting college. He is also going to BU, which is Cornell University. Yeah. So it's like, I got BU. So you're moving from BU to BU to BU. Exactly. So I currently work at a BU, and in about a week I will be taking my son to BU, and then I will be working at BU in the fall. So yeah, it's not confusing at all. No, no.

So it is the 42nd episode. It has been an awesome run. I think it's been a lot of fun. And yeah, I'll miss you all. Hopefully I will still be around in the chat periodically. Maybe they will invite me back as a guest periodically. But that's our plan. Yeah. So Randy will be taking over the hosting duties. I will be in the background, executive producing and producing as need be. And Scott and Jafar will be the OpenShift and RHEL experts du jour to talk about all the things that we come up with in the future. Yeah. So kudos to Langdon, who did a great job leading this show, and hopefully we can fill those big shoes as we continue. So besides having new hosts, we also had a demo.
And you know, the thing about demos is, as we always say, what could possibly go wrong? And how fitting, in that case, that we have a demo from our own Stabby Mick, Mr. Scott McBrien, talking about containerizing your application. Now to the announcement, which is: I sadly am planning to leave the show. And not just the show. Here we go. Next clip. Yes, when demos go wrong.

But today I thought we might kick it a little bit old school and talk about, well, how do you build a container? And there are a couple of different ways that you can do that. You could certainly use some OpenShift technologies like source-to-image, where you take your source code and, through a CI/CD build pipeline, build yourself a container image all the way down through deployment. But maybe you're not ready for that. Maybe you're just starting your container deployment journey, or container engineering journey. So that's what we're going to talk about today.

All right, so the last thing I'm going to do is run my container. Yes, Scott. So last time I containerized my image, I had an interactive application that I wanted to run. So when I ran the container, I used dash it to indicate that it was an interactive executable that I wanted to run; I wanted that terminal session to be passed through to the content inside the container. This time I'm running podman with dash d: I'm detached from it. It's just going to run out there, kind of like a daemon, and whatever is running in it is going to be running in it. Additionally, because this is a web application, I need to be able to get traffic to the web application in the container. So dash p sets up a port mapping, so that when I receive connections on port 8080 on the host operating system, it's really going to redirect them to port 80 in the clumsy-bird container. Interesting. Interesting. Did I typo something in my creation? Well, we'll start with typos.
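As an aside, the detached, port-mapped run Scott is describing comes out to something like this; the image name and tag are assumptions based on the demo, not the exact commands on screen.

```shell
# Run detached (-d), like a daemon, publishing host port 8080 to
# container port 80 (-p HOST:CONTAINER). Image name/tag are placeholders.
podman run -d -p 8080:80 --name clumsy-bird localhost/clumsy-bird:latest

# If the container exits immediately (as happens in the demo), ps -a
# shows the Exited status, and the logs usually explain why.
podman ps -a
podman logs clumsy-bird
```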
Could it possibly be, Scott? I don't know. Maybe my hypervisor went belly up, like that one time on Red Hat Enterprise Linux Presents. Interesting. So it says that it doesn't recognize that. No pressure. See if we could superimpose, like, a tick-tock, tick-tock, tick-tock. That would be weird. I stand by that placement. That would be weird. Instead of init, let's do... maybe that's right. We're rooting for you. It's still not going to work. All right, let's try what we're doing. We find something new with every casting. All right. So we're running. And what I'm going to do is open up port 8080 on the host. And that's no bueno. I'm completely flummoxing this demo. What could possibly go wrong with a live demonstration, you ask? Well, you're looking at it. What could possibly go wrong? All right. So let's just see what's happening here. We can end the pain any time you want, Stephanie. Yep. Yeah, I think I've seen enough. All right. So we're going to do a podman ps. Interesting. Apparently, Stephanie wants us to explain. All right. So this guy ran, but then he exited immediately, which would be a problem. Every broken glass moment, in perpetuity. And he exited immediately. Okay. So it looks like my change to Apache is not working. I like how our show producer is telling everyone to like and subscribe and share while my demo goes down in complete flames. Yeah.

Still, all right. So sadly, we had a little bit of technical difficulty today. So we're going to have to say goodbye for 2021 to Mr. Scott McBrien. But I think we have maybe one more clip that we can play to close this out, or maybe we just wrap up. All right. Well, let's go ahead and get one more clip in here. This was on Red Hat Insights. And I think Insights is an amazing capability that comes with a number of Red Hat products, and we had a great show on it. So let's roll that clip. Hey, Randy. Hey, Scott. Great to be on the show today.
And I am the Technical Marketing Manager for Insights. For anybody not familiar, it is a managed service whose purpose is to continually analyze platforms and applications in order to help you manage your hybrid cloud environments. And when we say platforms, it works for RHEL, as you mentioned; it's also available on OpenShift and Ansible Automation Platform. But the idea here is that we can take a look at your system, essentially with Red Hat's glasses on, bring you our experience supporting the product that you're using, and say: hey, is it in a best-practice configuration? Are there any vulnerabilities? How many subscriptions are you consuming? And we can get you all of that information in an easy-to-consume manner.

All right. Now, again, I'm in beta. The next thing I'm going to do is not going to work, and that's expected, because I'm in beta. What we're trying to do is up-level all of these Advisor findings, so you don't have to go into each individual cluster and then click the Advisor tab. We're trying to bring this into the left-hand navigation bar. So we've got Advisor listed right here. If I go into Advisor, what we want to see in the future is all the recommendations listed, the way that I showed you in RHEL. Right now, it's not going to work. It's not done.

So this might be the first time that we've had a demo that didn't work but we intended it. This one was on purpose. And that's because the code's not done yet. We know exactly what's going to go wrong. Yes. It worked at not working. Exactly. If it had actually worked, that would have messed you up. That would have been a train wreck. Yes. I would not have known what to do. Yeah. So that was a unique when-demos-go-wrong: an intentional one.

So I think, because we did have a few technical issues earlier, we're going to wrap up here shortly. I was going to talk a little bit about certifications for 2022, but you know what?
I think we will save that for another episode, where I can talk a bit more about it at our leisure, along with some other things, like complying with security policies in your organization, or implementing scanning and reporting for your systems population. And of course, the imminent release, maybe, sort of, kind of, of Red Hat Enterprise Linux 9. But Jafar, anything you want to share before we sign off for 2021?

Yes. Yeah, I think it was a pretty good transition. So yeah, again, kudos to Langdon for what he did before, and I really appreciate being part of the show as a regular co-host, or maybe sometimes being the guy who, I can say, takes risks live, taking risks with our demos. But yeah, I like the skin that we gave to the show, and I hope that people are enjoying what we do, because that's the end goal: to provide some interesting and useful content. And thank you, guys, for what you do with the show: you, Randy, Scott, Stephanie, who helps us in the background, and everyone involved, NC Jones also. So yeah, I hope we'll still provide some interesting content for the next year. And of course, I wish everyone a happy new year.

Likewise. So yeah, have a happy holiday season, for those who are celebrating, and have a great new year to come. As I mentioned, this is our last show of 2021. But please do like, subscribe, and share, and keep an eye out for us in the new year to help you level up with the Level-Up Hour. And so I think with that, we'll close it out with maybe a little bit more, Stephanie.

Yeah, so that's working. Let's see if I do it on 8080. Pretty cool. And it's working. So let's try one more time to look at port 8080. Oh, and there he is. So it's complaining about init from my non-root container. So let's see if we can figure out what's happening there. Oh, by the way, this is me playing my video game inside my container.
Tell us what.