Let's get into the big topic of open source, something that we actually live. This is so awesome: we are an open culture, and that's the process a developer works in. How is the Kubernetes ecosystem really doing? Good afternoon, good morning, good evening. Welcome to the Level Up Hour, where we talk about all things containers, Kubernetes, and OpenShift. I'm your host, Randy Russell, and I'm joined by my co-hosts, Scott McBrien and Jafar. How are you this morning, gentlemen? Hello, Randy, thank you. So we have a great episode today, I think, on a topic that is sort of a back-to-the-future thing. And I just want to remind everybody: like, subscribe, and share, so everybody knows that we're out here doing the Level Up Hour. Today we are going to talk about SELinux and containers. And the reason this strikes me as a back-to-the-future kind of thing is that, for those with a long memory, you might recall that OpenShift began as an SELinux trick, right? Am I wrong? Well, we definitely use it to segment off containers and keep their workspaces separate, for sure. Before we actually did it as a container orchestration thing using Kubernetes, OpenShift actually was a product built on SELinux very early in its lifecycle, way, way back. But now we have a different interest in it, right? Though it's the same interest that we have in SELinux in general. So maybe tell us a little bit about what SELinux is before I go chasing after this, while you're opining. Well, I'm opining because I've got a lot to opine about. SELinux was added to the Red Hat Enterprise Linux distribution as one of the standard features in Red Hat Enterprise Linux 4, so it's been with us for a long, long time. Its goal is to change how processes and users interact with resources on the system.
And there's a whole vast policy rule set that determines those rules of engagement between processes and users, or processes and files, or files and users, or ports and processes. It's immensely complex. And I will warn you in advance that the demos we're going to go through today (and Randy, you've got to see this this morning) are a lot of command line. So we're going to get into the weeds a little bit on SELinux. Right. Well, one of the things I think is notable here is that SELinux has been at the heart of RHEL since RHEL 4. And when it was first introduced, it caught a lot of people by surprise. The scripts you had lovingly curated for years under previous releases might not have worked. Applications might not have worked. But that's because it really fundamentally alters, as you said, everything: files, processes, users, what users the processes run as, what files a particular user running a particular process can access. So it's very all-encompassing. But it's always possible to be the executioner and just come in and say no to SELinux, right? Yeah, and we're making that harder and harder to do. And, full transparency, when we introduced SELinux in RHEL 4, its default policies were draconian, and they did cause interference with a lot of functions. So in the RHEL 4 days, like 15 years ago, it was a very valid choice, especially because there was a lot of documentation and tooling and a whole bunch of other stuff for it, to just turn it off. But we don't do that anymore. In fact, most of your RHEL systems, if you've just installed one and are running it, probably have it on and enforcing by default. And you don't notice it, because of all the improvements we've made to the policies and the interactions between things over the intervening 15 years.
Actually, I'm going to say, I think it's not only the improvements that we've made; I think it also made a lot of people give a second thought to things they had implemented prior to the advent of SELinux, and realize: okay, maybe it's not a good idea for this to run under this user, or for this particular process to have this degree of filesystem access. So in a sense it was the rising tide that lifted all boats, and we rose the tide a bit by trying to address it and make it just work, transparently, so it's not a problem. But I think it also affected people's behavior; it made a lot of people rethink how they did things. Which kind of takes us to the present day, right? And containers. Yeah, it certainly interacts with containers. In fact, there are several SELinux types that go along with containers, and we'll see a little bit about that today. And we also added some tools, specifically one that we'll take a look at in the second demo, to create custom policy modules to insert into your policy for specific container contexts. Sounds good. Jafar, are you ready for some command-line wizardry here? Yeah, sure. Let's go ahead. All right, stand back. That's right, let's loosen up. So what we're looking at is a Red Hat Enterprise Linux 8.5 system. It does have SELinux enabled and running; we can just check that with sestatus. So it's enabled, and it's using a policy called the targeted policy, which is the normal one we use for Red Hat Enterprise Linux these days. So it's there. And if you've not seen SELinux before, one of the other Red Hat Enterprise Linux shows touches on the basics of it. The rules of the road are: put things where we expect, and know that SELinux could be stopping things, so you know where to look to troubleshoot. I think those are the two first pillars of working with SELinux.
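The status check mentioned here is a one-liner. The output below is what a stock RHEL 8 install typically reports, not a capture from the demo machine:

```shell
# Confirm SELinux is enabled and which policy is loaded.
sestatus
#   SELinux status:                 enabled
#   Loaded policy name:             targeted
#   Current mode:                   enforcing

# One-word summary: Enforcing, Permissive, or Disabled.
getenforce
```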
So I'm just going to become a normal user, and we're going to do some rootless container stuff. I'm going to build a container real quick, and to make my life a little bit easier, I'm going to copy and paste from a document. All right. So I'm going to download the RHEL 8 universal base image, because that's going to be my starting point for making a very simple container. I'm going to take and... oops, let's try this again... install Apache into this container. So it's just going to run a web server, right? Super simple. And then I'm going to do a little bit of extra stuff to make sure that Apache is running when we start the container. Sorry, I'm having a very sloppy mousing day today, apparently. Hey, come on, we're on the keyboard; it's all about the keyboard today. We'll get there. Nobody wants to hear anything other than the copying and pasting that you're doing on the keyboard. All right. So with this systemctl command, I just made it so that when the container boots, Apache will start. And then I'm going to make sure that's the service that... oh man, I did it again... make sure that's the service that starts and attaches to port 80 in the container. Maybe I just need to be better about where I put my cursor. Okay. So now we've got a very simple Apache container that will start Apache when the container boots. So let's go ahead and commit this working container to a container image. All right. So if I do a podman images, there's this elhttpd image that I made, which contains the Apache server. And we can spin it up. If you're seeing podman for the first time: we're running the container in detached mode and giving it a port assignment, so connections that come in on port 8080 of the host will be redirected to the container's port 80, which is where we have Apache sitting and listening, thanks to our configuration inside the container earlier. So there we go. And if we do a podman ps, there's my running container, with an auto-generated name involving "gates", apparently.
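A minimal sketch of the build steps being narrated here. The elhttpd image name and the 8080:80 mapping come from the demo; the working-container name and exact flags are assumptions:

```shell
# Start from the RHEL 8 universal base image and install Apache interactively.
podman run -it --name elhttpd-work registry.access.redhat.com/ubi8/ubi bash
#   (inside the container)
#   dnf -y install httpd
#   systemctl enable httpd   # arrange for Apache to start when the container boots
#   exit

# Commit the working container to an image, then run it detached,
# mapping host port 8080 to the container's port 80.
podman commit elhttpd-work elhttpd
podman run -d -p 8080:80 elhttpd /sbin/init
podman ps
```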
So what you may not know is that as we invoked this container, a whole bunch of SELinux stuff started. If we use ps auxZ (you'll notice that SELinux-aware commands take a capital Z to say you want the extended SELinux attributes included in the output of the command) and we just grep for httpd, there are the processes running in my container right here. What's interesting about them is that with this Z output I get this field of SELinux information, and this field tells me the context this program is running with. Earlier we took a look at the SELinux policy in use, which was the targeted policy, and the targeted policy typically looks at the third field in this set of context, this guy, for making decisions. But what's interesting about containers is that we also get an additional set of fields that is not usually set on processes on the system. Just for reference, the fields of this output are: the user field (that's system_u), the role (that's system_r), the type (here we have container_init_t), the sensitivity, and then this last bit, which is called the category. And this is assigned on a container, whereas if we look at other processes on the system, we don't see that category set. Also, if we invoke another container from the same image and look at the same output, what we see is that one container has these categories assigned to it, and the other container's processes have different categories assigned to them. And this category assignment is actually how we keep the different containers separate, so that they don't mash on each other or incorrectly claim resources the other one has. Because according to the system, if we were just looking at basic Linux process management...
These are running httpd processes, and we would look at things like ownership or permissions to determine their access to things. But thanks to SELinux we now have this additional dimension of control, and what identifies individual containers so we can control them even more tightly is this assigned category parameter on the process. All right, let's pause for a moment, because we had a question; I've actually had a couple of questions today, and one of them concerned: should we use the ubi-init container for these sorts of use cases? Benjamin wrote "uni-init", but I think he probably means ubi-init. So I think that's a typo. Oh, it's Ben; okay, so it is a typo, and he's talking about ubi-init. As you may know, there are several flavors of the universal base image, and we've talked about that on a previous show. We can use the UBI standard image like we're doing here, which is fine: clearly my process is running, and it's listening on the right ports. The difference between UBI standard and ubi-init matters if instead we had more complex processes running. Maybe we had a database running in this container too. Then we can't just start up Apache as the base process when the container starts; we need to start up both Apache and the database. And that would be a reason to use the ubi-init flavor, which includes a full systemd implementation, so you can write unit files, start services, and control the order in which they start, and things like that. So if you're looking at some simple application containerization, UBI standard, the one I used, works really well. If you're looking at taking an existing monolith, or a set of services that you would normally install on a virtual machine or a server, and you want to run that collection together just like you would on a normal system...
That's where you would look at something like ubi-init. It increases your base image size, because it includes that full systemd implementation, but it gives you the added complexity and flexibility of using a full systemd. Hopefully that provides some clarity on that question. So, back to the demo. Where do we go next with this? Just to round out the point that SELinux is there and doing things, let me go back to being root, because I just want to show you some rules. Okay, so I installed a package called setools-console before the show, and that gives me this command called sesearch. I'm going to look at all the rules, but I'm really only interested in those rules that contain "container". These are the ones that would affect the process we were looking at, which was running with the type container_init_t. And there are going to be a lot of them. The rules we've got are highly complex, and these rules determine what a container_init_t type process is allowed to do on the system. Just for the container_init_t type, there are 99 rules. If I take away that filter and just look at all the rules in the SELinux policy, there are going to be many more. I don't want to get into all the details about how we write rules and things like that; I think that's below the fold for our discussion today, and we don't have the four days it would take to talk about it all. But real quick, if you're interested in reading a little on your own, let's take a look at this rule right here. What the rule I've highlighted says is that container_init_t type processes are allowed to work with Unix domain datagram sockets, so they can open or use datagram-style local socket connections.
And when they attach to those datagram sockets, they're allowed to accept, append, bind, connect, create, get attributes, get options, do ioctl, lock them, memory-map them, and a variety of other operations. So what we're looking at is: whether it's an allow or deny rule, the context the rule applies to, the object class the rule controls (unix_dgram_socket), and then, when a process of this type tries to act on an object of that class, the actual things it's allowed to do. And we can see maybe a smaller list: here's one where it's allowed only one operation. If container_init_t tries to use a peer (a socket peer, maybe), it can only do recv; that's the only thing it's allowed to do. So again, we don't want to jump into the craziness of SELinux rule manufacturing; in fact, we have a tool to help us do that. That's the second demo, but I'll pause there. Anything else you want to bring up, Randy, before we move on? Well, you know, so far this is familiar-looking stuff if you have delved into SELinux; it's SELinux in the container. Are there any hurdles or gotchas people might need to know about when working in the container environment that they might not have encountered working on a regular OS with SELinux? I think if they're familiar with SELinux on RHEL, then transitioning to SELinux-managed containers, or SELinux interactions between containers, is a pretty straightforward transition. The problem is, I think a lot of people work in an environment where they don't have SELinux: either they're on another Linux, or they're using tools to build and manage their containers that don't interact with SELinux. And so they take a container from their development environment, or from this other thing they're working with, and they push it to a repository; somebody on RHEL downloads it from the repository, tries to run it, and it all comes crashing down on them.
So sometimes this is going to be necessary as a troubleshooting measure for a container that you received from elsewhere. Right, and that's where our second demo gets into it. So let me real quick build essentially the same thing, but in root's local repository this time. We're just going to make another one of these elhttpd containers. Let's see, Jafar is cringing because he's thinking: if only you had put it in a registry, you goon. It's fine, no worries, we are open to everything here. But yeah, what I'd like to mention at some point is that we also provide these kinds of out-of-the-box features for OpenShift, in terms of SELinux rules and configurations. But let's save that for a bit later. And apologies to the people on the stream: I'm a bit sick today, so it's better if I don't show my camera. Well, get better soon. Thank you very much. Here we're going to invoke our container a little bit differently. Number one, you'll notice that I'm doing it as root. We're not going to run rootless for this example, because doing things like adding your own policy requires root. And rather than trying to figure out how to juggle between the root account and user accounts, I just said: for time's sake and simplicity, let's cut out the middleman. All right, so I'm going to run my container, and we're going to store the container ID that gets returned when podman invokes it into this variable called CONTAINER. But this time, instead of just running the Apache container and binding it to a port, we also do a couple of additional things. We provide volume mappings between the container and the host: inside the container, when somebody accesses /home, that will access the host's /home, and they'll be given read-only access to that directory.
And when somebody in the container accesses /var/spool, that will access the host's /var/spool, and the container should be given read-write permission to that directory. All right, so here it is. If I do a podman ps, there's my running container, agitated_visvesvaraya, whatever that is. And so we're running — except it's not operating the way one would expect. Notice there were no errors on the command line there, right? It's running and everything looks fine until you try to work with it a little more closely. So I'm going to connect to the container and have it invoke a bash shell so I can interact with the running container. All right, here we are inside my container. If I look at /home — remember this was set up as a volume pass-through, so I should be able to see the contents of /home on the container host — but no. And remember I said we should also pass through /var/spool. But when I access /var/spool inside the container, which should pass through to the container host's /var/spool with read and write access... no. And so this is actually SELinux rearing its head and stopping the container from doing something that maybe it shouldn't do. So let me get out of my container here. And the first thing: remember, way back at the beginning of the episode, I said the two main things for SELinux are put things where we expect, and know where to look to troubleshoot if there are problems. Right. So here's the problem: we didn't put things where we expect. Why are containers accessing things in /var/spool? Why are containers accessing things in someone's home directory? It's an unusual set of accesses for a container to be given, and so the default SELinux rules say: nope, sorry, buddy, no dice on that. So now that we've broken the first rule of put things where we expect, we need to leverage our understanding of SELinux and use the second rule, which is know where to go when things go wrong.
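The failing invocation described here amounts to something like the following; the variable and image names follow the demo, while the exact flag spellings are assumptions:

```shell
# Read-only pass-through of /home, read-write pass-through of /var/spool.
CONTAINER=$(podman run -d -p 8080:80 \
    -v /home:/home:ro \
    -v /var/spool:/var/spool:rw \
    elhttpd /sbin/init)

# Enter the running container to poke at the mounts.
podman exec -it "$CONTAINER" bash
```

As an aside: for a directory dedicated to one container, appending :Z (or :z for a shared volume), e.g. `-v /srv/data:/data:Z`, asks podman to relabel the content so SELinux allows the access. Relabeling shared system paths like /home or /var/spool is not appropriate, which is why the demo reaches for a custom policy module instead.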
So I'm going to use tmux here and create two panes, and in one of them I'm going to go back into that bash shell. Whoops, sorry, tmux doesn't like me; one second. Right. I'm going to kill off this container, then start tmux and run the container again, because that way my variable will actually work — or I guess I could have exported it. So we're going to invoke another instance of this container, and I'm going to connect to it. In my other tmux pane, I'm still on the container host, and I'm going to tail the /var/log/messages file. All right, so we see some error messages from beforehand, but I'm going to put some whitespace in there so we can pick out the fresh data. And I'm going to go back into my container. I'll do that ls again — ls /home, which according to the container invocation should work; we know it's not working. And then we'll take a look at /var/spool, which according to the container invocation should also work, but we know it's not working. And as I do these things, you'll notice that some stuff lands in the /var/log/messages file, and that stuff is specific SELinux error messages. Right. And it's telling me right here — this is probably the most interesting message, straight to the point — SELinux is preventing our container from read access on the directory /var/spool. And then it gives us some stuff we could do if we wanted to resolve that with audit2allow. So we know that it's an SELinux problem, because we're doing things and we're seeing SELinux denials. All right, so to further drive that point home, I'm going to go back to being root on my host, and I'm going to turn off SELinux. Which is, you know, something that we don't normally do, kids; don't try this at home. Well, no, I mean, it's a valid troubleshooting methodology. Just kidding. You just wouldn't want to do this on a production system.
...where you might have other things that, if you turn off SELinux now, expose something you didn't mean to on the system. But I want to have some fun. Okay. But yes, Scott is absolutely correct. If this were something you should never do, you wouldn't be able to do it; and you will need to do it sometimes, for exactly the kind of thing Scott's doing now, as a diagnostic. Well, okay, let me confirm that it is in fact what I think it is, which is SELinux. This is the easiest way to do that. I'm on the edge of my seat now, Scott. Well, my box became unresponsive; let me refresh this real quick. All right. You know, the box just got too busy. One of the magic things of tmux, right: you can reattach your session in case you lose it. All right, so let's try this. So we turned off SELinux, and now, if I ls /home, as though by magic, I get content. Right. If I touch a file in /var/spool, as though by magic, it's there. And we can verify that it's actually working by going back up to the host — and there's that file I just created from within my container. But if I re-enable SELinux... Well, wait, wait, before you do that, let's do a listing with the SELinux attributes and see what we get. Yep. Oh, there it is. Yep. All right, so let's re-enable SELinux — set it back to enforcing mode. And when we turn it back to enforcing mode, I expect that if I try to look at /home, it fails again. Right. So we've diagnosed that SELinux really is the problem. We saw the error messages that were happening in context; then when we turned it off, all of our problems were magically solved; we turned it back on, and oh gosh, it was terrible again. And at this point, the less seasoned administrator would say: I'll just turn it off. Right. But we're not less seasoned administrators; we're people who know what we're doing.
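The diagnose-then-restore cycle just demonstrated is short. Note that setenforce 0 switches to permissive mode (denials logged but not enforced) rather than fully disabling SELinux:

```shell
# In one terminal, watch for AVC denials while reproducing the failure:
tail -f /var/log/messages
# (equivalently: ausearch -m AVC -ts recent)

# Switch to permissive mode -- for diagnosis only.
setenforce 0
getenforce        # -> Permissive
# ...retry the ls /home and the /var/spool write; they now succeed...

# Return to enforcing mode once the diagnosis is done.
setenforce 1
```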
So what we're going to do instead is create a policy extension that will allow our container to actually do the things we intended it to do. So I'm back on my host over here, and in my top pane I'm going to install a package. In North America, you'd probably pronounce it "YOU-dih-kah", but a Czech engineer would pronounce it "oo-DEET-sa": udica, which is apparently Czech for fishing rod, because if you teach a man to fish, you feed him for a lifetime. Oh my. Yeah. Fun times. Your Czech trivia of the day. All right. So I've installed this udica tool, and what we're going to do is create an SELinux extension to apply to this container. So I'm going to pull the JSON definition for the running container, and I'm going to pass that to udica to make a special policy extension. So we do a podman inspect, and that returns this container JSON, which has all these definitions around what this thing does. But what's important is that this definition includes information like the fact that this container was given read-only pass-through access to /home, and read-write pass-through access to /var/spool. That's the part of this JSON we care about. It also has things like what ports it was attached to and some other stuff. So next, we call udica and tell it to make an SELinux policy extension based on this container, and it creates a special type in the SELinux policy for this kind of container. All right, it has successfully done that, and it says in the output: hey, don't forget to load your extension into the SELinux policy.
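Assembled from the narration, the generate, load, and relaunch cycle looks roughly like this. Here my_container stands in for whatever module name was used on stream, and the template list in the semodule command comes from udica's own printed instructions, which vary with what the container needs:

```shell
dnf -y install udica

# Feed the running container's JSON definition to udica to generate a module.
podman inspect "$CONTAINER" | udica my_container

# udica's output tells you to load the module, roughly:
semodule -i my_container.cil /usr/share/udica/templates/base_container.cil

# Relaunch with the generated type applied to the container's processes.
podman run -d -p 8080:80 \
    -v /home:/home:ro -v /var/spool:/var/spool:rw \
    --security-opt label=type:my_container.process \
    elhttpd /sbin/init

# Verify what the module permits: read-only verbs against home content,
# read-write verbs (write, add_name) against /var/spool content.
sesearch --allow -s my_container.process -t user_home_t
sesearch --allow -s my_container.process -t var_spool_t
```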
And then also, when you start your container, you're going to have to tell the container to identify itself as this special kind of process, so that it's covered under the rule set you just created. All right, let's go ahead and do what it suggests, which is insert our policy extension. And then we're going to have to run our container again, but this time with a couple of extra options added to make sure it runs as this type. So let me kill off the one we're running — two of them, actually, so let's kill off both. Now, for the grand finale: we invoke our container again, with the same volume pass-throughs that we did before, but notice that we also add these two additional parameters. We're using this extra security option, and we're telling it the label to apply to this container's processes. So there we go. If I do a podman ps — yes, reverent-something is our magical special container here. All right, let me go into it. By the way, the auto-generated container names are especially wonderful this morning. So here I am in my container. Let me go back up to my other pane so we can verify that there is nothing up my sleeve: we're enforcing, and we're using the same policy as before. So if I ls /home, it works. And now if I touch a file in /var/spool, container-file-two, it works. So there's the file that was created from my container. Again, give me the long listing — okay, there's my container-file-two right there. Okay. And so the magic that's happening here is those additional policy extensions. So let me get out of my container, and we'll go back to full screen so we can see a little more easily. So if I do a sesearch — hey, remember we used this tool earlier to look for things that were container_init_t type and the rules that went along with them — but now we're going to look for rules that apply to our new scottapp process type.
Hmm, that didn't give me what I wanted; let's change it a little and add some more constraints. Let's add a target type of user_home_t and see if that gives me what I want. There we go. So this rule right here got added by loading that udica-generated extension, and it says that things running as the scottapp.process type are allowed access to user-home-directory type files. And when they access user-home-directory type files, these are the things they're allowed to do: they can get attributes, so they can do things like an ls; they can work with ioctl; they can lock files; they can open files; they can read contents; and they can search through directories. But notice what's missing from that list: write. So I was able to look at my volume pass-through from my container, but the volume pass-through was read-only, and udica set up the SELinux rules to permit only the read-only access to that directory that we wanted. And likewise, if we look for... so, right here, same thing: scottapp.process type processes are given access to var_spool_t type directories, and here are the things they can do. add_name means they can do things like create new files in there, and there's also a write, so I can actually put data into files in that directory. So that's what we did with the udica thing. Super-complex SELinux became magically delicious by having a container tool, udica, create some container-specific rules for our stuff. And now we can invoke any container with the scottapp type, and it will be covered under these rules that we've added into the policy. Well, I'm liking it. Yeah, I like the story; that's a great name. So, Jafar, see if you can drag yourself out of your sick bed for a moment and share with us: when you're working within the OpenShift environment, are these...
Things that you have to consider there as well, or are they things that, in a sense, OpenShift has already taken care of for you? Yes, that's a good question, and actually one of the great benefits of using RHEL as a foundation for OpenShift. And when I say RHEL, it's in the broad sense, because we used to have RHEL underneath, and with OpenShift 4 we switched to CoreOS, which basically shares the same kernel as RHEL. And we do enforce SELinux by default, which prevents a lot of bad things from happening. I think Stephanie has shared the link in the chat. So one of the great things is that all of those SELinux rules are already implemented for OpenShift too, so containers can run safely and not have too many capabilities when running on OpenShift by default. We have what we call Security Context Constraints, SCCs, which are ways for us to determine, basically, what containers are allowed to do by default. We do provide different built-in types of these, and I'm going to share a link to the OpenShift documentation that explains what they do. So yes, it does provide all of those very rich policies, already implemented for OpenShift. And as an administrator, you can change the attributes and define your own SCCs and such things, and they also control what happens with the SELinux context and so on. So from a security standpoint, that's one of the great things OpenShift provides, and I'll invite you to check in more depth what it does; I'm going to share a link to the OpenShift documentation. Well, great. So, back to you, Scott: some closing thoughts, perhaps, on the world of SELinux and containers? So, as Jafar pointed out, if you're using OpenShift, all this stuff just kind of happens. And just like we manage the policies for RHEL in a way that
mostly the services that work with RHEL just kind of work, the same thing is true for OpenShift: we take care of that automatically in the product. So when you have to get into the bowels of SELinux is when you're using RHEL natively as a container host and you have more eclectic container activities that you're trying to do, where you're trying to pass through host accesses or host resources to your container. That's where you're probably going to run into some bumpiness and have to use some of these SELinux troubleshooting techniques that we talked about today, and tools like udica to help smooth over those problems. You could also use audit2allow, which is what the SELinux error messages suggested; that would work too, but there you would make the allowance for all containers, not just ones of the specific type, like we did with the udica policy extension that we added. No, Stewart, no. Man, somebody... there's always going to be somebody. There's always going to be somebody saying "just setenforce 0". No. You know, actually, in the meeting notes there was one other thing that was mentioned, and that was rootless containers. Are there some unique challenges associated with those? Yes. There's a reason my second demo was done as root: because I couldn't figure out how to make it work for rootless users. And that's mostly because — can rootless users extend the SELinux policy? Not really. Can rootless users do things when they invoke their containers to make sure they have the right policy set? I was having trouble with that. It's not to say that it can't be done; I'm just not smart enough to do it, apparently, or it's something that needs to be further enhanced. There are a couple of weird things that happen with rootless containers, and I know the podman team specifically has been working feverishly to try to resolve some of those things
around things like namespaces and some of that stuff, and they're getting better — incrementally better. And you should see some changes in Podman 4, which fixes some of those, though not all of them. And there's actually a doc — one second, let me look it up. Sir, I have dropped it in the chat, if it's the one that was in the meeting notes. It is. So that one is an excellent article about rootless containers from Dan Walsh. Yeah. The other thing I wanted to put in: Dan keeps a running tally, in the community GitHub for podman, of all of the annoyances of rootless that they want to resolve but have not yet resolved. Yeah, this is it right here, so I'll put this in the chat too, maybe. So these are the things that he knows about that don't quite work the way one would like, and that they're going to work on resolving. So when I'm doing rootless stuff and I run into weird problems, I always check here first to make sure that it's not a known weird problem — because, yeah, there are still some in there. Yep. Well, maybe that's something we can consider for a future episode. But for now, unless we have some additional questions or comments... We had a lot of comments today. Your questions were comments, like "setenforce 0". No. No, I say. Doesn't look like we have any more. So I think with that, we can call it another one. So everybody, please do remember to like, subscribe, and share. Let everybody know we're here, approximately every other Wednesday morning, U.S. Eastern time. And I guess I will say thank you for joining in, gentlemen; thank you for the show today. Bye bye. Thank you, everyone. Bye bye.