Hello everyone, welcome to, I guess, the first talk of the day after the keynote. Not the first talk for you, but it's our first talk of the day. All right, so I'm Urvashi Mohnani, I'm a software engineer on the container runtimes team. I work on all the container tools that you know and love: CRI-O, Podman, Buildah, Skopeo. And I'm Sally. I work on OpenShift. I've been on a few different teams; right now I'm on what's called the workloads team, with the CLI and controllers. I worked on the containers team in the past, and Urvashi and I just love giving talks together. We love traveling around, and it's fun, except for last night, when at 2 a.m. at the bar Urvashi was spinning up VMs and I was trying to get a cluster going. But we're here. Last year we gave a talk on the container tools, and this year we thought we'd give an update on new features and new things going on with Podman, Skopeo, Buildah, and CRI-O.

So we would like to introduce you to these superheroes of the container world, the container commandos. Hopefully one of them has already saved the day for you. Over the past few years we have been developing these container tools, breaking them down so each focuses on a different area of the container space, trying to follow the Unix philosophy: do one thing and do it really well.

Podman is the general-purpose tool, pretty much one-to-one with the Docker CLI; you can do anything with Podman that you can with Docker. It works a little differently under the hood, but it's the all-purpose way to run containers on your local system. Skopeo is a tool for managing images on remote registries: you can inspect an image on a remote registry without actually pulling it down to your system, move images from one remote registry to another, even move between arches, and Nalin has a talk later about that.
So that should be really interesting. That's Skopeo, and then Buildah is our tool developed just for building images, and CRI-O is for running containers in production. CRI-O is meant for Kubernetes; you're not really meant to run CRI-O on your local system, that's what Podman is for. So CRI-O is locked down, fewer permissions, very opinionated for OpenShift and Kubernetes. And then of course there's OpenShift too.

Tying back to making them work well together: podman build actually uses Buildah under the hood to run its builds, and in OpenShift all the S2I builds that run are using Buildah, and the clusters are spun up with CRI-O. It's just over the past few years that Red Hat has been developing these. So we're going to dive into each of these tools and talk about the new development that has happened since last year, some of the cool features. I think our talk is going to be short; Urvashi doesn't think so. If it is short, we can open up a discussion at the end. We have some container experts in the audience, so we'll see where it goes.

So Buildah, like I said, is for building container images, and the tool really encourages best practices for building securely. With Buildah, building minimal images is very easy, and that's the key to security: the more you have in your image, the more that can go wrong. Buildah has this command, buildah from scratch, which sets up a working container with absolutely nothing in it. It just sets up the cgroups and the namespaces, and then you can set up a mount point from your local system and do things like move binaries into the working container. You can use your host system's package manager to install something, rather than baking that into the image, and then once you're happy with what's in your working container you can just commit it, and there you have an image with exactly what you want in it and nothing else. That's the idea with minimal images and building securely.

If you went to Dan Walsh's talk yesterday, he talked about the three bears, the papa, mama, and baby bear. With Buildah we have that sweet spot between performance and security. We give users the option to make it as secure as they want, but that will affect their performance, and if they make it as fast as possible, that will affect the security. So we will also demo a way of hitting that sweet spot in the middle, where you're both secure and you're not giving up much on the performance side. They probably weren't at Dan's talk yesterday, because it was at the same time as our talk, and you were probably all at our talk.

And as we said, OpenShift 4.x uses Buildah for all its builds, and it's the default build tool. And building without root: I use Buildah and Podman and I just forget that I'm running rootless. I always run rootless until I get stopped, and I get stopped when I'm trying to do something that requires root on my system, because rootless works up to a certain point. Just because you're running a rootless container doesn't mean you can go and do something on your local host like mount /etc; you can't do that rootless. So my recommendation would be: just run rootless until you can't, and then for those special circumstances go with root.

Demos. We have some live demos for you. A lot can go wrong with this talk, like really a lot, so if it goes smoothly we deserve something. Hopefully the terminal is big enough for the people in the back. All right, let's get started.
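As a quick sketch before the demos, the from-scratch workflow described above boils down to something like this. This assumes buildah is installed and you're running as root; rootless use would need the mount step wrapped in buildah unshare, and the package and image names here are just examples:

```shell
# Start a working container with absolutely nothing in it
ctr=$(buildah from scratch)

# Mount its filesystem so we can populate it from the host
mnt=$(buildah mount "$ctr")

# Use the host's package manager to install only what we need,
# instead of baking a package manager into the image itself
dnf install --installroot "$mnt" --releasever 31 \
    --setopt install_weak_deps=false -y coreutils

# Set the default command and commit the result as an image
buildah config --cmd /bin/sh "$ctr"
buildah umount "$ctr"
buildah commit "$ctr" minimal-image
```

The point is that the final image contains exactly what you installed and nothing else.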
Oh, one second. That's not the beginning, that is not the beginning. So the beginning: we're going to do some demos with Buildah. Buildah can use the C preprocessor, cpp, on Dockerfiles. In a minute you'll see it, but say you need to build a slew of images for different distros. You'd have a Dockerfile where everything is the same except for a few lines, like on Ubuntu you'd install things with apt-get, and with Fedora you need dnf. So you have one Dockerfile where everything is the same, but a few lines differ depending on the distro. You can use cpp directives like #define and #include, have a single Dockerfile, and Buildah handles the replacement, using the #ifdefs to create each image. So here, if you want Ubuntu, you just pass ubuntu.in, and it uses the same Dockerfile but makes that one-line replacement. That created my Ubuntu image here. We could have done something cooler, like install something, but you know, conference internet. This is especially useful when you want to build for multiple distros and you have, say, twenty identical lines and only one or two lines that differ. And there you go, my Fedora image is built.

All right, so this is what I was talking about with that sweet spot between performance and security. The most secure way is to run your build process completely isolated from your host system, that is, building it inside a container. I'm doing that with Podman here, using our buildah stable image, which is an image that has all the Buildah functionality in it. You can just pull that down, and inside the container you can use Buildah as you would on your machine. As you can see, the time it took was about five seconds. This is the most secure way, the most isolated.
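Roughly, the three variants in this comparison look like the following. The image name quay.io/buildah/stable is from the demo; the build-context mount, the tag, and the paths are assumptions, the third variant relies on the stable image being configured to use /var/lib/shared as an additional image store, and in practice you may also need flags like --device /dev/fuse, so treat this as a sketch rather than exact commands:

```shell
# 1. Most isolated: the build runs fully inside the container,
#    pulling base images into the container's own storage.
podman run --rm -v "$PWD":/src -w /src \
    quay.io/buildah/stable buildah bud -t demo .

# 2. Fastest, least isolated: share the host's container storage
#    read-write and disable SELinux label separation.
podman run --rm --security-opt label=disable \
    -v /var/lib/containers:/var/lib/containers \
    -v "$PWD":/src -w /src \
    quay.io/buildah/stable buildah bud -t demo .

# 3. The sweet spot: expose the host's image store read-only as an
#    additional store, so cached images are reused but the build
#    can never write back to the host's storage.
podman run --rm \
    -v /var/lib/containers/storage:/var/lib/shared:ro \
    -v "$PWD":/src -w /src \
    quay.io/buildah/stable buildah bud -t demo .
```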
You're not sharing anything, you're not exposing anything, and there is no daemon, so it's more secure. And then this is the least secure way: basically you're volume mounting /var/lib/containers on your host, which is where all your container storage is, into the container, and you're disabling the security options. This run was actually slower than the first one, I don't know why, but normally this is the fastest way to do it, because you've already pulled the images down onto your host machine and you're just volume mounting them, so it doesn't need to reach out to the registry again to pull them down, and it can start pretty much immediately. And the last one is the sweet spot between the two, where you're using an additional image store: you mount the storage from your host into a different location in the container, but you also set it to be read-only. So if the process in the container tries to write to that path, it won't be able to, and it won't affect anything on your host. And it doesn't compromise on performance either; as you can see, it was faster.

So, Buildah is now used in OpenShift as of OpenShift 4. In 3.11 CRI-O was the runtime engine in OpenShift, I believe that's correct, but we still required the Docker daemon on every node, because our builds were dependent on the Docker daemon. One thing that differentiates OpenShift from Kubernetes is that you can point OpenShift at source code that isn't containerized or doesn't even have a Dockerfile. We have two main strategies, the Docker build strategy and the source-to-image strategy, that can interpret your source code from GitHub, and provided you use best practices for whatever language you're using, we have builder images for things like Python, Ruby, or Node.js that know how to assemble a containerized application from source code. In OpenShift 3 we used Docker for our builder pods.
Well, the limitation of that was that you had the Docker daemon exposed in your builder pod. A regular user can't exec into those pods, so it was pretty secure, but theoretically that introduced security issues. So in OpenShift 4 we use Buildah inside the builder pod. As a user you don't really notice the difference, and that was our goal, but it's more secure, and we don't require the Docker daemon on the nodes anymore.

Okay, so I want to show this. Here we have OpenShift 4 running, and I have a sample Django application; it's one of our source-to-image samples, you can go on GitHub and find it, just search for the OpenShift Django application. So I launched it, and we're both sharing a kubeconfig. I'm going to push a change: earlier I set up a GitHub webhook with OpenShift. You can create a webhook through the GitHub settings so that every time you push a change to your GitHub repository, it kicks off a new build. I want to do that so you can see the build pod pop up, and through the logs you can tell that it's Buildah working. So here I'm going to push to master, and hopefully a builder pod will pop up. It did this morning. It's still pushing... it's not. Yeah, we might have to come back to this. I am on Wi-Fi. Do you want the wired internet? I'd need my adapter. Never mind, X1 Carbon. Everything up to date? Okay. Did I make a change? Hold on, just give me one second. No, I'm making a change. She's making a change. One file changed, eight insertions, git push origin master. All right, should we move on? Live demos. Okay, so that was Buildah; we will get back to the S2I builds demo.
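For context, an S2I build with a GitHub webhook trigger is declared in a BuildConfig roughly like this. The names, the sample repo, and the secret value are illustrative, not the exact demo config:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: django-sample
spec:
  source:
    git:
      uri: https://github.com/sclorg/django-ex   # sample Django repo
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.6
  output:
    to:
      kind: ImageStreamTag
      name: django-sample:latest
  triggers:
  - type: GitHub
    github:
      secret: <webhook-secret>   # placeholder; shared with GitHub's webhook settings
```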
Yeah, okay. So the next tool we're talking about is Podman, which is the all-in-one CLI tool. You can do everything from running containers to building images, and now you can also spin up pods with it, hence the name, pod manager. In the past year, the new thing with Podman is that we have added a lot of the podman pod commands, so now you can easily spin up a pod and sort of replicate what it's like in a Kubernetes cluster.

A pretty cool feature, and I think one of my favorite commands, is podman generate kube. After you have played around locally with your container and you're happy with it, and you want to run it in a Kubernetes or OpenShift cluster, you'd normally have to sit and think about how to put it into a YAML file. There's a lot more that goes into it than just doing a podman run command with a bunch of flags. So to make life easier for developers and users, we created podman generate kube: you pass it a container and it will automatically generate the kube YAML for you. Then you can just plug that into your cluster and run the container, or the pod, or however you configured it, very easily in the cluster.

The next one is podman generate systemd, a similar concept: it creates a systemd service file for your container. The idea behind this was that you can easily share your Podman containers and commands between machines. Let's say on my machine I'm happy with my container, it's running a simple small web service or whatever, and now I want to share it with a hundred different computers. All they need is the Podman CLI; I can generate this unit file and pass it on to them, and they can just start the unit file and it will spin up the container and start the processes that go with it. Did I miss anything? No, I think you're good.
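Condensed, the two generate workflows just described look something like this; the container and file names are illustrative:

```shell
# Turn a running container or pod into Kubernetes YAML
podman generate kube mywebserver > mywebserver.yaml
oc apply -f mywebserver.yaml      # or: kubectl apply -f mywebserver.yaml

# Turn the same container into a systemd service on another machine
podman generate systemd mywebserver > container-mywebserver.service
cp container-mywebserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl start container-mywebserver.service
```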
Yeah, and Podman is the default container CLI in RHEL 8. Podman also supports cgroups v2. Fedora 31 defaults to cgroups v2 and crun, and since we're pushing cgroups v2 into the ecosystem, we gave Podman support for it: on Fedora 31, with cgroups v2 enabled, you can use it.

So let's just go back to the console. Even though the build didn't pop up, which is really strange, I don't understand it, I can go into the pods and show you the logs from before. Completed, there we go. So just before we came into the room... where's the builder pod? There it is. These are the logs from the Buildah build. Our source-to-image flow uses an assemble script, and in OpenShift 3 it used the assemble script to just build the image, but in OpenShift 4 we've streamlined everything, so our Docker builds and our source-to-image builds are pretty much the same: they both produce a Dockerfile. What is supposed to happen... did you push now? Yeah. What the heck, I don't understand. Anyway, I can trigger it manually. It's not that interesting anyway; it's just fun to watch the blue circle pop up, but I guess we don't get to see it today. We can go back; it literally worked like ten minutes ago. Okay, all right, so back to Podman. No, we're good, let's go to demo time. A user namespace in a user namespace in a user namespace, all the memes. You're going to talk about Podman pods? Yeah, all right. So I'm just going to demo a few of the podman pod commands here.
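The commands in this part of the demo are roughly the following; the pod name is illustrative:

```shell
# Create an empty pod (it starts out with just the infra container)
podman pod create --name demo-pod

# List pods; the container count starts at 1 (the infra container)
podman pod list

# Run a container inside the pod
podman run -d --pod demo-pod alpine top

# Now the pod reports two containers
podman pod ps
podman ps --all --pod
```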
You have podman pod create, which creates a pod for you. When you do podman pod list you can see it was created; it gave it a generated name, and you can see the number of containers. Right now it just has the infra container, which is why it shows one container. And this just shows you more information about the pod. Now I'm going to add a container to the pod: I'm going to run an Alpine image with the top command, and when I do the ps you can see the number of containers has gone up by one, so now I have two containers running inside this pod. That's pretty much how pods work. And when I do ps --all, yeah, there's the infra container. This is my Alpine image, and it tells you which pod it's running in, under the pod name here. That's podman pod; there's more to it, that was just a small highlight.

So let's do the generate kube one now. Like Urvashi said, it's a pain in the butt to write your YAML for Kubernetes objects, but if you have a pod running you can run podman generate kube, and that gives you the YAML that you can then apply to create a pod in OpenShift or Kubernetes. Here, podman container runlabel is a command that can read a run label from an image and start that image with whatever the label tells it to. So we're using podman container runlabel, running that in the background, and from that let's generate some YAML. It's just a simple web server running in the background, and there's proof that it's running. podman generate kube is pretty easy: there's the help menu, you can pass it a file to save to, and then you can use oc apply with it. This YAML would not be easy for me to whip up without a lot of googling.
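For reference, the YAML that podman generate kube emits is shaped roughly like this. It's trimmed, the names and ports are illustrative, and the exact fields vary by Podman version:

```yaml
# Generation of Kubernetes YAML is still under development!
apiVersion: v1
kind: Pod
metadata:
  name: mywebserver-pod        # illustrative
  labels:
    app: mywebserver
spec:
  containers:
  - name: mywebserver
    image: registry.example.com/mywebserver:latest   # illustrative
    ports:
    - containerPort: 8080
      hostPort: 8080
      protocol: TCP
```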
So I'm glad that if I needed to run a pod, I have this tool, because it makes it way easier. Save it to a file; that file you just saw is saved to my local system, and now I can use it against that same cluster we have up. Oh, actually we could... well, we don't need to. I'm curling, so... error. What? What does it say? Something about a security error. Are you on the VPN? I'm on my... I didn't hear that. I couldn't hear you. Yeah, but this worked five minutes ago. Is there something wrong with that YAML? No, I think it's something with the cluster. All right, we're skipping this; we'll try it again at the end. The whole script just exited out. Not a good day. You can continue talking about what you would have seen, all the Podman magic. Yeah, I'm just fixing the script, that's all. I've actually never seen that happen. All right, never mind that. Okay, my bad.

So, the next one is generate systemd. Just like with podman generate kube, and just like OpenShift objects and YAML, systemd unit files aren't fun to write manually either. I asked Dan the other day what would be a good use case for running a container under systemd, and he said edge devices, say windmills or oil rigs. You might have a bunch of windmills that all need to run one single service and don't have very good connectivity. So you can generate a systemd unit file to just run that single service; you don't need OpenShift or Kubernetes for that, but with systemd you can do it. So here I've created a pod running in the background, and from that pod I'm going to create a unit file, and there it spits out the unit file. Again, I would have to google this, I don't know how to write unit files, so it's very convenient.
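A generated unit file looks roughly like this; the names, IDs, and paths are illustrative, and the exact output differs between Podman versions:

```ini
[Unit]
Description=Podman container-mypod.service
Documentation=man:podman-generate-systemd(1)

[Service]
Restart=on-failure
# The generated file drives an existing container or pod by name/ID
ExecStart=/usr/bin/podman start mypod
ExecStop=/usr/bin/podman stop -t 10 mypod
KillMode=none
Type=forking
PIDFile=/run/containers/storage/overlay-containers/<id>/userdata/conmon.pid

[Install]
WantedBy=multi-user.target
```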
I saved it to the correct directory on my host, and I can now start that service. It's just a pod running top, and you can see... it's not really failed, if you go down to the bottom, it started successfully. Those are the journalctl logs. Now I can stop it, and you can see if I go and podman ps, the pod has stopped. If I start the service again, you can see the pod has popped up again. And here are the top logs from inside that container. So it's really easy to run a container through a systemd service.

So, to show one of the main advantages of cgroups v2: you can run rootless Podman and still be able to configure your cgroup resources, which is not possible with cgroups v1. I'm going to try that with v1 first, and you get this error that you can't do it. Now I'm on a Fedora 31 machine and I'm going to run this with cgroups v2. On Fedora 31, by default, the runtime in use is crun, which is basically runc but rewritten in C, and it has cgroups v2 support. The reason cgroups v2 hasn't been adopted very widely yet is that it doesn't work with runc. There's some work being done; Giuseppe is not here, but he rewrote runc in C, that's crun, and he's also working to make runc work with cgroups v2. So with Podman here you can see the runtime I'm using is crun, and my container created successfully. All right, I think that was it for Podman. Back to the slides.

Skopeo, again, is our tool for inspecting images on remote registries, moving them around, copying from one storage to another, or even from one arch to another, and we just have a simple demo for that. So, Skopeo is pretty stable.
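The basic Skopeo operations look like this; the image names are just examples:

```shell
# Inspect an image's metadata straight off the registry,
# without pulling it down to local storage
skopeo inspect docker://docker.io/library/alpine:latest

# Copy an image between two registries without keeping a local copy
skopeo copy docker://docker.io/library/alpine:latest \
            docker://registry.example.com/mirror/alpine:latest

# Copy from Docker's local storage into Podman's storage
skopeo copy docker-daemon:ubuntu:latest \
            containers-storage:localhost/ubuntu-demo
```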
Not much has changed in the last year, but one thing that has is that now you can copy images across arches: if you're on x86_64 you can get an arm image and copy it over. That's a pretty cool feature, and if you want to learn how you can do builds on different arches, go to Nalin's talk today at 2:30; he will do a deep dive into that. Skopeo was our first container commando, the first breakoff, the point where we started dividing these container technologies into different aspects. One demo we think is pretty cool is copying images over. With Skopeo you don't need to have the image locally on your machine: if you have it in registry A and you want it in registry B, you can just do skopeo copy and it will copy it over without downloading it locally. Here I'm copying an image from the storage that Docker uses to the storage that Podman uses: I copy the Ubuntu image over, name it ubuntu-demo, and in podman images, as you can see, that's where it popped up. Pretty simple, nothing complex about it. And that's what we have for Skopeo.

All right, there you go. Okay, so the last tool is CRI-O.
So now we have built our images, we have tested them locally, we have pushed them to our registries, internal, external, etc. The next thing you would want to do is probably run them in a production cluster, and that's where CRI-O comes in. CRI-O is a lightweight daemon that implements the Kubernetes Container Runtime Interface and runs containers in a Kubernetes cluster. It is made specifically for Kubernetes, as Sally mentioned earlier, so whenever Kubernetes makes changes, we match them, and whatever version Kubernetes is on, CRI-O matches that version: if Kubernetes is 1.17, CRI-O 1.17 works with it. It's a way to make sure you don't have to keep tables of which version of CRI-O works with which version of Kubernetes; it's pretty simple. CRI-O joined the CNCF incubator last year, around March I believe, so we're an incubation project in the CNCF now. It is the only runtime in OpenShift 4.x clusters. CRI-O is very security focused, so we will talk a bit about those features here. Urvashi is a maintainer of CRI-O, and the only talk that we haven't given together is when she had to go give a keynote at QCon.

Anything else? Well, FIPS mode support: CRI-O has always had FIPS mode support, and now we have FIPS mode in OpenShift as well. Another one is registry mirroring support. Let's say you want to work in a disconnected environment and you don't want your cluster to talk to the internet: you can mirror all the images from the external registry to a private registry that your cluster does have access to, and CRI-O and all our tools will know to fall back to the mirror registry, because they won't be able to talk to the internet. We will demo that as well.

All right, more demos. The first one is CRI-O with mirrors. I'm not running these commands, I'm just showing you what they will be. The first one is setting up a local registry
So that's my private registry and then using scope your copy. I'm copying over an image from My repository on Docker hub to my local registry And this is the digest so the thing about mirroring is that you have to reference the images by digest because this guarantees that The image you're getting from mirrored registries actually the image you want So we have a common file call at ccontainers registries.conf this sort of is where you add your Logic of what is mirrored to what so here under registry my I'm saying my primary Registry is this one repos like registry slash repository is the Docker hub one and then I'm mirroring it to my local host one here So basically when you try to pull an image with cryo or podman or whatever It will actually try your mirror first because usually it's faster to get because that's usually what you want if you have a mirror set up and if You can have multiple mirror setup So I will go through all of them and the first one it hits and gets the correct image I will pull that in if it doesn't that it finally falls back to the external one So I'm going to try to pull my alpine with docker pull because docker doesn't have the Better registry, so I'll just show you that my My image actually doesn't exist on the Docker hub so now But I can still pull it when I try pulling it it works because I have already mirrored it over to my private registry So cryo knew to fall back to that and pull it from there So a lot of our customers require disconnected environments where they don't have access to The internet so this is this came out of that. Yeah requirement Yeah, another one is it's a work in progress Basically, so we're adding the support to drop the infre container when you spend up containers with cryo So basically there's some cases where you don't need it. Well, the infre container is that that small It's con mon that small container that is watching For this the standard error the exit code. 
Yeah, that's always running with any pod man container Yeah So basically if your server like if your server is can manage the namespace you you basically said that This this option here manage names is life cycle so it can set the life cycle So whenever your processes like for example exit you don't need a reaper to come and reap it It'll automatically cure it out for you and everything But this only works if you are sharing a pit namespace at the node level So in Kubernetes if you have this set you don't need the infre container. Yeah in any environment Yeah, so I've said that and this is my body ammo. I am setting my pit to node level here, which is one This just increases the efficiency. Yeah a bit. So it's still a work in progress. We're still Working on it. So now when I do sudo run C list You will see that There's no infre container that was created. There's no docs on this yet because last night at midnight We were like, what do we say about the infre container? And we had to go and sift through the pull request the code to the comments to be like, okay This is what it does. Yeah, and then now the second one. I I'm not sharing at a node level anymore Set of the pod level and now when I run it, you can see an infre container was created So there is surprise putting in the automatic code to like drop infre containers were not needed Yeah, and you don't you don't need it when you don't need it because the It's being cleaned up by Cube kubelet. What cleans it up? 
Your server... it gets managed by something else. All right, so one more thing we wanted to show: we mentioned that CRI-O has a pretty big focus on security. We have this crio.conf file here, which gives users the ability to easily change things regarding security. For example, we have our capabilities list here. Capabilities are like parcels of sudo power; in Podman and Docker there's a long list of capabilities enabled by default, but in CRI-O it's a very short list, and we would recommend, if you know what workload you have, going in and removing any capabilities that you don't need. You can easily go in and drop some, add some, etc. Another one is read-only mode. In production it's always ideal to run your containers in read-only mode, so if a bad actor actually gets into a container, they won't be able to change anything, because they can't write to any paths in there; you should only be able to write to your volume mounts and your tmpfs. When you set this to true, all your containers run in read-only mode. And seccomp filtering, yeah, you can set that up too, you can set a seccomp profile. So there's a bunch of different things you can easily configure to make your containers run as securely as possible.

So, just for the record, I ran the generate kube script locally and it did pop up a pod, if you want to go and check it out. Oh, it did? Yeah. Go to Projects, because it's in a test project or something. So here's OpenShift 4.3; when I went to log in it was a little different, because 4.3 was just released, so there are some updates to the console that I noticed. Not test app, it would be in a different project, it's something else.
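For reference, the crio.conf security settings just described look roughly like this. The values are illustrative and the defaults vary by CRI-O version:

```toml
[crio.runtime]
# Short default capability list; drop anything your workload doesn't need
default_capabilities = [
    "CHOWN",
    "DAC_OVERRIDE",
    "FSETID",
    "FOWNER",
    "NET_BIND_SERVICE",
    "SETGID",
    "SETUID",
]

# Run all containers with a read-only root filesystem;
# writes only land in volume mounts and tmpfs
read_only = true

# Seccomp profile applied to containers
seccomp_profile = "/etc/crio/seccomp.json"
```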
It's in another project... there it is. So anyway, podman generate kube does work; I oc apply'd it, and the pod spun up here. Yeah, you have the pod running.

All right, so you want to talk about the housekeeping? Yes, this is very important. The past few weeks I've been building a lot of images, and it's a big image, a builder image that has a ton of stuff in it, and I always run Podman rootless, like I said, so my home directory all of a sudden was out of space, and I had to make sure I was cleaning up my images. With Podman, if you're running as root, your storage is located in /var/lib/containers; if you're running rootless, it's located in your home directory. So it's important to run both podman system prune and sudo podman system prune, because they won't delete each other's storage; you need to do both when you're cleaning things up, and the volume prune too. But if you want to be sure to just clear everything out, it's not going to break anything if you remove ~/.local/share/containers and /var/lib/containers. I actually usually just do that because it's easier, and it doesn't break anything. I actually had a multi-stage build where one of the layers was like eight gigs, and there were a few of those, and wow. Yeah, we took up all the time with our broken script.

So these are some resources you can check out. We have a pretty cool coloring book on all these projects, called the Container Commandos coloring book, and you can find it at that GitHub link. The demo script and slides can also be found at that link. And do check out the other talks we have on Podman and Buildah: I believe Matt Heon is giving a talk today, more of a deep dive on Podman, and Nalin is giving a talk on the Buildah stuff with different architectures, so do check those out. Basically the new exciting stuff is cgroups v2 in Fedora 31, and podman generate systemd and kube are kind of cool, but it would be really cool if you all used them and made them better, made new features, extended them, made them more useful, so please do that. And sorry for the live demos. It wasn't too bad. I promise that the GitHub webhooks in OpenShift really do work; I don't know what's going on here, but it's very easy to set up in GitHub: just push your code to your repo and a new build triggers. We've actually done quite a few talks with demos, and they've never crapped out on us as much as they did today. So, that's it. Thank you. Anything else, Dan, did we forget anything? Any questions? We have a lot of people here who can answer anything we can't.

Sorry, can you repeat that, I missed part of it. Is there an effort going into verifying signatures? Verifying, signing. Yeah, there is. We had that discussion in the runtimes talk yesterday, and Dan, can you... there's a group that started to discuss it upstream and downstream. Nobody can really agree exactly on what needs to happen, but there is a group of people working on it, and you could probably join too.

What's the difference between Buildah and Kaniko? Kaniko, I do not know. Nalin, do you know?
Yeah, here. Okay, now admittedly it's been a bit more than a year since I last looked at Kaniko, but my understanding is that it runs entirely inside of the container, and that makes certain things, like multi-stage builds, a little bit more complicated. The longer version is that Buildah is also running a chroot environment inside of your build, so it's trying to do, as best as it can, an approximation of running a container inside of the container when you're building inside of a container. Both of them are trying to solve that particular problem. If one works better for you, then I'm not going to feel bad if you use the other one, but we'll continue to try to improve the offerings that we have. Thank you, guys. Thank you.