Guys, thank you for coming to the presentation. Please don't forget to leave us feedback on the official website. You can also tweet and blog about the event. We really appreciate your feedback. There's also Wi-Fi. So, hello, everyone. We have the next presentation ready. So, please welcome Ryan Jarvinen from Red Hat. So, I'm Ryan from Red Hat's OpenShift team. We'll be talking about securing containers on OpenShift. How many people here are using OpenShift today? Anybody? Ooh, all right, good, good. Well, hopefully this has good information for you. There are also a lot of links in the slide deck, so I've made a URL to make it easier for you to follow along and for take-home notes. bit.ly/dev-conf-container-sec has all the links. I'll have this URL up again at the very end of the slides, but feel free to follow along on your laptop if you like.
So, like I said, I'm Ryan Jarvinen and I work on the OpenShift team, specifically focusing on Node.js development. Here's a broad overview of the concepts we'll try to cover today. I'll start with a general introduction to OpenShift for folks who are new to the topic, new to containers, and new to Kubernetes as well. We'll cover a brief glossary of terms; I'm going to try to speed through it very quickly so we have more time for the container security topics. The first real security topic is creating images: establishing consistency for these images from the operational side. Then we'll go into runtime security for containers. Part of that is the networking that's available to the container, setting up network isolation, or at least links to how you might set up network isolation. We'll talk a bit about SELinux and about how OpenShift handles user IDs within the container. We'll talk a bit about composing multiple images together using Kubernetes templates. And we'll briefly cover some advanced topics that you'll most likely want to follow up on if you're really interested in getting security done right. So, to start off with the overview: coming into all this terminology with Kubernetes, you may feel quite lost at first. There's a lot going on. That's why I have links, so you can come back to this as a future reference. I'll start with the stack. What is OpenShift composed of? OpenShift runs on a RHEL base. You can also use CentOS. One of the projects we're especially working on is something called... Oops, did my network drop? Oh, not a good sign. Okay, reconnected. Or not. Thank you, old link in my slides. This display resolution is a little bit awkward; it'll look better on your laptop or mobile device.
So you can learn more about Project Atomic and what Red Hat is doing to streamline the OS distribution specifically for running containers. Project Atomic is generally a stripped-down version of RHEL or CentOS, similar to a minimal RHEL install, but with Docker and Kubernetes added in, everything you'd need for running containers. And if you need to bring in, let's say, MySQL, you bring that in as a container. We've really followed all the way down this rabbit hole: OpenShift is itself running in a container that manages your other containers. So on top of Atomic, it's containers all the way up. Instead of turtles all the way down, containers all the way up. Docker, of course, is our primary container runtime. We're also looking into CoreOS's rkt spec as a way to achieve greater density within the cluster, so both are good topics to look into. Kubernetes is what we use for container lifecycle management; it will automatically restart containers as needed if they crash or if a node fails. And of course, we've got links to OpenShift itself. Here are some more details about OpenShift. If you want to track the project on GitHub, or if you'd like to participate in the community, github.com/openshift/origin has a lot of great information. I'll open this up really quick. There's also a releases tab here with a lot of information about what's new in each release, along with binaries for the command line tools. So if you have a developer who wants to contact a remote OpenShift environment, or if you'd like to contact your local self-hosted OpenShift environment, these are the command line tools you'd want. Okay, so terminology. Here's the main glossary of terms that I'll quickly loop through. First, we have a node. A node is basically a host machine for our purposes. An image is very similar to a VM image, but we're using container images instead. The main difference is with container images.
Guest containers share the host OS kernel, so you usually have only one kernel per node. A container is basically a running image. A pod is a Kubernetes term; let's skip ahead and come back to it. Another thing about images: there's an abstraction in OpenShift called image streams. An image stream fires events any time a new image is added to your repository, and you can add automation based on these image stream events. That's how we automate our deployments. A pod is basically one or more containers that are physically co-located. Ask me more about this topic if needed, but I'll try to get to the security stuff right away. A service is basically a software load balancer. I like referring to my web services as services, so this is a little confusing for me, but for the purposes of this talk, when I say service, I'm talking about a load balancer. A route is something that allows a service to be exposed externally. If I have a load balancer for my web applications and another load balancer for a set of DB resources, I would probably want to expose the front end but not the back end, so I would give the front end a route and leave the back end for internal addressing. A replication controller helps control the life cycle of these containers: if you ask for a minimum of three to be running at all times, Kubernetes will help offer some guarantees around the availability of your pods. A deployment config helps automate the distribution of images onto nodes. And a build config is related to our first security topic here: how we're going to help standardize the container images that are available inside your OpenShift cluster. So here are a couple of those terms linked together in a relational diagram. The blue pieces here are core Kubernetes abstractions. The orange pieces are objects that are not part of the base Kubernetes terminology; they're extended terms that OpenShift has added.
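As a rough sketch of how a few of these objects tie together (all names are illustrative, using the OpenShift v3-era `v1` API), a route points at a service, and the service load-balances across the pods it selects by label:

```yaml
# Illustrative only: external traffic whose Host header matches the
# route is forwarded to the service, which spreads it across pods
# labeled app=frontend.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend        # pods carrying this label receive traffic
  ports:
  - port: 8080
---
apiVersion: v1
kind: Route
metadata:
  name: frontend
spec:
  to:
    kind: Service
    name: frontend       # the route exposes this service externally
```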
Some of these features are becoming available as upstream features in Kubernetes. For example, the deployment configuration is being contributed upstream to Kubernetes. I'm not sure if it will still be called deployment configuration after they merge it in; we may have to rebase around whatever changes Google asks for. But a lot of the development that the Red Hat team is doing is going directly into the Kubernetes project, to help add multi-tenancy and security features, advanced deployments, and other automations, as well as securing the Docker runtime environment. It's not just our open source, it's the community's open source, and we contribute across the board to all of these projects. So, builds. This is a link to the OpenShift documentation that will introduce you to builds. It goes into how we use a variety of build strategies; you can see there are three listed here. Our first build strategy is Docker build. If you have a repository with a Dockerfile inside of it, you can run docker build locally and then push the resulting image into the OpenShift registry. You can also do this as part of your CI suite: if you're already using Jenkins to run builds, you can have it do an extra step of running docker build and then pushing the result into OpenShift. Running Docker builds has some inherent security risks. Inside a Dockerfile, you'll see things like apt-get install this package, or yum install this other package, depending on which base OS you're extending. In order to carry out those actions, the build script needs root permissions to successfully complete the yum install. So there's additional risk: you're handing out what is basically root access during the build life cycle.
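To make the risk concrete, here's a hypothetical Dockerfile of the kind being described (package names and paths are illustrative). The `RUN` steps execute as root during the build, which is why granting Docker-build access implies granting root:

```dockerfile
# Hypothetical example: build-time steps run as root by default.
FROM centos:7
RUN yum install -y nodejs && yum clean all   # needs root inside the build
COPY app/ /opt/app/
USER 1001                                    # drop privileges for runtime
CMD ["node", "/opt/app/server.js"]
```

Note that even though this image drops to a non-root user at runtime, whoever controls the build can run arbitrary commands as root while the image is being assembled.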
So for this reason, when OpenShift Online updates to this new code base on our hosted service, it's quite likely that we will disable the Docker build strategy, since we're not comfortable handing out root permission to random people from the Internet. It's something that you can definitely have enabled if you're running your own OpenShift. It depends on how much you trust your developers, and with random users from the Internet, we can't afford to trust our users. So hopefully, with our base assumption of "don't trust anybody," we can show that whether you trust your developers or not, you can still do things safely. A safer, alternate build strategy is custom builds; that's another option that's available in OpenShift. But source-to-image is the main one that we will have available with OpenShift Online when it relaunches with Docker support. So let's do a quick example, if my machine holds up here. It looks like the page is loading. I can go into one of my projects, click on Add to Project, and we can show how a typical build and deploy looks. For this particular example, I'm going to name the thing I'm deploying www, and I'm going to deploy some code that I have on GitHub. This is a Node.js project. So when you're onboarding new users, it can really be this simple: type in a repository, name it, and hit Create. Very simple to get started. If you want to see some of the advanced options, we could start someone on a dev branch or a particular feature, or even enter a commit hash in here to build something specific. So it's very easy to customize. Here's the route that we'll be exposing. Since this is a web service, I'll leave this box selected and say, yes, go ahead and expose this publicly. For databases, we'd definitely unselect this box and leave them internal to the Kubernetes network. So I'll leave that selected.
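The build the web form creates can be sketched as a source-to-image BuildConfig like this (repository URL and image names are illustrative). The key point for security is that S2I injects source into an ops-maintained builder image, so the build never needs root access to a Dockerfile:

```yaml
# Sketch of an S2I BuildConfig in the v3-era v1 API (names illustrative).
apiVersion: v1
kind: BuildConfig
metadata:
  name: www
spec:
  source:
    git:
      uri: https://github.com/example/nodejs-app.git
      ref: master                # could also be a dev branch or commit hash
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest      # the ops-maintained builder image
  output:
    to:
      kind: ImageStreamTag
      name: www:latest           # result is pushed to the internal registry
```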
We can also do additional automation via webhooks from GitHub, GitHub Enterprise, Bitbucket, a variety of revision control systems with webhook support. The webhook will fire in and trigger a new build, and possibly a new deploy, based on your deployment config. In each stage of your release pipeline, you'd have a different deployment config that might encapsulate any differences between your dev environment and your staging environment. You may want high availability in production, but maybe not for casual developers. So you can encode some of those details in the deployment config and the templates per stage. I'll leave this set to auto-deploy any time a new image is available, and you can also see that I'm going to automatically rebuild any time the operations team updates the base image that I depend on. On the previous page, I selected a Node.js base, which already includes RHEL and Node.js and is maintained by the operations team. Any time there's an exploit, let's say Shellshock or Heartbleed, when one of these issues comes up, you shouldn't have your Node.js developers be responsible for closing that bug and saying, "oh, hey, we think we have it fixed." You want someone from your operations team to be responsible for standardizing the base images across your enterprise, and this allows you to automatically rebuild any of the application containers when their base image dependency changes. So if the ops team pushes an update that closes that Heartbleed or Shellshock bug, we'll rebuild the application container automatically as well. We can also inject a couple of environment variables here. Let's say you had a database outside of your OpenShift cluster: you could give it a reference via an environment variable and allow your application to contact a MongoDB at a specific URL, or something like that. So I'll hit Create. That's basically what I wanted to cover.
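The two automations just described, webhook-driven rebuilds and automatic rebuilds when the ops team updates the base image, correspond to triggers on a BuildConfig. A rough sketch (the secret value is a hypothetical placeholder):

```yaml
# Sketch of BuildConfig triggers (v3-era v1 API, names illustrative).
apiVersion: v1
kind: BuildConfig
metadata:
  name: www
spec:
  triggers:
  - type: GitHub
    github:
      secret: my-webhook-secret   # hypothetical shared secret for the hook URL
  - type: ImageChange
    imageChange: {}               # fires when the builder/base image updates
  - type: ConfigChange            # rebuild when this config itself changes
```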
We can watch this build as it happens; this should be streaming logs as the build processes. Since Docker uses a layered file system, we have the base image, and now this is adding more layers on top. Once that build is complete, we'll upload the application image back into our internal registry and then deploy it across the nodes in our cluster. Let's see if I can get back to the overview; I should be able to catch the deploy here. As soon as that push to the registry is done, since I selected auto-deploy, it should show up right here. We'll check back on it in a minute. Securing builds: there's more documentation on this topic if you want to dig in deeper; here's another good link in our OpenShift docs. Also, since we're submitting this image into our internal OpenShift registry, if we want an external service, Jenkins or something like that, to interact with our Docker registry, we have some notes on how to set secrets and secure that registry. Another good project that the Red Hat team has been actively developing and contributing to is Notary. This is a feature for Docker that helps with image signing, so that internal to the registry there's a certain checksum, a SHA value, that identifies this image. We want to know, when we download the image, that we can check the signature and verify that no code has changed in transit, that I got the image that I asked for. Notary goes into that topic; here's a link for more information. My build didn't work, unfortunately. I should have kept the tab hidden. I'll rebuild and see if it recovers. Runtime security. Now that we have an image built and we're ready to deploy, let's see what we can do to help secure the runtime environment.
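As a rough illustration of the underlying idea (this is plain Docker digest pinning, not the full Notary signing workflow; the registry name and digest value are placeholders), you can refer to an image by its content digest so the bytes you pull are exactly the bytes that hash was computed from:

```shell
# Pull by content digest rather than a mutable tag: the SHA-256 digest
# identifies the exact image content (digest here is a placeholder).
docker pull registry.example.com/myteam/www@sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
```

Notary adds the signing and trust layer on top: it lets you verify that the digest you resolved a tag to was actually published by a key you trust.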
One of our Red Hat engineers who works specifically on the networking backplane is Rajat Chopra. He has a talk at, I believe, four o'clock called Networking in a Container World, where you can learn a lot more about how to guarantee network isolation and how the internal software-defined networking works for the cluster. That's a good talk to look into. There are also videos on our software-defined networking model, and some notes on how to set up SSL/TLS certificates on the route that's established into your container. These are good notes on network security, this one particularly, the software-defined networking portion. Let's see if I can find the topological diagram; there should be a link right about here, and I think this display resolution is not going to allow me to show it. Oh, okay, yeah, thank you. All right, so here we can see a couple of objects that have been deployed. This represents the route, the host name for my application. It works similarly to Apache virtual hosts: any time we have incoming traffic with a host header that matches the name of this route, we pass it along to the service, the load balancer, which then passes it into the containers that are part of a scaled set. So let me scale up... looks like the second build completed and was able to deploy successfully. Here's our www service, and I can scale it up to four containers. We should see those containers coming online, and we should see similar information on this diagram as they do. OpenShift provides a flat networking space across each of these containers. If I click on each one, we'll see more information in the right-hand pane, including the IP address of each container. So we have a flat networking space within this particular project.
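The SSL/TLS setup mentioned for routes can be sketched like this (host name is illustrative, certificate contents elided). Edge termination means TLS ends at the OpenShift router and traffic travels as plain HTTP on the internal network:

```yaml
# Sketch of a Route with edge TLS termination (v3-era v1 API).
apiVersion: v1
kind: Route
metadata:
  name: www
spec:
  host: www.apps.example.com   # matched against the incoming Host header
  to:
    kind: Service
    name: www
  tls:
    termination: edge          # TLS terminates at the router
```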
There's a variety of project scopes here, but these pods can communicate directly, IP to IP, if needed. Ideally, you'll communicate via the service, the load balancer, in order to spread traffic across your pods. And if you want additional networking isolation, we have a feature you can enable on our Open vSwitch network which basically gives you a private VXLAN per project. That prevents you from communicating across projects. Whether that's needed or not depends on how you want to architect your solution and the way you deploy your code, but we do have additional support for network isolation per project using a private VXLAN. SELinux is an important topic. How many of you are familiar with SELinux? How many have it running on your laptop? All right, good for you. I'm using Fedora on my laptop and it's pretty solid. I think Red Hat has a lot of experience with this topic in particular. There's been some pain over the years, but it's working pretty well now. This is one of the tools we use to help lock down these container environments and create a security context. Dan Walsh had a talk, I believe earlier in the day or somewhere on the schedule; he can go into more detail on this topic if you like. We basically create a security context that's bound to a specific user scope, and what we try to do is run every container with a random user ID. The reason this helps: if we have an SELinux policy for user number 2030 and we start up two containers with the same user ID, there's a potential risk that someone could break out of one container and across to the other container with the same user ID. So we assign random user IDs to the containers to help accommodate for this potential risk. Here, if you want to see more about what happens inside the container: we use this MustRunAsRange policy, we have a range of UIDs, and we'll select one and feed it into the container.
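The UID-range behavior comes from the security context constraints. A rough sketch of the relevant part of the restricted SCC (field names from the v3-era v1 API; details simplified):

```yaml
# Sketch: with MustRunAsRange, each project is allocated a UID range and
# containers are started with a UID from that range, not a fixed UID.
apiVersion: v1
kind: SecurityContextConstraints
metadata:
  name: restricted
runAsUser:
  type: MustRunAsRange   # UID is allocated from the project's range
seLinuxContext:
  type: MustRunAs        # SELinux labels are assigned per project
```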
So when you're building containers via source-to-image, this should work automatically; but if you're building images externally to OpenShift, make sure you don't run as the root user. That's the primary thing to remember: never allow the container to run as root. You don't want your containers running as root, and they should tolerate random UIDs being assigned if you really want them to work well with OpenShift Online. oc is our command line tool. If you run oc get scc... my terminal is not available. oc get scc should give me a list of the security context constraints that have been set up across my OpenShift cluster. The next topic we'll go into is composition. Now that we have one web service running, I shouldn't say the word service... now that we have, you know what I mean, one web application deployed, we may want to add a database. We may want multiple microservices that compose a larger application. So you can compose multiple containers together using templates, and then configure these images to talk to each other, or to be aware of each other, using environment variables. So if I wanted to set an environment variable... I guess first I'm going to go into one of the containers we have here. This isn't the one I deployed. Here's a container we deployed. I should be able to check the logs and get a live terminal. I can see that inside this container, process ID 1, if I can scroll here, is actually npm. So I'm clearly inside a container; it's not an init script or something like that. And here's the random user ID that I've been assigned inside the container. Also, I set a generic key and value, right? So if I grep for the key... oops. Here's the generic key and value that I configured during our build phase. This could be a connection string to a database, or something else my application needs to be aware of.
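A common pattern for making an image friendly to randomly assigned UIDs (paths and versions are illustrative) is to make the application's directories writable by the root group, since OpenShift runs the arbitrary UID with GID 0:

```dockerfile
# Sketch: an image that tolerates an arbitrary, non-root runtime UID.
FROM centos:7
RUN mkdir -p /opt/app && \
    chgrp -R 0 /opt/app && \
    chmod -R g+rwX /opt/app   # group 0 writable: any assigned UID can write
USER 1001                     # non-root; the actual runtime UID may differ
CMD ["node", "/opt/app/server.js"]
```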
If I want to set a new environment variable, I can update the deployment config or the build config using a command like this, and it will update the Kubernetes configuration and automatically redeploy my containers with the new configuration. Templates. This is one of my favorite topics. I almost think these would be better named "installers." A template really encompasses everything that your application is composed of. So let's look at an example. I've got a project on GitHub that includes a template. This project uses Node.js with the Restify framework, MongoDB as the back end, and leaflet.js on the client side. There's a file in here we can look at: here's the template file that I'll be deploying. There are a couple of things that are unique to templates. They all have a template name. You can set an icon in here to make the template easy for developers to find and install. If I install this template in OpenShift, I'll get a one-click launcher in the web interface; I'll install it and we'll see what it looks like afterwards. A template also includes a list of Kubernetes objects that will be posted to the API as the template is processed. Processing the template means substituting in variables, so you get a parameterized injection of config into the template. We'll see an example of what this looks like as well. This is generally what some of the data might look like as it's fed into Kubernetes: here's our deployment config object, and it's going to get a particular database service name. It has a list of triggers, or change events, so any time the image changes, we'll deploy. We can set a default number of replicas here, and set up the ports and environment variables that will be used.
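A minimal sketch of such a template (all names, expressions, and the object body are illustrative): parameters are declared up front, optionally auto-generated, and substituted into the object list when the template is processed.

```yaml
# Sketch of an OpenShift template (v3-era v1 API, names illustrative).
apiVersion: v1
kind: Template
metadata:
  name: parks
parameters:
- name: DATABASE_USER
  generate: expression
  from: "user[A-Z0-9]{3}"        # auto-generated if not supplied
- name: DATABASE_PASSWORD
  generate: expression
  from: "[a-zA-Z0-9]{16}"
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: mongodb
  spec:
    replicas: 1
    template:
      spec:
        containers:
        - name: mongodb
          image: mongodb
          env:
          - name: MONGODB_USER
            value: "${DATABASE_USER}"      # parameter substituted here
          - name: MONGODB_PASSWORD
            value: "${DATABASE_PASSWORD}"
```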
So for this MongoDB environment, we're injecting a MongoDB user, a password, a database name, and a couple of other details, and we'll inject the exact same configuration into our front-end environment, the Node.js web server. Here's the front-end environment; we're passing in the same credentials. So let's see if I can spin up this application really quickly. Actually, I think I'll flip through the rest of these slides and then close with the demo. A couple of other topics, other things you might add into a template. If your application requires a persistent volume or any kind of disk: your containers are meant to be stateless, easily destroyed, easily recreated, so if you need storage, identify a volume or a persistent volume claim, detail that in the template, and make it available to your application. Another advanced topic you're going to want to look into for security is something called secrets. Let's say I have an SSL configuration; I'm not going to publish all of that via an environment variable. What I'll do instead is create a secret, and that secret will then be mounted as a file into my container. That's how I would put in things like SSL config or other details that need to be injected into a container. Service accounts are a way to delegate authorization into a cluster, so that's another good topic to follow up on. And finally, if you want to validate your containers or do security auditing, the OpenSCAP project will help validate and review the container content, check for vulnerabilities, and possibly reject a deploy if it doesn't pass the test. So, there are a couple of ways to try OpenShift. You can sign up for OpenShift Enterprise, and we also have a hosted environment called OpenShift Dedicated. Please feel free to sign up for either of these environments. If you're just interested in the upstream code, I showed you the Origin releases earlier.
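The secret-mounted-as-a-file pattern can be sketched like this (names are illustrative; the base64 payload here is just the string "dummy"):

```yaml
# Sketch of a Secret and how a pod mounts it as files (v3-era v1 API).
apiVersion: v1
kind: Secret
metadata:
  name: tls-config
data:
  server.crt: ZHVtbXk=     # base64-encoded content; placeholder value
---
# In the pod spec, the secret is referenced as a volume, so the
# application reads /etc/tls/server.crt instead of an env variable:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: www
    image: www:latest
    volumeMounts:
    - name: tls
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: tls-config
```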
We also have an all-in-one VM if you'd like to run the whole cluster locally with just a vagrant up; openshift.org/vm has a VirtualBox image and a Vagrantfile. Also, if you'd like to deploy a large cluster, the openshift-ansible repo has playbooks for deploying OpenShift to Amazon, to Google Compute, or to raw machines anywhere you like; Ansible is our deployment tool for large environments. The environment I'm using today I set up with this particular Ansible playbook, this one command. One command, and 20 minutes later I had a cluster of 10 machines. It should be very easy to set up. Feel free to file bugs or issues if you run into any problems along the way. More great links for you: if you'd like some free eBooks courtesy of Red Hat, we've got an eBook on Kubernetes, an eBook on Docker security, and more great documentation online, plus official training courses from Red Hat, and more information. If I have a minute left, I will risk running this project. Let's see if I can do a quick deploy here. I'm in the demo project; switch to demo. I'll run oc create on our template file, which installs the template locally into the project I'm using. Now when I go to Add to Project, I should be able to find my Parks application on this page. It should show up as a one-click launcher, kind of like this Node.js MongoDB example. I'll click on this because it's a similar example: we can substitute the repository URL, and add in the database user, database password, and database name. The result we end up with has the front end and the back end fully configured. Let's see if I have a backup... no, I don't have a backup of it. But if you'd like to see this demo, I'm in the Red Hat booth just outside the door and I'd be happy to show you out there. Thank you. That's all I've got.
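For reference, the template install-and-launch steps from the demo might look like this on the command line (file, template, and parameter names are illustrative):

```shell
# Install the template into the current project; it then appears as a
# one-click launcher under "Add to Project" in the web console.
oc create -f parks-template.json

# Or instantiate it directly from the CLI, supplying parameters:
oc new-app --template=parks \
    -p DATABASE_USER=parksuser \
    -p DATABASE_PASSWORD=changeme
```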
Hello, everyone. Just a reminder: if you like the sessions, or if you don't, which I hope won't happen, please leave some feedback on our official website. Please also tweet and blog about the event; we have a competition for the best blog post, so you can win some prizes. Basically, that's it. Thank you very much. So, we have the next presentation ready. Please welcome Ryan Hallisey. Thank you. Okay, perfect. Thank you, Jay. So, hi, my name is Ryan Hallisey. I'm a software engineer at Red Hat. I've been at Red Hat for about a year and a half now, and my specific focus has been around OpenStack. I've been working on OpenStack for that entire time, and specifically on containers for about a year now. Container technology has really become a hit within OpenStack, and it's something that's really been growing. There have been some projects around it, and it's really started to take off and be successful.
So, today I'm going to cover...