Live from Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. Welcome back, this is SiliconANGLE Media's wall-to-wall live coverage of KubeCon and CloudNativeCon here in Austin, Texas. Got the house band rocking all day. I'm Stu Miniman, happy to be joined on the program by Dan Walsh, who's a consulting engineer with Red Hat, rocking the red hat. Dan, thanks so much for joining us. Pleasure to be here. All right, so, you know, Red Hat has a strong presence at the show. We had Clayton on yesterday, you know, a top contributor, who won an award, actually, for all the contributions he's done here. Going through a lot of angles. Why don't you start by telling us, you know, kind of your role, what you've been doing at Red Hat. So at Red Hat, I'm a consulting engineer, which basically means I lead a team of about 20 engineers, and we work on the base operating system, basically anything to do with containers from the operating system on down. So kernel engineers, but everything underneath Kubernetes. Traditionally, for the last four and a half years, I've been working on the Docker project, as well as other container-type efforts. We've added things like file system support to Docker, lots of kernel changes. We worked forever on user namespaces, things like that. More recently, though, we started to work on something else. OpenShift and Kubernetes were built on top of Docker originally, and they found over time that the Docker code base was changing in ways that were continuously breaking Kubernetes. So about a year and a half ago, we started to work on a project called CRI-O. A little history: if you go back, Kubernetes was originally built on top of Docker, but CoreOS came to Kubernetes and wanted to get rkt support into Kubernetes.
And rather than add rkt support directly, Kubernetes decided to define an interface, the CRI, the Container Runtime Interface, which is an API that Kubernetes calls out to in order to run containers. So rkt could implement the container runtime interface. They actually built a shim for the Docker API as well. But we decided at that time to basically build our own, and we call it CRI-O: the Container Runtime Interface for OCI images. The plan was to build a very minimalist daemon that could support Kubernetes and Kubernetes alone. We don't support any other orchestrators or anything else; it's totally based on the Kubernetes CRI. So our versioning matches up with Kubernetes: Kubernetes 1.8, you get CRI-O 1.8; Kubernetes 1.9, you get CRI-O 1.9. Yeah, Dan, we've been talking about this. Red Hat made a pretty strong bet on Kubernetes, relatively early in there. Red Hat, very open, everything you do is 100% open source. So for CRI-O, why only Kubernetes? There are other orchestrators out there that are open source. So let's take a step back. One of the goals in my group was to ask, what does it mean to run a container? If you think about when I run a container, what do I need? I need a standard container image format, and the OCI image format defines that. The next thing I need is the ability to pull an image from a container registry to the host. So we built a library called containers/image that implements all of the capabilities of moving container images around, at a command-line or library level. We built a tool on top of that called Skopeo, which lets me, from the command line, move an image from one container registry to another, move images from container registries to different kinds of storage, or move an image directly from a container registry into a Docker daemon. The next step you need when you want to run a container is storage. You need to take that container image and put it on disk.
And in the case of containers, you do that on top of what's called a copy-on-write file system, so you need to have a layering file system. So we created another project called containers/storage that allows you to basically store those images on disk. The last step for running a container is actually to launch an OCI runtime, and the OCI runtime specification and runC take care of that. So we have the four building blocks of what it means to run a container as separate components. We're building other tools around that, but we built one tool that was focused on Kubernetes. And again, the reason Red Hat bet on Kubernetes is that we felt it had the best long-term potential, and judging by this show, I think we made a sane bet. But we will work with others. I mean, these are all fully open source projects. We actually have contributors coming in that are contributing to these low-level tools. For instance, Pivotal is a major contributor to containers/image, and they're using it for pulling images into their platform. We have other projects that are using it. So it's not just Kubernetes; it's just that CRI-O is a daemon for Kubernetes. Yeah, Dan, it's really interesting. Listening to Clayton's keynote this morning, he talked about one of the goals you have at Red Hat being to make that underlying infrastructure boring, so that everything above it can just rely on it and work. There's a lot of work that goes on under there. So it's like the plumbers and the mechanics down underneath making sure it all works. A lot of times when I give talks, the number one thing I'm always trying to teach people is that containers are not anything really significantly different. Containers are just processes on a Linux system. So if you booted up a regular RHEL system right now and looked at PID 1 of the system, well, let me take a step back. I define a container as something that has cgroups associated with it for resource constraints.
It has some security constraints associated with it, and it has these things called namespaces, which are a virtualization layer that gives you a different view of the processes. If you looked at every process on a Linux system, they all have cgroups associated with them, they all have security constraints associated with them, and they all have namespaces associated with them. So if you went to /proc/1/ns, you would see the namespaces associated with PID 1. That means that every process on Linux is in a container, by the definition of a container being those three things. All that happens on the system is that you toggle those: you can tighten them or change some of the namespaces and things like that, and that gives you the feel of the virtualization. But the bottom line is they're all containers. So all the tools like Docker, rkt, CRI-O, runC, any one of those tools, are all just basically going into the kernel, configuring the kernel, and then launching the process, and from that point on, it's just the kernel that's running it. Red Hat has a T-shirt that we often wear that says Linux is containers and containers are Linux, and that actually proves the point. So the bottom line is the operating system is key, and my team and the developers I work with in the open source community are all about how we can make containers better. How can we further constrain these processes? How can we create new namespaces? How can we create new cgroups, new stuff like that? So it's all low-level stuff. Dan, give us some flavor of the customer conversations you're having at the show here. Where are they? I mean, we know it's a spectrum of where they are, but what are some of the commonalities that you're hearing? I mean, at Red Hat, our customers run the gamut. So we have customers who are still trying to get off of RHEL 5, which came out over a decade ago, all the way to the leading-edge customers.
And the funny thing is that a lot of these are in the same companies. So most of our customers at this point are just beginning to move into the container world. They might have a few containers running, or they have developers insisting, hey, this container stuff is cool, I want to start playing with it. But getting them from that step to the step of, say, Kubernetes, or to the step of OpenShift, is sort of a big leap. My fear with a lot of this is that a lot of people are concentrating too much on the containers. The bottom line is that what people need to do is develop applications and secure applications. My history is very heavily based in security. So we face a lot of customers who sort of have homegrown environments, and their engineers come in and say, oh, I want to do a Docker build, or I want to talk to the Docker socket. And I always look at that and question it: you're supposed to be building apps. You're building banking apps, or military apps, or medical apps. They should be concentrating on that and not so much on the containers. And that's actually the beauty of OpenShift. You can set up OpenShift workflows in such a way that the developers' interaction to build a container is just a git check-in. You don't have to go out and understand what it means to build a container; you don't have to acquire the knowledge of what it means to build a container and things like that. Yeah, Dan, you bring up a really good point. At this show, for most of the customers I'm talking to, it's really about the speed at which they can deliver on the applications. Yes, there are the people building all the tooling and the projects here, and there are many customers involved with that. But further up the stack, it's closer to the application and less about that underlying infrastructure. What's the other thing customers are looking for? In my case, as I said, I have a strong background in security.
I did SELinux for like 13 years. So most of my time talking to customers is about security: how can we actually confine containers, how do we keep them under control, especially when they go to multi-tenancy. And it's a good thing, I don't know if you're going to talk to Kata. Have you heard about the Kata project? We've talked to a couple of people. Kata, coming out of the open source Clear Containers work. Yeah, Clear Containers from Intel. Yeah. And I think getting to those levels of using hardware isolation really helps out in security. It's interesting, because when we first looked at it, it's kind of a lightweight VM, it's a container. Where does that fit? And they're not really VMs, they're really just containers. Because a lightweight VM would actually be booting up an init system and running, you know, logging and all these other things. So a Kata container, well, I'm more familiar with Clear Containers. A Clear Container is literally just running a very small init system, and then it launches runC to actually start up the container. So it has almost no operating system inside of the lightweight VM, as opposed to running regular virtual machines. Yeah, Dan, would love your take on, you know, you talked about security, the security of containers, the role of security in the cloud native space. What are you seeing, and what do we need to work on even more as an industry? Yeah, it's funny, because my world view is at a much lower level than that of other security people who would talk about this. Other security people would be looking at network isolation and, you know, role-based access control inside of Kubernetes. I look at it as basically multi-tenancy: running multiple containers with different workloads, and what happens when one container gets hacked.
How does that affect the other containers that are running, and how do I protect the services? So over the years when we've been working with Docker, I got SELinux support in, we've gotten seccomp support in. We're trying to take advantage of everything in the Linux kernel to further tighten the security. The bottom line is that a process inside of the container is talking to the real kernel on the host. Any vulnerability in the host kernel could lead to an escalation and a breakout. So that's why, no matter what you say, a hypervisor-based runtime, a separate container running inside of a VM, is always going to be more secure. On the other hand, with containers, in a lot of cases you want to have some interaction between workloads, and if you go all the way to VMs, you give that up. So you really have to cover the gamut. A lot of times I'll tell people to look at containers as, it's not a zero-sum game. You don't have to throw away all your VMs to move to containers. I tell people the most secure way to run an application is on separate physical hardware. The second most secure is in VMs. The third most secure is inside of containers. And you can go on down the line. But there's nothing to say that you can't run your containers inside of separate VMs, on separate physical machines. So you can set up your environment in such a way that, say, you have your web front end sitting inside of VMs in your demilitarized zone on separate physical hardware, and you set up your databases with your credit card data on separate physical machines, in separate VMs, in separate containers. You can build up these really high levels of security based on containers, virtualization, and physical hardware. I can go on forever on this stuff. Well, Dan Walsh, really appreciate you sharing some of the ways that Red Hat's trying to help some of those underlying pieces become boring, so that customers won't have to worry about them. That's really what it's about.
If you know what's going on at the host level, then I haven't done my job. So our goal is to basically take that host level and make it disappear, so you can work at your higher level up the stack. All right, well, Dan, great to catch up with you. Thanks so much for joining us. We'll be back with lots more coverage here from KubeCon 2017 in Austin, Texas. I'm Stu Miniman, and you're watching theCUBE.