We are ready for the panel. We've got from now until 5:20, so pepper them with questions, give them hell, show them no mercy.

All right. Well, welcome, everyone. I'm Katie Miller from Red Hat, and I'm the facilitator for the panel today. We have how long? Half an hour or? You've got 40 minutes. About 40 minutes. Okay. I don't want to do much of the question asking, so start thinking of some questions. First up, though, I'd better introduce our panelists. We have seen them all speaking here today: we've got Brandon Philips from CoreOS, Tycho Andersen, and Steve Pousty on the end from Red Hat. I might just give them all 30 seconds to say again what they do, a quick high-level spiel and what they have to do with cloud, to give everyone an idea in case not everyone was here during their talks.

So I'm the CTO and co-founder of CoreOS. We develop a Linux operating system that is designed for managing containers and for doing distributed systems. We have a number of open source projects that you might have heard of, including etcd and Flannel, which was talked about a bit in this talk, and also the new project that we launched called Rocket.

So I'm Tycho Andersen. I work at Canonical on various cloud things, including LXD. I'm a core contributor to that, as well as somebody who's hacking on CRIU, which is a migration engine for containers.

I'm Steve and I'm king of the universe. I spend my day... no, I'm a professional smack talker for Red Hat, which is to say an OpenShift developer advocate, and that's about what I do with my day.

Before we take our first question, I should just mention we don't really have any ground rules other than please observe the code of conduct and be awesome to each other. And Steve, that means you have to keep your shirt on. Sorry. Do we have anyone who wants to ask the first question? Have we got a mic runner?

Okay. Can each of you please answer: what do you think will be the dominant container technology in five years' time, and why won't the others succeed?

The dominant container technology will be the kernel. Really, I think there's going to be a spectrum. There's always going to be a spectrum of technologies that take advantage of the kernel ABI. More and more, we're seeing that all these container runtime systems are using statically compiled binaries, and as we know, Linus Torvalds gets very upset if you break the ABI. So I think that for the foreseeable future we're going to have a lot of options. It's going to be a spectrum of technologies.

So he kind of stole my answer. Fundamentally, all of these things are using the kernel, and so they're all, in some sense, the same. They will all be designed with different use cases in mind, and so depending on what you're trying to accomplish, one container technology might be better than another.

So I don't work for Docker, so I can't really throw any sauce on this one. And I don't know enough about... I think the best answer was that one: by this time next year there'll probably be more specialized containers with different use cases, maybe. I don't know. How many different container technologies... I'll throw this back at you. How many different container technologies do you see out there today? I know of two. So there's LXC, Docker, whatever the CoreOS guys have got, and I'm sure there's another one I've forgotten. OpenVZ. So I'll just go out on a limb to make something happen here. I'm going to say it's Docker's to lose. That's what I'll say.
And they're doing a pretty good job right now of losing it. So what I feel like... wait, one clarification on this: I'm not speaking officially for Red Hat. I'm low enough on the food chain that I am not part of any of these strategic decisions, as opposed to being a CTO, where everything he says really counts. And my role on this panel is also to try to get something going, because there's nothing I hate more than everybody agreeing on a panel. That's like the worst. So I'm going to say it's Docker's to lose, and they need to get a sense of what the community really means, rather than trying to monetize. I think that's the real problem for them right now: they're trying to own too much of the ecosystem, and not being a good community player.

This t-shirt may contain forward-looking statements. Yes, exactly.

So, as a bit of a follow-up to that, can I ask: do you think we need an industry standard for containers, and do you think we have any chance of actually coming up with one that everyone's going to get behind?

No frickin' way. In general, in my experience, standardization at an early stage of a technology is a sure way to kill it and help IBM make a ton of money, because it'll become too hard to implement, it won't meet the use cases, and innovation will grind to a halt because everybody will be pushing on the standard in different ways. I'd like to wait maybe three or four years before we think about standardizing containers.

I mean, again, all of these are technologies that are implemented in the Linux kernel. So in some sense there is a standard: it's whatever the kernel exports. Now, on the user land side of things, we all sort of have to do the same things, we all have to implement the same kinds of stuff, and so there might be some room for standardization there. But again, if some use case is just radically different from some other one, maybe it just doesn't fit. So I guess the answer is I don't know. Okay. So no, maybe, yes.

I think that a standard on the image side is the primary thing we need. Standardization at this stage for APIs and that sort of thing arguably doesn't make a lot of sense. But for a long period of time we've had all these container technologies in the kernel, and we haven't said what the format is for actually getting those containers off of some other server and onto the host where I actually want to run them. And that's kind of hurt us. LXC is a great technology, but until just a few years ago, when LXC added the idea of images, we didn't have a standardized way of saying: this is my thing, this is the name of it, this is the URL I can find it at, put it on my host and run it. That's part of what we're trying to do with the App Container spec, but I know there are a lot of folks interested in this too. On the runtime side, I think it is too early. We don't know the use cases well enough.

Okay. So as you rightly indicated, all the infrastructure for containers has been and always is there in the kernel. But as I indicated in my previous talk about the kernel ABI, how are you going to ensure compatibility? Because if you go to the Docker website, it says build once, run anywhere. But the problem is when the kernel ABI changes, or even when new syscalls are added, and you build with the same user land, the same user space, but against a different kernel.
And during the build time, the autotools or whatever checks for the presence of a specific call, and if it's available, it uses it, right? And when you run it on an older kernel, it can break in non-deterministic ways. So is there a way you are going to fix that, as in detect the ABI, or, you know?

Really the only fix there is to use a modern kernel, because these container APIs are continuing to emerge from the kernel, and honestly, unless you're using a kernel from the last six to twelve months or so, containers really don't work that great. And as we continue to fix things like user namespaces, et cetera, in the kernel, it's just going to make less and less sense to try to shoehorn this into old kernels. So my recommendation is to use a modern operating system, whether that be CoreOS or RHEL 7 or something that can do this correctly.

Yeah, we run new kernels too. We backport kernels to our LTS releases.

So I'm going to go with the majority. You should go with the majority vote. And I'm going to say something like RHEL 7. That was the only one that's been mentioned twice, so RHEL 7 is the obvious choice for running a modern kernel. And I don't know. I mean, I don't know enough about the kernel to answer intelligently, so I'm just going to make a joke.

So I had a question. Kubernetes seems to be under heavy development. When it comes to container orchestration, can you really rely on it now to provide that layer? Is it really something that you can depend on?

Well, since I'm the only one who talked about Kubernetes at all, I'm going to say Katie's going to answer that question. So that's actually a problem that we're running into right now on the OpenShift team. This is always the problem you run into when you use a bunch of different community projects in your project: if it's a fast-moving project like Kubernetes is, we are faced with the challenge of when do we say this is our release, based on which Kubernetes release. So it's moving fast. There are companies who are running it in production. It's the same thing with containers: how many of you are running Docker containers in production? And when I asked my question before, how many of you have used Docker, almost everyone had, so I could skip through the slides. So there are different use cases, right? It depends on how comfortable you are and how low-level you feel comfortable being. There will always be companies who say, I can use Kubernetes right now, and then there will be others who'll be like, no way, until it's at version 2.0 I'm not touching it. So does that answer your question, kind of?

It just seems like a really moving target at the moment.

Right now, Google will tell you that, and we'll tell you that as well. Between the two companies, there's a lot of active development happening. Even the names have changed within the last month or two, so they haven't even sorted out their terminology. That should give you an indication of the stability of the project. Unless you're comfortable being in the code and running nightlies and stuff like that, don't touch it yet.

I want to add just one thing. I think one advantage Kubernetes has is they are following the Google pattern of generating API clients from API description documents. So as they evolve the API, they generate new clients for Go and Python, et cetera. I really hope I see more projects doing that.
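To make the kernel-ABI point from the earlier exchange concrete, here is a minimal sketch, assuming Linux and the golang.org/x/sys/unix package, of the kind of runtime probing portable user space has to do when it cannot assume a modern kernel: try the newer syscall and fall back when the running kernel reports ENOSYS. The binary builds fine against new headers either way; it is only at run time, on the older kernel, that the missing call surfaces, which is why these failures feel non-deterministic.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// getrandom(2) only exists on kernels >= 3.17. A program built on a modern
// machine may assume it is there; on an older kernel the call comes back
// with ENOSYS, and the program has to fall back (say, to /dev/urandom)
// instead of breaking in surprising ways.
func haveGetrandom() bool {
	buf := make([]byte, 1)
	_, err := unix.Getrandom(buf, 0)
	return err != unix.ENOSYS
}

func main() {
	if haveGetrandom() {
		fmt.Println("getrandom(2) is available: use it directly")
	} else {
		fmt.Println("getrandom(2) is missing: fall back to /dev/urandom")
	}
}
```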
Because if you're evolving an API and then having to rewrite the clients every four or five days, it's a total pain and it slows velocity. Anyway, we're doing the same. OpenShift uses Swagger. We're using Swagger for... can you guys all hear me? I'm not sure that microphone is close enough. I'm just kidding. To second that point, that people should be doing that: we're doing the same thing with the OpenShift APIs. We're using Swagger under the hood, so the clients are auto-generated every time we generate the API. There's no excuse anymore, I think, unless you're a lazy developer like me.

So machines with a couple of hundred cores and a couple of terabytes of RAM are now physically quite small; you can have a dozen of them in a rack. If you're not using lower-level virtual machines and instead want to use containers, the kernel has some isolation features, but I don't see them getting out there. I want my memory to be on the same NUMA node as the cores I'm pinned to, for example. Features like that.

So isn't that what cgroups are for? You have the memory controller, you can do memory region control, I forget even what they call it, but aren't there cgroups to control that stuff? There's NUMA node pinning. The one thing that is obviously missing from cgroups, and that I know various orgs have been trying to get in, is out-of-memory kill prioritization. That's the big thing: if you have several terabytes of RAM and you actually want to utilize it at a high rate, somebody is going to allocate some page, that's going to be an overallocation of the available memory, and you need to kill a process. Right now cgroups and namespaces give you a really poor ability to prioritize that as a user. But the NUMA node pinning and the memory allocation stuff should be there.

I didn't quite hear your question. You say you're not using VMs or containers, or you would want to use VMs or containers? Oh okay, instead of VMs use containers, right? Okay, yeah. I was on a panel with one of the developers of Xen, and I'm also not thinking that containers are everything moving forward, right? They're a tool that you use in certain cases with certain kinds of IT infrastructures. It's not Docker everything or PaaS everything. Let's be clear that VMs still have a very valid use case in many parts of the organization, especially around security. If you boot with a hypervisor, there's nothing that can really beat that security right now.

One of the steepest parts of the learning curve for me in dealing with containerization was the distinction between the host and the container. I still have a natural tendency to view the container as the host and the host as the container, but they can be different, or they can be the same. Certainly in the case of Docker, Docker gives me the ability to configuration-manage my environment and to move it around from one host to another, but things like networking and memory and out-of-memory allocation are still tricky. Where do you see the container management engines differentiating in this area, and if so, how? Certainly Rocket's trying to differentiate in terms of its process hierarchies and its ability to manage multiple processes in the container much more elegantly, I think. But things like networking are still really painful, and storage, and moving a container from one host to another.
Where do you see the big features, and how do you make it conceptually easier for people to make a distinction between a container and a host?

So the approach we've taken in LXD is probably the simplest, which is that we don't do any of that. There are lots of ways to do that; people have invented all kinds of technologies to do it, so we're just going to piggyback on all of the work everyone's already doing and do what we do best, which is the container piece of things. I think the last thing you mentioned was moving your containers around. If you mean migration, I think we're absolutely going to do that, but if you're just talking about rsyncing things around... or maybe you can clarify.

So there are a lot of pieces to that question. The one that I think has been evolving, and where I think we're settling on a solution, is the networking piece. Docker had this idea of a link, so I would link from port to port, and it's a very difficult thing to get correct. You've seen in Kubernetes this idea that every container gets an actual, real IP, and that simplifies a lot of the pieces, particularly migration. It makes it a lot easier, because if you have a real IP you can talk to your networking fabric and say move this real IP from here to here. It's easy to run anything that requires some sort of gossiping, because you can say: I have this actual IP, here's the interface, and I can actually query that like a normal thing. I think that's one of the pieces. Volumes and migration, I think, are still too new and nobody's really got a good answer there. The pain points that you're hitting are the pain points that everybody's hitting; that's why you only see five or six people saying we're running in production. It's moving so fast. The two places that I think people still need to figure out are the local storage story and the networking story, and I think the networking story is actually farther along than the local storage story. Yeah. Do you want to follow up on that?

Yeah, I just want to... I mean, there's also some technology available today that does things like replication and synchronization. You're already dependent on the kernel; how far do you want to take your dependencies?

So the follow-up was: there's a whole bunch of other storage technologies out there that handle all these advanced things, and I'm just repeating it so we get it for the recording. And your question was: we're already dependent on the kernel, so how much do you want to be dependent on other things besides the kernel? Is that what you're saying?

From what I've seen of the Docker stuff, what I like is that they're making it pluggable, and it's up to you to decide. I think storage is definitely one of those things that's even less standardized; it makes a big difference what type of storage your database is on, right? I think that's something that should be pluggable, so you can say this container is going to talk to this, and that container is going to talk to that, because these containers have very different needs for disk.

Yeah, one of the things that's been an ongoing discussion I've seen is that there's kind of a hierarchy of different storage needs, and we don't have the right tools in a lot of places in the Linux stack.
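Before the NFS discussion picks up again, it is worth pinning down the lowest layer of that storage story: a container "volume" is, underneath everything, usually just a bind mount of a host directory into a path the container's mount namespace can see. A minimal sketch, with made-up paths, assuming Linux, root privileges, and the golang.org/x/sys/unix package:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

// Bind-mount a host directory into a (hypothetical) container rootfs.
// Everything layered on top -- NFS, Ceph, replication, migration -- is
// really about where /srv/appdata lives and how it follows the container.
func main() {
	src := "/srv/appdata"                         // data on the host
	dst := "/var/lib/containers/web1/rootfs/data" // path seen by the container
	if err := unix.Mount(src, dst, "", unix.MS_BIND, ""); err != nil {
		log.Fatalf("bind mount %s -> %s failed: %v", src, dst, err)
	}
	log.Printf("host data %s now visible inside the container at /data", src)
}
```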
So we do have a tool like NFS, which is kind of a sledgehammer solution to the problem, because, for example, if I'm running a workload, let's say a database like MySQL, you really don't want every write to the write-ahead log to go over the network and be synced onto your NFS server. So it solves the problem, but it doesn't solve it in a way that's scalable. There are other things, like, for example, plugging into migration of the host: only migrate my files over to another machine once I know that I need to evacuate this host, because the disk is going down, or the hardware needs to be rebooted, or this data center is being powered off. We don't really have great tools for those sorts of orchestration pieces. And there's the lazy replication thing that we haven't built either as an open source community. I know internally there are systems at Google, which have had papers written about them, where I write to disk and it is lazily replicated to other places, and then when I actually need to evacuate the host, I do the final snapshot and move. Those sorts of systems we just have to build. But Ceph and NFS are obvious solutions to the volume problem; it's just that you take a huge performance hit by punting, essentially.

I mean, we've run into that same problem with OpenShift, because we've been using containers for a while too. When we run it in Online, when users auto-scale their application and they're maintaining state in their application, we can't replicate that without writing some rsync scripts, because we can't NFS-mount something with SELinux permissions on Amazon. So in our Online use case there is no auto-replication of your file assets, your static assets; you have to write rsync scripts. But if you do a local OpenShift install, we'll tell you: yeah, put it on an NFS mount and have all your OpenShift hosts mount it. But I don't want my database on it. Like you said, I don't want my database on it. I want that on local disk.

Just throwing some points onto that, the kind of things most people might not have come across. There are actually publicly available file systems that do have some of the properties you were mentioning that the Google internal ones do, and they predate NFS. They've gotten a little aged and very rusty at the edges, but the Andrew File System from the very early 90s has asynchronous replication as one of its fundamental tenets. So you get that behavior you were mentioning, the regular off-disk snapshot, so it doesn't kill everything and there's always something you can get back. And as far as a Docker-based tool that provides migration and storage, the guys at ClusterHQ over in Europe have been working very hard on what they call Flocker. That has a really strong pipeline around the whole thing, because that's what they want to build: something that migrates your data, lets you run the database on there, deals with snapshotting. It does two-phase commits for the data movements. So these tools are beginning to emerge quite thoroughly at this time. Yeah, they're using ZFS for how they're doing it over the network.

Excuse me. We've been using OpenVZ for a few years now, where each container's got its own IP address, and it all seems to work okay.
But we haven't updated it since it was set up, and I'm just wondering: does OpenVZ have anything that the others don't have? How does it compare with LXC and so on? Is LXC a kind of derivative of OpenVZ?

Well, the OpenVZ guys have been upstreaming a lot of their patches to make a lot of this container technology possible. They developed it as their proprietary thing and then they've slowly been upstreaming things. Last I knew, which was OpenStack Paris, James Bottomley said that there was maybe one patch set left, some memory cgroup patch set that had not gone upstream yet.

Yeah, essentially the OpenVZ stuff has been upstreamed, and the guys at Parallels have been doing a fantastic job of all that. The two pieces that come to mind are their special networking stuff, but Google recently upstreamed a similar patch, so I think the OpenVZ project will probably take advantage of that, and then the cgroups hiding stuff, I have no idea. I think they're working on it.

Okay, but essentially the story is that that technology is continuing to live on in kernel.org.

Yep, and in particular the migration stuff, checkpoint/restore. They're the primary drivers behind all of that, and it's well upstream. So although I and others have been contributing, they're the guys.

One thing I think we should clarify is that there are essentially two types of containers we're talking about here. There's the LXD and LXC and OpenVZ style, where I'm running an entire init system, the whole OS from init on down, and there's the application container, where I'm running a single process, just MySQL, without running that init system. And that's what OpenShift is currently running now: we're running our own custom containers. We basically use PAM namespaces, SELinux, and, what was the other one, cgroups. We know enough about the kernel that we wrote it ourselves and just use it. There's no LXC, because LXC didn't have SELinux support for the longest time, so we were like, we're not running it. So you don't have to bring in LXC to do containers.

Another interesting thing on the spectrum of containers is the idea that, for example, in systemd you can say I only want to use this namespace and this cgroup, and you can essentially create your own containers directly in the systemd unit file without using any container runtime system at all. Just place some files on disk and run the service.

And just to keep going on this, I think this is actually one of the fastest-growing and most interesting areas in the computer world. There's also, what was it, KVM just released, I think, a driver to spin up containers rather than spinning up VMs. What's it called, did you see that, John? They just announced it. It's like novm or something like that. That's right, it's called novm, right? And it's basically KVM spinning up containers. So I think it's awesome how many interesting things are happening around this space. Oh, and libvirt-lxc, libvirt-lxc is actually a different code base than the LXC most people use today.

So anyway, just to complete, I think the point we're all trying to make is that when people talk about containers, what they're really talking about is a collection of APIs that the kernel exposes. There's no "create a container" API call. There's PID namespaces, user namespaces, mount namespaces.
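As a rough sketch of what that collection of APIs looks like from user space: a "container" is just a process started with some combination of those primitives. The example below, in Go, drops a shell into new PID, mount, UTS and IPC namespaces; it is Linux-only, needs root, and deliberately skips the cgroup, pivot_root and user-namespace setup a real runtime would add.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// There is no "create a container" syscall. This just asks the kernel for a
// handful of fresh namespaces when starting /bin/sh; every runtime on the
// panel assembles the same ingredients in its own way.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | // fresh PID 1
			syscall.CLONE_NEWNS | // private mount table
			syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWIPC, // own IPC objects
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```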
And you kind of have to set all of these pieces up in a very nice way in order to get the container. So all of us sitting up here, we're all setting these things up in a slightly different way, and which way works best for you is kind of up to you.

So we've really been using containers for a long time, starting from BSD jails and chroots and OpenVZ, and for sysadmins it's not a very new technology. Why do you think Docker made it rocket like crazy? Can we reuse Docker's success in any other field? Did you get the question?

Yeah, I think the question essentially is: what caused containers to become so exciting in the last 18 to 24 months? What was it about Docker that got everyone excited? When people ask me this question, I think of essentially two things. The first is the container having an image format. For the longest time, the way people used containers was they cobbled together a bunch of bash scripts that would start from a Debian rootfs they found on the internet somewhere, then they would build up the source code and whatever to actually run their application, write out the SysV init files, and then try to launch it. It was just a mess, and it was something only really advanced system administrators slash developers were aware of. And then Docker came along and defined essentially a way of having a registry where these files come from, and of pulling them over the internet. I think that's the primary thing that caused it to happen, and then the API was kind of secondary, I think.

Yeah, I mean, I think they got a lot of velocity from the community too. There are lots of people contributing Docker images and things like that. So it's not all just container velocity; it's like, wow, this is a really cool way to do things.

So I actually think it goes farther back than that. For me, Zones were that moment: I remember when Zones came out and I was like, whoa, that's fricking cool, look at that. I never used BSD, right? So I think part of the problem was that not enough people used BSD. At this show, sure, there might be some BSD users, but in general BSD, not including Mac OS, because no one runs that as a server, just wasn't that well known or well used. There weren't as many books about it; it didn't have the community. And I think what has changed is that Linux is now mainstream. Not for most of us, because we probably already ran it on our laptops and stuff, but in terms of servers, Linux is not an argument anymore. It's running everywhere. When you say I need to run Linux, people are like, oh yeah, I run that for all this stuff. I run Windows servers and I run Linux servers. On the Linux side, if I work in an enterprise I probably run some sort of Red Hat, and if not, I may run some Debian. But it's basically settled down a lot; the Linux community settled down. So Linux has set the ground. And then what came along was VMware, right? As much as they are a closed and proprietary company, there was VMware and all the other VM technology that came. I remember when VMs first came out and nobody wanted to use them, right? People would be like, oh yeah, I'll get a VM for you. Like, no, I want my own box.
I need my box. Everything was about a box. And I think what people really needed to get over was the idea that they needed their own box. So for me, VMs actually did a really nice job of breaking the ice and getting people used to the idea of virtualizing out the hardware. And that's a good thing, right? As opposed to just a cost-saving measure, it actually brings a whole bunch of other great things. So I think once those two mental shifts happened, that set the way for containers. And I think what Docker did was bring this standard way of packaging it up, making the image, doing the versioning and stuff like that. That's why it took off: there are enough people behind it.

So there's a question way in the back. I have a question. But wait, he's had his hand up. No, you don't have a question. It's going to take five minutes for me to get the microphone up there. I'll take the microphone up there. Oh, sorry, they have questions, but I'm the one with the mic. That guy in the back has his hand up, and we will repeat the question. By the way, you have great taste in shirts. You are a man of great fashion sense. Thank you.

So, VMware seem to have the strategy of no longer selling the hypervisor; they see all of the value being in the management software around the hypervisor. In the same way, I see the debate about containers going the same way, in that the container is going to become boring and the real value is going to be around the management of those containers. Do you agree with that proposition? And if so, what do you see as your key advantage compared to the competition?

I totally agree with you. In fact, I think LXD is already there. In some sense it's fairly boring: it's a nice API that exposes migration and a few other things, but it doesn't really do anything magical. For us, the magic comes from Juju. I don't know if you're familiar with Juju, but it's an orchestration tool that does all these nice things for you. Does everybody know what Juju means? It's like voodoo, for magic. That's what he said, the magic, you guys. So yes, I totally agree, and I think we are a step ahead in terms of orchestration.

Yeah, I think VMware realizes that their margins are going to go down for VMs now, right? If you go to a Gartner show and you look at what percentage of people who actually pay for technology use VMs, and what they use, they almost all use VMware. And I think they recognize that that market share is commoditized. If you look at the industry in general, I think they recognize that part of the OS is commoditized as well as VMs, right? And so the real money is to be made higher up the stack. So VMware have tried to switch their message to: we help you just run your data center, we're a data center company. They're going to say, yeah, you use vSphere to run VMs, to run containers, to orchestrate your software-defined network; we're going to help you with that. That's where they're going to push in the future, because that's where their value is. So I think you're completely right, and I think it's going to be interesting times.
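One concrete way to see the "standard way of packaging it up" point made a moment ago is that an image name doubles as the place to fetch it from. A deliberately simplified sketch of that mapping, nowhere near Docker's or appc's full naming rules, with a made-up default registry:

```go
package main

import (
	"fmt"
	"strings"
)

// Very simplified: split "registry/repository:tag" into the pieces a client
// needs in order to go and pull the image. Real name resolution has many
// more rules (default registries, digests, library/ prefixes); this only
// illustrates the idea that the name is also the location.
func parseImageRef(ref string) (registry, repo, tag string) {
	tag = "latest"
	if i := strings.LastIndex(ref, ":"); i != -1 && !strings.Contains(ref[i:], "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	parts := strings.SplitN(ref, "/", 2)
	if len(parts) == 2 && strings.ContainsAny(parts[0], ".:") {
		return parts[0], parts[1], tag
	}
	return "registry.example.com", ref, tag // assumed default registry
}

func main() {
	reg, repo, tag := parseImageRef("quay.io/coreos/etcd:v2.0.0")
	fmt.Printf("pull from %s, repository %s, tag %s\n", reg, repo, tag)
}
```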
What do you think the future for container images and package-based distributions is going to be?

I think there are some huge problems that we still have to solve as a community. For example, we now have a proliferation of OpenSSL binaries: we have thousands of them running on our hosts instead of one that we have to update. We're sort of solving it in various ways, by hackily rebuilding everything, but even that comes with a cost, right? If we rebuild everything, it would be great if we also re-signed everything, so that everything actually has a signature from a human being saying this was something that human beings intended to happen. So there are a lot of unsolved problems, and I think there is a lot of technology that needs to be built yet around this. For the container itself, there needs to be some sort of standard, and I think we need to design with the idea in mind that we need to solve this updating and verification problem.

So this is actually one of the things... what I did actually like about OpenShift and the way we did containers before is that we actually spun up from the same binaries over and over. You installed RHEL, and then we spun up whatever was in that base OS into the container. So if we had, let's say, 100 different containers, all of them running Apache, we basically patched one Apache, then just kicked all those hosts and told them to reboot, and everybody's patched. In going to Docker, we're going back to the old way, which is more like VMs, where I'm going to have to patch each of those individual images and then reboot each one of those individually. So it's a trade-off, right? The problem with our approach, though, is that if you want to run a different version of Apache on that host than what we've installed there, you're out of luck. So it's a trade-off, and I don't know that there's ever going to be a perfect solution. It's going to be: what is your manageability issue versus how much do you need customization?

I agree with everything these guys said. The only thing I'll add is that it seems like some people don't care about this problem. For example, on most of your phones, some app may be linked against some old, bad SSL library, and it just doesn't get an update, and nobody uninstalls that app. So you have a similar problem there. Maybe people take server security more seriously, but I think getting people to care about this problem is going to be the first step in solving it.

On the app store thing, I think app stores have an advantage in that they can revoke apps if there's a troublesome app. We don't have the tooling yet in containers to say: I want to revoke this hash across my data center. Like, this OpenSSL hash, never ever allow it on my box; any image that has this OpenSSL hash in it, shut them all down.

I just wanted to ask about software-defined networking. What's the advantage of doing that rather than just having everything have an IP address in the same subnet?

You can have software-defined networking and everybody have their own IP address as well; you understand that those aren't mutually exclusive.

But then why do it at all?

So I'm not a software-defined networking expert, and I'm probably going to hand this over in a second when I quickly go out of my level of expertise, but at a high level it gives you more flexibility in how you define your architecture.
You don't have to have people going out and rewiring stuff. You're not hooking Flukes up to things. You can do your topology in software. What it will allow in OpenShift v3 is that as we spin up a new application, we can say, oh, this application has this IP, and as we add things into it, we can do actual separation of stuff based on IP addresses without having to update all the DNS stuff and all that other stuff.

I think he said it has an API. He's actually speaking South African English, which is totally foreign to me. It has an API, and something about architecture; I got the word architecture. It adds an API. Oh, it adds an API. So basically you can programmatically change your network topology on the fly.

How would you do that otherwise? If everything's on the same subnet with its own IP address, all you have to do is change your DNS so it points to the right machine, and it takes care of itself.

The first practical use case for why you'd want software-defined networking in containers is migration. I have this application that's buzzing along just happily on host A. Host A needs to go down for whatever reason: the disks are dying, the power supply is dying. I want to migrate that process without having to reconfigure my load balancer or lose whatever connections are coming in from paying customers. So you move that over to host B, and you have the flexibility of saying the IP address doesn't belong to a host, it belongs to an application, and of moving that around with APIs.

Wouldn't you just start the other container with that IP? So you've got app 1 here with IP 1 and you want to move it. You shut it down, start another container with that same IP address, and it's now there. It doesn't...

So I would say the best answer to your question is that none of us are experts on software-defined networking, and there's a lot of propaganda on it. So I would say go out and read. You have better propaganda? The only thing I would say is that my understanding is you otherwise have to buy some fancy hardware in order to make that happen, to say I want to move this MAC address, and with this you don't. Software-defined networking can be as simple as what we did in our flannel project, where essentially any IPs in this range go to this host, and that host essentially is a switch. So all you need is a global routing table mapping internal IP addresses to external IP addresses, and you can just route that data around. It's that level of switching and moving IP addresses. If you look at the predictions for what the market for software-defined networking is, it is huge. So the fact that we are not adequately convincing you means nothing. You should actually go read some stuff, because there's obviously a market there with a lot of very valid use cases. I agree.

Okay, we have time for one more question.

Okay, speaking of abstracting the kernel itself, and this is for Rocket and Docker: do you suppose that if a different operating system provides the same capabilities, such as namespaces and cgroups, as the Linux kernel, we will see native binaries for, say, FreeBSD or Solaris or Windows or OS X in future, for Docker and Rocket or LXC?

Yeah, there are already implementations. I know there are implementations of Docker for SmartOS, and I'm pretty certain that there are people implementing the App Container spec on other operating systems too. I know for a fact Microsoft is doing it as well.
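On the flannel description from the networking discussion a moment ago, the core of that model is small enough to sketch: each host owns a slice of one flat container address space, and a shared table maps each slice to the host's real address, so moving a container's IP becomes a table update rather than a rewiring job. Addresses here are invented for illustration; real flannel keeps this table in etcd and programs actual routes or an overlay.

```go
package main

import "fmt"

// A toy version of the routing table the panelists describe: every host owns
// a slice of a flat container subnet, and the table maps that slice to the
// host's real IP. Migrating a container's address to another host is then an
// update to this table, not a change to the physical network.
var containerSubnetToHost = map[string]string{
	"10.1.15.0/24": "192.168.0.11", // host A
	"10.1.20.0/24": "192.168.0.12", // host B
}

func main() {
	for subnet, host := range containerSubnetToHost {
		fmt.Printf("container range %s is reachable via host %s\n", subnet, host)
	}
}
```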
I mean, forget even Linux systems: Microsoft, they're doing it, they're building it. It's not the exact same name, and I forget what the name of the technology is, but they're basically doing the exact same idea on the Microsoft kernel as well. So containers have caught on. It's going to be everywhere. Whether it's compatible across all the different OSes, I don't know, but you're going to see containerization. We didn't go through the benefits; I assumed this audience knew. When I asked whether I could skip the Docker intro, everybody said yes, so I didn't feel like I needed to talk about it. But the benefits of containerization for so many use cases are so incredibly, I'll use the word, impactful, that there's no way everybody won't go to it for certain things.

So if I could summarize this panel: there are multiple container implementations, there are multiple container management frameworks, so I'm forced to ask the question, does the word container mean anything at all at this point? Is there any standard definition of a container?

The word container means this collection of kernel APIs that you use to virtualize the kernel instead of the hardware, from the bottom up.

I've gotten into this discussion of what's the difference between PaaS and infrastructure as a service, and does it make any sense? It makes as much sense as you want it to make. But there is a clear distinction between a VM and a container, and you should definitely use the word container for that: container means it's not a VM, I'm not booting up that many instances of the OS over and over again, I'm actually taking one OS and making containers inside of it. I would say it's useful at that level, but to say containers are what Docker does, not useful.

I might ask one final question to wrap it up. Sorry, were you going to add something to that first?

No, essentially I was going to say, I feel for you all. This space is moving fast and it's super confusing. I think over the next few years the patterns will emerge. A lot of this is engineers doing what engineers do, which is explore a problem space and then find good solutions.

So that leads into my final question, which is: in your ideal world, what would the cloud ecosystem look like in, say, five years? And make it fast.

I heard Australian instead of English. What's your vision for the future of cloud? I can upload a photo to the cloud and it fixes it. Isn't that what the Microsoft commercial was? To the cloud!

So I worked at Rackspace for a time, and at Rackspace, and most other providers, there's a very poor conversion from input power to compute that's useful for human beings. I think the future of cloud is actually taking advantage of the hardware to the point where 50 to 70 percent of the power that we're burning is actually helping human beings out, instead of idly updating top and random statistics inside of the kernel and then going back to sleep. That's where I'd like to see things happening. My wife used to work on hydroelectric dams, and she worked hard at it, and I'd like to see all that power she was generating actually do something useful.

All right, quickly, because we're going over time. Okay, right. Nobody gets up out of their seat until we say you can. For me, what I would like to see with the cloud is more flexibility, right?
We've gotten that with our software design, when we actually build software. We've done object-oriented, and, for Katie's purposes, I have to say we've learned functional. We've learned this flexibility in how we build software stacks, and I would like the cloud to mean we have that same kind of flexibility in how we put together our infrastructure, so that I can reconfigure it easily, I can try things out easily, and it almost becomes as easy as software, where I can say: build this whole thing; that sucked; destroy it all, make a new one, and there was very little cost to anybody in terms of time or resources.

I don't have any idea. We can't even figure out what containers are; how can we figure out what the future of cloud is going to be?

Fair enough. All right, please thank our panelists. Awesome.