How many people here are playing with or using containers — hands up? Okay. Out of curiosity, who's focused on Docker specifically, say Docker or Docker Swarm, that ecosystem? Okay, quite a lot. And how about Kubernetes? Just a few. And Mesosphere? Anybody out there? A few as well. How about OpenStack in general — is anybody using OpenStack? Also just a few. All right, just wanted to level set so I know what people are interested in. And how about network administrators — any network or infrastructure people? Okay, so mostly DevOps — developers, or DevOps specifically? Pretty much. Okay.

So my background: I'm a systems engineer at Midokura. We're a network virtualization company; the company started about six years ago, mostly focused on engineering a product called MidoNet, and we went open source in 2014. Personally, I come from a hardware networking background. I started on service provider products at layer one and layer two — DSL products — then moved to MPLS technologies for carriers, and then to data center and campus. I like to joke with networking people that I've been moving up the networking layers; now I think a lot about security too, moving up the stack.

Given this is a DevOps audience, and I'm now at an open source company in that ecosystem, I'd call out two main challenges. Open source is really powerful, but as Kevin just talked about, there's a whole host of tooling to integrate with, and the technology advances quickly — though that's part of the awesome thing about open source, all these great technologies come out of it. The other main challenge is how you drive adoption: with any new technology people are wary, there are obstacles to overcome, and of course there's the question of how you move into production specifically. All these things need to be thought about.

So today — if my slides want to advance — I'm going to talk about containers, and what's come out of containers, because they're not that new a concept anymore. There's a lot of buzz about them, and that buzz continues, but they've actually been around for a while. Also the movement to container orchestration: how to manage multiple containers, and how to deploy and develop applications on them. But as we'll see, even with other technologies like OpenStack, networking always seems to be an afterthought, and that ends up being a hurdle when going to production: there are security implications, questions of how you scale an application — the network can make or break the move into production.

So why containers? Those of you already playing with them know it's a lightweight way to deploy and a more efficient way to build an app: smaller bits and bytes, less overhead than a virtual machine, because you're not repeating the complete operating system for every single container. In infrastructure terms, that means much higher density on a physical host — which of course has implications for the network too.
But as developers, it's empowering to do something in a portable manner — build a container image on your desktop, then go to staging and beyond after that. Containers have actually been around for a while, leveraging kernel features for security like SELinux, cgroups, and kernel namespaces, though the focus was previously more on system containers. Then things changed: LXC 1.0 was only released two years ago, and it brought more security with unprivileged containers, but the ecosystem back then wasn't as hot as it is now.

So why is it hot now? Docker — everybody knows. Docker is how developers got their hands on containers in an easy fashion, right on their desktop: an easy container image format that enabled a lot of development locally. Networking and security were handled through the Linux bridge and iptables. Where I come from, we cringe at iptables — it makes people uncomfortable and it doesn't scale well. But the ecosystem has totally grown around this: container orchestration technologies like Kubernetes and Mesosphere, which we were just talking about, and companies like CoreOS who have propelled these technologies and advanced them even beyond Docker, using other container formats.

All these container orchestration engines came along, and that's what really helps deploy applications at a wider scale. They can take into account clustering containers and thereby defining services, with load balancing and service replication to keep the app always up. But that's the dream, right? That's where we all want to be, and the expectation is to chase that dream. The reality is that when it's time to go to production, someone like your network security person or admin is going to contain *you*. There are multiple ways to do that — kernel namespaces, SELinux — and that can affect the way the app is actually deployed. So either the app hits a hurdle, or you leave things open to vulnerabilities.

So what's the problem — why are containers insecure? They weren't designed to be fully isolated the way VMs are. Not everything in Linux is namespaced; processes share components on a physical host. That means you can't assume the container next to you on the same physical host is totally isolated and can't reach you. And what does that mean for the network, even for traffic coming off a physical host? These are the kinds of things to think about.

So yes, the container orchestration engines helped with clustering and deploying multiple apps. But what about the networking? As I mentioned, even previous technologies that were widely adopted in the open source world, like OpenStack, sort of left networking to the very end. Everybody's excited about the technology, the deployment, the new tooling, and the networking is more like the plumbing — the boring afterthought. So we're seeing similar patterns in how container orchestration engines address networking to what we saw with those other technologies. The framework is being redefined, and it feels like we're reinventing the wheel. Part of the beauty of open source communities is bringing collaborators together from various organizations, but the problem is you go through that battle all over again — and in the end there's only a set number of defined ways to do networking anyway.
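To make that namespace point concrete, here's a minimal sketch — my own illustration, assuming Linux and root privileges — of the kernel primitive containers build on. A process that unshares its network namespace ends up with only a bare loopback interface; everything else about the host is still shared, which is exactly the partial-isolation caveat above.

```python
import ctypes
import os

CLONE_NEWNET = 0x40000000  # from <sched.h>: create a new network namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)

print("before:", os.listdir("/sys/class/net"))   # host interfaces, e.g. eth0, docker0

# Detach this process into a fresh network namespace (requires root / CAP_SYS_ADMIN).
if libc.unshare(CLONE_NEWNET) != 0:
    raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWNET) failed")

print("after: ", os.listdir("/sys/class/net"))   # only 'lo' remains, and it's down
```

Only the network view changed; the filesystem, users, and kernel are the same — a container next door is one misconfiguration away.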
So it depends on the abstraction level what kind of battles you fight. Even in this world, which networking model do you choose? Docker leans toward the Container Network Model (CNM), while Kubernetes and Mesosphere lean toward the Container Network Interface (CNI) — we'll get to those definitions — and they approach networking a little differently. I don't know why networking is always last; maybe it's just not the hot technology, I guess.

So who's going to care? Well, your network administrators and your security team are definitely going to care — you can't just leave everything wide open. Some of these projects started at Google; in the roots of Kubernetes, everything was internal and open, so they didn't really have to care. But as soon as you start deploying in public spaces, you start caring about which parts of your apps can talk to each other. So that's why you should care too.

With containers, as I mentioned, you're increasing the density on a physical host, so you simply get many more endpoints in your network and infrastructure — which means more complexity. This slide tries to depict the microservices approach to deploying applications, which of course has benefits on many levels: you can make best-of-breed choices for each specific microservice you deploy, which means better and easier development for developers; it's easier to manage in terms of operations, isolating parts of those microservices and how they operate together; and from a business standpoint, you're more efficient in your operations. But at the network level, this gets very, very complex. And when we talk about operations, a lot of people like to blame the networking people: why does it take so long to get a VLAN configured on my router or switch? I need a subnet now. I want to deploy quickly, test something quickly — why does it take me so long to get networking? What's important is that we bring these networking aspects into the agility of this more dynamic tooling. But it gets more complex once you have to consider security as well as scale.

So this little guy — you'll see him again; he's the Kuryr mascot. In legacy infrastructure, you typically have all your servers connected to leaf switches, going up to a spine or core, and your firewalls sit at the core. That's the perimeter-security approach — a legacy approach, and it was fine back when traffic flows were mostly north-south: bare-metal workloads, things coming in and out of the data center, so you could just handle it there. But that poses a problem if you have an attack on the inside: all it takes is crossing that one barrier, and the rest of your network is exposed. With virtualization, there tended to be a lot more east-west traffic flows. So how do you manage that? One approach was virtual network functions: firewalls as virtual appliances, sitting in the data path of the virtual machines. But how do you manage those? Where do you put them? There's a lot to think about there, and more doesn't necessarily equal better — it's more pinch points to manage.
And it's still guesswork where to place those, depending on what your east-west traffic flows look like. So I thought — it's Thursday, right? I don't know, it's been a long week already — throwback Thursday: let's throw back to OpenStack as a reminder of what the challenges were on the networking side, because it feels like they're being repeated on the container side.

For those who aren't familiar, OpenStack is a large open source community that develops cloud infrastructure; many projects have come and spun out of it, but the core was compute and storage. It started about six years ago, in 2010, and it's gained a lot of traction — the OpenStack summits happen every six months and draw on the order of 7,000 to 9,000 people now, it seems. It's enabled a lot of engineers from different areas of expertise to come together. And again, in the beginning it was rough — rough going into production, and people didn't trust it. But it's definitely come a long way since then; people have been running it in production for many years now.

As I mentioned, Neutron is the networking project within OpenStack, and it was late to the game. Networking originally lived under the compute project, which meant it didn't get much attention — nobody really thought about it. It was cooler to get virtualization going on your servers, and nobody was scaling yet because nothing was in production anyway. But as soon as people started going to production, they realized there were inherent problems with the way networking was done: it needed focus, and it needed to be defined as a framework. So in the end, Neutron is that project, and it's a networking framework. There's a reference architecture, but what came out of it being an individual project was a push for a pluggable design: multiple solutions can be inserted behind the same defined set of networking APIs. That formed an advanced networking framework — you can have really interesting tiered architectures, with layer two through layer four networking services.

That's what I wanted to get to, actually. Part of the beauty was that multi-tenancy — isolated workloads — was a really important aspect of OpenStack, along with a whole host of network functions, from switching to routing. People have various ways of implementing these things, but they're important, and they apply to containers as well, as we'll see. Virtual machines can just be considered endpoints, and all this functionality can be provided to whatever that workload is in the end. Ultimately it just mimics what we know from hardware networking — ACLs, stateful firewalls, NAT — applied to the way we're moving forward in deploying applications. And there are several different vendors and projects: some proprietary, some open source. MidoNet is a completely open source project; Open vSwitch is the basis of the original reference architecture for OpenStack; and these are all hardened Neutron plugins that people are using in production today.
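Those Neutron abstractions are concrete API objects. As a hedged sketch — using python-neutronclient, with placeholder credentials, endpoint, and names — this is the kind of thing any Neutron plugin is asked to implement: a tenant network, a subnet, and a security group rule, independent of which plugin realizes them underneath.

```python
from neutronclient.v2_0 import client

# Placeholder credentials and endpoint, for illustration only.
neutron = client.Client(username="admin", password="secret",
                        tenant_name="demo",
                        auth_url="http://controller:5000/v2.0")

# A tenant network and subnet: the basic multi-tenancy building blocks.
net = neutron.create_network({"network": {"name": "app-net",
                                          "admin_state_up": True}})
net_id = net["network"]["id"]
neutron.create_subnet({"subnet": {"network_id": net_id, "ip_version": 4,
                                  "cidr": "10.10.0.0/24"}})

# A security group with a single ingress rule: allow TCP/80 only.
sg = neutron.create_security_group({"security_group": {"name": "web"}})
neutron.create_security_group_rule({"security_group_rule": {
    "security_group_id": sg["security_group"]["id"],
    "direction": "ingress", "protocol": "tcp",
    "port_range_min": 80, "port_range_max": 80}})
```

Whether MidoNet, OVS, or another plugin sits behind the API, the objects are the same — which is exactly the property Kuryr reuses for containers, as we'll see next.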
So I'm going to throw a bunch of open source project names out there, as I mentioned. Has anyone heard of Kuryr? No? All right. Eh? Yeah. So Kuryr is actually Czech for courier — a carrier of network packets, you could say. It was started by a former Midokura developer who likes Czech things; that's how he came up with the name. What it entails: it's a project under the OpenStack umbrella, but it can be used outside of OpenStack, as we'll see. It basically takes the framework already defined by that wider community — the Neutron networking framework, with its advanced networking functionality — and applies it to containers. So instead of reinventing the wheel, it leverages the already existing network framework from Neutron. That matters especially since the Container Network Model and the Container Network Interface are really not that mature and are going through the same battles, even repeating identical practices — like iptables, which is known not to scale well. These are exactly the things OpenStack tripped on and recovered from, and they're being repeated in the container world. So the mission is to bridge container networking to these OpenStack Neutron networking abstractions.

What is Kuryr, literally? It's a group of projects, all up on GitHub. kuryr-lib holds the common libraries shared across all the container orchestration integrations. What's leveraged from OpenStack is the Neutron client — the definition of the framework — the Keystone client, used for authentication, and in turn some of the vendor bindings that happen on the physical host. kuryr-libnetwork is focused on the Docker networking plugin, and kuryr-kubernetes is the Kubernetes API watcher and CNI driver, which we'll also go over — that's just where it stands today; it's obviously an evolving project and continues to be worked on. And there's another project focusing on the storage aspects, on how those could be leveraged from OpenStack as well. So it's a growing project with adoption across multiple contributors and vendors, and it's been widely received and welcomed for enabling Neutron in the container world.

I'll talk first a little bit about Docker, since that's the more mature part of Kuryr. libnetwork is the project within Docker that focuses on the networking, obviously. There are various driver types — I put an asterisk for "former" because things changed a bit with Docker 1.12, but for backwards compatibility they kept the three main ones: null, bridge, and overlay, plus the remote driver, which is where Kuryr fits in. How do these play out for security? Null — obviously nothing; you're relying on your perimeter firewall or a physical box upstream. Bridge: iptables. Overlay: iptables. These are things known to have issues and not to scale well. And then the remote driver allows an external third party to come in and plug in, and that's where Kuryr leverages Neutron. Kuryr has been working with Docker libnetwork since 1.9. As I mentioned, it's a remote driver, but Kuryr actually handles two things: IPAM — handing IP addresses to the containers — and the binding that happens on the physical host.
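To illustrate the remote-driver mechanism, here's a toy sketch of the libnetwork plugin protocol — not Kuryr's actual code. libnetwork discovers a plugin over a local HTTP socket and POSTs JSON to well-known endpoints; a driver like Kuryr answers those calls and translates them into Neutron API calls. Flask is used here purely for brevity.

```python
# Toy libnetwork remote driver: the protocol that Kuryr implements.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/Plugin.Activate", methods=["POST"])
def activate():
    # Tell Docker which plugin APIs this process implements.
    return jsonify({"Implements": ["NetworkDriver", "IpamDriver"]})

@app.route("/NetworkDriver.CreateNetwork", methods=["POST"])
def create_network():
    req = request.get_json(force=True)
    # A real driver would translate this into a Neutron call here, e.g.:
    #   neutron.create_network({"network": {"name": req["NetworkID"]}})
    print("would create a Neutron network for", req["NetworkID"])
    return jsonify({})

@app.route("/NetworkDriver.CreateEndpoint", methods=["POST"])
def create_endpoint():
    req = request.get_json(force=True)
    # ...and a Neutron port per endpoint, then bind it on the host.
    print("would create a Neutron port for", req["EndpointID"])
    return jsonify({"Interface": {}})

if __name__ == "__main__":
    app.run()
```

The point of the design: Docker pushes the calls, and the driver is free to satisfy them with whatever back end it wants — here, Neutron.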
So from a networking perspective — well, I'll speak from a MidoNet perspective specifically — a container can just be an endpoint at the end of a virtual topology. The way we handle it, you can build whatever virtual topology you want, with whatever functionality, and to us the container just looks like a virtual port, the same way we treated virtual machines. So there's not a lot that has to happen or change on the back end: we can take a production-grade networking solution and apply it to containers.

I did mention Docker 1.12. Docker 1.12 was announced at DockerCon, back in June I guess. What they talked about was their clustering with Docker Swarm, and unfortunately the only supported networking for that was their overlay solution. The Kuryr project is hoping that for Docker 1.13 there will be time to work with Docker and plug in again as a remote driver — I think 1.12 was just released quickly, without time for the third-party vendors to integrate.

The way the CNM abstraction looks, the important definitions are the sandbox, the endpoint, and the network. This is different from how Kubernetes defines networking — in that it doesn't really define networks at all; Docker explicitly defines networks and subnets. What Kuryr does is leverage Docker's push model for the architecture: when those calls are made to libnetwork, Kuryr translates them into Neutron networking calls. This allows any Neutron solution to be leveraged within a Docker container environment. Kuryr just does the translation, makes the corresponding call in Neutron's terms, and the Neutron solution implements it the way it always did before. That means you can apply a whole host of functionality to containers without waiting for a particular definition — these things have already been well defined and proven with more solid, production-grade networking.

One example is Kuryr with MidoNet. MidoNet is a Neutron plugin, one of the options I mentioned that can sit behind Kuryr. Basically, when the Docker API calls are made, Kuryr does the translation to Neutron — for example, providing the IP. What then happens on the back end is that each vendor or solution has a binding script, which runs locally on the physical host where the container lives. Those calls are made, the binding is created, and that way the networking solution knows exactly which physical host the container resides on. After that, you have the freedom to apply all the networking definitions from Neutron and get a production-grade networking solution. So that's Kuryr for CNM.
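From the user's side, that whole translation is invisible. As a hedged sketch — assuming a kuryr-libnetwork installation where the network and IPAM drivers register under the name "kuryr" — creating a Kuryr-backed network with docker-py might look like this; each object below should materialize as a Neutron network, subnet, and port behind the scenes.

```python
import docker

client = docker.from_env()

# IPAM is handled by Kuryr too: the address pool becomes a Neutron subnet.
# (Driver name "kuryr" is assumed; check your kuryr-libnetwork install.)
ipam = docker.types.IPAMConfig(
    driver="kuryr",
    pool_configs=[docker.types.IPAMPool(subnet="10.10.0.0/24")],
)

# The CreateNetwork call is pushed to the remote driver, which creates
# a matching Neutron network.
net = client.networks.create("demo-net", driver="kuryr", ipam=ipam)

# Attaching a container triggers CreateEndpoint: a Neutron port plus
# the binding script on the physical host.
container = client.containers.run(
    "nginx:alpine", detach=True, network="demo-net", name="web0"
)
print(container.name, "attached to", net.name)
```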
But what's developed since — at least in the sample set of people we've spoken with — is a lot of adoption of Kubernetes. And Kubernetes doesn't have the same networking framework as Docker; there's this other standard called CNI. So why is Kubernetes important? Well, it's already a proven solution within Google, and Google happened to open-source it. Of course it was implemented differently inside Google, but now that it's open source, it's gotten tons of adoption — huge growth in the last 12 months — and several vendors in the ecosystem have supported it because of that interest. So I'll go over a little of the Kubernetes architecture and then how it fits in on the networking side with Kuryr.

What's challenging with Kubernetes is that it has more of a pull model: any Kuryr-style implementation has to watch for the events that happen in order to make any configuration changes. Briefly, the architecture almost looks like OpenStack, in the sense that there's a master node, kind of like the OpenStack controller, taking care of state storage, the API server, and things like that. The work happens on the worker nodes — think compute nodes — where the containers and pods actually live. In the Kubernetes world, the unit is a pod, which is made up of containers, and in networking terms you get an IP per pod. etcd is the persistent state store; that and the rest of the control plane — the Kubernetes API server, the scheduler, the controller manager — run via the master node. And Kubernetes is built so that all these pieces are interchangeable — pluggable, actually, is probably the better word. On the worker nodes there are kubelets managing the specific containers, and kube-proxy — which is what you'll see Kuryr effectively replace — basically provides load-balancing-type functionality, implemented through iptables on the physical host.

Keeping that in mind, the Kubernetes networking model strives to define itself through network policy: a series of labels applied to a service that define what is and isn't allowed to talk — more of a whitelist approach. The distinct problems it recognizes are container-to-container communication, pod-to-pod, pod-to-service, and external-to-internal.

One of the main default solutions for Kubernetes is Flannel. Originally, just as in the early Docker days, networking on a host was open but you couldn't get cross-host communication, and that in turn called for an overlay. An overlay encapsulates the packet — one example is a VXLAN header — to identify the segment and enable cross-host communication. Some of the problems with that: as I mentioned, iptables for NAT and maybe for security, depending on how good you are at working with iptables, because the policy definitions aren't really there yet in the framework. And multi-tenancy: if you isolate tenants by host, what happens when a physical host goes down — what happens to that tenant's workloads? Another thing to consider is that even with the movement toward containerization, people are still working with a lot of virtual machines; not everything is immediately containerizable. So how do you put containers and VMs on the same network, in the same solution, in the same networking stack, with everything sharing physical hosts?
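The pull model mentioned above is worth making concrete. A hedged sketch using the official kubernetes Python client — not Kuryr's actual code — showing the shape of a watcher: subscribe to pod events from the API server and react, which is essentially what a Neutron-translating controller has to do.

```python
from kubernetes import client, config, watch

# Load credentials from ~/.kube/config (use load_incluster_config() inside a pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# Kubernetes pushes nothing to us; we pull by watching the API server's stream.
w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces):
    pod = event["object"]
    print(event["type"], pod.metadata.namespace, pod.metadata.name)
    # A Kuryr-style watcher would react here: ADDED -> create a Neutron port
    # for the pod; DELETED -> clean the port up.
```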
So I'll talk about the MidoNet integration with Kubernetes specifically, through Kuryr — though any Neutron networking solution could be used with Kuryr. As I mentioned, MidoNet came out of the company Midokura, which started six years ago. Our founders and our CTO had more of a distributed-systems background — they worked at Amazon previously; that was their specialty and expertise. So when they built the MidoNet architecture, it was specifically designed as a solution that scales without bottlenecks or pinch points; it's a distributed-systems-style architecture. Another highlight: in 2014 we went open source, so everything's available on GitHub, completely open.

Part of how the solution achieves that is that all the intelligence lives at the edge of the network. The edge only needs the information for things leaving the box — and that includes security. If you have a policy or a firewall rule, for example, MidoNet simulates it at the edge: it computes the packet's path through whatever logical topology was built, and if any rule anywhere says drop, the packet never even leaves the physical host. That eliminates the hairpins and bottlenecks of virtual network functions or perimeter firewalling, and that's what gives it the ability to scale.

So where is this today? The Docker libnetwork work done with Kuryr is the more advanced piece: it's upstream now and released, working with standalone Docker and the old Docker Swarm from before 1.12, and expected to work with 1.13 onwards. The Kubernetes integration with Kuryr is newer — even with MidoNet it's still in tech preview — but it works starting with Kubernetes 1.2, which is what was officially tested. The two integration points are the CNI driver, which sits on every worker node and does the binding — that's how the solution knows this container is on this worker node and tied to this logical network — and Raven, the process on the master node that listens for all the events: any Kubernetes API calls that are made, Raven sees.

Comparing the default against what Kuryr adds to get these more advanced networking functions: by default, kube-proxy resides on each worker node at the edge, and Flannel is one example of a pretty simple, more primitive overlay there. That's replaced by the CNI driver residing on the worker nodes, Raven on the master node, and in this case the MidoNet agent — or whatever networking solution you like — implementing on the worker node. And as you can see, the definitions are a little different from what we saw with the Container Network Model; there are a few more of them. Some of the isolation is achieved with namespaces: a Kubernetes namespace maps to a Neutron network. And, as I mentioned earlier, a pod maps to a Neutron port — the thing that asks for an IP.
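For concreteness, here's a hedged skeleton of the CNI contract that a driver on the worker node has to satisfy — illustrative only, not Kuryr's actual driver. The runtime execs the plugin binary, passes the command and the pod's network namespace in environment variables, pipes the network config JSON to stdin, and expects a JSON result on stdout.

```python
#!/usr/bin/env python3
# Minimal CNI plugin skeleton (illustrative only; addresses are fake).
import json
import os
import sys

def main():
    cmd = os.environ.get("CNI_COMMAND")          # ADD, DEL, or VERSION
    if cmd == "VERSION":
        print(json.dumps({"cniVersion": "0.3.1",
                          "supportedVersions": ["0.3.0", "0.3.1"]}))
        return
    conf = json.load(sys.stdin)                  # network config from the kubelet
    netns = os.environ.get("CNI_NETNS")          # e.g. /proc/<pid>/ns/net
    if cmd == "ADD":
        # A real driver would create a Neutron port here and bind its
        # interface into `netns`; we just echo back a fixed, fake address.
        print(json.dumps({
            "cniVersion": conf.get("cniVersion", "0.3.1"),
            "ips": [{"version": "4", "address": "10.10.0.5/24"}],
        }))
    elif cmd == "DEL":
        pass                                     # tear down the port/binding

if __name__ == "__main__":
    main()
```

And then, when you define a service — what makes Kubernetes powerful is that you can give a service a replication factor, with load balancing and health checking on top of it.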
With those health checks, if one pod goes down, Kubernetes brings another one up. But from a networking perspective, that is just load balancing: the service equates to a load balancer VIP, and the endpoints on the back end are simply the load balancer pool members.

Taking a closer look at the worker node, as I mentioned: the kubelet still remains. We replace kube-proxy — which was computing the load balancing and endpoints per host — with the MidoNet agent, which has cloud-wide awareness, resides there, and programs the kernel datapath.

So where are Kuryr and MidoNet today, specifically? As I mentioned, it's in tech preview. What's done: the CNI driver, Raven as the watcher, and the namespace implementation, tested specifically on CoreOS — as I said earlier, CoreOS has made great strides as a supporting operating system for these container orchestration tools. What was interesting for MidoNet is that we used to have one process running in user space, programming the kernel; now all of our processes are actually containerized, which is kind of cool. I think that has benefits even beyond working with container orchestration engines.

More broadly, where will the bigger Kuryr project go next? It will do more to bridge containers and virtual machines — you can already attach containers to existing Neutron networks, but there's more testing and consistency work there. The multi-tenancy aspects: isolation across worker nodes, and letting tenants — or administrators above them — define their own security policies. Then the more advanced networking services that come with Neutron: as the Container Network Model and Container Network Interface get better defined, they can map onto already existing Neutron components. QoS too: taking care of bandwidth from a networking perspective, traffic shaping, making sure nothing hogs the pipe, and maybe prioritizing certain applications. And then, of course, working with other container orchestration engines, adapting to whatever their API frameworks might be — probably at the top of the list is Mesosphere, where there's a lot of popularity, plus Cloud Foundry and OpenShift. And Magnum support as well: Magnum in OpenStack puts containers in VMs for isolation, defines bays, and leverages these orchestration engines, so Kuryr will work with that project too.

So I tried to whip through it because I know you're probably getting hungry, but I wanted to make sure you got a bit more education on these open source projects focused on networking and security for containers. As I mentioned, both of these projects are open source, as of course are OpenStack, Kubernetes, and Mesos. If these kinds of things interest you, or you know people who'd be interested in learning more and joining, we always welcome more community members. So thanks for your time — hopefully we can all take a deep breath now, because I probably talked your ear off. Thanks for your attention. Thank you.
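To close the loop on the service-to-VIP mapping described above, a final hedged sketch — using the Neutron LBaaS v1 calls in python-neutronclient, with placeholder credentials, IDs, and addresses — of the translation a watcher performs when a Service and its Endpoints appear.

```python
from neutronclient.v2_0 import client

# Placeholder credentials; in Kuryr these come from its own configuration.
neutron = client.Client(username="admin", password="secret",
                        tenant_name="admin",
                        auth_url="http://controller:5000/v2.0")
SUBNET_ID = "..."  # the Neutron subnet backing the pod network (placeholder)

# Kubernetes Service  ->  LBaaS pool fronted by a VIP
pool = neutron.create_pool({"pool": {
    "name": "svc-web", "protocol": "TCP",
    "lb_method": "ROUND_ROBIN", "subnet_id": SUBNET_ID}})
pool_id = pool["pool"]["id"]

# Each Endpoints entry (a pod IP)  ->  a pool member
for pod_ip in ("10.10.0.5", "10.10.0.6"):
    neutron.create_member({"member": {
        "pool_id": pool_id, "address": pod_ip, "protocol_port": 8080}})

# The Service's cluster IP  ->  the VIP in front of the pool
neutron.create_vip({"vip": {
    "name": "svc-web-vip", "pool_id": pool_id, "protocol": "TCP",
    "protocol_port": 80, "subnet_id": SUBNET_ID}})
```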