Welcome, everybody. I believe this is the last session before lunch, so hopefully you're still chipper from the morning. My name is Phil Robb. I work for the Linux Foundation, where I carry two titles: I'm the VP of Operations for networking and orchestration across all the networking projects at the Linux Foundation, and I'm also currently the executive director of the OpenDaylight project. This panel is on containers and networking, a symbiotic relationship. I've got some canned questions, of course, but hopefully we get a dialogue going and a good discussion amongst the group. We seem to have a pretty good crowd, so I hope we can have a lively discussion. I'm going to start by having each of our panelists introduce themselves, tell you where they're from and what their background is relative to containers and networking, and give them the opportunity, right off the bat, to express what they most want to convey to you in this panel as part of their introduction. So I'll start right here with Chris. Am I right? Chris, go ahead and start; you might have to start it off. Okay.

My name is Chris Wright. I work for Red Hat, and I'm from Portland. I run the technology office at Red Hat, and I'm also on the board of the OpenDaylight project. My group's focus is looking forward and understanding what the networking needs are, so we spend a lot of time looking at networking. There's one thing I'd like you to take away: networking is absolutely fundamental, both to the infrastructure that we're building and to the modern applications that we're building.

Hi, I'm Josh Wood from CoreOS, where I am responsible for documentation. Chris tells me I look like I'm from Portland, but I in fact work out of San Francisco and am from Kansas City, Missouri originally; I take it as a compliment that I look like I'm from Portland. At CoreOS, a lot of our concerns in container networking revolve around modularity and the abstraction of the interface to different network regimes. We want containers to be portable between your own data center and cloud provider environments, and more importantly between those cloud provider environments, and that's at the heart and soul of the work we've done around CNI, first in our rkt container engine, and now in Kubernetes itself. We like to think of it as a sort of VFS for different networking regimes: something that allows a lot of different kinds of networks, network policy, and network IP address management schemes to be plugged into this important orchestration system without requiring that system to internalize a lot of knowledge about those individual styles of networking. So in my view, and I think in defining some of CoreOS's products and projects, that view towards modularity, and the key point that good software architecture revolves around interfaces, is probably the thing I hope to draw out the most in the talk today.

Patrick. Hey. Hi everybody, I'm Patrick Chanezon from Docker. I'm from Paris originally, not from Portland, but I live in San Francisco. I'm a member of technical staff at Docker, I represent Docker at the CNCF, and my first project when I joined Docker two years ago was to help establish a plugin model for networking and storage. In terms of networking, what I'd like you to remember is that at Docker we really try to make complicated things simple.
So we try to make the simple things easy to use and the difficult things possible, and that's what we're trying to do with our container networking model and the plugin system that's behind it.

Hi, I'm Swarna Padila from a company called Avi Networks. I'm not from Portland and I'm not from San Francisco; I'm from the South Bay. I live in San Jose and work in Santa Clara. One thing I hear over and over again from the networking industry is that there are two distinct elements of networking. One is the connectivity part, where we've seen a lot of discussion and movement and have pretty much resolved everything; there's probably an overlay that just works out of the box. The other, more exciting part is the non-connectivity services that the network infrastructure provides: service discovery, IP address management, DNS, scalability, security, all the elements we have to constantly keep in mind. All of these non-connectivity services form something called a service mesh, and we at Avi Networks provide the service mesh for modern architectures and also for traditional architectures, going back to the load balancing kinds of things. Like Patrick, I also represent Avi in the CNCF, Kubernetes, and OpenStack communities.

Very good. Thank you, Swarna. So Chris, what are the options today with regard to container networking, CNI, CNM, OVN, and so on? How did they come about? What are they trying to solve? Where are their similarities and where are their differences?

We're not going to make lunch. So, maybe a little background, and I won't be able to speak as well to CNM. In both cases, the goals of CNI and CNM are to create some kind of, call it standardization and pluggability, in networking for the container orchestration layer of the system. OVN, or "oven" however you pronounce it, is slightly different: it's more of an implementation, which could then plug into these plugin interfaces. CNI came about, as you heard from Josh from CoreOS, from work they were doing to create some standard specifications around the container runtime. That was appc, which was later folded into the work we've done in OCI and standardized as part of the OCI runtime specification. Associated with that, a container is not very useful if it's not connected to the network, so CNI came about. In a similar timeframe, Docker was working on its own networking model, and I can't really represent that heritage well, so I'll get Patrick to give us the details. In a different, parallel universe is the implementation side. Open vSwitch is one example of a virtual switch that sits on the host and connects all the different containers together, and OVN is a control plane managing OVS instances, building the logical topologies and connectivity between containers. From a Red Hat point of view, we've gotten involved in many of these different projects, and currently we've spent time building OVN into Kubernetes through the CNI interface. For us it's about building connectivity between the different services that application developers are building.
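For anyone who hasn't seen CNI in practice, the interface Chris describes is driven by small JSON network configurations that the container runtime hands to a plugin binary. A minimal sketch, assuming the standard bridge and host-local IPAM plugins that ship with the reference CNI plugins; the network name, bridge device, and subnet below are purely illustrative:

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```

Swapping in a different "type" (and its plugin binary) is how an orchestrator can change networking regimes without internalizing any of the details, which is the VFS analogy Josh used in his introduction.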
And I think what's most important out of all of that: at the beginning I said I care a lot about networking as fundamental to both the infrastructure and the modern applications that we're building. Maybe equally important, precisely because it's so fundamental, networking is not the domain expertise of application developers, or even of infrastructure operations teams on the server side. So finding the right balance of separation of duty, and keeping it really simple for people who need it but don't care about the details, is really critical. And in both the CNI and CNM cases, you're seeing an abstraction that's really friendly to the application developer who's trying to connect services together without having to care a lot about the underlying details. Hopefully that answers some of your questions; I'd love to hear Patrick's view.

And Patrick?

Sure. I'd say CNI and CNM don't play at the same level. CNM is a container networking model that we created for Docker, and it defines a specific set of objects that need to be implemented in the actual system. Within that model there's a driver model that corresponds to the plugin; I'd say the driver object in the CNM model corresponds to CNI plugins. And for the drivers that are implemented in Docker with libnetwork, our team is working to make sure that they could work well as CNI plugins too, so they participate in the CNI working group in Kubernetes. Another angle I wanted to add is that this driver model allows operators to really optimize the networking layer, using the expertise that network operators have with traditional technologies like macvlan or ipvlan. But for developers it presents a very simple interface: they just define which network their container should be on in order to do the partitioning, without knowing how it's going to be implemented by the operators. Creating that separation and abstraction, between what operators need to configure for the system to be very efficient and what developers need to define at a high level for their applications to be well separated, is really what all these efforts are about. The last thing I wanted to add is that there's an important aspect of networking our team has been working on a lot recently, which is Windows. Windows networking is a whole other beast, and we've been working pretty actively with Microsoft. I think with 17.06 of our Docker platform, which shipped recently, you can finally have swarm clusters that are multi-OS between Linux and Windows.

Swarna, do you want to comment there?

I actually just had a continuing question for you on that, Patrick. You were talking about doing ports from the drivers to the CNI plugin model. What does that port look like, in short, and what are some of the differences between these two ways of abstracting the network interface?

So this port means implementing the CNI interfaces in terms of the implementations that are in the libnetwork drivers. That's all I know; I'm not coding these drivers myself, but Madhu on our team would be a good person to ping about that.

The fact that such ports would be possible, even conceptually, I think tells you some of the value of trying to construct these sorts of interfaces, so that we can trade these parts around a little bit.

Yeah, and I would add that that's one of the values the Cloud Native Computing Foundation provides: helping all these projects interoperate.

Well said. Agreed.
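Patrick's operator/developer split is easiest to see on the Docker CLI. A rough sketch, assuming a Linux host whose uplink is eth0; the subnet, gateway, and network name are chosen only for illustration:

```sh
# Operator side: create a network backed by the macvlan driver,
# wired to the physical parent interface and an existing subnet.
docker network create -d macvlan \
  --subnet=10.0.40.0/24 --gateway=10.0.40.1 \
  -o parent=eth0 prod-net

# Developer side: no macvlan knowledge needed, just attach to the named network.
docker run -d --name web --network prod-net nginx
```

The same `docker run` line works unchanged if the operator later recreates the network with a different driver, such as ipvlan or an overlay, which is the separation Patrick is describing.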
I'd like to just add to Chris's earlier point about the services abstraction that needs to happen in networking. More and more, we see non-networking application developers, as Chris mentioned, deploying all these network services. So how do we make it simpler for them to deploy a service discovery service, an auto-scaling service, a load balancing service, or a DNS resolution service? How do we make this easier with service meshes? I think that's where the exciting space is, and it's great that Envoy was just announced this morning. It solves an interesting challenge, because in all these years it hasn't been addressed as something that the non-networking developers or non-networking teams could handle themselves. It enables self-service, so that app developers can focus on their real work, rolling out blue-green updates if they need to, focusing on day two operations, rather than being pulled into the mire of day zero: how to set up my network infrastructure.

Swarna, what is the relationship between VMs and containers now, and what do you see coming in the future? How do Greenfield versus Brownfield environments affect that relationship?

Containers and VMs: I love that relationship. At least from what we see, containers are more of a single-application, single-purpose kind of deployment, and it's less about containers versus VMs and more about containers and VMs. That's what we see being more and more adopted in the enterprise; you can even deploy a container in a VM. Thankfully, at least the audience we talk to don't look at it as containers versus VMs. There's a different use case for each, and app teams can deploy their application in a container, quickly get it up and running, test it in a quick environment, and then migrate that to a production instance. On Brownfield versus Greenfield deployments: we see a lot more Greenfield being deployed on bare metal, but they have a choice of deploying on bare metal or public cloud these days, so it will probably become more like a container in a VM in a public cloud, because that's the only way you can deploy a container in a public cloud anyway. In Brownfield, at least in the deployments we see, it's more around virtual deployments.

Just to add a perspective, you're being too nice: it's a mess. You've got legacy bare metal applications that we all understand, sitting on some VLAN that's been programmed for a decade. You've got VMs that are part of this Brownfield environment that may or may not be hosting containers that need to talk to the legacy side of the network, and then potentially to containers which may be running on bare metal or in VMs, and you have to make that all work. It really is complicated, especially when you have different network orchestration components managing the container piece as well as the underlying virtual machine piece.
So it's non-trivial. The Greenfield environment is really nice because you can simplify the problem domain, but it's not realistic for most enterprises: they have this really rich history of stuff that just isn't going to go away anytime soon. So it's really complex, and some of the things I know we're working on involve integration between those layers, so that you can maintain the isolation that's critical but don't do something that's fundamentally lacking in performance, like double encapsulation, just because you have two different systems that don't know how to talk to each other. There's a long way to go.

It's interesting you use the words double encapsulation, because something that's come up in discussion around the edge of this is that one of the reasons we might want to put VMs into containers is so that we can then communicate with the container networks that we do have a good understanding of, within the orchestration system we're working in, and sort of drag them into our world. If anything is double encapsulation, it's got to be that, right? Although we would point out that the Borg and Omega systems at Google fundamentally schedule all of their VM workloads inside of containers that those systems know how to deploy, monitor, and manage the lifecycle of. So it amused me when it came up in conversation.

Yeah, and to rebound on that and answer the original question about what we see in terms of customer adoption: while VMs have been very successful in the past ten years, they haven't completely replaced bare metal; you still see both. I think with containers it will be the same. There will be more workloads moving to containers, but they will coexist with VMs and with bare metal, creating the mess that you were talking about, Chris. That said, one of the trends we're seeing with a lot of our enterprise customers is that they have a lot of these legacy applications sitting in VMs today that they don't dare touch, because they are well configured. One of the things we see them doing is modernizing these traditional applications by containerizing them and then deploying them on an enterprise container platform.

This is only a 40-minute session, so I want to make sure there's plenty of opportunity for you to ask questions; it's always nice to get a set of experts up here with a variety of viewpoints. Any questions from the audience at this point? Okay, there's one in the back.

[Audience question, off-mic.]

That sounds like Brian. So there are some real challenges, and how do you even start? There's been a lot of work done to date to enable the kind of workloads Brian's describing in virtual machines, and some of that work is actually largely unrelated to networking. It's more about the platform being able to support high-performance network applications running inside a virtual machine, or in this case inside a container, and then connecting that to a more physical portion of the network. That work took a number of years, and we're just starting to see it in the container space. We'll have a whole set of architectural discussions and arguments over what's sane and what's not, in terms of where you start to break what feels like cloud abstractions when you do
something like pinning an application to a NUMA node, giving an application access to multiple devices in the same pod, or in the same container essentially, and trying to connect it to the physical network. All of those things are either the beginnings of active discussions or discussions that are still pending, but today we're not really there; the container space is really servicing more enterprise and web-style workloads. So there's real work to do. I believe there's a lot of potential, and we just have to find the right path forward, because I do think the value of containers is well understood from a development process point of view. And from a network application and packet processing performance perspective, you eliminate a lot of overhead by working in a container directly coupled to the OS, which can be coupled to I/O devices, without the virtualization overhead in the way.

I'm not knowledgeable enough to sit at that table, but I really liked what Chris said: it's a mess out there. We know there are a lot of developers in the VM world and a lot of developers in the container world, and in reality, especially in the IT environment, customers have both, and bare metal and legacy stuff too. So from your perspective, where are the projects where you think the two communities can collaborate and really make a good stack for our customers?

Well, I think Patrick already mentioned it by name, and I think we've all hinted at it and come close on the edges of it: one of the key things we believe is that the value of the CNCF is in helping to define these things. It's why, as we prototyped and thought about CNI and built it out for rkt, we always had in mind moving it farther upstream, so that something like that could at least engender a discussion for the other folks; we're all trying to solve really, really similar problems. So I think the CNCF can be the hub of those discussions, and currently, obviously, it is the owner of that CNI standard. To go back to the theme I used in my introduction: when you can describe a development environment as a mess, when it's new enough, when there's so much interest and so many different corporate, development, technical, and architectural points of view represented, that's when interfaces, and standards for those interfaces, become absolutely key. If we have something modular that lets our applications or a cluster orchestrator connect with whatever brilliant new networking scheme comes along, one I'm never going to be the person who thinks of, that modularity is what gives the new scheme a chance to actually be adopted, to get any uptake, to really be something you can work with and use. If they all have a driver interface that you have to master at some low-level C programming layer to be able to employ them at all, then, as we've all said in different ways and in different words, a lot of this is not the core competency of the application developers who actually want to use these networks. So I think the modularity of interfaces is key to empowering the kind of people who have really interesting ideas about what applications ought to do, rather than really interesting ideas about how to implement networks, might be the best way to put it.
I wanted to add that, along with the Cloud Native Computing Foundation, the CNCF, I personally would make a request to the Linux Foundation to think along the same lines, because the Linux Foundation, on the open networking side, ONAP and those projects (I forget all the acronyms), at least sets standards and addresses traditional networking, while the CNCF addresses this from the container, cloud native angle. So bringing some kind of standards, and bringing that kind of modular interface, is more critical now than ever. If anyone can help with that at the Linux Foundation, it's something for all of us as a community to look at.

Duly noted. Yes, and I can actually say that, particularly between ONAP and the CNCF, discussions have begun to occur. Perfect.

Actually, there is a place, since you asked for a project. It's a little hard, because you quickly get into Emacs-versus-vi kinds of discussions where it's hard to have a rational conversation. There are technologies today that allow you to bridge, and they all just use different schemes, whether it's gateways or integrating directly with existing dynamic routing infrastructure. But again, you get these arguments. I think Josh has a great point that if you create well-understood interfaces, you can at least choose your own implementation. Maybe that's punting the problem down the road a little, but eventually we'll see some best-of-breed practices emerge. In the CNCF, not that we're here to pump the CNCF, it just so happens there's a working group focused on networking. So within projects there's project activity doing development, and there's a working group focused on networking trying to understand, and this isn't really how they would describe themselves, I'll describe it this way: how do we evolve something like CNI, to put it really simply, and really understand what the use cases are and what challenges need to be addressed from a cloud native perspective. And as Phil alluded to, there is work underway to bring about closer collaboration across all these different projects that are addressing all sorts of different parts of networking, not just container networking.

I'd say I'm not the best qualified to answer that question; that would be Madhu on our team, so I can connect you to him to see what his plans are in this area. I know what they're working on right now is making sure that the plugins we ship by default in Docker, like macvlan, ipvlan, and overlay, could work as CNI plugins as well. But here I think you're talking about the reverse, using CNI plugins as plugins in Docker networks, so I don't know what the plan is for that.

Very good. Duly noted, thanks, and I'll ask you for your email address afterwards so we can continue over email.

You bring up an awesome point. I was talking about how it's got to be simple, so you're essentially obscuring a lot of the internal implementation, but it also has to be debuggable, so you need to have visibility. Maybe that's more on the ops side, and maybe there's some part the app dev side needs to see, but the more complex these stacks get, the more difficult it is to do any kind of debugging. On a Linux server, a server administrator understands how to do pings and traceroutes and tcpdumps and figure out what's going on; when you've got macvlans and ipvlans and VXLANs and all this stuff, you quickly don't even know what's going on. So having tools that help you understand the state of the network, give you visibility, draw out instrumentation, and tell you, when a physical link goes down, which containers are affected, that's really critical. If we don't get that right, we're building something that's just not maintainable.
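For the basic, do-it-by-hand version of the visibility Chris is asking for, the familiar Linux tools still work once you step into the container's network namespace. A rough sketch on a Docker host; the container name and interface are purely illustrative:

```sh
# Find the container's PID, then run the usual tools inside its network namespace
PID=$(docker inspect -f '{{.State.Pid}}' web)
sudo nsenter -t "$PID" -n ip -br addr       # addresses as the container sees them
sudo nsenter -t "$PID" -n ip route          # its routing table
sudo nsenter -t "$PID" -n tcpdump -ni eth0  # packet capture from inside the namespace
```

What this doesn't give you is the higher-level mapping Chris describes, from a failed physical link to the set of affected containers, which is exactly the kind of tooling he argues the platforms still need.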
Just to be clear, I didn't want to say that containers are similar to VMs; it's really just practical reality that there are connectivity requirements. So we should absolutely work towards the simplest model, especially in a, pardon the pun, contained space. One example is to give every container an IP address, an IPv6 address, and just consider it a simple routing problem. That's great, except you're not going to be connected by default to the rest of your applications inside the enterprise, or in some cases even to the rest of the internet. So yes, we need to be doing things that are simple and make sense, but we also have to factor in reality, and whether that's gateways or working through integration with existing routing protocols, we have to do something that makes sense. As a concrete example, something we spend a lot of time on is building infrastructure with our customers that is virtual machine based, using OpenStack, and then deploying Kubernetes on top of that. Doing that in a way where the two are separate creates a headache for all the teams involved, so there's a project in OpenStack that builds a bridge between the two: you have pluggability at the infrastructure networking layer, and it exposes that network capability into Kubernetes, so you can do something that looks more like VLAN-tagged networks from a container out to the VMs, which are ultimately VXLAN tagged on the physical network. So it's not quite the complicated mess you might otherwise build.

Any other comments from the panelists on that? Okay, other questions? Yes, sir.

So first of all, maybe just to repeat the question for the benefit of the room, and at least for my own benefit: the question is what a group of vendors can do to get involved in the discussion about, okay, let me put it this way: we've got some really slick hardware for high-performance networking; how can I actually get these new systems to allow my applications to take advantage of that hardware? How do we connect the last couple of inches between some nifty container orchestration system and this awesome hardware I bought from Dell last week with these wicked network cards in it? To me, the proper place to answer that question probably lies in what I think of as the orchestration layer. Something is making scheduling decisions about where containers run, and part of the knowledge that orchestration system can look at to make those decisions is the nature and equipment of the individual compute nodes available in the cluster. Now, for your standard run-of-the-mill stateless application of the new model type that we want to run on this kind of infrastructure, it's a little bit of an anti-pattern to ask to run only on a machine with an SSD and a super high-performance network card in it. But that sort of tagging is possible even just in the basic
state of the art of an orchestrator like Kubernetes today: to say that this group of applications, based on its manifest, this set of containers, should only be scheduled onto machines that match some hardware requirement. That could be those kinds of cards, or a kernel with support for the bypass feature in it. So that would be my general, boots-on-the-ground answer for how I would do it tomorrow if I were trying to. As for how to advise you to get involved in the discussion of what we can do to automate those decisions, well, it seems like I've been not quite answering your question for about 25 minutes now, and I apologize for that. But certainly, among the things we hear from our customers, and one of the major things we try to support with the Tectonic product and a lot of what we do at CoreOS: we have customers who have a continuing bare metal, on-premises story, for reasons of compliance or regulatory demands or performance, that they're going to continue to have, and those customers ask us questions about really similar if not exactly identical things. I haven't necessarily heard a question about doing kernel bypass for high-performance networking, but one thing I get asked about really frequently is: we've made a certain amount of investment in machines with these particular GPUs so that we can offload that kind of processing to the GPUs; how do we support that with our container workloads, and how can we mark out those machines among the other nodes they're with in a cluster? To me that's a really, really similar kind of question, and it's going to have an answer that's probably provided out of similar primitives. So it's something we're interested in, and it's an interesting thought to me to try to figure out how to solve those.

You're spot on. I alluded to it earlier, I don't know if you saw the little lightning keynote thing I did: it's optimized workloads, and GPUs are a perfect example. It's maybe a more accessible example to the cloud native community, because you really draw out this instinctive reaction of "that is a bad idea, don't do that." But the reality is these are application workloads that could benefit from running in this orchestration platform. So the discussions will be orchestration-project specific, I expect, and I know that specifically in Kubernetes it's happening in the Resource Management SIG, where they're focused on scheduling to hardware constraints that take into account hardware capabilities. In addition to that, there is something that internally we call performance-sensitive applications, and that work is also happening in the Resource Management SIG, which is looking at NUMA pinning for what could even be HPC workloads; it doesn't necessarily have to be what Brian was alluding to earlier, the network function virtualization kinds of workloads. Related to that is what I mentioned earlier, the CNCF working group focused on networking, which is a place where the industry tries to collaborate to figure out the networking-specific requirements. In there, in the Resource Management SIG and the related networking groups, bypass for offloads is the kind of topic that comes up, and again you hit this instinctive reaction: that's not cloud, that's not cloudy.
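The tagging Josh mentions and the hardware-aware scheduling Chris describes already have a basic form in Kubernetes: label the special nodes, then let the pod spec ask for them. A minimal sketch; the node name, label, and image are illustrative, and the GPU resource assumes the corresponding device plugin is installed on the cluster:

```sh
# Mark the nodes that actually have the special hardware
kubectl label node node-07 hardware=fast-nic
```

```yaml
# Pod that should only land on those nodes, and that asks for one GPU
apiVersion: v1
kind: Pod
metadata:
  name: packet-cruncher
spec:
  nodeSelector:
    hardware: fast-nic
  containers:
  - name: worker
    image: example.com/packet-cruncher:latest
    resources:
      limits:
        nvidia.com/gpu: 1
```

The finer-grained controls Chris mentions, such as NUMA pinning and kernel-bypass devices, are the pieces still being worked out in the Resource Management SIG and the related networking groups.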
But look at the public clouds: they already offer compute that is no longer homogeneous. First it was just more memory or more CPU, which is not very interesting, but now it's I/O devices, on the storage side as well as the network side, and GPUs. You see specialization in the large-scale clouds already, so it's clear that we need to support that in the container orchestration platforms.

Okay, final question. Patrick, what would you consider the most popular and interesting use case for containers that you've seen to date, and how do you think that might evolve over the next year or two? What networking tools do you see as the most critical for further adoption of containers, i.e., for network visibility, configuration, lifecycle management, and so on?

Wow, that's a mouthful, so let's go. I'd say the most typical use case our enterprise customers are using containers for is really the modernization of traditional apps, which is what I talked about before. They take existing apps in VMs, and there are tools that let you generate layers for a container and then generate a Dockerfile, so you can dockerize that pretty quickly, and then they deploy it onto a modern infrastructure. Very often that's their road to the cloud as well: they start deploying them internally, and in a use case I've seen recently, they keep, for example, the Oracle database on-prem and move the workloads to AWS or one of the other cloud providers. We've seen that pretty often. Now, the most interesting use case, which to me is a different question, is about the future. At one large company whose name I won't mention, I've seen lots of different IoT and industrial internet work, where they're starting to deploy Docker on factory floors, close to the sensors, to aggregate all the data and send it to a gateway that then sends it to the cloud for analytics. These people are putting Docker in drones, in very small devices that are sent out into the field, in jet engines. So I can see containers being used in lots of different use cases and scenarios for IoT, where the current tools that we have built for cloud native workloads don't work. For example, how do you do orchestration of containers in a system where half of the nodes are not connected to the network most of the time? The Raft protocol that we're using in both Kubernetes and Swarm just doesn't work there, so I think there's lots of development to be done in this area. And in terms of networking, to go back to the modernizing-traditional-apps use case, one of the things I've seen people asking for a lot is integration of their existing tools, like macvlan and ipvlan, into container stacks, and that's what we're doing at Docker right now.

Thank you. As I mentioned at the beginning, 40 minutes was going to go really fast, and we've now hit 42. So please help me in thanking the panelists, and enjoy your lunch and the rest of the conference. Thank you.