Good afternoon, or I guess early evening at this point, and welcome to yet another container networking session, this one a panel discussion to kick around a lot of the ideas we've been hearing this afternoon on the container track, and a lot of the interesting perspectives about where we're going. A number of the things that were touched on in the earlier container sessions we're going to dig into in more detail, and I have a distinguished panel with me to comment on a lot of what's been going on. How many of you were here for the last session? Just a quick show of hands. Excellent, so we're going to build off a chunk of what was said there, and some of the details.

There's been a lot of discussion about where networking fits, what containers are doing, where all of this goes next, and what we're looking at. If we think about what's happening, we've moved into an era of containers at scale, and especially a profusion of projects and capabilities. We were talking about Kuryr in the last session, and there's a lot taking place in the other automation and integration points around orchestration. The scale of containers is starting to do some interesting things to networking: you put enough containers on a ship and maybe things start to bend a little bit. What we'll be talking about today is what we can do to work around that.

With that, I'd like to do introductions for our panelists. Dan, why don't we start with you; I'll let you each introduce yourselves.

Hi, good evening. I'm Dan Mihai Dumitriu, co-founder and CEO of Midokura, and we are a network virtualization overlay company.

I'll keep it short. I'm Scott Sneddon, and I'm with Juniper Networks. I'm kind of an evangelist around SDN and virtualization technologies, because the SDN community needs another evangelist, right?
Come on, the more evangelists the merrier.

I'm Daneyon Hansen, a software engineer with Cisco. I've been a technical contributor to several container-related projects within OpenStack over the last year-plus.

I'm Christopher Liljenstolpe, director of solutions architecture at Metaswitch and the chief architect for Project Calico.

And I'm Mike Cohen, director of product management at Cisco. I work on Cisco ACI, which is one of Cisco's SDN solutions.

And, by the way, I'm Eric Hanselman, chief analyst for 451 Research. Over the course of a series of summits we've been looking to address some of the more interesting networking challenges that are cropping up, and in containers we've certainly got a really interesting one. The last session talked about a couple of different approaches to tackling some of this complexity, but I'd like to start by just defining the problem. I mean, containers are just awesome and amazing and super, and we should be using them, and they're great, and they do all this stuff by themselves, right? Oh yeah? Good. All right, well, we're done. Thanks, it's been real. So: what's wrong? Is this simply a matter of scale? Is it some of the complexities? One of the things we saw in the demo is that what we're setting up in containers involves a certain amount of abstraction of networking, which I think is okay, but that has a couple of potential issues around it. So, thoughts: good thing, bad thing?

Christopher and I were actually talking right before the panel, and one of the aspects of containers is that it's a tool that different people use in different ways: a way to package and deploy an application with its dependencies, a much lighter-weight version of a virtual machine, all this business about decomposing applications into microservices. It can be many different things. That said, I'll give you one of my views, which is that application developers who are deploying things in containers in the microservices approach don't want to care about the low-level details of networking: IP addressing, load balancing, services, and so on. So I think the challenge there is to integrate with the orchestration systems and infer what kind of networking we should be setting up from them. There is of course also a scale component, potentially, but that's an orthogonal issue.

Since I have the mic, I'm going to chime in on top of what Dan said. There's a bit of an inflection point going on, and it's an interesting point for us to potentially rethink the way we've been exposing networks out of cloud management systems and infrastructure as a service. We no longer need to expose the direct infrastructure; if you're using containers, you're trying to think at a higher level. You're essentially trying to manage an application, and it's time to separate out all the operational constraints from how the network works. Am I using VLANs, am I using VXLANs, what IDs am I using? All that stuff needs to be completely hidden. If you look at what the different application composition languages are offering, that's really all the application developer needs to interact with: they should be able to say how the pieces of their app fit together and who needs to call what API, and that can be composed into a set of network policies on the back end. Whatever system we cook up needs to offer this clear separation of application composition and operational constraints, and there's an opportunity to make a big step toward that as this transition happens.
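That separation of application composition from operational constraints can be made concrete. Below is a minimal sketch, with entirely hypothetical names and data shapes (not any real orchestrator's API), of how a declared "who calls whom" spec might compile into backend policy rules while keeping VLAN and VXLAN details out of the app spec:

```python
# Hypothetical sketch: compile an application-level "who talks to whom"
# declaration into backend network allow-rules. No VLAN/VXLAN details
# appear in the app spec; those stay on the operational side.

def compile_policies(app_spec):
    """Turn declared component dependencies into allow-rules."""
    rules = []
    for dep in app_spec["dependencies"]:
        rules.append({
            "allow_from": dep["client"],   # label selecting the caller containers
            "allow_to": dep["service"],    # label selecting the callee containers
            "port": dep["port"],
        })
    return rules

# What an application developer would actually write:
app = {
    "dependencies": [
        {"client": "web", "service": "api", "port": 8080},
        {"client": "api", "service": "db", "port": 5432},
    ]
}

policies = compile_policies(app)
```

The point of the sketch is only the shape of the interface: the developer declares dependencies, and everything operational is derived on the back end.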
So Calico is doing both OpenStack networking and various flavors of container networking, and one of the things we hear when we talk to container folks echoes what's already been said. The developers, the folks who are building container environments, don't need Ethernet. They don't want to think about VLANs; they don't want to think about constructing networks. They want IP addresses to be reachable for their containers, or maybe load balancers to be reachable for their containers, and that exposes the components of the application, and that's it.

So, to spin your question around a bit: there are many challenges for networking in container land, and the big ones are scale and ephemerality. Containers come and go very fast, orders of magnitude faster than a VM, and there are orders of magnitude more of them than VMs. That puts stressors onto any kind of virtual networking infrastructure at a different scale than VMs. The problem for the OpenStack community, I think, is our API, the way we think about networks. If you look at Neutron, we still make people think in terms of segments and subnets and VLANs and VXLANs and all of this stuff. That's exactly what these container folks don't want. So if we say we're going to do container networking in OpenStack by using Neutron, we're asking the container folks to use an API construct that's exactly what they're trying to run away from. I think that would probably be less successful than we would like.

Yeah, and some of the traditional thought around segmentation for compliance, and all of the things we have in traditional enterprise networks: Neutron, and a lot of the other networking-for-cloud focus, has been about taking the old world and adapting it to this new world. How do I represent and describe the functions I was doing before, in a virtual way? The opportunity, as you've been saying, is that maybe we can rethink that. Maybe we can hide some of that complexity, or even in some cases ignore some of it, and simplify the approach. So when you think about that simplification, where does it need to happen? We've addressed the idea that developers shouldn't need to know about that sort of capability, but if you think about what's out there today, we don't have a lot of abstractions that can do that for them. Dan, you were talking about intuiting what the implications for the network ought to be, and what those network structures ought to be. How do we end up doing that?

I think Mike said something very similar: it's these application description systems. I have to be perfectly honest, I'm not familiar enough with this to say something super intelligent, but we've done work integrating MidoNet deployment with Canonical's Juju, and Juju itself has been generalizing toward an application deployment infrastructure that describes dependencies between different tiers. The IP addresses and the services there don't matter; you configure them, you get assigned something random and private, with services in between. I think that's the kind of thing we're talking about, at the most basic level.

Yeah, in the container world people are much further down the road of what you might call an intent-based infrastructure, where I signal the intent: which of my containers talk to which other containers, using what APIs or what ports, something along those lines. I need an HTTP connection between this container and this set of containers. And we're starting to see some bits of that. libnetwork in Docker, for those of you who were here before, has made an attempt at doing that, though it's really just a Docker thing implementing it right now. The Container Network Interface, which came out of the appc work that's been going on in the container world, is something that some other folks are starting to standardize around. I wouldn't say they're pure intent, but they're definitely more intent-based than the infrastructure-based models we're used to in the OpenStack world. We've integrated with libnetwork; it's got warts. All of these things have warts on them, but they're further down the right path toward where I think we're going to end up going.

Honestly, I think the answer is not that complex. There are different label mechanisms that have been emerging on all the different platforms, and what you really want is a composable, generic grouping construct, which labels offer. Different containers get labels; if the labels match up, you end up associating different policies, and those policies can trigger different behaviors on the back end. It's one of the simpler constructs I've seen, most of the orchestration systems can drive the concept in some way, and it's something that resonates with application developers while still being meaningful to the back-end systems.

So that raises the point of where this stuff actually starts to happen. Even if we're doing label constructs, if we've got some higher-level way of intuiting intent, what other projects are doing that now? We're sort of hovering on the edge of it; you mentioned some of what's happening in the appc world, but what's the state of the project environment today, from an OpenStack perspective, in terms of where these things fit and where they need to go?
Within OpenStack, a project I'm involved in, Magnum, is part of the solution. Outside of OpenStack, a project I've been involved in in the past is OpenShift. Over the last year-plus, OpenShift has gone through a refactor based on Kubernetes and Docker, basically adding to the constructs offered by those systems and taking very much a developer's approach. The understanding is that, as an application developer, the way I do my work doesn't stop at the Kubernetes level of defining services and pods and so forth. I also have to think about my source code: how I tie my source code to my Docker images, how I create deployments that encompass, say, environment variables for certain system settings. It's about taking those constructs to the next level.

Going back to Magnum, and to what Mike said about labels: I put together, with the help of the community, the container network model for Magnum, and we're going through the implementation. One part of that was to take the networking out of the core of Magnum, very similar to what Docker did with libnetwork, so that we can make networking pluggable; Flannel is the first network plugin that's supported. But beyond just defining those network plugins, each of those network drivers can have a wealth of configuration information associated with it: subnet sizes, configuration backends, and so on. We didn't want to bloat the Magnum API by adding all these different attributes, so instead we took the labels approach: you define which network driver you want to use for your Magnum instantiation of your containerized environment, and then pass specific configuration attributes to that particular driver using labels. There's actually a bigger initiative within Magnum to make the system more pluggable across all the different software components and add label support across each of them, so hopefully the network model is just the starting point for that effort.

So is Magnum enough, where it stands today? We talked about the pluggability, the interchangeability of networks; certainly projects like Flannel give you a somewhat better way to stitch together container capabilities. To build that into the orchestration, what pieces need to be added on top of that, and is the Magnum model sufficient?

I think it's just a starting point right now. What we're doing with networking in Magnum is identical to Magnum's overall approach as a project: Magnum is not trying to recreate the wheel and create its own set of tools that compete with Docker and Kubernetes, but actually to embrace those tools. Going back to the Magnum container network model, we use libnetwork as kind of a reference model, and it's just a starting point. What we need to do, from Magnum's perspective, is add more drivers, and we're looking to the different vendors to get involved in the community and start adding support for things like Calico or MidoNet, so that when it comes to networking within Magnum, it's a feature-rich service.

If I look at the work I'm doing with Calico and the people I'm out talking with, there's certainly a subset of the user base that are pure OpenStack, or pure container-model folks: they're going to play Mesos, or they're going to play Kubernetes, or they're going to play OpenStack. But there's a growing consensus among a lot of the user base that they're going to have a mix. They're going to be running some containers and some OpenStack. The thing I'm not hearing from them, though, is that they want OpenStack to manage their containers. What I'm not hearing is that they want to run containers and VMs under OpenStack. What I'm hearing is that they want to run OpenStack, and on the same physical infrastructure they want to run Mesos, or they want to run Kubernetes. They're almost peers. So for the folks who are going to run containers within OpenStack, Magnum might be an interesting starting point, but I think there's another use case, which is that there's a common fabric, maybe storage and networking and other things, that's shared between OpenStack and something else.

And just to go back to the earlier comment about tags and labels: that's the way we've cracked it in Calico. The policy model and everything else is exactly the same way everyone else is talking about it: we use labels and policies. I think that's where pretty much anyone who has looked at this has landed, once you figure out how you're going to do it at scale, so that five years down the road you don't have the problem of looking at the firewall and going, okay, well, there are 8,000 rules in this firewall, and I have no idea which of them is still relevant, but I dare not remove them because I'll break something. You want the policy in your network to be transient, rendered just depending on the load that's there. In a container world, a policy may only be relevant for two seconds and then be gone again. There's no reason to hard-code that into the networking.
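The transient, label-rendered policy being described can be sketched very simply: concrete rules exist only while containers with matching labels are running, so nothing stale accumulates. A minimal illustration, with a hypothetical data model rather than Calico's actual API:

```python
# Hypothetical sketch of transient, label-rendered policy: concrete rules
# are derived from labels and live containers on every render, so when a
# container exits, its rules simply stop being rendered.

def render_rules(policies, running_containers):
    """Emit concrete (src_ip, dst_ip, port) rules only for live label matches."""
    rules = set()
    for pol in policies:
        sources = [c for c in running_containers if pol["from_label"] in c["labels"]]
        dests = [c for c in running_containers if pol["to_label"] in c["labels"]]
        for s in sources:
            for d in dests:
                rules.add((s["ip"], d["ip"], pol["port"]))
    return rules

policies = [{"from_label": "web", "to_label": "api", "port": 8080}]
containers = [
    {"ip": "10.0.0.1", "labels": {"web"}},
    {"ip": "10.0.0.2", "labels": {"api"}},
]

active = render_rules(policies, containers)          # one concrete rule exists
after_exit = render_rules(policies, containers[:1])  # api container gone: no rules
```

The design point is that the policy (one line, stable) and the rendered rules (many, ephemeral) are kept separate, which is what avoids the 8,000-rule firewall problem.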
You do this with labels, by asserting and rendering the policy as necessary in the network and then removing it when it's done.

I like what you said about the peer-level relationship between OpenStack and other stuff. Not related to containers, but we've definitely seen that with VMware and OpenStack in our deployments, and what we did is make Neutron the common networking layer, which is great. I think we can add Mesos to that, and other container things on bare metal, and everything will be pretty good, but we're still missing that intent-based networking model.

Well, Neutron is whatever we make it to be, right? So sure, we could add something to it that's intent-based, but there hasn't been a whole lot of consensus in the community around that yet.

I partially agree with what Chris said. I've seen a little bit of the world where there are bare-metal container environments and there's OpenStack with VMs, although I don't think the world is as simple as that, and I don't think we can design a solution with that simplifying assumption. I've definitely seen customers who are using VMware as their base environment and running container systems on top of it.
They've always orchestrated infrastructure with VMware, and that's what they're doing. I've seen other environments where they're using OpenStack and running containers, essentially the Magnum model, either directly orchestrated via Magnum or something they've handcrafted themselves. Whatever solutions we build have to pick points of control that we can actually manage, whether it's an over-the-top scenario for containers or a bare-metal deployment of containers, and it can't break. It has to work in a pragmatic, holistic way across the different scenarios that people will deploy.

Yeah, and again, going back to the app developer's standpoint: an app developer is going to deploy containers wherever that developer can do it quickly, inexpensively, and reliably, and if OpenStack becomes the platform that meets those requirements, then we may see a shift from the perspective you're describing. But I do agree in the sense that, when it comes to container networking implementations, the container world is still so new that, whatever implementation you're evaluating, one of the evaluation criteria should be: can this container networking implementation work outside of an OpenStack cloud? It's just still too early. If we were to pause time right now, to your point, you would see more container deployments happening outside of OpenStack than within it.

Sure, in terms of platforms. But that does bring up the question of bridging the various worlds of networking that we need to link together. Dan, you were talking about Neutron as a vehicle to do some of that stitching. Does that do enough? Do we need some level of lower-level connection? We saw VLANs spitting out as we get all the way down to the bottom of this. How much of that has to get integrated into the network orchestration piece to work?

I think it doesn't do quite enough, and again, my example was VMware and OpenStack. There the model is about the same: it's virtual machines one way or the other, launched by this thing or by that thing. In that sense the Neutron model is nice, because VMware vSphere doesn't really have a network model per se, so we said, okay, just adopt Neutron. We said that to the customer, and it seemed to be positively accepted. But with containers it's a bit different, because a framework like Mesos isn't even a multi-tenant thing; you sort of have to layer something on top of it with a service registry. It's very different, a bit orthogonal in that sense. So we have to come up with something more abstract. Maybe labels are the way, but that's confusing for me, to be honest, to think about applying labels both to the Neutron networks and policy and then to the containers as well. Maybe Christopher can enlighten us, because I'm getting confused.

At least in one implementation I'm very aware of, it's fairly easy to map security groups into labels and policy. Attaching a security group to a VM is purely a statement; that security group can be represented by a label, and that label can also be rendered in container land.

When you talk about the integration, there's one layer of integration which is the control plane, or the management plane: whatever we're using to orchestrate these things. The other end, which you're asking about, is down lower: how do we interconnect these things? Neutron really is just an orchestration shim there. At the end of the day, the container world is pretty much all IP, and I will guarantee you that almost all of the packets coming out of your VMs today are IP. So if you start thinking about what you need to bridge these things together so they'll talk to one another, we sort of solved this problem a couple of decades ago. IP sort of won this war. All of the architectures we've used to interconnect big, disparate networks that all run IP come into play: you can route it, you can provision IP paths, you can use MPLS. There are all sorts of tools. So we should keep in mind that, at the end of the day, what we're slinging around are IP packets.
I don't know how many people here are running IPX in their cloud. AppleTalk? Banyan VINES? Yeah.

You just shot my idea down, thank you. I thought there was a hand going up for VINES back there, too. But this label discussion is spurring ideas in my mind; I need to go find a whiteboard and think about it a bit more. Probably not here.

There are whiteboards there, I see.

I mean, it would be a relatively straightforward thing to define a way where one of these labels is associated with a BGP community, which is associated with some service chain definition that the SDN controller renders. Back to what you were saying: these are problems that have largely been solved at the networking layer, and there's probably an abstraction model we can define that aligns with this model of labels for applications and could map onto it.

Yeah, and in reality, if you think about one of the ways the future can go here: in some containers-as-a-service deployments you could think about a drastically simplified networking model. I've had conversations with folks today about an all-v6 network, built on something like Calico, that was a routed v6 network where every container gets an IP. Will that work in all environments? It won't, but it's very simple. It's a relatively simple environment to think about, and a relatively simple environment to scale. I've worked with different enterprise customers who require more complex networks, and we can look at other solutions for them. But I think we can start thinking about these kinds of simplifying assumptions and what they can do. We can actually design networks around these things if we start coalescing around them, and we could really simplify the environments we're dealing with.
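The label-to-BGP-community idea raised a moment ago is just a table lookup at heart: an application label selects a community, and a controller could then tie that community to a service-chain definition. A minimal sketch; the mapping, community values, and AS number below are purely illustrative assumptions, not any controller's real configuration:

```python
# Hypothetical sketch: application labels map to BGP communities, which an
# SDN controller could associate with service-chain definitions. The values
# here (AS 64512, the chains) are illustrative only.

LABEL_TO_COMMUNITY = {
    "web": "64512:100",   # e.g. steer traffic through a WAF service chain
    "db": "64512:200",    # e.g. keep traffic on a restricted internal path
}

def communities_for(labels):
    """Return the BGP communities to attach to routes for these labels.

    Labels with no networking meaning are simply ignored, which is the
    point: app-level labels and network policy stay loosely coupled.
    """
    return sorted(LABEL_TO_COMMUNITY[l] for l in labels if l in LABEL_TO_COMMUNITY)

tags = communities_for({"web", "logging"})  # "logging" has no network mapping
```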
I just came up with the answer to "what is the problem with container networking": networking guys love their abstractions. "We can do this! This is really cool! You should do it!" So, not everything is a simple network, but one of the things we need to do as an industry, instead of saying, here's a really complex construct and I can do all these wonderful things with it, is start from the other direction and say, here's a really simple construct; what can you achieve with this? You will find some cases where people need slightly more complex things, but it's the 80 percent rule. Do you design for the 100 percent use case, which is what we've always done in networking, and why we have RFC stacks that go up to here to make MPLS work? Or do you start with something at a very basic level and say, okay, this gets you 80 percent, and now I'll bolt this on to handle the next 10 percent, and so on? Our biggest problem is us. We like making things complex, and we need to stop doing that, because the app developers want to make things complex in their own space; they don't need us making things complex for them.

I'm not a networking guy, but that's a really good point, and it's what I've been focused on. What I like about Docker is that philosophy of batteries included, but replaceable. If there's a key point to take away from this session, it's to keep trying to put yourself in the mindset of the application developer. Docker got big mainly because it took a lot of these complex technologies, exposed them to a user, and made them very easy. If networking comes in and starts to overcomplicate things, it's never going to be consumed by the app developers, and they'll continue to use the native bridge from libnetwork and deal with it in other ways, right?
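"Batteries included, but replaceable" is essentially a pluggable-driver pattern: a simple default ships in-box, and anything implementing the same interface can swap in. A toy sketch of the pattern; the registry and driver names here are hypothetical, not Docker's actual plugin API:

```python
# Hypothetical sketch of "batteries included, but replaceable": a default
# network driver works out of the box, and any driver registered under the
# same interface can replace it without changing the caller.

_drivers = {}

def register_driver(name, connect_fn):
    """Make a driver available under a name; last registration wins."""
    _drivers[name] = connect_fn

def connect(container_id, driver="bridge"):
    """Attach a container using the chosen driver; 'bridge' is the default battery."""
    return _drivers[driver](container_id)

# The included battery:
register_driver("bridge", lambda cid: f"{cid}: attached to docker0 bridge")
# A replacement battery a vendor or project might add:
register_driver("routed", lambda cid: f"{cid}: attached via host routing")

default = connect("c1")                    # the simple in-box behavior
swapped = connect("c1", driver="routed")   # same call, different backend
```

The caller's code is identical either way, which is what lets the simple construct serve the 80 percent case while more complex drivers bolt on for the rest.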
So, speaking of all the things we've got to wrap around complexity: the previous session skipped fairly briskly over things like address management in a much larger context. Is that something we can generally leave behind as we get a little smarter about orchestration? We've got to exist in a much larger address space, and even with v6, where we can talk about flinging addresses all over the place with great abandon, there's a lot of service capability we've got to manage in that larger environment. How do we deal with all that?

One thing the container orchestrators do do is manage resources reasonably well, and an IP address is just another resource. Some things need pinned addresses, but as people have said here, fewer and fewer things do. I give you an address, it's ephemeral, and you'll find it via service discovery or whatever else. And there is IPAM in a lot of these container networking infrastructures now; people are adding it. But as was stated, this is just another resource that needs to be managed. It doesn't need to be anything special.

Going back to libnetwork, and I reference libnetwork because I think it's probably the most well-established networking implementation for Docker containers, which doesn't say a whole lot, in the sense that I don't think it was even around a year ago.
But where I'm going with this is that one of the purposes of libnetwork was to extract the networking functionality from the core Docker engine and libcontainer, and all the core services within libnetwork are being modularized: whatever key-value store you want to use, there's libkv; whatever IPAM you want to use, that's now pluggable as well. The point I'm trying to make is that, even if some functionality doesn't exist yet, at least the foundation is there. libnetwork, and most if not all other container-related networking projects and libnetwork plugins, are doing a good job of making their services pluggable. So if there's a service that's not there, or a plugin like the IPAM plugin doesn't provide the functionality that's needed, go through the process that all open-source communities follow: create the design proposal and start contributing to it.

Looking to the future: we've talked a lot about developers needing, at some point, to track an IP address, whether it comes out of a service directory or what have you. How do we get to a point where we move beyond the need to actually grab an IP address, and move to namespaces? Is that way too far out there? Where do we head toward a brighter future in which you don't need to know an address to be able to get connectivity? Are we going to named pipes?
Yeah, I don't know if we ever get there; it's kind of hard to tell. But I think one of the bridges is service discovery. If service discovery isn't something you've looked into, spend the time looking into it. Part of the answer is relying less on IP addresses to manage the services that make up your application: build that service discovery layer so that when you add, remove, or change application components, they dynamically update the registry, and the other application components can then dynamically communicate with the component that changed.

I used to work at Amazon.com a number of years ago, in infrastructure, not AWS, just the backend infrastructure, and the whole thing was built on service discovery. It was the so-called service-oriented architecture: there was a whole service registry, service discovery, and so on. But it was one system that was horizontal across all of Amazon's infrastructure, and it was mandated. The problem is that we don't have a standard like that, other than IP or DNS. So DNS is service discovery, with respect to applications, with respect to Docker. It's a clue, but that's all it is right now.

Yeah, last spring Adrian Cockcroft gave a talk about using DNS and namespaces, and what Netflix was doing in that space, for exactly this problem. But it's their environment; it's somewhat closed; it's not a standard.

So all we need to do is change DNS, right?

Well, maybe.

Easy, no problem.

Some of the container folks are already doing this. Look at etcd and that infrastructure: it's already a service discovery mechanism in this environment.
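The service-discovery layer being discussed, where components register on start, deregister on exit, and peers resolve by name rather than pinning IPs, reduces to a small registry abstraction. A toy sketch, loosely modeled on what etcd- or DNS-backed registries provide, with hypothetical names throughout:

```python
# Toy sketch of a service registry: components register endpoints on start,
# deregister on exit, and callers resolve by service name each time instead
# of holding on to an IP address.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, address):
        """A component announces an endpoint for a named service."""
        self._services.setdefault(name, set()).add(address)

    def deregister(self, name, address):
        """A component withdraws its endpoint, e.g. on container exit."""
        self._services.get(name, set()).discard(address)

    def resolve(self, name):
        """Return the live endpoints for a service; may differ call to call."""
        return sorted(self._services.get(name, set()))

reg = ServiceRegistry()
reg.register("api", "10.0.0.2:8080")
reg.register("api", "10.0.0.3:8080")
reg.deregister("api", "10.0.0.2:8080")   # that container went away

endpoints = reg.resolve("api")           # callers always ask again by name
```

Real systems add health checks, TTLs, and watches on top, but the contract callers see is just this: resolve by name, never cache an address as truth.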
So I don't think you need to look too far ahead. If you're writing in this new environment, you're already doing that; you'd actually have to work against the flow to go back and try to use real addressing. So I don't think you have to look too far into the future. Anyone writing something on AWS today doesn't worry about IP addresses too much. So, you know, I think we're almost there.

The only bit I'd add, and I do believe the problem is pretty close to solved with service discovery, as everyone else said, the only caveat is this: as you look at the different kinds of port remapping techniques people can use, they add a significant amount of back-end complexity when something does go wrong. So doing it in a way that still helps you manage the network, and actually troubleshoot a problem when it occurs, could probably still be significantly improved.

NAT's bad. NAT, and especially PAT. PAT's bad; don't do PAT.

So that was sort of what I was hitting at a little bit: there are some simplifications we can make underneath the service discovery mechanisms which could make it easier, or harder, to troubleshoot, operate, and scale out the network.

Just to add to what Chris was saying about service discovery: the technologies are there, you just have to leverage them, right?
You know, you still have to create your Dockerfile, or your image, using the ability to share that configuration: expose environment variables so that as you instantiate or remove containers, those service components are dynamically added to or removed from your service discovery mechanisms. So again, the technologies are there, but you still have to build your application images to leverage a service discovery layer.

All right, well, we're running to the end of our time. I wanted to hit each of you for one thing you'd like to see in container networking capabilities as we head towards the future. General thoughts about what you'd like to see next.

A simple ask: that we keep it abstracted and pluggable, like you talked about. I think one of the faults of Neutron in the very beginning, and I said this in Vancouver too, was that we tried to build a product instead of building a framework. So let's keep in mind that we want to build frameworks. In some cases some vendor's SDN solution might be the right approach; maybe some open-source project might be the right tool to use. But stop trying to package everything into one shiny little ball, and keep it a pluggable framework.

I would probably hit on alignment around points of trust and security. Essentially, as we think about containers running inside VMs, containers running on bare metal, interacting with true non-container bare metal workloads: where are we defining the different points of trust and the points of enforcement of the different
security policies, whether they're labels or however they're defined? I'd like to actually have the community align on what software tools we're using to enforce this. Is Open vSwitch the tool? Are there tools on top of it? How can we do it in a unified manner, so that we can give a unified security model across all these different deployment mechanisms?

I'm going to focus on something related to what Chris said: visibility. With so many containers being very ephemeral and changing, and their addresses not necessarily being important (they're all automatically assigned and such), figuring out what actually happened after the fact means we need some capability to record, and pore through, a bunch of data when things go wrong.

Mine's in two parts. One: keep in mind that what we're interested in is what people are actually trying to do, not the underlying infrastructure. So don't go down that slippery slope of "oh, I can just create a subnet data model." And that leads into the second half: a fairly standardized data model, or approach to a data model, that has in concept labels or some other kind of intent-based infrastructure that pretty much everyone agrees on. It doesn't have to be exactly the same, but it's much easier if we have a common data model or metadata model. That would make it easier to then integrate into things like OpenStack on one side and Mesos and something else on the other. If we all have our own data models because mine's five percent better than yours, it just makes things more difficult, and there are more abstractions. More abstractions make it more difficult to troubleshoot, and so on.
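To make the "labels as intent" idea concrete, here is one hypothetical shape such a data model could take: policy rules match on workload labels, never on addresses or subnets, so the same rule survives containers being rescheduled and re-addressed. The rule format, label keys, and tier names below are all invented for illustration.

```python
def allowed(src_labels, dst_labels, rules):
    """Evaluate intent-based rules: a rule applies when its 'from' and
    'to' label selectors are each a subset of the workload's labels.
    Matching is on labels only; IP addresses never appear."""
    for rule in rules:
        if (rule["from"].items() <= src_labels.items()
                and rule["to"].items() <= dst_labels.items()):
            return rule["action"] == "allow"
    return False  # default deny when no rule matches

# Hypothetical rule set: frontends may talk to the API tier, nothing else.
rules = [
    {"from": {"tier": "frontend"}, "to": {"tier": "api"}, "action": "allow"},
]

# Workloads carry labels regardless of where (or at what IP) they run.
web = {"tier": "frontend", "app": "shop"}
api = {"tier": "api", "app": "shop"}
db = {"tier": "db", "app": "shop"}
```

Because `web`, `api`, and `db` are identified by labels, the rule set stays valid whether those workloads run as containers in VMs, on bare metal, or behind an overlay, which is the unified-model property being asked for here.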
I think one is going to be tough: I would say simplification is high on the list. When we start thinking about containers inside VMs, inside cloud networking, inside physical networking: OK, great, I got it up and running and deployed, but what happens when things break? Is it the container network overlay? Is it the cloud network? So simplifying container networking, I think, is very important.

Standards: I'd like to see more standards developed around container networking. I was really happy to see OCI, the Open Container Initiative, and that it actually stemmed from libcontainer. I'd like to see something similar, maybe libnetwork or some other software library, become the standard for container networking.

Last but not least, I'd like to see better integration, and this kind of goes back to labels. I think labels are part of it, but just better integration with application development, with application platforms. I gave some examples with OpenShift, but it could be Cloud Foundry or whatever it may be. Right now you can do a lot of really cool things with these application platforms, but from a networking standpoint there's not a whole lot you can do. I would really like to see that you can specify policy, that you can specify all sorts of different characteristics and expose those, potentially through labels, to that application platform. All sorts of characteristics, including potentially NetBIOS support.

All right. Well, with that, why don't you join me in thanking our panelists this evening? We're standing between you and beer; we'll continue the conversation over a beer. So thanks very much, everybody.