So, as Diane mentioned, we're going to go over where we're headed with networking in OpenShift 3, talk a little bit about where we've been, some of the design constraints we've been working with, and what we see coming in the future. I'm going to try to cover a lot of the topics in the first 20 minutes so we have plenty of time to show the demo, to show some of this actually working, and to give people an idea of some of the lower-level technical details. With OpenShift 2, the existing OpenShift system that's out there today, we had a fairly straightforward approach to networking. The original goal of OpenShift was to allow administrators to run multiple applications on the same machine, to make better use of the hardware resources they already had, as well as to offer additional capabilities like being able to run lots and lots of development applications that only take traffic infrequently. And one of the key constraints there was networking. A server that needs to host something, whether it's a web server or a message bus or a database, needs to be able to listen on a network address and have people be able to reach it. In OpenShift 2, this system was fairly simple: each of the gears, which is what OpenShift calls a container, each of those little sub-components of an application, got its own private internal IP address which was not reachable from outside the host, and for each of the services running inside that container we would map in a port. Part of the reason for this is that it predates the point where network namespaces were really usable within the kernel; this was one of the few ways you could achieve this kind of hosting without a lot of complicated network crossover. This idea of mapping a high external port to a low internal port or address is a pretty common pattern. OpenShift started down this path quite a few years ago, and as the networking and container space evolved we learned a lot of lessons about building applications that can talk to each other, as well as learning which things we wanted to carry forward into our new system. The diagram here is a simple example of what the world might have looked like in OpenShift 2. You've got four hosts, and each of those hosts is running multiple components, each of which is what we call a gear, or today a container. Each of those containers has its own internal address where Apache, for instance, would listen on a 127.x loopback address on port 8080, and OpenShift would set up a mapping between a high port on the host's address and that internal address, and then other components of the application elsewhere in the cluster would be able to talk to it over that high port. So both an external load balancer and the component of application one that's on host number four would talk to the host's IP address at port 34985, which is just an example. There's a lot that we took from this as we've worked with and helped people build applications on OpenShift over the years. I think the hardest lesson, right up front, is that most network software in the world is designed to automatically listen on all addresses.
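To make that mapping concrete, here's a minimal sketch of forwarding a high host port to a gear's private loopback address. This is not OpenShift 2's actual proxy implementation, and the addresses and ports are purely illustrative:

    # illustrative only: forward a high port on the host to a gear's internal loopback address
    socat TCP-LISTEN:34985,fork,reuseaddr TCP:127.5.123.1:8080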
Such software tries to bind to 0.0.0.0, and in the OpenShift 2 model that actually wasn't possible, because without some way of isolating the containers from each other at the kernel level, 0.0.0.0 would include the host address and all the other interfaces set up on the system. And while there are some benefits to making sure that software can parameterize which address it listens on, in practical terms this was a cost that was pushed onto users. Users had to know that they couldn't bind to 0.0.0.0; they had to use an environment variable or a config file or some other mechanism. So right off the bat, that was a barrier to entry as people brought their existing legacy applications, or even existing frameworks, and tried to run them on top of OpenShift. Secondly, port mapping adds an extra layer of indirection. Not only do you have to know what host your service is running on, you also need to encode what port it's running on, and the internal and external ports were different. A lot of the software we want to run on OpenShift, things like MongoDB or ZooKeeper, scalable databases like Cassandra, as well as some of the scalability solutions built around MySQL, almost all of those have as one of their underlying assumptions that the address they listen on is the same address other people reach them on, and in some cases that the port they listen on is the port others reach them on. So going into OpenShift 3, a key consideration in our minds was reducing that complexity so that we can run more of these kinds of software. In general in OpenShift, depending on how you configure your system, you can set up global blacklists or global whitelists governing how individual application components can talk to each other, but there's no one solution that fits every deployment's use cases. In some cases, allowing one application to talk to a component in a completely different context within the OpenShift deployment was actually a very valuable feature: people building microservices, or who had well-thought-out integration and authentication solutions, benefited from being able to reach out and directly connect to other components. On the flip side, in some environments the individual applications needed to be tightly isolated, and OpenShift has mechanisms for restricting that, but they weren't as easy to use or as developer-friendly as they should be. So there's been a lot that's changed in the ecosystem since OpenShift 1 and OpenShift 2 were launched. Obviously, to anybody who's been paying attention to the platform-as-a-service space or the infrastructure space, there's been the rise of containerization: leveraging the abilities that the Linux kernel now has, in a fairly stable form, to let individual processes on a Linux host pretend they're little independent machines, and to break some of the assumptions that used to hold, where if you were running lots of different services on the same machine, each of those services had to be pretty careful not to stomp on the others. That's really changed with the introduction of containerization. Containerization is essentially taking the lessons of virtualization at the hardware and software level and taking them a step further.
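As an aside on that bind-address constraint, this is roughly the pattern users had to follow; a minimal sketch, where the OPENSHIFT_PYTHON_IP and OPENSHIFT_PYTHON_PORT variable names are only an illustration of the kind of environment variables involved:

    # OpenShift 2 style (illustrative): bind to the gear's assigned loopback address and port
    # taken from environment variables, rather than binding to 0.0.0.0
    exec gunicorn --bind "${OPENSHIFT_PYTHON_IP}:${OPENSHIFT_PYTHON_PORT}" app:app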
In the container world we have the same tools that the virtualization space has, in a lot of cases. And for us it was really clear that it's a bad idea to pretend that everything being built in the containerization space is new, or to not look at those other spaces for examples. Instead, we want to approach this from the perspective that what we offer people are very simple components of an application, but all of the patterns and abstractions that make running individual VMs and individual machines on a network work, and the ways those machines connect to each other, we want to apply those patterns to containers as well. People who are deploying at scale today are, in a large number of cases, already running some form of software-defined networking. That software-defined networking might be the control plane of the switches and big enterprise networking gear that's in place; it might be things like Open vSwitch and the work in OpenStack around Neutron; it might be some of the new startups in this space who are building overlay networks on top of existing hardware. There are trade-offs and advantages to each, but to us it was really important to acknowledge that we're moving into a space where there are existing solutions, and OpenShift really wants to leverage those solutions and apply them to containers and to the applications people build on top of OpenShift. We want to provide an abstraction where a developer can take their application and run it, and we want to be as flexible as possible in letting the software pretend it's running on a regular machine, while imposing administrative and operational controls on how those components talk to each other, for the benefit of the operators of this large pool of applications. So coming out of that, one of our goals is: if there is an SDN solution already in place in a network, we as OpenShift want to be able to take advantage of it. Instead of virtualizing VMs we might be virtualizing containers, but the network infrastructure that exists to support that is a very real solution we don't want to ignore. On the flip side, for organizations who are scaling up small deployments and need something small and self-contained, who don't necessarily have the programmability that a containerized system might need, or who are doing a proof of concept in a very small test environment, we also don't want an overly complex setup; you should be able to start OpenShift up, try it on a few machines, and grow it. Taking those lessons learned from OpenShift 2, there were really three fundamental principles. First, every component of an application should have its own IP. When we say component of an application, in the OpenShift sense we're usually talking about a network service, and a network service having its own IP starts to play into higher-level concepts that already exist and have been baked into software for 30 or 40 years now: things like DNS and load balancing, things like L3 routing, the ability to isolate and define components not just by a port but by an IP, to segregate networks. And segregation is also really important. Observationally, in many cases developers aren't the ones setting network policies today, and that's not really the thing that matters to them.
The use cases we've tried to target are that an administrator, whether that's the cluster administrator or an administrator responsible for a subset of the applications on a cluster, can fairly easily define the segregation, so that within a small cluster they can easily tie together a set of applications, have visibility into them, and keep the rest of the world at bay. There are more advanced scenarios here too; obviously, in the future we want to enable greater programmability of this. But with OpenShift 3 we're starting with that simple subdivision: whether you have one subdivision or lots of subdivisions, it needs to be something an administrator can go make happen, especially in production environments. And then finally, there's a pretty significant difference between development and production. There are a lot of use cases for OpenShift where you're building lots of development applications, and each of those applications is a throwaway thing that gets used for a few days and maybe sits around or gets deleted after a while when people haven't used it enough. For a development application with that lifespan, it's usually just about keeping it isolated from everything else in the cluster. When you start moving into production, many of those applications start to have more significant requirements: things like DMZs, demilitarized zones, controlling the flow of network traffic into a cluster, and fitting into the existing requirements and organizational structures that exist. If you are building a new production application from scratch on top of OpenShift, it's likely that you're still trying to fit into the organizational and security requirements of your existing infrastructure, and our goal with OpenShift is to allow those sorts of decisions to be made by administrators for those applications. Ultimately, at the end of the day, our goal with OpenShift 3 is to stop pretending that these individual components aren't just talking over a network. An application is composed of multiple pieces, and the best way we have today for multiple pieces to talk to each other is to give them an IP address. So OpenShift 3 is based on Docker and Kubernetes; if we haven't hit you over the head enough with that by now, we can certainly come and hit you over the head a lot more with it. There are really two implications of OpenShift 3 being based on these lower-level concepts, with Docker as a container engine and Kubernetes as a cluster runtime environment for containers; there are two pieces that need to work well. Docker today assigns an IP to a container when it gets started, so those application components are getting an IP assigned by the container engine. The default setup for that today is typically not very friendly to cross-host communication. There's a lot of discussion and back-and-forth in the Docker community about what the right way is to implement and integrate networking, and we want to remain flexible about that and about where the Docker ecosystem goes in the future. But when you step above the Docker level, there's a really important piece of building these components of an application that's worth talking through in terms of its implications. If I have one container, say MySQL, sometimes I need to do something more complex than just talking to it over the network. I might need to share things on the file system and coordinate.
I might want to do an IPC call from one service to another, and those processes need to be co-located on the same machine. In some cases I might want to set up one container that is a public web server, like nginx or Apache, that fronts another web service like a Python web framework, or a Ruby web server like Unicorn, or some of the other existing technologies in the space. And that localhost communication isn't really well defined if we're just talking about separate individual containers. So the key concept in OpenShift 3 being built on top of Kubernetes is the idea of a pod. It's like an OpenShift 2 gear: it can be just a single container, but another one of those takeaways from OpenShift 2 was that there are a lot of very important use cases for making things work well together that don't necessarily need to talk across the network, but need to talk across a file system, or over IPC, or over disk, or over localhost, and that requires a concept above the container level. In Kubernetes that's called a pod, and in OpenShift 3 that will really be our fundamental unit of scaling. A pod, in a sense, is like a little VM: it's a set of containers that run together. Down in the lower right of this slide you can see that I might be running MySQL and phpMyAdmin. Both of those are individual containers. MySQL is listening on port 3306, but perhaps I don't want to make phpMyAdmin visible to the network, so phpMyAdmin might be listening on localhost port 8080 while MySQL listens on 3306 and is accessible to other components. That pod is what gets an IP address, and that's a really important distinction, because some of the sophistication of the use cases built around this depends on being able to tell Docker what IP address we want the container to have. In order to do this, OpenShift spins up these pods on these machines by working with Kubernetes. When each pod starts up, there's one container that holds the network namespace and gets an IP address, and then the rest of the containers, for instance MySQL or phpMyAdmin, start up and join that network namespace, which means they all share the same IP address and are related to each other. The assignment of that IP address is really where we come to the next phase of what we'll be talking about and showing in a few minutes: how do we assign the IP address? So that pod is like a little VM.
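As a rough illustration of that shared network namespace, here's what the same arrangement looks like if you emulate it with plain Docker; the image names are placeholders, and this is a sketch of the pattern rather than how OpenShift actually invokes Docker:

    # one container holds the pod's network namespace; the pod's ports are exposed on it,
    # and the other containers join it with --net=container:<name>
    docker run -d --name pod-infra -p 3306:3306 kubernetes/pause
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret --net=container:pod-infra mysql
    docker run -d --name phpmyadmin --net=container:pod-infra phpmyadmin   # reaches MySQL over localhost:3306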
That pod needs to be assigned an address by the cluster, and depending on what sort of environment you have today, you're running on a host, you're running as a sub-component of a host, so you're not really using the host's address (although there are some scenarios where you may want to do that). It really falls into three categories, broken down by how much work someone else has already done for you. In cases where you're running on top of infrastructure as a service, the majority of the infrastructure-as-a-service solutions out there today have some concept of machines getting multiple IP addresses. Google, who originally started the Kubernetes project that we've been involved in quite a bit since it started, took advantage of a lot of the things Google Compute Engine offers, such as the ability for each host to get its own subnet of 256 IPs. So on GCE, if you're running Kubernetes, or OpenShift on top of Kubernetes on top of GCE, the networking story is pretty well baked in. The story gets more complex for everybody else. AWS, for example, lets you allocate and use multiple IPs per host depending on the size of the instance, and in many existing enterprise network solutions this capability is certainly possible, but it does require some level of coordination to allocate those IPs out. What has started to happen, especially with Docker, is that lots of people have realized how important it is to be able to allocate IP addresses to containers, and so you've seen a number of solutions coming from various startups: CoreOS with Flannel, and Weave with their overlay network, have created little daemons that run on each host, talk to each other, and form a virtual network. This is a pretty old concept, but again, as with containers, there's more room for lots of things to be set up and spun down on the fly. Now, overlay networks do come with some performance disadvantages. When we talk about setting up networks in a dynamic fashion, the big player in this space, at least as we see it, is Open vSwitch: most people doing some level of SDN are either dealing with Open vSwitch directly or have integrated with it, and many configurations run an Open vSwitch style of setup where the abstraction between the control plane and the data plane is handled either in software talking to hardware or in software talking to software. With OpenShift 3 we knew there were going to be existing solutions we needed to integrate with, but we also knew we would need some level of a solution, built on Open vSwitch, that does just enough to let someone work at small scales, with the flexibility to replace, change out, or supplement those capabilities with their own. These choices aren't binary, it's not one or the other, and there are many other ways of configuring these networks. Our goal with OpenShift has always been to work with the upstream communities, Docker and Kubernetes, to keep these things flexible, and then out of the box to have something simple enough that we can get over that hump: if you're an OpenShift administrator and you want to spin up and start building applications right away, you get something that works at a reasonable performance level, while we can still leverage the existing investment that many, many companies and organizations have already put into their SDN solutions. Open vSwitch was a natural target for that. So at this point I'm
going to turn it over to Mrunal and Rajat to do a demo of what exists today, in a very development-stage form, in OpenShift to work with Kubernetes, containers, and Open vSwitch.

All right, there was one question that popped up before they get going: Judd Naitlin had asked how much of this is dependent on cgroups v2. Very little of this should be dependent on that, although I'll let Mrunal answer. So are you talking about cgroups in general or the networking aspects? Judd is going to answer that from his car. Can I use voice? I can use voice, no? Great.

Containerization in general was implemented in the Linux kernel over the past several years using cgroups. I just went to a Linux users group meeting where I met the guys who are rewriting cgroups for v2, and it looks like we're going to get a lot more protection, a lot more flexibility and predictability in cgroups from what's coming in v2, but they don't have a timeline; they're hoping for the next six months. So at the lowest level, libcontainer uses cgroups today, and as v2 comes out those features will be integrated and made available in Docker. Nothing we show today will depend on or require v2; things will just get better as those features make their way into the upstream communities.

So you're 100% relying on libcontainer? Yes, Docker uses libcontainer today, LXC is the other backend, and both of them use cgroups under the hood for memory, CPU, and all the options that cgroups provide. Mrunal is a contributor to libcontainer, one of the maintainers of libcontainer, and so our goal is usually going to be to work in that community: as features like that come up, we'll work to get them into libcontainer, then into Docker, and then surfaced up through the various mechanisms in the system.

Okay, more questions later. All right, let's go on with the demo. Mrunal, is that you talking right now?
No, this is Rajat, and I'm going to show the demo of OpenShift SDN, which is the simple form of networking Clayton has been describing. I can describe what the architecture looks like. What it tries to achieve is that any container that comes up, any pod in OpenShift that comes up, gets a unique IP address, and anyone else on the cluster, be it a pod, the host itself, or an external component, should be able to reach the new pod over the network assigned by this SDN controller. What I've got is a three-node, or rather two-node, cluster controlled by a master. If you can see my screen, I've got two minions here, which means two nodes, and pods can land on any of the minions; containers can be born in any of these places. If the nodes are entirely controlled by OpenShift then only pods will be created, but if the cluster is being shared by someone else then random containers can show up as well. What this controller does: there is a master setup running, a daemon which runs, and this guy says, okay, I've got some minions here, minion number one, minion number two, and as minions come and go in the future, I'm going to assign a subnet to each minion. The subnet is by default 256 IP addresses, which means 256 IP addresses are available to that minion. I've got minion one here on my left and minion two on my right, and if I go there and ask how Docker is doing, I'm going to reconfigure Docker, saying, hey, don't use your original bridge, start using this new bridge that I've created. If I run ip a there, I can see this Linux bridge, and it's already got a subnet that was assigned by the master. On the other node I've got similar stuff, but of course a different subnet. From here I say, okay, let me create a pod. I'll just do a pod creation there and then get pods, and okay, there's something already running; it's running on OpenShift minion one and it's got an IP address here, and this IP address should be reachable, of course, from the host there. We can say docker ps, and it's running here, and I can ping it: I should be able to ping it from the host itself, I should be able to ping it from the other host, and, just copy-pasting that address, I should be able to curl it, because it happens to be serving something. Yes, it does work. What's going on here is that Docker was given a subnet, and Docker says, okay, I'll spin out new containers using that subnet. But who is going to transmit those packets when traffic goes across hosts? That's where Open vSwitch comes in. Open vSwitch says, I'm going to grab all packets; the ones which are meant locally for the host itself I won't bother with, and the ones which are meant to go across hosts I'm going to identify by destination address, and I'm going to put some flows in place. I can show the flows there with a dump-flows; this is OpenFlow, the Open vSwitch implementation of it, and ovs-vsctl show shows I've got a VXLAN tunnel port dangling there whose remote IP is programmable, which matters when a packet needs to go out of this machine. When I did the ping from the host where the pod was living, the traffic stayed local. When I did the ping from the other host, where there was no Docker container for it, the flow says, okay, 10.1.0.2, I've got to figure out where that needs to go: if you're coming from the local Docker bridge and you're destined for that 10.1.0.0 subnet, go out to output 10; output 10 is a VXLAN port, and the flow tells it where the destination is living.
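To give a sense of what that per-node setup can look like, here's a rough sketch of the kind of bridge, Docker, and flow configuration being described; the bridge name, subnets, port numbers, and tunnel address are all illustrative, not the exact commands the SDN controller runs:

    # create the OVS bridge the pods attach to, with a VXLAN tunnel port whose remote IP is chosen per flow
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 type=vxlan \
        options:remote_ip=flow options:key=flow ofport_request=10
    # restrict Docker's bridge to this node's assigned subnet, e.g. 10.1.1.0/24 out of a 10.1.0.0/16
    # cluster network (for example via the daemon's --bridge and --fixed-cidr options)
    # traffic for the local subnet is switched normally; anything else in the cluster network
    # is sent out the tunnel toward the minion that owns the destination subnet (192.168.10.12 here)
    ovs-ofctl -O OpenFlow13 add-flow br0 "priority=200,ip,nw_dst=10.1.1.0/24,actions=normal"
    ovs-ofctl -O OpenFlow13 add-flow br0 "priority=100,ip,nw_dst=10.1.0.0/16,actions=set_field:192.168.10.12->tun_dst,output:10"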
The destination is living on minion 1, and that's where the packet gets delivered, and the same thing happens here in the other direction. Open vSwitch transmits the packet as directed by OpenFlow. For any new node that gets added to the cluster, we add new rules here saying, well, there's a new guy, pods might land there, we need to direct traffic to it and accept traffic from it. What this gives you is connectivity between containers; what it does not give you is isolation. I don't know if Clayton has a few words on isolation, but I have a different demo where we can show how isolation can happen. Clayton, I don't know if you have more slides there. Sure, we can show some of the slides afterwards; actually, why don't you just continue with the demo. On isolation in general, what we're talking about is what I mentioned before: administrators want to be able to segment. What we would like to do out of the box, by default, is that every group of people, every project (which in OpenShift 2 we called a domain; in OpenShift 3 it's called a project, to more closely align with things like OpenStack), gets its own network. We want to have a mode where each project gets its own segregated network, where you can see the IPs of other things but you wouldn't be able to reach them from inside that network. And as soon as we have that, there are obviously some problems people would hit, for instance if you need those two projects to talk. So there's an additional set of features above that where we'd want some simple programmability, and then let administrators build and integrate more custom solutions that work with their existing infrastructure. If they have programmable networks already, something that can do network programmability, we'd want to let those integrate and we'd want to feed them with information. But for most administrators, people who don't have that kind of flexibility, we want to give very simple tools to manage those networks. So why don't you go ahead. Okay, yeah, so that was the simple part of the demo: we have a flat network, every minion gets a subnet, every Docker container gets a portion of that subnet, and across the network every container should be able to talk to any other container. I showed how we can talk from a host to a container; what I'm spinning up now is a new container, which I just did, and it got this IP address on the other minion. Now I'm going to get inside the container there and run ip a; this guy has its address, and I should be able to reach from container to container. Yeah, that works, and the other one should presumably also be working. That works too, so that's the end of this demo of the simple flat network. I'm going to switch to the other demo, which, if you're ready for it, will show the isolation piece Clayton was talking about. Now, the isolation bits are subject to how the OpenShift administrator chooses to do the isolation. Wait a second, you stopped sharing your presentation. Yeah, I know, I actually have a different computer for my isolation demo; I didn't want to squeeze all of these virtual machines into my puny box there. Okay, I think I have the other one running now. In this one I've got one master and three minions, and the isolation that I, as the administrator of this cluster, have chosen is to isolate per namespace. Now, Kubernetes has the concept of a namespace:
in an enterprise, you could say this is user one or user two, or department one or department two. We could also say, hey, I don't want to isolate by projects, I want to isolate by placement of the machines, meaning data center one has these machines and data center two has those machines, and I don't want any of the projects there to cross over to the other data center. Or you could have environments, a test environment and a prod environment, and I don't want the traffic to go across them; the networks should be isolated. What I've done here, though, is isolation by project, that is, by namespace. Now, this is Kubernetes, which is a rung lower than the OpenShift layer, but the concept remains the same. I just did a get pods and I've got two pods here, one called hi and the other called hello; one of them is on minion two and the other is on minion three. I've also got another namespace. Whenever Rajat says namespace: namespace and project in OpenShift and Kubernetes are very similar concepts; project is the higher-level access control, quota, and authorization grouping concept that makes Kubernetes namespaces useful to people who have multiple tenants or want to subdivide, so just to make that distinction. Yes, thank you; in my mind I just use them interchangeably. Okay, so, namespace or project: this is another project, or namespace, called abc; the other one was the default one. In the first one I've got two containers, hi and hello; in the second one I've got hola and namaste. As the admin, I have clearly defined that things in separate namespaces shouldn't be able to talk to each other, no matter where they're living, whether across minions or on the same host; they shouldn't be able to see each other, while still getting unique network parameters. So hi should be able to talk to hello, and hola should be able to talk to namaste, but not across the two. On minion 2 there is hi living, on minion 3 there is hello, on minion 1 there is namaste, and on minion 2 there is hola, so on minion 2 I've got hola and hi. I'm going to go to minion 2 and see what we've got here: docker ps shows four containers, which is basically two pods, one container being the network container and the other being the actual container for each, so two pods there. And docker ps, just to make it highlightable: I've got one pod here, the hi one, and I'm going to go inside; docker ps, sorry, I see it here and it's got an IP address. I'm going to go to minion 3 and do the same thing; what's running here is the hello guy, so let's get inside this fellow, whose address ends in .1.3. Now, that was the hello guy and this is the hi guy, right, so they should be able to talk to each other: ping the other guy, who ends in .1.5, and it works beautifully, and I can curl it on 8080 and it serves something. Now, this guy's IP address ended in .1.3, so the same thing should work from the other side: I should be able to curl this fellow. Nice. Now I'm going to go into the other guy, the hola fellow on the same minion 2, and this guy should not be able to talk to the guy on minion 3. So docker inspect: this guy's got its own IP address, which is unique; the other fellow ends in .1.5, this guy in .1.6, and the hi guy has .1.3, but I shouldn't be able to talk to .1.3 or .1.5. Was it .1.3? I press enter, and it doesn't work, because it's denied.
And if this guy, the hola guy, wants to talk to the namaste guy, whose IP address I happen to know, I can just try it, and yes, it works. On the same machine there also lives the other fellow, which is hi, and this container cannot talk to that container, that pod, even on the same machine. That's the end of the demo. What's going on underneath is the same OVS stuff, but it's much more complicated, because it's not just a subnet anymore. What happens is that every pod gets connected to an Open vSwitch bridge, and we've got rules pre-programmed. This guy, hi, says, I want to talk to hello, and before talking it's going to send an ARP packet; of course there's an ARP packet first. There's a responder in Open vSwitch, using OpenFlow, that says, I already know what the MAC address of that guy is going to be; here it is, here's the ARP response. The sender says, okay, fine, I've figured out what the MAC address is, and then it says, can you send this packet to that MAC address. The OpenFlow rules then ask: are you a genuine guy, am I going to verify that you are a container which is meant to talk to that destination? That means I'm going to check your tunnel ID, then check whether the destination is this guy, and if so I'm going to send you to another port where I'll tell you how to reach that guy, and when the packet reaches the other side it checks again that you guys are genuinely allowed. If a spurious container tried to talk to a guy it was not supposed to talk to, then the OpenFlow rules would not match, and it would say, hey, I don't see your tunnel ID matching anywhere, which means you're not supposed to talk. And that's how it works. I think this was a detailed demo, and we can go over the architecture in detail later. The summary is that we're using Open vSwitch and OpenFlow for isolation; the other demo was a flat network where we just assign subnets, where there's no isolation but there is reachability across pods. Any questions?
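For a flavor of how that tunnel-ID check can be expressed, here's a rough, illustrative set of OpenFlow rules; these are not the actual rules the demo used, and the table numbers, ports, tunnel IDs, and addresses are made up for the example:

    # traffic from a local pod's port is tagged with its project's tunnel ID, then matched in a second table
    ovs-ofctl -O OpenFlow13 add-flow br0 "table=0,in_port=3,actions=set_field:100->tun_id,goto_table:1"
    # deliver only if the tunnel ID matches the destination pod's project; everything else is dropped
    ovs-ofctl -O OpenFlow13 add-flow br0 "table=1,priority=100,tun_id=100,ip,nw_dst=10.246.1.5,actions=output:4"
    ovs-ofctl -O OpenFlow13 add-flow br0 "table=1,priority=0,actions=drop"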
There were a couple of questions while you were talking which I want to address. The first was that this does seem like a lot of complexity. Our view with OpenShift 3 is that there is a lot of complexity out there; lots of people have different solutions, and we really have two goals. One is to support customers who have existing solutions, who have investments in existing network infrastructure, and help them bring their networks into OpenShift so that they can run things the way they would expect to. The other is that we still want an easy out-of-the-box experience, and what Rajat was showing was built using the same technologies that we expect many people in the industry to work with, so that when we build this example we're actually using the same hook points that our customers, and everyone who's using OpenShift and setting things up, would use to do this for their own environments. So there's a lot of complexity here, and this is a really new area for a lot of people who might be coming from OpenShift 2, but we're really trying to bridge the gap to the future. We want something simple enough that you can configure it out of the box and try it; you may not get all of the power or flexibility that you want, but you have a solution that will scale and fit a reasonable number of use cases. And then we'll keep focusing on helping people go from a world where they're just using flat machines to a world where there's a little more sophistication involved in the network stack. We really want feedback from people in the community as we go forward on this, about the complexity of the various bits and how we can work with Kubernetes to make this easier. We have all been working in Kubernetes with people from a number of different networking solutions; there's actually a meeting scheduled this Friday to discuss some of the ways we want to integrate this into Kubernetes, and we've also been working at the Docker layer to try to make sure there are a lot of networking solutions people can take advantage of that we can just consume as well. So I totally understand there's complexity, and if people have concerns about that, those are the things we want to hear about along the way. Sandeep, if you want to go ahead. So, Sandeep. Can you hear me? Yes, we can. Thank you. Hey Clayton, this is Sandeep from Cisco. A quick question around the pluggability of the networking pieces in OpenShift, and specifically in Kubernetes. I know we've had discussions with Mrunal and Rajat around this, where if an enterprise or a service provider already has, as you mentioned, an overlay network or whatever kind of network, and they have policies around IP address management and so on: is that where you want to go, and how far along are you in terms of getting feedback from the community around what that might look like? So, I'll let Mrunal and Rajat jump in and add anything they want to say. We have agreement, really, that at the point at which the network gets set up, OpenShift is going to talk to Kubernetes, and the bits of Kubernetes that carry that information go down to the point where we actually start the container.
We want to make sure that all of the metadata necessary to make a decision gets passed down, so that there's some hook point at a very low level that says, I want to go start this container that is part of this network, and also has the ability to carry flexible metadata that administrators might set in OpenShift all the way down. At that point we hand it off to a plug point, which might be a local script, it might be some sort of compiled module or something, but we hand that information off and say: here you have all the information. Either connect to something that's already preconfigured, in which case you don't have to make those decisions, or run something that watches OpenShift and Kubernetes to set things up ahead of time. For instance, when a new pod is going to get created, somebody can use the APIs that we've put together in OpenShift and Kubernetes to see that that's going to happen and to react on the other side: I've got a new pod coming in from this particular project, I'm going to go do a more complex decision-making process and decide what network it goes to. I think the more complex things are probably going to be a series of steps, where we get the simple integration in place first and then gradually increase it, but those are goals for us for OpenShift 3.0. Okay, go ahead. I was just trying to add to what Clayton said, that this demo is an example of how an integration might work with another provider. Open vSwitch, and the flows that I showed, are of course replaceable: if there is another network provider which is capable of providing us a network with the right parameters, with QoS and isolation, we can just say, instead of calling Open vSwitch, let's call that network provider's interface. I know we've been talking separately; there's work in progress, I guess, and the more plugins we can have, the better. Sandeep, to answer your question about how far along the idea is: there's going to be a community hangout tomorrow, and we're thinking we'll propose some solid POCs to demonstrate each of the hook points to take this forward. And we can participate as well. That makes sense, because as we were talking about, I guess it comes down to how you hand off that information. So whatever you have, Open vSwitch, if it comes with OpenShift then the hooks are already going to be there, but if we need to add other types of networking, how you actually call that networking piece is what needs to be figured out, right?
Right, and that would be the whole discussion around how networks become pluggable in Kubernetes. I have another question, but this is not related to networking; maybe I'll hold off. Actually, I'll ask it: it's around pluggability, your opinions on how pluggable you want to make OpenShift in general, with networking being just one flavor of it. So the answer there is: infinitely. The practical answer is probably slightly less than infinitely. As a quick answer to that, purely architecturally, we have tried to make it so that every major piece of function in OpenShift 3 is a composition-type setup, so that we have a simple core and then things build around it. Obviously this always comes back to support and complexity, but we would like administrators to be able to rip out whole chunks of the decisions we've made and replace them with their own, as well as have all of the APIs they need to react to things happening in the cluster. We probably need a better document drawn up on how we want to make this pluggability possible; a lot of it is embedded in the designs, but it's not necessarily called out in one place. Sounds fair, and instead of talking about it here, I'll probably just open up an issue to lay out where we're going around use cases for pluggability. Thanks. Does anyone else have any other questions? Do you want to add any closing thoughts, Clayton? Sure. The deck that I showed, we'll send that around; at the end of it there are a lot of different examples of the use cases that we think are application-specific. If you think about OpenShift, our job, our goal really, is to make it easy to run applications, and so we've tried to bake those down into a couple of important use cases around the network for applications. Obviously this is going to continue to evolve throughout the OpenShift 3 timeline, so we'd like feedback on whether there are use cases we've missed, whether there are things that are important to people building applications today, either on top of OpenShift or not, that they're concerned about being possible, because our goal has always been to bring the bulk of the world's applications onto the PaaS so that administrators can broaden their operational efficiency. With networking, we're trying to make a lot of it automatic, and to have specialized solutions where possible, so that it's one less thing administrators have to deal with. Working with this sort of automation of applications at scale, if you can define some simple rules that say all of these applications are isolated and all of these applications should be linked together, that takes one more burden off the administrator's shoulders. All right then, well, thank you very much everybody for coming, and Clayton and Mrunal and Rajat for taking the time today. We'll have another session next week on storage, and I'll send out the notes to the OpenShift Commons mailing list, and you're all invited to join us for that. This recording will be up in a couple of hours, probably, after we're done processing it, and available on the OpenShift Origin site, and I'll post that link to the mailing list as well. So thanks again, everybody, for coming and participating in the OpenShift Commons.