Good morning. It's still morning, right? It's been a very long morning. Okay, welcome to the Cisco sponsor track room. This is the fourth and final session for us today; glad you were able to join us. Let me quickly introduce our two presenters, Himanshu Raj and Sanjeev Rampal, from our container team, part of the Cloud Platform and Services group in cloud products. They're going to do a quick talk for you on networking policies across containers and VMs. Oh, just one last thing: be sure to stop by the Cisco booth to check out some demos, and on your way out today, stage right, we've got some "Runs on OpenStack" running socks. So grab them at the door on your way out, thank you all for coming, and I'm going to get out of the way. Okay, mic working fine, great. Welcome again, and almost happy afternoon. Good afternoon to you. I know we are between you and lunch, so we will try to be on time and get it done. The good thing is lunch is right across the hallway, so it's not too far. So again, Sanjeev and I are going to talk about unified policy enforcement for containers, VMs, and in general bare metal. So let's get the show started. The infrastructure today presents a really nice opportunity to do mixed-mode application deployment, based on your application's needs and where it is in its overall evolution. An application can be nicely deployed as a mix of bare metal components, VMs on OpenStack, and, if the number of Kubernetes presentations at this conference hasn't convinced you yet, containers, very soon. So what are some of the challenges in this model? We know this is where the world is headed, but this architecture has three main challenges. One is: how do you do end-to-end policy design and enforcement? We have some policy enforcement points for VMs, we have them for bare metal, we have them for containers. How do you put them all together?
How do you do end-to-end monitoring? Some of the challenges there: you have monitoring data coming from one layer, and it doesn't understand or know about the other layers. How do you put it all together? And then, how do you make use of your infrastructure's capabilities to get the best performance you need? Just to illustrate that last bullet a little bit: this is how people have traditionally put the networking together. You have bare metal, and then you get VMs, so you put overlays on top; then you get containers inside VMs, so you put one more overlay on, and you end up with overlay on overlay on overlay. In that kind of a world, you again end up with all of these challenges. All this encapsulation hurts your performance. It also obscures visibility: the lower layer does not know about the higher layers, it is just looking at tunnels, so you don't know what's going on. And it makes it challenging to integrate with some of the hardware appliances that you might have. So overall, these are some of the challenges that mixed-mode deployments are facing, or will face. To address some of these challenges, we are going to talk about Contiv networking and how it takes care of them. This is the rest of the talk's agenda. Having gone through the hybrid deployment challenges a little bit, we are now going to talk about Contiv networking, in brief, not in too much detail.
I'm happy to talk more about it after the presentation, and there are some links in there as well. The main topic of discussion today is the integration between Cisco's SDN fabric, ACI, and Contiv, which allows us to do end-to-end policy enforcement, monitoring, better performance, all the good stuff. The focus of today's talk is purely on the end-to-end policy enforcement; the other benefits are there and available, we are just not talking about them today, and you're welcome to talk to us afterwards or go try it out. It's all open source and alive. This will be followed by a live demo, and I hope the demo gods are happy and we are able to do it live, but just in case, we also have a recorded demo as well, so you will see something either way. So, without further ado, what is Contiv? Contiv is a container networking fabric. It is a hundred percent open source solution; nothing proprietary about it. The two features that make Contiv perhaps the most powerful container fabric are these. First, its support for a lot of networking modes. You can of course use an overlay to quickly put your containers together on bare metal or VMs, but it also has the capability to run containers in L2 mode or L3 mode, which puts your containers on the same plane as your VMs and bare metal. That's what is so cool about using Contiv with your containers: all of a sudden you have visibility into your whole infrastructure, at every level. Second, it integrates with ACI. We'll talk more about ACI coming up, but if you have an ACI-capable fabric, Contiv integrates with it well and gives you all the benefits of defining unified policy enforcement, and so on. It also has RBAC capabilities and other enterprise features built in. It's available today, it works with Kubernetes, OpenShift, and Docker Swarm, and it's an open source community project.
It has been made to work with Mesos and Nomad as well, but we officially support the three platforms listed there. Okay, so focusing a little bit on the policy side of Contiv, here is the high-level architecture. There is a concept of a policy manager, where you go and define policies. Contiv takes those policies and distributes them, working with the container scheduling platform, like Swarm or Kubernetes, and gets them onto all of the hosts, where the actual policy enforcement is done by the Contiv agent. This policy layer integrates with the underlying hardware as and when required. The policy system we just quickly glanced over enables micro-segmentation for your applications. What that means is that you not only can isolate applications using networks; if you're familiar with some of the ACI terminology, it can also create groups of those applications. Tiers of your application can be further broken down into endpoint groups, or application groups, and you can define policies on those. So you have the ability to define policies on specific parts of your application, which provides a granular, scalable way to express policy based on how your application is constructed. That's as much detail as I'm going to go into about Contiv's policy model at the high level. Now, switching gears a little bit to what ACI is, which I alluded to before: it's Cisco's SDN fabric. It enables network administrators to define an application-centric view of an application, as shown on top; you have multiple tiers like web, app, DB, and so on. It has the capability to distill that view into policies that are applied at the fabric layer.
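As a rough sketch of what the endpoint-group and policy model just described looks like on the Contiv side, here is an illustrative set of netctl commands. None of this is taken from the talk; the object names ("app-net", "web-epg", "db-epg", "web-policy") and the exact flag spellings are assumptions based on the Contiv 1.x CLI, so treat it as a sketch rather than a reference.

```shell
# Whitelist policy: only allow the web EPG to reach the DB tier on TCP 6379
netctl policy create web-policy
netctl policy rule-add web-policy 1 \
    --direction=in --protocol=tcp --port=6379 \
    --from-group=web-epg --action=allow

# Carve two endpoint groups (Contiv EPGs) out of one logical network,
# attaching the policy to the DB group so enforcement happens at its members
netctl group create app-net web-epg
netctl group create app-net db-epg --policy=web-policy
```

The Contiv agent on each host then programs these rules wherever the scheduler places the matching containers, which is the distribution step described above.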
So you group your applications into, again, endpoint groups, you define those policies, and then you basically deploy them on this fabric, and the policies are enforced in the fabric itself. You don't need anything else to enforce the policies; the top-of-rack switches and the upper-level spines and leafs do the actual enforcement. So this is what ACI provides. Now, if you haven't guessed where this is going: Contiv is doing exactly this for your containers, without requiring ACI. Of course, Contiv is generic; it works with any L2 infrastructure, L3 infrastructure, or even an overlay. So Contiv does all that for your containers, and ACI has the capability to do it for your bare metal and virtual machines. Now let's put them together, and that's where we get the wonders of integrating Contiv and ACI. The very first and key benefit, the one we are going after in this presentation, is uniform policies for any workload. You have a mixed-mode workload that you are going to build with bare metal components, virtual machine components, and containers, and with this integration you have the ability to define a uniform policy that works end to end. You get policy automation: you no longer need to go to different places, type out some policies here, upload them there, and define container policies differently somewhere else. You can automate all of it; all of the SDN controllers provide REST APIs, and you can use your favorite tools to build out end-to-end automation for your policies. You get the scale benefits that you otherwise get for just ACI. Everything works at the same endpoint level: your containers have an IP address, just like your host has an IP address, just like your VM has an IP address. Everything is visible end to end. And you get high performance.
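To make the automation point concrete: because both controllers expose REST APIs, the end-to-end automation can be plain HTTP. The sketch below is illustrative only; the hostnames, the netmaster port, and the Contiv path are assumptions from Contiv 1.x deployments, while the APIC login and class-query endpoints follow the published APIC REST conventions.

```shell
# List Contiv networks from the netmaster (netmaster commonly listened on :9999)
curl -s http://netmaster:9999/api/v1/networks/

# Log in to APIC (standard aaaLogin.json endpoint), keeping the session cookie,
# then query all tenants the fabric knows about
curl -sk -X POST https://apic/api/aaaLogin.json \
    -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}' \
    -c apic-cookie.txt
curl -sk -b apic-cookie.txt https://apic/api/class/fvTenant.json
```

Any configuration-management or CI tool that can issue these calls can therefore drive both the container-side and fabric-side policy from one pipeline.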
You are not paying the cost of encapsulation on encapsulation. And you get good visibility, telemetry, diagnostics, all of that; you now have a single pane of glass and can see what's going on in your overall system. So this is how the integration works at a high level. I showed the slide earlier about Contiv, where we have the policy manager, or the master, or the controller, whichever way you want to name that control-plane component, which integrates very tightly with the container management platforms, Kubernetes and Docker. It also has a smaller component inside it called the APIC gateway. APIC is the controller for the ACI platform, and using that gateway, Contiv integrates with the APIC controller deployed in the underlying fabric. This way, your APIC is working with your bare metal machines and your VMs running on OpenStack, on Hyper-V, on VMware, with all those integrations that already exist, and with this integration we did between ACI and Contiv, you now have a similar integration available for containers. So we have the Contiv agents running on the hosts, and end to end, the policies that you define on your container cluster make their way into ACI using this integration. So, just as an example, how does this all work together?
Yeah, this is a sample workflow of how people use Contiv and ACI. A network admin goes and creates the objects they already create with ACI: they define a tenant for the network, create a network, and so on. All of that information is exposed to Contiv, because Contiv is integrating with APIC. So you define them in your APIC controller, inside your ACI; Contiv ingests that information, and it becomes available to you in the container landscape. Then the container administrator comes in and builds out some policies around the container workload. A sample policy is shown there: which web containers are going to talk to which DB containers, on what port, and so on. Those policies are automatically converted into ACI policies, and they are pushed into APIC. So when this workload is then deployed in the system using the container scheduler of your choice, Swarm or Kubernetes, and the workload lands on different hosts in the platform, your policy goes with it. The ACI fabric has taken the policy that you defined earlier and has pushed it to those top-of-rack switches, all the way through the whole fabric. So when communication happens between containers, or between containers and bare metal, or containers and VMs, the policy will be enforced end to end. So that's as much talking as I'm going to do. I'm now going to hand it over to my partner in crime, Sanjeev, who is going to do a demo of this overall system. Thank you. So yeah, we'll try and do a live demo. We have to VPN into Cisco for this, so there's a chance something along the way could go wrong, but we think, hopefully, it should work. And I need to make sure I have not logged out. Okay. So, what are we going to demo? This is the physical model that represents our demo topology. We'll talk a little bit more about this later, but what it really is, is two side-by-side clouds.
So we talked about mixed-mode applications. How exactly do you do that: are you running one cloud on top of the other, or side by side, which way? In this case, we have Cloud A, represented by those black squares, which are the servers; that's our, quote-unquote, container cloud running on bare metal. That's running side by side with Cloud B, which is our VM cloud; that could be OpenStack, that could be vSphere. And these are connected by a common networking fabric, that spine-leaf architecture Himanshu was talking about; in this demo it's an ACI fabric. So this is the model we are going with. We expect a lot of customers will do this, because it allows them to get maximum performance both for the containers and the VMs; both are running individually on bare metal, with no tunneling of one on top of the other. But by having the integration we've been talking about, and which we'll demo now, we show that you get that seamless view without needing to tunnel one on top of the other. So this is the network we are using; again, these represent the clouds that these servers map to. Okay, what is the demo application? A pretty simple application is what we're going to use here. This is a multi-tier application. Some of the components of the application have been containerized, and some of the components are running in VMs. That may be a transition scenario for somebody moving to containerization, or it may be a permanent configuration; we don't think we can anticipate all the different combinations of containers and VMs that customers will want to try, but this is a reasonable pattern that we expect will be used.
So everything above the black line is running in Cloud A, the container cloud running on bare metal; everything below the black line is running on an IaaS cloud. In this particular demo, the front-end components of the application will be running in the container cloud. We've actually got several components there: a load balancer, HAProxy, balancing to two front-end web components, which are NGINX containers. They are then connected to back-end container components, which are Alpine Linux containers in this demo, and eventually they connect to a virtual machine on a separate cloud, but that cloud is sitting side by side with the container cloud. So this is the tier topology that we're going to use. We're going to create a logical service for that first tier and assign it a group; we're going to call it the default group. We're going to have a tier for the second layer, and these containers are going to be assigned to the privileged group. The privileged group gets to talk to the virtual machines on the virtual machine cloud, and the containers in the default group do not. So this is essentially a form of micro-segmentation: we are segmenting, even within the application, which components can talk across clouds and which can't, and what the communication patterns are within the cloud and across clouds. So with that, let's switch to the CLI and run the demo from there. The display isn't switching; there's always one demo god that isn't happy. At least one god doesn't use a Mac, usually. All right, so hopefully everybody can see this and the lettering is large enough. Yeah? Looks fine. Okay, so we have a demo script; I'll just run through that. So let's see: what do we have in this environment?
We actually happen to be running an OpenShift cluster for the container cloud here. As you might know, OpenShift is a distribution of Kubernetes. We're using OpenShift Origin in this case, the open source version of OpenShift, version 1.4, which is using Kubernetes 1.4, as seen here. And we're using a very recent version of Contiv; you see the version here, 1.0.1, released just a few weeks ago. netctl is the CLI interface for Contiv. Contiv has a number of different management interfaces; there's a CLI, there's a GUI, and we shall be using the CLI for this demo. Okay, so let's just check what's going on in our cluster. We've got a cluster of four bare metal nodes. Let's run through this a little bit quickly, because we lost a little bit of time. So let's look at some global settings of Contiv. You can see that it's set to run in ACI mode; as Himanshu mentioned, Contiv supports a number of modes, and we can go over the pros and cons of the different modes with you offline. In this demo, we're using ACI mode. We've given it a few parameters to work with, as an example, a range of VLANs to use to indicate the Contiv networks into the ACI fabric. So let's keep going. We've pre-created some of what we call contracts in netplugin. These are analogous to contracts in ACI, but these are Contiv contracts. In this case, these are external contracts, because we've imported contracts from an external cloud somewhere; here we've actually got two separate clouds. So Contiv has the ability to facilitate multi-cloud policies by being able to say, okay, I'm importing a policy from another cloud. In particular, in this demo we're going to focus on those last two contracts, the external-VM consume contract and the external-VM provide contract, because we're going to use those to interact with the external VMs sitting across on the other cloud. We also need to create policies for internal communications.
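The pre-setup just described, putting Contiv into ACI mode and importing the external contracts, would look roughly like the following. The flag names are from memory of the Contiv 1.x CLI and should be treated as illustrative, and the contract distinguished names (uni/tn-.../brc-...) and object names are placeholders, not values from the demo.

```shell
# Put Contiv in ACI mode and give it a VLAN range with which to map
# Contiv networks onto the fabric
netctl global set --fabric-mode aci --vlan-range 1100-1200

# Import contracts that already exist in ACI (e.g. the ones guarding the
# VM cloud) as Contiv "external contracts", one consumed and one provided
netctl external-contracts create --consumed \
    --contract uni/tn-demo/brc-extVM ext-vm-consume
netctl external-contracts create --provided \
    --contract uni/tn-demo/brc-extVM ext-vm-provide
```

Importing rather than re-authoring the contract is what lets the container cloud participate in a policy that was originally defined for the VM cloud.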
So ACI in particular has a whitelist model: you have to explicitly say which components are allowed to communicate, and by default they're not allowed to. So we create a policy, but we haven't actually added any rules to this policy yet; we'll see at what point we need to do that later in the demo. And networks: with Contiv, you can actually create multiple logical networks, and you can tag applications to reside on separate container networks, perhaps for the purpose of different policies. In this demo, we shall use just the one demo network to keep things simple, but we shall use separate policy groups within that single network. Note that we've set aside a subnet out of which we're going to hand out IP addresses. Contiv has an IP address allocator as well, and you can use it to match your enterprise topology in terms of private and public address space. You can have globally routable container IPs, or you can have truly RFC 1918 private IPs and use NAT and so on; it gives you a lot of flexibility in how you embed this into your enterprise topology. Okay, so we are now going to create the policy groups. They are called EPGs in Contiv, to align with the ACI terminology, but these are Contiv EPGs; we just happen to use similar terms. So we created a default group, and now we are going to create the privileged group. If you recall our application map, we had these different groups with different permissions.
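The objects described so far, one logical network with a subnet for the allocator, an internal policy that is still empty because of the whitelist model, and the default EPG, might be created like this. Again, the names and the subnet are made up for illustration; only the command shapes follow the Contiv 1.x CLI.

```shell
# One logical network whose subnet feeds Contiv's IP address allocator
netctl net create --subnet=10.36.28.0/22 --gateway=10.36.28.1 demo-net

# A policy with no rules yet: under the whitelist model, members attached
# to it cannot talk until rules are explicitly added
netctl policy create internal-policy

# The default EPG, carved out of the single demo network
netctl group create demo-net default-epg --policy=internal-policy
```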
So now we're creating the privileged group. Notice that when we created the privileged group, we added additional external contracts to it. These are typically administrator actions: the administrator says the privileged group gets more contracts and the non-privileged group gets fewer, and then the applications consume those groups and contracts. All of these groups are wrapped up in what we call an application profile, and that's the encapsulating object which is sent over to the controller of the physical network, in this case ACI. So we've done that; we've pushed all this information to ACI. Let's take a look at the APIC GUI. So what happened on the ACI side? We talked about there being different roles. There will typically be a network fabric admin role, the person managing the ACI network, and there will typically be a DevOps admin, somebody managing the container orchestration layer, and we expect that will also be the admin managing the Contiv layer. All of the CLI operations we saw so far were for the DevOps admin, who typically works on the server side. This is what the network admin sees, the person administering the ACI network. He actually sees that a tenant got created; this was pushed by Contiv through the programmatic interface of the ACI controller into ACI. It happens to be called "default" in this case, not particularly imaginative, but that's what it is. And let's see: all of these have been programmatically created in ACI, so we see that he's got two application profiles. This is the Contiv infra app profile, which we just pushed, and this other particular profile was actually pre-created.
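For completeness, the privileged-group and application-profile steps just narrated might look like the sketch below. The flag spellings, especially for attaching an external contract and for listing groups in a profile, are assumptions about the Contiv 1.x CLI, and the names carry over from the earlier illustrative commands.

```shell
# The privileged EPG additionally consumes the imported external contract,
# which is what entitles it to reach the VM cloud
netctl group create demo-net privileged-epg \
    --policy=internal-policy \
    --external-contract=ext-vm-consume

# Wrap both EPGs into one application profile; this is the encapsulating
# object Contiv pushes to APIC
netctl app-profile create --group=default-epg,privileged-epg demo-app-profile
```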
So we could do this in any sequence: you can have a profile for the VM cloud first and have the container cloud connect to it, or have it created first for the container cloud and have the VM cloud connect to it. In this case, we had the VM contract pre-created as an external contract, the one on the right, and then we created and consumed the container contract from the Contiv side. We could spend a lot of time going into this; I'm not going to go into every detail here, but we have some demos on our website, and please feel free to come by our booth as well. Just quickly, this shows the two EPGs that got created, the default group and the privileged group, expanded a little if that helps. You can examine the details of the security policies, and you can even see the details of the container endpoints once we create the containers. The network admin will see endpoints and be able to do network administration and monitoring just like for non-containerized endpoints. So this goes to the point Himanshu was referring to: you get visibility of all endpoints on the network. It's not like there's some mysterious portion of the network that is hidden; this is all regular networking, and at the same time you have the administrative separation between the network admin and the DevOps admin. Let's go back to the CLI and the rest of the demo from there. So now we are deploying our application; let's deploy the first application tier. Now we are using OpenShift commands: we created a new project, and we're going to deploy an NGINX deployment, a pretty standard NGINX Kubernetes deployment. So let's go ahead and create it. Let's create the service, the Kubernetes service that gives you that abstraction over the deployment that's been created, and then a route. Now, if you remember the application picture:
We wanted an external load balancer mapping to a service with multiple backends, and we just did that. Let's now create the containers in the privileged group. This is a typical developer flow we are doing now; earlier we were doing the administrator's flow. So the developer creates this application and deploys the different tiers. One thing to note is that when he's creating the privileged tier, he has the ability to annotate it, using these labels right here, to say: by the way, this deployment needs to be treated a little differently, I want it to use that group called the privileged group. The application developer knows best what kind of communication patterns he needs, and he tags his application accordingly, but the groups were pre-created by the admin. Okay, so he goes ahead and creates the privileged group, and now we see that all four pods are running, so the first two tiers are up. That all looks good. Let's check whether the application is working: has it exposed a URL? Let's actually go to the web and just open up the URL this application exposes. So this is what it does: NGINX was the first layer, and this is the outside world now talking to this hybrid application. We could play around with it a little bit more, but I think we're going to be tight on time, so I'm going to skip some of that. So right here is the URL we had set up to expose this application's capabilities on, and that's what we got. So essentially, the top-level load balancer, tier one, and tier two have all been deployed, and the load balancer is working. Now let's check reachability: reachability of tier one to tier two, as well as to the external virtual machine. We're going to do that here. Tier one was NGINX; let's just pick any of the NGINX containers right here and do a first check.
Can it reach the virtual machine on the other cloud? That happens to have this address. We're just going to use regular pings; we could do other things, like curl and so on. So, apparently tier one cannot reach the external cloud, which is good, because that's how we set up the contracts for the default group versus the privileged group. So this is good. Can it talk within its peer group? Yes, sure, it should be able to; let's try. Yep, the two NGINX containers can reach each other. Can it reach tier two? No, not yet, because, if you recall, we need policies even for internal communication. So we added the policy, but we did not explicitly add a rule that says these two groups can talk internally. We'll do that in just a minute. So all the expected and required communication patterns are working, both within the cloud and across clouds. Let's continue. We can check tier two; we'll do just the one check on tier two. Let's see. Let me just take a brief second here: is the flow we've been doing clear? We had the application pattern, we had the two side-by-side clouds, we deployed an application that is split across the two clouds, and now we're checking the communication patterns of each component of the application. So hopefully that was clear, and we can go into more detail in the Q&A. This was the, quote-unquote, privileged tier, sorry. So yes, the privileged tier can reach the external virtual machine; the privileged tier can communicate across clouds, and the non-privileged tier can't. And now let's add that remaining policy. We had one internal communication pattern that was missing, so let's fix that by adding that rule. Let's try that again: we will try to reach from the privileged tier to the non-privileged tier. So adding that rule added that whitelist capability.
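The missing-rule fix just performed might look like the single command below. It is illustrative, reusing the made-up names from the earlier sketches rather than the demo's real ones: a rule is added to the previously empty internal policy so that traffic from the privileged EPG is allowed in.

```shell
# Whitelist the one internal path: allow traffic into members of the
# internal policy when it originates from the privileged EPG
netctl policy rule-add internal-policy 1 \
    --direction=in --from-group=privileged-epg --action=allow
```

Because the policy is attached to the group, the rule takes effect on every host where a member container is running, with no per-host configuration.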
So basically, you have to explicitly enable paths, as we said earlier, and we explicitly enabled that particular path. So essentially, what we've seen so far is that we were able to deploy a relatively complex application, with a requirement to run on a mix of virtual machines and containers, with communication patterns required between clouds and within the same cloud, and we were able to do all of this with Contiv policies, which, as and when needed, talked to ACI. But it was essentially the DevOps admin doing all of the application-level policies, in the DevOps admin role. So we've configured and confirmed all of this. That's it for the demo. I also want to show one thing before we end. We saw all this going on, but what was actually happening at the physical layer? A lot of us are networking people; we like to see where the packets are actually flowing. So let's look at that. If you recall, this was our demo topology picture: we had two side-by-side clouds, and we created this application which was split across these two clouds. Now, the colored dots represent the containers and the virtual machines: the two tiers, the green and the red, were running in the container cloud, and the blue is the virtual machine running in the virtual machine cloud. If we had done this the normal way, we would have treated these as two separate clouds. The only way you normally communicate with a cloud is through its officially exposed endpoints; you don't want backdoor paths, because that bypasses the administrative boundary, and normally you don't even have visibility into it. So if we had treated this as two normal clouds, what would the communication path have been?
It would have been something like this. The red container wants to reach the blue virtual machine. It would have to go out of Cloud A through its WAN router, for example, possibly go through some WAN routing, or, even if there was a direct connection between those two routers, come back into the virtual machine cloud through a floating-IP-based service, perhaps. And this would be inefficient for a number of reasons. A: latency is much higher, because you're going through so many more hops. B: your traffic is passing through bottleneck points. You see those WAN routers there; typically there will be one or two WAN routers, or logical routers, in each cloud providing the external connectivity. So you are having to go out through the WAN routers, come back in, and go through multiple NAT operations: that path over there would need an SNAT operation on the way out and a DNAT operation to the floating IP on the way in. This would be very inefficient. So if you do the naive thing when spreading an application across multiple clouds, you would be taking some such network path, and all of this would be hidden if you were not really thinking about what's going on under the covers: where the traffic is flowing to make this hybrid application work.
But the way we did it, because we had that visibility, we actually took the direct path. The blue line was the more efficient path that our demo just took: when we wanted a red pod to communicate with the virtual machine, we routed directly, without going out of the cloud and coming back in. This was facilitated by a number of Contiv features. Contiv has non-overlay networking, and we actually used it here. ACI is also, in a sense, a form of non-overlay networking; there is some overlay going on inside the fabric, but between the clouds and the ACI fabric it is non-overlay. So you can actually route to the IP address of the backend; that's how we were pinging directly to the IP address of the virtual machine. We also had some of this in the way we set up the contracts. So this is an important visualization of what was really happening when an application was working across these clouds, and this is why we think the combination of Contiv and ACI is very compelling, especially when you're doing multiple clouds with mixes of containers, virtual machines, and bare metal, which we believe will be a necessary pattern as we migrate from virtual machine patterns to some kind of mix of containers and virtual machines. So this is essentially a look at the network topology. With that, that's all we have for now. We have a website where you can get a lot more information, and all of us are on our Slack channel, where we answer questions. So we invite you to go ahead and try Contiv, with or without the ACI feature, and we are always looking to add to it.
So with that, I will now take any questions you may have. We do have probably a couple of minutes for some quick Q&A. We do have to start making time for the next presenters, and our guys need to go have some lunch and get back in the HV room, but I'm sure Sanjeev and Himanshu will be happy to stand up here and take some questions. It looks like Charles might have a question. Yeah, I do. Thanks for that. I really appreciated the way the traffic could be routed more efficiently between the two clouds, but I also imagine that everything we did there was by IP address, at least in the demo. When you have containers and VMs, I assume there would be cases where you wouldn't be routing by IP address; they might not have a dedicated IP address and truly have no overlay. Could you discuss a bit more how that works? So you're referring to, perhaps, how would service discovery work? Right, so I think we need to think through how we can leverage DNS to provide the ability to point to the IP addresses; I don't see any issue with that.
You can have a DNS that gives you resolution for endpoints in both clouds, and then each of them can use that DNS service to say, okay, if I want to map to my backend VM, I actually get back an IP address, which is analogous to what we demoed. So I think that would be a way to do it. But that's a great point; we should think through the service discovery, and we'll probably incorporate that into a future demo. It should be possible. Just to add to that, and my interpretation of your question, correct me if I'm wrong: basically, in this demo we are running at the same IP address level, in some sense. We are exposing the internal IP ranges of the clouds to each other. So when we were running with that /29, or whatever the IP space was, it was visible in the whole fabric; there was no NAT-ing or anything going on. Similarly, from the VM cloud side, the address that you saw was visible inside the ACI fabric. So we are effectively leveraging the fact that we are sitting at the same layer-3 reachability level. And you're assigning an IP address to every container? Yes, yes. Yeah, so we skipped a lot of those details; Contiv has one IP per container, and lots of other things. I think everybody's hungry. So, Himanshu, Sanjeev, thank you very much. Stop by the Cisco booth for demos and more questions, and don't forget your "Runs on OpenStack" running socks as you go out the door. Thank you all very much for coming today.