Well, hello everybody, and welcome to another OpenShift Commons briefing. This time we're going to get an overview of OpenContrail and using it with OpenShift, and we have a number of folks from Juniper here; it's great of them to come and do this for us, because it's something that's near and dear to my heart. It's an ever-changing thing, and there's lots of new release information coming out. James Kelly is the lead cloud architect, and I'm going to let him introduce the rest of his team. There's a whole bunch of content here. The way we do these sessions: ask your questions in chat, we'll try to answer them in chat, and the great questions you ask we will read out loud at the end, when we open it up for live Q&A. So without any further ado, James, take it away.

Thanks very much, Diane, and thanks to everyone in the friendly Commons community for tuning in, and certainly for having us folks at Juniper and from the OpenContrail community come and talk a little bit about how OpenShift and OpenContrail work together. As you mentioned, I'll introduce us briefly, but we will be passing the torch from myself to Guillem and then to Savithru as we go, so I'll let them say a little bit more about themselves. As you can see on the slide here, I'm a cloud architect. I've been at Juniper for a little over 10 years now, with a software developer background, basically working on software development kits, Contrail, software-defined networking, and other projects at Juniper. These days I find myself inside the portfolio marketing team, so I'm a little bit more of an armchair architect than the hands-on architect I was in the past. Nevertheless, I still keep my technical chops up, and I'm happy to have worked quite a lot with OpenShift, actually. Kubernetes generated a lot of interest and caught my attention, and I quickly saw how Red Hat and the OpenShift community embraced it; it was something really exciting to see, and I've done podcasts and OpenContrail hacks with OpenShift dating back more than a year.

So that's a little bit about me. I'm going to kick off the talk by giving you a brief introduction of what to expect and a little bit of background about OpenContrail, and then we'll cover the agenda a little more on the next slide. Guillem is going to take over from me and talk about the integration between OpenShift and OpenContrail, and then Savithru is going to take it from there with a live demo. So without further ado, let's jump in and talk about the agenda. What to expect today? Like I said, I'll go through a quick introduction of Contrail, as we colloquially call it for short. As a project it's OpenContrail, but we usually just say Contrail, so I'll use the two interchangeably and explain the difference between Juniper's Contrail offering and the OpenContrail community and project. Guillem has the second part of this: the integration, the features, and the installation of Contrail with OpenShift. And then, like I said, Savithru has the live demo. And at the end, we've put in some links and URLs where you can go and find out more information, and we'll see if we can get those links put into the YouTube video description below the video as well.

So first things first, we'll talk a little bit about OpenContrail and give you some background on it. You might have heard about OpenContrail as a project.
It's been around since it was seeded by Juniper Networks in 2013, and it carries a permissive Apache 2.0 license. The software was originally written by a group of people from the networking industry, as well as folks from Google and elsewhere, who came together to write a software-defined networking solution for creating virtual networks in a more flexible, software-only way. That was done in a startup called Contrail Systems that Juniper acquired in 2012. Since that time, we've been building and chugging along, adding features to OpenContrail.

The first mission of OpenContrail was to be a network automation, or SDN, solution for OpenStack, which probably most of you in the OpenShift community know as well; certainly those of you familiar with Red Hat know they have an OpenStack offering. OpenStack is fairly mature now, and quite a lot of people are familiar with Neutron and the networking side of OpenStack. OpenContrail was probably the first big, popular open source SDN solution for OpenStack that was not just open source but also commercially available, quite mature, and ready to go into production. We've seen large customers, as big as AT&T, many gaming companies, and many large enterprises and banks adopt it. OpenContrail is known in the OpenStack community as the number one open source, but also commercially available, software-defined networking solution, according to quite a few user surveys run in that community.

Building on that heritage, we saw applicability for this in other use cases. We added support for VMware a few years ago. Then containers and Docker came on the scene, and with the excitement and wild success of the Kubernetes community, we said we really need to address this market as well; it makes sense to have one solution from a network perspective that can do all of these different things. With OpenShift building on and wrapping around Kubernetes, it was a natural extension for us to support OpenShift in OpenContrail. Beyond that, Juniper, which has one of the commercial offerings of OpenContrail, formally called Contrail Networking if you're looking it up on juniper.net, and Red Hat have worked together for quite a long time as mutual technology alliance partners, so it really made sense for us to add support for OpenShift. The team will talk about the specific versions and how that's happened. You've seen support for OpenShift in the OpenContrail community for over a year now in alpha/beta experimental modes; we've recently announced more formal support for it and are going through certification of Juniper Contrail Networking with OCP, the OpenShift Container Platform from Red Hat. So we're really excited about that.

A quick word about what OpenContrail is. I've said it's basically an SDN. It's an SDN that's opinionated in the sense that it's only software-based. It allows you to create very flexible networking and security constructs with pretty sensible defaults, but there are a lot of advanced SDN and customizable options in it as well.
It has obviously matured from working with big customers over a long time, with years of development from hundreds of engineers. In the space of OpenShift, you could generalize it to a few things; this is really general, but it adds multi-tenancy support at the network level, as well as micro-segmentation support at the network level. These are probably terms you've heard in both the OpenShift community and certainly the Kubernetes community. I know micro-segmentation, the ability to do custom network isolation, is top of mind right now: Kubernetes 1.7 was just released, and network policy objects, which were incubating for quite a while, are now generally available. That's something OpenContrail provides, amongst other things.

Going along to my next slide, we position OpenContrail as, like in The Lord of the Rings, the one ring to rule them all: the one SDN to rule them all. We know that with OpenShift, just like other orchestration systems, you have a choice of which SDN you use, and there are definitely a lot of other contenders in the space of software-defined networking projects, both closed and open source; some companies even offer several. Juniper has just one. Our philosophy is not to build a different best tool for each job, because that leads to a lot of variability in your stack, which is harder to maintain and makes it harder to level up skill sets for network ops people, but rather to have one tool that lets you address many different things. So, like I was saying, over time we've added support for various flavors of virtual machines, Linux bare metal, and container runtimes. Today we're talking about containers-as-a-service and platform-as-a-service, with support for Kubernetes, Mesos, and OpenShift, and other custom container orchestration systems that people have invented and written about on the OpenContrail blog. I've already talked about how, if you're building a cloud, an infrastructure-as-a-service, OpenStack is probably one of the things that comes to mind, and OpenContrail works in that space. But OpenContrail also works on top of many infrastructure-as-a-service offerings: if you're running OpenShift on top of OpenStack, or on top of Amazon Web Services, or Google Cloud, or really anywhere, as long as there's IP connectivity between all of the hosts, OpenContrail can work.

A little more of an overview: we've talked about the top level, the different environments that Contrail supports. Many of those on the upper left-hand side are more enterprise-related, but then you see things like Amdocs, which gets into the telecommunications, cable, and communications service provider market, where they have operational and business support systems from companies like Amdocs, and there are integrations there. And there's a wide variety of services, like I said, that have matured over many years of development: fundamentally, Layer 2 and Layer 3 virtual networking constructs where you can really bring your own IP addresses.
As the next item says, there's the DDI, as we call it for short: DNS, DHCP, and IPAM (IP address management), plus floating IP addresses, quality of service, custom security policy, load balancing, which is obviously top of mind for microservice architectures in this audience, and a wide variety of other things. And certainly a northbound API and a web GUI are part of Contrail as well.

We'll talk about the architecture on the next slide at a very high level. One of the biases Contrail has, in terms of being able to work with other systems, is that it obviously needs to work with other networking systems. At Juniper, those would all be based on Junos, and there are the various operating systems from other vendors like Cisco, Arista, Cumulus, and others, with which we've tested interoperability and federation from a network standpoint. We do that with open standards: long-lived, proven network protocols. Our overlays are based on MPLS over an IP encapsulation like GRE or UDP, and our federation protocols for the control plane are based on BGP, which has been used for network virtualization across the Internet for over a decade and a half now, I think. It connects everything, from how the vRouter agent connects to containers, bare metal, and VMs, to other bare-metal systems that might actually live behind routers or top-of-rack switches.

This might make a little more sense seeing how Contrail is implemented, from a logical architecture diagram of building blocks, and a logical view of what it allows you to do, on the right-hand side: create these flexible network constructs. You create a blue virtual network, you create a green virtual network, or whatever you want to call them, and they have various workloads in them. Overlapping IP address spaces are fine; they're their own customizable address spaces. Normally virtual networks are isolated, but policy allows them to be connected together. We can even connect them together with interesting services in between that may be physical or virtual; firewalls are a common example. The left-hand side of the diagram is saying that we have centralized policy definition and distributed enforcement of that policy. If you wanted to parse Contrail at the highest level, you could say it's really two components: a logically centralized controller, the box in the middle, and a distributed agent that we call the vRouter, or virtual router, which runs on all of the compute nodes, whatever they may be and whatever kind of workloads they're running. We build an overlay which can exist on top of any network, whether it's a network like the AWS network or a physical IP or Ethernet network, whatever it happens to be. All of the vRouters connect up to the controller; you see it's using XMPP in the slide. The controller can federate with other controllers, and it can also federate with actual network devices, whether physical or virtual, Layer 2 or Layer 3. A common example is to talk out over the Internet, or outside of the overlay, through a gateway or through a wide-area network. And at the very top, you see it's got a RESTful API that plugs into many different kinds of orchestration systems, OpenShift included.
So with that quick overview of Contrail, let's talk about how Contrail and OpenShift work together, and I'll hand the talking stick over to Guillem now.

Thank you, James, for the introduction and for laying out the architecture and the different use cases of Contrail. My name is Guillem. I'm a solution engineer; I've been with Juniper for about two years now, working in the Contrail business unit, mainly with telcos and large enterprises on how Contrail works, solving very different issues through SDN across different types of virtualization, and lately mainly around container runtimes, especially with Kubernetes and OpenShift.

So let's jump right in and look at how we integrate with OpenShift. Just to put Contrail on the map, I think it's pretty obvious, but when you look at this stack, which I guess most of you are familiar with, we sit at the networking layer, and we replace the native OpenShift SDN solution that comes with the traditional OpenShift stack. That being said, let's look at how we plug it all together. I guess you're familiar with the basic Kubernetes architecture: you have your master and the different nodes, and the different components of a Kubernetes master, the API server, the replication controller, the scheduler, et cetera. What we've done is implement a plugin that we call the Contrail Kube Manager, which listens to the API server and, based on the events it sees there, creates the appropriate objects on the Contrail side. A little bit in the same fashion that the kubelet watches the API and updates its configuration when it sees that a resource has been created, we do the same thing on our end: watching the API server, creating the appropriate objects in Contrail, and then updating the configuration of the vRouter accordingly. That's what happens from a control plane perspective. We integrate in the background; it's totally transparent. From the user perspective, the workflow stays the same: you still create your pods, services, and deployments in the same fashion, and Contrail picks up these events and implements the network configuration in the background.

From a data plane perspective, James laid out the architecture with the control plane and the distributed data plane. We have our component called the vRouter that installs on the different compute nodes and replaces the kube-proxy part. We substitute for it because we implement our own rules and policies there, and we integrate with the kubelet through the CNI plugin. When the kubelet needs to instantiate pods on a certain node, it calls out through the CNI plugin to the vRouter to provide the networking configuration, and the vRouter will have received its configuration from the controllers, which would have been triggered by the creation of an object in the system through the Contrail Kube Manager. So that's basically how we integrate: the same behavior as far as Contrail is concerned, with just one more component, the Kube Manager, that listens to the API and triggers the configuration on the Contrail side.
Just to give you a view of the mapping of the main Kubernetes primitives to Contrail objects, because some people are familiar with Contrail in an OpenStack environment: a namespace maps into Contrail as a project. Every time you have a namespace and you want to isolate it, for example, we create a specific project in which you'll have a virtual network, et cetera; we'll see that in the next slides. The pod concept relates, from a Contrail perspective (not a conceptual perspective), to a virtual machine. A service equals an ECMP load balancer; what I mean by that is that every time somebody creates a service in OpenShift, it results in the creation of an ECMP load balancer to reach the pods that are instantiated in the infrastructure. As far as ingress is concerned, we have our own implementation, which relies on an HAProxy load balancer, and we'll touch on that in the coming slides. And we also integrate with network policy objects: every time somebody creates a network policy, we implement security groups in the backend to allow or deny communication between the different pods and services. We'll see through the demo that we support this type of implementation.

Looking at the different features: James mentioned in the introduction that we bring different types of isolation to the table. We obviously support the default mode, known as cluster mode, where we have one large virtual network fetching IP addresses from two different IPAMs, a pod IPAM and a service IPAM, and we implement all the overlay networking to allow the pods to talk to each other and the services to be exposed externally and reached from outside of the cluster. That's all supported. What we implemented on top of it is the concept of namespace isolation. When you create your namespace, just by defining an annotation in the namespace declaration, you can trigger the creation of a separate project and a separate virtual network on the Contrail side. What that does is isolate all the pods and services that belong to this namespace, and then, based on the virtual network, you can control who can talk to whom. That gives you flexibility: inside your namespace you can still apply network policies, et cetera, but also between your namespaces, and between the resources inside a namespace, you can control all of that using Contrail networking policies. So we have two main models: default and namespace isolation.
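To make the namespace isolation concrete, here is a minimal sketch of what such a namespace declaration might look like. The annotation key shown is the one the OpenContrail Kubernetes integration documented around this time, but treat it as an assumption and verify it against your Contrail release notes.

```yaml
# Sketch: an isolated namespace. The annotation key
# "opencontrail.org/isolation" is an assumption from the
# OpenContrail docs of this era -- verify for your release.
apiVersion: v1
kind: Namespace
metadata:
  name: isolated-ns
  annotations:
    opencontrail.org/isolation: "true"  # triggers a separate project and virtual network in Contrail
```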
On top of those two models, it's very flexible for the user, because, as was mentioned, we also have a mode we call custom isolation. That applies in use cases where you want to break your application down into different types of networks: you create networks in Contrail, then spin up your pods in specific virtual networks. There's also a specific use case, for example, when integrating with OpenStack: you would create virtual machines spun up on a specific virtual network, and then pods. Say you have an application with a front end in containers and a back end that sits in a VM running on OpenStack; you can still create the networking between these two layers using this custom isolation, because you can specify where you want to run your front end, and that ties directly into the networking of your virtual machine environment. That's actually pretty nice, and it gives you flexibility across the whole stack: allowing communication between elements, isolating them into namespaces, or allowing them to talk with external resources. So those are the isolation features we bring to the table.

Just a reminder: every time we do that, we provide ECMP load balancing between all the pods, so this load balancing is fully automated, and we provide all the security policies that go with it. We also provide external access, either via SNAT, which we'll see in the live demo, or via the allocation of floating IPs. And as we mentioned, we have integration from the control plane to external gateways using protocols like BGP, so all the virtual networking configuration you do in your cluster can be directly and automatically announced: all the prefixes can be advertised and propagated using BGP to VRFs and stitched into MPLS VPNs, for example, to get out of the cluster, et cetera.

That was isolation; now quickly on ingress. I guess most of you are familiar with the concept of ingress in OpenShift: exposing services to the outside world and being able to route traffic to different services, so that different parts of your application can be reached based on the URL or the hostname and traffic is directed toward the services that then serve their purpose through their pods. I already explained that a service, for us, is an ECMP load balancer, and in front of a service we can have ingress. Every time you create one object of that type, we spin up two instances of HAProxy, one active and one backup, and these HAProxy instances get auto-configured with the routing policy you put in place; we'll see that in the demo. Typically, and that's the example we'll see, there's a /dev environment and a /qa environment: two applications served by two different services, and based on the URL, this HAProxy load balancer routes the traffic toward the respective service.
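As a sketch of that fanout case, using the Kubernetes ingress API of this era (service and path names are illustrative, not taken from the talk):

```yaml
# Illustrative fanout ingress: URL-path routing to two services.
# Contrail realizes this object as an active/backup HAProxy pair,
# auto-configured with these routing rules.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /dev
        backend:
          serviceName: webapp-dev   # illustrative service names
          servicePort: 80
      - path: /qa
        backend:
          serviceName: webapp-qa
          servicePort: 80
```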
One last feature I wanted to talk about is the concept of nested installation. We support using OpenStack as infrastructure-as-a-service software to spin up your machines, in which you can then create Kubernetes clusters, OpenShift clusters, et cetera. The idea here is really to have one SDN controller for both layers: for your machines and for the containers. Going through this diagram: we use OpenStack and Contrail to provide the VMs on which we install OpenShift. From a Contrail networking perspective, that creates a traditional virtual network; our vRouter plugs tap interfaces toward these virtual machines, and then we use those virtual machines to install OpenShift. There you find all the different components; I've just represented the Kubernetes components, but you get the idea, with the main components of the controller. Then, say we create a virtual network that we call green, in this case, to connect OpenStack and OpenShift workloads. When we spin up a virtual machine, that's traditional Contrail networking: we plug a tap interface and create a VRF, so the virtual machine in this green network gets associated with that VRF and gets its own proper routing context, able to reach anything else on this same virtual network.

Then what happens when we launch a pod? That's where, for example, we can use the custom isolation I was talking about. We would say, in the declaration of the application, or in the declaration of the namespace in which we run our application, that we want to associate it with a specific virtual network that has been created in Contrail; in this case, that's the virtual network where the OpenStack VMs are sitting. What we do is use the same interface toward the virtual machine, but we use MACVLAN to separate the traffic toward the different pods. You get this implementation where you have VLANs on top of these interfaces, and it allows you to connect workloads from different types of hypervisors, potentially across different clusters. That's how we power this interconnection between VMs and pods, using a combination of overlay networks and tap interfaces with MACVLAN. You can replicate that as many times as you want and have other virtual machines and other pods on different virtual networks, allow communication between these virtual networks or not, apply different types of policies on them, et cetera.
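The pod-to-virtual-network association just described is annotation-driven as well. Here is a purely hypothetical sketch of what it might look like; the annotation key and the format of its value are assumptions, not confirmed by this talk, so check the OpenContrail documentation for your release.

```yaml
# Hypothetical: attaching a pod to a pre-created Contrail virtual
# network ("green", shared with OpenStack VMs). The annotation key
# and its JSON value format are ASSUMPTIONS -- verify against the
# OpenContrail docs for your release before relying on them.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  annotations:
    opencontrail.org/network: '{"domain": "default-domain", "project": "demo", "name": "green"}'
spec:
  containers:
  - name: frontend
    image: nginx   # illustrative workload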
So that's what I wanted to highlight in terms of features; to recap: all the isolation features, the ECMP load balancing, the ingress implementation, and the support of a nested installation of a container cluster on top of virtual machines provided by OpenStack.

Now you might be wondering: how do I install Contrail with OpenShift? There are different options, and there is still work in progress, so not everything on this slide is necessarily 100% available right now, but I wanted to mention it, and most of it is going to be available very soon. As James mentioned in the introduction, we're working on a certification with OpenShift; that is targeted for our next release, 4.0.1, which should come at the beginning of September, I guess. Then we will be directly integrated with openshift-ansible, which will give you a way to deploy OpenShift and Contrail together at the same time. As of now, to deploy Contrail in a container environment, we have an Ansible playbook; I've pointed to it in the links at the end, so you can go check it out and use it to deploy Contrail with OpenShift or Kubernetes, for example. We're also working on Helm charts. We actually already have Helm charts to deploy Contrail, but they're not yet 100% bulletproof and tested, so they will be supported officially in one of our future releases; but just so you know, and to keep it on your radar, we will have an implementation to deploy via Helm charts, and there is a contrail-docker repo on GitHub that has the Helm charts, so you can go check them out.

One thing we did with the Contrail 4.0 release, which came out a couple of months ago, is that we containerized the whole Contrail software. If you've been following OpenContrail, we had been publishing packages up to the 2.2x versions, and then for 3.x there were no official packages on the PPA; the source code is always available, and one can always go and build it. The idea with containerization, to take full advantage of it, is to make these containers publicly available right off the CI environment. Right now we're reworking the tooling of our CI environment, and very soon we'll make the containers available directly out of CI so that people can consume them and deploy them in their environment of choice.

With that, I'll give you an introduction to the demo that Savithru is going to unfold, covering the main features that will be shown, and then I'll let Savithru narrate the demo as it unfolds. Here's the setup: we have Contrail running with OpenShift, we have a standard namespace in the OpenShift cluster, and we're going to spin up some pods. The first part isn't really presented in the demo, but you'll get the idea; it's basically what happens every time a pod gets created. Pods and services are going to be created, and we'll associate an external IP; in this case we'll do it for ingress, but I'm just giving you the workflow. What it implies on the Contrail side, every time we do that type of operation, is that when we associate the external IP, Contrail creates a floating IP that then gets advertised automatically using BGP to the gateway. That's what I was explaining earlier: we take advantage of MPLS VPN concepts, advertising these prefixes through BGP, which gives the external world access to the resources that are on the cluster. Then, say your application scales up and you change the number of replicas: the service is implemented as an ECMP load balancer, so it automatically load balances traffic between the different pods. You'll see that happening in the background when the services are created.

We'll show namespace isolation by creating a namespace in which we say we want isolation to be effective, isolating the pods and the services inside that namespace. Creating a pod there, we'll see that communication between the non-isolated namespace and the isolated namespace is not possible; on the other hand, pods scheduled in the isolated namespace can communicate just fine inside it. Then, from that namespace, we'll show the SNAT feature: allowing a pod, allowing your application, to access the internet, for example to fetch resources. And we'll finish with a fanout ingress example, a little like what I was explaining in the slides, with two different services and URL routing definitions; depending on which one we query, /qa or /dev, we'll be redirected to the respective service, and each of these services will then load balance the request toward the pods in the backend.
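As a concrete illustration of that external-IP step, here is a minimal sketch using a standard Kubernetes Service. Whether a given Contrail release keys off the externalIPs field or a LoadBalancer service type may vary, so treat the trigger as an assumption; the addresses are illustrative.

```yaml
# Sketch: exposing a service externally. Per the workflow above,
# Contrail allocates a floating IP for the external address and
# advertises it to the gateway over BGP (the exact trigger field
# is an assumption and may vary by release).
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 10.84.31.10   # illustrative address from a public pool
```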
So that's all I had regarding the integration with OpenShift, the different features, and installation, and now I'll hand it over to Savithru for a live demo. Thank you, guys.

I'm just going to interrupt: for those of you asking all the questions in chat, I'm going to ask the presenters who are not giving the live demo to take a look at the chat and see if you can answer some of the questions there while the live demo is going on.

Hey guys, this is Savithru. I'm from the Juniper business unit, working in the same team as Guillem and James, so thanks for that detailed information. I'll quickly jump over to the demo and give you an overview of how exactly Contrail integrates with OpenShift. This is my Contrail web UI, and this is the OpenShift web UI. Initially, when we provision Contrail, we create two IPAMs: the service IPAM and the pod IPAM. The pod IPAM is a large /12 subnet, and whenever a pod is launched it gets an IP from that subnet; whenever a service is launched, the service comes up on the 10.96.0.0/12 service subnet. These IPAMs are associated with one network, which is called the cluster network. Right now, these are the different projects I have in Contrail, which map to different namespaces in OpenShift.

The first demo is about namespace isolation. Let's quickly jump in and create two new namespaces: one isolated namespace and one non-isolated namespace. The whole idea here is to create these namespaces, launch a pod in each of them, then try to ping between them and verify that the ping doesn't go through. By default, OpenShift doesn't provide this isolation; it's a value-add that Contrail brings in. The way we go about creating an isolated namespace is with the isolation annotation: whenever the annotation says isolation is equal to true, we attach the relevant security groups to the virtual network, and that's how we isolate that namespace from the rest of the world. So let's go and create this namespace; in the non-isolated namespace, I have isolation set to false. These are my two new namespaces, the isolated and the non-isolated, and as you can see, this is replicated in the UIs. Whenever we create an isolated namespace, a new virtual network is created. The reason we do this is that we don't want the pods present in this isolated namespace to talk to pods present in non-isolated namespaces, so we create a security group saying that pods within this namespace can communicate amongst themselves, and we deny all the rest of the pods.

Let's see how it works. Let's jump into the isolated namespace and create a pod; I have a simple Ubuntu application. Let's also create a pod in the non-isolated namespace; I'll open another tab for the isolated one. As Guillem mentioned in his briefing, whenever a pod is created, we create a virtual machine interface, and that's the interface attached to the pod. As you can see, in the isolated namespace we have this Ubuntu application, which gets an IP of 10.47.255.251, and its network is the isolated one. Similarly, if we go back to the non-isolated namespace, we see that the pod got an IP from the cluster network, 10.47.255.249, and if you look at the security groups here, there's no isolation at all; we allow ingress traffic from all IPs. So now that these pods have come up, I'll pick the IP address of the pod in the non-isolated namespace, ending in .249, and the pod in the isolated namespace, ending in .251.
Let's go ahead and ping the .251 pod in the isolated namespace from the pod in the non-isolated namespace. This guy has an IP of 10.47.255.249, and I'm going to ping 10.47.255.251, the pod IP of the isolated pod. As you can see, the traffic doesn't go through, which shows that we have isolated the pod from the cluster network. So that's the demo of the isolated namespace.

Now let's set up the source NAT functionality, where we allow internet access to these pods. By default, these pods cannot access the internet: we take the view that pods are pre-built packages that don't need to access the internet, so we thought this would be a security functionality we could enhance, and hence we don't allow internet access by default. Let's try to ping Google's DNS server; right now, as you can see, the ping doesn't go through. To allow access to the internet, we create a router object. The way we do it is to go into the default project, where there's a Routers tab in the Contrail UI, and create one there. Let's call this router "example". Then we have an external gateway network, which is our publicly facing network; let's select that virtual network and connect the cluster network to the public network. Whatever pod comes up on this cluster network will use this public network to get out to the internet. Once this router object is created, if we go back to the pod and ping again, we now see that the pod can reach the internet. This is source NAT: whatever IP is present on the cluster network, we NAT it to an IP present on the public network to reach out to the internet. That can also be verified in the Ports tab of the non-isolated network: the pod uses a floating IP, the SNAT IP you see here, in order to get out to the internet.

So now we have shown namespace isolation, and we've shown SNAT; let's go ahead and create the ingress types. I'm going to show you name-based ingress and also simple fanout ingress. Let's go to the non-isolated namespace; I'm in the non-isolated namespace, and let me create the dev and QA pods. What I have in dev is a front-end application that displays "dev", and similarly I have a QA application that displays "QA" on the front end. Let's go ahead and create these pods. Now that the pods are created, let's create a service. What I have here is a service called web-app-dev, which forwards the traffic coming in on port 80 to the pods present in the backend; the service is responsible for ECMP load balancing to the respective backend pods. As you can see, I now have three web-app-dev pods and three web-app-qa pods, and their respective dev and QA services. Similarly, in the Contrail web UI, if you go to Ports, you see the interfaces that were created: three for the dev pods, three for the QA pods, and two more for the services. The way we differentiate the services from the pods is through the device type, which is the k8s load balancer. These IPs, .155 and .15, are actually the service IPs.
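For reference, the deployment-plus-service pair behind this step of the demo would look roughly like the sketch below. Names, labels, and the image are illustrative reconstructions, not the demo's actual manifests.

```yaml
# Illustrative reconstruction of the demo's dev app: three replicas
# fronted by a service; Contrail implements the service as an ECMP
# load balancer across the three pods.
apiVersion: apps/v1beta1   # Deployment API group of the Kubernetes 1.7 era
kind: Deployment
metadata:
  name: webapp-dev
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webapp-dev
    spec:
      containers:
      - name: webapp-dev
        image: example/webapp:dev   # hypothetical image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-dev
spec:
  selector:
    app: webapp-dev
  ports:
  - port: 80
```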
Let's go ahead and create the ingress types now. In the fanout ingress, we have the rules that say /dev goes to the web-app-dev service and /qa goes to the web-app-qa service. Let's create it. As you can see, the ingress has been created: /dev points to web-app-dev listening on port 80, and /qa points to web-app-qa on port 80. On the Contrail vRouter agent running on the OpenShift node, if you look at the HAProxy rules we push, you'll see a new rule created for this fanout ingress. Contrail automatically pushes these rules in the backend on the agent node, so that whenever traffic arrives on the ingress IP, it is automatically forwarded to either the dev service or the QA service pods, respectively. In the Contrail web UI, if I refresh the screen, I have a public-facing IP for the fanout ingress, 10.84.31.53, which comes from a public virtual network. Opening that in the browser, if I put that IP with /dev, it takes me to a dev pod; as you can see, the IP address ends in .246, which is one of the backend dev pods, and on hitting refresh it takes me to a different IP, which shows that ECMP load balancing is working in the backend. Likewise, /qa should take me to a QA pod, which it does, with an IP ending in .242, and on refresh it takes me to a different pod, .244. That shows the simple fanout ingress type.

There's one more ingress type I'd like to show: name-based ingress, where we pass the hostname in the Host header field, and that's how we differentiate which backend service to use. Here I have hosts called dev.com and qa.com; whenever the header matches one of these hosts, it goes to the respective service listed here. Now there's a new ingress created for the ECMP load balancing, and in the Contrail web UI, if I refresh the screen, I see another IP, 10.84.31.54, for the name-based ingress. If I just enter this IP, it should throw an error, which it does, because I haven't passed the host information in the header. So I use this tool to pass the header information, and when I pass host equal to dev.com and hit refresh, it takes me to one of the dev pods, with the same ECMP load balancing happening in the backend; and when I hit qa.com, it takes me to one of the QA pods. This shows that by passing the host information in the header, it directs us to different pods, and Contrail does it seamlessly in the backend.
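For reference, a name-based ingress like the one in the demo would look roughly like this sketch; the host names match the demo, while the service names are illustrative reconstructions.

```yaml
# Illustrative name-based ingress: routing on the Host header.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-based-ingress
spec:
  rules:
  - host: dev.com
    http:
      paths:
      - backend:
          serviceName: webapp-dev
          servicePort: 80
  - host: qa.com
    http:
      paths:
      - backend:
          serviceName: webapp-qa
          servicePort: 80
```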
So that's the demo I've got in terms of namespace isolation, ingress types, and so on. On top of that, Contrail offers rich analytics as well. You can see the different node types here: this is one of the nodes where I launched the pod workloads, and this is my control node. There's a lot of interesting information you can grab, like how much traffic is coming to each of these pods, the CPU utilization on the nodes, and so on, so you'd never require a third-party monitoring tool to capture all this information; our analytics component does it for you. That's where we bring the value-add portion of Contrail. And that's it, so thank you for watching, and I'll hand it over to James.

Do you want to go back to the slide? We're slowly running down on time here, and there have been a number of questions in the chat that are busily being answered. Do you want to answer some of them live? Sorry Diane, I was trying to catch up, going through each of them one at a time, writing and reading the new responses. There was a question about how to span the same network across different clusters; I was talking about running OpenShift on top of OpenStack, running it inside of OpenStack, et cetera, and those I already answered and explained. One thing I was about to write is that if we consider different clusters, we also have a way to federate networks across different clusters with different controllers. Since we sync our controllers using BGP, you can have different virtual networks and exchange networking information with each other, based on import and export of route targets and that kind of thing, so that also gives you an option to stretch across different clusters. Just thinking about it, there might be other options, and it would require a little more thought to answer that question completely.

There were other questions as well. Ali's question about OpenContrail vs. Contrail I answered. Then there was a question about the advantages over adjacent solutions like Big Switch, Nuage, Cisco. There are multiple answers to it: some are more fabric-oriented, some are more based on overlay, some work directly in the underlay. There are a lot of differences, so it would pretty much require a one-to-one comparison. I don't know if anybody else wants to add things on that.

Yeah, going back to what I said at the beginning: the fact that it's open source, and that it's not a niche solution just for containers. If you look at something like Calico or Contiv, they tend to be mostly for containers. The other thing is, if you talk to net ops people, eventually they will have some sort of say in the DevOps stack, in the OpenShift cluster, because you need to connect to other things at that point; one tool to rule them all, so they don't have to learn something new, is also really helpful. And then you get to this sort of cliff where you can't do something with these more niche, less mature solutions: sure, they're lighter weight, but then you reach a point where you can't do something. Contrail is quite mature and very rich in its feature set, and like I said, there are sensible defaults that make it approachable and easy to use, but when you need to do something sophisticated, chances are somebody's thought of that before; we have some tough networking customers. So that's something to think about as well.

The analytics that Savithru pointed out is another thing. Analytics is, in general, kind of an afterthought, but our analytics API is actually pretty rich and gives a lot of visibility into the operating state of your network, so that's something we like to point out as a differentiator as well: all the information you can extract from our analytics and then integrate into your monitoring and troubleshooting tools as a customer. That's a big difference.

There were two more questions. There was one question from George asking about running OpenStack with Neutron and OpenShift with OpenContrail. In that case, you have two networking domains, basically: either we integrate with the Neutron plugin and use Contrail with OpenStack, or, if you use plain vanilla Neutron on one hand and OpenContrail with OpenShift on the other, and you want some kind of synchronization in terms of networking, then you would have to go through a gateway or exchange routing information; you don't have direct integration in that sense. What I meant when I explained the nested mode was that the Neutron plugin points to the Contrail API, and the CNI plugin uses Contrail as well in that case.
And there was another question, from Ali, about how to handle high availability and service recovery. Contrail is designed in a fashion where all the services can scale out: our API and the different components of our solution (there's RabbitMQ, Redis, Kafka, et cetera) scale out, and in general we put a load balancer in front of them to spread the requests. On the backend, the database is a NoSQL database that is clustered across different nodes to ensure that if one process on one node fails, the other processes can take over the load and the work. There are just a couple of very specific processes that are more active/backup, but otherwise it's all active, and we support a 2N+1 failure model, where N is the number of failures tolerated; so to start, to tolerate one failure, we have three controllers, and those three controllers handle the whole high-availability and failover story.

Do you need to do anything special in terms of setting up high availability and service recovery with Contrail on OpenShift, or does that just come out of the box because of Contrail's design and architecture? Out of the box you get all the load balancing and all the high availability between the services, but most likely you'll have to put a load balancer in front of it. In terms of the design of the solution itself, if you have multiple controllers, they will handle the HA themselves; it's more about the failover of the load balancer in front. Most customers still have a hardware load balancer solution that they use for their clusters; we put that in front, they use it to access different resources in their cluster, and they point it to Contrail in that case. We have some people that use software load balancers; there are different options there. But you're talking about in front of the Contrail services, the Contrail APIs? Yeah.

That kind of goes back to the different installation modes we talked about, too, and it's also how Contrail differentiates versus some of the competitors: Contrail supports in-service software upgrades, so you can go between one version of Contrail and another within an OpenShift cluster with no downtime. That's pretty huge.

It also sounds like, with all these questions, there's at least one other OpenContrail briefing that we could do, obviously, to drill down into some of them. We have come to the end of our hour, and I want to respect people's time, so what I'll try to do is capture the questions here and maybe share them with the speakers, and if there are more links or things you want to share when I publish this as a blog with the video on openshift.com, we can add those links and further details along with the blog articles and links that are here. I really want to thank you guys for your time today, and everybody for your great questions; this is obviously a big, hot topic, and you never know what everybody's going to ask great questions about, and these have been really good questions.
It was a wonderful presentation, so this may be one of those YouTube videos that people watch all the way through to the end; we'll find out later. So thanks, guys, for doing this, and we'll look forward to another one soon. Thanks very much, Diane, and thanks to everybody for your time. Thank you, guys; have a good rest of your day.