Hello everybody and welcome to another OpenShift Commons briefing, this time on Project Calico. We are really pleased to have Andy Randall, the CEO of Tigera, here to give us a presentation, update and overview of Calico. I'm not going to talk a lot on this one because I really want to hear more about it, so I'm going to let Andy introduce it. The way this session works is we have a chat channel in the background, and if you have questions please pop them in there. There are a few other people from Project Calico on the line listening in, and they can answer during the talk, and then at the end of the presentation we'll open it up for Q&A. So with that said, I'm going to let Andy take it away.

Great, thanks Diane, and welcome everyone. Thanks for your interest in Project Calico. Just by way of explanation: I'm the CEO at Tigera, and Tigera is the company behind Project Calico. We work with a lot of companies across the ecosystem, integrating Calico into various environments and supporting deployments as well. So with that, I'm talking about simplifying and securing your OpenShift network. I know OpenShift is a widely deployed platform, and networking seems to work, right? So the very first question that you may have, and it's not unreasonable, is: isn't virtual networking a solved problem? Isn't this something we've done before? Virtual machines have been networked for years, containers look just like a mini virtual machine, so isn't that done? Can't we just focus on developing and deploying our apps? I think that's a reasonable question, as Mr Bean would probably be happy to put it with that kind of expression. As you can hear, I'm originally from England, so I thought you might like Mr Bean quotes. Well, I think that might all have been true if it weren't for the fact that we are going into a new era of cloud native, and the way we're architecting applications today is changing very much, and the
first challenge is really all about scale and churn, the dynamicity of cloud-native architectures. The first point to make is that if you look at how you pack containers onto a given server versus virtual machines, you've got at least an order of magnitude more, because you haven't got all of that per-workload OS overhead. So you have one significant order-of-magnitude impact in terms of the number of workloads. But more importantly, and this is something that we see in surveys and also just in the nature of how these applications are being deployed, there's a dynamic orchestrator, and the whole point of that is you can bring up a container and take it down very rapidly. If you put these together, then in terms of churn on the network, that is, how fast individual endpoints (application workloads, containers, pods) are coming and going, it's probably at least a couple of orders of magnitude more. And if you think about an architecture built for one scale, and you increase that by a couple of orders of magnitude, it's very hard for that same architecture to keep up. So the first generation of SDN, built around virtual machines, was based on a single centralized controller that was the brain of the network and wasn't really expecting a whole lot of events to happen all at once; those were starting to reach their capacity when you really put them into production in a cloud-native kind of architecture. The other thing that's happening, and I'll say more about this as well, is that if you try to route traffic through a traditional virtual firewall to enforce east-west rules, again, the amount of traffic and the number of different connections within a cluster once you start to build a cloud-native application tends to mean that you're not going to want to take that approach. And there are other aspects of security I
want to dive into a little bit more here as well. The first is how we've traditionally done security: I talk to a lot of customers, and they'll say things like, yeah, I have this subnet where I assign all of these services, and then I have a rule programmed into the firewall which says who can talk to that subnet. Now, in a much more dynamic environment, as you will be with OpenShift or any kind of cloud-native architecture, you're getting dynamic IPs assigned to workloads, potentially from anywhere in a very large range. And you want that, because the whole point of the efficiency of this kind of cloud architecture is that you shouldn't be treating anything as a special snowflake: the servers aren't special snowflakes, you just treat them all as fungible resources, so a container can appear anywhere with any IP address. But that does mean that your subnet rules, and potentially your VLAN rules as well, the way you've used VLANs in the past, no longer have meaning when you start to think of this much more fluid environment. The next thing that's happening is that the introduction of microservices means you're breaking down applications into many, many smaller components, and those components communicate typically over REST APIs, i.e. network interfaces. So the exposure of your application to the network is that much greater. Attackers are already jumping behind firewalls, getting from one layer of services to the next; if you're now creating potentially thousands of potential attack points across your application, that's a real security risk. So you can't just rely on a perimeter firewall to protect yourself; you need to think about how that intra-cluster communication is going to be protected. Those might seem like slightly negative points, but I also want to make a very positive point here
as well, which is that there's an orchestrator involved here. In the case of Kubernetes, which OpenShift is built on, there's an orchestrator making scheduling decisions about where workloads are placed, and it knows things about those workloads; there are schemas and ways for developers to attach metadata onto them, with labels for example. So we know a lot more than we used to, at a meta level, about what is going on within the cluster. I see this as an opportunity. It's a kind of good-news-bad-news story: sure, there are some risks and some things you have to give up from the old world of how you implement security, but there's also this huge opportunity to automate things, because you're operating in an automated environment. That whole problem you used to have of IP rules hanging about in a firewall where no one knew why they were still there, what we call cruft, things hanging around for years, can go away, because you know dynamically where every workload is, and that can flow through into automated security rules. So that's the background. Given those challenges and this opportunity, what is it that we need to do to solve this problem? At a very high level, and I like to keep things simple: firstly, I want to simplify the network, to take out unnecessary layers of complexity, because that's what causes challenges as we scale up those multiple orders of magnitude. Secondly, I want to secure the workloads: take these fine-grained rules saying who can talk to whom and integrate them with the orchestrator, that whole automation piece. And thirdly, I want to do these things tightly knitted together in an architecture that is really tied to the way we're building applications today, and not bolted
on the side. So this is how we think about things, and that's essentially what we do with Calico, which is what I'm going to talk about over the next few minutes: how we're addressing these challenges and what Project Calico does. By way of background, Calico is a project that's been going for a couple of years now. It's open source, Apache licensed, with a pretty active community: we have about 100 community contributors, a lot of those from outside Tigera, so it's a very broad community of folks contributing to the project. And we're starting to see pretty large-scale deployments now, some very large names who've talked about how they're deploying Calico in various environments, a lot of Kubernetes but also people working with OpenStack and Mesos and Docker. So it's pretty field tested now, and a pretty solid basis for us to be building networks on and taking this technology forward.

So step back and think about this simplification of the network; really it's just a checklist. The first thing we do, and this is very much in line with the Kubernetes philosophy, is give every pod an IP address. Kubernetes was actually the first of the orchestrators that really said we're going to take a new approach to how we think about networking of these pods. The way I like to think about it is that pods are endpoints too; pods are just things that should be on the network, and they have an IP address. We want to flatten the network, get rid of any intermediary layers, and give them a real IP address. What this means is that by default we don't need an overlay network; in fact, we don't want an overlay network. Packets coming out of a pod just go onto the underlying network without any encapsulation, without any additional overhead, and therefore
you're going to get good performance. The other approach that we take here is that we believe in IP routing; we believe IP routing is the way to get to scale, and we try to remove layer 2 concepts. So it's a routed model, where a packet comes out of the pod and is routed onto the underlying infrastructure across to the other pod, and that's a very simple model to understand: whether you're going to a remote node or a local pod, it's just a single routed hop. The next piece: there was a lot of work done around the first generation of network virtualization, with virtual machines, where people built whole virtual switches which did a lot of complex processing in order to emulate layer 2 connectivity across a larger scope than just the local machine. We believe Linux is out there, and I think our friends at Red Hat would agree that Linux is a pretty good basis for building a product on. It's got a very efficient network stack, a lot of technology that's very proven and solid, and we want to leverage what's there. The upshot is that we get maximum performance while keeping it really simple to troubleshoot, and we think the tools are there, and the way we've put them together makes Calico the highest-performance, simplest-to-troubleshoot solution.

So let's look into the next level of detail in terms of architecture, and think first of all, as with most networking solutions, about the two pieces to it: there's a control plane, and then there's how your packets actually get around. Looking first at the control plane, we actually have kind of a hybrid here. We use etcd for communication of a lot of the state; we plug into the orchestration system, obviously in this case OpenShift, and that looks like a lot of the same
mechanisms as Kubernetes. We use etcd as a distributed key-value store, using the Raft consensus protocol, to distribute that state among all of the compute nodes. One of the reasons we chose etcd was that we knew Kubernetes was a key target, and Kubernetes itself relies on etcd, so the thinking was: if etcd is scaling up for the underlying orchestrator, then our state distribution scales up in exactly the same way as the overall cluster we're integrated with. There's no question of getting out of step in terms of the level of scale you can reach. But we don't distribute all state via etcd; there's another way we communicate between the nodes as well, and that's to communicate where IP addresses are located. This is a special bit of state distribution, if you like, because it's a well-known problem that's been around for several decades: if I have a set of nodes on a network and I want to communicate with them, how do I learn how to get to the ultimate IP address endpoints? We don't believe in reinventing the wheel if there's something there to be used already, so we use IP routing protocols to do that, and specifically the Border Gateway Protocol, BGP, because that is proven to scale, at Internet scale, and is very robust and high performance and has a lot of the characteristics that we want. Now, there's sometimes a misunderstanding here: because we use BGP, some people think that means we have to talk BGP to the underlying network, and that's not the case; it's just a protocol we use internally. Having said that, if you are running on your own physical fabric and you have top-of-rack switches that can talk BGP, then optionally (this isn't the default configuration) you can change the config to give it the address of a
top-of-rack switch to peer with, and then BGP can talk via that intermediate top-of-rack switch. This is really nice, and it's one of the reasons why BGP was such a great selection for this piece of the architecture: from the compute node's perspective, whether I'm peering with the underlying physical fabric or just peering with the other compute nodes, it looks exactly the same. So we run an agent on each of these compute nodes, and that agent programs routes into the Linux kernel based on what it's learning from other compute nodes via BGP; it programs its own routes, based on what it learns from etcd, to its own pods, so it learns what its local workloads are. Once those routes are set up, we get out of the way. On the data plane, a packet goes from one pod, through the virtual Ethernet interface hooked up into the Linux kernel, through the Linux kernel routing table, out over the physical interface, and is routed back on the other side. No encapsulation required; the routing just works, because that's what routing does. It's just IP. And if you think about how SDN has traditionally tried to sell you on the benefits of virtualization abstraction layers, you have to ask: if this just works as your basic model, why not just do that? There is one other piece as well, which we're going to talk a little bit more about today, and that is policy enforcement, because if I just let all pods talk to all other pods, as in the previous picture, I potentially let in malicious traffic; I let a pod talk to someone it's not meant to. Here again the Linux kernel has all the tools that we need: it has very highly scalable enforcement of access control lists. So that same agent which is programming routes to the local pods also programs the rules into the kernel access control list, which is the iptables function.
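To make that concrete, here is a hypothetical sketch of what a node's kernel routing table might roughly contain once the agent and BGP have done their work. The addresses, interface names, and block sizes below are made up for illustration; they are not from the talk:

```
# one host route per local pod, pointing at its veth interface
10.65.0.2     dev cali1a2b3c4d  scope link
10.65.0.3     dev cali5e6f7a8b  scope link
# aggregated routes to pods on other nodes, learned over BGP
10.65.1.0/26  via 172.18.0.12   dev eth0
10.65.2.0/26  via 172.18.0.13   dev eth0
```

A packet leaving a pod simply hits this table and is forwarded as plain IP, which is the "no encapsulation required" point being made.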
And we have a very tuned way of doing this that manages to get very high performance out of it. So the traffic that goes out of the pod, on the data plane, goes through the routing table but also has to pass the iptables checks. We can also program anti-spoofing rules in here, for example, so that a pod can only use the IP that has been allocated to it by the orchestrator. The other point to make here, and this applies to both the control plane and the data plane as I've shown them, is that you can have either a physical fabric or public cloud underneath. Obviously if you're running on bare metal on a physical fabric, you have complete control over the underlying network. In many cases, though, you're going to be deploying within virtual machines that sit either in your own data center or in a public cloud such as Amazon or GCE or Azure, and then you don't have visibility or control over the underlying network. Calico works just as well in that kind of environment, with slightly different recipes depending on whether you're in Google or Amazon, etc., but it'll work across both of those environments. So I've talked a little bit about the architecture and how these pieces fit together; hopefully that makes sense. But if I come back to Mr Bean with his grumpy questions: I've got a firewall at the edge of the data center, why on earth do I want network policies as well? What does network policy do for me as a developer, as an operator? Exactly how should I be thinking about what they bring me? I want to step back here a little bit and think about a cluster with n applications and n pods. By default, if you think about the connectivity matrix between all of these n pods, you've got an n-squared
set of connections that could possibly happen between any of them, and the reality is that of that n-squared set of connections, only a handful are actually ones that you're expecting. You know that you're not expecting your back-end database to have an inbound connection directly from your front-end load balancer, but if you leave the possibility of that connection open, then you've created an opening for an attacker who gets in and compromises one of those application components to get somewhere else. So the goal here is to identify that subset of the n-squared connections, reduce connectivity to that, and have the Calico agent on each node program the ACLs to allow just those connections which should be allowed and deny everything else. That, in essence, is what we do. The way this is done is via something that should be pretty familiar if you're used to Kubernetes: a YAML resource file. It looks something like this. You'll have, obviously, the API version; it's a policy file; each policy can have a name; and the key bits are in the spec piece, where we can use an arbitrary selector expression. In this example, the policy says it applies to everything with the label role equals database, and I specify who I'm going to allow to make inbound connections into those pods: I'll allow TCP connections on port 6379 from anything that has the label role equals frontend. And because I want to keep this simple, I'll allow any egress traffic out of these database pods, but I could make that a much more complex egress rule; I could include more complex expressions in terms of sources and destinations, specifying IP addresses as well as roles, and I could be using namespace selectors
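The resource file being described would look roughly like this. The sketch below uses the older calicoctl v1 resource format, which is what Calico shipped around the time of this talk; the policy name is illustrative, and field names differ in the current projectcalico.org/v3 API:

```yaml
apiVersion: v1
kind: policy
metadata:
  name: allow-frontend-to-database   # illustrative name
spec:
  # applies to every pod labeled role = database
  selector: role == 'database'
  ingress:
  # allow inbound TCP on 6379 only from pods labeled role = frontend
  - action: allow
    protocol: tcp
    source:
      selector: role == 'frontend'
    destination:
      ports:
      - 6379
  egress:
  # keep it simple: allow all outbound traffic from these pods
  - action: allow
```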
as well. So there's a lot of flexibility and power within this network policy. Once you have it, you just use calicoctl, the command line, to apply this policy file. Now, those of you who've seen any of these webinars where people have talked about Kubernetes network policy will recognize this, because it looks very close, and in fact the Kubernetes network policy was based pretty closely on what we built for Calico, although what's actually in the Kubernetes API is a subset of this. If you want to just use the Kubernetes API, you can, and we have a plugin for that which connects into the Calico policy enforcement; or if you want the full richness, you can use the Calico API as well. So that's how you enforce that subset of connections within the cluster. I'm just going to come back now to how I take that file, and actually revisit that architecture piece, because I think this is an important distinction between how some first-generation SDN products work and how Calico works. We thought a lot about how you scale this up and make it as efficient as possible. There's a key metric that we look at when we're doing stress testing and scale testing: you want to be able to schedule a pod, have it come up and start, and have network connectivity straight away. If it takes many seconds to get the policy for the pod, or if I'm doing that policy in a reactive way, so the first time I see a packet I have to go and ask some central controller whether I'm allowed to send it or not, you're going to have delays on the network and delays scheduling pods. So that's why we distribute the actual compute function: all the compute-intensive activity happens on
each compute node, so the amount of compute scales with the number of nodes in the cluster, which seems logical. What each of those nodes does is take essentially that YAML, which is encoded and distributed via the etcd data store, look for all of the policies that apply to pods it has, and calculate what ACLs are required at that point in time for those pods, based on where everyone else is. So, for example, if in the previous picture the compute node on the left had a database pod and the compute node on the right had the load balancer pod, then the compute node on the left is going to write a rule allowing ingress from the pod on the right to the pod on the left, at a very simple level. The other thing it needs to do, of course, is watch for when things change, because we're in a dynamic system, and that's where etcd's capability to subscribe to changes, to register for things you're interested in, comes in. The left-hand compute node knows it's interested in when new front-end load balancers come and go, because those are going to need new rules; every time a new front-end load balancer is created or destroyed, it'll update its local ACL tables. So that's the architectural approach, and we test this at scale. That key metric I mentioned, where you create a new pod and have to set up the network and apply policies to it, is typically sub-10 milliseconds, and when I say at scale, I'm talking hundreds of thousands of pods within a cluster. So this is a pretty efficient system, proven at that scale, for implementing these policies.
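The per-node calculation just described can be sketched in a few lines. This is a toy model, not Calico's actual implementation: selectors are reduced to exact label matches, and the cluster state is just a list of pod records of the sort a node would learn from the data store:

```python
# Toy sketch of the per-node policy calculation: each node looks at the
# policies whose selector matches one of its *local* pods, then derives the
# concrete allow-rules it must program. Illustrative only, not Calico's code.

def selector_matches(selector, labels):
    """Selector is a dict of required label values, e.g. {"role": "database"}."""
    return all(labels.get(k) == v for k, v in selector.items())

def compute_local_rules(local_pods, all_pods, policies):
    """Return (dst_pod, src_ip, port) tuples this node must allow inbound."""
    rules = set()
    for policy in policies:
        for pod in local_pods:                  # only pods scheduled here
            if not selector_matches(policy["selector"], pod["labels"]):
                continue
            for ingress in policy["ingress"]:
                for src in all_pods:            # cluster-wide view of pods
                    if selector_matches(ingress["from"], src["labels"]):
                        rules.add((pod["name"], src["ip"], ingress["port"]))
    return rules

db = {"name": "db-0", "ip": "10.65.0.2", "labels": {"role": "database"}}
fe = {"name": "fe-0", "ip": "10.65.1.7", "labels": {"role": "frontend"}}
policy = {"selector": {"role": "database"},
          "ingress": [{"from": {"role": "frontend"}, "port": 6379}]}

rules = compute_local_rules([db], [db, fe], [policy])
```

When the node is notified that a new role == frontend pod has appeared or disappeared, it reruns this calculation and updates its local iptables rules, which is the subscribe-to-changes behaviour described in the talk.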
So, having talked through some of the architecture, I thought it would be useful to do a kind of compare and contrast. This is really not to say that one approach is better, because there are pros and cons; it's really just looking at the architecture, for folks who do understand, say, OpenShift SDN or other OVS-based networking, and how Calico is different. These aren't in any particular order, but one thing that quite a few networking solutions do, partly inspired by Google, I think, when they first came out with Kubernetes, is say every node, every host, gets a /24, that's 256 IP addresses, a single subnet; a lot of the SDNs do this as their IP address management. Calico has much more dynamic IP address management: we take a smaller range of IP addresses initially, typically a /26, so you'll get 64 addresses, and then if you schedule the 65th pod, we pull another address range. We can do this because we have the routing protocol, where we can dynamically communicate when addresses change and move or are allocated to new machines. That means we get much more efficient use of the IP address space: you're not wasting addresses by having 256 assigned to a host when you're only running 10 containers on it, but at the same time you're not imposing a limit on how many containers you can schedule; you can put 2,000 on there if you want, and you'll just pull down more IP pools. So that's one architectural comparison. The next one is the use of bridges: a lot of networking solutions, starting I guess from Docker, put all of the local containers or local pods onto a bridge; in OpenShift SDN's case it's the OVS bridge, and in Docker's it's the docker bridge. The idea there is essentially that it looks like all of the local pods have a layer 2 connection. We take a slightly different philosophical approach and say we do IP routing everywhere.
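Coming back to the IPAM comparison for a moment, the on-demand block allocation can be sketched like this. It's a toy model with assumed pool sizes, not Calico's real IPAM, which also handles releasing and reclaiming blocks across nodes:

```python
import ipaddress

class BlockIPAM:
    """Toy per-node IPAM: hand out addresses from /26 blocks (64 addresses),
    pulling a fresh block from the shared pool only when the node runs out."""

    def __init__(self, pool="10.65.0.0/16", block_prefix=26):
        self._blocks = ipaddress.ip_network(pool).subnets(new_prefix=block_prefix)
        self._free = []  # addresses still unused on this node

    def allocate(self):
        if not self._free:               # e.g. the 65th pod triggers this
            block = next(self._blocks)   # claim the next /26 from the pool
            self._free = list(block)
        return self._free.pop(0)

ipam = BlockIPAM()
addrs = [ipam.allocate() for _ in range(65)]
```

The first 64 allocations come out of the first /26; the 65th pulls the next block, mirroring the "schedule the 65th pod, pull another range" behaviour described.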
So whether you're going from a pod on one host to another host across the network, or between two pods on the same machine, it's a single routed hop, the same path; there's no bridge involved, it's a routed connection. The next thing, and I talked a bit about this earlier: no overlays. When you connect to remote pods with a traditional SDN, you're typically doing that via a tunnel interface at the bottom of the kernel; you'll set up a VXLAN tunnel between two hosts. Calico can do tunneling, because sometimes it is required, when you have a network topology underneath you that you can't route across, but it's not required and it's not the basic way of getting packets out of a host. The pod has a real IP that's routable on the underlying network, so we just send packets out of the actual interface at the bottom of the stack. One nice side effect of this, which I didn't mention earlier: when you're running in a public cloud environment, or sometimes on an existing OpenStack environment where you're maybe using Neutron networking or some other SDN (and in public cloud you don't know what SDN is underneath you; it could be anything), those underlying virtual machines are going to be doing some kind of encapsulation of the packets coming out of them. So if you use VXLAN or some other encapsulation from the container down to the VM, now you're going to get double encapsulation, and that can be a performance problem, particularly when you start hitting the threshold of packet sizes. The MTU limits on a given cloud are in some cases quite low, and when you're adding dozens of bytes on the front, you can start to get into significant fragmentation, and
that can have significant performance impacts; they can be really significant in that kind of double-encapsulation environment. The next point is how you get outside of a cluster. If you are doing everything in an overlay network, then you always have to go out via a NAT. Now, Calico can apply NAT rules, because sometimes, for example, you're using addresses that are purely internal to a cluster and you want to be able to NAT to external addresses, but it's not required: by default you could have a set of external IP addresses that you assign to some pods, and then they just have a real external IP address, no NAT required. The next point is maybe a little bit more OpenShift SDN specific: there are a couple of different methods of network isolation there, either the multi-tenant plugin or network policy. We've talked about how we do network isolation; it's just ingress and egress policy rules, as iptables rules in the Linux kernel, and that allows you to do multi-tenancy. In fact, there was a great blog post by Giant Swarm recently about how they do multi-tenant Kubernetes using Calico; they use Kubernetes for managing multiple Kubernetes clusters on behalf of their clients, a really interesting Kubernetes-inside-Kubernetes use case, and that's how they get that tenant separation. The last point is again a little bit philosophical, about where you want your software and how much you use the existing Linux kernel code. Open vSwitch is an awesome piece of software, but it's a big additional component, and a lot of people say: you know what, I'd rather know that my data plane is running through the Linux kernel, just the traditional layer 3 data path within the kernel. We're very happy with that, and our code
as such is only control plane. There are a couple of other points as well, if you think about how some of the more traditional SDN solutions work. One I mentioned earlier is this idea of a centralized controller; without mentioning names, there are some famous SDNs that do this: they have a central controller node, and maybe that can scale up, but it's still a separate aspect of the network whose scaling you have to think about, and it is doing all of the calculations for the network, whereas we distribute that onto all nodes. The next is more about compatibility with other pieces, both kube-proxy and, in the case of load balancing, something like the OpenShift router. Because we're using just standard mechanisms, kube-proxy works out of the box very straightforwardly with Calico, and the same goes for the OpenShift router, because it's all just IP. So those are some architectural considerations. Now, what we do hear from some people is that they really like the way we do policy, but for some reason, and there are some good reasons in some cases, they want to use a VXLAN overlay; in particular, a lot of folks with Kubernetes use flannel, and flannel is a project we're involved with as well. So, Mr Bean saying you probably can't do that: actually, we can. This is a project that we launched last year together with CoreOS, to allow you to take, if you like, the best of both worlds: the simplicity of a flannel overlay, which allows you to just allocate a single subnet to each host and have VXLAN tunneling between them, and take all of those policy rules that we talked about and apply them on top of that tunneled network. This works pretty well; I've heard of quite a few people using it now.
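For context, the joint flannel-plus-Calico project being described is generally known as Canal, and the wiring is CNI delegation: the flannel CNI plugin sets up the overlay network and delegates the per-pod interface and policy work to the calico plugin. A Canal-style CNI config looks roughly like this (field values are illustrative and version-dependent, so treat it as a sketch rather than a drop-in config):

```json
{
  "name": "canal",
  "type": "flannel",
  "delegate": {
    "type": "calico",
    "policy": {
      "type": "k8s"
    }
  }
}
```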
And it's actually very straightforward: even though it's got its own project repo under Project Calico, the key piece is really just how you get those two CNI plugins to work together, and CNI has this nice attribute of being able to combine multiple plugins in a single environment. So that's something to look at if you have flannel and want to keep it, but you want to add policies on top. Having talked a lot about Calico in a general way, let's think about putting this together with OpenShift, which obviously is based on Kubernetes. To start with, I'll just pop up a diagram which shows the OpenShift network layout and where the various pieces of Calico get installed. First of all, on each node we need to install our CNI driver and our IPAM driver, and those plug into the Kubernetes CNI interface, so that needs to be set up on each node. Then there's a single pod that we have, called calico-node, which contains a couple of containers: one called Felix, which does the local routing and policy calculation, and another from a project called BIRD, which is what does the BGP control plane. Then we need a single instance of the policy controller, and again this is just stood up as a Kubernetes pod; it plugs into the policy API and, if you're using Kubernetes network policy, converts the Kubernetes network policy API into the Calico data store. So that's how the architecture maps, in terms of where we're at with the integration. If you want to go out and try Calico with Kubernetes, there are tons of different ways to do it. Some of the easiest: actually, just yesterday, Heptio and Amazon announced a Quick
start for kubernetes on aws which uh which includes calico by default so you just run that with its default settings and there's a bunch of cloud formation templates which will get your aws cloud instance set up with with kubernetes with calico another really nice way of doing it stackpoint cloud has a um has a pointed click interface for creating virtual machines preconfigured with kubernetes and and calico you just click yes i want the calico box um and uh and that comes in and then the things like cops and kubernetes um all come with the ability to configure calico connected as well so that's all kubernetes specific when it comes to open shift and until recently um you know that we have had users doing deployments but it's that kind of installation piece has been a bit roll your own and um that hasn't been out there for the community to use so we actually got together with the folks at red hat a couple of months ago and said let's let's solve this problem let's um work together and get uh get the integration um at a point where it's actually properly supported and you can certify it and all of that um we did that on ocp 3.3 and um uh and it was working and then ocp 3.4 came out and broke it so we're working on fixing those uh those issues but that should be up soon so um encourage you watch this space um you know there's there's an open shift channel in the project calico org slack and so please go there let us know you're interested the more that we know this is something people want to see the you know the user it is to um you know to get to get resources behind it and and if you want to contribute and be part of that as well um you know we're always open to that it's a it's an open project and um you know it's um I think some fairly straightforward things that we have to do to get this uh to get this working so so that's that's um that's my piece in terms in terms of the actually you know prepared remarks um here's how to get hold of us here's how to find out 
more about the project. We're on GitHub under projectcalico. If you liked this, then tweet to Andrew Randall; if you didn't, my name is Christopher Liljenstolpe. You can also tweet to @projectcalico, and I'd say the main place where the community meets is Slack, at slack.projectcalico.org, so get into the Slack group. So that's it, and I'll open it up to questions now.

All right, well, thank you for the overview; I'd love to watch this space, and I'm loving the Mr. Bean references here. Coming from Canada, we've had an overload of Mr. Bean. Peter Larsen is asking: what role does the namespace play with Calico? Are there any default policies that implement isolation between namespaces, and so on?

Yeah, that's a great question. Namespaces are often misunderstood by people who think they give them a lot more protection than they do. Namespaces by default don't give you any isolation at the network level. You can specify namespaces in a policy selector, so you can say this namespace can't talk to that other namespace. Unless one of the engineers on the call who knows better than me wants to pipe up, I think right now there's no automated way to just say all namespaces should be isolated; I think you'd need to write policies for that. But I think Casey Davenport is on the call. Casey, I've just unmuted you; if you want to add anything to that, please unmute yourself and join the conversation.

Sorry, I couldn't find the mute button. Can you guys hear me all right? Yep. Cool. Yeah, I think that's pretty accurate: there's no default namespace isolation, but the policy model makes it easy to configure that. Like I said in the comment, it also lets you do things that are more or less fine-grained than that, so if you wanted to, say, give a group of namespaces to a tenant, you could configure policy that affects that entire group of namespaces as well.

The other question was actually one that I had, because I was rifling through the Calico documentation looking for the specifics around deploying Calico on OpenShift, and you kind of covered that off in the previous slide: it's a work in progress, but there is a little bit of documentation out there, and some of the chatter on the call was that this is coming in the very short term. I think Mark Curry, one of our product managers, is on the call right now; I'm sure he's listening in, and I'll unmute him as well if he wants to add his two cents.

Yeah, thank you, Diane. We are very close to a good relationship and agreement with Calico, and we are revamping the documentation for our SDN, and we would very much like to highlight Calico within that documentation. So I'm sure we will not only be sharing links back and forth specific to some of the questions in the chat, but I think we will probably add something to our reference architectures to that effect as well; I think that's probably a wise way to go.

And Karthik is on the call too; Karthik, feel free to add anything. Karthik is an ex-OpenShifter now working with Tigera, so we're thrilled to have him migrate over there. I'm thinking that we have a little more work to do on our documentation; it's not as pretty as the other ones, and there are some details missing to make it work, so there's a bit of work cut out for us still. And some of the customers you showed on your slide earlier about who's deployed it, at some of the larger places, I recognized a few of them as OpenShift Commons members, so I'll reach out to them and see if we can get them to do maybe a talk
about how they did the roll-your-own deployment, and get some of that feedback on Calico from them as well; see if I can put the orange folks on the spot for that. I'm pretty good at coercing people into talking, just as I, of course, coerced Andy into coming to Berlin in a couple of weeks to drink beer with me and talk about Calico on a panel at the OpenShift Commons Gathering on March 28th, which is the day before KubeCon. So if any of you are coming for KubeCon, please come a day earlier and join us. There will be some amazing speakers, including Andy, and you won't have to listen to me blather on, because there are great folks not just on the panels and among the speakers but in the audience as well. So I really highly encourage you to come; it's a cool opportunity to meet some of the project leads on open source projects like Calico and Kubernetes, and Docker and CoreOS will be in the audience, and all kinds of good people. If you're around, please join us; we can get you there one way or the other, and the beer will be good, I promise.

I see maybe one more question here. Yes, Jeff, there will always be a link to the presentation that Andy has just given. We usually post on the Monday of the following week; this week we have three briefings going on, so there'll be a lot being pushed out on Monday at blog.openshift.com, and it'll also be on our YouTube channel, so you can find it there, and I think we'll finally update the Commons page and get it up there as well.

So I think that's it. Andy, I really want to thank you for taking time out of your schedule to do this, and I'm looking forward, Mark and everybody else, to getting the updated documentation out there and linking back and forth between the Calico documentation and the OpenShift documentation and reference architectures. We'll try to keep those links associated with the video of this, so if you're watching this at a delayed point in
time, check the comments on the YouTube channel and on the blog post, and I'll try to keep those updated as well.

All right. Diane, I'd just like to thank you, both for inviting us here and for the great work you're doing on the community. One of the things we've really enjoyed about working with Red Hat on this is just how open and inclusive everyone has been; I think there's a really good sense of community that you guys are building around OpenShift, and I think that's fantastic.

Well, I will thank you for that, and probably quote you on that. So take care and have a great weekend, everybody, and we'll see you soon. Take care. Thanks, everyone. Bye.
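As an illustration of the namespace-isolation point discussed in the Q&A above (no isolation by default; you opt in by writing policy), here is a minimal sketch of what such a policy looks like in the current Kubernetes NetworkPolicy API. The namespace and policy names are illustrative, not from the briefing:

```yaml
# Sketch: "default deny ingress" for one namespace.
# The empty podSelector matches every pod in the namespace; because the
# policy allows nothing, all ingress traffic to those pods is denied
# until further policies explicitly allow specific peers.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
  namespace: team-a            # apply one per namespace you want isolated
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

To then permit traffic within the namespace only, you would add a second policy whose ingress rule uses an empty `podSelector` under `from`, allowing all pods in the same namespace while still blocking cross-namespace traffic. For the multi-namespace tenant case Casey mentions, Calico's own policy resources can select groups of namespaces by label, which plain per-namespace NetworkPolicy objects cannot do on their own.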