Perfect. I'll start recording, and you should be able to hear me fine. So hi, my name is Nigel Douglas. I'm a Solutions Architect here at Tigera, and today I'm going to focus specifically on network policy implementation with Project Calico. This session is aimed at those who are unfamiliar with Project Calico or have only recently started using it. As an open source project, Calico is split into three functional pieces. First there's the CNI plugin, the Container Network Interface, which is responsible for setting up the networking between your containers. Then there's the IP address management piece, the IPAM plugin, which, as you can imagine, is responsible for assigning the IP addresses for those pods. Although these two pieces are fundamental to container networking, they're not actually relevant to today's session. The agent we're going to focus on is Felix. As you can see in the architecture diagram in front of me, Felix is the agent responsible for calculating network policies and enforcing them, which it does by programming iptables, the standard Linux dataplane, working alongside kube-proxy. It's worth noting we also support an eBPF dataplane that doesn't use kube-proxy at all. So this is just a high-level view of the architecture. If you're looking to learn more about the architecture of the open source project, my colleague Casey, a maintainer of Project Calico, has a video up at tigera.io/video/tigera-calico-fundamentals; I definitely recommend checking it out. For this session, we're going to assume you could be using pretty much any CNI implementation, because there are a bunch of different plugins to choose from, and we'll concentrate on network policy. So why do we need network policy? Kubernetes is essentially a flat network. What this means is that pods on one node can freely talk to pods on other nodes without network address translation. So on day one, when you set up a cluster, pods are incredibly insecure: they're freely talking amongst each other and there are essentially no guardrails configured. Pods are also ephemeral. What I mean by this is that a pod doesn't have a fixed IP that lasts very long, because Kubernetes is highly scalable: pods are brought up and torn down regularly, and the IPAM plugin I went over a second ago keeps assigning fresh addresses. Because pod locations and IPs are non-deterministic, we need something static to build policy around, and that's where label selectors come in, the Kubernetes-native abstraction layer we can build policy on. As long as pods carry a consistent label schema, the policy stays dynamic as they scale up and down; it doesn't have to worry about the IP addresses changing. In fact, we're generally not going to build policy around IP addresses at all.
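(A quick aside for anyone following along at home: you can verify that flat network on a fresh cluster with nothing more than kubectl. The pod names and namespace below are hypothetical, and this assumes the container image ships with ping.)

    # grab the IP of a pod scheduled on one node
    kubectl get pod backend -n demo -o jsonpath='{.status.podIP}'
    # say it returns 10.244.1.17; now, from a pod on a different node:
    kubectl exec -n demo frontend -- ping -c 3 10.244.1.17
    # replies come straight back: no NAT, and with no policy in place, nothing blocking it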
So network policy is the primary tool for securing Kubernetes applications. Let's say you're migrating over from a traditional monolithic architecture: a server hosting a front-end application, maybe with a Microsoft SQL Server database as the back end. Now instead we've got microservices, a cloud-native architecture. Traditionally, the firewall worked well because the server hosting the application usually had a static IP, so it was very easy to build perimeter-based rules: allow these ports and protocols from these IPs to this destination. With Kubernetes being highly dynamic, and pods being ephemeral as we mentioned, IPs aren't something we can work with long term, so we're not going to focus on a traditional firewall implementation. We're going to use network policy as the alternative to your traditional firewalls. Also, whether we're talking about Project Calico or the default Kubernetes network policy implementation, both use the standard policy API, and that will be relevant throughout our session. So these are the three things I want to focus on when we're talking about policy, whether it's Calico's policy or the default Kubernetes policy. First, we select the context a policy's rules apply to using key-value pairs. Those are the labels you'll usually have, as you see in the picture: owner equals nigel, or platform equals gke. Now, we're not going to make rules from a platform perspective, but I could have type equals frontend or type equals backend. Then we know which pods carry those key-value pairs, those labels, and they stay static regardless of the pods being scaled up or down and the IPs being reassigned. Second, it's declarative: once we scope the policy to those labels, we declare the actions to apply. And since the environment is dynamic by nature, Kubernetes being designed that way, our policy needs to be dynamic with it. We shouldn't have to continuously revise network policy the way we would with traditional IP-based firewall rules, where we'd have to keep changing the rules. We want it so that once those guardrails are defined, especially default denies, which we'll talk about in a while, then even as new workloads get introduced, including ones that fall outside our label schema, they're captured by those catch-alls. That's a similar concept to firewalling, but applied to network policy it stays dynamic. So let's stick with that first point about labels. Labels are not a Calico-specific concept; they're a Kubernetes concept, and you can see the source link there to kubernetes.io if you want to find out more about how they work. As you see from the example, a label is essentially a key-value pair: the key captures a concept you want to express, a purpose or intention, say ownership or organizational structure, and then you assign a relevant value to it.
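As a minimal sketch of what that looks like in practice (every name and value here is purely illustrative), the metadata on a pod might read:

    apiVersion: v1
    kind: Pod
    metadata:
      name: customer-db-0
      labels:
        owner: nigel        # who owns it
        type: backend       # what it is
        role: db            # what policy will select on
    spec:
      containers:
      - name: db
        image: postgres:15  # stand-in image

Policy never needs to reference the pod's name or IP, only those labels.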
So each key must be unique for a given object. Once we have that uniqueness, it's very easy to build on: we strongly recommend defining a label schema so you understand the purpose and intention of your workloads going forward, and it'll be so much easier to build policy around that. So here's an example without Calico. The API version is networking.k8s.io/v1, the standard Kubernetes API. The kind is NetworkPolicy; that's the resource we're creating. I've given it the name my-network-policy, a simple name, because that's what it is. By default in Kubernetes, network policies are scoped to a namespace, so you define a specific namespace, and whatever pods reside inside that namespace are what we can scope to. Under the spec we have a pod selector, which matches labels. And as we saw with the key-value pairs, it's role equals db. So if you assign the label role to your database pods with the value db, then any time a new database comes up or changes happen to those DBs, no matter how many pods there might be, anything with that matching label is covered by this policy. In this case the policy only scopes ingress, the traffic received by the pods: the rule allows ingress from other pods whose labels match role equals frontend. That's very simple logic. It's very structured and, again, declarative: anything with the role of db can receive traffic from the front end. But notice the pod in the example with the other role, role equals helper. Because we never scoped that role into our rule, even a new workload with that role would be automatically denied. One thing I didn't clarify: we only allowed the front end to reach that DB on TCP 6379 specifically, so traffic on any other port from the front end would ultimately be denied as well. We'll go over the crossover between Kubernetes network policy and the advantages of Calico's policy implementation shortly. The next bit I want to show is specifying an IP block. I mentioned briefly earlier that we don't build policy based on IPs; that's not entirely true. There might be a case where you want to allow a pod to talk to the public internet, or to a specific range of IPs that you want to declare. In that case you can absolutely declare an IP block, specify a CIDR range, and say: allow traffic to 172.18.0.0/24, as we saw there. Anything that isn't part of the IP block will obviously be denied. What we don't want to do is identify workloads by IP; we want to keep to the label idea, something static. But of course you can build policy around IPs, and it's exactly the right tool when you're declaring which external ranges a workload should be allowed to talk to. As you can see from this example, anything with the matching label is now allowed egress out to those IP ranges.
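Piecing those two slides back together, the manifests would look roughly like this (the names and namespace are placeholders on my part; the selectors, port, and CIDR are the ones just described):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: my-network-policy
      namespace: default
    spec:
      podSelector:
        matchLabels:
          role: db          # applies to every pod labelled role=db
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend
        ports:
        - protocol: TCP
          port: 6379        # the only port the front end may reach
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-egress-to-range   # hypothetical name for the ipBlock example
      namespace: default
    spec:
      podSelector:
        matchLabels:
          role: db                  # whichever label the slide scoped to
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 172.18.0.0/24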
So that one only has an egress action; the previous one was receive-from, an ingress action. Now, some organizations don't do this, but I strongly recommend every organization should. It's the simplest guardrail you can deploy, although it does assume you're already enforcing zero trust. This is what we call a default deny. A default deny essentially says: I've scoped both ingress and egress actions. It's still a Kubernetes network policy, using API version networking.k8s.io/v1, so it works with plain Kubernetes, as you can see here, as well as with Calico. The pod selector is empty, so it matches quite literally every pod, and I've declared both the ingress and egress policy types without giving them any rules. Because the policy is looking for something to allow and I never declared anything, it isn't explicitly denying the traffic; it's implicitly denying it. So regardless of what new workloads get introduced, whether permitted or rogue, say someone managed to sneak one into the cluster, they will automatically be denied. The way this works in practice is zero trust, which I'll try to explain through this session: only allow the traffic you actually want to permit in your environment. Sometimes you can be a bit broad with it, but certainly try to be as granular as you can. Once those workloads are talking freely over the ports and protocols you do permit, then as long as this default deny sits at the end of every namespace, or as a global rule, which we'll talk about in a while, any new unwanted traffic will automatically be caught. It's a very powerful policy and quite simple to implement, as we can see here. But it can be dangerous if you implement it at the beginning without putting serious thought into zero trust, because you'll end up denying traffic that you'd otherwise wish to permit. So here are some ideas around Calico's network policy. I've only shown you Kubernetes policy so far, so it's important to know the advantages of Calico's implementation. It's an extension of the Kubernetes network policy implementation, not an alternative; a better way of looking at it is that it takes the exact same structure we're already familiar with and adds capabilities on top. So if you're already using Kubernetes policy but would like some additional capabilities, I strongly recommend using it. It does require Calico for policy, meaning the Felix agent we talked about in the architecture diagram, but not necessarily Calico as the CNI. Even if you're using the AWS VPC CNI on an EKS cluster, or Azure's CNI on AKS, and there are a bunch of different CNIs you could use, that shouldn't affect using Calico for the network policy implementation. Those are separate concerns. Another difference from the examples we looked at: with Calico we can define explicit ordering, or precedence, of our policies. You can say: evaluate this policy before these other policies, again globally or within a namespace, so that higher-precedence security actions are enforced before the per-application zero-trust rules start applying. Finally, all Kubernetes network policies are namespace-scoped. That's perfect in most cases.
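Since the default deny will keep coming up, here's the whole thing written out; the namespace is the only part you'd change:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
      namespace: storefront  # namespace-scoped, so you repeat this per namespace
    spec:
      podSelector: {}        # the empty selector matches every pod in the namespace
      policyTypes:
      - Ingress
      - Egress
      # no ingress or egress rules declared, so all traffic is implicitly denied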
However, say I want to implement a high-level security rule: deny traffic associated with known bad IPs or known bad actors, or even apply the default deny this way, although for default deny the namespaced approach arguably makes more sense. You don't want to keep building identical, replicated rules for each namespace, because it takes time; you have to keep building new policies, and if you have dozens of namespaces you're replicating the rule across all of them. So Calico also offers a globally scoped policy. That way, if we know there's a known bad actor, or we want to build a quarantining rule to deny all traffic associated with a specific bad actor, I don't have to keep replicating the same rule for each namespace. I apply it once, globally scoped, as a GlobalNetworkPolicy CRD, and within that object I can still filter down to the relevant context, even though it's a global rule. That's really powerful, and I'll show some examples of it in a while. Also, Kubernetes policy is only about explicitly allowing traffic: as I said with the default deny, you allow the traffic you want, and anything you didn't permit gets caught by the default deny and implicitly denied. With Calico, you can explicitly use an action to deny, and you can also explicitly pass and log traffic. Why logging pairs so well with denying: take the quarantine rule I mentioned. Anything that matches a context we know we'd never trust, say communication between certain pods performing certain actions, we quarantine: we action it to deny all traffic from that pod to any other location. But we can also log it, forwarding a syslog message to a centralized solution to notify ourselves that there's unusual activity, based on the fact that we created a rule saying deny that action. We obviously don't want the traffic, so we deny it outright, but we also log it so we're notified that there's an unusual actor there. I'll try to go through the rest a little quicker. We allow you to scope down per endpoint and per namespace, as you can imagine, but also per service account. When you're trying to enforce something like PCI compliance, and only certain identities should be permitted to access workloads that handle payment details, you can bring the service account into the context of the policy and say: don't allow this service account to talk to that workload. I'll show an example of this later. Going beyond layer 3 and layer 4 network traffic up to layer 7, we also integrate nicely with projects like Envoy and Istio as a service mesh. If we use the Envoy daemonset, we can observe layer 7 traffic and build network policy around not just the network-level traffic but also the application layer, the HTTPS traffic we'd otherwise only be logging at layer 7. And this is probably the part I get most excited about: we're not just building policy around workloads, like we talked about with the Kubernetes implementation. We can also create what we call host endpoints.
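Before we get to host endpoints, here's a sketch of that quarantine idea as a Calico GlobalNetworkPolicy. The label I'm keying on and the order value are my own choices rather than anything from a slide:

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: quarantine
    spec:
      order: 100                      # low order, so it's evaluated ahead of per-app policies
      selector: quarantine == "true"  # label a suspect workload to isolate it
      types:
      - Ingress
      - Egress
      ingress:
      - action: Log                   # Log doesn't terminate evaluation...
      - action: Deny                  # ...so the next rule then denies the traffic
      egress:
      - action: Log
      - action: Deny

Because it's global, labelling a pod quarantine=true in any namespace triggers it; there are no per-namespace copies to maintain.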
With host endpoints, you can configure Calico to automatically create host endpoints for the nodes in your cluster, and then build policy around them just as we do for workloads like pods: allow this traffic, deny that traffic, for those hosts. You might have a specific use case for a particular host where you want to allow or deny certain ports; perfect, you can do exactly that against its host endpoint, and I'll show a good example in a while of why host endpoints are quite powerful. Again, just like the Kubernetes implementation, you can use kubectl apply -f on any YAML manifest; as long as you're using the correct kinds and the correct API versions, it's going to work. We also offer our own calicoctl, though I think as time moves on we'll probably deprecate it, since most people use kubectl universally across different frameworks, not just with Calico, and I'd guess everyone prefers to continue using kubectl. And if you use our enterprise product (I'm not here to sell enterprise to a community session, it's just worth noting), there are even further extensions, like the ability to specify DNS policy at the egress level: don't just allow traffic to an IP address, as we talked about, but allow-list or block-list a specific domain, even a wildcard like *.domain.com. That's a further abstraction again: rather than just focusing on IPs, we can now deal in DNS names. This is something you cannot do in the Kubernetes policy implementation. Enterprise also has the concepts of tiering, and of previewing and staging policies, all really useful if you want a better understanding of whether a policy is going to work, or want stronger guardrails around rolling it out. But as you can see from the top of the slide, the open source policy implementation already has plenty of added capabilities, and those are what we'll address in this session. So here is a Calico network policy. As highlighted, the API version we're dealing with is projectcalico.org/v3. It has the same kind, so this is a namespace-scoped NetworkPolicy. The order value, as I mentioned earlier, sets the precedence of the policy: order 900 is evaluated after order 800 or order 700, so it's further down the chain. It also scopes the same way we saw earlier: under the spec, there's a selector based on the role label. But here are the bits that are different for Calico, which we didn't see earlier: you can specify explicit actions. We never mentioned actions before because allow was the only option. In this case I can say: I permit certain ports and protocols, but I never allow this particular TCP traffic, because I don't trust a certain service account. So I deny ingress TCP from any source using the service account named sre-account; the diagram on the right should give a better idea of that. In other words, the sre-account service account cannot talk to any workload with the label role equals helper over TCP. And in the second rule, we're logging ICMP traffic coming from any source matching the selector color equals green. So you can be quite broad with this. It isn't really meant to be a powerful example.
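Pieced together from the slide, it looks roughly like this (the policy name, namespace, and exact service account name are placeholders on my part):

    apiVersion: projectcalico.org/v3
    kind: NetworkPolicy
    metadata:
      name: deny-and-log-example
      namespace: default
    spec:
      order: 900                # evaluated after policies with order 700 or 800
      selector: role == 'helper'
      types:
      - Ingress
      ingress:
      - action: Deny            # explicit deny, which plain Kubernetes policy can't express
        protocol: TCP
        source:
          serviceAccounts:
            names:
            - sre-account
      - action: Log             # log ICMP arriving from anything labelled color=green
        protocol: ICMP
        source:
          selector: color == 'green'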
It's just to show how flexible it is: you can not just allow, you can explicitly deny. We also allow service account context. You can log, which is brilliant if you're forwarding to a syslog or SIEM solution. And it's not just TCP and UDP; you also have ICMP, the protocol behind ping scanning, and we can log, deny, or allow that traffic too. So now I'm going to try to show a real-world example of what you might do with a traditional firewalling tool. In a monolithic architecture, a zone-based architecture is what teams often try to create: you build a demilitarized zone, a DMZ. Here, it works off label context (again, we're not basing it on IP; we want to do this based on labels): if a pod has the label fw-zone equals dmz, it's categorized as a demilitarized-zone workload. What do I mean by that? As you can see in the diagram, those are the pods that can talk to the public internet and also receive traffic from the public internet. In this example there look to be four Kubernetes pods in that DMZ; they can talk to the public internet, and they carry that dmz firewall-zone label. Now, as you can see, the trusted pods should be able to talk to the DMZ, and the diagram should explain it better: the trusted zone is also allowed to receive traffic from the DMZ. The point here is that the trusted zone by no means can talk directly to the internet, and in the same way, the restricted zone cannot either. What we're ultimately trying to do isn't really about the trusted pods themselves. What we care about is ensuring that only permitted pods can talk, for permitted purposes that we have approved. With that defense in depth, it becomes near impossible for a compromised workload to get into the restricted zone, where all the personally identifiable data lives. Keep in mind we're trying to comply with PCI, HIPAA, or SOC 2. We want to make sure that if we have sensitive data, whether payment details or other PII in that Oracle DB example, the back-end pod, it can only talk to trusted, permitted pods over trusted, permitted ports and protocols. We apply zero trust, and only then will the trusted pod be able to relay that data on to the DMZ, which in turn can talk to the public internet. With traditional firewalls you would have three firewall zones, with large IP ranges between those zones. Again, that's a lot of context, so we need to make it static: as those workloads go up and down, I don't want to keep rebuilding the structure; it needs to stay highly dynamic. Otherwise, if we didn't create this, those pods would have full lateral access between zones. There would be nothing to stop a compromised workload that got into the DMZ from talking straight to a restricted DB and stealing personally identifiable data, even if it was just doing port scanning. And once it has that data, if it gets away with it, it can reach out to a command-and-control server and do whatever it wants, because it can talk to the public internet. That's why we absolutely need these zones. So, as I say, large IP ranges for egress, and then there are a bunch of different external tools, as you can see from the example, that some pods need to talk to, maybe the trusted pods or the DMZ.
Probably the front-end pod needs to access those external endpoints, or maybe the trusted pods do. So we'll make certain exceptions to the rule: only allowed to talk to the DMZ, unless it's talking to approved external endpoints. And again, we need to identify exactly which endpoints we're opening up to, not opening up DNS to potentially anywhere. That's why the DNS egress rules I mentioned earlier are really useful in Calico Enterprise. So I have an example I can share; you can go to docs.calicocloud.io if you're interested. It's an application called Storefront, and it's pretty standard: a front end and two microservices, which sit in the trusted zone, plus a back end and a logging component. A standard logging tool, but one that's also holding sensitive data. Within those zones, we have to assume someone is capable of compromising our cluster and introducing a rogue workload. If they do compromise it, they'll perform TCP port scanning and some data exfiltration, then try to reach out to a command-and-control server to relay the data they've taken from our back-end pods. So we build the zones, DMZ, trusted, and restricted, so that the blast radius is far more contained. As you see, the rogue workload managed to get into the restricted zone, but it can't talk outside the zone, because we've applied zero trust to everything beyond it. The only way the data could reach a different zone is over permitted ports and protocols, and only from a back-end or logging pod to those intermediary workloads. So what we need to lock down isn't just north-south; a traditional firewall can do that. It's the east-west traffic where it gets complicated. (As I mentioned, staging across different environments is something we can talk about in a while.) Within this tenant, and I have a screenshot here from a Google Cloud environment that I run, you can see I have a front-end pod, a back end, and two microservices, and they all have those labels. If you run kubectl get pods in the storefront namespace, where the application is running, with --show-labels to see the labels associated with the pods, you won't see a structured label schema by default. You may have something similar to what you see there: a generic pod-template-hash value, or maybe just app equals something, and that can be okay. But when you're building a zone structure, it's good to add additional labels like I've done here, where fw-zone equals dmz, trusted, or restricted. Once you have those labels, it's very easy to build a policy like this one. Now, one slight thing to mention about this policy: it's different from a plain open source one because it originated from Calico Enterprise, so it has the concept of tiering. There's a tier called product, another enterprise abstraction, which is why the policy name is product.dmz as opposed to just dmz. But other than that, if you remove the tier line and remove product from the name, the policy is exactly the same for open source, for free. Everything about it will work.
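So, with the tier stripped out, a reconstruction of that DMZ policy for open source would look roughly like this (the rule details are read off the screenshot; type == 'public' refers to a network set I'll explain in a moment):

    apiVersion: projectcalico.org/v3
    kind: NetworkPolicy
    metadata:
      name: dmz
      namespace: storefront
    spec:
      selector: fw-zone == 'dmz'
      types:
      - Ingress
      - Egress
      ingress:
      - action: Allow
        source:
          selector: type == 'public'   # a NetworkSet standing in for the internet range
      - action: Deny                   # everything else received is denied
      egress:
      - action: Allow
        destination:
          selector: type == 'public'
      - action: Allow
        destination:
          selector: fw-zone == 'trusted' || app == 'logging'
      - action: Deny                   # any other destination is denied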
Walking through it: as you can see from the screenshot, it's scoped to the storefront namespace, the test application we just introduced. Anything with the dmz value for that firewall-zone label is allowed to talk to the public internet, which I've actually abstracted a bit: I've said type equals public for that IP range, using a network set. Alternatively, if you're not creating network sets, which we haven't discussed yet, you could just match the CIDR range directly. Everything else on ingress, whatever we receive, is denied. And similarly for egress, what talks out from the DMZ is allowed to reach pods with the labels fw-zone equals trusted and/or app equals logging. Notice how fine-grained that is; it's Boolean logic we're talking about, AND/OR operators. We're allowing it to talk to trusted, but rather than creating another rule we say 'or app equals logging', and you could equally say 'fw-zone equals trusted AND app equals logging'. So you can define fine-grained context there. If the destination isn't in those permitted zones, the final action, as you can see, is to deny that traffic. That's really powerful, and I strongly recommend creating something like this for your own workloads, especially if they have a similar architecture. So the example I showed there was the demilitarized zone. Next we have the trusted zone, same idea again. We know which pods this is going to apply to, and it stays highly dynamic, because even as pods go up and down they keep the static firewall-zone label, and the rules always stay the same. On ingress, trusted is allowed to receive traffic from the DMZ, the zone above it, and from other pods in the trusted zone. If the source isn't in those two zones, it's not permitted, because this zone is, as the name says, trusted. On egress, we're allowing it to talk only to restricted, plus pods within its own zone. So it can talk out to pods in the same zone and relay what it receives from the zones above, and anything else is denied. It really is fine-grained context here. And then the final one seems repetitious, but it's really powerful: the last zone is restricted, and it can receive traffic from the trusted zone or from other pods in the restricted zone. Because, as we found out, although there's only one back-end pod in that zone today, there may be more pods in the future, and it's important to have that rule there; otherwise, if there were more pods, they couldn't actually receive traffic from one another. It also allows all egress traffic out, which is another point worth making. But of course, if a restricted pod tried to talk to some other IP range, that would be blocked by the other guardrails we've already configured. So yes, this is our network policy, and by now you're probably getting quite familiar with the ideas of deny, allow, and log. If you want to see which policies you've created, and again you may have tiering like we saw in Enterprise, you can run kubectl get networkpolicy for the namespace-scoped policies or, as you can see on the right side here, kubectl get globalnetworkpolicy to see which ones are globally scoped.
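For reference, that's simply the following (depending on how Calico is installed you may need the full resource names, such as networkpolicies.projectcalico.org, and the output shown here is illustrative):

    kubectl get networkpolicy -n storefront
    # NAME
    # product.dmz
    # product.trusted
    # product.restricted
    kubectl get globalnetworkpolicy
    # NAME
    # quarantine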
The only reason the syntax is slightly different here is that the names appear as tier.policy-name, which is familiar in Calico Enterprise. If you're using Calico open source, it would just be the policy name, because open source doesn't have this concept of tiering, which is not an issue. So to find your policies it's just kubectl get networkpolicy, or kubectl get globalnetworkpolicy for the globally scoped ones. Back to logging and denying actions, and again just trying to enforce zero trust: this is strongly recommended if you've got mixed environments. Let's say you have a single cluster, but some namespaces in it are effectively test environments sitting inside what is otherwise a production environment. It's really important to define the purpose and intention of your workloads, so here we've got a label of environment equals prod or environment equals dev. In this case, a pod with role equals frontend in production can talk to, and receive from, anything in the production environment, but TCP traffic from development into prod is denied. I think I made a typo on the slide, but the point still stands, as you can see on the right-side view. What we're trying to express for the development side is that even a new development workload that's identical to a production one can't talk to production and potentially compromise it, because these are the applications being used by our users. So it's important to set guardrails based on intentions as well, not just zones. We've now gone over the difference between Kubernetes and Calico network policy. Kubernetes policy goes quite far: it gives us ingress and egress rules, pod selectors, namespace scoping, protocol matching, even IP blocks. However, when it comes down to scoping richly detailed environments, you may also want globally scoped policy. You may need to explicitly deny or log traffic so that forensic analysis can show where that traffic is going. And from a compliance perspective, you may have non-compliant identities that shouldn't be permitted to drive traffic from workload to workload, so having the ability to scope service accounts into those richer matches matters, as does integrating with your existing service mesh. If you're using Istio and you need to understand the HTTPS layer 7 traffic on top of what network policy can already scope by default, you can go much further towards a full picture of what traffic to allow or deny in your environment. And sticking with the topic of traditional firewalling solutions: firewalls are deployed enterprise-wide, and they can be quite expensive, whereas here we're talking about totally free, open source YAML manifests. We've given some simple examples, but I suppose the reason some organizations still ask for firewalls today, and that's absolutely fine, is that they fall under certain regulatory standards, and some of those compliance frameworks are somewhat archaic. They've been around for a long time; will they change in the future? It's hard to know. In the meantime, you may still have a firewall solution that needs to sit at the perimeter. That's fine; that's not the discussion we're having here.
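(Circling back for a second before we leave the examples entirely: reconstructed, that prod/dev guardrail could look roughly like the sketch below, with the slide's typo fixed. The policy name and the choice to make it global are mine.)

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: isolate-prod-from-dev
    spec:
      selector: environment == 'prod'
      types:
      - Ingress
      ingress:
      - action: Allow                  # prod workloads may talk amongst themselves
        source:
          selector: environment == 'prod'
      - action: Deny                   # dev may never initiate TCP into prod
        protocol: TCP
        source:
          selector: environment == 'dev'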
Back on firewalls: the real issue is that a traditional firewall you've invested in has no visibility into east-west traffic, meaning what goes on between pods inside the cluster, not just what leaves the cluster, and therefore it has no control over that traffic. So we have to use policy implementation there. Also, your security team may be centered around those firewalls. Their concern may be how much time it takes to start writing policy, because it's a new thing they have to get skilled up on, or maybe your development team needs to write policy alongside security. The fine-grained policy we've seen is easy to use, which is good to know, and if you're looking at Calico Enterprise, there's a web user interface and additional controls so your security and DevOps teams can work alongside one another. But ultimately, network policy, as you've probably gathered from the session so far, is the de facto way to secure east-west as well as north-south traffic for those pods. The existing policy creation process is simply slow for DevOps, so it needs to be, as I say, tooling that works for security and DevOps together. DevOps teams rely on Kubernetes to enable agility and speed; that's why they're using Kubernetes today, and that's no surprise. Connecting a new application or service via a firewall rule change takes time. The point of our policy being highly dynamic, as I mentioned earlier, is that as new workloads come up, we don't want to invest time in reinventing the wheel, and we don't want to spend a lot of time implementing firewall rule changes based on IPs. Start while the environment is small, and as it scales into something quite large, or into multiple clusters, you can replicate the same policy implementation across multiple distributions and multiple cloud or on-prem bare-metal environments, without any major choke point: a centralized deployment, with the same API easy to replicate in each environment. And realistically, DevOps can apply this in the CI/CD pipeline, which is probably the most important part. These are YAML manifests; there's no reason changes can't be made at the highest point in the deployment chain. Security should never be an afterthought; it should be part of development. 'Security as code' is the phrase people like using, so use it within your CI/CD pipeline for automation. As we come toward the end of this session, it's important to note that it's not just security; it's also compliance. You may still be using firewalls for that compliance, which is totally understandable. However, if the existing firewall investment doesn't actually maintain compliance, if it doesn't give us control or guarantee to an external auditor that the east-west traffic inside our cluster is compliant, then we need to start looking seriously at how far we can go with our policy. Similarly with access controls: if there are no access controls yet for those external resources, we need to define them. We need to define which pods can talk to which external endpoints, and which ports and protocols are permitted for those. And if there's a lack of visibility, we have an enterprise-grade offering that's dynamic and gives more visibility into it.
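To make the 'security as code' point concrete, the pipeline step can be tiny. This is a hypothetical GitHub Actions fragment with an assumed repo layout; any CI system does the same job:

    # assumes policy manifests live alongside the app under k8s/policies/
    - name: Apply network policies
      run: kubectl apply -f k8s/policies/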
Either way, the policy scoping is well defined. As long as you write a good label schema from the beginning, you can be pretty confident that your policy is working and matching what you expect. So again, it's trustworthy, and it's popular for a reason. I hope the session was interesting and that you got a lot out of it. As you know, we're an open source community. We have an open Slack channel with over 4,000 users in it, which is great, and it's always going up. If you have questions about any of the content you've seen today, you can reach out to me directly via the Slack community. The project itself has over 150 contributors, and it's worth noting we have the Project Calico GitHub repo. You can bring up these discussion points with our developers directly via Slack, via that community, or via discuss.projectcalico.org. The project is widely adopted; I think it's still the most widely adopted networking and security solution for Kubernetes, with over one and a half million nodes powered by Calico every day. So whether you're looking at using our CNI implementation or just the policy we discussed today, you can reach out to us through any of those channels. I hope you had a great session, and I look forward to hearing from you soon.