So for those who know me, I'm one of the co-organisers of the Kubernetes Meetup with Vincent, and I haven't really given a talk in quite a while; neither has Vincent. We've thankfully been able to line up a lot of people who've been talking, so we thought it's about time we actually gave a bit of a talk back as well. Just quickly, the stickers that he was handing out are kindly provided by Twistlock, so I'll try and hand out a few as I go through my talk as well. If anyone has any questions on the Kubernetes security topics I go through, please raise your hand and ask.

Let's first start off with just a bit of a show of hands: how many people saw the CNCF blog post that came out probably about a week ago on Kubernetes security? This guy always puts his hand down. It was quite an interesting post that covers a lot of the important parts of Kubernetes, and this talk will probably cover a lot of those things as well. So if you've read the post, you can probably take a bit of a walk during the parts that overlap. But there are actually some other interesting things I'd like to cover as well. So hopefully, for those people who are doing a lot of work in Kubernetes, especially around the security aspects, there are a few interesting things that will perhaps be useful to consider too.

So let's kick it off. The topics I'm going to cover in my talk are really four different things: containers, the operating systems we'll typically be deploying Kubernetes on top of, Kubernetes itself, and the benefits that come from working in a team, or for enterprises and big businesses, using Kubernetes.

Let's quickly kick off with containers. I guess I don't need to introduce what a container is or what Kubernetes is. Is there anyone who doesn't know what these sorts of things are? Yeah. I thought I'd try and catch them out. Anyway, typically, if we're building out containers to run on top of a Kubernetes environment, we're trying to keep these things as small as possible. So containers are minimal: typically running a single process, keeping the code small, reducing the chances of vulnerabilities in what we're deploying. They're very much task specific: predefined operations, an application we know about, the ports typically set up front, the volumes that are being attached to them. That really lets us have a stable environment that we can build things like anomaly detection around, which I'll go into in a little more detail later. Isolated: obviously that's kind of the whole point of what a container is, that it provides some semblance of isolation. There's always a discussion about VMs versus containers versus all the various other isolation mechanisms coming out there, but what we're talking about here, at least for containers, is cgroups and namespaces. And of course, reproducible: these things can be built again and again and again, and updated with the latest patches, without necessarily affecting other parts of the operation.

So these are the basics of what we're looking at around containers: building a secure pipeline, secure packaging that delivers the applications and services into our Kubernetes environment. So a quick question and a quick raise of hands: how many people think that virtual machines are more secure than containers?
Okay, how many people think that containers are more secure than virtual machines? Well, this is interesting: we've actually got more hands for containers than for virtual machines. So the real answer is, well, maybe. I'll go into a little more detail about why it's not so cut and dried.

There's a really interesting blog post that was written in 2018 by a guy from IBM, James Bottomley, talking about how to measure container and virtual machine security. It was really quite fascinating, and it got some interesting discussion going around the Kubernetes community. Did anyone actually see the blog post that came out about the work he's doing? Good, so I can go into a little bit more detail.

He came up with the idea of what are called vertical and horizontal attack profiles. If you think about the environments that typically run our code, we have a few different aspects. There's what is seen as the vertical attack profile, or VAP: that's the stuff the app developers are responsible for, all the code that's traversed to deliver a service, from the front end through to the database and the kernel, that sort of area. It's typically the domain of a tenant in a cloud service. The domain of the cloud provider, on the other hand, typically aligns with the horizontal attack profile, or HAP: the area of things like the hypervisor, the kernels, the container runtimes, that sort of thing. The cloud providers and the operations teams look after that part of the environment.

On one side, for the vertical attack profile, the VAP, more code increases the risk of exploitable vulnerabilities. Makes sense, right? On the other side, more code increases the risk of breakouts and containment failures for your hypervisors or for your containers. So James came up with a very interesting calculation, at least for the HAP; it's not quite as easy for the VAP, because of course there's a lot of code there. But the density of kernel bugs, multiplied by the amount of kernel code exercised through the syscalls being made, is really the horizontal attack profile for a particular environment. So let's go through a few scenarios of what that looks like.
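Written out as a rough formula (this is my paraphrase of the heuristic; James's post actually measures kernel code coverage under each workload, so treat this as a sketch rather than his exact metric):

\[
\mathrm{HAP} \;\approx\; \rho_{\text{bug}} \times C_{\text{sys}}
\]

where \(\rho_{\text{bug}}\) is the density of bugs in kernel code and \(C_{\text{sys}}\) is the amount of kernel code exercised by the workload's syscalls.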
So the easiest way to look at it is bare metal, the simplest thing we're used to. We have two physical servers sitting next to each other in a rack. Sorry, this has got a laser pointer, doesn't it? There we go. So it's fairly difficult to jump back and forth between two pieces of hardware in a physical data centre. There are ways, but for the most part these things are independent stacks. The tenant manages this vertical attack profile here: their application at the top with all their code, a bit of middleware, some libraries, running down to the kernel, and then any of the bits and pieces that are exposed these days with IPMI and the security issues leaking around there. So the VAP is fairly large, but the HAP, since the hardware is effectively air gapped, is very, very minimal. This is our baseline; this is what we're working from, and it can't get much lower than this.

Now if you look at a shared environment, this is the good old days, where you might have two different applications running on a single server. There's no isolation, not even containers necessarily. We've kind of moved on from this era of shared hosting environments with vhosts or any of the other setups you'd typically think of. This is the worst possible option: there's no isolation going on here. An application vulnerability can mean an attacker both goes down the stack all the way to the kernel and jumps across into applications running on the same machine. Significantly higher VAP, with both the middleware and the libraries exposed, and a significantly higher HAP: a single kernel with no isolation whatsoever.

Jumping to virtual machines: that's what we're most familiar with in cloud provider environments, and on-prem too. We still have this stack, which actually looks surprisingly long: applications running all the way down to a guest kernel, then the virtual hardware sitting on the hypervisor, and then there's probably a host kernel there as well, all the way down to the hardware. So it's interesting to see that the VAP is actually higher than bare metal, obviously: we've got double the kernels, and bits and pieces of virtual hardware that sit in between the stack there. The HAP here is also higher, because you've got a hypervisor mediating back and forth in there as well.

Building on top of that stack, we have containers. Now we're assuming we're running a container in a virtual machine at a cloud provider, which is how it typically goes. In this sense the VAP is almost identical to bare metal from the tenant's perspective: application, middleware, libraries. The kernel itself is again shared across the host, but there is the isolation that you get from the containers themselves. On the other hand, the horizontal attack profile is much, much higher: you've got something like ten times more syscalls exposed. But if you do start implementing sandboxing with things like seccomp and the various secure kernel options out there, you can reduce that quite significantly.
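Since seccomp is the main lever here for shrinking a container's HAP, here's a minimal sketch of enabling the runtime's default seccomp profile on a pod. The `securityContext.seccompProfile` field is the modern (Kubernetes 1.19+) way to do this; on older clusters the equivalent was the `seccomp.security.alpha.kubernetes.io/pod` annotation. The names and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app                       # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault                  # block syscalls outside the runtime's default allowlist
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
```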
If anyone knows Jess Frazelle: she is a very well known and very well respected member of at least the Docker, Kubernetes and now GitHub communities, and she's done a lot of work on containers. She's pretty crazy in that she runs a lot of her workstation inside containers, and she has written some very good blog posts about containers and security. So when she saw the work that James put out in terms of the VAPs and HAPs, she said it makes a very compelling argument that security is actually about the same between a container and a virtual machine.

So the controversial conclusion, and this comes from James's blog, is that containers are by far the most secure virtualisation technology from the tenant perspective. The HAP is higher, but it's the infrastructure provider's responsibility to mitigate and patch it. Virtual machines are the same deal: issues come up, you need to be patching them; containers, you need to be patching them. You need to be maintaining machines; that's the reality of the situation these days.

It's also quite interesting, though the initial blog post on VAPs and HAPs didn't cover it much, that there's a new breed of technologies coming out around different container methods. James had a bit of an interest here because he was working on a technology called Nabla, which is IBM's container runtime; gVisor is another one that Google's been working on for secure isolation; and for anyone who's been following Amazon, Firecracker came out a few months ago as well. Interestingly, he actually put some numbers behind all of this, so you can see both attack profiles compared across these technologies. So I'd recommend, if anyone's curious, having a look at the blog and having a read; there's some really interesting stuff going on there as well. Any questions on that one before I continue on to all the Kubernetes-related things?

Digging into the operating system very, very quickly. The operating systems we've typically used in the past, RHEL, Ubuntu, any of the standard server operating systems, typically have a fairly large attack profile that's not necessarily optimised for containers. So in the last few years, a number of dedicated container operating systems have made their way out. Obviously CoreOS is the big one, now owned by Red Hat; there's Rancher's; and a few other ones that don't get used as much these days, I suppose. The important things here are immutable operating systems, read-only file systems to reduce the chance of malware infecting the machines, and the idea of replacing operating systems for upgrades and repairs. For anyone who's used Google Cloud and seen how they do their automatic node upgrades and automatic node repairs, it's a really, really compelling way to not have to worry about your systems getting compromised or having problems with them: with an immutable operating system, you can reduce misconfiguration vulnerabilities.

Obviously, for anything you're running, be it a container operating system or a standard server-based operating system, you want to be auditing Docker and the container runtimes to make sure they're configured correctly and not exposed. There are tools like Docker Bench and various others out there to make sure these things are actually configured and set up correctly.

A fairly controversial one is the idea of disabling SSH access. If we have our metrics being exported out, and our logs being sent out to an aggregator, Elasticsearch or whatever else, then the idea of just rebuilding the environment when something goes wrong, or when there's a risk of vulnerability, can mean taking away a fairly obvious point of access and the risk of various vulnerabilities coming through. So it's fairly controversial; there was a very big debate on Twitter a few months ago about whether it's better to keep SSH open just in case there's something that isn't actually collected by your logs, versus keeping it closed and secure. But I'm not going to go into that debate here; if you want, you can certainly talk amongst yourselves.

Let's go on to Kubernetes, because this is kind of what we're here for. The threat models we typically look at in a Kubernetes environment: obviously external attackers, and we've heard about things like Tesla getting hacked; application and container compromises, on both the VAP and the HAP that I just talked about; and the very typical compromised credentials, people walking in the front door with the key.

So, typically in Kubernetes, this is probably a fairly familiar diagram for anyone who's actually using it. The isolation really comes in layers: the containers themselves; the pods keeping things together, with pod security policies providing the isolation and refinement when they're actually deployed into an environment.
The namespaces can keep, say, a tenancy together, even if it's only soft multi-tenancy, and alongside that there are things like network policies and control of the network. Then the physical node itself, and then the cluster as well. So there are various isolation mechanisms expanding outwards from the container itself.

So here we have the segregation that comes as part of that. At the namespace level, visibility and access are critical, with things like RBAC, quotas to avoid denial of service, and secrets themselves. Obviously there are various ways you should be managing your secrets properly, be it encryption on disk, be it Vault; the standard configuration for secrets in Kubernetes is okay, but it's not really that secure. At the node level: putting particular workloads onto particular nodes using affinity and taints, so that you can get certain guarantees around isolation; if you do need strong multi-tenancy, keeping the nodes in a single cluster separated between different environments can make a difference. And at the pod level: limited communication, again through network policies, and service meshes to a certain extent; the sorts of things that give control not just over the pods themselves, but over what actually reaches out into the network as well.

So, access control is obviously a really fundamental part, and authentication is the first piece of it. There's the access for humans, the people who are actually working in the clusters with kubectl (kube-cuddle or kube-control or however you want to pronounce it), delegated out from the standard Kubernetes PKI to things like LDAP or OIDC or various account management systems; and the apps themselves, controlling how the various service accounts are managed, with RBAC and the various tooling. Authorization then comes as part of that, both for the humans and for the applications themselves, making sure we limit what can be done, and by whom, in a cluster.

Access itself is provided and limited through various endpoints. Making sure TLS is enabled is fairly obvious, but it's a really critical part, as is making sure various endpoints are not exposed to the public internet; various ports that were around for legacy environments tend to stick around, like the cAdvisor ports that are now being disabled by default, or metrics that are exposed internally in the cluster environment. And finally, enabling audit: making sure that anything that's actually done on the cluster is tracked and put out into a log aggregation system or a SIEM, something that will verify that what's going on is in fact the right thing. It's quite interesting that RBAC rules can then be fed back from the audit logs to limit the scope even further; there are some interesting tools out there to do exactly that.
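To make the namespace-level RBAC idea concrete, here's a minimal sketch: a Role that only allows reading pods in one namespace, bound to a single user. The namespace and user names are placeholders.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a                   # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
  - kind: User
    name: jane                        # placeholder identity from your IdP
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```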
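And a minimal sketch of the quota idea mentioned above, capping what a namespace can consume so one tenant can't deny service to the others; the numbers here are arbitrary examples.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # hypothetical namespace
spec:
  hard:
    pods: "20"             # cap total pods in the namespace
    requests.cpu: "4"      # cap summed CPU requests
    requests.memory: 8Gi
    limits.cpu: "8"        # cap summed CPU limits
    limits.memory: 16Gi
```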
Image security: we talked a little bit about containers earlier on, and vulnerability scanning is obviously such a critical part these days. You don't want to go to Docker Hub and download something when you don't know what it is; make sure you scan it first, or build it yourself, so that you can trust it. There are a number of excellent tools out there: Twistlock, being one of the people who have come along and sponsored us; obviously CoreOS's Clair, which is open source and is now bundled into a lot of other tools out there; and Aqua Security for scanning package vulnerabilities alongside that. A number of these tools will actually do policy-based scanning on top of the basic scanning, too.

So, scanning images as much as you can, wherever you can, is kind of the rule: in the registries you're working with, making sure an image is up to date before it's deployed; in the CI pipelines, before something actually gets deployed out there; and also as your containers are running, making sure that what's actually in production isn't sitting there with vulnerabilities that could be exploited at any point in time. The nice thing about containers, obviously, is that immutable images mean a security update is typically a rebuild away, and if you're doing everything right with your CI/CD pipelines, it should be a simple matter of rebuilding and pushing it through the pipeline and back out again. It's not always that simple when there are policies and everything else involved, but that's the goal: getting this done with as much speed and agility as possible.

Kubernetes admission controllers are a really, really important part of making sure we're not deploying things that could be potentially vulnerable. Grafeas is a fantastic tool here. Does anyone know Grafeas? Oh, great. It's a tool that actually came out of Google, amongst all the other things that are Kubernetes-related. They built out something that's really designed to handle what they call the software supply chain, but it's really just a way of handling metadata and feeding it into Kubernetes, so that the admission controller will reject anything that isn't checked and certified.

So somebody checks in some new code for a component and it gets checked: the authorship and provenance are recorded, like your PGP keys on GitHub when you're submitting code, and that gets pushed into Grafeas. The build itself gets pushed in there, so it gets verified, and an image signature is put in there too: stamped, verified, and again stored into the metadata repository. The tests pass, get signed off, and that goes back into Grafeas again. Code only gets deployed if it meets the security policy: when it hits Kubernetes, the admission controller can check that everything is okay, that all of these bits and pieces have passed and are green. The deployment then gets recorded back into the metadata repository as well, so the CISO and the various security teams or CIOs can make sure it's all compliant.

Now, this sounds a little bit like enterprise magic. There are a number of commercial tools that do this as well, but this is something Google has released open source, first for their own cloud services, but also for Kubernetes environments running outside of that. It's an important step towards making sure the various parts of a build pipeline get recorded and tracked, and then used as a gating mechanism when we're deploying code out into our environment.
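To give a rough idea of how that gating plugs in mechanically, here's a minimal sketch of a ValidatingWebhookConfiguration that sends every pod creation to an external policy service for a pass/fail decision. The service name, namespace and path are hypothetical placeholders; in a Grafeas-style setup, something like Kritis would sit behind this endpoint.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-policy                      # hypothetical name
webhooks:
  - name: image-policy.example.com        # placeholder webhook name
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]               # check every pod as it is created
    clientConfig:
      service:
        namespace: security               # hypothetical namespace
        name: image-policy-webhook        # hypothetical service checking signatures/vuln metadata
        path: /validate
      caBundle: <base64-encoded CA cert>  # elided; the CA that signed the webhook's TLS cert
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                   # reject pods if the policy service is unavailable
```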
So Grafeas is an external metadata store? It is external metadata, yes. So it's actually not necessarily Kubernetes-specific, as the diagram here shows; it can be used for other services too, but it makes a lot of sense for Kubernetes, and it makes a lot of sense in a CI/CD environment. So for anyone who's working in enterprise environments with really strong policies, this is actually really interesting to look at.

So, pod security. Obviously we're launching our containers and running our pods, and we want to make sure they're secure in a way that avoids breakouts, keeping our VAPs and HAPs as small as possible. Keep things from a known origin: we're not pulling random images from Docker Hub, and we're doing vulnerability scanning on them if we do. Run the workloads with least privilege, reduce the scope with seccomp and all the various other ways of restricting what we're launching, and mount minimal host volumes. It's obviously risky to be putting data on the host itself; there are use cases for it, but keeping it to a minimum makes sure it can't be used for breaking out and jumping across containers on a particular host. Pod security policies and security contexts are really quite important for that, both for the administrators, who set up what's allowed, and for the developers launching their applications: setting up mandatory access control, seccomp and SELinux, disabling root and privilege escalation, and making sure they're using read-only file systems. Now, it's not always possible, but it is a really important part to set these things up as policy and as process for anyone who's actually deploying pods into an environment; I'll show a small sketch of a security context in a moment.

So, network policy is another really critical part. Obviously we don't want our containers roaming freely around the network, and network policy is a way of setting up a really nice, granular firewall between apps and services. It gives a lot more control than what we're typically used to in a tiered, firewalled environment, and it hands that control back to the application developers and the operators, while also giving ops teams and auditors a way to see how things are actually being implemented; and when you come in with admission control in Kubernetes, these things can be checked to make sure they also fit with policy. There's a default-deny sketch of that below, after the security context one.

So, is anyone familiar with Open Policy Agent? We've got a few. It's a fairly obscure tool that's now part of the CNCF, but it's essentially a language for defining policy: it works over JSON and allows various rules to be set up for a number of different areas, Kubernetes, SSH, various back ends, and it can be used for admission control in Kubernetes to check whether things are actually within policy when you're inserting the YAML into a Kubernetes environment. As one way of doing that, Microsoft recently contributed an admission controller tool called kubernetes-policy-controller, and it's becoming a really important way of making sure we're gating stuff as it actually comes into a Kubernetes environment, with a fairly consistent language that can be used across a number of different environments.
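Here's that security context sketch: a minimal example of the hardening options just mentioned, non-root, no privilege escalation, dropped capabilities, read-only root filesystem. The names and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                        # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true                  # refuse to start if the image would run as root
        allowPrivilegeEscalation: false     # no setuid/sudo-style escalation
        readOnlyRootFilesystem: true        # container filesystem is immutable at runtime
        capabilities:
          drop: ["ALL"]                     # drop every Linux capability we don't need
```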
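And the default-deny network policy sketch: a common pattern is to deny all ingress in a namespace, then explicitly allow only the flows you know about. This hedged example denies everything inbound in a hypothetical namespace, then lets a front end reach the API pods on one port; the labels are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a                  # hypothetical namespace
spec:
  podSelector: {}                    # applies to every pod in the namespace
  policyTypes: ["Ingress"]           # no ingress rules listed, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: api                       # placeholder label
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend          # only the front end may connect
      ports:
        - protocol: TCP
          port: 8080
```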
Alongside that, although network policy is really the fundamental piece for making sure we're good network citizens and we're not letting various malware and attackers jump around our networks, service meshes are another path that can take security further: giving more identity in the network, setting up TLS between different environments, giving more visibility and monitoring into the network environment itself, but also providing more advanced ingress and egress filtering. From network policies, where we're looking at roughly the L3/L4 level, we're moving up to L7, so we can actually start doing filtering based on URLs, or making sure that everything that actually goes out of our environment is using HTTPS; that can be quite useful too. Obviously the nice thing that's starting to come about, with things like Let's Encrypt or Vault certificates, is getting automatic certificates generated for our workloads as they're put into place, whether they're talking to the outside world or providing services exposed out there as well. Lots of different ingress controllers will do Let's Encrypt; it's a little bit mind-boggling as to which one to pick, so I'm sure that's a discussion or a talk unto itself.

So, for those who are familiar with Istio, this is a nice little diagram that explains a bit about what it does. We've had a few talks on it in the past, so is anyone not familiar with Istio? Anyway, Istio is what they call a service mesh: a way of providing proxying between the various workloads in an environment. The nice example they use in the documentation really shows ingress into the network, so this replaces, or is used in addition to, the Kubernetes ingress controller, and it provides SSL termination. The pods themselves can be labelled to provide more advanced load balancing and traffic management between the different pods. With what we typically know as Kubernetes Services, you can say: I have a mixture of pods running in a particular environment, with a label being used for load balancing control. Istio takes that to the next level, being able to route on things like user agents or particular strings coming in, so you could target mobile applications versus web applications, or do blue/green style deployments. More importantly, I think, is really setting up network identity between different workloads: transparent TLS between different environments to provide better security, making sure that this pod itself is talking to the correct database pod in the back end. And finally, things like URL filtering, as I mentioned: making sure this query back end is only querying https://google.com, rather than going out there and getting plain google.com, or a fake google.com. So it gives a little bit more control in that regard, and while it's complicated, it does provide some fairly strong primitives for building out a very strong network environment.
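As a minimal sketch of that "transparent TLS between workloads" point: in current Istio (the API has changed a few times since this talk's era), a PeerAuthentication resource can require mutual TLS for every workload in a namespace. The namespace is a placeholder.

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: team-a        # hypothetical namespace
spec:
  mtls:
    mode: STRICT           # sidecars only accept mutual-TLS traffic from other mesh workloads
```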
It also gives you some really nice visualisations into network traffic, so you can see what's going on. This is one of the visualisation projects for Istio; there are probably now about half a dozen different dashboards out there for seeing what's going on in terms of traffic inside Istio.

So, runtime security. We have our pods running; how do we know what's actually going on inside them? In the old-school world we have our antivirus and malware monitoring and various different tools there, but that doesn't quite work the same in container land; it does to a degree, and companies are trying to adapt. What's important is that since containers are task specific, it can be a lot easier to detect anomalies. Now, obviously there are exceptions to that rule, but for the most part you can really see what's happening inside these namespaces and keep track of things like a shell being spawned in a container, or directories and paths being accessed or written to: if you have an immutable image, you shouldn't actually be writing to disk while it's running. There are some cases where you do, but these things are fairly scoped and contained, so you know what's actually going on. The binaries that should be running at runtime: we're launching only one process in a container, so anything else that's running shouldn't actually be there. And the external services that should be contacted.

There are some really interesting tools here. Falco is one of them; it's also just been accepted as a sandbox project at the CNCF. It actually tracks what's going on inside containers and sends logs out to Elasticsearch or other SIEMs to help security teams keep on top of what's going on there, and then workflows can be triggered off that. Falco comes from a company called Sysdig, if anyone knows them, and they have a really good example in their blog where they built out a completely open source pipeline for keeping tabs on what actually goes on inside containers in a Kubernetes environment. So, Falco itself runs on each host; it installs a little bit of a kernel module, I think, or now it can actually use eBPF as a way of not having to patch too much, and it keeps an eye on what's going on inside a container. It has a set of rules and keeps tabs against them, and when a rule is triggered, that event, or events coming out of the Kubernetes audit log, can get fed into a NATS topic; a serverless function can then send it off into, say, Slack to alert teams, but can also kill off the offending pod and relaunch it, so that any potential compromise that was triggered there can be taken out as quickly as possible. So it's interesting to see that open source tools are really providing an option there that can be utilised for keeping on top of the behaviour inside containers.
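To give a feel for what those Falco rules look like, here's a minimal sketch of a rule that fires when a shell is spawned inside a container, loosely modelled on the stock rules that ship with Falco (`spawned_process` and `container` are macros from Falco's default rule set; this is a simplified version, not the exact stock rule).

```yaml
- rule: Terminal shell in container          # hypothetical simplified rule
  desc: Detect a shell spawned inside a container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
```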
Secrets: I quickly touched on this before. Environment variables are obviously not a secure method for putting secrets into containers; volume mounting is the preferred option, because the secret sits inside a tmpfs. But for the most part, secrets out of the box in Kubernetes aren't particularly secure, despite their name, because they're not encrypted in the data store. Setting up encryption at rest, which is now part of Kubernetes but not enabled by default, or integrating things like Vault, is a really important part of any kind of production environment; I'll show a small sketch of that config shortly.

So, quickly, the other area: user benefits. For cluster operators, we're looking at a smaller attack surface and faster updates using something like a container operating system, and a standardised platform for running applications and services. We all have a common language we can talk in: as an operator I can talk about what a pod is and what a node is, and as a developer I can talk about the same sorts of things. We have a common language, but we can also get a degree of control over user access and the workloads that are being put onto these different environments.

For developers, we get a common delivery format with containers that's easily reproducible and easily deployable. We get greater control over both network and application security handed to the developers, but provided in such a way that it can be used by operators and audit teams to get visibility into the full environment. This is stuff that gets put into source control, where everyone can get visibility and see what's going on. Vulnerability scanning happens during development and at runtime, making sure we have a good way of keeping tabs on things during deployment and during operation, and we get visibility into things like intrusion with tools like Falco.

For the network operators: again, being able to see the YAML files that really define the communication between our pods, the Istio CRDs that go in there, means we have them as a way of auditing and constraining the network environment. It's also a way of collaborating between the developers and the network ops team, a way of sharing that and putting it somewhere with visibility. On the subject of SDN: at my previous companies we joked that SDN actually stood for "spreadsheet-defined networking", for any of those people who are still looking after the spreadsheet containing the IP addresses. Gone are the days of actually having to look after and hand out IP addresses in that sort of way; that's a nice thing about Kubernetes and the security it provides, in some ways. Automated encryption and identity between applications reduces the chance of developers forgetting to put a certificate between different environments, different services, different tiers. And there's detailed monitoring, going beyond packet capture into things like protocols, moving up the stack into the visibility we can see with things like Istio.

For security and audit teams: visibility into the present state of environments, again through source control and declarative configuration. What has been put into production should in fact be what you see in the YAML files stored in source control, and the idea of GitOps lets you make sure that if there's any divergence, it can be brought back into line. API-driven, automated compliance, instead of handing out a thousand-row spreadsheet again. Audit logging of any kind of changes going in, and again, real-time visibility into intrusion or unusual behaviour in containers.
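Coming back to encryption at rest, here's the sketch I promised: a minimal EncryptionConfiguration, passed to the API server via its `--encryption-provider-config` flag, that encrypts Secrets with AES-CBC before they hit etcd. The apiVersion shown is the current stable one; the key itself is elided, and would be a base64-encoded 32-byte value you generate and guard.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                                      # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # elided; e.g. head -c32 /dev/urandom | base64
      - identity: {}                                 # fallback so existing unencrypted data stays readable
```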
So, in summary, from all the stuff I've covered up there: it's an amazing, secure future with Kubernetes, everyone gets unicorns and ice cream for breakfast, all that kind of stuff. It's not quite that. Kubernetes itself, as we all probably know, is a collection of tools. It's not a product, unless you're using something like OpenShift or the other distributions, and then you end up getting locked into a particular workflow. Kubernetes itself has the basics, but it does need a lot of configuration, and it does need a team to keep an eye on it; that's why you don't necessarily get away from needing an ops team when you do have a Kubernetes environment. And it's also very fast changing: new tools are coming out and changing very rapidly, and in some ways it's difficult for commercial offerings to keep up as well. But on the flip side of that, I originally put this presentation together halfway through last year, and in updating it for tonight and cleaning it up a little bit, not a lot has changed. The basics are really still fundamental; they make a lot of sense, and they're really in place for the various aspects of keeping a secure Kubernetes environment. It hasn't changed that much, and the tools are only getting more mature as they continue.

So, questions? Yes? I have one question regarding the VM versus container security. Yes. Which way is more secure for me: if I run my workloads in VMs together with a bad neighbour, or run my workloads in containers together with that neighbour? Which way is more secure?

So, again, in some ways it depends on which environment you're running in. If you're running in a multi-tenancy environment... Yeah, that's what I mean. Yeah. If your operators, the people who are keeping on top of both the container runtimes and the VMs themselves, are making sure that everything is kept up to date and patched, and that seccomp is in place, then you should in fact be reasonably secure; you're probably about the same with a virtual machine or a container. Obviously, there are always vulnerabilities; the kernel is an incredibly large surface area for attack, and there are always things that we may not necessarily know about that will pop up, zero days or anything else. So, again, I would say that if it's set up correctly, then it is pretty equivalent. If you're running something that you're managing yourself, a virtual machine environment is more than likely a little bit more secure, only because with your container environment you still need to go and make sure it's locked down: your seccomp profile and your security policies all in place and set up correctly as well. So, again, it's always "it depends".

But I'd prefer kernel-level isolation. And for networking, a physically isolated network is more secure than soft isolation using network policies. If security is really that critical, then you probably need to look at running on bare metal and completely isolating it. There's no magic bullet, I think; there are kernel vulnerabilities, container breakouts, virtual machine breakouts and potential vulnerabilities there too. So it's just a matter of keeping on top of that; there's no silver bullet. Anyone else? How are we for time? Sorry, Vincent, I'm going to cut you off.