Hello back, sorry I messed up earlier, presented another talk. Today instead we'll be talking right now about practical security in the brave new Kubernetes world with Alex Ivkin, who's a director of solutions at Eclypsium, a U.S. security company. His focus is on secure software deployment, including container orchestration, application security and firmware security. Alex has two decades of security integration experience, has presented at numerous security conferences, delivered training, holds a master's in computer science, co-authored the ISACA CSX-P certification, and climbs mountains in his spare time. So give a good round of applause to Alex. Thank you, thank you. I can hear all that applause, right? Yay! Thank you for joining. I'm excited to be here. It's really a great thing to be here, and I'm going to switch into the presentation right now so everybody can actually see the presentation instead of my six screens in the back. Now you should be able to see it. So thank you very much. As I'm sure you've seen before, a lot of presentations in the Kubernetes world start with nautical themes, so I'm no different. I'm actually going to put on my pirate hat. You don't see it right now, but you'll see it later. I'm not going to talk all in pirate speak, but at least I'm going to have my hat. So — have you ever been feeling overwhelmed? Have you started looking at the new cool technologies that are coming from the DevOps world and started thinking: hey, they're building so much stuff so quickly, there are so many new technologies coming on board, that I just feel I'm not following, not understanding them well enough? Well, it's true. Yes, they're building it, and they're building it really, really fast. And the question on the mind of everybody who's doing security is: is that a good thing? Is that a bad thing?
How do I start thinking about all the new cool technologies that my developers are picking up? And I'm here to tell you that it's not all that bad. There are actually really good things about it, and some bad things about it. This presentation is about the details of what it means to be running DevOps and Kubernetes and adopting the developer mindset. So there I am, Alex Ivkin. I come from the land of the trees, Portland, Oregon. You heard all about me, so we'll start. The modern application stack, if you think about it, doesn't consist of the usual platforms that you're used to. What it does consist of is the platform at the bottom — the server, or some serverless component, that it's actually running on, which also has an OS and a kernel on top of it. And then there starts the container ecosystem: the runtime, the orchestrator that puts all those containers together, and finally the application. In this talk I'm not going to cover the two lower pieces — even though I actually work with lower-level security, they're outside of scope. We're going to focus on the application, the orchestrator and the containers. So, everybody has heard about containers; some of you have been trying them, playing with them, maybe even hacking containers. And the first question that everybody has is: can containers help with my security? And the answer is yes — if you build them and deploy them and run them properly. I'll give you a couple of examples. Where they help is persistence. Containers you can wipe off, so it makes it harder to persist in a specific container. It's harder to do tooling or living off the land — there just aren't enough tools in an average container for you, or for somebody else, to do nefarious activities. Things like path traversal are obviously limited, and so is resource consumption, if you run things correctly. So there are certain things that containers do well. Now, if your app is bad, the container will never fix it.
So if your app had issues before you put it into a container, it's still going to suffer from those: injections and deserialization issues, runtime exploits, all the out-of-bounds errors and null dereferences and buffer overflows and race conditions and time-of-check-to-time-of-use bugs — all that fun stuff is still going to be there, still going to be crashing your app and making it easier to abuse. And more importantly, things like cross-site request forgery, or even server-side request forgery, become even more important, because now with those you get insight into not just the application, but the container that it's running on and the orchestrator that it's running on. And containers also add their own additional issues with the software supply chain — I'll cover that in a bit. So, all right, containers — we've talked about them. There are plenty of presentations on the security of containers and what's good and bad about them; good summaries exist. Now, how do you ship your containers? You really don't deliver your application in one single container — it just doesn't make sense. The whole idea of splitting up monolithic apps is that you have multiple small components that are independent of each other. So you ship it with something. And there are the old good friends of ours, tried and true, that we have experience with, that provide declarative deployments — where you tell them what you want and they figure out how to do it. They've been all good and so on, but they also have quite a few limitations. So, the new kid on the block — it's been around for several years now — is Kubernetes. Kubernetes, the captain there. And you can see from different reports that Kubernetes has been taking precedence over the other types of orchestrators on the market. It's a clear majority. Everybody's taking it on, everybody's trying to use it, trying to understand it, trying to hack it. So, what's good about it? Where does it help?
Well, it does help with security, and it helps natively. The biggest thing it helps with is that it allows your containers to live very short lifespans, meaning that if you deploy microservices that only need to run for less than 10 seconds, you do it with orchestrators. And that kills attempts to abuse that specific container within those 10 seconds, so your exposure is really limited. And likewise, you have a bunch of containers that are mixing and matching and living on different nodes, flipping between servers — so persistence becomes a headache if you're a red teamer. That's good. What's bad? Well, I said: can they help with security? Yes — if you know what you're doing. And that "if" is the biggest "if" I've seen in my security career. What I've seen working with different cluster installations and Kubernetes deployments is that misunderstanding, taking the defaults as if they were secure, or picking the wrong components is what hurts most deployments. Things I've seen — just to list them: bad authentication and access controls, security misconfigurations, people treating logging and monitoring as an afterthought, missing mutual TLS, service accounts and secrets distribution — all that kind of fun stuff. So, to understand the extent of the problem, and to see how we can deal with it, I want to walk you through the steps of what it takes to deploy a modern application on the Kubernetes platform. Obviously, you can't just throw a bunch of containers in. If you have the pieces of your application on the web talking to each other, you need some way to talk to them externally — so you have to have an ingress point, and you have to have a service mesh so they can communicate with each other. Once you've got that, you need to introduce a way of monitoring your performance. If it's a production application, you obviously don't want it to be slow.
You don't want it to go down accidentally, and you want to see what kinds of issues and errors it's having and catch some of them ahead of time. So you need to introduce a metric store, a log store, and tools to watch those metrics as they're being collected and to watch the logs. Then you start thinking about it and — yeah, you have to have authorization, you know, the pesky thing that security folks are pushing me to do. You have to make sure that access to the API is authorized, and that you're actually making decisions about who to provide data to based on authorization, not just on trust. Then you have to have a network controller, because, you know, there are other security folks who are saying: we've got to control who talks to our network, and how. And then, once you've got all that, you need to think about how to bring all those containers together and deploy them on the cluster. So you have to have a registry that contains all those various pieces. And once you have so many pieces, you need a way of managing them and deploying them in one swoop, or upgrading them as a unit, instead of trying to upgrade all the little pieces separately and getting yourself into a mess — so you need some sort of package manager for those pieces. And as if it weren't already complex enough, you also need TLS, because — yes, because encryption. And you also need a dashboard, to have a look at all this mess and have the existential thought of: why did I get myself into all this trouble? And the trouble is big. So, when you've collected all those components, when you've put it all together — this is the ecosystem. This is literally a screenshot of the cloud native landscape that now exists, that has grown around the Kubernetes ecosystem, that provides the different pieces of software, support, monitoring and management for your application on Kubernetes. It's literally that massive, and it's getting bigger and bigger every day.
So you can see how many different pieces of software it actually takes to run a fully microserviced, fully containerized application in production on Kubernetes. And with that come security issues. One of the most popular ingress points these days has plenty of those; they're being addressed, but as we all know, security needs patching, and this patching has happened slowly. Grafana, one of the display tools for Prometheus — the very popular metrics collection tool in Kubernetes — they both have security issues. And Envoy, the most popular mesh proxy for your Kubernetes pods, maybe alongside Linkerd, has its own issues. What I'm showing here are very critical components that your Kubernetes system relies on and that you have a specific interest in keeping secure. So, I'll give you an example. Let's say you're making an early choice and you want to decide: which Kubernetes network plugin do I want to use in my specific deployment? There are more than eight available. And if you pick, let's say, Weave Net, which is a partially commercial application, you're going to end up with no encryption. So you'll have to flip it on at some point later, and that will be a big headache. Those are easy mistakes to make at the beginning, when you deploy. Now, ingress controllers. All right, so what do I do to pick the right ingress controller for my Kubernetes? Well, you can see there's ingress-nginx, and — I'm not sure if you've heard — there's also nginx-ingress, and they're completely different. One comes from the Kubernetes community, the other one comes from NGINX. Completely different. You can see they're completely different in terms of how they're handling, or how they even think about, security. Pick one or the other and you might end up in a bit of trouble later on, talking to authentication or trying to support JWTs or tokens. Similarly for the other ones, too. Istio, thankfully, has been getting some momentum and some support from Google.
So it has a bit of support behind it, and a bit of help in getting it going. But it still needs work to be done, let me put it this way. Ambassador, Traefik — and what you see here is actually only six that I picked out of probably 12 to 15 different ingress controllers that are available right now for Kubernetes. I have a link at the bottom of the presentation that will send you to the whole list, if you're so inclined. Well, so you've thought about this, and maybe the thought in your head is: well, I know these pieces are difficult to pick, but maybe I can plan time around picking those different components and putting them together, architect it beforehand. But the very basic decision that you have to make is how to run Kubernetes at all. Do I run it myself on my own bare metal servers, do I run it in the cloud on their servers, or do I trust somebody else to run it — Google, Amazon, Azure; even DigitalOcean has their own offering. So this is where you run into trouble, too. If you've seen previous presentations about container security, you probably know by now that Docker has really good defaults from a security standpoint, so that when you run an application in Docker with defaults, it's going to protect you from a lot of things. And I've listed those here. dockerd comes with AppArmor enabled by default. There's a seccomp profile that is filtering a lot of calls — it's blocking some 56 syscalls — and that's all good. Now, what you probably didn't know is that if you run Minikube, which is one of the ways for a developer to run Kubernetes really quickly on their own laptop, you don't get any of that. You don't get AppArmor, you don't get seccomp profiles, you don't block any syscalls. So that means you can basically hose your own system really, really fast. Maybe not a big deal — you probably break your system periodically anyway, so whatever.
But there are others, like k3s, that promise more production-ready deployments. They still don't adopt the good defaults. They're still not filtering with seccomp profiles; they still allow a little more than the normal set of syscalls to go through. And to give you an example of what a non-blocked syscall can do: if you don't block unshare, it literally takes one command, unshare -r, inside a non-privileged container to elevate to root, because it will give you a user namespace where you are mapped to root inside the container — and then you're root. So if you're basing your container security model on running as non-root users, your whole security is blown out of the water just because you didn't block one syscall. Now, when you go to cloud providers like GCP or AWS or Azure, you can see that even though they're not blocking as many syscalls as you see in this lower column, what I actually realized after investigating and playing around with those Kubernetes systems is that they block those syscalls at the node level. They have their own hardened operating systems that are denying the syscalls — not in Kubernetes, but in the OS and the kernel themselves. So this is good. What I'm calling out here is that, generally, your cloud providers are doing better out of the box than you would yourself. All right. Well, that brings us to the most important point. Man, this is so much crap. There are just a lot of things that can go wrong with Kubernetes, and it's a big "if" when I ask myself whether I can actually run and execute all those microservices successfully. So what do I do? Well, I'm here to give you several pieces of real-world, practical advice that I've picked up over the past couple of years deploying stuff. A very easy first step you can take to make sure you deploy correctly is to focus on not running images that you don't trust.
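Since this part of the talk turns on how seccomp allow-lists decide a syscall's fate, here is a minimal sketch of that decision logic. The profile below is a tiny hand-written stand-in, not Docker's real default profile (which allows several hundred syscalls); the point is only that anything absent from the allow list — like unshare — falls through to the blocking default action.

```python
# Toy stand-in for a Docker-style seccomp profile. Real profiles use the
# same shape: a defaultAction plus rules naming syscalls and their action.
docker_style_profile = {
    "defaultAction": "SCMP_ACT_ERRNO",   # anything unlisted is denied
    "syscalls": [
        {"names": ["read", "write", "open", "close", "clone"],
         "action": "SCMP_ACT_ALLOW"},
    ],
}

def syscall_allowed(profile: dict, name: str) -> bool:
    """Return True if the profile would permit the syscall."""
    for rule in profile.get("syscalls", []):
        if name in rule.get("names", []):
            return rule["action"] == "SCMP_ACT_ALLOW"
    # Not matched by any rule: fall back to the default action.
    return profile.get("defaultAction") == "SCMP_ACT_ALLOW"

for call in ("read", "unshare"):
    verdict = "allowed" if syscall_allowed(docker_style_profile, call) else "blocked"
    print(f"{call}: {verdict}")
```

Run a cluster without such a profile and unshare sails through; run it with Docker-style defaults and the call is denied before it ever reaches the kernel.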
Essentially, you either build your own images from the beginning, or — if you can't, or if your developers don't want to — you rebuild the images from a Dockerfile that you've reconstructed. What that means: you may know that Docker keeps the whole build history in the image, and you can actually run a tool — I have a link there, a tool that I wrote — that allows you to recover the Dockerfile from an existing image. Once you recover that Dockerfile, building the image as you want it is as easy as running docker build -f. The reason I'm suggesting this is that the Docker history is nothing more than a set of comments inside the Docker tar file itself. Anybody can fake that history. So you could have a history that says: oh, I'm just deploying this file, or copying this file, or maybe I'm running apt-get to fetch those dependencies — but in fact the layers inside your Docker container contain malicious stuff. So the easiest thing to do is either not trust the ones you get off the internet, or recover the Dockerfile, build the image yourself, tag it with your own tags — and at least you know that your supply chain is somewhat better. Never run privileged containers or share volumes with the node. That's the easiest way to get yourself owned. Privileged containers allow you to deploy things like kernel modules into your host kernel. And volumes shared with the node, outside of the container, are generally a very bad idea. Now, unfortunately, there's not always a way of avoiding this. For example, even GitLab, when it runs its CI/CD pipeline — the runner they decided to deploy on Kubernetes requires privileged containers, and it requires them because it wants to start other containers, and there's really no good way to start a container within a container without having a privileged container. But if you can avoid doing that, avoid it at all costs. That's the easy way to get owned.
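As a sketch of what such a recovery tool does — this is not the speaker's tool, just an illustration of the idea — here is how the history entries recorded in an image map back to approximate Dockerfile lines. The sample history is hand-written; real entries come out of `docker history` or `docker inspect`, and, as noted above, anyone can fake them, which is exactly why you rebuild and re-tag yourself.

```python
# Hypothetical sample of image history entries, shaped like the
# "created_by" strings Docker records for each layer.
sample_history = [
    {"created_by": "/bin/sh -c #(nop) ADD file:abc123 in /"},
    {"created_by": "/bin/sh -c #(nop)  ENV PATH=/usr/local/bin:$PATH"},
    {"created_by": "/bin/sh -c apt-get update && apt-get install -y curl"},
    {"created_by": '/bin/sh -c #(nop)  CMD ["bash"]'},
]

def history_to_dockerfile(history: list) -> list:
    """Turn recorded history entries back into approximate Dockerfile lines."""
    lines = []
    for entry in history:
        cmd = entry["created_by"]
        if "#(nop)" in cmd:
            # Metadata instructions (ADD, ENV, CMD...) are stored verbatim
            # after the #(nop) marker.
            lines.append(cmd.split("#(nop)", 1)[1].strip())
        else:
            # Everything else was a shell step, i.e. a RUN instruction.
            lines.append("RUN " + cmd.replace("/bin/sh -c", "", 1).strip())
    return lines

for line in history_to_dockerfile(sample_history):
    print(line)
```

Feeding it real history gives you a Dockerfile you can audit, fix up, and rebuild with `docker build -f` under your own tags.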
With shared volumes, obviously, if you get yourself access to the files that live on the node itself, then you can change those files and make yourself persistent — hook into the underlying node. Stash all your secrets, all your sensitive data, into Kubernetes secrets. That's really a basic step, so you can later ship them out somewhere, encrypt them somewhere, and not keep them in your application. I hope that's fairly simple. Monitor for rogue containers. That should be pretty obvious. The way people got their Kubernetes clusters abused two, three years ago was by somebody figuring out that they had an open API, and that open API allowed somebody else to push a command to run a container. What happens then is that Kubernetes, or dockerd, will go out, download that image and run it for somebody else. That's how people had their clusters mining Monero coins for quite some time — and that's still happening. There are still plenty of open dockerd APIs and Kubernetes APIs out on the internet where somebody will just push containers to you and get you happily producing coins for them. Which is arguably not the most terrible thing that could happen to you, but it's still nothing you want on your cluster. So check the clusters for rogue containers periodically; have a monitoring system. If you're running in the cloud, secure the metadata. I cut the demo out of this presentation, but my demo is essentially going to a cluster and picking up the metadata. The metadata in the cloud for Kubernetes clusters doesn't just contain information about the nodes — it contains information about the cluster itself. You can get the service accounts out of the metadata and then elevate your privileges very easily into what could essentially be Kubernetes administrator. If you don't secure your metadata, you're running a really high risk of somebody elevating their privileges to administrator.
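The privileged-container and shared-volume advice above can be turned into a simple pre-deploy check. This is a hypothetical sketch, not a real admission controller; the pod dict just mirrors the shape of a Kubernetes pod spec so the risky fields have somewhere to live.

```python
# Hand-written sample pod exhibiting both risky settings discussed above.
risky_pod = {
    "spec": {
        "containers": [
            {"name": "runner",
             "securityContext": {"privileged": True}},
        ],
        "volumes": [
            {"name": "host-root", "hostPath": {"path": "/"}},
        ],
    }
}

def audit_pod(pod: dict) -> list:
    """Flag privileged containers and volumes shared with the node."""
    findings = []
    spec = pod.get("spec", {})
    for c in spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            findings.append(f"privileged container: {c['name']}")
    for v in spec.get("volumes", []):
        if "hostPath" in v:
            findings.append(f"hostPath volume: {v['hostPath']['path']}")
    return findings

print(audit_pod(risky_pod))
```

Wiring a check like this into CI, before manifests ever reach the cluster, is a cheap way to catch the "easy way to get owned" cases.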
Run Container-Optimized OS if you're on Google, or Bottlerocket if you're on AWS — some operating system that's more secure than the default. That should be natural to you; it actually takes really little to no effort, just picking the image that you use during the cloud deployment. On bare metal, you need to run a hardened operating system, like I showed before. Orchestrators themselves don't necessarily provide you with the same defaults that Docker does; you have to protect yourself some other way, somewhere — take this defense-in-depth approach. Unfortunately, as many of you have probably heard, CoreOS is dead. Flatcar is supposed to be its replacement. I haven't tried it; I'm not quite sure how good it is. Maybe it's good — I haven't tried it, so I can't really recommend it or not recommend it. But you can always get by with a minimal Debian installation that only has a CRI — dockerd or something — on it. In the same vein, if you're looking to go deeper, have an Alpine that is really, really limited to only running containers. And obviously, use RBAC. Everybody should be doing that. Normal mode. Now, once you're past the easy things and you're feeling fairly comfortable, look at building images from scratch, or use distroless images. These are really cool — I've been using them for a while, literally no issues with them; I really like how they work. Pod security policies. Have multiple registries, and authenticate and authorize access into those registries. Don't keep your production images mixed in with your development images, obviously, and have developers accidentally do something not very good to your production infrastructure. Remap root to non-root. It's more difficult to use and it breaks stuff, but if you're diligent, you can get fairly far with it. Obviously, upgrade all the master nodes and the worker nodes — Kubernetes iterates quite fast, and they are making a lot of security improvements, a lot of security fixes, to the core of Kubernetes.
So please make sure you update. Aim for zero trust — that's more of an advanced topic. And then the hard mode is when you actually start doing scans of images. Notice I put this in the hard mode not because it's hard to do, but because those tools, in my experience, are incredibly fuzzy. They're just not really tuned for the way containers operate. Especially the software composition analysis ones — they will list hundreds and thousands of libraries that you're never using in the container, and you'll be scratching your head thinking: why? what do I do? So maybe when you get to the hard mode, start paying attention to those — or wait until they mature. Pod admission policies: cool things, they really help with your security. If you can sign your images, sign them; that will help tremendously with the supply chain for Docker. Don't mix sensitive workloads. Here, just remember that namespaces don't provide what's called hard multi-tenancy, meaning you can't separate different data very successfully using namespaces. There are still nooks and crannies that let people jump from one namespace to another. So just don't mix: if you really need multi-tenancy, run multiple clusters; don't rely on namespaces. Final things: there's plenty of information now available on the internet where you can go and start digging deeper into the things I've listed here, understanding what they are and how you can protect Kubernetes environments. There's a decent set of companies that have sprung up and publish white papers and presentations — some better, some worse, but if you can pick one or two interesting nuggets from their presentations, I think that's good enough. And the final slide I want to leave you with: it's complex. And guys, it's going to get more and more complex, too. There's really no end in sight to simplifying this. It's going to have more components built on top of more components.
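One way to picture what the image-signing and admission-policy advice above buys you is digest pinning. The sketch below is a toy: the "image" is just bytes and the "admission check" a hash comparison, whereas real signing involves keys, registries and trust metadata — but the tamper-detection idea underneath is the same.

```python
import hashlib

# At build time, record the digest of the image you approved.
pinned_digest = hashlib.sha256(b"my-approved-image-contents").hexdigest()

def admit(image_bytes: bytes, pinned: str) -> bool:
    """Admit the workload only if the image hashes to the pinned digest."""
    return hashlib.sha256(image_bytes).hexdigest() == pinned

# At deploy time, anything that doesn't match the pin is rejected.
print(admit(b"my-approved-image-contents", pinned_digest))  # True
print(admit(b"tampered-image-contents", pinned_digest))     # False
```

Pinning by digest (`image@sha256:...`) instead of by mutable tag means a swapped image in the registry simply fails the check, which is the heart of what an admission policy enforces.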
And if you're a blue teamer, think about keeping complexity in check. Think about limiting the number of, let's say, ingress controllers — just standardize on one. Don't let developers pick whatever they want. Likewise for CNIs or CRIs — just limit them. That will make your life a lot easier. If you're a red teamer: these days it's the errors in the configuration of Kubernetes and containers that are going to lead to a lot of breaches. It's not so much zero days in your Istio and Envoy deployments, although those happen too. So just look for a setting that somebody forgot to set on the access to your secrets. And if somebody is doing, let's say, Let's Encrypt through DNS, and they store the secrets for accessing the DNS management interface in the container, then you can own the DNS. And what's better than owning a DNS server, right? And for everybody else: make it secure by default. That's the only thing that's really going to help us in the long term, with Kubernetes becoming so popular. Don't let folks pick insecure defaults. With that, may your journey be fruitful and happy. I believe that Kubernetes is here to stay for a long time — we just need to make sure we steer it correctly. Thank you very much. Okay, we're back. Thanks, Alex, for this great talk. Kubernetes is such a good technology, and it's going to become more and more used. And I liked the steps: easy, normal and hard modes. It really helps people start with their low security and grow all the way up. Let's go to the questions. We have one that was voted up quite a lot: do you have any resources for someone who would like to start doing some offensive Kubernetes assessments, like abusing misconfigurations or escaping containers? Yeah. There's a system that a friend of mine put together called Bust-a-Kube. Essentially, it's a Kubernetes cluster that you can download as a set of VirtualBox VMs, deploy, and start playing around with, searching for...
It's like your own CTF for Kubernetes. There have actually been CTFs, too, that you can participate in to get a sense of what it takes to investigate. And there are plenty of tools now available that allow you to automate some of that discovery. I don't quite have a link to somewhere you'd find something like an "awesome Kubernetes security" list, but I'm actually thinking of putting something together. So maybe watch my — not Docker Hub — my GitHub, and I'll see if I can put those resources together in one place. That'd be nice. If you want to add your GitHub account, too, in the Twitch chat, people can link to it. Okay, second question: what about monitoring the K8s logs? What are some of the things to look for to detect malicious behavior in your cluster? That's a good question. Right now, most of the log tooling — Kibana and everything — is really geared towards error monitoring. If you can get logs from your etcd, then you can monitor for things like new Docker containers being added, or configuration being added or changed on Kubernetes itself. I guess my really short answer is that I'm not entirely sure of any open source tool that would let you do it. There are several commercial ones available — I don't want to advertise anybody, but there's Aqua Security and Twistlock, and they all promise to do those things. I frankly have not tried them; I'm really just relying on monitoring etcd myself. Okay. Is there any danger of leaving sensitive data in intermediary build containers? Oh yeah, absolutely. As you know, containers are built with layers in mind, and every new layer just records what data changed relative to the previous layer. When you delete a file in a later layer, it's only masked, not removed. I have a whole separate workshop on Docker security — if you're interested, go to the GitHub, there's a workshop on container security. That's where I show that if you build something in at an intermediate step, it's always there, even if you delete it.
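The layer behavior described above can be sketched with plain tar files, since image layers are just tars and a deletion is recorded as a ".wh." whiteout entry in a later layer rather than by removing anything. A toy model, not tied to any real image:

```python
import io
import tarfile

def make_layer(files: dict) -> bytes:
    """Build an in-memory tar archive standing in for one image layer."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Layer 1 adds a secret; layer 2 "deletes" it with a whiteout marker.
layer1 = make_layer({"app/secret.key": b"TOP-SECRET"})
layer2 = make_layer({"app/.wh.secret.key": b""})

def extract(layer: bytes, name: str) -> bytes:
    """An attacker ignores whiteouts and reads any layer directly."""
    with tarfile.open(fileobj=io.BytesIO(layer)) as tar:
        return tar.extractfile(name).read()

print(extract(layer1, "app/secret.key"))  # b'TOP-SECRET' — still recoverable
```

The merged filesystem a running container sees hides the file, but anyone holding the image can open the earlier layer and read the secret back out — which is why nothing secret should ever enter a build step.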
I show you how you can recover that deleted file just by looking at the internals of how the container is built. So remember: never put anything secret into an image, even at build time. Use build args or use environment variables, but don't ever bake anything secret into the image. Would you consider giving that workshop next year at NorthSec? Let's see. Philip is asking: do you have any recommended tools to assess the security of containers and pods, insecure configurations and versions? There are several out there on the market. InGuardians, who created the Bust-a-Kube training platform, also have what they call Peirates. That's one of the systems I'm familiar with that basically evaluates the setup, the defaults, the internals of your clusters. That's the one I have experience with, and you can run that, but there are probably five or six more, and I can probably dig up the links for those, too, that will let you do an assessment. They're open source, they're fairly okay. They're not full coverage, but they give you a sense of where you stand, at least. Can you share the links to these tools, even the ones that aren't? Yeah. Can you link to any resources to learn more about container build history and security? Yes, I can. Please link everything. Next: up to what point should there be separation for multi-tenancy? Can you say that question again? I don't see it. Sorry: up to what point should there be separation for multi-tenancy? I'm not sure I actually understand the question, but let me try to address multi-tenancy. As I said, Kubernetes namespaces don't provide hard multi-tenancy, in the sense that if you have multiple customers, or customer data, you really shouldn't be running it on the same cluster — and that's because the controls are not there, and it's been proven almost academically that it's not possible to introduce those controls, at least without sacrificing a lot of things.
Now, for soft multi-tenancy — meaning, hey, I don't have multiple customers, but I have multiple groups on the same Kubernetes cluster, and it's okay if it sometimes leaks a little bit of data — yeah, you can do it. If it's within your organization and you're okay with somewhat trusting the groups you have, then yes, you can use namespaces for that. Without sacrificing security? If you adopt a proper threat model. If your threat model says my groups are under decent control and I can trust them not to knowingly run malicious stuff, then yes, you can provide soft multi-tenancy with namespaces. The opposite situation is when you're running your clients' data, which you can't trust at all — then you really can't do it. Okay. Between more securitization — securitization? is that a word? — and less complexity, which would you prioritize? So, securitization versus less complexity. I think I understand, and I always prioritize less complexity. It brings more security with it. So if you can be in the meetings where developers are thinking about the points of ingress, egress, the storage controllers, et cetera, that they're putting into the cluster — before they make the choice — then you have at least a view into how they're making that choice. Because a lot of times they're not making that choice because of security; they're making it because of convenience or speed or something like that. And if you can be there and participate in making that choice, you will provide them with a valuable service — and hopefully limit the number of choices they make to something that's manageable. True. Okay, we have one final question: what are the weak points of full solutions like Pivotal PKS? Which one? Pivotal? Yeah, PKS. Can you read that again? I'm not sure. I would like to, but the person edited the question, so let me see it again. Give me a second. So the question is: what are the weak points of full solutions like Pivotal PKS?
Pivotal PKS. Yeah, yeah, Pivotal. I don't have direct experience with Pivotal, but I've actually worked with a competitor of theirs that tries to push PKI on top of everything. The reality is, PKI is never a full solution. PKI does decently for authentication, maybe some authorization, and obviously the transport layer encryption, but it doesn't really address the questions of: how do I do my end-user authentication, how do I provide the different controls where — if you think about zero trust controls — I authenticate the endpoint without trusting the endpoint implicitly? A lot of systems — Istio especially, if you consider it now in Kubernetes — provide mutual TLS already out of the box. It's clearly a good component to have, but it's not the only component; it only addresses part of the issue. Okay. Well, I think we're out of time. Thank you very much, Alex. A round of applause again for giving us this talk — it was really interesting. Coming up next is the DMA attacks talk, in 10 minutes, so everybody can go for a coffee and maté or...