Alright, we're going to go ahead and get started. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, the do's and don'ts for securing containers and cloud-native technologies. I'm Suresh Narvade, platform engineer at Uswitch and a cloud-native ambassador. I'll be moderating today's webinar. We'd like to welcome our presenters today, Kavya Pearlman, global security strategist at Wallarm, and Ty Sbano, chief information security officer at Sisense. A few housekeeping items before we get started. During the webinar, you're not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants and presenters. With that, I'll hand it over to Kavya and Ty to kick off today's presentation. Thank you, Suresh. Hello everybody, happy Halloween and welcome to today's session. Who doesn't know CNCF? Most of us who live in the cloud-native world know very well that each year the Cloud Native Computing Foundation organizes its flagship event, and it will be here in less than a month. This year it's being held in sunny San Diego, California, which should still be sunny because it will only be November. If you have never been to KubeCon + CloudNativeCon, this is the perfect event to attend, where the community comes together in real life to further their education and advance cloud-native computing. You can go to kubecon.io for more information and lock in your ticket before it sells out. I know I'll be there to represent Wallarm.
So if you decide to go, please stop by the Wallarm booth and find me, just to talk about stuff, have an in-depth discussion, or simply hang out. About CNCF: CNCF also has its first Kubernetes Forums coming up in Seoul, Korea, December 9 to 10, and in Sydney, Australia, December 12 to 13. Kubernetes Forums connect international and local cloud-native experts, adopters, developers, and end users in global cities to enable face-to-face collaboration and deliver a rich educational experience. So engage with the leaders of Kubernetes and other CNCF-hosted projects and help set the direction for the cloud-native ecosystem. Kubernetes Forums have both a beginner and an advanced track; about half of the speakers are international experts and half are from the local area. Now that I have that out of the way, a little bit about myself. As I mentioned, I'm your host, Kavya Pearlman, cybersecurity strategist at Wallarm, as was just mentioned. Due to my work and contribution to the security industry, I've won several awards, and I feel a sense of pride when security folks really think of me as the cyber guardian. I'm also the CEO of the XR Safety Initiative, a nonprofit organization dedicated to helping build safe immersive environments via virtual reality, augmented reality, and mixed reality. My quest to find the best security solution to protect these brave new worlds is what really brought me to Wallarm. Wallarm's solutions plug right into the CI/CD pipeline and utilize artificial intelligence and machine learning to help secure APIs and cloud environments: exactly what we need to secure modern infrastructure. Prior to this, I proudly protected two virtual economies at Linden Lab, and in 2016, very interesting times during the U.S. presidential election, I advised Facebook on their third-party security risk. It was quite a fun and, of course, challenging experience. So enough about me.
Now I want to turn it over to my very good friend and amazing ally, Ty Sbano, who is not only a great professional but an amazing person who has been my mentor at times whenever I needed it. So, Ty, please take it away. Thank you for the kind words. Yeah, my name is Ty Sbano. I'm a practitioner in the information security trade. I'm one of those rare folks that came out of undergrad with a university focus on information security. I've had the opportunity to also get my master's in the same subject matter, and my entire career has been very focused on application and product security programs at a bunch of those logos there. So I've done this a few times. Just in the past year and a half, I joined a wonderful company called Periscope Data. My goal was to take us through an acquisition or an IPO. Luckily, we went through an acquisition, and I've recently been promoted to chief information security officer at a company called Sisense. We are a data analytics platform. Every couple of months I come up for air and like to talk about things, and today we're going to be talking about container security. I've had the chance over the past, I would say, seven years to watch a lot of technology curves take place, where before it was really like, how do we get to the cloud, or how do we do virtualization. I want to start at a high level so we have a base understanding as a group. I see about 78 of us in here, so thank you for taking the time to join us. And please remember, at the end we'll provide our contact info. If you have questions you don't want to ask during the Q&A, or maybe you just want to chat briefly about stuff in your life, your career, or anything else, I'm always happy to do that as well. So let's start off with an overview of containers. We're all used to the idea of an operating system sitting there on a machine.
Now, that machine could sit in your data center, in a co-located data center, under your desk on a really powerful PC, or maybe even on your laptop these days. There's also this other layer, which is the hypervisor. In the security world, this is one of those scary things, because a lot of vulnerabilities are discovered here. If you have the underlying OS, the guest virtual machines, and the host OS itself, you start to have multiple tiers of vulnerabilities. The hypervisor is one of those things that makes those of us in the security space, especially in app and product security, a little bit nervous sometimes. So when it comes to security, containers make life so much easier. I'm just a bigger fan overall, and it's made the cultural experience of embedding security as part of your DevOps capabilities seamless. People just get it, as opposed to having to fight for every patch and every piece of remediation. With virtual machines, you have those individual OSes, which are typically heavy-handed systems that have to stay up all the time, and it just takes a little bit more to manage from a patch and overall cost standpoint. Containerization is really where we start to focus our language, and you're going to hear these buzzwords, and we'll get more into that. When it comes to containerization, you see it's laid out in a similar format, the biggest difference being the container engine. The modularized nature of each container, or having pods, means there's more unity across the overall environment and the container itself, so it allows for a lot more flexibility and things like elasticity. If you have a large load, say so many customers trying to get into your web application to check out and buy the newest cool virtual reality headset, with containers you can actually scale up quite rapidly. Virtual machines are a different story.
Sometimes you have to call someone and they have to create new systems or increase memory; it's just a little bit more manual. So let's jump to monolith versus microservice, more of the language that you'll hear in a consistent way, especially if your company is transforming. If you're going into an organization that is, say, 10, 15, maybe 20 years old, there's usually an application they call a monolith, because it's everything. It is everything sitting on that virtual machine, which could be in a data center, and all of your coupling when it comes to code is really tight. What that means is your development lifecycle isn't like a modern DevOps or agile organization, where maybe you're releasing twice a day or a hundred times a day. This is more in the sentiment of maybe releasing monthly, because of how many components get touched and how much business logic can get affected as you're making these edits and adjustments. It's really critical to understand that once we start shifting to microservices, and this is where containers really help, you have smaller bits of code and smaller, looser coupling. That way, when you make changes, your confidence goes through the roof, because you're making all these tiny changes to tiny dedicated services. At the end of the day, you still see the user interface, and that's really all you as a customer or a client need to interact with. So if it's a mobile banking application, you don't need to get into this one big app. You're actually maybe just using a mobile UI that's touching maybe 15, 20, 30 APIs backed by various services to present a beautiful experience that's a lot more repeatable and without fragility. So we're talking containers. What's up with that? How do they work? A couple of other buzzwords pop up for me, especially from my time focused more on app and network security.
On the network side, you're going to hear terminology, especially when it comes to data centers, like north-south. Based on that previous graphic, picture that yellow circle being all customers connecting through their mobile app, their phone, their laptop, their desktop, whatever it could be. That's how they're getting access; that's the north-south traffic coming from outside the overall environment. East-west, that left-right sort of feel, is within the environment. In the container world, it's just more your infrastructure. In the data center, picture those brick-and-mortar walls hosting all of your infrastructure, and it's communicating between each other. Cool. So jumping forward: Kubernetes, Kube, K8s. I mean, there are so many ways to say this. I'm just of the mindset that whatever your organization or your team members or your lead architect chooses to say, say it that way and life will just be easier. I did pull out the phonetic spelling here, and within this presentation today I think I will say Kubernetes quite a bit. Kubernetes overall, you can read the definition there. One thing that's kind of cool is the origin of the name, which is from the Greek, meaning helmsman or pilot. Since it's an open-source container orchestration engine, there's a lot of intent behind it. Definitely worth reading more; Greek etymology is always fun. With Kubernetes, there will be a lot to read. The good thing is the open-source nature of it: so many people are trying it, so many people are testing it, so many people are creating patterns and services and capabilities. It's just a very rich community that allows you to go fast. So let's talk a little bit more about security. Where do we start when it comes to containerization? As someone that's rolled out, I don't even know how many AppSec programs at this point, more than five.
One of the things I always think about is: how do I get my arms around what is going on? When it comes to containers, it's no different. If your engineering or DevOps team has already gotten started, or if you've started a new job, you need to get a sense of the inventory. What are these containers doing? Seven years ago, containers were mostly for local dev environments, just to rapidly spin up, test, and do something. Now we're seeing a lot more production loads that are purely container-based, and it's really, really exciting. It's favorable, especially if you join a company that has already set up a containerized production environment and has built security natively into the process. But maybe they don't know where all their services are. For that, we need to look at opportunities. I can either ask a thousand questions, or I can look at a thing called a service mesh. The service mesh is basically a web of encrypted traffic built on high-performance sidecars. You'll see or hear the names Istio, Linkerd, Envoy. These are technologies for sidecars that enable more capability, like traffic management, firewalls, load balancing, monitoring, and overall policies. The cool thing is you don't have to make microservice changes; everything's using the service mesh. So as a security person, or as an engineer trying to ramp up quickly, you can also discover all those services that are out there. While you won't get the context immediately, we can take that next step together. One of the things I always recommend is starting high-level with a risk-ranking process, and never going 20 or 30 questions deep trying to get the intent of what something is. Start very high-level. I'm a big fan of the CIA triad. Confidentiality: how sensitive is the information? Who needs to know it? Who shouldn't have access to it? Integrity: who shouldn't be able to edit or modify it?
When should we be looking at hashes for, say, libraries or modules, to make sure no one's toyed with the thing we're pulling down from, say, npm or another repo or source? And availability: should that thing be presented outside of our firewalls to the internet? Should it just be internal to engineers? How are we controlling that access? One thing I think the CIA triad is always missing in the security realm, and you can go to more complex models like the Parkerian hexad, which has six attributes, is this one thing I like to add: reputation. So you're adding business logic into how much we care. If it is internet-facing, is it branded? Is it a third party? Is it just some service that you use that your data flows through? Thinking about these pieces of contextual information, we can start to get an idea of high, medium, and low for the things we care about. As a security person, you always start high and work your way down. You don't want to go look at, say, the internal time server that is just the default NTP service everyone stands up. You're going to want to look at your authentication service. How does that work? What are the fail-back mechanisms? Risk-ranking that process just makes your life much more intelligent and in tune with the business, so that people see you as someone trying to get in there and help. Cool. One of the great things with Kubernetes and a lot of containerization is that there are so many great secure defaults. I'm going to give you the 101 flavor; Kavya is going to get into the 102 with configurations. I just want to bring you in with some terminology and give you the idea of it, and she'll give you more of the code snippets. I always recommend, since Kubernetes does a great job with documentation, going to take a look out there. I've included the links directly here, which will be available after this chat as well.
So, the namespace. This is a really easy way to divide resources. One of the first steps is to get rid of all the default namespaces, because if you are compromised, a lot of scripts, botnets, and things like crypto miners will get in and try to attack those default resources immediately. When we get rid of those and customize just for our own work, it allows for a seamless sort of experience. One thing to work through with architecture is: what is our namespace convention, and how many namespaces do we want? If you start with one and only ever have one, that is not the most effective way to use namespacing. So take the time to read the documentation and figure out a plan. The next piece is the network policy. Pretty straightforward: if you know security groups in AWS, consider this the same thing. It's just defining directions, what goes in and what comes out. One of the best benefits here is that it's secure by default, so policies fail closed. In the reality of technology, we are seeing more of a consistent pattern here. What that means is if something goes wrong, maybe a port is going to go down. When we talk about infosec in the mindset of, say, physical security: if you're in a brick-and-mortar building, you don't want your fire escapes to fail closed. If there's an emergency or a fire, you want them to fail open so people can get out and escape. In network security, that's not always the case, because what's going out could be the hacker with your data, or someone accessing and logging in as admin. It could be a number of things. So controlling your ingress and egress is very important. The next piece is the pod security policy. This is really just about establishing minimum baselines, and if you haven't taken a look at this in your org, or in the context of maybe a project coming up, I think it's always a great place to start when we're talking about a new service.
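To make the fail-closed network policy idea concrete, here is a minimal sketch of a default-deny NetworkPolicy. The namespace name is invented for illustration; in practice you would layer explicit allow policies on top of this one.

```yaml
# A default-deny policy: the empty podSelector matches every pod in the
# namespace, and listing both policy types with no allow rules means all
# ingress and egress traffic is blocked until explicit policies permit it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments   # illustrative namespace, not from the talk
spec:
  podSelector: {}       # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Note that this only takes effect if the cluster's network plugin actually enforces NetworkPolicy objects.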
By default, this should be the minimum criteria for any service to be accepted within the larger container infrastructure. If it doesn't have, say, the TLS cert, or maybe it doesn't have the configuration for a clean network policy, it doesn't get in. These are things that allow you to move much faster, because we're secure by default once again. The next piece is the security context. One of the big things I really love about containerization is the architectural aspect of read-only pods. While you may have a certain area for writing logs, a lot of those log aggregates go to third parties now, and not a lot actually happens on the system besides the compute, the execution. So let's say we have a bad day: we didn't disable API access because a brand-new engineer was able to deploy or bypass a process. Now if this pod is taken over, or a botnet is logging in, we don't have to stress as much, because it fundamentally can't do anything in that read-only context. It can't write a script. It can't pull too much information, and it can't read much locally, depending on the permissions it has; we'll get more into RBAC, role-based access controls, and permissions. Overall, when you have the security context defined, it is beneficial to having that whole thing contained. One of the key things I always recommend here is just disabling root access. That seems to save a lot of us in the industry. If you don't, and you run your containers in privileged mode, the risk you start to take is that you're disabling the underlying Docker security controls, which can allow code to run on the underlying system. That's where things get a little scary, and I definitely recommend avoiding that type of architectural practice. Cool. So let's jump forward. Up next is AppArmor. This is something that I've had the chance to learn specifically in the past year and a half.
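A sketch of what that read-only, non-root security context can look like in a pod spec; the pod name, image, and mount path are all placeholders, not anything from the talk.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                        # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true                  # refuse to start if the image runs as UID 0
        readOnlyRootFilesystem: true        # the read-only pod idea from above
        allowPrivilegeEscalation: false     # no privilege gain via setuid, etc.
        capabilities:
          drop: ["ALL"]                     # drop every Linux capability
      volumeMounts:
        - name: scratch
          mountPath: /tmp                   # the only writable path, for temp files
  volumes:
    - name: scratch
      emptyDir: {}
```

With this in place, even a compromised process cannot write a script to disk outside the scratch volume or escalate its privileges.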
I'm such a huge fan, because it really allows for the ability to reduce your attack surface even further. Yes, you have read-only pods, but now you can control those pods even further, because there are going to be times and services that need to do things. So let's talk about a sensitive scenario. Many organizations are going to have web applications that take, say, a file upload: a CSV, some sort of piece of data from the customer. When you take that file, it has to go to a data source. On the back end of Kubernetes, there is etcd, but when it comes to what database or what source or where you're going to store it, sometimes it's going to be temporary on your system. So have that well-defined read-write, or put it through another microservice that takes the information and, say, serializes it to pull out just the relevant data, as opposed to taking something that could be a script and executing it locally wherever it chooses to, which is higher risk. In the banking world, I can tell you, with loans and file sharing, this is always a stressful thing when you aren't natively blocking, say, extensions like .bat, .dll, or executables. Those are basic controls that should be up in the application, but with AppArmor this is lower level, where you can reduce and protect the attack surface. Once again, definitely worth reading; I'm a huge fan here. Next, let's talk a little bit about disabling default services. I think this is just conceptually there: hardening your cluster, and there's a great article there. I don't think I have to describe too much of least privilege, but when it comes to what permissions you need, or your pod needs, never give them too much. It's as simple as that. The next step is certificate management.
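In Kubernetes versions current at the time of this talk, an AppArmor profile is attached per container through a beta annotation. A minimal sketch follows; the pod, image, and profile names are assumptions, and the profile itself would have to be loaded on every node before the pod can schedule there.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: upload-handler            # hypothetical file-upload service
  annotations:
    # Key format: container.apparmor.security.beta.kubernetes.io/<container-name>
    # "localhost/..." refers to a profile loaded on the node, outside Kubernetes.
    container.apparmor.security.beta.kubernetes.io/app: localhost/deny-exec-uploads
spec:
  containers:
    - name: app
      image: registry.example.com/upload-handler:1.0   # placeholder image
```

If the named profile is not loaded on the node, the pod is rejected, which itself is a useful fail-closed behavior.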
I think this is something that, as a security professional, may take a little bit to get comfortable with. Unless your internal team is beyond the startup stage and can pay for a certificate authority, or has some sort of legitimate certificate and management of it, which could cost a couple hundred to a couple thousand dollars depending on the nature of it, most Kubernetes clusters are using the self-signed kubeadm certificates. I'm going to be honest: I don't see as much risk here. Yes, the self-signed nature can be something that could be compromised because you don't have a third-party CA, but we could also digress and talk about some of the risks with third-party CAs and some of the issues with SSL and TLS. Overall, I think certificate management, and going back to that pod security practice of saying you must have this cert to talk to anything because everything's going to be encrypted, is really beneficial. The next step is backups. By default, I mentioned etcd; that's where we keep a lot of the state and some of the storage capabilities. If you end up not having encrypted backups, there is a capability with Secrets resources in etcd that can help protect them, but it's just easier to encrypt your backups. The reason being, we often don't think about the secrets that could get stored in some of the snapshots, the images, or just some of the localized data. Those can be very compelling. We often forget, just like when we move between houses, about the boxes that sit in corners and what could absolutely be in them, because we may need something someday. But what if it's your birth certificate? What if it's your Social Security card? It's that concept of: you have to know where your secrets are, and you should protect them effectively. Cool. Let's jump next. One of the big things I always like to look at is authentication and authorization.
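Before getting into who needs access and why, here is a minimal sketch of the Kubernetes RBAC objects involved: a namespace-scoped Role and a RoleBinding. The namespace and user identity are invented; in practice the user name would come from whatever external authentication provider the cluster trusts.

```yaml
# Read-only access to pods and their logs within one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payments            # example namespace
  name: pod-reader
rules:
  - apiGroups: [""]              # "" is the core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to a single user from the external identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: payments
subjects:
  - kind: User
    name: jane@example.com       # hypothetical engineer identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Scoping the Role to one namespace, with only read verbs, is the least-privilege pattern: nothing here allows creating, editing, or deleting anything.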
Where and how are our engineers and our people going in and looking at things, and who really needs access and why? Performing a threat model is always going to be to your benefit here. Role-based access controls: I'm not going to go through all the things like a Role, a ClusterRole, a RoleBinding. I think that's a little too deep for this chat, but one of the big things to know is your version of Kubernetes. If you check and it's 1.6 or above, RBAC is enabled by default. The one thing you want to do is disable attribute-based access control (ABAC). RBAC is going to be your friend, and the other thing that will be your friend is using a third-party authentication service such as Google, GitHub, or AWS, where you can actually configure and perform some of your controls and permission definitions. That just makes it a little more seamless than relying on Kubernetes itself to handle your authentication and authorization. Other than that, one of the key elements is establishing your risk-based alerts. If you can't afford a big logging platform, or a security platform that can do things like cloud access security controls, this is where you can start simple. When people SSH into production, you should get an alert on that. When they go outside of normal patterns, to maybe a file system or a new service that they're troubleshooting, that could be a scenario as well. The other piece I really like is watching how long they retain access to production and their connection, and making sure it's not longer than, say, a few minutes. They should get in, get out, and be done. But if someone leaves a console open, picture a scenario with an engineer who leaves a console open in their terminal and walks away. You have a 10-minute control that locks their screen eventually. But what if they're doing that at a Starbucks cafe, and production is just exposed in a physical instance?
The likelihood is really small, but the impact is massive; it could be detrimental if someone is just sitting there watching you. Cool. The last thing I really want to hit on is the software supply chain. When containerization really became popular, where did you get your containers from? I think Docker Hub is great, and there are some other areas to grab containers, but it's no different than the software and components we have out there today. You can find a lot of home-grown or third-party containers out there, and you just don't know what's going to be in them. In that ecosystem, one of the key attack vectors is implanting a compromised container. If it's compromised and it gets the certificate, and you set up the pod security policy and authorize it, that's game over. Making sure you understand your ecosystem and flow is going to be absolutely critical. The last thing I'll mention, which I really dig, is Clair. Clair is an open-source security scanner; some call it a vulnerability scanner. It's basically looking at all the components within the container itself and doing a fingerprint, based on version, against known CVEs (Common Vulnerabilities and Exposures). Those are just vulns that are out in the world. This one can be a little bit tough, because not every CVE has a patch or a fix; however, you still get awareness, understanding, and monitoring of whether something is becoming an issue. The cool thing about Clair is you can put it right in your pipeline. This is something our architect did at our organization, after some negotiation. It just makes life easier: you get a printout, you get awareness, and then you can lean in quickly if there's a CVE that is becoming a bigger issue, or something that you care about within your org. With that, let's wrap: security hygiene for the win, all the time. Inventory your overall ecosystem, and the rest of your organization and all your apps, absolutely.
Harden a lot of those containers, know where they're coming from, and at the end of the day, it's always great to perform a little bit of scanning for awareness, even though some of those CVEs may not have an easy remediation path, or your container cannot just be patched; monkey patches can get weird. And I always think about: how do you want the flow? How do you want to control that ecosystem? With that, I'll thank you for the time and pass it over to Kavya. Thank you, Ty, that was so wonderful. You have definitely shared some awesome pieces about container security, and I'm going to go ahead and dive into the do's and don'ts of container security. These insights that you shared, Ty, seriously, I'm really amazed; it's the personal experience that gives you this sense of control and the real nitty-gritty. I'm going to cover some of the same aspects that Ty has covered, but from the perspective of what we should do and what we should not do when it comes to container security and cloud-native technology. Before I get started, here are some of my high-level thoughts on cloud native. Cloud-native applications and infrastructure create several new challenges for all of us security professionals. We need to establish a new security program, have a new mindset, and adopt advanced new tools that are focused primarily on cloud-native technologies. Let me first talk about what the pipeline for a container image may look like. I tend to divide this into build, deploy, and run phases. You start with an artifact; that could be your image, and Ty talked about it, an image downloaded from either Docker Hub or any other repository that you have access to, or one you create. Ideally, you download that into your CI/CD pipeline.
So you're bringing your dependencies into your pipeline, and that pipeline is building the new version of your application, of your container. Then that gets pushed to the registry, the container registries, and from those registries, containers get instantiated later on. Unfortunately, sometimes, as Ty mentioned before, people tend to download some of these images off the internet, and that, I would say, is a big no-no. Once you have the container registry, then we go into the execute, let's call it the deploy phase. Then you'll have the runtime environment, where your Docker runtime is running. This runtime environment determines how the host processes the container, as well as the workload that is running itself. So we have the build phase, the deploy phase, and the run phase. Something to keep in mind: most production container deployments have an orchestrator layer on top, which of course falls under the deploy phase. So let's now zoom in to the build-time considerations for containers. During build time, what you should really think about as a security professional is application security. What about the code that actually goes into that container? Think about how you scan the images as you're building them, when you're creating that image that you will push out later on. Ty talked about this earlier as well: how are you doing vulnerability management on those components? One thing you don't want to do is import an old library that has tons of vulnerabilities, but at the same time, you don't want to import libraries that may break your policies. For example, perhaps you don't want to run General Public License (GPL) code. So you must have these layers of checks, and any issues should be treated as defects in the pipeline.
So the dev team can simply process them along the way. Thereafter, you sign the image. A point to be noted here is that during build time, you can reduce the size of that image, and one of the ways to do it is via multi-stage builds. What that means is you start with an initial image: the first image builds with the minimum libraries, you build your application, and then you copy that into a very small container. Using this method, you can, for example, reduce a 300 MB image to almost five to 10 MB sometimes. So those are some of the build-time considerations. Now let's talk about deploy-time considerations. Two things. Number one, you should do vulnerability management on the images that are in your registries. What you build today and push to your registry, you may not actually use for a few weeks. So what happens if, within those weeks, a new vulnerability pops up on that image itself? So you get the point. And then I also mentioned RBAC, which of course stands for role-based access control. It's all about giving specific permissions within your organization to deploy these images back and forth. Another important piece is to secure your orchestrator, and there are a few aspects of that we talked about earlier, so I'm just going to move on to the runtime considerations. Again, there are two elements during runtime: how do you protect your host, and how do you secure the workload and where that container workload is going to run? And then there is the container runtime itself. Host protection is really all about how you secure a server, and we'll talk about this in a little more detail, of course. But the primary consideration is what action that container is taking. You can analyze system calls.
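Coming back to the build-time point about image size, a minimal multi-stage Dockerfile along those lines might look like this. The Go application is purely an assumption, chosen to show the pattern of building in a large image and shipping only the compiled artifact.

```dockerfile
# Stage 1: full build toolchain (hundreds of MB, discarded after the build)
FROM golang:1.13 AS builder
WORKDIR /src
COPY . .
# Build a static binary so it can run in a from-scratch final image
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: the image you actually ship, containing just the compiled binary
FROM scratch
COPY --from=builder /app /app
USER 10001            # arbitrary non-root UID, per the root-privilege advice
ENTRYPOINT ["/app"]
```

The `builder` stage never reaches the registry; only the final stage does, which is how a 300 MB build environment shrinks to a deployable image of a few megabytes.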
You can segment the network, and a few other things that, of course, Ty has mentioned earlier as well. So let's dive in a little bit deeper. Here's some concrete stuff that you can potentially use; we talked about container hardening. First of all, when we talk about container hardening, the very baseline thing is CIS for the win: use the CIS benchmarks for container images. Something to keep in mind, though, is that you should not comply 100% with the benchmark recommendations, because sometimes that can really break things in the production environment. So don't be so rigid; if you can achieve about 80 to 90% compliance with these images, even that is really good. The best recommendation is to absolutely use the CIS benchmark images, because if you get an image off the internet, as we talked about, it can be very detrimental. We can get rid of root privileges in the container, in the Dockerfile, with a couple of commands like RUN adduser and USER, or provide it as an option at runtime: you can pass the --user option to docker run. And with cgroups, we can easily limit the system resources, like the CPU, RAM, and storage available to a specific container; I'm giving just an example here. Now, Ty has touched on the AppArmor aspect, and really, you can use AppArmor to enforce really good hardening with a good AppArmor profile. Basically, it is a mandatory access control feature, and it further limits the attack surface by whitelisting only the commands that are strictly necessary, and disabling writing and reading to directories that your applications deployed in the container won't normally use. To ease the process of generating a profile, a good starting point is a sample profile that's available; I've mentioned the GitHub repository here. It's a tool called bane.
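The couple of commands mentioned above look roughly like this; the image name, UID, and limit values are just example assumptions:

```shell
# In the Dockerfile, create and switch to an unprivileged user:
#   RUN adduser -D appuser
#   USER appuser

# Or drop root at runtime, and use cgroups via docker run options:
#   --user        run as a non-root UID:GID
#   --cpus        at most half a CPU core
#   --memory      at most 256 MB of RAM
#   --pids-limit  cap the number of processes in the container
docker run \
  --user 1000:1000 \
  --cpus 0.5 \
  --memory 256m \
  --pids-limit 100 \
  myapp:latest
```

The cgroup limits mean a compromised or misbehaving container can't starve the host or its neighbors.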
It is simply an AppArmor profile generator for Docker containers. It's basically a better way to get an AppArmor profile than creating one by hand, because, you know, whoever does that, what really comes to my mind is the villain from the Batman movie. And if you go to the GitHub repo, there is actually a meme over there with Batman and Bane. So something to remember it by. More on host protection: as the slide suggests, you should lock down host volume writes, use seccomp to restrict host syscall access, and finally use SELinux to prevent container escape. Now, I really like this slide. I know this is a bit of a zoomed-out version, but this is the whole CI/CD pipeline for a cloud-native environment. And I love the slide specifically from an application security standpoint, because it really captures, at the bottom, the tooling that you can potentially use during your traditional CI/CD pipeline. When it comes to containers, though, our focus for security tooling is mostly either on the testing phase or on the staging phase. At staging, you can perform stability, performance, reliability, and all sorts of security testing. So let's talk about where we have come to with cloud-native environments. It's now all about CI/CD, but the whole idea of modern architecture is infrastructure as code: it's all built as code, and we are moving fast. Back in the day, if something got a vulnerability, we would go ahead and redeploy, patch, all of that. But nowadays you can simply replace instead of patching, and you have these immutable instances in the infrastructure. These days we are using CloudFormation and Terraform. For containers specifically, we primarily use these tools during the test phase.
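Those host-protection points (lock down writes, restrict syscalls, confine with a MAC profile) can all be expressed as docker run options. The profile names here are placeholders; a tool like bane would generate the AppArmor profile rather than writing it by hand:

```shell
# A hardened run: read-only root filesystem, no extra capabilities,
# no privilege escalation, custom seccomp and AppArmor profiles.
docker run \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --security-opt seccomp=seccomp-profile.json \
  --security-opt apparmor=myapp-profile \
  myapp:latest
```

The --tmpfs mount gives the app a scratch directory so --read-only doesn't break it, which matches the earlier advice not to be so rigid that hardening takes production down.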
Some of these tools — and I really just wanted to stay away from too deep a dive — that I recommend are Anchore, OpenSCAP, Sysdig Falco, Linkerd, Dagda, and Clair. You can potentially use all of these tools to perform your container security; there are several aspects to it. And earlier we talked about bane. These are a few of my top five or six tools that I recommend we should use. Let's go into the dos for a containerized environment. I know I'm kind of glossing over so much when I say this, but I would recommend these must-dos, and then I'll follow up with the opposite, the don'ts. So, my three favorites. Create immutable containers. An immutable image is an image that contains everything it needs to run the application, obviously including your source code. The only difference is that the Dockerfile will copy your application code and then run any build process or dependency installation. Using these immutable images gives us many advantages, but the two main ones are portability, because your container itself is self-sufficient — you just have to run the container without caring about anything else, like mounting volumes, et cetera — and predictability: with immutable images, you can be sure that a given tag of a given image will always have the same behavior, because the code is contained in the image. And that means a lot in terms of deployments and management of the application lifecycle. Of course, run images from trusted sources. The reason we are hammering on this is that images can really be a source of vulnerabilities or deficiencies if you're not getting them from trusted sources.
And as I mentioned, CIS is one of our best friends when it comes to trusted sources, and Docker Hub is something that you should look at. Use container-native monitoring tools — just like I mentioned in the previous slide, those are some of the container-native, cloud-native tools that you can potentially use. Now let's go into some of the don'ts for a containerized environment. Installing an operating system inside a Docker container: this can be done, but there is rarely a good reason to host an entire OS inside a container. If containerizing a complete OS is your goal, then you're better off using a platform built specifically for that type of use case, like LXD or OpenVZ, both of which are system container platforms rather than application container platforms. Running unnecessary services — that's another don't. When building your container image, you should include only the services that are absolutely essential for the app the container will host. Anything extra wastes resources and widens the potential attack surface, which could lead to security problems. This is why you should probably not run something like an SSH server inside your container. In fact, there is no need for SSH in a container when you can use docker exec to interact with the containerized app. Next up is storing critical data inside a container. This is a really bad idea for two reasons. First, containerized data is not persistent by default, so you risk losing important data if it exists only inside the container. Second, storing sensitive data inside a container poses security risks, because anyone who has access to the container can access that data. In some cases, even someone with access to just the image registry that holds the container images could potentially gain access to that private information. Then: storing data other than container images inside a container registry.
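Two of those don'ts have direct command-line counterparts. The container and volume names below are illustrative assumptions:

```shell
# Instead of running an SSH server inside the container, get an
# interactive shell with docker exec when you need to poke around:
docker exec -it myapp sh

# Instead of keeping critical data in the container's writable layer
# (lost when the container goes away), keep it in a named volume
# that outlives any individual container:
docker volume create myapp-data
docker run -v myapp-data:/var/lib/myapp myapp:latest
```

With the data in a volume, the container itself stays disposable and immutable, which is exactly the property the "dos" section argued for.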
A container registry is designed solely for the purpose of hosting container images, so we should use it just for that purpose. Don't use the registry as a general-purpose repository or for hosting other types of data. That's a mistake I saw made last year, where a container registry was being used to host source code. Hosting too many services inside a container — again, a big no. In general, a container should host a single service. If your app requires other services, you should run them inside a separate container; that's the major advantage of using Docker in the first place. Limiting each container to a single service may sometimes not be feasible, especially if your app was ported from a monolithic chunk of code that was not designed with microservices architectures in mind. But wherever possible, stick with one service per container. And with that, we will start to take some Q&A. Let's see what we have here.

Awesome. Thanks, Kavya and Ty, for a great presentation. We now have some time for questions. So if you have any questions that you would like to ask, please drop them in the Q&A tab at the bottom of your screen and we'll get to as many as we have time for.

Cool. I also want to give a shout out. I'm always very flattered when friends show up to these things, especially ones that I see in person too. Crystal Prakash, Nikhil Wilson, who I haven't seen for a couple of years, Todd and Marty, who I just met this year — thank you so much for jumping on the call. It just gives me a warm feeling of happiness and joy to see friends jump on these things. So thank you. That is so wonderful. And Ty, I must say thank you for doing this with me. 100%. It's always a pleasure. Our journey really started a few years ago. In fact, we met at a meetup, and from there on I realized Ty has so much experience. So I was like, oh wait, I need this person to be my mentor.
My background is all about network security and then various information security frameworks. So when I started diving into AppSec, I was like, oh my God, Ty, you've got to help me here. And thankfully it's been a great journey together. I'm so happy we're doing this. Cool.

So we got one question from Ramesh Kumar: any tips for ingress and egress? So Ramesh, I think this is where you start with something as simple as a high level: if you have experience with firewalls, you want to start thinking about what the IP ranges are, or what the things are, port-wise, that something is going to need access to. A lot of microservices pretty much all run over port 443, which is HTTPS. Outside of that, are you going to have other network ports, or database ports? Are there going to be things that you need to actually talk to? The service mesh that we talked about earlier also helps with it. But for general ingress and egress: what is the minimum requirement for that thing to be functional? And like Kavya said, if there's no need to turn it on, leave it off. And I think that's the thing that Kubernetes makes a little bit easy. Ingress, egress — just understanding what those flows are, and then also some of the data elements if you want to get to that level. Usually that's a little bit deeper than most of us go, but I think that could be the next stage once you get that sort of network policy or pod security policy also locked down. Hope that helps.

And Ty, I think you have already provided some of the Kubernetes documentation links in the webinar. What other sources do you think, for all of us trying to learn more and get more concrete? Because sometimes there is too much noise, and it's hard to tell which sources are reliable or not, and people end up in these webinars or other sessions sometimes. So what are your recommendations to really find out where the real-deal conversation is happening?
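That "minimum required flows" advice maps directly onto a Kubernetes NetworkPolicy. The namespace, labels, and ports below are made-up examples of the shape such a policy takes:

```yaml
# Scope the policy to pods labeled app: payments, deny everything
# not listed, then allow only HTTPS in and DNS + database out.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-minimal-flows
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - ports:
        - protocol: TCP
          port: 443
  egress:
    - ports:
        - protocol: UDP
          port: 53
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
```

Anything not matched by a rule is dropped once the policy selects the pod, which is the firewall mindset Ty describes, expressed natively in the cluster.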
Do it yourself, figure it out, test it. I think minikube makes this really easy. Have people on your team that are willing to share and work in an open environment. When it comes to information — you know, I'm not trying to plug the company, but Aqua Security has an aggregate blog that actually pulls together a ton of great resources. Outside of that, I think one of the secrets I like to practice is looking at emergent CVEs. So I actually have a feed into Slack of all the technology we use in our organization, and then I see all the vulnerabilities that come out, and then I tell people to panic or not panic. Like when billion laughs came out — it was like a week later, there was a slight panic that happened in a room. Our cloud chief security officer came over to me and he's like, yo, Ty, I think we have a problem. And luckily I was already dealing with something else, so I said, I will be there in five minutes. And when he came over, he's like, this billion laughs thing. I'm like, oh, that's not an issue. We looked at it; it's lower severity. Like, here's what has to happen: you need to be an authenticated user. So for me, I think it's the practicality of it, and being grounded in understanding. Knowledge just takes time, and I don't think there's a real quick way or quick hit to just gain all that info. But as you gain the information and you find a cadence that makes sense to you, it'll be fine. And Google, you know, Google does a great job too. You know, I was actually also going to say KubeCon — I mentioned that at the beginning of the session — KubeCon is a great resource. I'm really looking forward to being in San Diego, 18th and 19th of November. And then there's this new thing that they started, these Kubernetes forums. Seoul is a little bit far for all of us, but yeah. And we're getting a lot more questions here, actually. Let's do it. Yeah. So: how do you handle privileged containers from a security point of view? You want to take that one, Ty?
I'd say avoid them. I think that's what we covered, right? Like, how often do you actually need to have privileged containers? Then go back to what we talked about conceptually: who needs access, and why do they need access? Your privileged containers probably involve some level of secrets management, right? So this is where you're going to want to take a deeper step into who has that level of access and what their permissions are. And leveraging namespaces is going to be important to identify what that is out there, and to also have a clear understanding that it shouldn't be talking to every container within the environment. So I like namespaces here; I think the pod security policy is always going to get you there. But if you can avoid privileged containers, you're better off. If you're going to have to go deeper, I think there are a couple of steps to take to start to get that configuration down, just to shrink the attack surface, and that's going to be your benefit. Yeah. And earlier I had added a couple of commands there that you can use for restricting your access. You can of course use AppArmor and create a profile that has those restricted access controls. Yep.

The next question — and Ty, I look up to you, because some of these things are now getting really detailed — could you explain in more detail how to set up RBAC through AWS and GitHub? And now this is really just going very, very specific to AWS or whatever environment you're in. Yeah. So this is, again, native with Kubernetes; the designers had this in mind. So you're pointing at an identity provider, an auth provider, with it. What you're going to end up breaking out — and I'll talk to AWS, I think it's just a little bit easier — GitHub, you can do it, but I think controlling permissions is a little bit different, so I haven't gone that far down that path. I think with AWS, it's no different than any identity and access management role.
So your role definitions: like, let's be real, most of our developers need access to everything, but then to what namespace, what environment? And you get to define that within the policy, so that your kube container can actually pick it up and provide that level of permission if that person needs to log into the pod — though likely a lot of folks don't need to end up doing that. So yeah, once again, I like the AWS route. I think IAM is just better, period. But again, it really depends on how you set up authentication, and I think it's better only because I have the experience and the exposure there, and I just have more confidence, because I've done the work. So take a look at the RBAC documentation; AWS does a pretty good job there too. My buddy Todd from AWS is on the call, and I'm sure he'd be happy to help walk you through some of that stuff too. That's awesome. And I think just last year AWS announced their own sort of CIS-hardened image, so potentially that is very, very helpful. I think that was announced in Vegas. So yeah, I think that's a really great point. I don't think we got into EKS, the Elastic Kubernetes Service, or Google's — what the heck is it called again? We'll need another webinar. Yeah. And that's another great element of managing your instances overall, and just patch management: what is the latest version that you want to pull out there? And we definitely didn't talk about this, and I'm sorry, I just skipped over it, guys. Pulling what instance, or what image, or what build of the container is really important. I've seen a couple of mistakes in my career. So: latest isn't always going to be the best one to pull, just because you might get things that aren't tested or configured correctly. You can grab the beta one, but generally you're going to want to validate: what is your DevOps team doing to pull from which version?
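On the Kubernetes side, the "what namespace, what environment" scoping Ty describes comes down to a Role plus a RoleBinding. The names and the identity-provider-mapped group below are hypothetical; how the group maps in (IAM, OIDC, etc.) depends on your auth setup:

```yaml
# Give a (hypothetical) dev-team group read-only access to pods,
# but only in the staging namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-pod-reader
  namespace: staging
subjects:
  - kind: Group
    name: dev-team        # mapped from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Developers in that group can inspect staging pods but can't touch production namespaces, which is the "most developers need access, but to what environment" distinction in policy form.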
Because latest can be a little bit painful and things will break. And usually it'll be a wake-up call and then a quick fix, and people move on with their life. Thank you, Daniel, for that question.

And then we have another question from Paul Wright: is there a good approach that can work for containers and functions as a service? I'm having a little bit of trouble unpacking that one. Kavya, I don't know if you get the intent of it, but I'm having trouble understanding it. What we're really talking about here is sort of functions as a service — I'm thinking infrastructure as code, right? So potentially, maybe treat everything so that you don't have to really redeploy stuff. I would take the approach of treating your containers as ephemeral entities, and not redeploying but just re-spinning stuff. That's what containers are really good for and what they allow us to do. I'm just going to make the assumption that that's what we're talking about here. Okay, I will say ditto.

Maybe Nikhil's question, and we have just a few minutes left in our hour, but I do want to take maybe — why don't you pick, Ty, amongst these questions? Oh, the last one. Okay, Nikhil, because we're friends. Yeah. So: should we go with sidecar, ambassador, or adapter patterns rather than a single container, for ease and compact container patterns? One of the best examples I'll give for a sidecar is something like HAProxy or HashiCorp Vault. I'm a big fan of HashiCorp Vault for secrets management. Sometimes that can be a real painful project, but once you have it up and running — key management, key rotation — while you may want to have that service out there in some sort of dedicated container, the reality is you want that secret store running alongside every container that's out there. So I actually like that type of service mesh as a good example.
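The sidecar pattern Ty describes looks roughly like this in a pod spec; the images, names, and ports are placeholders, not a recommended production configuration:

```yaml
# App container plus a proxy sidecar in the same pod: they share the
# pod's network namespace, so the app talks only to localhost while
# the sidecar handles TLS termination / traffic to the outside.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: myapp:1.0
      ports:
        - containerPort: 8080
    - name: proxy-sidecar
      image: haproxy:2.0
      ports:
        - containerPort: 443
```

Because the sidecar is part of the pod template, rolling it out across the whole environment is one deployment change rather than touching every application image, which is the implementation-speed benefit mentioned above.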
HAProxy is another one where you're probably just going to want the proxy sidecar to be there to feed information to your log aggregation platform. The benefit of a sidecar is just that implementation is a lot quicker and smoother, and getting it across the environment will be easier. And then, if we go back to the pattern of making sure we define that as part of our pod security policy — like, you cannot join if you don't have a TLS cert and this sidecar and that sidecar — that service mesh starts to make it a little bit easier to know that that's the pattern, as opposed to another individual container. So that's kind of where my head would be. But again, I'm tool-agnostic: whatever gets the job done that is secure. If the engineer says it'll be easier to do it that way, I would say, sounds great, let's do that. Yeah, fun. And Paul Wright also clarified: you aren't deploying a container, just some code. So I feel like we were on the right page. Thanks, Paul.

Wonderful. So I think there are other questions coming in, but I would suggest that Ty and I are available via LinkedIn — I'm Kavya Perlman, Ty is Ty Bono — and you can follow us on Twitter, Kavya Perlman or Ty Bono. Very easy. I'm also reachable via Wallarm; I am the global security strategist for Wallarm. Ty actually even started writing for TechBeacon. Is that right? Yeah, I used to write for that. I don't know if we're doing it anymore — I've got to chat with my editor now. There's just so much being put out there. So keep an eye out. I think Twitter is the best. Yeah. I suck at Twitter; you're great at Twitter. Hit me up on LinkedIn — I actually do more stuff there. I find it easier to interact with people and connect. Twitter, with the character limitations... You've heard it today, I talk too much. But feel free to just reach out; I'm always happy to chat. That's awesome. So yeah, thank you, everybody, for joining us.
It's been wonderful for both Ty and me. Ty, do you have any last words before we say goodbye? Go have a great Halloween. And if you're interested in working on security stuff with me in the San Francisco area, or in Tel Aviv in Israel, give me a shout. Awesome. And please meet me at KubeCon if you're headed out there — I would love to meet, hugs, and I have a special sticker that I've made. So yeah, reach out, and have a wonderful, wonderful day. I need a sticker. Oh yes. Thank you. Cheers. Cheers, everyone.

Great, thanks for a great presentation. All right, that's all the questions we have time for today. Thanks for joining us today. The webinar recording and slides will be online later today. We are looking forward to seeing you at a future CNCF webinar. Have a great day.