All right. Oh, actually, let's wait for him to come back. All right, we're gonna go ahead and get started. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Container Security at Scale: Lessons Learned from the Frontlines, with ABN AMRO and Palo Alto Networks. I'm Karen Chu, Community Program Manager at Microsoft and CNCF Ambassador. I'll be moderating today's webinar, and we'd like to welcome our presenters: Wiebe de Roos, CI/CD Consultant at ABN AMRO Bank, and Keith Mokris, Technical Marketing Engineer at Palo Alto Networks. Just a few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop in your questions there, and we'll get through as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow attendees and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. With that, I will hand it over to Wiebe and Keith to kick off today's presentation.

Awesome. Thank you so much, Karen, for moderating and introducing us, as well as to the CNCF for hosting today. If you have any issues seeing the slides, I'll look at chat over the next couple of minutes, but hopefully everyone's able to see them today. As Karen mentioned, we're certainly looking forward to the Q&A at the end, and we hope to have a couple of minutes to answer those questions. My name is Keith Mokris.
I'm a technical marketing engineer at Palo Alto Networks, mostly focused on host, container, and serverless security after joining through the Twistlock acquisition that happened about nine or so months ago. We're really excited to continue strengthening our partnership with the CNCF and hosting great presentations like we have today. Most importantly, I'm really excited to be joined by our guest, Wiebe de Roos. Wiebe is a CI/CD consultant with Flusso and ABN AMRO. He's a passionate CI/CD consultant and engineer with over 12 years of experience in various ICT-related roles, including Java developer, scrum master, product owner, technical consultant, and architect, as well as sparring partner for senior management. His mission is to help companies improve and speed up their DevSecOps journey. Wiebe, welcome today.

Thank you. Welcome, all.

Awesome. So I'm going to kick off by showing you our agenda for today. One of the items I'm really excited to highlight is some of the latest trends in container adoption that were shared as part of the latest CNCF survey results. This is something the CNCF published probably four or so weeks ago, but it holds a lot of great data about the community. Then some of the best practices, from the Palo Alto Networks view, on how you can be secure using containers. And then, most importantly, how ABN AMRO has been successful in securing their containerized stack across their organization. I think the CNCF survey results that were shared a couple of weeks ago were really powerful in showing that, ultimately, there's a ton of growth in cloud native infrastructure across all of the respondents. And one of the items that won't really surprise anyone who's been a container user for quite some time is that containers are really here to stay and are becoming a central pillar of IT strategy. So container use was up.
Now 84% of respondents say that they're using containers in production. This number was already pretty significant last year, but we can really see that over three-fourths of respondents are using containers in production. Ultimately, the same is true of Kubernetes, especially the managed Kubernetes offerings that have really started to be a dominant force among the cloud service providers. Almost 80% of respondents said they're using Kubernetes in production. This is another significant jump from last year, but for everyone that's part of this ecosystem, it's not necessarily a huge surprise. And then finally, serverless application development is definitely gaining relevancy and traction: 41% of respondents said that they're using serverless today. Certainly AWS Lambda dominates the market share when it comes to serverless, but other technologies were utilized as well. I'm going to highlight a couple of other takeaways in the next few slides, but I would certainly encourage everyone to pore through this data. There were a lot of great details on emerging technologies like service meshes and others that are incredibly relevant as you look to build out your stack and implement security best practices. One of the powerful statistics that has been tracked over the last couple of years, going back to 2016, highlights the number of containers being used in production. I think you'll see that we're definitely trending to where greater numbers of containers are being used in production environments. The CNCF provided a summary that says the number of respondents using 250 or more containers increased by 28%, to more than half of respondents. And certainly we're excited to share a lot of the best practices for securing this infrastructure in today's presentation.
And then finally, when it comes to security, it's definitely a top challenge for organizations and users like yours using and deploying these containers. One of the really powerful questions included in the survey was: what are your greatest challenges when you're using and deploying containers? And it was no surprise to see 40% of users respond that security was a top challenge. At the same time, there are a lot of challenges that we'll be discussing today from ABN AMRO's experience that touch on culture change, general complexity, lack of training, and other things like monitoring, networking, and storage that are also considerable hurdles to overcome. And this is where I'd love to ask Wiebe to join us a little bit early and say: was there anything that surprised you about these statistics? Or do you find that they map really well to what you've experienced working with end user organizations?

Hi, Keith. Yeah, to answer this early question, I'm not really surprised, because we noticed at an early stage that security is one of the big things that needs to get attention here. To answer your question a little bit more: when we first deployed some large projects that were completely based upon containers, security was something we had to think about. Also, coming from a cultural perspective, we faced a lot of challenges before we could run our first project in production. And it also has to do with a lot of complexity. So those three items, and especially security, are the heart of the next part of this presentation, so I really want to zoom into that. And yeah, we have the same experiences with the same trends you picture here.

Awesome. Yeah, I'm looking forward, in just a couple of slides, to having you share everything you've experienced on the front lines.
I think one of the things that's really important, while I know we have a lot of advanced container users in this audience, is placing some of the security challenges in the context of the configuration that's inherent to containers; that can be really helpful for educating others at your organization who are looking to learn more. These are three characteristics inherent to containers and microservices that really add to some of the security challenges, but also the opportunities, organizations are presented with. Certainly, by the definition of microservices, containers are incredibly minimal; they're typically single-process entities. They're declarative in nature: they're built from machine-readable images, where it's easy to essentially predict and model application behavior, so you have a good understanding of what they're going to do from run to kill. Ultimately, these are three characteristics that we want to reference because they impact a lot of the security challenges and needs when it comes to securing containers. At the same time, I really want to highlight some of the differences in securing containers and cloud native application environments. Because of the characteristics I shared earlier, there are certainly many more entities to secure in a microservices-driven infrastructure. You have a lot of images that are being built and deployed regularly. That's something that I didn't mention in the survey results, where respondents shared how often they're deploying code, and oftentimes organizations shared that they're deploying daily. This really leads to a lot of container images that you need to track across your CI/CD workflows and also in all of your different running and production environments. All of this infrastructure is constantly changing.
This can be a huge challenge when you're trying to prioritize risk, respond to any unusual behavior or security incidents, or monitor and log activity. And certainly, container infrastructure is immutable. So in order to improve the risk posture in these environments, you're going to need to work with development teams and your DevOps leadership on how you can rebuild things quickly and improve that risk posture, rather than just patching things that you have running. This places everyone in the audience in a lot of control over the security of your environment. You have to, in some ways, work with the security team, or educate the security team on how this stack can be secured, or work together on what the best practices should be in order to secure this growing infrastructure. And then finally, because containers are so portable, they can be run in multi- or hybrid-cloud scenarios, easily moving across pipelines, so security needs to be portable too. Again, integrating with CI/CD workflows is key, which we'll certainly be talking about a lot today, and this notion of portability can have a huge impact on how organizations protect their environments. These are some of the best practices that I think every organization can gravitate toward. Prioritizing risk in production environments is certainly something that every organization needs to be aware of. As your containerized infrastructure grows, this can be a huge challenge, especially in on-premise, hybrid, or multi-cloud scenarios. But being able to easily understand where the greatest risk is, and where remediation efforts will have the greatest impact, is certainly a best practice.
Implementing the CIS benchmarks is a key third-party resource you can adopt that will improve the configuration posture of not only your underlying hosts, but your Docker containers, your container runtime, as well as how you've configured Kubernetes and continue to manage it at scale. The CIS benchmarks become a really powerful foundational resource for doing this. And then finally, protecting applications at runtime: being able to automate the modeling of application activity, where you can essentially shift to whitelisting application behavior rather than trying to describe all of the bad things that can happen in an environment, is another powerful security capability to consider. This is especially powerful at the network layer, where it can be very difficult to manage a lot of network controls at scale with a lot of blacklisted rules. So this is certainly something that can combine not just visibility, but also protection. And we'll certainly highlight how all of this can be implemented across all of your common DevOps workflows. This is the last slide that I'll leave everyone with from the Palo Alto Networks view before I tee everything up for ABN AMRO to lead the rest of today's discussion. If we take a look at the basic workflow that Docker outlined across build, ship, and run, we can start to map some of these security considerations, which you can refine based on your organization's standards and requirements. As you build and ship your images, implementing scanning against both the CIS benchmarks as well as for vulnerabilities becomes a powerful use case, where you can potentially fix flaws earlier rather than only focusing on them in runtime scenarios.
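To make the CIS benchmark idea concrete: many of the individual checks are simple configuration assertions you can verify yourself, and open-source tools like docker-bench-security and kube-bench automate entire suites of them. A minimal hand-rolled sketch of one Docker benchmark item, assuming the daemon config lives at the default /etc/docker/daemon.json path:

```shell
# CIS Docker Benchmark item 2.1 asks that network traffic between
# containers on the default bridge be restricted ("icc": false in the
# daemon config). A sketch of checking just that one item:
check_icc_disabled() {
  local cfg="$1"
  if grep -q '"icc"[[:space:]]*:[[:space:]]*false' "$cfg"; then
    echo "PASS"   # inter-container communication is disabled
  else
    echo "WARN"   # daemon allows unrestricted container-to-container traffic
  fi
}
```

In practice you would run the full benchmark suites rather than single checks; this only illustrates how concrete and scriptable the controls are.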
And most importantly, if you can implement some sort of enforcement where you can actually block builds from progressing in your pipeline, you start to get scalable control over isolating what can be deployed into an important production environment that needs to meet high compliance standards. Vulnerability management and compliance really need to underpin your entire pipeline. Being able to bring together risk across not just containers, but images, hosts, and even on-demand PaaS or function environments can become a powerful capability for organizations to implement. Again, this is really important at scale, as you start to have hundreds or thousands of entities that you're potentially responsible for. We talked a little bit about compliance on the last slide, but again, being able to implement, monitor, and enforce the CIS benchmarks becomes a really important area around hardening and ensuring that your images and other components are configured properly. And then as you get to running environments, being able to ensure that you've implemented defense in depth, so you're not just looking at one piece of the stack but at several pieces and how they integrate together, really becomes a powerful differentiator. So again: looking at application activity, what's happening from a file system point of view, what's happening with network connectivity, how have I configured all of my stack? Being able to ensure that you're protecting pod-to-pod or container-to-container traffic is another key requirement, as well as properly configuring ingress and egress to any front-end microservices. And then certainly ensuring that your hosts are configured properly, or that you're implementing other security controls, like things you may be using from OPA, or activity from Kubernetes audit logging.
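The build-blocking enforcement described above often boils down to a tiny gating step in the pipeline. A sketch, assuming a scanner (Prisma Cloud's twistcli, Trivy, or similar) has already produced a count of critical findings for the image; the function name and threshold are illustrative:

```shell
# Gate a pipeline stage on scan results: return non-zero (break the
# build) when the number of critical findings exceeds the allowed
# threshold, otherwise let the build progress.
gate_build() {
  local criticals="$1" threshold="${2:-0}"
  if [ "$criticals" -gt "$threshold" ]; then
    echo "BLOCK: $criticals critical findings (allowed: $threshold)"
    return 1
  fi
  echo "PASS: $criticals critical findings (allowed: $threshold)"
}
```

Because the gate is just an exit code, any CI system can consume it; the policy itself (what counts as blocking) stays centralized in the scanner's console.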
I think as you build this view, which can be useful at your own organization, recognizing that this is a full-lifecycle and full-stack endeavor is something that's really powerful. Thank you so much for listening to my introduction at the beginning. I'm really excited to turn things over to our guest, ABN AMRO, and let them take it away.

All right. Thank you, Keith, for that very good introduction. I'll now move over to my slides. I think everyone is now able to see them. Hello, everyone. Welcome. Yes, thank you, Keith. Today I'd like to talk about how we are adopting containers on an enterprise scale: what is our status so far, and what are the lessons learned from our container journey? I'll especially focus on the container security part that we do together with Palo Alto Networks, with Prisma Cloud Compute. I'm working as a CI/CD engineer and consultant at ABN AMRO, and one of the things that I do is making sure that container security is brought to a higher level. We started very small, and I'm integrating that in every single aspect within the organization. Next to that, I'm in the container platform team. We're building a container platform, and container security is a very important topic there too. I've been at ABN AMRO for about three and a half years now, and I'd like to highlight these topics as we go through the slides. We'll start with a very quick overview of our container journey. We will then go through the managed container platform, but more importantly, how we position Prisma Cloud Compute within that platform and the other components and capabilities within the entire infrastructure and application landscape of ABN AMRO. There will be a deep dive into the challenges and solutions that we encountered, and I'll conclude with some notes about the future. First of all, let's set the context here. We're a bank, in the financial sector.
We're in Amsterdam, in the Netherlands, in Europe, and we're moving away from waterfall. We've already adopted an agile way of working, and we're now moving on towards DevOps, in which hybrid cloud and public cloud play an important role. The numbers at the right of your screen are more important here. We have about 20,000 employees, but the most important aspect is that we have about 3,000 applications actively being developed by more than 400 development teams, and all of them are moving towards public cloud, except for the mainframes, of course. A lot of these teams are concentrating on containers, and we have to facilitate them from a centralized point of view; for container security too, we play a very important role here. Our container journey so far: as I said in the beginning, one of the biggest projects where we started to concentrate on containers was Jenkins, Jenkins Core in AWS, completely built upon containers. That followed after we did some PoCs with just some plain Docker containers. With the development and deployment of Jenkins in production, we also had to select a container security tool. Back then it was called Twistlock; it has since been acquired by Palo Alto Networks, so now it's called Prisma Cloud Compute, and we use that in every single deployment that we do in our different clouds or on-prem.
We saw a lot of good initiatives from teams starting with containers, multiple initiatives at different departments, and we wanted to capture all of those initiatives and build our own container platform. The first container platform that we built was based upon EKS, and we put that in production in quarter two of 2019, so about six months ago. We're very happy, because it's being adopted by a couple of teams, and one of those teams is in the CISO department; if the CISO department trusts us with our container platform and our container security solutions, then we're sure it's a good thing. We're also concentrating on OPA, Open Policy Agent. We included that in our container platform as a very first phase, and we're now rolling that out on multiple other initiatives as well. Then quarter four of 2019 brought a big shift. Before that we had AWS and Azure, but in quarter four of 2019, management decided: okay, we want to concentrate all, or at least most, of our efforts on the Azure cloud. So that's the big shift, basically from multi-cloud to a single cloud. We still have AWS with our container platform on EKS, but we're now porting that towards AKS on Azure. That's what we are doing right now; we're now in Q1, or well, Q2 just started. To zoom in a little bit on that container platform, these are the building blocks the container platform is built upon. I can't go through all of those layers, but I think you're familiar with the infrastructure layer, the provisioning layer, the runtime layer, of course the orchestration layer, in which Kubernetes plays a vital role, and the application layer for development teams. Today we'd like to concentrate on the security aspect, with Prisma Cloud Compute at the top right.
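OPA policies are written in Rego. The kind of rule mentioned above, for example only admitting pods whose images come from a trusted registry, can be sketched like this; the package name, input shape (a Kubernetes admission review), and registry hostname are illustrative assumptions, not our actual policy:

```rego
package kubernetes.admission

# Deny any pod that references an image outside the private registry.
# "registry.example.internal" is a placeholder hostname.
deny[msg] {
  input.request.kind.kind == "Pod"
  some i
  image := input.request.object.spec.containers[i].image
  not startswith(image, "registry.example.internal/")
  msg := sprintf("image %v is not from the private registry", [image])
}
```

The nice property is that the policy lives as code, so it can be versioned, reviewed, and rolled out to clusters the same way application code is.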
Sorry, let me go back one slide. If you want to have more information about this container platform, please check out the YouTube link. That's really a deep dive into this entire platform. To briefly summarize why we did this: it's one Kubernetes-based solution for all containerized workloads. We have all of those different initiatives, and we want to capture them all to avoid drift, to avoid reinventing the wheel, et cetera. We want our CI/CD processes to be standardized, so we can avoid teams deploying all on their own and doing different things. We want to support team autonomy, of course, otherwise the whole DevOps movement doesn't make sense, but we want to do it in a controlled way. With this, we try to support the company-wide migration to Azure, because we are focusing completely on Azure. We're also supporting teams in having their workloads on the container platform integrated with other cloud-native services, and we will really help them with their journey on that. We want to make sure that all of the components are compliant. If we keep that in our hands, the container team and the container security team, also working together with CISO and other departments, we make sure our components in Azure, in containers, but also the cloud-native services, are compliant. We try to make that easy for the teams. And the most important part, number six: container and Kubernetes security should, from the perspective of teams, come out of the box. They should not have to worry so much about it, because it's complex. With this in mind, a couple of years ago we selected Prisma Cloud Compute. This is a high-level overview. We run that on Azure, and still also in AWS; there are some on-prem initiatives going on, and also on local systems.
We see it as a standard building block in pipelines, and also as a standard building block at the runtime level. In the end, we want to block issues before the security risks become a real problem. That's the shift-left principle, and as you will see further on in the slides, that adds some challenges to our organization. Let's talk now about the five enterprise-grade challenges when deploying containers at scale. The first big challenge I selected to talk about is: how do you overcome the knowledge gap? Everyone is running containers now, on whatever system. People want to move them to test; people want to move them to production. How do we make sure all of them will be secure, across the different environments? What we did is capture all of the knowledge and make it centrally available. Of course, not literally all the knowledge; there's always more. But if we make it available to everyone in a centralized place, then we at least capture that knowledge for the teams. What we did, basically, is structure that knowledge and map it to Prisma Cloud policies. Those are the policies coming from Prisma Cloud Compute, and we added detailed info to them. We put the policies on one side of the table, we put our contextual information next to the policies, and we also created real-world examples. Why did we do all of this? We did this to pre-select all of the information out there about container security and host-level security, with regards to the container runtime, for example. Because a lot of information might be good, but it might also not be; you need to interpret it, and information can be outdated. With this, we make sure that the information is updated and kept in a centralized place.
Those examples come with real sample code: scripts, manifest files, Kubernetes configurations, hardening configurations for VMs, for example. Those can be executed and checked by the teams, so they can experiment in a sandbox. We're still discussing with the CISO department whether we should have a centralized Kubernetes cluster in a sandbox environment, isolated from the rest, in which they can test all of those policies. It's just a start to have it all available for the teams. What we also did is share a clear agenda on when to break things. For example, we have pipelines with a build breaker. That means it will break your build if there's a critical vulnerability, but we have to be clear on the date we put that in active mode, when we want to enforce it. I think everyone knows the DRY principle, don't repeat yourself, but we try to follow the WET principle. It's kind of a joke, but we enjoy telling it: we need to tell teams over and over again why security really matters. I'm glad to see teams are adopting it; that proves we are doing our work right. That's the knowledge gap. Then, when you start scanning, you will see all of those security issues and violations. Teams start to scan their Docker images, which is valuable, of course. Results will show up in the Prisma Cloud console, and there will be scanning results on the runtime level too. How do we make sure we can still handle all of that? Everyone is talking about doing the ultimate shift left. That's very nice, but then pipelines will break, runtime systems will break, and the issues are still there. There will be a lot of pressure and a lot of load on the teams judging those security issues. What we did is create collections in Prisma Cloud to segregate teams, so we can focus first on the teams which need attention. That also has to do with setting priorities. Use your business units.
I mean, if you have all of the scanning results of all of the applications together, even if you segregate them, you don't know anything about the context of an application. It's about metadata, but that metadata is not there. You can just extend the whole security team, but you can better talk to the business units, be smart, and ask them: hey, is this application a very critical application for the bank? What is the real business value? What is the revenue of that application? When do you want to go to production? What is your CIA rating? How critical is it? If you start talking to the business and capture those kinds of questions, you can prioritize. Otherwise, it's all a technological perspective, and then you can't really decide what you should treat first and what you should treat later. Also, what really helps here is having a very good review process in place. If you have the process wrong, or a very cluttered process with a lot of manual steps, tickets to be created manually, and a lot of sessions with teams, then it won't help you, because you will always be behind. What we did is file a feature request for an improvement of the review process. I'll be really happy if that makes it into one of the next versions of Prisma Cloud Compute. That will really help us scale up all of the containers and container initiatives within the bank. Let's zoom in. We're scanning a Docker image here. Here you see a SHA; that's the digest that identifies that image. You see it's an Alpine Linux 3.9-based version, and you see a lot of critical issues. Teams will see those in their pipelines. They will log on to the Prisma Cloud console, and they have a hard time figuring out what to do with this. We are collecting feedback from the teams and helping them. We are identifying identical packages, because a lot of teams are using this Alpine Linux Docker image. There's even a high vulnerability in GCC. That's a compiler.
That's not just a package you can swap; then everything in that image breaks. What do we do with that? We are collecting that information and trying to make it available to teams. But it's still difficult, because there are a lot of teams. Then the discussion about base images pops up. If you create a base image for a team and that base image changes, then that team will be affected. It's about governance: who owns the base images? Maybe it should be one team, but then that team will be a bottleneck. Maybe it will be a lot of different teams. Maybe we should tie it to business units. That's what we actually did. We have groups of base images. For example, we have a team that creates a whole bunch of standardized pipelines; they own the images which are being used by those pipelines. Those are there, those are okay, those are secure. The same is true for teams that do a lot of end-to-end tests, for example Selenium, or other test tools like JMeter (I think it's called BlazeMeter now). They own those base images. We also have very generic base images, in Java teams, for example. We encourage teams to provide those base images and maintain them. Otherwise, we can't deal with them all; it would be too much. Therefore, we need to have a registry. In DevOps, everyone is responsible from start to end, basically from source code to production. If you don't have a centralized registry, how would you know which base images are there to be re-consumed by other teams? We're still in a discussion of having a centralized registry versus all decentralized registries. I think the best is in the middle: a lot of decentralized registries for teams, for their own images, and a number of semi-centralized registries for those kinds of base images. But you need to make sure that you align on that with everyone. It's all about context-specific situations here. How do you deal with this? When do you need to create an exception?
When do you need to move forward when you are scanning your images? That makes sense from an enterprise perspective; the context is very important here. If we then move on: how do we effectively patch container images? I just picked six bullet points from the entire list that we have. We have to educate teams and help them in their journey. What we said is: okay, maybe you can swap your container image for another one. Don't patch it, but just swap it. If you can find the same features, the same functional use cases in another image, why patch it? Be smart; move on to another image. Or work with your vendor. If you collaborate with your vendor, give them advice, and help them patch those systems and those images, that would be great, because in the end a vendor gives you the software in a container image. Just help them, because they're also having a hard time fixing their stuff. You can, of course, upgrade your packages, including their dependencies, but that is very hard. A simple, single package is easy to patch: if there's a fix available, you can just upgrade that package in a Docker image. But if you have a lot of dependencies, which will also change, new vulnerabilities might pop up, and then you have even more work to do. Maybe it's better to remove the insecure part. For example, I've seen a lot of Docker images, also from vendors, public images, that have, for example, private keys in them. Yeah, that's a bad thing to start with. But you can just remove that private key from the image. Be careful, though: you don't know what breaks. But if it breaks, that private key is effectively being used, and then there's another discussion going on. So try to remove the insecure part, but be careful and test out your new image, or test the container that the image produces. And you can also replace a feature from a functional perspective.
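A couple of these patching tactics can be sketched in a Dockerfile; the base image tag, package name, and key path here are illustrative, not taken from a real vendor image:

```dockerfile
FROM alpine:3.11

# Tactic: upgrade just the affected package when a fix is available,
# rather than rebuilding or abandoning the whole image.
RUN apk add --no-cache --upgrade openssl

# Tactic: strip the insecure artifact a vendor left behind, then test
# the resulting image; if something breaks, that key was actually in use.
RUN rm -f /opt/vendor/keys/private.pem
```

Rebuild, rescan, and rerun your tests after each change; the scan results tell you whether the critical finding is actually gone.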
I always use a very simple example: if you want to download a package from a repository, you can use wget, but you can also use curl. So if one of those has a critical vulnerability, just replace it, because it's an easy way to swap out that security issue. And then the last item. Yeah, maybe it's not so nice to tell, but I have to: sometimes I have bad news for you. Maybe containers are not the solution. Sometimes you see images with more than 50 critical vulnerabilities, maybe 100 high-risk vulnerabilities, and even more medium and low risks, let alone compliance issues. Maybe it's just a single image; maybe it's all the images that are there. Then it's better to choose a different solution. So how do we make sure that our organization stays compliant? I put in this picture because it has a lot of different components, but here we're just focusing on the security part. As I said, we have one Prisma Cloud instance for all of the workloads. On the left, you see those policies and rules. Those will be propagated towards Azure DevOps, to AWS, and to on-prem. A consideration here is that we should have centralized policy management. If we have policies decentralized all over the place, what is the proof? So we try to capture those policies as code within our systems. One of the things that we have with Prisma Cloud is the container security policies and rules for standalone Docker containers, for Kubernetes, but also for Fargate. There's a little bit of support for serverless, for AWS Lambda. There's also now PureSec, which is more focused on serverless, and we might use that in the future. We also have the CIS benchmark host compliance rules. And with all of those policies, and the tests being done by the teams, we will get feedback.
And based on that feedback, we change policies, or we incorporate new policies, make them clearer, and extend their error messages: what can go wrong, with links back to our knowledge base. So we take meaningful feedback from teams and incorporate it in the product that we offer. There's also the challenge of multiple networks. Some ABN Amro daughter companies are on a separate network; we can't connect to them in any way possible. So we just give them advice on how we do it, and we hope they will pick that up and use it to their advantage. And then, of course, the challenge of multi-cloud and hybrid cloud. I don't think I have to explain why this is a challenge. There are connections from on-prem to AWS, in this case, and also to Azure. But what about an Azure workload that needs to talk to an AWS service and also needs to be secure? Those are the challenges we're in the middle of right now. We didn't solve everything, but we are on our way to securing everything in every cloud, basically. And there's one big thing that comes to the table. We briefly talked about how to patch Docker images, and there's one thing that's also very important in the runtime environment. For example, Keith talked about whitelisting specific images: we only allow images that are whitelisted, because they should come from a private registry. We can't just pull images from everywhere and accept that. We could fail fast by rolling this rule out and breaking those runtime systems, but that really hurts, because a system can fail on multiple policies: if this policy is fixed, because you now pull your image from our private registry, it might still fail another policy. So fail fast, but it really hurts, and we are careful with this. Teams need to learn. And for all of this, you need to have your own defender.
A defender is nothing more than another container which checks all of the other containers in your container environment. In Kubernetes, it runs as a DaemonSet: a container on every host that checks the whole host, including all of the containers of the Kubernetes cluster itself. But you need to manage that, and we don't want teams to create their own defender, because they could switch it off, and then we'd need to create an extra alert. So we provide the teams with the defenders; they can't control them. But what happens if that defender breaks, or there's a network hiccup? It loses the connection with the Prisma Cloud console. So we filed another feature request: that teams would be able to control their own defender, but only their own defender and nothing more. We really hope that feature request will land. It would also be really helpful from an upgrades and troubleshooting perspective, because I myself can't log into a developer team's systems. They don't want that, and we don't want that either, because then we would take over and be in control, and that doesn't fit the DevOps perspective. We want to encourage team autonomy, but we also want a little more control and visibility. That's why we still push those defenders, push the scanning results to a centralized console, and keep an eye on it. And there's another big challenge here: how to connect different isolated AWS accounts. I can't go into all of the details, but there are many options. VPC peering, for example, seems ideal and is quite easy, but it doesn't work for us because it doesn't scale: we have too many teams, too many connections. So VPC peering is out of the question. Then there's Transit Gateway, which came after that.
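For readers less familiar with DaemonSets, this is roughly what "one defender container per host" looks like in Kubernetes. It is an illustrative sketch only: the names, namespace, and image below are made up, not the actual Prisma Cloud Defender manifest.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: defender            # hypothetical name, not the real manifest
  namespace: security
spec:
  selector:
    matchLabels:
      app: defender
  template:
    metadata:
      labels:
        app: defender
    spec:
      containers:
      - name: defender
        image: registry.example.com/security/defender:latest
        securityContext:
          privileged: true  # host-level inspection agents typically need this
```

Because a DaemonSet schedules one pod per node, every host, and every container on it, is covered automatically as the cluster scales.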
But we also don't want to use that, because in our opinion we should use PrivateLink. That is more like a direct connection; it's more secure, less wide open, to say it like that. But it also creates a boundary: we need permission from both the development team and the team that runs Prisma Cloud, which adds a manual step. It's basically a bit of a blocker. We streamlined it, but it's still a manual process, so it slows things down a little. Still, it's needed, and we use it. Then let's conclude with some other considerations. We briefly talked about when you should really break builds, and when you should start blocking things. That's the grace period: how long should it be? We're now in a learning state. We are deploying containers and running workloads, but the grace period is there. Teams still need to learn, and we should gather that information and build up that collective knowledge. But it's difficult to say when we should stop with that; perhaps before summer, maybe a little later. And with the big shift to Azure, there's another discussion going on: pure cloud native versus best-of-breed tools. We already built a container platform on AWS; now we are porting it to Azure, to AKS. But Azure also has its own Azure Security Center, for example. We had a chat with a Microsoft consultant, and he said: please keep your Prisma Cloud in there; it handles container security and Kubernetes security very well. But still there's a big movement towards cloud native. We build upon other tools and try to still pick the best tools, but the discussion sometimes pops up again. And to conclude all of this: we can have optimal security, we can block everything that is insecure, but we still need to deliver business value.
We need deliverables with which the business makes progress towards their customers. So our journey continues. I would say security is never done, but we move forward. In the end, we try to protect all of our workloads, and we will never stop doing that, in a multi-cloud, hybrid-cloud environment. When we have our review process optimized, we can scale up further, and that will for sure accelerate the growth of containers in production; it will accelerate the whole container journey. And that fits perfectly into the picture that Keith showed; we're following that too, which is not a big surprise. Even in a highly regulated environment like ABN Amro, containers will be the new mainstream; I think they already are. And we might say, and this is a little joke here, that we're trying to empty out our data centers: we'll send on-prem to /dev/null. I think all the developers, all the nerds, will understand what I mean by that. So basically, that's my story. Thank you very much for listening, and I'm now handing back to Keith to get you through the questions, if there are any from the listeners. Thank you very much. Actually, thanks, Wiebe and Keith, for a great presentation. We now have time for questions. If you do have a question, please drop it in the Q&A tab at the bottom of your screen, and we will get through as many as we can. We do have a few questions as of right now, so we can get started there. Why did you decide to move to AKS from EKS? Were you missing features, or was it a cost-driven decision? I can answer that question. We started with both clouds, basically, about two and a half years back. We moved along, but there were a lot of discussions in senior management about whether this was the best situation for the bank. So in the end, they said: okay, we will choose one cloud.
We will choose Azure, in this case, to concentrate all of our workloads. And yes, of course, we have that container platform running on EKS, on AWS. But thankfully, we used all kinds of tools that we can reuse. For example, a lot of open source initiatives like Helm charts; we're building upon open standards, so we can port that now. Management decides; for us, it was a bit of a surprise. We did see it coming a little, but still, it was quite a surprise. Great. Next question: what were the considerations behind the big shift to migrate to Azure? I guess it's kind of a similar question, but... Yeah, it's a management decision, basically, to focus on only one cloud, to centralize all of the efforts and not have to maintain all those different clouds. AWS is still there as an exit-strategy cloud for the workloads that can't run on Azure. Next question: where does the Prisma Cloud instance reside for multi-cloud, hybrid cloud, and on-prem? So, Prisma Cloud is now running in AWS, and we connect from our local machines to that AWS-based instance. As I also mentioned, all our AWS workloads connect to it via PrivateLink. We're now in the phase of also connecting Azure workloads towards AWS; there's a new project to facilitate that. But in the end, the Prisma Cloud console will move to Azure. That team isn't onboarded yet, but we need to move forward with our container security, and when they are ready, they will also move to our container platform. They will create an instance in their own Azure environment, and then Prisma Cloud will run there, closer to our other workloads. I think that will simplify things. Great. Another Prisma Cloud question: does Prisma Cloud replace Gatekeeper, or do you use both in concert? No, we don't use Gatekeeper yet.
We have looked into it, but in the end we should decide what to use for that. As of now, we're concentrating on Prisma Cloud. Just a reminder to attendees: if you have a question, please drop it in the Q&A tab at the bottom of the screen. Let's see, the next one is: why did you drop Twistlock? Sorry? Why did you drop Twistlock? Well, basically, we're still using Twistlock, but it's rebranded. I mean, Palo Alto Networks took over Twistlock, the company, and now the product is named Prisma Cloud Compute. But under the hood, it's still Twistlock. Maybe Keith can give some more details about it. Yeah, that's a great point of emphasis. I was part of the acquisition from Twistlock into Palo Alto Networks a little over nine months ago, and I'm still embedded on the same product team that I was a part of at Twistlock. All of the excellent co-founders and company leadership on the product teams, not only from Twistlock but also from PureSec, are working together. So ultimately, when we talk about Prisma Cloud, it's just a new name for the same technology that we've been supporting and using throughout the CNCF and cloud native ecosystem. So yeah, it's a good point of emphasis. Great. Let's see. Let's do: how did you get your security team involved? Yes, that's a simple question with a difficult answer. Maybe I should tackle it from the department I was working in, the center of expertise for software development. There, together with an infrastructure team, I worked on the Jenkins part, for example, which required a lot of container security. And I was really enthusiastic about bringing containers to ABN Amro; we didn't have containers in 2016, for example, and it started slowly. So I took that initiative with me, and also for container security.
And in the meantime, a team was being formed, called the secure coding team, that handled a whole bunch of static analysis activities for the applications development teams were working on. Container security was also becoming part of their job, and together with them we defined the policies and rules. We also involved the CISO, of course, and in turn the CISO involved the SOC teams, the security operations teams, for follow-up on any runtime issues; previously on a VM level, but this time on a container level. So it's a shared effort between a lot of different teams. And I was always in the middle, because I started it, I got very enthusiastic, and I really pushed everyone to bring container security to all the different parts of the organization. That's really how the ball got rolling, basically. Yeah, it's kind of complicated; it's not a completely isolated project. It's gathering everyone together from a centralized point of view. I hope that answers your question. Cool. Okay, we're going to do two more questions and then we have to wrap up. Are you spanning workloads between public clouds? Sorry, are you...? Are you spanning workloads between public clouds? As of now, we only have one connection, as far as I know. We have private Azure build agents, for example, which are also connected via on-prem to our AWS cloud. But that's on a very, very reduced scale, to say it like that; it's not rolled out for everyone. We are waiting for a project called the vortex project, and when that is put in place, we'll have a direct connection between the Azure and AWS clouds. But as of now, that's not there yet. So to answer your question shortly: it's only there for a very limited number of use cases. Okay, last one: how is the pricing model for Prisma Cloud? Is it linked to container images or to the quantum of whitelisting?
I think I should redirect this to Keith. Yeah, that's a really good question. Ultimately, we understand it can be really difficult to model consumption and understand how large your infrastructure is when it comes to hosts, containers, and other entities that need to be protected. So our pricing model is essentially built around several conversion mechanisms for how many containers or hosts you have running, averaged every month. At a very basic level, it's just the number of workloads that you have running and that need to be identified or protected. All the different components around scanning and integrations are included; we're really just looking at the number of running entities. And our team always addresses any concerns or questions that anyone may have around that. All right. Thank you, Wiebe and Keith, for a great presentation. That is all the time we have today. Thank you everyone for joining us. The webinar recording and slides will be online later today, and we look forward to seeing you all at a future CNCF webinar. Have a great day. Thanks. Okay. Thank you all. Cheers. Cheers.