Good morning, everyone. Thanks for attending this session. My name is Larry Corvallo. I'm going to be moderating this session with three great panelists who are seated right here. I'll start with having them do a quick intro, and then we'll go through one slide for each company. So we'll start from the right, David. You want to?

Hi, everybody. Nice to meet you. I do see a couple other VMware folks around. So you already know me, but for those who don't know me, I'm David Zendzi, and I lead the global field CISO team for Tanzu, which is the modern apps side of VMware. So I focus on security conversations and help our customers solve modern application problems. And VMware is involved with a lot of open source. So we're really excited to be part of this panel and talk about how that overlaps and influences customer decisions.

Awesome. My name's Hilary Benson. I lead the product management function for GitLab's security, data science, and monitoring solutions. My background is primarily in security. I spent some time in the intelligence community working for NSA, and in private-sector security testing. And I've been building products in the DevOps and cloud native space for the past few years.

Hi, Kirsten Newcomer. I'm with Red Hat, director of security product management. My team focuses on security for OpenStack and OpenShift, and we're also responsible for Red Hat Advanced Cluster Security, which came to us through the acquisition of StackRox. So I've been doing Kubernetes and container security for more years than I can actually count, maybe somewhere around 10. And prior to joining Red Hat, I was at Black Duck Software, where I started getting involved with open source.

Great. So what we are here to talk about is not just the security solutions, but how having a good end-to-end security posture at your company is going to enable you to innovate faster. How does that speed to get code to production in a secure manner enable your stakeholders to work better with you?
How does it also help you protect your reputation? At the keynote, you've just heard about LastPass and what happened to them. And I know so many people, including me, who were LastPass users, who ran away and started doing other things because of that. So on reputation, the costs are in the billions for a poor security posture. What we want to talk about is how you can innovate faster by getting security right, and what the business value is. And what I hope you take away from this session is what our panelists will talk about: how they see this from their individual companies — their open source involvement and their products, and what those can do. So with that, I'll ask them some questions and we'll end up with Q&A from the audience. So please feel free to pipe up with whatever questions you have at the end. So we'll start with Hilary — your slide is up — and if you want to stand up, sit down, talk, whatever, it's up to you.

All right, thanks. Yeah, so I want to talk a little bit about reducing unnecessary risk, right? So I think that risk is often a really difficult thing for people to engage with, and that can be for philosophical reasons, right? At an organization you have to sort of decide what risk means to you, what risks your business is exposed to, what level of risk your organization is willing to accept. And that can look very different from one company to another. But there are a lot of commonalities, I think, when kind of the rubber meets the road and you're looking at practical considerations. There are a lot of things that different organizations share in common. And the practical implications of trying to control risk come down to kind of what your processes are and how you do things. And so from the GitLab perspective, we've been very focused on building a comprehensive DevSecOps platform that helps you to address those specific problems.
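As a rough sketch of what that platform approach looks like in practice: GitLab's built-in scanners are turned on by including its standard CI templates, so security findings show up in the same merge-request workflow the developers already use. This is a minimal, illustrative `.gitlab-ci.yml` fragment (the template paths are GitLab's documented security templates; some scanners are tier-dependent):

```yaml
# Minimal illustrative .gitlab-ci.yml.
# Including these templates adds scan jobs (static analysis, secret
# detection, dependency scanning) alongside the normal test jobs,
# and surfaces findings directly in the merge request.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

stages:
  - test   # the scanner jobs attach to the test stage by default
```

The point of this setup is the one made above: security and development share one set of workflows, so resolving a finding is a normal part of the merge process rather than a separate, later handoff.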
So we're really laser focused on the use case of bringing folks together into kind of a collaborative environment and putting security practitioners and developers side by side to fix problems earlier in the process. So that's something that we've all been hearing about for how many years now in relation to DevSecOps. But it's really been a cool experience to see folks being able to actually do that concretely with their tooling. And so I think the business value or the impact of that for a lot of organizations is that they're able to not only ship more secure software, but also ship it faster, because the time required to resolve issues is reduced if you have folks all using the same set of workflows and the same tool set. And then that also translates into: if your way of doing things, if your processes are better defined, if you're using consistent workflows across your whole organization, it's easier to say what it is that you're doing and be able to prove that from a compliance and regulatory perspective. And so yeah, that's mainly what I wanted to say about that.

Great, let's go next to Kirsten to talk about her slide and what Red Hat does to secure the platform and applications.

Cool. So rather than read the slide to you — you can all see it — I'll kind of build on what Hilary's been saying. So in my role as a product manager, one of my jobs is to be a bridge between engineers and our end users, our customers. And Red Hat is really a platform company — Linux has been out there for quite a while — and we've been investing in OpenShift, a CNCF-certified Kubernetes distribution, since Kubernetes 1.0, and contributed a lot of code that our enterprise customers needed to feel comfortable with the security of Kubernetes. I really liked one of the points in the keynotes this morning, that oftentimes when new technology is built or developed, there are assumptions that are made that may or may not pan out in the long run.
And originally Kubernetes was kind of designed for internal services use, and there was less focus on insider threat at that time. And so at Red Hat, we really believe in building security capabilities into the solutions that need to be deployed in the enterprise, and we start with security capabilities at Linux, which is a key element for running your containers. You need namespace isolation, you need seccomp, you need a whole range of things that protect those running containers on that shared host. Then at the Kubernetes layer, RBAC was not something that was originally present when Kubernetes was first open sourced. We contributed RBAC, we contributed a range of things that enabled security capabilities for the enterprise. Security context constraints, for which the upstream version was pod security policies — this is a Kubernetes admission controller plugin that helped to provide guardrails for ensuring that pods with privileged requirements were not accidentally deployed onto a cluster in conflict with the security policy of the organization. And so we've continued to build up that stack, and Advanced Cluster Security — StackRox — is very focused on workload security, enabling integration at build time, deploy time, and runtime security analysis. So really we wanna be thinking about — when I think DevSecOps, I like to separate between DevSec and SecOps. And we heard some references to that this morning as well. So for me, the DevSec part is ensuring that I am shifting left, that I've got security guardrails in that pipeline. And then the middle piece where they intersect is deployment; I need guardrails there as well. And then I need runtime detect and response, which is the SecOps, right? So I kind of need all of those together. And as Hilary was saying, I think one of the things that we see in the container and Kubernetes world is that the technology really requires — it's the same security principles, but the way you implement them can be different.
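A concrete illustration of the kind of pod-level guardrail described here: in upstream Kubernetes, pod security policies were succeeded by the built-in Pod Security admission controller, which enforces guardrails via namespace labels (OpenShift's security context constraints play an analogous role). This is a minimal sketch; the namespace name is hypothetical, but the labels are the standard upstream ones:

```yaml
# Hypothetical namespace; the labels are the standard upstream
# Pod Security admission controls. With "restricted" enforced, any
# pod that requests privileged settings (privileged: true, host
# namespaces, etc.) is rejected at admission time — the guardrail
# fires before the workload ever runs.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                  # hypothetical
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```

The design point is the same one made in the discussion: the policy lives in version-controlled configuration, so the security team can review the guardrail itself rather than auditing every deployment after the fact.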
And to do that effectively, you do need a common language between your development and your operations and your security team. And managing security policy as code is a way to start to move towards that common language, so that the security team has comfort that the guardrails are there, and the developers can read and understand the code that the security team wants, and it improves that communication process. So this is a big focus for us.

Great, so we'll go next to David and talk about the VMware use case, which goes beyond shift-left security; we'll dig a little deeper into that.

And I'm gonna do some more — I'm actually gonna build off of both of yours, because on the VMware side, we actually start with Spring Boot, we start with the languages, and go from there up to the operational platform. So it's a big picture across the whole application and the parts that make it up. And from my world, the conversations I really get involved with from the Tanzu and VMware side are really the people and processes. That's the hardest shift we're talking about. We're talking about security teams that traditionally don't understand the apps; they're involved too late in the process. How do you actually get this to happen, when the changes we've had in the software development processes — agile development, the operational platforms, automation happening on that side — those are pretty well-established patterns? We're making it better through tools like you have and like we in VMware have. But it really is that people and process. How are we changing the way we think about that? Common language, you mentioned. How do we get a language? Because security people — I was having a conversation with a gentleman here earlier — the conversations that security people have are different, because they're looking at things from that risk point of view. How do you actually analyze the risk and understand the vulnerabilities and the problems in the environment?
And just having a tool that shows you a vulnerability doesn't tell you what the risk of it is. So as we're looking at things from a VMware point of view, we're actually starting from the developers. How do we actually enable developers to do development and not have to worry about the platform they're deploying to, but have knowledge of how it works? And how do we support the operations teams to support the applications that are running on them, while all talking to security in the middle? So one thing you'll hear me or any VMware person come up with a lot is talking about balanced teams. How do we actually do this? And usually when you think balanced teams, you're thinking, oh, a developer, a designer, an operations person, an SRE. Where does security fit into that picture? And how do we actually fit into that? And then we start layering on top of that all the open source that all of us depend upon every day. We saw some of the keynotes getting into some of those things — it's great that there've been some PRs committed to a bunch of open source projects, but I think what they said is maybe 15% might respond. How does that really solve the problem? How are we actually getting into the depth of knowing — well, great, once more we have a libxml security problem, but is that really impacting my code? So how do we enable a developer platform so developers can focus on code and not worry about the simple stuff — upgrading should be easy today. We know how to do that, we've learned that over the years. How do we actually make it so your developers, and then the platforms you're running on, can have those table stakes in place, so you have real risk discussions and make that language a common thing for everybody?

Great, so I had some questions from their slides. So I'll go back to Hilary's slide first and would like to ask her a question about what are the most challenging security compliance requirements and what industries are they prominent in.
We see different industries having different security needs, obviously — healthcare and finance are most prominent, along with government. So I would like to dig a little deeper into that, Hilary.

Yeah, that's a great question. I think you highlighted the industries that I would point to, actually, in terms of very specific, pretty intense compliance regulatory standards that can exist and that these companies have to meet. And there are very specific things there in some of these industries, but something that's been interesting for me, working for GitLab and with such a broad customer base — seeing everything from small startups to mid-sized SaaS companies, government agencies, these large, very regulated companies — is the commonalities that exist between all of them. And so you may have something very specific that you have to do at one company versus another, but in terms of how that actually happens on a day-to-day basis, how you actually take action to prove compliance with something, there's a lot of commonality between the sort of actions that you need to take, the processes you need to put in place. And it becomes a little bit more about how visible your process is, how well documented it is, how well people actually adhere to those compliance processes, how easy it is to revise those things and insert a very specific requirement or very specific standard that a given company might need to meet — how composable is it? And so it becomes more about the system or the framework for enabling compliance than it does the bespoke requirements that might exist in one industry versus another.

Anything to add, Kirsten? And then we'll hand off to Dave.

Just quickly, if you don't mind. So one of our focuses — you'll probably hear me say "as code" a lot, right? Policy as code, security as code, compliance as code. So we're very focused there, and, you know, Hilary rightly is painting a broader picture.
At Red Hat, where we play is automating compliance with applicable technical controls, right? So that's only a subset of what you need to look at when you're looking at compliance. Process is a big part of that. Now we are also investing in process around supply chain security, but we've been doing compliance as code for a long time. For RHEL first, now for OpenShift. Multi-cluster, compliance as code, configuration. It's interesting — so many of the frameworks actually have overlapping controls when it comes to the technical side. So you can actually get some big bang for the buck on the technical piece. Other than supply chain security, I'm not gonna hit process. And the other thing, to circle back to some of what David was saying, right, is that people piece: we have to be able to provide human-readable output and evidence to our auditors, and auditors don't understand the new technologies either. So one of the things that can become challenging in the compliance space is, you know, how do you work with those auditors to provide them evidence, but also to help them understand that, hey, traditional perimeter-based network security controls really aren't always applicable to a Kubernetes cluster where I've got a software-defined network, and I'm using a different way to do micro-segmentation and to create isolation. So it's a really interesting space, I think, for us.

We're gonna keep building off of each other here. But no, actually, that's one of the most interesting conversations I have with customers — really that, you know, you traditionally have, say, a banking environment where you have your firewall team, your architecture team, all these different teams segmented by separation of duties in their different environments.
Now you introduce software-defined networking, either through, you know, something like NSX-T, where it's a global network, or it's Kubernetes and CNI — you know, different layers. Which one's responsible? Is it your networking team? Is it your firewall team? Is it your architecture team? Is it your router team? Who's responsible for managing that? And again, that's a people process. How do we change those pieces? The frameworks, you know — like PCI, which just recently, late last year, published their guidance, the best practices on containers and containerization. And the sad thing is, it doesn't directly apply back to PCI. It's best practices; you still have to apply PCI, which gets you into things like, well, section five — you're talking about antivirus. Well, really, how do you do antivirus in an ephemeral environment? You know, realistically, it's the same conversation we're having with security teams and CISOs around the globe. The mindset and the way they built policies — and you talk to some of these firms, these policies are 40 years old in some places. Not only has the technology changed, the language has changed, and the approach is no longer a department of no, but a risk management process. How do we wrap our heads around that conversation? And infrastructure as code is great, security as code is great, compliance as code is great, but the translation and the reality of making that happen is so difficult, because there are nuances in how it's interpreted within the company, and how do you express that language? I've seen so many companies try so hard to do all of it as code without changing the people, and it doesn't succeed.

And to your point, right, many security teams don't read YAML. They aren't coders, right? And also, some of them, kind of across the team, may know the policies, but they may not any longer know the why behind the policies. So trying to shift a conversation, right?
To get past that department of no, you have to shift the conversation from thou shalt to: what are you trying to accomplish? What's the attack vector? You know, what's the threat? And let's talk about how else we can address that.

Okay, after this, let me go to Kirsten. You talked about automated guardrails, and I want to know — because there are so many issues at various companies with the time taken to make sure that all security policies are catered to, and how that affects speed of innovation. So Kirsten, when you say automated, does it reduce the need for oversight by the security team, and how do the efficiencies gained by automating the security process help?

So I think once you've got agreement on the guardrails and you have the automated guardrails in place, it does improve efficiency. That conversation, as David's been saying, right — that conversation about the guardrails and what they should be, and how do I have evidence that they're in place and that they're being met — can take some time. But I think as a community, we've really evolved in this space, right? So Kubernetes admission controller plugins have been around for a while. Now we're seeing things like policies that can be used and tracked with eBPF to say, if something's violated, alert at runtime. We're also seeing — well, one of the things I like is the guardrails around network policies. To your point, network policies in Kubernetes in particular — that is one of the most challenging conversations to have in a large organization, because if there is a network security team, they're still thinking about firewalls between the three tiers of the application. They're still thinking about — you can have a firewall outside the cluster, but then they get all anxious about the node-to-node traffic. What does that mean, right? And how do I control that? And Kube network policies don't translate for the networking team. And so one of the areas that we're also investing in is enhanced network observability.
That's available with StackRox, or ACS, but now we're also adding that directly into OpenShift to kind of make this easier. So I think it's a combination of those conversations. They do have to be iterative, and ideally you find a team — like if you're working with an org that is more traditional, hasn't yet shifted, especially your security team — find a team where you can hopefully get somebody from the security team to collaborate, do this kind of together, talk through what those guardrails are, show them how they're implemented, show them the evidence. Maybe they don't read the YAML, but if you can show them a visualization of the output or the controls, the security team, the network security team, over time will be more comfortable. They'll still be a little uncomfortable that their role is challenged — they're maybe not the ones writing those network policies — but if they can see what the policies are doing and have some oversight, that provides greater comfort, and that frees all of us up, any organization, to start moving into cloud native on-premises or in the public cloud, when the teams can see policy in a way that they understand. So again, if I think about security policies — you know, the Kube-native security policies available in StackRox — they're written in a way that, if you were looking at the code, might not make sense to the security team, but if you look at the policy in the UI or via the API, it does make sense, right? Don't admit images with critical known vulnerabilities. Don't allow external access to this application. So that translation piece is really important, and I think we're seeing more growth in the Kubernetes community in that space.

So, so much on that one — maybe we can extend our time, I think.

No, actually, two sides to that.
One is, it's great — I wanna get into the networking side, because networking is a big passion of mine. But separately, on the application side, going back to the balanced team and the communication, that people-and-process change: part of this is we have tools — like, we can all scan our stuff and know if we have vulnerabilities — but what do you do with it? What are the basic table stakes, where we have automation handling your platform and your applications? So we don't need to sit there and pay $200,000 a year to Qualys — sorry, Qualys — to actually scan all of your stuff. You know that you are running an automated environment that will always have the latest version of whatever open source or commercial software you have available to your environment. Once you've gotten those table stakes done, that's the basics of those guardrails. You can start going into this question around, say, network policies, and you'll never have a good conversation there if you don't bring your networking team, your security team, but also your developers. You know, as your developers are having their weekly sprints for the agile development they're doing, they know: oh, I'm doing a connection to this third-party process, I need to have this import, this product. They know that — they're writing the code that makes the connections. Wouldn't it be great if that code were documented, and you could actually tie what the developer's doing into an automated policy admission? You know, there are things we can start focusing on once you get past those table stakes of just spending all of our time scanning and upgrading. Let's get past that and focus on real stuff. And I'm sure GitLab is doing some really neat stuff on that side too.

And just really quickly, we added shift-left network policy generation for the app dev. It's still in tech preview.
And then eventually, on our roadmap, you'll be able to compare that network policy generated for the application against a system policy that says this is what's allowed and not allowed, so that the developer knows before they even deploy whether it's gonna meet it.

Yeah, I think, touching on things both of you were talking about — actually, some of the things that we're working on are really around kind of up-leveling. Like this problem that you're talking about — let's get past this. Why can't we patch? Like, why are we still talking about this at this point? It's so hard, right? The practical challenges of that — it's actually really hard to do. And part of the problem there is the communication between the security team and the development teams. And so that concept of: let the security team express what they need in a policy that works in their language, and then let the developer see what the security team wants directly in their merge request when they're trying to merge code, and just automate that process — like, solve that problem. And so, it's a little bit nuts that we're still at this point having this conversation about patching and about basic things, but that's still where a lot of the problems are for a lot of organizations these days.

One last thing before we go on. I do want to point out, we can also ask our developers for better test-driven development. We can automate anything, but the fear is we're gonna still go back to the late 80s, when we had Java upgrade problems and things broke. We're gonna have that fear that's inherent in our developer communities as well as our security communities. Everyone is afraid to make a firewall change because it's gonna break something. No one wants to be the one whose firewall change broke things again. I'm sure some of you have been in those conversations more than I have.
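The network-policy guardrails discussed above — the kind a developer could check against before deploying — are ordinary Kubernetes objects. This is a minimal sketch of the "don't allow external access to this application" policy mentioned earlier, with hypothetical app and namespace names (note that enforcement depends on a CNI plugin that implements NetworkPolicy):

```yaml
# Hypothetical names; the API is the standard networking.k8s.io/v1.
# Selects all pods labeled app=billing and allows ingress only from
# other pods in the same namespace, so traffic from outside the
# cluster (or from other namespaces) is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: no-external-access
  namespace: billing            # hypothetical
spec:
  podSelector:
    matchLabels:
      app: billing              # hypothetical
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}       # any pod in this same namespace
```

Because it is just YAML in version control, this is exactly the kind of artifact that can be generated from observed application traffic, reviewed by the security team in a UI, and diffed against a system-wide policy in the pipeline — which is the workflow the panelists describe.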
So how do we actually get to a point where we're actually enabling the developers to do what they need to do, and we can do the automated upgrades around it? We can upgrade platforms, that's easy. I can upgrade OpenShift, I can upgrade Tanzu. I can have pipelines. Your infrastructure itself is code, like any application. Treat it like an application. Treat your platforms, and the platforms the developers' applications run on, the same way. Automate the upgrades of all that — but it doesn't work for the application if the developers haven't written tests for their applications. So when you upgrade their Node or their Java or their Go, or whatever the platform pieces around it are, they can actually test and validate that the application works on the new version. Without that, we can't automate anything.

Okay, so next question for David, before I open it up for the audience to ask some questions. Open source — this whole conference is on open source. We have a lot of issues with folks injecting vulnerabilities into open source. So my question is, how does an organization justify using open source, and how do you overcome those challenges that they may have with open source security?

It's actually kind of a loaded question. If you go back far enough, if you look at the government, they're like, oh, we're not going to use open source. And everyone's laughing and going, you are using open source. There's no way around it these days. What were some of the stats? I think we had some of them in the keynotes — you know, 80% or 90% of all today's software is open source. The rest is just the little bit you've written. We're all using open source, and we need to have better visibility into the risks of using that open source. So, how do you have a partner who provides these things, like Red Hat or VMware — and, you know, do you commit stuff back to open source, do you share? So, how are we supporting this community? How are we making this better and having visibility into those risks?
You know, are you able to go to your partner and say, you know, I'm at the latest version — are you always making sure that's up to date? Are you contributing PRs back to the original open source? You know, like the scan they mentioned this morning, where they ran and patched thousands and thousands of apps — if maintainers don't respond, maybe the community should say that application or library, whatever it is, shouldn't be used. We as a community should say, if they're not responding to a PR, that project is dead. Maybe we need to make some hard decisions on that, but open source is used. There's no way around it. Every one of us has it in our pocket, on our phone, on our watch. You can't go anywhere. Every device we have in this room has open source in it. There's no getting around that — but know the risks of it. So having a vulnerability itself is not a bad thing, if you understand the attack vectors and the risks of having it, and how you control and mitigate that. Because everything is gonna have vulnerabilities. There's nothing shipped today that doesn't have a vulnerability in it. Nothing.

Yeah, I would just add to that and say that just because something is secret doesn't mean it's secure. And frankly, the more you're using common components that lots of other people are using, the better the chance that it is more secure, because it's not something that's sort of one-off, only used by one company because they developed it internally. So there's a lot of value there, I think, to having community around open source components, for sure.

And obviously, everything Red Hat delivers is built from and with open source software. We're not even open core. Everything is open. I think the one thing, David, that you called out that I do wanna build on — when we say it's easy to upgrade, I think that there's a lot of nuance there.
In fact, not just because of the workloads and the application requirements, but also when you think about a Kubernetes platform, which has so many components, and then layered services that you're adding on top of that. You might be using Istio; every Kube platform needs a CNI. There's a whole range of things. And so one of the things we intentionally did with OpenShift 4 was to manage that platform with Kubernetes operators, which allows us to use the declarative and automated nature of Kubernetes that is normally applied to workloads and apply it to the platform components. That's how we actually simplify the upgrade of an OpenShift platform, which includes the host operating system, managed as a component of OpenShift. But you still have to think about — we still have plenty of enterprise customers who take their time moving from one version to another, partly because of the idea that everything is always up to the latest. That's not necessarily the world I see for our customers, because they have approval processes, they have certification processes per version, they have to worry about whether the apps work; it can be a six-month process to validate. And then in the telco world, where we also have a lot of customers, it's an even longer horizon. And containerized network functions are kind of this new space for Kubernetes, where those functions need privileges that a traditional microservice-based app is not going to necessarily need. So it's this really wide-ranging space. So given that we can't always get the latest, you wanna be paying attention to where your open source comes from. And if you're using upstream open source, yeah, you're gonna want the latest, because that's where the most fixes are, but you're gonna have to evaluate whether it breaks the other things you're integrating with. And I really like what you said about risk, right?
We see a lot of, I see a lot of regulations or companies saying every vulnerability has to be fixed. It's not possible. It's also not a real risk-based conversation when that's what they say, right? And so at Red Hat, when we have these conversations, you know, we have a product security team that does evaluation for any CVE that affects Red Hat content. What is the actual impact? You know, the data you get from NVD, it's wonderful. We all rely on it. But they have to look at the worst case situation. They have to look across Linux and Windows, whereas our product security team can say, look, we know this is how we compile the code. These are the kind of mitigations available to you. And so you really need to be thinking about when you're using open source, not just are the CVEs fixed, but are there maintainers? How reliable are those maintainers? You know, is the project neglected? And then, you know, how are you gonna manage the lifetime of your use? And you're absolutely right. We all, everything includes open source these days, right? It's amazing. And I think you just hit on the point of why would you have a commercial vendor or partner for open source? It's VMware, Red Hat, GitLab, we're all, I'm sorry. What are you talking about? I'm sorry. But no, if you consider the rationale of this, you know, the work that goes into testing, the risk analysis, the security teams on all of our companies that actually do the work, you know, going back to that test-driven development, I said, if you go grab Kubernetes from upstream and run a release test across all the tests they've built in, they've designed it for all the pieces in Kubernetes. But now if you start looking at all the other CNCF projects that can work in Kubernetes, who's doing the integration testing between all of those? It's generally vendors like ours who are doing various pieces of the puzzle. You know, our application platform ties into Kubernetes just like other stuff. 
So how do you do that regression testing across all of that? How do you make sure the functional versions you're releasing are tied together? And when there are security problems, how are you doing that risk analysis and providing it? And I do want to point out, if anyone here deals with PCI, go look at PCI DSS version 4. On the vulnerability side, as of next year they're going to require a risk assessment on low and medium vulnerabilities. You already have to patch high and critical always; starting next year, you have to have a risk assessment on low and medium as well. So you're probably going to come back to all of us and say, okay, we're using your product: what are the low and medium vulnerabilities, what's your risk analysis, and how does that apply to my space? So with that, the conversation has been very good, with everybody contributing, and I do want to leave some time for our audience to ask questions. Since we're being recorded, please ask your question a little loud, and I will repeat it so that it's recorded well and the audience can hear it. So, any questions? Thoughts? Points? Yes. Okay, one of you want to repeat the question? It's about SREs, and it's a big question: what are we seeing, or recommending, for SREs and typical operations teams around shifting left and getting a better security conversation going? And how do we help organizations take threat models and shift those left as well? Because remember, at any of the large firms I'm working with, they could have 20,000 developers but maybe only 200 or 300 security people. How does that scale? You need security champions, you need gamification, you need things that encourage people to do their best. Because I'll be honest: developers don't want to write bad code.
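The PCI DSS v4 point above implies a simple triage rule: criticals and highs go straight to the patch queue, while lows and mediums now need a documented risk assessment. A hedged sketch of that rule follows; the CVE identifiers and severity labels are illustrative, and the actual standard, not this snippet, defines the requirements.

```python
# Illustrative triage loosely following the PCI DSS v4 discussion above:
# critical/high findings must be patched, while low/medium findings are
# flagged as needing a documented risk assessment. Example data only.

def triage(findings):
    """Split (cve, severity) pairs into patch-now vs needs-risk-assessment."""
    patch_now, needs_assessment = [], []
    for cve, severity in findings:
        if severity in ("critical", "high"):
            patch_now.append(cve)
        else:  # low and medium must carry a documented risk assessment
            needs_assessment.append(cve)
    return patch_now, needs_assessment

findings = [
    ("CVE-2023-0001", "critical"),
    ("CVE-2023-0002", "medium"),
    ("CVE-2023-0003", "low"),
]
patch_now, needs_assessment = triage(findings)
print(patch_now)         # -> ['CVE-2023-0001']
print(needs_assessment)  # -> ['CVE-2023-0002', 'CVE-2023-0003']
```

In practice the "needs assessment" bucket is exactly the question customers will bring back to vendors: what is your risk analysis for these lows and mediums, and how does it apply to my environment?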
They take pride in the code they're working on. And when a package has some upstream problem that wasn't their fault, how do we help them get past that? You need a blameless culture, where people are comfortable bringing up issues and discussing them, and anything you can do to inspire that and help those teams scale. We need all of the above. Yeah, I would double down on the concept of security champions. Some of the most successful customers we've had using GitLab are the folks with that security-minded culture, where the security team are sort of ambassadors, people who are there to help everyone understand what they need to do to build secure software. So I'm a big fan of that when it works well. Yeah, and for a while we were hearing about business information security officers embedded with the application team so that you could have those conversations up front and early. I don't hear that as much as I used to. But on the attack vector or threat modeling angle, depending on how deep you're getting, you do still need experts who understand how to do threat modeling, who understand things like pen testing and digging in there. One of the things I thought was awesome is that in 2019 the CNCF open sourced a security audit of Kubernetes itself, including a threat model, and I know there's a new one in progress. Unfortunately, the URLs I had from 2019 broke because the content got moved around, but I squirreled away the PDFs, and one of these days I'll get them to make those URLs easy to find again. Another approach is to look at things like the MITRE ATT&CK framework.
How can we, and this is something StackRox can help with, map security policies to the MITRE ATT&CK framework, and then have those policies available at build time, deploy time, and run time, depending on where they best map, right? Mapping those automated policies and guardrails to MITRE ATT&CK can also help create a bridge between security teams and developers. The attack techniques are out there and documented, and developers like code; they like things that are detailed and have a lot of information. They don't always have time to get up to speed on those things, right? They're under a lot of pressure to deliver the code that provides business value, but that mapping can give them some confidence. They're not just hearing somebody on their team with a rule that was developed 40 years ago telling them thou shalt; they can see an external source. Okay, so we have basically run out of time, but I'm not seeing anybody running out of this session, so let's take time for one more question if there is one from the audience. Yes, please go ahead, and say it loud; otherwise I'll try repeating it. Yeah, I think I caught most of it, but I'm not sure I caught the end. So we've talked a lot on this panel about the importance of shifting left and automating policies, and the question is, what about the importance of incident response teams? And I missed the last part. Okay, right, and as noted, breaches can be inevitable; there are always new zero-days, et cetera. So some comment about incident response from us would be appreciated. I'll jump in first: forensics and incident response is one of my favorite topics. I built my first bootable forensics CD back in the early 90s, so I've been working with forensics and incident response for a long time. In this ephemeral environment, forensics is going to be completely different from anything your team has done before.
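Before moving to the forensics discussion, here is a minimal sketch of the policy-to-ATT&CK mapping idea raised above: policies grouped by the lifecycle phase where they apply, each linked to technique IDs a developer can look up. The policy names and the specific technique pairings are illustrative examples, not StackRox's actual rule set.

```python
# Illustrative mapping of automated security policies to MITRE ATT&CK
# technique IDs, grouped by the lifecycle phase (build/deploy/runtime)
# where each policy is enforced. Pairings are examples only.

POLICY_MAP = {
    "no-privileged-containers":    {"phase": "deploy",  "attack_ids": ["T1611"]},
    "no-fixable-critical-cves":    {"phase": "build",   "attack_ids": ["T1190"]},
    "alert-on-shell-in-container": {"phase": "runtime", "attack_ids": ["T1059"]},
}

def policies_for_phase(phase):
    """List the policies enforced at a given lifecycle phase."""
    return sorted(name for name, p in POLICY_MAP.items() if p["phase"] == phase)

def techniques_covered():
    """All ATT&CK technique IDs the policy set maps to."""
    ids = set()
    for p in POLICY_MAP.values():
        ids.update(p["attack_ids"])
    return sorted(ids)

print(policies_for_phase("runtime"))
print(techniques_covered())
```

The point of the mapping is the bridge it builds: when a guardrail fires, the developer sees a documented external technique ID rather than an in-house rule of unknown provenance.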
All those classic tools don't work in this space. So part of your incident response plan is designing an architecture that has those controls and audits in place. You can't be in the middle of an incident and go, how are we going to do this? You have to plan it, and you have to test it; without that, you don't have a plan. And part of that goes back to the architecture, having the proper design when your developers are doing sprints. One of the most successful banks I worked with deployed an application in eight weeks: 58 microservices. I'll bet somebody from that bank is in this room somewhere, because it's a large bank. The only way it worked is that security was embedded in every weekly sprint and said, one, here's what you need to have, from both a policy point of view and a risk point of view. And two, because they were doing test-driven development, the developers wrote tests to validate those requirements so they could continuously test them with every release after that. Part of doing that was making sure that when they did have an incident, they knew how the data was used, how the application was used, and all the controls around it, and they had a plan for analyzing where the attack vectors were, what the actual path was, and where the attackers went. There are so many new tools here that you have to start planning now; if you don't, it's going to overwhelm you when something does happen, because it will happen. Yeah, I also think that the existence of declarative and immutable infrastructure has really interesting implications for what you actually do in response to an incident. In the old days, if you had an application that was a pet, the concept of taking down a VM serving a business-critical application was mind-blowing.
Today, the concept is: oh, there's something that vaguely looks wrong with this application over here, but it's in a container, so let me burn it down and then figure out what's going on. That's just a very different world to operate in, and it has really interesting implications that I think we haven't seen play out yet. Yeah, a couple of things related to this. As you say, there are going to be things that happen at runtime, and you need to figure out how to respond. It is a challenge, but there are some options available now in a declarative environment. If it looks like a running container or pod was tampered with, you should be able to kill it, and it will be redeployed from code. Now, it's possible that doesn't finish everything, that doesn't protect everything, right? But how do you manage an incident in a business-critical application when maybe you can't afford to take that app down entirely? The option to kill the pod and redeploy from code while you investigate is something to consider. And one of the cool things happening is that the Kube community has really matured. I have had so many customers ask me, I want to be able to freeze the environment where the incident happened so that I can investigate and drill down, and that has not been something you could do in Kube. The community is now working on capabilities for Kubernetes that will let you copy something off to the side and create that sandboxed environment. I'm blanking on the new name for the feature, but I'm really excited to see the community investing in that. I do want to give a quick note, though: if you are in public cloud and you do this, and you're a mid-sized or large company with, say, 200,000 or 300,000 containers, and you start freezing things and automating this, you might end up with hundreds of thousands of frozen containers.
Remember, forensic work takes a lot of effort for your security team: going through and analyzing, down to the bits and bytes, what happened. And if you're under attack every day, what's the backlog of new images to analyze that you're giving this team, and how many millions of dollars per year are you going to spend on infrastructure that's slowly being worked through? So it's a great tool. And on the capability I mentioned, it hasn't been released yet; it's still alpha, so we don't know yet. Now, remember the keynote, for those who watched it: there were those great slides showing things like SQL injection and the visibility of things. Again, bring the idea to your developers. Say, for example, you're doing input validation: attacking an application is very, very noisy if you're logging input-validation failures. How can you build on some of the metrics coming out of the tools all of our companies are working on, so that when an attack is actually happening, you can respond before the attackers gain a foothold? That's what shifting left is: it really is enabling our developers to take the tools and techniques they have and better enable a security conversation from them. But I think SecOps is just as important. Okay, so with that, if you have any feedback on how we can improve this session next time, please scan the QR code and provide it. We had a great conversation; give our panelists a round of applause. Thank you. And I'll be hanging out at the VMware booth if anyone wants to talk. Thank you all so much.