Good morning, good afternoon, good evening, and welcome to a very special episode of In the Clouds here on Red Hat Live Streaming. I am Chris Short, a host and showrunner of Red Hat Live Streaming. I'm joined by a few of my favorite fellow Red Hatters: Kirsten Newcomer, Andrew Clay Shafer, and Jamie Scott. Today we're talking about DevSecOps, shifting left, and all things security from a holistic perspective. This is part two of the series; we did part one on The Level Up Hour, and I'm gonna grab the link to that. Kirsten was so nice to join us during a session for The Level Up Hour, so thank you for doing that, Kirsten.

But, you know, while I'm doing that, let's debate if cheesecake is a cake or a pie. And I think Andrew had one of the better answers in the pre-show run-up to this, I feel like.

You want the whole answer? I started by saying this is kind of like the particle-wave duality problem.

Yes, yes.

So it can manifest as it needs to, depending on what you're trying to do at the moment. And then I started working on establishing a taxonomy of the qualities that a pie has and the qualities that a cake has, so we can start to build up to the types of cheesecake that are gonna manifest as a pie or a cake, depending on, you know, your perspective. The observation that you're making on the system.

Right. There you go. There you have it, folks. And then we got into if a hot dog is a sandwich.

Cheesecake is not a sandwich. We did figure that out.

No, we did, yes. And in my house, the taxonomy is height. A cheesecake is a pie when it's cooked in a pie dish and it's about the height of a pie. And it's a cake when you buy it from The Cheesecake Factory and it's about six inches or more, nine layers of cheesecake somehow crammed on top of each other.

Yeah. If that were the criteria, then every cheesecake I made would always be a pie in everyone's eyes. My mother always made pies. Cheesecake pies.

Right. Exactly.
So I like Andrew's answer the most, though, because it's pretty much whatever the hell you want it to be. So there you go.

Wait, what do you mean? I've had it. What do you know?

Cheese Pie Factory, though. I'm petitioning a change of their name.

So we've done this rather, you know, long experiment of asking people what they think, and we all think it's pie. Anyways, enough about cheesecake. What exactly is shift left? What do we mean when we say shift left, Andrew?

Well, I think we have to start with recognizing that that's framing things from languages that read left to right, and typically we draw charts that go left to right, which is not true for everyone. But it's getting involvement and accountability earlier in the process, right? So if you're in the DevOps community, or in the DevSecOps community conversation, but even, you know, previous kind of iterations of this in lean manufacturing and all these other process automation, process improvement movements, there's a general theme of making the feedback loop shorter and putting the accountability, you know, as close to the actor as possible. And in order to do that, you need those inputs as far left in the process, assuming you believe that things start on the left, as you can get them.

Amazing. So Kirsten, in your words, what is DevSecOps, and how does shift left apply in the context of DevSecOps?

Great question. And I'm sure that we'll have additional comments from Andrew and Jamie. I've been known to make inflammatory comments about that on the internet, so I'm going to try and pick my words a little more carefully than in the pre-show. So as an organization adopts DevOps, there is a tendency to really focus on dev and ops, and on making the changes that help them work together better.
Although if you were to look at a book like The Phoenix Project, security is absolutely intended to be part of a DevOps approach to delivering software, it can get forgotten. And so the reason I think that we see the emergence of the phrase DevSecOps is that it's a much more explicit way of calling out that security needs to be an integral part of this process.

Exactly. Yeah. And in the pre-show conversation, and I was in the room, there was always a security presence in the DevOps conversations, but it was really these forward-leaning kind of people who were part of that conversation. And what I like about using DevSecOps is that explicit invitation to the more traditional security community to come into the conversation. So I think there was always a security theme in a lot of the DevOps movement, among a lot of the DevOps people, but that explicit invitation is nice.

And then another thing I think is worth pointing out, and this is related to the transition in the practices of operations and also in the practice of security, is that the practices are not standing still. We saw this trend with operations, and we're going to see this trend with basically everything, software is eating the world, if you will: operational practice became more and more about writing APIs, writing this code that did all this work, right? Infrastructure as code. And you're starting to see the same type of work, where a security professional in 2021 needs to have some understanding of code, needs to be able to express some of the policies, some of the practices, as code. So there's the dev part of the developers making product, but there's also the dev part of security professionals contributing to that outcome.

Absolutely.
And I think that's a place I'd also be interested to hear from Jamie about, because he and I talk a lot. If you look at some of the tooling that supports this shift, and you think about a typical, I don't know if there is such a thing as a typical security engineer, but security teams in a lot of enterprise organizations today historically have not needed to be familiar with the CI/CD process. They have not necessarily needed to understand things like YAML.

Nobody understands YAML.

At least I can read it, but fair enough.

YAML is the reason why I went from tabs to spaces. My tabs are now two spaces. Oh, man. Just to give you an idea. You see incomprehensible keys.

Yeah, but one of our early adopters, one of our OpenShift customers who decided to use the adoption of containers as an inflection point for really moving to a DevSecOps approach to deploying their solutions, their chief of cybersecurity decided that he was going to require his security engineers to learn the CI/CD process and to learn things like YAML in order to contribute and participate in that DevSecOps pipeline. And, you know, we're not allowed to share their name, but there's a great YouTube video out there, so they kind of outed themselves, of the organization talking about how they tackled this transition. But Jamie, I know this is something you and I have talked about a few times.

Yeah, so to me, shift left is a strategic business decision that starts with culture, but the end goal is to reduce cost while maintaining clients, because at the end of the day, security has traditionally been an expense center, and this is a way to meet new revenue goals and decrease sales cycles.
And ultimately, as you begin to look at DevSecOps as a whole, you see that security has a huge opportunity to contribute to the bottom line. And when you look at it from that perspective, you see that by reducing the cost of testing by shifting left, you can fix bugs more effectively, you can plan, you can make prioritization decisions in order to not bolt things on. And I've practiced this for years at this point. What we really see is that it starts with culture, and to what you were saying about security engineers being committers: I personally view that security has traditionally been an auditor role, and DevSecOps is a means to give them skin in the game and say, you're going to help contribute to this.

And that's something that I think is a really valuable mindset shift for the security community. It's a mindset and it's a skill set shift as well. But that's part of what creates a lot of resistance in some of these organizations. And there's a lot of work to do, and we're helping do that work. But when you start telling people you've got to do this thing this new way, and they've attached their identity to the tasks that they've done for maybe a decade in some cases, then there's a lot of work to do to get people not just to the culture or mindset of it, but to accept that they're going to be able to contribute, because they're kind of afraid to make that shift.

And you've both mentioned culture, and I agree, that's absolutely key. And I think people respond to how they're measured and what incentives are put in front of them. And so, Andrew, like you're saying, what's their existing identity and how have they built that up? That's probably been built up partly by previous modes of measurement. And I like what Jamie said about security as an auditor versus a committer.
And so it's really critical that the executives and the managers help provide new incentives and a different way of measuring, to help teams pick up these new skills and feel good about making that change.

So something I say on a very frequent basis interacting with our customers right now is: show me your org chart and your funding models, and I'll predict what's going to be hard for you.

Interesting. Interesting.

And so there's this other thing, I think, and maybe this is the setup for the cheesecake conversation, but security is not one thing.

True.

Right. When you think about security and what security professionals do in organizations, and I've seen different versions of this, you can probably break it down into like 15 different types of activities and responsibilities, depending on the maturity, depending on the rigor. And it's interesting: I did tons of automation projects over the last 15 years, and especially in the regulated organizations, the bottleneck of those investments always ends up being about security, or really governance, risk, and compliance, which are kind of not the same but are related. And then when you get deeper and deeper into the rabbit hole, you start to realize that the risk people and the security people are not necessarily aligned.

True. They don't necessarily see this. Very true. There's lots of work to do.

Yeah. And one pattern I've observed is that compliance and risk people, in particular compliance, have provided these guidelines based on known traditional architectures and kind of the ways applications worked pre-containers, pre-Kubernetes, pre-cloud. And oftentimes the people responsible for implementing those things don't know the why behind the requirement. They just know that they've been given a requirement, and that makes it hard to change as well. The implementers don't know why.
Sometimes the compliance people don't know the why either.

I agree. And you end up where, and this is sort of the point I think Jamie was making a minute ago, it's this academic rubber stamp after the fact, with an audit that says, oh, we checked these boxes, we're in compliance. There's lots of blurry things where, oh, are we in compliance? Well, we'll just sign off on it, because it's, you know, 10,000 lines of code or whatever, and we'll figure it out later. But this is the beautiful thing about where some of this is going: you can actually be in compliance, verified, audited, in a way that you never could before. And you start to modernize the policy, and the policy can be enforced by the code itself instead of by a human checklist audit after the fact.

I really like what you said there. The reason why is because I don't think you've been in the security community long enough if you haven't been called a blocker.

It's true.

And ultimately, this is the feedback that you get from every development team. And part of it is because you're not applying that why, that rationale. So I'll give you an example. An HSTS header is something that upgrades your browser from HTTP to HTTPS. It's very frequently flagged as a low or medium risk and gets talked about. But I've seen applications that literally don't even turn on HTTP, so there's no point in upgrading them. But these risks still need to be accepted, even though there is no risk. So the ability to apply that mindset in an automated fashion and bridge that gap of the why is really the core concept here. You have product managers who don't understand security, and they're typically responsible for the why. You need a champion to be responsible for that why. And I think, going back to shifting left, you need that conversation to happen before those things are implemented, not something you come back and rework.
Unless you want to spend all the money, you know.

And one of the interesting things about code, too, is: how do you make sure that that why is captured as you implement compliance as code and security as code? Because the how may be different. So do you ensure that there's a comment that documents the intent? A really interesting element to think through.

And I think one of the core issues with that is the world is limited; security mindsets like that are just not scalable. We can talk about education all we want, but it's just an unrealistic numbers problem that we have. So shift left through automation at least helps bridge that gap, because it's unrealistic to think that we can educate the world into a security-centric mindset. Not everyone is going to be able to become an expert pentester overnight.

No. That's never going to happen overnight. So in a lot of the conversations I have, I talk about, you know, qualities like reliability, and security would be one of them. And the idea that all of your application teams can build reliability and scalability and all these things into their applications themselves is just not possible at that scale and complexity. You want the platform to be able to keep all those promises, those architectural decisions that are going to give you the scalability and reliability. And this is really the model that evolved inside of Google with SRE, to be honest: developers focus more on their business logic, on the domain, and things like scalability are solved because they're writing to those libraries, to the systems that can keep those promises. And security is just another thing that you want to bake into that architecture, into those platform services, so the developers are not going to be experts in security and they don't have to be.

Thankfully. Right.
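That idea of capturing the why next to the check, using the HSTS example above, can be sketched in a few lines. This is a purely hypothetical illustration, not any real scanner's API: the rule name, the finding and app dictionaries, and the function are all made up for the sake of the example.

```python
# Hypothetical "policy as code" rule that keeps the *why* next to the check.
# The finding/app structures and function name are illustrative only.

def triage_hsts_finding(finding: dict, app: dict) -> str:
    """Triage a missing-HSTS finding using application context.

    WHY (intent, documented with the rule): HSTS only matters if the
    app actually serves plain HTTP that a browser could be upgraded
    from. If HTTP is never enabled, the finding is noise and can be
    auto-accepted instead of landing on a developer's backlog.
    """
    if finding.get("rule") != "missing-hsts-header":
        return "out-of-scope"
    if not app.get("serves_http", False):
        # No HTTP listener means nothing to upgrade, so no real risk.
        return "accepted: app does not expose HTTP"
    return "fix: add Strict-Transport-Security response header"


# The same finding triages differently depending on context:
print(triage_hsts_finding({"rule": "missing-hsts-header"},
                          {"serves_http": False}))
print(triage_hsts_finding({"rule": "missing-hsts-header"},
                          {"serves_http": True}))
```

The point isn't this specific rule; it's that the rationale lives with the code, so when the context changes (say, the app starts serving HTTP) the decision changes with it, automatically.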
So with that in mind, and I apologize, the dogs are doing something and they're not going away, so if you hear my dog, my apologies.

You like dogs. I like dogs. Yeah.

So how can organizations foster this culture that we've talked about, right? Like, emphasize the value of security for everyone. Do we look at it from a top-down perspective, or can we even try to make it bottom-up, right?

I think that's a good point. What's a good genesis point for making this paradigm shift happen? There's a lot to unpack there, right? I think culture gets kind of thrown around, and we already did it a little bit on this stream, where it's the truth, but it's also sort of unactionable. It's hard to, like, go implement culture, right? And so what I try to help people do is break it down: what are these kind of practical activities you could do? And then also be mindful about the incentives, you know, the larger structures that have been set up. But there are really simple things. In a lot of organizations, just as they evolved dynamically with whatever kind of silos and history, these roles don't even really interact with each other as human beings, right? So you can fix some of these problems just by getting people to go to lunch together, just by getting people in proximity with each other.
Absolutely. And I think, you know, you alluded to this and I brought it up earlier: many organizations do have measurements that they use to assess various elements of what they're doing. And I've been in organizations where, for example, one of the measurements for a development team was the number of lines of code delivered on a regular basis, which frankly I think is not a useful measurement.

A terrible measurement. It doesn't tell you anything.

And so I think an interesting thing to look at is mean time to resolution for newly discovered security issues. And potentially you could look at what point in the life cycle those issues were discovered as well. Ideally, the earlier they're discovered, the easier it is to address them. Now, that's never a hundred percent true, because until you know what the issue is, you don't know what the fix is. But I still think an organization really needs to take a look at their measurements, which are one form of incentive.

To add to that a bit, Kirsten, one of the things I see is that the security team is measured based on the number of things they find, and the development team is measured on the speed at which they can deliver business value, and these are just totally conflicting incentives. And that's one of the things that really bothers me, because security professionals at the highest levels are taught that it is okay to accept risk. If you have a two million dollar risk in a hundred million dollar deal, you're going to accept that two million dollar risk for that chance at a hundred million dollars. It's a simple math problem, despite the fact that there might be potential security impact. So me saying accepting risk is fine is almost counterintuitive to the standard security persona, but that's how these setups are designed. And the incentives that we have, of find all the bugs and deliver things fast, are just too conflicting right now. So if you can change the incentives to say, we are able to make more appropriate risk management decisions, then I think the entire conversation changes.

Nice. So let's talk about policy, right? There's all kinds of policy out there: regulatory policy, security policies, you know, internal organizational policies. But usually the best way to deliver a compliant system is to continuously deliver compliance, right? So thinking about applying policies against infrastructure to ensure safe deployments or ensure regulatory requirements are met, how would you go about assessing what your policy enforcement should be within your environment, in any organization? Is there a process that you normally drive people through, or is it more like, well, it depends on if they're, you know, financial services or something else?

So I think the ideal future state, that we sort of hinted at earlier in this conversation, is that you're continuously auditing, and the audit function is enforced by the framework, by the platform itself, and then the security professionals are making those kind of creative investments in managing the risk and thinking about what that policy should be, not trying to audit it. Computers are really good at doing repetitive tasks, so let them audit that stuff all the time. But that's not where most organizations are, right? So then you get to being able to think about, okay, there's the policy and there's things checking it. Like, who signed off on it, right? Attestation and chains of trust and chains of responsibility. There's a bunch of interesting stuff emerging, and I don't know if we want to get into this here, but with these transparency logs.

Yep.

And being able to kind of think through, or reason about, where things came into these pipelines, right? And I think, yeah, there
is some really cool stuff going on, as you say: transparency logs, the Sigstore project, and, you know, how do we think about all of that throughout the life cycle. I also think a piece of the puzzle is what we call in Kubernetes admission controls, right? You can define policies for Kubernetes, and I'm sure you can do it for other environments as well in different ways, that check elements of the solution that is requesting access or requesting to be deployed. And in that case, one of the things that comes up frequently is that principle of least privilege, right? So can I check for the appropriate privilege request? Now, that gets challenging, because you might create a policy that's, after all it's code, fairly black and white, and then if a solution needs certain levels of privileges that you don't allow by default, you're back to an exception process. So there are also interesting tools emerging in that space that help you define fairly complex policies, you know, OPA and OPA Gatekeeper in the Kubernetes space, and there's another policy engine, yes, thank you, Kyverno, in that space as well.

Awesome. And I can't emphasize enough the importance of treating this as a computer-solvable problem, not a human-solvable problem, right? Like, computers are way better at this than humans could ever be, for auditing purposes.

Well, I like to frame everything in terms of a socio-technical system, so it's augmented intelligence, right? I think we're pretty far away from saying this is a computer-solvable problem. Security, these are qualities that come from the socio-technical system, which is the combination of those computing systems and those social systems coming together to deliver. I mean, just to use another analogy from reliability: there's no organization on this planet that thinks they don't need humans in the loop, right? You can invest in AI, automation, you know, autonomous systems, whatever, but no one thinks right now, in 2021, that you can just turn your back on all this stuff and it's going to be good. And I think that analogy comes over to security as well: it's the socio-technical system that delivers security.

Absolutely. I mean, just think about facial recognition, right? The facial recognition systems we have today are partly driven by the data that's fed into them, and humans are controlling, to some extent, what data gets fed into them. That will shift over time, right? But I do think there are some things, so, yeah, I'm with you, there are always going to be corner cases, and someone better be there that has a clue what's going on.

Someone accessing someone else's account, for instance, in a banking application. Being able to look at those types of business logic problems is one of the most important things, I think, that security people can apply themselves to in a socio-technical problem.

So to that point, if you go look at most of the breaches that happen, very few of them, relatively speaking, are because someone exploited some CVE or whatever that's in the system. That's technical. There's lots of social engineering that leads to the breaches.

Yeah, I mean, if you just send a phishing email saying free coffee, imagine the hit rate on that if it came from a spoofed email within the org.

I can tell you, as someone who has sent those phishing emails, that's my number one hit: free coffee.

Can we get Chris back? Or, I think that's the machine, Chris. Or taking advantage of people on Valentine's is also good.

Oh, wow. You have to have a particular mindset to do that kind of hacking. You have to be evil, pushing the boundaries a little bit.

So, are we getting Chris back? I pinged him and I don't know the answer. So do you guys want to just go through DevSecOps? Like, how do you guys think organizations foster DevSecOps
culture in ways that emphasize the value of security for everyone? We've been talking a lot about culture. How do you think we do that?

Poorly, is one answer. There are a lot of people on a lot of different journeys, in different places, and we kind of talked about this in the pre-show. There's been lots of investment, so here's this pattern that you might see right now. You go into these organizations and they've got a DevOps team, and this is a true story, a DevOps team which is a different silo, a different team, than an SRE team, which is a different silo and different team from the DevSecOps team, and you're just like, what are you doing? Or maybe even sometimes they'll have a CI/CD team. It's like, you've got a DevOps team and a CI/CD team. And this goes back to org charts and funding models. I think it's hard for leaders in some of these organizations, especially when they come from a background where everything's been so structured and siloed, to really set up these collaborative environments, let go of some of the silo building, the kingdom building, and let those good practices emerge. And there's a lot of ways to think through this, but when you've got so many different teams and so many different incentives, and they're not really working together, you've basically got the plot of a DevOps movie.

And I ran into this recently with an organization who not only has an infrastructure security team, but they have a cloud security strategy team. And actually, I've seen that emerge in more than one organization: as they start to adopt cloud, they create sort of this special team that's looking at cloud security, and so we can get even further siloed as organizations do that.

I mean, we've kind of built this culture where organizations, or individuals, want to be empowered, and the only way they can do that is by collecting headcount and building these orgs and getting these budgets. Instead of having kind of floating funding models, where project teams are brought together from pools of resources, you get these long-standing silos that accumulate power over time. And I think that holds a lot of organizations back, you know, not just on security but on all the other digital issues.

Yeah, absolutely. So what are some of the things that Red Hat is doing to help? I know there are elements, Andrew, that you're working on, and then, Jamie, there's a product set that you're working on. But maybe we start with Andrew and then shift to Jamie.

So security is an interesting one, and there's a number of things that are happening, you know, in conversation with my team, and we are working with product. It's hard to talk about some of this because people don't like their logos or their names used, but you basically have a lot of big banks and a lot of federal agencies that are really interested in pushing the state of the art along a lot of the topics that we talked about. On my team, publicly, John Willis has been participating in writing these papers about doing security differently, doing automated governance, because it really comes down to this tension between developer productivity and operational efficiency. If you can't get this process for automating security to contribute to developer productivity, then it's always the bottleneck, and this is especially true in these highly regulated organizations. So across things like financial services, energy, the three-letter agencies, they have very strict ideas about what their policy should be, what their compliance and security posture should be. And so we're working through helping with both the tools and that process, practices, culture side of it, to identify the interventions and investments that need to be made to get those better outcomes.

Yeah. And so I know there are folks in Red Hat who are working on how we can help automate what they call the software factory
right there can be so many different tools used in a shift environment the artists are really known as the trusted software supply chain and TOS and yeah those are some of the projects supporting the conversation exactly right and so Jamie you know kind of when we look at it from the staff rocks or the Red Hat advanced cluster security perspective right how does ACS help with the shift left and automating security and compliance sure so one of the things I like to say is we help you to focus on what matters most and we enable your team in order to do that because one of the traditional problems you see in the security industry is that the vulnerability management or compliance team effectively takes a result from a scandal tosses it over the wall and then sends it to the development team and the development team has absolutely no clue what to do with it so they'll work to address these but having that conversation and allowing teams to prioritize what matters most is really critically important so what advanced cluster security is doing is it's doing your traditional vulnerability management but looking at it from a different angle by adding security context and it does that through assumption of what it knows to be in Kubernetes and using the Kubernetes API so now you can effectively say if a container is privileged what that means is that it has effectively root access to the underlying host operating system so if you manage to get a remote code execution vulnerability which would give you access to a container if you get one that's privileged then you are going to then be able to access all containers within that host effectively which means you go from compromising one application to potentially hundreds although I always want to be careful to separate the word privilege and root access because there are so many different types of privilege that can actually be supplied so is it fair to say that what you are thinking of really requires running as root and 
access to the host for example just like think SE Linux SE Linux can provide some protections user namespaces help I know we are getting a little technical here but Kubernetes doesn't yet support user namespaces so I think that there is a lot happening to help to put boundaries around some of those attack vectors but we still need to track them it's definitely about prioritization to me because there is always going to be a defense in depth architecture and at the end of the day you want to build a layered approach to security where you expect one or two controls to fill because ultimately a mature nation state is going to be able to attack you in a more effective way than someone who has a limited budget so the goal of the security team is not necessarily to secure all the things the goal of the security team is to make it too cost prohibitive in order for an attack to be successful proportional to the risk exactly ultimately when I talk about something like being able to prioritize more effectively saying a privileged container is a route on the operating system but that is what it means to be privileged in the context of Kubernetes now that being said if I have a privileged container I want to be able to block anything with the network attack vector or an adjacent network attack vector that is a critical vulnerability or a high and I want to do that instantly and maybe for a privileged container I might even want to go down to the medium level for that so being able to make more informed risk management decisions on what matters most is really key so what we see is organizations will use labeling strategies with their crown jewel applications and really focus on what matters most by focusing on that labeling strategy to look at data classification labels they will use it to look at if that application is a crown jewel and by being able to link that context security teams can make more informed risk management decisions and apply different policy logic so if an 
application doesn't have sensitive data and isn't even exposed to an external network, maybe a high is an acceptable risk for that organization, whereas if it's a crown-jewel application that's externally exposed and privileged, maybe you want to start tackling the mediums. Right, absolutely. I think that's another place where, again, we need to blend the socio-technical, right? There's some human perspective here, but also the computer, and with a solution like ACS the software can gather additional data to inform the decision in ways you can't if you're just looking at vulnerabilities through an SCA, a software composition analysis, where they've said: here are the packages in the software, here are the known vulnerabilities associated with that software, here's the MITRE or NVD rating for those vulnerabilities. That's not enough to really prioritize, because you don't have the full context you were just talking about. So I like to drag everything back to the center of what I understand, which is, you know, the DevOps and SRE type stuff, and I think there's an interesting conversation emerging, which I've been part of, around framing some of these conversations through the same lens. When you're talking about risk and reliability, they're not exactly the same, and I think it's easier to measure reliability than to measure risk, to be honest. But when you start thinking about SLOs and setting targets for a service level, there's a rule of thumb for reliability: each nine costs roughly ten times more than the last nine. That's not a strict rule, but it's a pretty close approximation. So if you're not willing to make an investment in the next nine that's ten times the last one, then you have to make that trade-off, and that framing for SRE is embracing risk. A lot of what you're talking about in this sub-genre of the DevOps conversation is about observability from a security perspective: what is actually running
on these systems, what state are they in now, how is that state changing, and how do we respond to that? There are all these little sub-genres. I think there are also some interesting analogies with chaos engineering, pen testing, whatever; there are a lot of practices that are analogous. But if you don't have an understanding of what's happening in your systems, whether that's from monitoring the performance or monitoring the security and policy aspects, then you don't know what you don't know, and you're going to find out the hard way. Yeah, or with an outage. No, I think you're absolutely right about observability. I'm going to ask a question that I get asked, because I'm interested in the perspective from the two of you: how does DevSecOps shift left help us address ransomware, or does it? What are some of the best practices when we think about ransomware? So I'll take a stab at ransomware. When I think of DevSecOps, I do not think it addresses ransomware; it helps to mitigate it, though. Ultimately, if you get pwned, you get pwned; there's not really a way around that. You can mitigate the risk of that ransomware spreading, you can mitigate the risk of lateral movement, and you can reduce the likelihood that you're actually exploited. That being said, it's unrealistic to think that shift left is in any way going to fully address a ransomware attack, because the reality of the vulnerability management space is that it is a moving target, and because vulnerabilities are updated as they are discovered, that's why we call them known vulnerabilities; there are so many unknowns. It's not going to address that, but it'll help mitigate. Yeah, I'll add to what Jamie said, and it's similar in that there's no organization that's not going to have outages; you're going to have some corner case, some new thing that changes some of the assumptions. But what you can do with
these investments is have a better understanding of what software is running, how that software came into your system, and the policies around it. Being able to remediate the known vulnerabilities is one thing, but the idea that you're going to be able to take care of every single thing, in what is literally an arms race with adversaries, is just unrealistic. And here's one place where there are two things I like to use as an answer. One is: provide as much protection as you can, the way you normally would; the observability is critical because you also need to detect and respond. But I also think, more and more, the more you can deliver and manage your solutions as code, and that includes your infrastructure, not just your applications, the better you can position yourself, with the exception of stateful data, which you're going to have to back up, to recover by redeploying. That's a much better approach than trying to scramble through a whole bunch of backups, and it also means that if somebody, to use Jamie's phrase, pwns your data, you're just going to redeploy. It does mean you have to protect the code systems you're storing all of that config data and that pipeline in. So I'm a big believer in being able to repave everything. The main, or primary, benefit of being able to rebuild infrastructure from scratch from code is that you're building things constantly, so recovering is the same as all your other deploys. At the same time, that doesn't necessarily protect you, because you don't know where some of those vulnerabilities came into your supply chain, so you might just be redeploying them anyway. And to extend what Jamie said a minute ago with an analogy from physical security: at my house I have a security system, and there are sensors and all this stuff, but if someone came to my house with real force, I have glass windows; they're going to get in the house. It doesn't
matter, and 911 will come, maybe later, but someone can come into this house by force, and there's nothing that security investment will do to prevent it. There is no such thing as one hundred percent security, and we're really just talking about approaches to mitigate situations. To use the zombie apocalypse metaphor: if zombies are attacking, you have a gate, you have a house; when they break in through your windows, you'd better have a key to lock your room, and your disaster recovery plan at the end is: do you have an escape hatch and an ejection seat? That's what you need, an ejection seat, time to get out. Apologies for the technical difficulties, folks; my Comcast issues are being buried right now, meaning the cable is being buried, hence the loss of network connection, and the cell networks are not as strong as they were after the storms we had, so I'm joining from my iPad, as it were. So thank you all for joining today and discussing this topic. We miss your smooth radio voice. This is true. I'm sure we've kind of gone off the rails. Andrew? Not really; well, I will say I think we got a little dark here towards the end; we need something uplifting. Well, fair enough, so let's just close on this: what are you excited about in the next month, year, insert plan here? All right, I'm going to go around the room on that one. I'll grab it first. I'm really excited by what's happening with signing, transparency logs, the whole sigstore initiative, and investments around attestation. I think there's some really cool stuff happening that will allow us to make zero trust more than just a, or extend some of what is zero trust today in Red Hat solutions, but extend it to more parts of the process and the software. I'm thinking there's too much to be excited about. Yeah, there's a lot to be excited about. I mean, I'm watching all this stuff emerge with the different open source projects around being able to do identity stuff with
SPIFFE, stuff with the audit logs, the transparency logs; that's great. The other thing I'm faced with, and I'll say this is true about DevOps too: the forward-leaning conversations and pushing the boundaries of this is one thing, but the vast majority of organizations are, in some ways, just getting caught up to DevOps conversations from 2010. So to me the most exciting thing is about bringing the standard up across the industry, the rising tide that will hopefully lift all these boats. A lot of boats. There are a lot of boats. It's a lot of water. It's a lot of water to be had, right? All right, Jamie? I was someone who came over with the StackRox acquisition, so to me one of the things that really excites me is to be able to attempt to disrupt the Kubernetes security space at scale by transitioning from a startup to a large enterprise with a huge presence in the enterprise space. I want to be able to sit there with a seat at the table with the CISO and really get to understand how we can help to, one, reduce the cost of a security program, and, two, help them make more important business decisions. Awesome. Well, on that note, I think we'll wrap it up for today, and I appreciate you all joining us. Thank you, Kirsten; thank you, Jamie; thank you, Andrew. I know you're very busy out there in the world trying to help everybody, save-the-world kind of deal, so I really appreciate your time today. Thanks for having us, Chris. Thanks, Chris. Appreciate it, Chris.
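The risk-based prioritization Jamie describes, blocking critical and high network-vector vulnerabilities in privileged containers, lowering the threshold to medium for privileged workloads, and using crown-jewel and exposure labels to pick different policy logic, can be sketched as a simple policy function. This is an illustrative sketch only, not the actual ACS policy engine; the function name, severity ordering, and thresholds are all assumptions for illustration.

```python
# Illustrative sketch of risk-based vulnerability prioritization as
# discussed in the conversation. NOT the ACS policy engine; all names
# and thresholds here are assumptions chosen for illustration.

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}


def should_block(severity: str, attack_vector: str, privileged: bool,
                 crown_jewel: bool, externally_exposed: bool) -> bool:
    """Decide whether a workload carrying this vulnerability should be blocked."""
    # Only network and adjacent-network attack vectors are prioritized here.
    if attack_vector not in ("network", "adjacent"):
        return False
    rank = SEVERITY_RANK[severity]
    # Privileged container: effectively root on the host, so block
    # everything down to medium severity.
    if privileged:
        return rank >= SEVERITY_RANK["medium"]
    # Crown-jewel, externally exposed application: block highs and above.
    if crown_jewel and externally_exposed:
        return rank >= SEVERITY_RANK["high"]
    # Everything else: only block criticals.
    return rank >= SEVERITY_RANK["critical"]
```

For example, a medium-severity, network-vector vulnerability in a privileged container gets blocked, while the same vulnerability in an unprivileged, internal-only workload does not; the labels are what let a security team apply different logic per application.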
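Andrew's rule of thumb, that each additional nine of availability costs roughly ten times more than the last, pairs with the arithmetic of what each nine actually buys in error budget. A minimal sketch of both (the tenfold cost multiplier is the rule of thumb from the conversation, not a measured figure):

```python
# Rough arithmetic behind "nines" of availability: each extra nine cuts
# the annual downtime budget tenfold, and (per the rule of thumb above)
# costs roughly ten times more than the previous nine to achieve.

MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for simplicity


def annual_downtime_minutes(nines: int) -> float:
    """Allowed downtime per year; e.g. 3 nines = 99.9% availability."""
    unavailability = 10 ** -nines
    return MINUTES_PER_YEAR * unavailability


def relative_cost(nines: int) -> float:
    """Rule-of-thumb relative cost: each nine ~10x the previous one."""
    return float(10 ** (nines - 1))
```

So moving from 99.9% to 99.99% shrinks the error budget from roughly 526 minutes a year to about 53, while, by the rule of thumb, costing an order of magnitude more, which is exactly the trade-off an SLO forces a team to make explicit.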