All right, everyone, good morning. Our next presenter is my fellow co-chair, Frederick Coutts. Frederick is a steering committee member of SPIFFE/SPIRE. And today we'll join Fred as he pulls back the curtain on the unsettling truth of software security and invites you to reconsider your approach to trust in the cloud. Please welcome Frederick Coutts.

Thank you. So before we jump into the topic, first I want to thank everyone for coming out. The community, I think, has been absolutely fantastic. One of the things we really tried to focus on here was: how do we bring a community together that's safe, that people can come in and participate in? So again, thank you, everyone, for being here. And for those who were not able to be here, I definitely hope to see you in the future. You're also part of our community, even if you weren't able to make it, so please don't feel left out.

But let's jump into an interesting topic, one that I find interesting. When we talk about security, we often have to ask: what is our goal with security? Very often we have this concept that we're trying to achieve confidentiality, keeping some piece of information secret; integrity, making sure it hasn't been modified; or availability. Which conveniently spells out CIA. Easy to remember. But this framing is classic, like the Renaissance in art, where people really started to think through these issues. The thing we want to focus on, though, is why we have security at all. It's not security for the sake of it; it's always in the context of something, and I want to get down to the root of what that something is. It's really about establishing trust. In the background of this image you can see a bank. There's trust that's put in the banking system, and there's what happens when that trust in the banking system has been eroded.
We saw some events occur earlier this year that are indicators of the damage that can occur when that trust is not there. The same thing happens with us, with companies, or with individuals: what happens when that trust is eroded? And part of the path is that when we want to perform some task, we have to establish that trust first.

Like this little cat here that's climbing up. It's testing the branch to make sure the branch is not going to fall out from underneath it, because it doesn't fully trust it, so it decides to test it. Then it's like, yeah, I can go up this, and it continues up. Eventually it runs up it a thousand times, and then it knows it can take that particular path, but it may not be comfortable with other paths.

The same thing happens when we're looking at any given system. It's not just security. Why do we have quality assurance? Why do we have all of the testing that we do? Because we want to establish trust that that particular thing is going to do what we think it's going to do.

So let me ask you this question, and give it a little bit of a pause: when we say trust, what do we really mean? This is a really deep question; you could spend an entire lifetime, or a hundred, trying to answer it. But we need to start somewhere, and we've got, I don't know, twelve minutes to get it right, so we'll do our best.

One thing that I hear quite often is: well, trust is a property of a system. We build a system to be trusted. But if we really get down to what trust is, there's no single thing we can point to and say, this is the trusted thing. Trust is not a property of a system. You can't build something and declare it the trusted thing, because trust is based on other properties, and those properties are based upon our decisions, our observations.
They're based upon us looking at the world and making that decision. That tree we saw before: did that tree have trust inherently built into it? Or was it a decision by the cat to trust that that branch was not going to fall down? And is the cat always right that it's not going to break?

This was actually really well put together by a person named Dorothy E. Denning. And by the way, if you ever want to read some really good security work, go look at Dorothy E. Denning's work. One of the hot things that's been coming out for the past couple of years is lattice-based languages and systems like Q. She was doing that in 1975 and publishing about it: lattice-based models for secure information flow. But in 1992, she challenged the US national standard for evaluating trusted systems, known as the Rainbow Books. One of the things those books said was that you could build a trusted system if you follow these particular sets of tasks, if you do these things, if you run these kinds of tests. And she was able to change the whole industry with this particular observation: trust is not a property of the system, but rather an assessment by us. Given evidence, we make a decision. I also want to thank Ava Black, because she's the one who pointed me in this direction.

So in short, this is probably the most important slide that I'm going to show: trust is a decision. You decide when you're going to trust something, what you're going to trust, and in what context you're going to trust it. And, I'll have a little bit more on this in a bit, but there's an individual named Mike Bursell, who recently became the executive director, part-time, of the Confidential Computing Consortium, which is a fantastic group that you should go look at if you're interested in how to protect workloads.
In short, he wrote a book where he really covered what a trusted system is, or not just a system: what does trust really mean? When you look at what it means to trust something, first you have the context. In what context am I trusting this particular thing? My relationship with a bank, the context is going to be my banking at this particular place, where I'm putting my money. It's not the same relationship I have with a doctor, or with a close friend.

Trust is also time-sensitive, in that I'm going to trust it for a specific period of time. Think of your build systems: when one puts out a scan, how long do you trust that scan for? If that scan was done in the past day, you'll probably have more trust in it than if the scan was done a year or two ago.

But it's also asymmetrical. The relationship I have with my bank is not the same relationship that the bank has with me. The same goes for us as customers: the relationship we have with software is very different than the relationship software has with us, or one API or one service with another. The relationships are all asymmetrical. So we want to model all of this in, but also be aware of what happens when we put too much trust, or too little trust, into something.

Too much trust: between 1985 and 1987, there was a system called the Therac-25. A software bug resulted in radiation overdoses and ended up killing people. Our decision to trust that system too much, and not test it appropriately, led to catastrophic and very tragic results.

On the other side, and I decided not to say which company it was, there was a major breach about a decade ago where analysts saw the event in their observability platform.
They saw the attack going on, but they had so much information overload from the system not producing good results that they decided to ignore it, because they didn't trust that the system was giving them something useful or actionable. The attack continued and was catastrophic for the company. So too little trust can also be catastrophic. In this scenario, you can see all the events coming in, and cats are flying everywhere: how do you catch them all? We also have to be aware of the information overload that causes alert fatigue. That's just one example.

So the question then becomes: now that we have this concept of trust as a decision that we make, how do we reason about it? To repeat, we have to set up a trust framework, and that framework is contextual. What is the purpose of this thing? What are you trying to defend? If I create a multiplayer Snake game and stick it on a $5 instance in the cloud, it's going to be very different than if I'm trying to protect, say, financial information in a bank. So what is the context? Because the context also dictates how much you're willing to spend. What is the value of the thing you're trying to defend? What is the value to the attacker of what they want to grab from your systems, or what they want to bring down? Once you understand the context of that particular thing, then again: for what time period? Time-sensitive and asymmetrical.

Again, Trust in Computer Systems and the Cloud, by Mike Bursell. Highly recommend reading it, or at least the first chapter. The first chapter is absolutely fantastic.

But what we want to do is start doing this for our systems and start asking about the basics. The thing that you're building on: you have all these CI/CD systems, and we're talking about building all these amazing attestations and zero trust on top of it.
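One way to make that framework concrete is to write the decision down as data. Here's a minimal sketch, not from the talk; all names and the `TrustAssertion` type are illustrative. It captures two of the properties just described: trust is scoped to a context, and it expires.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class TrustAssertion:
    """A trust decision we made: scoped to one context, valid for a window."""
    subject: str          # what we decided to trust (e.g. a scan result)
    context: str          # the purpose we trust it for
    granted_at: datetime  # when the decision was made
    ttl: timedelta        # how long the decision stays valid

    def is_valid(self, context: str, now: datetime) -> bool:
        # Contextual: a scan trusted for deployment says nothing about,
        # say, a license audit. Time-sensitive: yesterday's scan earns
        # more trust than last year's.
        return context == self.context and now < self.granted_at + self.ttl

now = datetime.now(timezone.utc)
scan = TrustAssertion(
    subject="vulnerability scan of image v1.4",
    context="deploy-to-prod",
    granted_at=now - timedelta(hours=6),
    ttl=timedelta(days=1),
)
print(scan.is_valid("deploy-to-prod", now))                      # fresh, right context -> True
print(scan.is_valid("deploy-to-prod", now + timedelta(days=2)))  # expired -> False
print(scan.is_valid("license-audit", now))                       # wrong context -> False
```

Asymmetry falls out naturally here: the bank would hold its own `TrustAssertion` about you, with a different context and TTL than the one you hold about the bank.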
But can I trust the foundation that I'm building this thing on? Think of how many systems are compromised because of patch management, or because somebody clicks an email and all of your security features go out the door, because now you have ransomware in your system, literally from someone clicking an email. So you have to start with the basics: establish trust in the thing that you're building. That's why baselines are also important: they help establish trust in the thing you're building, and trust in the process. Not just in the individual thing, but in the processes themselves.

Once you've developed that framework, think about it from a threat-model perspective. If I'm developing a threat model for something, the question is not "I'm not going to trust anything in there." Clearly we're trusting something, because we're running and using it. The question becomes: what is it that we are actually trusting? What is the reason we're trusting that particular thing? Why am I trusting my single sign-on system to do what it needs to do? And also ask: what happens if it fails?

You want to build a culture around this, of people asking: is our process good? Is our process enabling us to facilitate that trust in a healthy way, and is it making it explicit? And what happens if that trust is violated? What are the consequences to the system, to the organization, to you, to your customers? Asking what that consequence is gives you the blast radius. The whole concept of zero trust that is a little bit unspoken is that you want to focus on the blast radius, because if something is compromised and you have a huge blast radius, that is the entire thing we're trying to push against: not having one incident cause a massive breach.
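Those threat-model questions can be sketched as a worksheet. This is a hypothetical example, not a real tool; the assumptions and asset names are illustrative. The point is to make every trust assumption explicit and record what is exposed if it's violated — that set is the blast radius.

```python
# Each entry names something we trust, why we trust it, and what breaks
# if that trust is violated (the blast radius).
assumptions = [
    {"trusting": "single sign-on provider",
     "why": "it authenticates every user for us",
     "if_violated": {"user accounts", "admin console", "audit trail"}},
    {"trusting": "CI runner base image",
     "why": "every build executes on top of it",
     "if_violated": {"build artifacts", "signing keys"}},
    {"trusting": "multiplayer Snake game instance",
     "why": "it only holds high scores",
     "if_violated": {"high-score table"}},
]

# Rank by blast radius so the widest exposure gets reviewed first.
ranked = sorted(assumptions, key=lambda a: len(a["if_violated"]), reverse=True)
for a in ranked:
    assets = ", ".join(sorted(a["if_violated"]))
    print(f"{a['trusting']}: blast radius = {len(a['if_violated'])} asset(s) ({assets})")
```

Even a toy list like this surfaces the talk's point: the Snake game's blast radius justifies a $5 instance, while the single sign-on assumption deserves real investment.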
But if we have a small blast radius? Things are going to happen, things are going to break, people are going to get in. The blast radius of that particular attack, or that particular failure, is the thing we want to keep small.

So what about zero trust, then? One of the reasons I decided to give this talk about trust is that too many times I see people jumping into an architecture and saying, oh, it's zero trust, we're not going to trust anything. We're clearly trusting something, and I'm trying to get people to think about what trust really means. Why are we trusting something? What is that particular thing, in the framework that was mentioned? The reality of zero trust is that we should really rename it to zero implicit trust. This is really what we mean: there's clearly something we're trusting, but we want to make it explicit. If you have implicit trust somewhere in your system and you're not exposing it, that is an area of risk, because it means you're not analyzing it properly. You're not asking: what is the blast radius of this? What is the impact if this thing goes wrong? So when you hear the words zero trust, interpret them as zero implicit trust. Basically, the implicit is made explicit.

In short, I do want to point out that all of these graphics were generated by AI, so thank you, AI, for generating all these amazing cat photos. And finally, I didn't want to leave out the dog people. There was a really amazing cloud native Corgi sticker that was given to me a long time ago, so this is a nod to the cloud native Corgi — the Cloud Native Corgi Foundation, CNCF.

But in short, I do want to thank you all, and I implore you to think about this particular question when you leave and when you're looking at any particular system.
Even if you're not in security, you should still be asking this question about trust, because trust is not just about keeping malicious actors out. Trust is also about how you design a system so that it does what you expect it to do. How do I trust it to do what I want it to do? And that applies not only to computer systems, but also to our relationships with each other. So again, thank you very much.

So our next presenter is Denny Shannon. Denny is a senior director of engineering at Rancher by SUSE, and today she will share some key learnings and findings and explore how the cloud native open source community, technology, and processes have helped us get closer to Kubernetes Nirvana. So again, please welcome Denny Shannon.