Hey, Bosa. Can you hear me? Yeah. Oh, all right. Awesome. Cool. Just an audio check. All right. Let's wait a couple of minutes. Greetings, everyone. Cameron's here. Hey, Cameron. I put the meeting docs in the chat, so you can go in and put your name in the attendance. We also need one more scribe for today, so if you could help scribe for today's presentation with Jonathan, that would be great. All right. Thanks, Martin and Bosa, for scribing today. Let me just see if Jonathan is here yet. He's just next to me, attempting to get Zoom working on his network. Yeah, that network doesn't sound very great; you seem to be breaking up a little bit. While we're waiting for things to work out: I'm a bit unclear on how I'm supposed to ask for some time in this meeting or subsequent meetings. Myself and a couple of people here are from the Kubernetes security audit working group, and we would like to chat with you all, but we don't exactly know how. So if someone could guide us through the appropriate process, we would appreciate it. Right. So for our SIG Security sessions, every month it's a mix of presentations and actual working meeting sessions. At the top of the meeting notes doc you will see the planned meetings. So if there's a topic that you'd like to discuss or talk about, go to any of the working meetings and add a sentence or two, your name, and what you'd like to talk about, and we will add it to the agenda. Okay, great. So we put it in the future meetings section. I'll move it to the 29th working meeting, maybe, and we'll all come back then. Yeah, that sounds great. Okay. Thank you all so much. So, Jonathan, I see that you've joined. Hi, yes. Okay, great. Awesome. No worries. Yeah, so, just on that: the meeting notes, I'm going to paste the link again in the chat in case you can't see it. Add yourself to attendance. Next week, we have a working session.
So if there are any issues that you'd like to discuss on open PRs and things like that, or you have an item that you'd like to add to the agenda, you can put it under the January 29th working meeting in the meeting notes. In two weeks, on February 5th, we will have the SPIFFE/SPIRE security review. This will be a presentation from the SPIFFE/SPIRE folks about the technology, and then we'll have time for open discussion after that. Since we have a presentation today, we're going to skip check-ins. Today we have Jonathan Meadows here with us. And, just one thing: it says the host is not in the meeting, which I think means it's not being recorded yet. It is being recorded; we are good. Oh, it is. Okay, good. Just checking. Okay, thanks. Great. All right. So today we have Jonathan Meadows. I believe the topic is going to be on open source training, is that right? It's also about some of the threat modeling that we've been doing in the Financial Services User Group. There are a couple of items that we're bringing over from there because we want to present and show people. Great. So thank you for taking the time, and please take the floor. Okay, great. So let me try to share my screen. I'm just trying to find the right screen. There we go. Excellent. Are you in good shape? Excellent. Yep. Great. So yes, I just want to present some of the work that we've been doing in the Financial Services User Group. We started this as one of the user groups at Barcelona last year, and it's really for those companies running in highly regulated environments, usually financial services. We have about 15 to 20 financial institutions working with us, from large banks down to small fintechs.
And the way that we started out is really looking across our industry for the key issues that we're really concerned with: the issues really preventing us from getting into Kubernetes or cloud native solutions, and the issues of the day. And surprisingly, they're quite common. If you look at our GitHub, you'll see a number of the focus areas that we're starting to look into. Attendance is a little bit sporadic, and we're continuing to bring people in and solidify the agenda. But one of the areas that we've got feedback on is the top focus areas that we've managed to actually start coding against. The top two items are really around which controls we can actually install on Kubernetes and cloud native architectures, and how we can test them; that's common across all of our members. And also how to address the skills gap: what training is out there, given we're all competing for the same resources. So those are the two top ones we've been focusing on, and really focusing on not just talking about them and exchanging ideas, but trying to come up with solutions we can share with the open source community. So, starting with the controls. The approach we took was threat modeling the Kubernetes infrastructure itself. It was quite an ambitious start, but we used it as a mechanism of identifying the vulnerabilities and the attacks that could possibly be executed against Kubernetes, to get a real understanding of where the controls needed to be applied. At that time, we as a group looked to see if anyone was doing this sort of work; that was probably early last year, and we couldn't find anyone specifically looking at it. Subsequently, we found two or three people, including the very good, detailed audit from Trail of Bits, which also has some threat modeling in it.
But at that point, we didn't have anything to go on, so we were really starting from scratch, and we reached out to some of the chaps at Control Plane, who are with me here today as well, Andy Martin, and asked them to come and help us as a group and detail it. Now, what we were doing with these threat models, and we started to use attack trees, is trying to identify these attacks. We were then using that as a mechanism of mapping out where within the attack tree we could apply a particular control, whether detective, preventive or responsive. But there was a real push from the group to focus on how we'd actually automate those regression tests and automate that testing. That's something that we started to build out, not only as a visual representation, which we can show you, but also starting to write tests to try and validate that threat model. We haven't really seen many other people do that yet; we'd be open to suggestions and pointers if someone has looked at it. That's really one of the focus areas. The other bit that really helps our members is a mechanism of demonstrating the threats that Kubernetes can come under to multiple different partners. It's not just the blue team that we've shown this to; it's our security operations center and forensics teams. This has actually been one of the real benefits of this approach. We've been able to show how attackers get into the system, or how they could get into the system, and use that to train our security operations team on what to look for, and start to map out on the tree some of the attack vectors as they come through. It's really helped us define the content for remediation and incident response as well as the actual training. That's something we didn't really anticipate when we were building it out, but it's probably one of the main benefits at this point. This is one of the pieces we've open sourced in the CNCF; we'll show the link towards the end.
We started by mapping out the trust boundaries around the different components within Kubernetes. I think this has been very instructive in showing our teams how Kubernetes truly hangs together, and it gives a better understanding of the general architecture; obviously people have training, have read the books and have a good understanding of it, but when we started to map out the trust boundaries, it really brought that understanding to the fore. Then we start to get into some of the attack trees. Do you want to pick up a bit, Andy? Thank you. As we've said, we're looking to formally and methodically describe the security of our systems and then verify them from an attacker's perspective. We are all very focused on how to harden the cluster, but in order to apply those controls, we need to understand, of course, how they might be circumvented and what the routes into the cluster might be. Apologies to those of you that are familiar with attack trees already; I'll run through these quickly. These are typically devised using a top-down approach. What we'll see in the next couple of slides are subsets simplified from the main tree, which we'll look at in a minute. We start with the negative outcome that we would like to avoid at the top of the tree. Subsequently, we ask: how can that be achieved? Obviously, this is useful because it gels with the mindset of most security teams, who are used to thinking in terms of negative outcomes. We have a key here on the left. Simply put, AND nodes require all of the joined actions and conditions beneath them to be true, whilst OR nodes indicate that what lies beneath can be realized in multiple ways. One way to achieve this specific example, if we look at the content of this tree now, is to deploy a poisoned container image, which in turn requires that the container image is poisoned within the container registry.
That is potentially through the compromise of a pull secret which is also incorrectly configured with write or overwrite privileges. Of course, we would expect, well, we require the image to be deployed to the cluster, and in this case we're relying on label overrides and standard scale-out or node-addition churn. Subsequently, as we go down to "obtain image pull secret" here, we can achieve that in one of three ways; you see, this is a green OR node. We can obtain it via the Kubernetes API, or from a running container with host filesystem access, or we can read a Kubernetes secret from a misconfigured kubelet. That is just a slice through one of the possible journeys in the attack tree. We can also invert the thing and run the bottom-up approach. In this simplified tree, we're looking at an attacker that is able to gain some small level of access and attempt to enumerate and pivot. From the start point here at the bottom, we have an RCE in a container, and then we ask: what can we do now? What would we like to do? This diagram is a little bit hard to read, so if we zoom in on the lower half, in this excerpt of the tree we're focusing on how the container's service account token, if sufficiently privileged, can be used to start pods, exec into a running container, or extract secrets. This was actually really useful. It's perhaps not the standard mechanism of doing threat modeling, the bottom-up approach, but what we often found was that in reality we're taking it from the perspective of: well, we assume the attacker has already got to this point, what can they do? From a SOC perspective or an incident response standpoint, it's really beneficial for us to look at it this way, so that we can identify whether we've actually covered all these controls, or looked at all of the content that we need to generate if certain things actually happen.
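The AND/OR semantics described here can be sketched as a small evaluator. This is an illustrative toy only: the node names below are simplified paraphrases of the slice discussed in the talk, not taken from the FSUG tree, and the `achieved` flags are arbitrary.

```python
# Minimal sketch of attack-tree AND/OR evaluation (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "leaf"            # "leaf", "and", or "or"
    achieved: bool = False        # for leaves: is this step achievable?
    children: List["Node"] = field(default_factory=list)

    def achievable(self) -> bool:
        if self.kind == "and":    # every child step must hold
            return all(c.achievable() for c in self.children)
        if self.kind == "or":     # any one child route suffices
            return any(c.achievable() for c in self.children)
        return self.achieved

# An image pull secret can be obtained via any one of three routes (OR).
pull_secret = Node("obtain image pull secret", "or", children=[
    Node("read secret via Kubernetes API", achieved=False),
    Node("read from container with host filesystem access", achieved=True),
    Node("read secret from misconfigured kubelet", achieved=False),
])

# Running a poisoned image requires both the poisoned registry image
# AND a deployment path onto the cluster (AND).
attack = Node("run poisoned container image", "and", children=[
    Node("poison image in registry", "and", children=[
        pull_secret,
        Node("registry credentials allow overwrite", achieved=True),
    ]),
    Node("image deployed on scale-out / node churn", achieved=True),
])

print(attack.achievable())  # prints True: one OR route plus both AND legs hold
```

Flipping any leaf on the single open route (for example, the host-filesystem leaf) makes the whole goal unachievable, which is exactly the property the overlaid controls are meant to exploit.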
So we decided to do it both ways, top-down and bottom-up, which probably doubled the amount of time it took, but we certainly got a lot of use out of this particular approach. I think we'll probably just talk to it. And yeah, certainly in terms of the thought process and the kind of mental abstractions that went into this exercise, coming from the top down and the bottom up did expose some gaps in our thinking the first time round. So it was a useful sort of internal quality assurance, if you like. I think it's worth pointing out as well that, as we were building out the attack tree, there were a couple of areas of the tree where, although we didn't particularly find a CVE, we identified something and subsequently someone published a CVE there. As we were building it out, we were always asking: how can we evidence that? How can we write a test case to validate that we are or are not susceptible here? And in many cases we couldn't quite think of a particular exploit. But lo and behold, two months down the line, someone else did, and it was actually within the tree; we just hadn't figured out how to exploit it. I think that was interesting from a blue team perspective, in that it's probably easier to write a detective control to see if someone is trying to do a certain thing than to actually execute it. So we can think ahead of someone actually breaking in, even though we couldn't evidence that someone could at that point. Okay, so, yes, we've got this RCE in a container. In this excerpt, we're just looking at starting pods, exec'ing into a running container, or extracting secrets. Subsequently, as we move further up, we can see how this could lead to pulling or pushing images from an image repository, or unauthorized access of data, cryptolocking or ransomware, all that good stuff that we haven't necessarily seen in Kubernetes yet.
But we're seeing it far more widely. And okay, as we can see, these attack trees are not especially complex, but when composed into a greater diagram they are a useful tool to communicate our understanding of the security of a system. And back to John. He's probably just putting his headphones on. So, you can take a look at the threat models, which we've had open sourced for a while now. They are quite extensive; I think we've got a more detailed version down here. They are in Visio, and it took me an awfully long time putting them together in Visio. There are a number of different major scenarios you can take a look at. In terms of next steps, though, we've really started to look at how we could automate this. We did look at it from the start, and we tried to think of different ways of doing it, and realistically we've come to the approach of creating these automated test cases, which we will continue with. I certainly want to do less Visio, because my concern right from the start was: as soon as I put it into a document, the document effectively becomes stale. How do I keep it updated? That's an ongoing challenge, really. So we're looking at putting it into some sort of automated format, or at least a data format, so we can drive it further forward. One of the challenges that we started to see is that, in reality, as we were building out the test cases behind this, it's not a huge leap to think of them as effectively exploit code. We were writing an actual functional test to try and validate the security of the system, and the way you do that is effectively by writing almost an exploit. So we're not particularly comfortable doing that, and certainly not open sourcing it. But perhaps there is purchase in reaching out to some of the breach simulation companies; we've certainly started to do that. Perhaps that's something that they could look into.
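A threat-model regression test doesn't have to be exploit code: the same tree node can often be checked declaratively. As a hedged sketch (not from the FSUG repository; the field names mirror a simplified pod spec and the chosen controls are illustrative), a detective test might scan manifests for the conditions the attack tree assumes:

```python
# Illustrative "threat model regression test": instead of exploiting the
# cluster, assert that the preconditions of an attack-tree branch are absent
# from a (simplified) pod spec.
def violations(pod_spec: dict) -> list:
    """Return control violations found in a simplified pod spec."""
    found = []
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            found.append(f"{c['name']}: privileged container")
        if not sc.get("readOnlyRootFilesystem"):
            found.append(f"{c['name']}: writable root filesystem")
    # An automounted token is the starting point of the RCE-pivot branch.
    if pod_spec.get("automountServiceAccountToken", True):
        found.append("service account token automounted")
    return found

risky = {"containers": [{"name": "app",
                         "securityContext": {"privileged": True}}]}
hardened = {
    "automountServiceAccountToken": False,
    "containers": [{"name": "app",
                    "securityContext": {"readOnlyRootFilesystem": True}}],
}

assert violations(hardened) == []
assert "app: privileged container" in violations(risky)
```

Checks like this are safe to open source and to run in CI, which sidesteps the discomfort of publishing working exploits while still keeping the tree honest.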
Or if anyone has any other ideas, we'd be open to suggestions on how we could get people to write some of that exploit code, or automated tests, which is really what we're looking for. So I'll leave that there for anyone to take a look at the actual link. That's the first area we were looking at as the FSUG. We can come back to questions, unless there are any now. The next area was addressing the skills gap. Clearly, there's a huge amount of focus in this area. All of our members are looking to hire exactly the same people, but in reality we need to train a lot of the people we already have on staff. When we were doing some of the analysis, we found huge amounts of documentation, books and slideware. But personally, I've seen much better engagement and level of understanding when developers and my staff are using a hands-on system. We've implemented and open sourced something like this before for application security, and I've done other things in this sort of area. Really, that's the key difference: I get much better retention when people are actually hands-on within an IDE or a command line and working through it. So we've basically built out a hands-on training system where you can stand up the Kubernetes infrastructure, load in exercises, and then attempt a particular training exercise. Now, it is sort of like a CTF, but the real idea behind it is more on the remediation, blue team side. Ultimately, the goal would be to allow the same exercise to be deployed to a user, who goes in and takes the red team approach, trying to break into a particular exercise. That same exercise would then be saved down and given to the next team, who would effectively try to do forensics on that same exercise. And then back to the blue team, who would try to defend against or mitigate those issues.
And that way you get the sort of rapport building between the multiple teams, and the ability to train all of those teams together on effectively the same exercise. And you get the blue team and the forensics team looking at real data from someone who has actually tried to break in, rather than, again, slideware. So this is what we've built and open sourced. Ultimately, this is a tool that allows learning and practicing on production-like infrastructure with impunity, because it is our shared belief that DevOps in general, and perhaps SecOps in particular, only really acquires the specter of realism when something is deployed in a non-trivial environment. So, the simulator does a few things. It will wrap Terraform to stand up AWS infrastructure; that is behind a bastion to avoid drive-by pwnage. Then there's a scenario runner, which will provision one of currently 25 different scenarios onto the cluster. That has incremental hints and guidance, and is intended to cover a range of skill sets. This tool was originally built as part of Control Plane's work for more general cluster debugging, training and security. What we've open sourced here is very specifically all the security scenarios, but it is a generalized runner; of course, there's nothing really specific to security about the perturbations. It's just a byproduct, let's say, of a different code path. It is a raw command-line experience, which means you're dropped into a shell and given access to any tooling that you desire. That shell is already relatively tooled up, but should you want to pull kube-hunter or whatever it is, there are no restrictions. The whole thing, of course, is open source. I mentioned this as something we will potentially submit into the CTF for SIG Security Day at KubeCon. Here is a list of the initial suite of scenarios.
These are attacks and remediations, and they take inspiration from military terminology, in homage to the origins of red and blue teaming. Just to run through this one quickly: RBAC sangar; a sangar is a fortified military position, I am reliably informed. In this case, we're looking at the audit logs and we notice that something is making calls which shouldn't be. We have detected this visually instead of with an automated test, but nevertheless we are then challenged with identifying the root cause of the problem and correcting the, let's say, Kubernetes deployment. Very good. Okay, well, correcting the deployment that is at fault in this case. I've summarized the whole thing, but there are discrete challenges in order to achieve all four of those tasks, and the remediation involves defense in depth. So, of course, we're attempting to communicate what we consider best practice for these things. Again, review of these will be very welcome, because to some extent there is subjectivity involved. It is possible to run all of this open source. Of course, be aware, if you're doing this in your organization's billing accounts, that these clusters are willfully vulnerable and should be treated as such. And then, yeah, just a moment discussing the lessons learned. Yeah, so we spent a lot of time looking at the actual experience of how we would do that training and how it would look to a developer, because if you just stand up a Kubernetes cluster, some of the developers that we have perhaps aren't as familiar with moving around it. So there's quite a lot of functionality within the system to enable them to just jump between the different nodes or hosts, or, depending on what the challenge is, straight into the container. So it is quite a nice developer and end-user experience. And a lot of the challenges that we've implemented there are really focused on the mitigation.
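The incremental-hints mechanic described above can be sketched in a few lines. This is a hypothetical shape, not the simulator's actual implementation: the class name, scenario name, and hint texts below are all illustrative inventions.

```python
# Hypothetical sketch of a scenario with incremental hints: learners can
# attempt the attack unaided, then reveal hints one at a time.
class Scenario:
    def __init__(self, name: str, hints: list):
        self.name = name
        self._hints = hints
        self._revealed = 0

    def next_hint(self) -> str:
        """Reveal the next hint, or a terminal message once exhausted."""
        if self._revealed >= len(self._hints):
            return "No more hints -- time for the walkthrough."
        hint = self._hints[self._revealed]
        self._revealed += 1
        return hint

# Illustrative hints loosely echoing the audit-log scenario above.
s = Scenario("rbac-sangar", [
    "Check the API server audit log for unexpected callers.",
    "Which ServiceAccount is bound to that caller?",
    "Inspect the RoleBinding that grants it access.",
])
first = s.next_hint()
```

Keeping the hints ordered from observation to root cause mirrors the blue-team flow the talk describes: detect, attribute, then remediate.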
But I strongly believe that to actually get to the point where you understand how to mitigate appropriately, you have to understand how to attack it in the first place. That's why the hints came in: a lot of people aren't seasoned red teamers or pen testers, so it's worthwhile at least having a try, and then we have the ability to give them a number of hints to get to at least the initial point of breaking through a particular scenario. Then, really, the focus for that team would be on remediation, and they can get the value out of the remediation part of it. The next steps in this piece are really centered around allowing a local development experience. At the moment it deploys all of the functionality into AWS; we have a bastion hosted in the environment we deploy it to. But often I'd much prefer the ability to just do it on the train, or give developers access to it in that way. So we've looked at kind and a number of other options; we'll be furthering that and seeing where we get with it. And then the next big one for me is really completing the vision I had for the project, which was to get that multi-user experience going, where we have the red team, blue team and forensics team people involved. At the moment, it's effectively one exercise with limited save-down; that's really the next challenge. So those are really just the two focus areas we've dug into so far. It's certainly work in progress, and it's open for any commentary or feedback on either of those two. We have a whole list of additional focus areas that we're going to go through as an FSUG team. I think the next one is on codifying controls, which I know a lot of our members are really focused on. We started to get into some of that as part of the threat modeling work. So I'll hand it back.
I'd really be interested in any feedback on the approaches that we've taken, or anything that is of interest, or if you've seen other people implement some of this work around threat modeling or automating the testing around threat modeling. I'd just love to hear your thoughts. Brandon, thank you. Great. Thanks, Jonathan and Andy. I think we have a couple of questions and comments. There's one from Mark on how this relates to MITRE. Yes, that's a great comment. We were looking at that, and it was before MITRE released the ATT&CK cloud matrix. So we haven't mapped it directly to MITRE, but I think that's something that would certainly be possible, and we'd be interested in doing that. So we, just by coincidence, just ended a meeting in the last hour with MITRE, and, you know, they have this Center for Threat-Informed Defense that's trying to formalize this; my company hasn't decided whether to join that, because that's a pay-to-play enterprise. But I do think there's value in trying to align what you're doing, not just with MITRE ATT&CK in particular, but with the MITRE unified ontology, which is the language we could use to automate this stuff. Chances are, I think, what you're doing will get ahead of what MITRE is doing, but MITRE might provide a commonality of frameworks that would leverage future work that you're doing. I'd be interested in taking that further, definitely. I'm sure everyone else in the group would be interested in that. All right. So my company, I'm not a sponsored member in this group, but I'm employed at Synchrony, which is a Fortune 155 or something in the U.S. So we're a sizable, large firm involved in this, and I'll try to get involved to the extent that I can on an unofficial basis. Yep, please drop me a note. Oh, we have another question from Jess and Carlos on attack trees. Do you want me to just say it, because it's probably easier? Okay, yeah, that'd be good.
So one thing I found very interesting about your attack trees is that the nodes themselves are labeled with an AND or an OR relationship. I've seen attack trees used in lots of cases, but I've almost always seen the AND/OR relationship on the edges themselves. The reason why that makes some degree of difference is, let's say there is an outcome where someone is going to break into your data center: they could either knock out the guard, steal their key and open the door, or they could just blast through the wall, or something like that, right? So there you have a situation where you'd kind of have to either break it down and add additional intermediate nodes, or do other things like that. I'm wondering whether you made a conscious choice to choose this style, or whether it just happened, or whether there's a reason to do it this way. I wish it was a conscious decision, but frankly it wasn't, and honestly, I think we were learning as we went along in many ways. As long as it means I don't have to redo those Visio diagrams, I'd be open to anything, really. It'd be interesting to see what difference that makes; maybe we take one page and take a look and see how that would affect it. I don't know the answer; I haven't tried that. But it'd be interesting to take a look. I think a lot of the value we got out of it was the discussion and the creation of that threat model. Just the conversations spilling out of those sessions really got us thinking: actually, that's a real issue, we need to take a look, evidence it, and put it down in that diagram. And if there's any way of more efficiently memorializing that, I'd be very interested to take a look. Great.
And since I have the floor for a second, I'm going to keep it for one second longer and ask a kind of follow-up, which is: have you looked at any of the other follow-on things people often do with threat modeling, where you try to assign risk probabilities to nodes, or look at collusion cases where an attacker has capabilities X and Y that in a weird way may allow them to break through different parts of the infrastructure, or anything like that? Yeah. So one of the things that we haven't really open sourced is the probabilities. We looked at the probabilities, and I think that helped sway some of the other decisions we were making. But also, again, more on the visualization side, we were able to highlight the value of some tools. We looked at the probability of X and Y and Z happening, and then: where are the tools that we've used, created or bought, and where are they actually providing value? So if there's an 80% chance that this is going to happen, where are those tools on the attack tree? If we've bought this very expensive tool, is it really the one that's catching the 80%, or is it the one that's just about covering the 2%? That was quite an interesting visualization to overlay onto the tree. So we haven't updated the map itself, but we effectively have multiple overlays onto it to show: these are the probabilities, this is mitigated, this isn't mitigated. Yeah. When we did a similar thing for SPIFFE/SPIRE, it was really, really illuminating. And we've done this in a few other contexts, where the things that people thought were really important often weren't, and vice versa. Well, one of the things I wanted to try and do, but didn't have any data for, is: where do you get the probability from?
Because we were effectively making it up, frankly, and maybe there's a better way of doing it, but I didn't have any TTP knowledge, back to the MITRE framework, of who was actually exploiting it in this way. If we had that capability and that data, I think it would have really given another lens on this, as opposed to us thinking: if I were out there, what is the likelihood of me getting a service account, or getting this key or that key? So if we can start to get some of that probability data, or attack-based data, into it to evidence some of the probabilities, I think that'd be really interesting. But your point's well made; it's pretty illuminating, that one diagram. You can get a lot out of it. Right. We had a similar problem. We actually had different people independently try to come up with and justify a score. Then we sat together and discussed, and we never really tried to push people to change their score, but if you took on some information you got from somebody else, you could. In the end, we did kind of a median sort of thing as a result, and I think we were all pretty happy with how that came out. But it's very hard; there isn't real data for most of these things, as far as I know. Right. But I like the idea of getting a couple of different people's opinions and then perhaps applying that and open sourcing it. Hey folks. By the way, I was just listening, and this might seem blasphemous, or maybe it's ignorance, but what I was thinking, because the conversation became mathematically inclined, and we were talking about ANDs and ORs and Boolean logic, right, but we also talked about it in a probabilistic way: what about doing some kind of formally verified threat modeling? Because ANDs and ORs are Boolean logic, so how do we get there? If that's possible, that is. I've thought about it.
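The scoring approach discussed here (independent per-reviewer estimates combined by median, then rolled up the tree) can be sketched briefly. The numbers are made up for illustration, and treating AND/OR children as independent events is itself a modelling assumption worth flagging:

```python
# Sketch: median of independent reviewer scores, combined up an AND/OR tree.
from statistics import median

def group_score(scores: list) -> float:
    """Median of independent per-reviewer likelihood estimates (0..1)."""
    return median(scores)

def and_prob(children: list) -> float:
    """All child steps must succeed (independence assumed)."""
    p = 1.0
    for c in children:
        p *= c
    return p

def or_prob(children: list) -> float:
    """At least one child step succeeds (independence assumed)."""
    q = 1.0
    for c in children:
        q *= (1.0 - c)
    return 1.0 - q

# Three reviewers score each route to "obtain image pull secret" (OR node).
via_api     = group_score([0.2, 0.3, 0.25])   # illustrative numbers only
via_host_fs = group_score([0.5, 0.4, 0.6])
via_kubelet = group_score([0.1, 0.1, 0.2])
pull_secret = or_prob([via_api, via_host_fs, via_kubelet])
```

The median keeps one outlier reviewer from dominating, which matches the "discuss but don't force convergence" process described above; replacing the made-up leaf scores with TTP-derived frequencies would slot in without changing the roll-up.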
I know a number of members are doing similar things around AWS. I would be very interested to see how you would be able to do that with an unbounded problem like Kubernetes. There are many people better schooled than I who could answer that. I've seen these kinds of threat models being mapped onto logic systems, and I think they work very well with that, because it's very structural. I can't say I've seen the same with formal verification. Okay. Yeah, I couldn't find a way, but I'd be very interested to find out if someone did. I'm thinking it's just too big an unbounded problem. I was just asking out of curiosity. Me too. I'd be very interested to find out if anyone has an approach to that. So I was wondering, I was interested in the threat model. It seems like the boundary that you had was between components within a single registry and cluster, but do you see the model being very similar when you bring it into a multi-cloud environment, or do you see more components at play then? I think there would probably be more components at play, and this took a long time. I mean, if you take a look at the PDFs, this took a lot longer than perhaps it should have; it was a couple of months, and I had to bound it in a certain way. So it's a single repository, single cluster. There will absolutely be differences if it's multi-cloud. And the thing I was really, really interested in was the supply chain security and SDLC required to deploy Kubernetes, and how that would affect it. When we go again, and we will go again, that's where I think the next focus will be, effectively. Well, first of all, finding out whether anyone has already done that, and secondly, trying to look at it from the supply chain perspective with respect to Kubernetes. Well, you're working with the right groups there, with Control Plane and so on, to answer that. So, and there's a, yeah, you can...
But for a number of reasons, I'm not allowed to accept a gift like that, though there is an awfully nice hoodie. Yeah, I've seen the nice hoodies out there before. So, yeah, all joking aside, others, like the people in SIG Security, would I think be very interested in this. And I know we've also done a lot of work on the threat landscape and things like that, so we're going to be very curious to do a deeper dive into some of the things you produced there, because they're starting to look a lot like work that Brandon and I have been doing, although we've been focusing more on, you know, we're doing like a mock-up, and the stuff we've done so far has focused very early on, like the software supply chain side of it. Whereas your focus seems to be later in the process, so I think there'd be some nice synergy. No, it really has been from a threat modeling standpoint; that was the initial piece, but in reality, a number of the FSUG team are very, very heavily looking at SDLC and supply chain, and I am very focused on that too. So as I say, that is going to be the next threat model, but also implementation. Personally, I'm going into some of the open source projects that have been discussed before to make sure that we cryptographically evidence and validate all of the individual points throughout that supply chain, and look at implementing that. And I think, just because we've only threat modeled the end side, the runtime, everyone's also focused very much on the supply chain as well. That's next. I'm also curious. Oh, sorry. Go ahead, Tara. It seems like a lot of the work you've done isn't specific to financial institutions. And so I think that, if you're open to it, through whatever mechanism, we can link to the work that you're doing, because people come to SIG Security looking for resources like this and we don't have as many resources to point to as we'd like.
And it's not that they're not out there; it's just a process of highlighting things and linking things. So it'd be great if, offline, we can brainstorm or start an issue on where would be a good place to put this, so we can make people aware of it. And the other thing I want, though I don't know the how yet: one of the things that brings me to wanting to be involved in this group is that in reality this is unbounded, yet there are frameworks for thinking about it that can make it more approachable for people who are coming from a traditional security background into cloud native, or the reverse, coming into the cloud and then adding security, which is frightening. And there might be a way we can group things, or do something, so it doesn't seem like a 3,000-point list, which itself contributes to a lack of security, because if you can't execute on it, then you can't make things more secure. Yeah, and then maybe, to Justin's point, if we stipulate the probabilities on some of these areas, that would help people focus down from 3,000 points into some key areas, perhaps. But I'd certainly be open to hosting it elsewhere. It came from the group, but it's absolutely not financial-specific. Yeah. We can start with the link, or whatever seems like the right thing to do. Yeah, I was just about to say, when Sarah was talking, that I've been working with this, and it seems like the requirements for federal and financial overlap quite a bit. So it seems like these threat models could definitely be used in the federal context as well. I can attest to that. I have a background in federal services, and I can attest to that, yes. Great. Yeah, I mean, I'm happy for them to be used wherever it makes sense. Just one more point, I guess, on Justin's comment about the supply chain.
One of our next focus areas is codifying controls, but that's tied, I think, to the supply chain and SDLC. So, yeah, more to come in that space. And we really just don't want to reinvent the wheel or replicate things, and certainly not do it within the FSUG if it already exists somewhere else. I think perhaps we have a separate conversation, a side conversation, about that. That'd be useful. Yeah, and I think the work you're doing, I think this is one of the neat things about this group: it includes the people who make the software as well as the red team people who try to break it. And I wonder if there are things you came across that shouted out to you, like, oh, wow, if this were built differently, this wouldn't be so hard. There were, and I noted a couple of them down, which we should probably rehash, and that was actually one of the reasons for doing some of this work: if we identify certain issues or certain areas, we can have a discussion with teams upstream and make it so. I mean, the one universal aim for some of this work is automated testing and automated control or security validation. Yes. So if we had the ability upstream where, in addition to Kubernetes, we're able to either run Kubernetes itself through an SDLC that validates its security by validating that threat model, or we have effectively a financial services test pack that our members can validate and use. Instead of every single financial institution or member of FSUG standing up a dedicated team to learn how this works and run it, which is just not a competitive advantage, it's do it once, do it upstream. And again, it's not financial services really, it's just general security. I'd be interested in seeing if we can push that. Yeah, I totally agree with that. In fact, I was going to interrupt you and make the same point.
In fact, you could argue that federated DevOps is kind of a key thing that needs to happen. If you can't push the test cases through the supply chain, you can't keep up with the releases that are happening across it. And this is central to doing it. You know, I'm involved with the Homeland Security efforts for finance, and I keep trying to tell them: don't try to boil the ocean of computer security, that's not your role here. It's about what finance needs. And a lot of what finance and these critical infrastructure sectors need is understanding what parts of the economy are tied to risk, how to discover that risk, and how to align your business and software engineering processes along it. And that is something we can contribute. Whereas cryptography and signing and a lot of these other things cut across all software production. Yeah, but it's also hard for them, because even today I was talking with someone and they were saying that they're trying to adopt these kinds of risk frameworks, but it doesn't align with the cloud native security capabilities. And that's a huge friction for them, because on one side they need to be compliant with all this regulation, right? But at the same time, the cloud native capabilities don't provide that seamlessly. And that's a friction for them. So while I understand what you're saying, I think there are also deeper problems in their culture; for them to adopt this then becomes a process and a people problem, I would say. Well, there are many problems, but I think if we had the automated testing capability up front, that would certainly solve, or help alleviate, one of them. Yeah. And I'm not just talking about PCI or some of the regulatory standards; I'm talking about that plus basic security.
But then it gets to the point of: how do you write an automated test, and maybe this is because we're thinking about it incorrectly, but how do you write an automated test in a way that validates the intent of the attack, not the configuration of your server? I prefer the intent. And when you get to that, you're effectively writing an exploit, which then moved us on to: let's talk to the breach assessment teams, because that's more in their wheelhouse, as it were. Yeah. Yeah, yeah. Sorry. It's quite all right. So if I can just ask: by exploit, what do you mean exactly? Because what I'm hearing regarding the test for the control is that you are basically testing both the positive and the negative, right? You're testing to ensure that the control is in place and doing what's expected, or it's not. And technically speaking, these are verifiable without necessarily requiring an exploit, because realistically you're not trying to test an exploit for that control; you're trying to detect whether or not it's working as intended. I guess in this audience I should be very careful when I use that phrase; it's not usually this sort of audience. In reality, what I'm referring to is that rather than testing to make sure that I have configured a particular security parameter, it's a better test for me if we actually try to perform that security action. So I'll stand up a privileged container and then try to use that service account token to hit the API server and pull all the secrets out of it. And I'd rather do it that way than run a test that validates: yes, you've got a pod security policy with the preventative control in there, and you don't have a security token. Because if we'd made a mistake, or it's not applied to the appropriate namespace, or Kubernetes has changed in some way that means the security feature has changed, I wouldn't know that that's the case.
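The "test the intent, not the configuration" idea described here can be sketched as follows. The probe itself, attempting to list Secrets with a pod's mounted service-account token, needs a live cluster, so the decision logic is shown as a pure function that can be exercised standalone. The token path and in-cluster API URL are the conventional Kubernetes defaults, not details from the discussion; this is an illustrative sketch, not the FSUG's actual tooling.

```python
# Conventional in-cluster defaults for a pod's service-account credentials.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
SECRETS_URL = "https://kubernetes.default.svc/api/v1/secrets"

# In a real probe, from inside a freshly launched pod, something like:
#   curl -sk -o /dev/null -w '%{http_code}' \
#        -H "Authorization: Bearer $(cat $TOKEN_PATH)" $SECRETS_URL
# would attempt the attack action itself (cluster-wide Secrets read).

def interpret_probe(status_code: int) -> bool:
    """Return True if the preventative control held.

    The test passes only when the API server actually denies the attack
    action, rather than when some policy object merely exists. A 200
    means the pod really could read secrets, so the control failed no
    matter what the configuration claims.
    """
    if status_code in (401, 403):
        return True   # RBAC/admission denied the call: control effective
    if status_code == 200:
        return False  # secrets were readable: control failed in practice
    # Network errors, 5xx, etc. are inconclusive, not a pass.
    raise RuntimeError(f"inconclusive probe, HTTP {status_code}")

print(interpret_probe(403))  # → True
```

The key design point, matching the discussion above, is that an unexpected status is treated as inconclusive rather than as a pass, so a misapplied namespace or a changed Kubernetes behavior surfaces as a failure to investigate instead of silent green.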
Whereas if I'm trying to simulate the attack, it gives me more fidelity. I don't know, Robert, if that answered your question; possibly not. Slightly. I'm just wondering if that is necessarily a bad thing per se, because typically, if you're ensuring that a control is in place, you're going to have to test both the positive and the negative, and that effectively falls under that. So I'm not sure if I'm just misunderstanding. I'm really suggesting that as an FSUG group, we'd probably prefer not to be writing exploits; we're not writing exploit or security penetration testing tools as part of the FSUG. This is probably a great group that would be able to do that, or to discuss it, but it's probably not something we'd want to do necessarily. Happy with the blue team tooling, but not necessarily the other way around. We may be able to uncover the groups or projects that might have the features or tooling that would be useful for that. Like, I'm not sure we're about to do an assessment of Cloud Custodian, but that might be a framework that could be applied to this problem. And it might be something where, looking at it from a general infrastructure standpoint: any Kubernetes deployment maybe has some things that don't need to be authenticated because they just serve public information, right? And other stuff that requires authentication. And if you look at it at that lower level, you're not saying, here, let me tell you how you might be able to hack into my bank. If we decouple them a little bit, then we're saying, well, you have private things and public things, and this is how you look at them and how you might verify that the private thing is private and the public thing is public. And then you can look at it more from a, well, certain things need to be very private and other things, you know.
I think that's the way to disconnect it. What we're really referring to is a reputational situation. We don't want the financial industry writing tools that can turn up in newspapers. Other people in the community can write those, but it's not as easily palatable from a reputational standpoint for us. We'll do development and defense all day long, but that part is for other people to develop and open source. Obviously, we've already implemented this internally with multiple different banks, but it's not something that you'd open source. All right, I'm going to just interject here because we're over the top of the hour. So if Jonathan and the FSUG folks could share the link to the slides, what I'll do is create an issue, and then we can continue the discussion there if there's a need. Thank you everyone for your time. I look forward to continuing the conversation offline. Thank you. And thank you so much for presenting; fabulous, that's appreciated. Thanks, Jonathan and Andy. Thanks. Before we get off, just a quick announcement: next week is a working session. If you have any items, issues, or PRs that you want to talk about, or anything else, put it under next week's working meeting. If I could ask a question to the speakers, a slightly more primitive question; I may have missed it earlier. Are these attack trees based on actual attacks that have been experienced in the financial services industry, or what is the basis or origin of the attack trees' composition here? No, it's a logical exercise covering every possible way we could think of to attack the Kubernetes system. It's a full threat modeling exercise. It has no specific bearing on particular attacks. In fact, we thought about TTPs, and I think we discussed them earlier, perhaps from the MITRE ATT&CK framework, that would evidence some of this information, but at that point there was no information to hand. So it was a purely logical exercise.
So logical from the perspective of the financial application, or any application? So it has no biasing from the financial services industry? Correct. Okay, so it's a generic proposition. Okay, thank you. All right, thanks again, Jonathan and Andy, and thanks everyone; see you next week. Thank you very much. Thanks everybody. Thank you. You too.