I actually have a webcam. Is it the norm to have video on for most attendees? I see a number of people tend to turn that on, especially if you're checking in. And if I'm facilitating the meeting, I definitely try to prioritize it, unless I'm in a situation where that's not practical. Okay. Am I coming through, video- and audio-wise? Yeah, great. Okay, I put together some pieces for the agenda. There was already a presentation put in there by someone else, so my plan was just to stick to the format that's in place: quick attendance and stand-up, then the presentation, so if anyone needs to back out early they at least get to see it, then the check-ins, then issues and PRs, then opening of the floor for whatever else follows. That's about it. That sounds about right. Yeah, I made a note in the notes that check-ins actually started first as a workflow; the attendance and stand-up has really absorbed most of the check-in workflow, unless there's a major check-in and then we slot something in later. My only hesitation is whether check-ins and any quick notifications around issues and PRs should get pushed up front, like "hey, this needs attention," before we go into something like a presentation. But it's a minor detail; otherwise it looks great. Okay, I'll put an extra note there, "critical check-ins" or something like that, or maybe just "check-ins," and then the one I already have in row four would just be additional or one-off check-ins. I guess the right way to go would be to have the presentation and the check-ins first, let all the people who maybe only have a 30-minute window get what they need, and then they can check out if they don't want to stay for all the PR discussion. That's right.
So, one thing for today: since we do have a presentation and we want to get into discussion, if you don't have something really important or pressing, it's better to keep your update short. And then something we haven't really documented but is our standard practice (actually it's only been a couple of months) is that when you sign in, you put "no update" beside your name if you don't want to get called on, and when you're facilitating you basically read through the list, calling on anybody who doesn't have "no update." Thank you. Actually, I'll go put that in there now. Great. What I'll do is, I even added subheadings here, so we can just have a hot link for the attendance section that I'll throw in the chat. Cool. We'll let folks log in, and then I'll introduce you, let them know it's your first time, and ask them to provide feedback. I guess commenting in the doc is probably best, to keep it centered around the document we're working on. Okay. Or do you have any other preferences? Did you set up a PR where folks can comment? I added a couple of one-off ones, and I did add a few PRs relating to documentation: one is on the scribe role, and the other was a quick-start guide for facilitators, but the notes I got from Brandon and a couple of others from the last meeting suggested that would be overkill. And then I believe I have, let me just see the backlog here. Yeah, I think it's great to also be able to see what someone else is capturing, and to add to it if you need to. We'll see if editing in the small table is more challenging. Yeah, that's my one concern.
But I thought that if we at least had one shot at keeping them side by side, they'd be cohesive instead of two testimonies one after the other. We'll see how that turns out. All right, we probably ought to get started; we're approaching a quorum number of folks. So, welcome, everybody. We have a presentation today, and I just dropped a link to our meeting notes. We want to dive into the presentation and leave ourselves time for discussion, so let's prioritize getting through stand-up relatively quickly. If you're new, we'd love to have you introduce yourself. Today we have a new facilitator: Matthew is going to be facilitating. This is Matthew's first time, and he brings some experience and some lessons learned from elsewhere. So I'm going to shut up and hand things over to Matthew. Any feedback you have, it would be great to capture your thoughts in the meeting-notes document or in an issue on our GitHub. Thank you, Matthew, all yours. Thank you, Dan. So I'm just going to go through the regular workflow we have: attendance and stand-up. There's the link in the chat, so anyone attending, please feel free to add your name there, and if you have no updates or comments, or don't want to be called upon, please put "no update" in parentheses beside your name and we won't ping you. Feel free to leave that off if you're new here and just want to give a quick intro. Besides that, I thought we could move into today's presentation: Cartography, using graphs to improve and scale security decision-making; the link is on the slide. Let me just see who our guest presenter is today. That would be me. Yeah, Matthew, before you get too far in, make sure you get scribes signed up here. We've got Ash.
So, sorry, we have Ash Narkar, if I've got that right, as one of the scribes. Do we have a second volunteer for meeting minutes? Okay, we have Ash, and if someone else can just post in the chat; if not, I'll fill the role myself today. Thank you. Alex, cool, so dive right in, please. Thank you. All right, I'll go ahead and share my screen then. Is everyone able to see my screen? Coming through clear? Cool. All right, let's go ahead. Hi everyone, I'm Alex Chantavy. I'm a software engineer at Lyft, and I'm a maintainer on a project called Cartography. Cartography is a Python tool that pulls in infrastructure assets from many different data sources and puts them all into a Neo4j graph database. What we found is that having things in a graph database is very helpful for correlating multiple sources, and it's also helped us answer some very complex questions. Cartography is open source; we open-sourced it almost a year ago, so it might be coming up on its birthday. Happy birthday to Cartography, and what better way to celebrate than sharing it with the security community. Thank you all very much for having me here. At this time we're not necessarily looking to join the foundation; we're here for feedback and for eventual submission, and our hope is that you'll find this as useful as we have so far. Some of the motivations going into the project: the bottom line is that we found the cloud is really complicated. There are all kinds of different assets and all kinds of different permission relationships, and not understanding this, getting this stuff wrong, can have some pretty bad consequences.
A lot of us who have worked on Cartography come from an offensive-security background, where we worked as red teamers, and what we found is that looking at things from a graph point of view is very helpful for identifying targets and performing lateral movement. We think others can find this useful as well: if you're a blue teamer, a service owner, or in any number of different roles on a security or infrastructure team, I think looking at this can be pretty useful. I took a look at some of the CNCF SIG-Security use cases, and I think we fit into a couple of these. As a security administrator, I can audit all accesses and understand my policy grants. As an enterprise operator and user, I need a centralized way to look at all of these resources. As a developer, I can perform an access check. As an implementer, I can perform auditing of resource access. Although we didn't necessarily build it with these scenarios specifically in mind (like I said, it came from a pen-testing perspective), I think there are lots of ways it fits into them. If at any point something doesn't make a lot of sense or I'm being confusing, please interrupt me; I want to make this interactive. I don't see any questions so far. So I'm going to dive right into some of our use cases. With this first set, understanding access checks, understanding auditing, and looking at organizational resources in one place, I'll show you how we do that at Lyft as a motivating example. At Lyft we use Okta. Okta is a single sign-on provider: you authenticate with Okta and it delegates your access to all sorts of other providers and resources, AWS being one of them.
The way Okta works is that you have an organization, a group, and a user, and then you have a human identity. So I myself can have an Okta identity that can be a member of a group, and this is modeling what that would all look like in a graph. Okay, cool. Sorry, just checking something. All right. One thing I want to highlight here: if we wanted to keep an inventory of all of our Okta groups and all the Okta users in a relational database, every single one of these edges would be a join, and joins can work, but if you then want to correlate this with other things, the problem of keeping track of it all in a relational database quickly gets pretty complicated. As I mentioned before, we use Okta to delegate access to AWS. As far as I understand, it's a fairly common workflow: an Okta group is allowed to assume an AWS role to become an AWS identity, and that AWS identity belongs to an account. We can layer this together with other things too. We can layer in an HR organization structure, so we can go from my identity to HR data from a provider such as Workday. At Lyft internally we use a different provider, but Workday works as an example. So here we have all sorts of different sources put together in this graph view, and you can augment it further: I myself have a G Suite identity, and that G Suite identity lets me connect things like Duo's CRXcavator, a tool that identifies risky Chrome extensions installed throughout your organization. Putting all of these things together leads me to my first live demo, so let's pray to the demo gods. All right.
I want to show you what this looks like live. This is a local database instance running on my own laptop, with a visualization layer on top of the standard graph database, so it looks a little prettier. I have here user123. User123 is a human, and user123 has an Okta user identity. And this Okta user identity (I actually wasn't supposed to move that node, so I've already messed up my demo a bit) is, as you can see if I expand this, a member of a number of Okta groups, including this AWS admins group. If I expand the AWS admins group, it tells me there are so many other things to go to, but I'm only interested in this AWS role, so let's expand it. And then this AWS role: let's expand this role, and I get to another role. I'll back up and explain this whole path in just a little bit, once I get everything expanded here. The idea is that I have a human that happens to have an Okta user identity. That Okta user identity is a member of this Okta group, and because it's a member of this group, it's allowed to assume this AWS role and become this AWS identity. AWS has a feature where, if you are a certain role, you can assume other roles; that's what this STS AssumeRole-allow relationship is. So if I am this role, I can assume that role. The reasoning for a cloud provider offering this functionality is flexibility in your organization. And if you come from an on-prem hacking background, you should be perking up right now, because this is very interesting: this is literally lateral movement. If you have the ability to be this role, you can assume that one. And then what other things can we do? What's interesting about this role?
Well, let's see, if I double-click this, the thing I want to highlight: this role, the 603 role, is a member of account df. AWS separates assets into different accounts. An account, if you're familiar with Azure, is like an Azure subscription: it's a billable unit, and organizations delegate access and organize a lot of their assets into different AWS accounts. You can have a billing account, your service account, all sorts of other things. The main thing I want to highlight in this demo: our user can assume this role that lives in account ABC, and because we are this role, this role has an AssumeRole-allow relationship with this 603 role that lives in the df account. The TL;DR is that this highlights the ability to perform actions as another role that lives in another account. Is this a bad thing? Not necessarily. Like I said, organizations can set things up like this because they know they want this kind of behavior. However, this information is very difficult to see if you look through the AWS console or try to pull it up yourself, but through this sort of exploration flow with Cartography you can pull it up, and we like to highlight these kinds of relationships. This is the problem space we're very interested in: being able to trace permission relationships and make sure all of our isolation assumptions are very well understood, especially when they cross boundaries between services. This wasn't all AWS; this went from Okta to AWS, and there are many other pivot paths we're interested in. We have a raised hand, if you want to go ahead with the question. Thank you.
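The cross-account AssumeRole chain walked through above can be emulated with a tiny in-memory sketch. The names here are dummy data in the spirit of the demo, not Cartography's actual schema; the real tool stores these edges in Neo4j and queries them with Cypher.

```python
from collections import deque

# Directed edges modeled after the demo: Okta group membership lets a user
# become a role, and STS AssumeRole-allow edges let one role become another.
CAN_BECOME = {
    "user123": ["role-abc"],    # via the "AWS admins" Okta group
    "role-abc": ["role-603"],   # AssumeRole allowed across accounts
    "role-603": [],
}
ROLE_ACCOUNT = {"role-abc": "account-abc", "role-603": "account-df"}

def reachable_roles(principal):
    """Breadth-first search over AssumeRole edges: every role the
    principal can end up acting as, directly or transitively."""
    seen, queue = set(), deque([principal])
    while queue:
        node = queue.popleft()
        for nxt in CAN_BECOME.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# user123 ends up able to act in an account it was never directly granted.
roles = reachable_roles("user123")
accounts = {ROLE_ACCOUNT[r] for r in roles}
```

This is the lateral-movement insight in miniature: a plain transitive closure over "can become" edges surfaces cross-account reach that is hard to spot console by console.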
I was just curious about the role names here, because normally they'd have a human-readable name, and these look as though they're all random numbers. Is that something introduced as part of the import, or is that the actual role name? Oh, this is all dummy data. Okay. Yeah, I just made them random numbers for the purposes of the example. I see some other questions in the group chat; sorry, go ahead. Oh no, I was just going to power through them back to back. The next one was from Pranay: can we set any rules or alerts over these graphs, if some relations should not be allowed? We do have a feature that performs exactly that function. Admittedly it's not as full-featured as it could be, but it does exist; I'll talk about that a few slides later on. And I'll just add one of my own on top of that: this appears to be rendering the data, and I'm wondering if it uses APIs where you could go in and, say, disable or decommission a role or an account. If an account were compromised, would this allow you to directly jump in and say "what resource should I disable to limit the damage to the running deployments?", or is it more for logging and auditing as opposed to actively stepping in? So, the tool isn't focused on real time at the moment; it's not very good at that. We are definitely looking at real time, because running a full sync admittedly takes a decent amount of time to pull in all of these nodes, process them, and load them into the graph. There are other ways to deliver more real-time scenarios, such as by listening on a CloudTrail log, things like that. But we focus on visibility. You can't, for example, click on one of these things and, boom, turn it off. We don't have that capability right now.
But what we found is that it gives you the visibility to go to another console and take that action. Okay, and we had two more questions in the chat; I'll defer to you rather than reciting them wrong. Okay, let's see. "Is there any functionality for viewing changes over time, for the sake of auditing changes to the environment?" Oh yeah, I'll talk about that in just a little bit. "Are you using a SQL data store? What are you using as the back-end data store, and at a high level, how are you storing data and performing correlations?" We don't use SQL; this database is Neo4j. At a high level, for storing the data and performing correlations, we have a schema, so the diagrams I was showing are the way we do our data modeling. "Is this Neo4j's built-in UI?" The view I have right here is actually Linkurious, which I don't want to let distract too much from the actual topic, but it's a visualization layer on top of Neo4j that just makes things nicer for presentations. I can show you this in vanilla Neo4j, but if I were to expand too many of these nodes it would frankly blow up my browser, so we're going to go through things this way. Any other questions? Okay, feel free to interrupt me whenever. So if we take a look at all of this, we can zoom out even more. This is the reason I didn't want to use the vanilla Neo4j UI to show all of you: if we want to visualize all of the possible cross-account IAM role-assumption opportunities for all of the accounts in our fake organization...
...this looks kind of amazing, kind of cool, but the point is just to show you that the cloud is complicated. Even for a medium-sized organization this is nothing out of the ordinary, honestly, and visualizing all of these things, yeah, this is intimidating. However, we provide ways to consume this data and make it a lot more tractable, because while this is very impressive to look at, let's face it, it isn't really actionable. I'll show you what I mean by that in a little bit. Next, I also want to talk to a couple of other scenarios. Another scenario that fit in with the security use cases was: as a network operator, I need a way to look at the networks in my organization, and I need to understand the effect of changes to network policy. That's exactly what I'm going to show you right now in this quick cross-account connectivity demo. A little bit of a disclaimer: these examples are pretty AWS-heavy, mostly because Lyft is a very AWS-heavy shop and it's our area of familiarity. But I don't want to say this is our only focus, or that we won't welcome other clouds; we're definitely open to the whole project exploring many different clouds. In this particular example, though, I've got an AWS account, and it has a Virtual Private Cloud, a VPC. If you're more familiar with Azure, I guess the closest analogy would be a virtual network. A VPC has a number of different subnets that you can set up, and this one is the 10.0.0.0/16. One thing that's neat with VPCs is that you can take these subnets and peer them with other subnets. These pink relationships I have here in the diagram are VPC peering relationships.
What that means is, if I have a host connected to this subnet, it's able to talk to a host in that subnet. And what I want to highlight is that these subnets belong to VPCs that live in different accounts. You'll see that this is a theme of my presentation: I love looking at things that cross account boundaries. A thing I want to highlight here: I have this named account for service ABC. It's got this VPC, it's got this subnet, and the subnet is peered with this 192.168 /24 that lives in this VPC, that lives in this account, and I don't even know what the name is. What's going on here? We have no idea what this AWS account is. Well, I'll explain why this happens. What Cartography does is enumerate all the accounts and all the network assets. We get the VPC data and the subnet data, and when we enumerate the VPC data, we enumerate the peerings that are available. By calling that, we get back some JSON blobs that tell us: hey, we know about this other CIDR block, we know about this other VPC, and it happens to live in an account that you don't control. And because you don't control it, we can't get its name, but it will give you back IDs. So, to repeat that: in this particular organization's case, it is possible to discover cases where your account is peered with assets that belong to an account you don't control. There are ways, later on, that we can further this analysis and build relationships to make this exploration even easier. A quick question from my end.
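The discovery step described above reduces to checking each peering record's remote account against the set of accounts you own. A minimal sketch, using made-up peering records shaped like the JSON blobs the demo describes (the field names are illustrative, not Cartography's actual schema):

```python
# Accounts we own; anything else appearing in a peering record is an
# unknown party whose name the cloud API won't give us, only IDs.
KNOWN_ACCOUNTS = {"111111111111"}  # e.g. the "service ABC" account

# Peering blobs as the API might return them (hypothetical shape).
peerings = [
    {"requester_account": "111111111111", "requester_cidr": "10.0.0.0/16",
     "accepter_account": "999999999999", "accepter_cidr": "192.168.0.0/24"},
]

def peerings_with_unknown_accounts(peerings, known):
    """Return (account_id, cidr) pairs for peerings whose remote side
    belongs to an account we don't control."""
    flagged = []
    for p in peerings:
        remote = p["accepter_account"]
        if remote not in known:
            flagged.append((remote, p["accepter_cidr"]))
    return flagged

unknown = peerings_with_unknown_accounts(peerings, KNOWN_ACCOUNTS)
```

In the graph version, the same check falls out of a path query instead of a loop, which is the point of putting the inventory in Neo4j in the first place.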
Is there either a set of best practices or a set of predefined rules, almost like a linter but for this software stack, that points out obvious known-bad cases, such as what you described there (an account in your setup that you don't control) or other corner cases? Are there established rules or examples, so you can just say "run this report, run this test, and tell us what's wrong"? This is an area of, I guess, active research. Neo4j's Cypher query language is basically like SQL, but it lets you draw out these sorts of relationships so you can quickly identify these paths. I'll show an example of what that query language looks like, but the idea is that you would draw out a relationship from here to here to here, to match this whole path, and whenever that query matches on something, you can fire an alert or take an action. Again, I'll get into that in a little bit. It's not specifically tailored for those scenarios, but we have the ability (I'll show this in a little bit also) to write what we call analysis jobs. In an analysis job we would identify paths that look like this and draw a shortcut relationship between these two accounts, to say "this is something you might want to look at," so you don't have to traverse the whole path and have it be that cumbersome. Thank you. There's one more question, from Elvin, in the chat: "Are you, or would you consider, leveraging high-cardinality data like VPC flow logs?" We'd definitely be open to exploring that; it goes into what I was talking about earlier, consuming new sources of real-time data such as CloudTrail logs.
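As a hedged illustration of the shortcut idea mentioned above: analysis jobs typically carry their Cypher as strings, and a rule like "draw a direct edge between two accounts connected through a peered-subnet path" could look roughly like this. The labels and relationship names here are invented for the sketch, not Cartography's real schema.

```python
# Hypothetical Cypher: match an account whose subnet is peered to a subnet
# in a *different* account, then MERGE a direct shortcut edge between the
# two accounts so later queries don't have to walk the full path.
SHORTCUT_QUERY = """
MATCH (a1:AWSAccount)-[:RESOURCE]->(:AWSVpc)<-[:MEMBER_OF]-(s1:EC2Subnet),
      (s1)-[:PEERS_WITH]->(s2:EC2Subnet)-[:MEMBER_OF]->(:AWSVpc)<-[:RESOURCE]-(a2:AWSAccount)
WHERE a1 <> a2
MERGE (a1)-[:PEERED_ACCOUNT]->(a2)
"""

# Against a live database this would be executed via the Neo4j driver,
# e.g. session.run(SHORTCUT_QUERY); here we only hold the statement.
```

Once the shortcut edge exists, "which accounts can reach which" becomes a one-hop query that can feed an alert, instead of a long path traversal.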
That's not a design goal in at least the next three months; in the next six months we have on our roadmap to start exploring at least CloudTrail, and VPC flow logs fall into that same family of real-time data sources. Excuse me. And that's a pretty good segue into doing some analysis. You asked how we make these shortcuts, so let's pivot a little, because we're going to ask ourselves: all right, I have a bunch of compute instances. How do I know if they are open to the Internet or not? This is a complicated question to answer, because there are all sorts of security-group rules and things you need to compute to figure out what's going on. This is the data model for that: we've got our instance, it's got a network interface, it's a member of a security group, which has a number of different firewall rules, which are connected to a number of different IP ranges. So how do we tell if this EC2 instance is connected to the Internet or not? We do something like this in an analysis job: match on an IP range, specifically the Internet, the 0.0.0.0/0 network; look for all of the IP rules that come from the Internet; find the security-group rules that roll up to those IP rules; and see whether they're connected to any EC2 instances via their network interfaces. So we're drawing out this path, and if any EC2 instances satisfy these criteria, we set the flag exposed_internet = true on them. What that does is, rather than running this massive query every single time and needing to memorize it, I can simply ask: show me all of the EC2 instances that have this exposed_internet = true flag. And we apply a similar set of logic for Google Cloud instances also.
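The path logic of that analysis job can be emulated without a graph database. This toy version (invented field names, and only ingress CIDRs on the security group, glossing over the same details the talk does) walks instance → security group → rule and sets the flag when a rule admits 0.0.0.0/0:

```python
# Toy inventory: each instance lists its security groups; each group lists
# its ingress CIDR ranges. Cartography stores these as graph nodes/edges.
SECURITY_GROUPS = {
    "sg-web": ["0.0.0.0/0"],        # open to the Internet
    "sg-internal": ["10.0.0.0/16"],  # internal traffic only
}
instances = [
    {"id": "i-frontend", "groups": ["sg-web"], "exposed_internet": False},
    {"id": "i-db", "groups": ["sg-internal"], "exposed_internet": False},
]

def mark_internet_exposed(instances, groups):
    """Analysis-job-style pass: flag instances reachable from 0.0.0.0/0."""
    for inst in instances:
        inst["exposed_internet"] = any(
            "0.0.0.0/0" in groups[g] for g in inst["groups"]
        )
    return instances

mark_internet_exposed(instances, SECURITY_GROUPS)
exposed = [i["id"] for i in instances if i["exposed_internet"]]
```

Precomputing the flag is the design point: the expensive traversal runs once per sync, and everyday questions become a cheap lookup on `exposed_internet`.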
This is an example analysis job, and we have similar things in GCP, with similar rules; you can do the same thing for an elastic load balancer, for example. I'll show quickly what that looks like in demo land. I have a question here: "Isn't this missing the network for reachability? I can have a public IP without..." Yeah, this is missing all sorts of things for this demo; I'm glossing over all kinds of details in the interest of time, so sorry if this is not entirely 100% correct. I'm just trying to blast through it. Very good observations: this is only looking at it from the perspective of rules on the EC2 security group. So in this case, what we can look at is this: we've got different accounts. Say I have this account for service ABC and this special-projects account. This account has a number of instances that we've identified as Internet-exposed, so this flag here is true, and let's say these are web-facing roles. From the previous demo we know there's VPC peering: the VPC has the subnet, and the subnet happens to be peered to this other VPC that lives in the special-projects account. Now say the special-projects account stores all of our top-secret stuff. What that means is, through the magic of this path, this web-facing instance is able to talk to the special-projects instance, even though the special-projects instance is not directly connected to the Internet. I'll leave it as an exercise to the reader to draw the relationship from this instance over to that one as part of an analysis job. So again, this is the set of problems and questions we're interested in answering, and the motivating scenarios for looking at our tool. Let's see. I'll just speak briefly on the next point; there were a lot of questions on how you can view changes over time.
As an enterprise operator, I need to see what changed about my resources, and I need to provide logs for changes to critical resources. We accomplish this through something we call drift detection. As I mentioned earlier, one limitation of our tool, a pain point we've known about for a while, is that we need to pull in all this data: the graph is huge, and it takes a while to process and sync it all. So you take one time slice, and then another; it's not very good at real time. But we can work around that through drift detection. In this particular case, say we have a known set of storage buckets that we expect to be open to the Internet. We keep that set, and then we build ourselves a query asking: which are the S3 buckets that have anonymous access = true? Every time this list deviates from our known set of expectations, we can fire an alert; in this case, the slide is demonstrating a Slack alert. We have a couple of different reporters available right now on GitHub: a Slack reporter and a JIRA ticket reporter, and it's modular enough that you can build your own reporter on top of that. So you can find out which of your assets deviate from a known set of expectations every time a graph sync runs, and it's left up to the implementer how often you want that sync to run. We're far from the only open-source security graph in town, so what really sets us apart? There are a few things. First, we're extensible: as I showed earlier, we've got intel modules for different sources, including GCP, AWS, and Okta, and you can extend these queries with analysis jobs. Like I said, multiple data sources. We're also not deployment-opinionated; we don't care whether you run...
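The drift-detection idea described above reduces to a set comparison between a baseline of expectations and the current sync's query result, with any difference handed to a reporter. A sketch under invented names (the real reporters post to Slack or file JIRA tickets):

```python
# Baseline: buckets we *expect* to allow anonymous access.
EXPECTED_PUBLIC = {"public-assets"}

def detect_drift(current_public, expected=EXPECTED_PUBLIC):
    """Compare this sync's anonymous-access buckets against the baseline.
    Returns (unexpected, missing): newly public buckets, and expected
    ones that are no longer public."""
    unexpected = current_public - expected
    missing = expected - current_public
    return unexpected, missing

def report(unexpected, missing):
    """Stand-in for a Slack/JIRA reporter: render alert lines."""
    alerts = [f"ALERT: bucket '{b}' is unexpectedly public"
              for b in sorted(unexpected)]
    alerts += [f"NOTE: expected-public bucket '{b}' not found"
               for b in sorted(missing)]
    return alerts

# Suppose the query after a sync returns these anonymous-access buckets:
unexpected, missing = detect_drift({"public-assets", "top-secret-docs"})
```

How often this fires is just how often the sync runs, which matches the talk's point that the cadence is left to the implementer.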
...vanilla on compute instances, like I am right here. This is very subjective, but I think our community sets us apart; I think we've got a pretty great, growing, fledgling community, and we hope you'll join in as well. Moving into this aspect, I think this is one of the strongest aspects of Cartography. We've been very thankful for the response we've gotten from the community over the past year, getting about 100 clones every week or so. One key milestone: on the Lyft-loves-open-source page, wow, you don't even have to scroll down for us anymore, so that was something I was immensely happy about. Selected highlights: for a brief moment we were bigger news than Lyft's IPO. Top of Hacker News, that's a lifetime achievement. This was one of the first external contributors that we did not ask for. This was one where I created an issue and a community member jumped in to help me out; thank you, Zach. And this one was the first case where a community member reviewed the code of another community member. It's getting to the point where we have more people from the community working on this project than we have Lyft employees working on it, and I want to foster that sense of community and grow it even more, because this is useful to so many more places than just Lyft. Like I said, you can join us on our open-source Slack; we have a monthly meeting, and the calendar link is there, along with minutes and video recordings of our meetings. We have users from all kinds of companies, and many more to come, hopefully. I want to end this presentation with a call to action: we need your feedback. Please look into the graph, play with it, say hi to us. We're really focused on how we can make this more useful for you.
Speaking a little bit to the roadmap: within about one month or so, we want to have runnable examples for new users, so that you don't necessarily have to install Neo4j, because that can be a little tricky. You'll be able to play with some of those exposure scenarios I showed you a little while ago without downloading things and doing a lot of install work. We're also looking at ingesting tags, so you can get all sorts of resource attribution information and know who owns what on a service. In a few months we're looking at more infrastructure improvements to the graph sync itself, such as resilience via DAGs. Right now, if the AWS sync fails, everything after it fails too; it was just simpler to let everything run serially. But there's no reason for that, because the GCP sync has no data dependency on AWS, so we should build something smarter. And as I mentioned, in six-plus months we're looking at ingesting more real-time data. This last slide has some more shameless plugs: there are some blog posts on us and some conference talks we've given. But again, thank you very much for having me, and we can open the floor for some more questions. Thank you, Alex. We've got up to five minutes for additional Q&A, and if there's anything further that requires more detail, I posted the Slack link there so we can reach out to Alex and his team. There are some more questions that came up in the chat that I missed. Let's see: are you using this in practice for SOC operations at Lyft? Yes, and I didn't speak to it in this presentation, but if you look at the RSA link I'll throw in the chat, my colleague Sasha shows how we use it for incident response: how you can find out who owns a given service, who to loop in, who the VP for that service is, and so on.
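The "resilience via DAGs" idea mentioned in the roadmap, where a failed provider sync only skips its own dependents rather than aborting everything scheduled after it, might look something like the following. The task names and sync functions here are hypothetical, invented for the sketch; they are not Cartography's actual module layout.

```python
# Sketch: each sync is a node in a dependency graph. A failure marks
# the node failed and skips only its dependents; independent branches
# (e.g. GCP, which has no data dependency on AWS) still run.

def sync_aws():
    raise RuntimeError("AWS API throttled")  # simulate a failed sync

def sync_gcp():
    return "gcp ok"

def sync_okta():
    return "okta ok"

# Each entry: (list of dependencies, task). Dependencies must be
# listed before their dependents in this simple insertion-ordered dag.
dag = {
    "aws": ([], sync_aws),
    "gcp": ([], sync_gcp),
    "okta": ([], sync_okta),
    "aws_analysis": (["aws"], lambda: "aws analysis"),
    "gcp_analysis": (["gcp"], lambda: "gcp analysis"),
}

def run(dag):
    results, failed = {}, set()
    for name, (deps, task) in dag.items():
        if any(d in failed for d in deps):
            failed.add(name)  # skip: an upstream dependency failed
            continue
        try:
            results[name] = task()
        except Exception:
            failed.add(name)
    return results, failed

results, failed = run(dag)
print(sorted(results))  # ['gcp', 'gcp_analysis', 'okta']
print(sorted(failed))   # ['aws', 'aws_analysis']
```

The key property is exactly the one described in the talk: the AWS failure takes down only the AWS branch, while the unrelated GCP and Okta syncs complete normally.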
Has the Neo4j license ever been an issue for potential users? Are there thoughts on pluggable graph DBs? I'm not the best person to ask about licensing or other graph databases; we gravitated toward Neo4j because we really like the Cypher syntax. I found it very useful because you can literally draw things out, and it reminds me of Prolog, I guess. Anyway, we really like that language. Okay. Did you explore building identities as abstract roles layered over nodes, like AWS and Okta, or is service-specific chaining the key thing? Do you mean, I guess... So say you're a cloud operator, maybe you're multi-cloud, or building out contingency plans. As you're looking at those roles, is it more important to just get the actual correlations of reality, or is there any thought of abstracting them so you could look at AWS and, you know, GCP and Azure together? I think I understand; stop me if I didn't quite get the question. What we do is that in certain cases we apply multiple labels to the same node. A compute instance is a compute instance whether it's in Azure land or in GCP, so we'll apply a GCP instance label to it, and we'll also apply a generic instance label to it. Similar for VPCs: they live in AWS, and they also live in other cloud providers. Right, so multiple... Say again? So you have both. Yeah, exactly: the generic label and the specific label. Right, right. It looks like we've got through all the questions on the backlog here. Again, if anyone wants to reach out to Alex for more, we've got the Slack link and the links he's provided. So thank you very much for your presentation, Alex. Well, thank you. Okay. With that said, I'll move on to the SIG and working group check-ins plus individual check-ins, if anyone has anything to bring up. I think it will be a bit brief today; I think we just have one update so far.
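Going back to the multi-label answer above, where the same node carries both a provider-specific label (say, GCPInstance) and a generic one (Instance) so that a single query can span clouds: in Neo4j this is done with Cypher label matching, but the idea can be sketched with a tiny in-memory store. The node ids and label names below are made up for illustration.

```python
# Sketch of multi-labeled nodes: each node carries a generic label
# plus a provider-specific one, so querying the generic label returns
# assets across clouds while the specific label narrows to one.

nodes = [
    {"id": "i-123",    "labels": {"Instance", "EC2Instance"}},
    {"id": "gcp-vm-9", "labels": {"Instance", "GCPInstance"}},
    {"id": "vpc-1",    "labels": {"Vpc", "AWSVpc"}},
]

def match(label):
    """Return ids of nodes carrying the given label, roughly what a
    Cypher query matching on that label would do."""
    return [n["id"] for n in nodes if label in n["labels"]]

print(match("Instance"))     # both clouds' compute instances
print(match("GCPInstance"))  # only the GCP one
```

This mirrors the "you have both" exchange: one query for the abstract cross-cloud view, another for the provider-specific view, against the same nodes.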
So first, the SIGs and working groups: do we have any reps from external SIGs or working groups who need to do check-ins or bring up any topics? Let's see if anyone has listed themselves as such. I don't see anyone noting themselves as representing a SIG, at least with an update here. I'm not seeing any of the usual suspects, you know, the policy folks, or Mark Underwood. I'll get to know the names. Alright, well, feel free to chime in; otherwise I'll just go to the individual updates. We have one from Cameron Cedar, if I got the name right. Cameron? Yes, I am here. How is everyone? Good, good, yourself? Good, doing well, thank you. Yeah, so I put in a suggestion there. My suggestion was to create an end-user slide deck around security. I don't know if you were able to take a look at the issue, but it's just to outline who SIG Security is in the CNCF, and also to dig deeper on security considerations for end users and the things they might be using for security in the public cloud, around Kubernetes, or whatever cloud native platform they might be using for their organization and their application delivery. It's a way to get the word out, to express some of the considerations from the SIG Security group, and to give a general understanding of what it is we're trying to accomplish. Any thoughts or ideas around that? I'm working up a draft doc at the moment to show everybody what that might look like, and then we'd go from there. A couple of things you could potentially pull in: with the changes to KubeCon, the pressure has kind of come off finalizing everything, but we do have intro and deep-dive slide decks that are underway; I don't see Brandon on to ask where those stand. So there's prior art to pull in. Love the initiative.
Then the other point I wanted to touch on, in terms of getting early feedback and validation of the content and its viability: there is a new end-user working group called contributor awareness, I'm probably butchering the name, that is focused a bit more on contributors, and that might be a good forum to present the document, get feedback, and ensure we've captured enough, with input from outside of our little group. Okay, good points, good points. What I was going to add on top of that: by pure fluke, that was actually one of the suggestions I put into the notes for today. My take was whether it would be possible to take either that deck or a subset of it and put it pretty much between the background and vision sections of the SIG Security main page, as a "how do I get started chopping wood and carrying water" for new members. Where do we best start? Hey, this is Vinay here. I actually have a very generic reference architecture for this particular topic, and I'd be happy to contribute it if that would help with material. Sure, would it be something that could be attached to the existing ticket for reference? Let me see, that's number 362. Let me take a look at it, but I'm sure it can, yes. Okay, I'll take a closer look, because it's very generic: it takes a posture on what comprehensive security for CI/CD looks like and where security can be inserted as appropriate, and I noticed that was one of the topics. Okay, thank you. Awesome, I'd love to look at it. Sure, maybe we can chat through the ticket, through the issue. Okay. With that said, it looks like we've covered the SIG and working group check-ins plus individual contributor check-ins. If anyone else wants to jump in or bring up anything on those topics,
please feel free. If not, there were two other tickets here in the agenda, and then the usual PRs requiring chair approval, non-listed items, and opening the floor. Okay, so I'll jump into one I put together myself last week: minor documentation update, number 350. Essentially it just adds a scribe role to the existing roles in our documentation on the SIG Security page. I don't imagine there'll be too much to it, maybe a paragraph or two in bullet form. So if there are no concerns, I'm happy to go ahead, create the pull request, run the draft by the team, implement any recommended edits, and away we go. I don't imagine we'll spend much time on that one. Are there any comments on this one? That's great. It'd probably be worth pinging Brandon, who has done a lot of facilitation recently, to get his input as well. Should I treat him as a critical reviewer for anything pertaining to the main GitHub page documentation, that sort of thing? For the facilitator role, definitely. So, you know, we've been looking to document most of our roles. The facilitator role is something we introduced in the last year or so; it's actually mentioned in the roles but not documented. So I really appreciate the work to get that documented and get some clarity, so we can help more folks onboard and chop wood and carry water. Thank you. All right, I'll make sure to include that for the facilitator role plus the tentative scribe role. Okay, let's see, I think there was one left on the backlog here: suggestion to define a review process for CNCF projects being considered for graduation, number 367. I'm just going to bring that up here. Hi, this is Ash. We spoke last week about formalizing a process for projects looking to graduate, so I created an issue from last week's meeting and put some thoughts into issue 367. If you have any feedback on this, it would be really appreciated.
So yeah, take a look. The TOC issue for the graduation process, the one Michelle was working on in the CNCF TOC repo, I believe got merged yesterday. So it might be worth cross-referencing what we want to do with what the TOC requires that we do. That's an issue up somewhere; I just have to find it. I believe it's issue 361... no, that's the sandbox one. There was a graduation one that got merged yesterday: PR 374. Okay, I can link to that as well; I'll look for it. We had some discussion amongst the tech leads and the chairs regarding that. Whatever process we work on should be pretty lightweight for right now, until we have a few more assessments under our belts, because we're still working through that process. I don't want anybody to be forced to comply with a process that's not actually going to work, because we've not quite gotten to that point of evaluation for graduation projects. So whatever the recommendation is, if someone is planning on drafting a PR, I think we should start lightly first, based off the lessons learned from our previous assessments. Yeah, I think that makes sense. And especially as, I think, the wording on the PR that we agreed on yesterday was that the graduation process should be relatively lightweight, because most of the issues should have been addressed earlier. Thank you. And if there were specific outstanding ones, those would be addressed at graduation, but in general graduation should be relatively lightweight. Thank you. So there do not appear to be any PRs requiring chair approval, so we can wrap things up with general discussion and opening up the floor. Anyone who wants to grab the mic, now's the time. All right, I guess 15 seconds. Long silence. I'll treat that as we're all good to go. Thank you everyone for attending, and have a great rest of your day. Thanks, that was exciting. Thank you. Thank you. Thanks, Matthew. Cheers. Thank you everyone.