Hello, can anyone hear me? Yeah, hi, good morning. OK, great. Just wanted to make sure my audio wasn't messed up. OK, let me put the link in. Hi, I've dialed in. I'm once again just having audio problems. So. Oh, good. I don't see Matthew on yet, so I'm going to make a new meeting section in the doc. Give me one second. All right, I've created the notes for today's meeting. I'm going to put the link in the chat. I guess Matthew isn't here today, so I'll facilitate.

OK, so if anyone would like to scribe, it would be good if we can get at least one. So today is going to be a working session, and next week is going to be a presentation. We're having the folks from Parsec do a presentation. They are submitting their project for Sandbox, and the project revolves around an abstraction layer for hardware security: devices, attestation, TPMs, stuff like that. So that will be next week. Today, it seems like there's only one thing on the agenda, so we're going to do check-ins. And then if you have any other things that you want to discuss, please put them in the meeting notes. So, no updates. Seems like we have no updates today. OK, great. If there's anyone that can help scribe, that would be great. All right, I think we're still looking for a scribe. If there's anyone that can volunteer, that would be really helpful.

I'll give it a try, but my audio quality is not great.

OK. All right, thanks so much again, Ash, as usual. And Justin, thanks. OK. So it looks like there are no updates, so we're going to do check-ins here: check-ins from the other SIGs, the policy workgroup, the NIST workgroups, and so on. Mark, anything from your side?

Nope. I'll put out the call again for anybody interested in analytics as a service and standards around that to support cybersecurity. We meet every other week on Tuesdays, so ping me if you're interested. It's a standing offer until we get this thing launched later.

Gotcha. You mentioned at some point that there'd be something you guys could present to the group.

Yeah, this is a professor at Indiana University. When they went virtual, he had to go underground playing catch-up. So, Justin, I don't know if that happened to you, but he got too busy at that point. So that's on hold.

OK, whenever that's ready again. Yeah, thanks for remembering, though. All right. So the only issue that we have today, and this is something that I brought up.

Sorry, Brandon, one quick update: we did have our policy workgroup meeting this morning at 8. That's an every-other-week meeting, and we posted the notes from the quick call, if anyone's interested. We were just talking about a policy violation CRD for Kubernetes.

Cool. By the way, is there any chance, I know we haven't done an update with the policy group in a while. Is there a way you could do a couple, maybe 10, 15 minutes, at one of the next working sessions, to just talk a bit about what's new over there, so we can at least keep up to date on that?

Yeah, definitely. I think we're going to post an update to the CRD proposal, probably in two weeks. And I think that would be a good time to present to this group, when we have that proposal ready.

OK, great. Can you also add a link to anything from your meeting? You mentioned some reference material or something like that.

I left a spot in my scribe notes for a link, in case people want to go and follow up. And I'm sorry, I didn't catch your name either.
I've called in, so I can't see who's talking. Can you say your name?

Oh, this is Robert Ficalia.

Oh, OK, great. Yeah, if you can just add a link below for people that want to look at it, that might be helpful.

Absolutely, will do.

All right, thanks, Robert. OK, so one of the issues that I started working on again is the security landscape, iteration two. This is something that Justin Cappos and I started working on, and we made a lot of progress on it. For those that were not around the first time we talked about this, the main motivation was that we found that our general landscape wasn't really that useful in terms of consumption. It was just a list of categories and a list of projects. It didn't really give a lot of information on how to use it. If you want to take a look, this is the current landscape. We created categories and said, OK, here are some of the different technologies within the categories, and so on. But a lot of the time, some of these things were, well, we had a huge discussion on this, which is that identity and access control isn't really a specific technology on its own. It's integrated into multiple technologies. So is it a category on its own in the landscape? Or should it be part of, basically, a broad category that spans across every technology?

So a while back, Justin and I started working on this. We put together a first cut of how we wanted to see the security landscape. The thought was that we would break it down into how to use cloud native security based on processes. One of the first things that we tackled was application security: how do you create and deploy an application in cloud native securely? The idea is that there would be a process. So we wrote down, basically, here are the steps by which a developer would deploy an application: you write the code, you commit it to GitHub, and so on. And the idea is that at every step of the way, you would run into several threats, and we want to map these threats onto the possible preventions or mitigations. That way, a developer can come in, take a look at how their process maps onto the landscape we've provided, and then look at the specific technologies they can use. On the micro level, the idea is that we will be able to provide the details of the threats, referencing the technologies and projects in CNCF that help to mitigate them. And on the macro level, we want to be able to provide a process that people can follow, which they can maybe bring to their managers or executives to say, OK, here is a model that we can follow. So the details are just one aspect of it.

One thing that we talked about quite a bit was being able to have a bird's-eye view on this, so we created the mock-up. And the idea here is, if this loads, OK, let me try this later. The idea is that we want to create an overview of the various processes that we have in regard to cloud native. Then, if you're interested in, in this case, building a cloud native app securely, we can look at this process. The idea is that this will be an exploratory interface: if you click on one of these nodes over here, it gives you a more in-depth view of here are the threats, and here's how you mitigate them. So this is where we are currently in the process. We've done the mock-up.
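To make the step-to-threat mapping concrete, here is a minimal sketch of one way the landscape data could be modeled; all of the step names, threats, and mitigations below are illustrative assumptions, not content from the actual proposal:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    description: str
    mitigations: list  # names of CNCF projects or technology categories

@dataclass
class ProcessStep:
    name: str
    threats: list = field(default_factory=list)

# Hypothetical slice of a "build a cloud native app securely" process.
build_app = [
    ProcessStep("Write code", threats=[
        Threat("Vulnerable dependency",
               "A third-party library carries a known CVE.",
               mitigations=["dependency scanning", "SBOM tooling"]),
    ]),
    ProcessStep("Commit to GitHub", threats=[
        Threat("Leaked credentials",
               "Secrets are committed to the repository.",
               mitigations=["secret scanning", "pre-commit hooks"]),
    ]),
]

# A developer walks their own process against the map and reads off
# the technologies relevant to each step.
for step in build_app:
    for threat in step.threats:
        print(f"{step.name}: {threat.name} -> {', '.join(threat.mitigations)}")
```

Each node in the mock-up's exploratory interface would essentially render one ProcessStep, with the drill-down view showing its threats and mitigations.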
And I think the next step here is to create an example of this, and some kind of interactive web page, so that we can try it out. And then, on top of that, to start building the content for the different types of processes. So the next step for us, I think, is to actually create a website or a mock-up. I have some HTML to do this. I'm going to try and take a stab at it myself, but I am not a web person. So if there's anyone that would like to work on this as well, do leave a comment here, whether you have expertise in this or whether you're just interested in creating more content and giving feedback. Anything would be great.

And I think the other issue that we can talk about here is the one on Keycloak. I guess, if we have a review team ready for this, when do you see the window for doing the review being?

Anytime. I can get the necessary people from the team involved shortly.

OK. So Justin, do you think that this is the next project in the pipeline for assessments?

Yeah, I mean, we have a couple of assessments that are sort of stalling, but there's no active assessment going on right now. And I would like to ask Ash to lead the assessment. Are you comfortable with that, Ash?

Yeah, sure. I just need some hand-holding at the beginning, but yeah, I'll be happy to lead this one.

OK, great. So what I'll do is note that in the documentation here. And really, then the question is whether you feel like you're ready to go and start. You've got three other reviewers here that have volunteered to participate. Have any of the other reviewers done a review before, or shadowed, or anything like that?

This would be my first time.

OK, that's fine. If someone else has done a review and would be willing to participate, it's not too late to be added in. From what I recall here, all of the reviewers have already gone and put their conflict statements in, so that part's been done. And now I think we're just waiting on the chairs to sign off on reviewer conflicts for the top part to be done. So I'll go ahead and start this process a little bit, like I'll make the Slack channel and stuff like that. And Ash, you should probably start taking a look at the document to do the, I guess it's technically the clarifying-questions phase, or something, we changed the name slightly, but just to start that part of the process.

OK, cool. Awesome. All right, so I think that's all that was on the agenda for today. Any other discussion points?

Just a quick question about that. Do we need the approval from the chairs before we start looking at it, or can we start with the question phase?

You can go ahead and start. We just don't want that to be a blocking thing; they just need to do it sooner rather than later.

And I assume it will be on Slack? Is that more comfortable, once you create the channel?

Yeah, I try to monitor Slack, and I will get the project leads to join as well. So either Slack or just via email is fine, but we'll try to monitor Slack.

Yeah, perfect. Do you have access to add everyone? I don't know if you have edit access to change the issue at the top. I think you probably do. But if so, can you add in, so I just added you in as a project security lead, can you add everybody else in who should be contacted? Or I guess I can just invite you to the Slack channel and then you can add whomever. But okay, sorry. Yeah, go ahead.
Yeah, that sounds fine. I can add people. I think I can add Krishna and the rest of the reviewers on the Slack channel, and we can start a conversation there. Does that work?

Yeah, I think that sounds good.

Okay, I'll ensure that the necessary people get added there soon. Is the channel already created, or do we need some other privileges to create one?

You don't need privileges to create it. I just created it, and I'm adding people now. Okay, all right. I'm struggling with the "what is someone's real name given their GitHub handle" problem, but I'm getting there.

All right, sounds good. Okay, any other issues that people want to talk about, or just ideas or thoughts, anything that you're working on that you would like to share? Anything?

Just a short question for the reviewers that are doing some of the reviews. Do we call out assurance tasks? For example, if somebody says we have feature X, and the reviewer kicks the tires on multi-factor or something and says, okay, they did that. Is there any call-out for assurance tasks to automate in the CI/CD pipeline, to assure that the features are still working after future revisions or deployments in new environments?

So let me see if I understand this correctly. We are talking about the security assessment, right? Okay. We do review the aspects of having a process, and right now our benchmark is the CII badging system. I don't think we go into specific details, like whether something that is said to be there actually isn't there. I think we generally trust that the project does what it says. If not, the scope would just be too big.

No argument with that. I just want to offer the use case of what we've run into in big enterprises. There are potential conflicts with other tools when you deploy these things in the environment, and features that worked when they were being tested standalone stop working. So having these secondary, or subsequent, assurance steps available ends up helping you untangle some of these conflicts that occur. In fact, I don't have any official statistics on it, but as many as a quarter of the issues that arise in a major security shop like ours, in a Fortune 500 company, might be associated with version-to-version conflicts, or apparent conflicts between tools, which interrupt the telemetry stream. So, cloud native is a forward-leaning organization; it's something to think about. If projects come through with those kinds of features, it would be a plus, if not a necessary feature.

I would add that I never looked at the versioning conflicts that could occur, or the interactions, when I looked at OPA and Falco. But Brandon, just a comment on what you said: I did trace back what was in the CII badging documentation, and Ash can attest to this. I did actually go through and at least spot-check that, if they said they had a notification process, there was some evidence that the process had actually been demonstrated in their change log or in their patch releases or whatnot. But that's about the level. I consider that fairly superficial, but at least as a first layer, I tried to scratch the surface of everything presented with CII evidence.

Yeah, understood.
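As a concrete illustration of the kind of assurance task being described, here is a minimal sketch of a post-deployment check that a CI/CD pipeline could run on every revision; the endpoint, payload, and expected behavior are all hypothetical:

```python
import requests

# Hypothetical deployment under test; a real pipeline would inject this
# from the environment it is verifying.
BASE_URL = "https://app.example.internal/api"

def test_mfa_still_enforced():
    """Re-check that multi-factor auth works after a new deployment.

    A password-only login must be rejected or challenged for a second
    factor; a plain session token would mean the feature regressed.
    """
    resp = requests.post(
        f"{BASE_URL}/login",
        json={"username": "assurance-bot", "password": "not-a-real-secret"},
        timeout=10,
    )
    assert resp.status_code == 401 or resp.json().get("mfa_required") is True, \
        "MFA enforcement appears to have regressed in this environment"
```

Run under pytest on every deploy, a failing check like this notifies the developer immediately, which is the subsequent assurance step being argued for.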
Test automation is a problem in software engineering writ large, not just in security, but sometimes we have to lead instead of follow. Is there some kind of framework, or particular projects that do this well, that we can have as a recommendation?

Yeah, that's a subject of considerable debate. You know, the DevOps community wants you to build it into the pipeline with an embedded test tool like Cucumber or something that looks at the results from the tool and says, okay, it's still working, and if it fails, it automates notifications to the developer. But other people think that's not sufficient for complex environments where you're running multiple tools, because the CI/CD pipeline is necessarily a pristine one, with a lot of segregated namespaces and test data and so on, and that's not the environment it gets deployed into. So I don't know that there's a consensus around doing that, and I think we're consigned to sort of a checkbox kind of approach. But, you know, often just telling the developers they need to do this changes their behavior, because they think, oh gee, I've got to stand up a dev environment now that provides telemetry to do automated testing feedback. And that changes the way they code, in an ideal circumstance. But yeah, I don't think there's a standard around this yet that anybody has been satisfied with.

That sounds like... And I can... Go ahead, go ahead.

Oh, one more data point, having had this conversation yesterday with a fairly large organization. They're trying to figure this out as well. At that scale, you know, large cloud provider level, there are discussions around how you trace CI/CD execution, specifically for dev and QA environments pre-production, to your policy framework and your risk assessment framework, not even scratching down to the level of tracing controls for all of those risks. It's bleeding edge. People are thinking about it, but no one has a defined or even standard framework or tooling for it.

This sounds like it could be an addition to the security landscape application creation and development process that Justin Cappos and I are looking at.

Yeah, I think I could share, I could do a short deck on this at some future meeting, if you like, to stand up the approaches that we've seen and some that we've tried. Again, I don't think the result is super compelling, but we have seen, for example, embedding security people in the Agile teams to get the assurance tasks built into the sprints. That works for the teams that operate that way, but it also depends on the kind of security feature it is; some of them are just not amenable to that. Also, there's the sort of high-privileged-access problem: what level of access, in an RBAC kind of framework, do you need to have to run the test, and is that compatible with the one you find yourself running in production? And if not, how does that look? Then there's the data problem of getting representative data to exercise the thing. That's similar to the one you run into with dynamic testing; people would say, why doesn't CNCF do dynamic tests with commercial tools to kick the tires on the products? And I don't know if you do that or not, but that's hard to do too, even if you have pretty good tooling around it.

So I just wanted to comment on that. This was something that I brought up when we did the Harbor review.
And I had suggested that, you know, because a lot of the artifacts are containers and images, et cetera, at least one of the, I don't know, considerations, if you will, should have been a complete repo scan of all the images, et cetera. And the response I got was, we don't want to be too prescriptive from that standpoint. I'm not saying that's right or wrong, but the response given was, we don't want to be too prescriptive, we don't want that to be a gate. As well as the fact that, in the case of Harbor, they had actually done security testing, pen testing, two or three times, from different organizations, at different points in the whole process. Personally, I'm all for being a little bit more prescriptive, to say, hey, as security folks, can we have the opportunity, and should we start instilling these kinds of capabilities? And I think, Mark, to your point, some tools exist; it's very low-hanging fruit, and we have the opportunity to define that and bring it in as we perform these security assessments. But at the same time, I think, Dan, and I'm paraphrasing, I'm probably going to butcher it, I forget his exact verbiage, but it's there in the documentation: we just want to perform the assessment; we don't want to be gatekeepers. I think that's the overarching position, I believe.

I would like to chime in here a sec. I think there's a real danger or risk if we start to go too far afield there. The concern is that most of us don't do pen testing and these detailed security audits regularly. And for the firms and groups that do this well, it's obviously more than just running tools, reporting back, and looking at real low-hanging fruit. Harbor's been through something like three different real security audits, including one by Cure53, which was very detailed. So I feel like starting to do things that have us run tools on code bases starts to make us look like a really bad audit. And I think we're better off looking like a really good assessment than a really bad audit. So if we were going to build that capacity, I think we would need to get Trail of Bits or Cure53 or some other group like that to partner with us, and then do some kind of combined assessment-audit. But I definitely don't feel like, just to use an example here, I kind of just tapped Ash on the shoulder and said, hey, please lead this assessment, and I have some degree of comfort with that because of the great job he did on the other side of this, on the recipient side, for the OPA thing. The other reviewers on the assessment I don't really know; I don't think I've interacted with them in any great detail before. And so for me to just say, oh yeah, now you guys go do what's basically a lightweight audit, is, I think, scary. It's already mildly scary to do the assessment part like this, but there's some degree of comfort in having worked with Ash. The further we go in this direction, though, the harder it's going to be to get qualified people, and the worse our work product will be.

Actually, I agree. The key words in what you mentioned, for me, were audit versus assessment. An audit is far greater in depth and obviously provides far more introspection than an assessment.
And given that these are assessments, I think what we are performing right now is suitable, at least, correct me if I'm wrong. It does seem like this version-testing stuff is also questionable, whether it's on the borderline of the project's responsibility or that of whatever you're integrating with, because usually the projects can be used in more than one way. And I don't know whether it's really on the onus of the project itself to maintain all integration compatibility.

Yeah, I would say I look at the assessments as prospective, whereas audits are retrospective. So if in the assessment process there was a consensus that interactions and version control and all of that were a key element of the risk model for that particular project, and I don't have a hypothetical where that may be true or false, but if for whatever reason the consensus was that it should be done, I don't think that the assessment should, to your point, Justin, do that work with tools and produce the specific output. I think the assessment recommendation would be that it should be done by the project at some future time, to address that risk.

Yeah, these are all good points. And some of this, I think, is nudging, not auditing. I think there's a useful distinction to be made here between software engineering practices and the extent to which they become ubiquitous. In our organization, and I'm in a fairly narrow world of dealing with DevOps adoption and standardization, which maybe doesn't reflect the whole world very well, the Jenkins and Jenkins-like model for code development is so ubiquitous outside of security that to not have that built into part of the assessment seems like a miss.

Well, we do have a self-assessment as part of it, to make sure there's a certain level of surface-level checking. I'm not sure what you would mean in that case. Like, in the self-assessment, would we have an additional question about the types of integration tests that you have? I don't know what we could ask as a broad question. And the other thing is, quite a few of the things that we have around process come from the CII stuff, and I think it would be worth talking to them about potentially adding things. I think the CII badges have been very helpful in terms of looking at maturity of process in projects. And encouraging people to go along that path, which we already have some requirements around in CNCF, and which we've talked to projects about, I think is a good way of approaching it.

Yeah, well, maybe I'm sorry I brought it up this morning. You know, the easy example to use is encryption, because I'm dealing in PCI settings, where we want to make sure that the encryption still works at the other end. So you can check it at a point in time, but you can also have it checked, using an automated task, to see whether the encryption actually occurred. That's got a lot of value beyond the claim that there's encryption happening in the product, in a certain path where there's data in motion, or even for data at rest as well. So it probably depends on the use case, and for a product that's as mature as Harbor, which has already been through a lot of testing, it seems kind of silly.
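For the data-in-motion example, the automated task can be as small as this sketch, which re-verifies on each run that the endpoint actually negotiates modern TLS rather than trusting the point-in-time claim; the host is a placeholder:

```python
import socket
import ssl

# Placeholder endpoint; a pipeline would point this at the path under test.
HOST, PORT = "example.com", 443

def check_encryption_in_transit(host: str, port: int) -> None:
    """Fail loudly if the endpoint does not negotiate strong TLS."""
    context = ssl.create_default_context()  # also validates the cert chain
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()
            assert version in ("TLSv1.2", "TLSv1.3"), f"weak protocol: {version}"
            print(f"{host}:{port} is encrypted with {version}")

check_encryption_in_transit(HOST, PORT)
```

A point-in-time check like this is cheap to automate; the open question in the discussion is whether assessments should recommend running it on every revision.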
But on the other hand, if you look at a mature product like Prometheus, which is embedded in a lot of other products, with a lot of dependency on the robustness of certain features of that tool, I would feel safer if I had a pipeline that was doing builds and doing assurance testing on the security features of Prometheus at the same time.

I think the suggestions are good, and it seems like for specific cases like you talked about, like for crypto, there are certifications that can be done, right? And I think that's generally how these are enforced: there's a specification, and then you can get certified by somebody. I think this would be good scope for a white paper, or as part of the recommendations. But it can definitely be kept in mind for assessments: if a reviewer thinks that, okay, this is a project which is used at a certain production-level standard, we can make the recommendation to the CNCF that perhaps this project should, you know, spend some money and pursue a certain level of certification. But it seems, like I said, very case-by-case dependent.

Yeah, well, and just to extend your hypothetical, and this may be germane to the Parsec discussion, in hardware environments, especially certification for payment and whatnot, that is exactly what happens: you have to recertify if your firmware changes or revs. And so that might be a special case where version control is far more relevant to the risk model. It doesn't really fit into your Jenkins software CI/CD case, Mark, but that may be a case where the assessment says, you really ought to be retesting or recertifying on every version rev, or whatnot. But again, I would say that the assessment should make that recommendation based on a particular risk, not necessarily perform that certification.

Right, understood. Why don't we put this one in, too? I agree, that's not a bad idea. Yeah, I see the value in this. I think maybe we can add it as a question in the self-assessment: is the project at a large enough scale that it requires a certain level of certification? You know, the ecosystem is also evolving with a lot of dependencies, and for those of us that are adopting some of these tools, directly or indirectly through our own tools, that's a cause for quite some insecurity or anxiety.

You mean you want to sleep well at night?

Yeah, there's that. I don't know why I joined this field. It seems orthogonal to good sleep.

All right, great. Thanks for bringing this up. This was a good discussion. Yeah, thanks, Brandon. All right. So before we close out, we have Justin Cormack. Do you want to give some comments or announcements?

Yeah, I just wanted to say that the public comment period for the SPIFFE/SPIRE incubation is now open, so please comment on the mailing list. There are actually, accidentally, two threads; either of them will do. But really, thanks to everyone who worked on the assessment and the other due diligence work in this group for SPIFFE/SPIRE. It was really helpful and thorough, and that was really, really good work. Thanks very much.

So I noticed on the thread before, and maybe I misinterpreted this, but there were a lot of people that were just sort of saying plus one to a lot of the threads without really adding any value.
And I think someone at some point was like, stop just saying plus one, right? Is it valuable for us to weigh in and say, yeah, we've really done a thorough look at this and it's a really strong security project? Is that useful to say in that thread? Because I've been kind of loath to just weigh in and say that about SPIFFE/SPIRE, or to comment on Harbor or things like that, because of it.

Yeah, I think a comment that isn't just a plus one, that's substantive and in real sentences, is valuable. The public comment periods recently, on the actual main list, have been totally quiet; for the one for Helm, no one said anything at all, which was kind of weird. So I think that, yeah, supportive comments that actually have substance are great, or obviously unsupportive comments, if people actually have an issue that's not been addressed at this point. But yeah, I think it's good to comment.

So, a question about the public comment stuff. My organization's not a CNCF member, so I'm sort of a lurker in that respect. This is an open meeting, but is that comment period truly public?

Yes, that's truly public. Yeah. I mean, from my point of view, I think that SPIFFE/SPIRE has done everything that you could expect a sandbox project to do. They've got integrated in lots of different places, and they're having the sort of conversations with other projects that we'd like to encourage. So I think it's very positive for them to move into incubation.

I agree, but I just want to point out, I don't think they've actually completed their assessment, have they, the assessment process?

So, they've completed the assessment process. The only step that we don't have the check mark on is the optional presentation to the TOC, which I don't think we've got the request for, and since they've done that presentation before, I don't think that request is coming in.

I see. And also, somehow the sign-off by the chairs hasn't happened. So should I move this to completed? Anyway, I'll flag the chairs and we'll see.

Yeah, sounds good. But in terms of content and everything else, everything is set.

Okay, so if we don't have any other topics, then we can wrap up for today, and we will have the Parsec presentation next week. That's good, be safe everybody. All right, thank you everyone. Thank you.