Good morning, folks. Hey, Duffy. This might be a very quiet meeting. Okay, we're getting some folks in here. I was going to say, I hope it's not too quiet — I see four people, and two of them are me. No, it's fine. We're going to give folks a few more minutes to join before we get started. Hello. Hello — we're going to wait one more minute and see if we get more folks joining us. All right, let's go ahead and get started. Thank you, everyone, for joining us today. As a reminder, we all abide by the antitrust policy notice, and if you've made it today, you know where the meeting is. Thank you for joining us. Next slide. We have several TOC members here today, but we won't be making any decisions on the agenda. We're going to talk a little bit about security audits: some common issues and findings from them, how the TOC uses them, how maintainers use them, and potential improvements — or even identifying existing challenges between maintainers or TOC members when reviewing the audits for projects, in an effort to improve the overall experience for all parties involved. We have with us David and Adam from Ada Logics, who are first going to run through a little bit about their audits — if you were present at the last KubeCon, they gave a presentation on this, which is what sparked this discussion — and then run through some challenges and issues that they come across. Then we'll open it up to more of a discussion around how our maintainers are feeling about the audits, what challenges they're experiencing in that process, and what recommendations we have moving forward. All right, let's go ahead and get started. Adam, David?
I'll hand it over to you. — I'm David from Ada Logics. This started with Emily reaching out during KubeCon a few months ago, where we were giving a talk about the — I think — five to ten security audits of various CNCF projects we've done now. We've written the reports and done this quite a few times, and we gave a presentation about how the audits usually go and various aspects of them. Emily reached out after the conference with the high-level query that there are some parts of the reports that could be improved. The problem statement is that the TOC is using these reports we deliver to assess the overall security of CNCF projects, and she had some ideas about how we could improve the reports so that this decision-making becomes — I don't know if I should say easier, but a bit more qualified. The point is to help that process: not to do your work, but to give the right information so that you can make a correct assessment, and I think this is a great idea. A key point there is that we have essentially never discussed with the TOC what it is you try to extract from the security reports to inform your decision-making, so it's very useful from our side to direct the reports towards the correct audience. In general, when we write these reports there are various stakeholders: the maintainers, the CNCF who pays us, and the companies that often run these projects — and obviously the TOC is also a major stakeholder here. I don't know if I should say the companies are stakeholders as such, but indirectly they are, I guess, as they're affiliated with the maintainers. So that's the key point of the discussion.

To lay the groundwork, we're going to give a short presentation of how our audits usually go: the timeline, the technical aspects we look for, and the outputs. Usually the engagements are five weeks long, plus or minus a few, and sometimes they extend a little based on how well we synchronize with the teams during the audit — sometimes a team won't have a lot of time for the audit in certain periods, so it's a fairly flexible process in that sense. It's estimated at five weeks, and we navigate a lot during that time to ensure the proper work gets done, and we meet the maintainers — we meet the maintainers' schedule. We usually start with an introduction meeting, either on the first day or a few days before the audit starts, where we discuss logistics and communication channels and outline expectations. The maintainers can give us input — for example, focus areas or particular concerns — and any other information they feel is relevant, and we incorporate that into our audit process. Then the audit starts. There can be a lot of different tasks depending on each audit and each project, but one invariant is that we have meetings, either weekly or bi-weekly, depending on what the maintainers want and prefer. We try to engage with the maintainers on their terms, whether email, Slack, or other means — usually it's Slack. Adam, correct me if I'm wrong. The output of the engagement is a mix of things. Fundamentally it's a report — that's the deliverable we give to the CNCF — but there are also a lot of upstream code changes, upstream documentation changes, security advisories, threat models, security policies, and various other things. Those artifacts are also outputs of the audits.

We have a list of all the audits carried out, and I should say there are also other companies that do this. We generally follow a similar structure — as far as I'm aware, the reports have a lot of similarity in terms of composition, and I think there's also mutual inspiration: when one company comes out with a new idea or a new way to present things, other companies might follow along. — Right, and if I can just add about the reports: they are all public, without exception. They are meant to be shared with the community and with other people who want to continue the work or who are curious about what kind of security issues a given project faces. — That context is super important. And we do write the reports in a way where we try to have sections that are appropriate for various stakeholders. So, just to note it here: it would be great to have some form of section that is primarily targeted at the TOC. It's just about scoping out what that section should be — while making sure the other aspects don't suffer, because we shouldn't confuse the reader. Anyway, keep that in mind. — Let's move on to goals, because it sounded like you were touching on some of the goals here; happy to come back to these slides later. Cool. So, what are the goals of the various security audits? — Yeah, I can take this one, David, if you want. We typically have four to five formal goals for a security audit, and they are often very different. And that is on purpose.
Some typical goals are: perform a threat model of the project that we are auditing; do a manual audit of the code base, the documentation, and perhaps the examples that users are presented with to get started with the project. Another goal is to look at the test suite — for example, the fuzz test suite — to see if we can improve it, and to make an assessment of how well it covers the critical areas of the code. We also consider and assess the static analysis tool chain of the project, add more tools, or assess findings that the project might have skipped in their own work. So those are four very typical goals that we will have outlined before we start an audit.

We have listed some bullet points here. Threat modeling: we will look at the typical threats and attack vectors of a project and put ourselves in the position of an attacker, to see how we would assess the project and how we could damage it. The threat model is useful for pretty much all the other goals in the audit: we use it when we audit the code, when we assess the fuzzing of the project, and when we assess the static tooling. Typically we start with a threat model, and it continues throughout the audit and helps through the entirety of the five weeks. Then we may audit the code manually for bugs and security issues, considering the threat model very closely while doing that. And we check the fuzzing. We do a lot of fuzzing ourselves, so typically we will also contribute new fuzzers and add them to projects, or add fuzzing integration.
So they run continuously, and that will be a separate part of the audit as well. And if we find a severe issue, we will typically run it through the project's security disclosure policy — pretty much to battle-test it, especially if the project has never had any security advisories. We run it through the security disclosure policy so that the project goes through the process of receiving a community-based disclosure and taking it all the way through to issuing a CVE and creating a public advisory. We do that when we see a clearly severe issue, and sometimes the project will also do it itself: they receive a list of findings at the end of the audit, assess each and every one of them, and if they consider an issue to be severe, they create the CVE themselves. We have experienced both sides of that.

We also look at how a project maintains the source code: how many reviewers are there, do commits need to be signed, and so on. And we assess the release process: how are artifacts built and shared, and do they conform to industry standards in that part of project maintenance? And finally, deployment and usage. When a user deploys or consumes the software product, are they secure? Is the project insecure by default? Are there security holes that they can very easily fall into — is it easy to open up certain attack vectors, and are these well documented in the project's documentation? So that's it for the goals at a high level.

— And just to clarify, you initially said that the goals are usually different on purpose. The main point is that we debate these things with the maintainers of the project we are working with. During the initial meeting we set up an initial set of "these are the areas where we think we can improve things." We get this from the RFP, but during the initial meeting we also present a lot of these things in detail, discuss them further with the maintainers, and usually adjust where we prioritize our efforts. — If I can add, there is also an experimental element at times. We will lean on different security disciplines with the purpose of experimenting to see which ones benefit the project significantly. We have had security audits where fuzzing ended up being very important, or where the threat modeling was so important that focusing on it revealed more findings than if we had left it out. And likewise with static analysis — we have also found issues through that goal. So yeah, that's it for goals. Next slide, please.

So this is an example table of contents. It's from the Istio audit, which we did — I can't remember exactly — roughly eight months ago or so, and it's an example of how the goals are manifested in the report itself. There's a fuzzing section, there's a threat model section, there's an issues-found section, which just lists issues, and there's an assessment of SLSA, which is related to the release process and supply chain security. The second-to-last section is a review of fixes for issues from a previous audit, which is a different kind of section, because Istio had previous work done with a lot of issues found, and they needed to assess whether those were correctly fixed.
So that's sort of specific to Istio in that sense And It's fair to say that based on the table of contents that the vast majority of content itself in terms of pages are in the issues found section Which is 33 pages long significantly more than the rest of them Small highlight And so that's just to give a little bit of clarity in terms of how it manifests itself the goal and the audit and so on Next slide please And we also put a result summarized in each report and this is where we try to Encapsulate Well what the report found and what we think of the most important aspects of the audit and as we can see it's very technical in the sense that it's basically a quantifiable summary in that sense And yeah, I don't think that there's much to discuss more here, but also just to lay a bit of groundwork for further discussion A few links if you need them just to see how well I guess this is also important to say so these are links to the announcements for each audit so an important step for the CNCF is that we come out with publishable artifacts that can be used to show and distribute the work and sort of presented to the world in that sense And these are links to the announcements not to the reports and just to get an in a feeling of how the CNCF likes to take the work further Next slide please So The conclusions around sort of all of the security orders we have done so far is something along the lines of what we presented these slides we've done a lot of audits continuously and The focus of the of the orders are both to cover the technical aspects and in a sense also to improve the project's workflow processes, for example that the threat model can be an important part then looking at the release process and so on where we look at sort of what Salsa level compliance that they are at and how they can reach a higher level of compliance or so on are important workflow We have some quantifiable results and then we have listed four categories where we think Most of the 
security improvements we can dogetize them the application security that's kind of From the from the manual auditing work security automation, which is often improvements of the the CI improvements of the fussing infrastructure so on software supply chain, which is often ready to sell the compliance And then security policies which is in a sense related to both threat model as we often will And also both the threat model and also the Security disclosure process. That was what I talked about how we help and engage with containers in in establishing an appropriate security disclosure process and also how they can triage Triage issues with respect to how secure security relevant to a given issues which is kind of related to the threat model Next slide please. So moving forward, what should we do to what can we do to improve this and there are probably many areas. I think in general, my overall thought is we'd like to hear also some input from you in In terms of pros and cons of what we have been doing and then also what are the just sort of how you digest the reports and what you do with them as we haven't really We don't really know that in reality. It's all sort of guesswork from us. I would say educated guesswork, but but still it's It hasn't been made explicit and then Yeah, I think I think that's it. Yeah. Thanks, David and Adam. 
So I wanted to bring everyone together to talk about this because, in the course of performing due diligence on several projects that are seeking to move levels — particularly ones that have undergone an audit — we've seen that the security audit conducted on these projects has a lot of really great information for project maintainers. At KubeCon, when I was talking with a few maintainers that had recently undergone an audit, whether by Ada Logics or another entity, there were some expressed challenges they associated with it. One is understanding the prioritization that David alluded to earlier: how do we understand which findings are the most important to take into consideration for remediation, and which can wait, in the context of the project — because it's not going to be the same for every single project. Another: how do we fundamentally introduce more secure processes, guides, or recommendations to projects, so that they have the potential to eliminate whole classes of findings?

Justin: Can we do an end user survey to find out what end users want? End users are the other customer for these, and I don't know that we've ever asked them what they would like to see. I'd be really interested in what they say as well, because I think they're the one stakeholder that's not really represented at any point in this process at the moment.

Yep, I think that is a great call out. Duffy — you're muted. — I agree that end users are definitely not represented in the view of this. Other things that would be useful: looking at one of the reports that I've seen, in the executive summary piece, in the overall assessment, having some function around licensing review, supply chain security review, and the security incident review process. You did mention that you go through a security review.
You would open a CVE if you found one that is actually reasonably severe, but I would argue that the project should probably go through that process every time, regardless — just to validate that the project actually has a mechanism by which it can handle things like security incident response. Making those things part of the overall assessment, I think, would be really useful to the TOC, because these are specifically things that are going to come up. — Yep, that's a good point. Definitely: making sure that any of the processes described for the security of the project are fully exercised continuously, and not just a one-off that happened to work once. That's a good call out as well. Other TOC members: when you're reviewing security audits, are there things you'd like to see come out of them, or questions you have regarding the content? And for the TOC members or maintainers on the call: how has your experience been with undergoing an audit? Are there changes you'd like to see that would be beneficial for you?

One of the rather basic things — with my Prometheus hat on — is that we didn't always have auditors who were deeply familiar with Go as a language, which sometimes led to interesting results in the first round. This is more about the "how to do it" than about the result set, but it's something which has come up once or twice. — Sure; so your point is that we should have people involved who are experienced in a given language. — Right. For example, we had discussions about the pprof endpoint being enabled — surprisingly activated in the Go binary — and we were like: this is just how you do things. A highly specific example, but it's something which came up, and we didn't like it as a project — let's put it that way. Okay. — Hey folks.
So I'm the subproject lead for the Kubernetes SIG Security external audit, and we just released an audit — well, not "just" — in April of 2023. Since projects keep maturing — this was the second external audit for Kubernetes — what helped us was also a review of the first audit from 2019, and publishing that review: which issues from that audit were still open, which were closed, and, if still open, why. Just to be transparent. We published that in a blog on kubernetes.io. It was helpful, and it was a totally different crew that ran the 2019 audit versus the one published this year, so that review helped the Kubernetes project in that respect as well. Scope can also be very big — I mean, the Kubernetes project is very big — so we couldn't fit everything in scope for the audit, and there are always things we want to put next. That's something to keep in mind: the scope can't be too big, and what gets missed also kind of presets a roadmap for the next audit or audits — which parts of the project we missed and what we want to include. That helps with the lessons learned. I just wanted to make a few points. Thank you.

— So, if I understand correctly: specifying the limitations of what was done, such that a next team can pick up on those limitations. — Exactly. — That's also important; I think that's super key. It's also interesting what you mentioned with the old issues, because we did that with Istio — this is mentioned in the report — where we reviewed old issues from the old audit, and it was one of the things we wanted to do. It was also a new team on Istio, and it wasn't very explicit that "this issue was fixed by this change," so sometimes it was difficult to assess where the fix was. I guess this is related to tracking not having been done as well as it could be. And I think it's related to what Emily said, in the sense that the prioritization might not have been there: it wasn't clear which issues should be addressed first, and they might have gotten a little lost, or there might have been a little too much to digest in a meaningful manner. There's a difference between "here's 50 issues, you're welcome" and "here's 50 issues; these five are the ones to fix first, and here's the next batch." Bucketizing and categorizing them in a more structured manner could, I think, help in such cases.

One of the areas we had also talked about was the tradeoffs between going with one recommendation for a fix versus another, and trying to make those tradeoffs more transparent to maintainers. For individuals who have contributed to projects, or have done this in a past life: do you find that understanding those tradeoffs, or having them explained as a contributor to a project, is beneficial — understanding the next steps, how we take this forward, and then making a documented decision explaining why we chose to do something a particular way instead of another? We see this a lot with some projects in their feature design discussions, but not necessarily for security issues as they come up, or for security design decisions. — I think we often find that the way projects make decisions is based on what they judge to be the state of the art.
For example, at some point SLSA was heavily promoted, and projects made the decision "we should integrate various provenance features so that we can achieve this." So I think decisions are often understood by reference rather than necessarily from a first-principles type of approach — which is still great. That's one of my thoughts there. — Right, and I can add another one: performance versus security is also a very typical tradeoff. A missing security check — or adding a missing security check — can affect performance, and that's typically where it becomes very difficult for projects in the audits to come up with a quick fix for an issue. — Yep.

Another item we had talked about was the security boundary associated with some of the findings when conducting the audit on a project. In some places it's very clear what is in scope for the project and what is considered a security design issue, or just a finding against the project. In other cases it's a little ambiguous: it's unclear whether it's the project's responsibility to implement the fix, or whether it belongs to an upstream or another project they rely upon, or whether it's a configuration requirement for the adopter to take under consideration for their individual deployment environments. Have others experienced this kind of quandary and been forced to make that kind of decision? How do you justify where that boundary is, and how do you document it for the adopters and contributors who are looking at it? Justin?

— A lot of the time, in my experience, what comes out of that is that the project disputes the findings of the auditors, and it's not actually very helpful, because it just sits there being disputed and there's not necessarily a resolution. That's definitely a problem, because it just leaves these things open — there's never a resolved decision.
Often those just get left and all closed as won't-fix or something, which tends to be unhelpful, so I'd rather it got resolved at audit time: a clear policy written, and the decision clearly documented one way or the other — rather than being left until after the audit, when there's less pressure to actually come down and say one thing clearly. — Duffy, and then Richie. — Yeah, I do think these things struggle a bit because of that, but there are two parts to it. One is that this is sort of a question about secure defaults: if you're going to have a configurable element, are you choosing a more secure default, or one that's easier to configure at the time? When doing a security review, it might be nice to identify — and I think you probably do this — those places where configuration could be made more secure than whatever the default might be. And then the challenge is: do you open an issue against that, or do you just document it? Because for the most part, when people consuming a security audit from a vendor see that you're not opening an issue against a particular finding, it falls right to the bottom of the list of things to resolve, because they don't care. So I am curious, from Ada Logics' perspective, how they address that quandary — I think that's where you're going. — Yep. Adam or David, if you want to respond, go ahead, and then Richie. — Yeah, right. I have a real quick one, and then I can leave it to you, David. There are also some positive sides to a project simply disputing or refuting a security issue, in the sense that it's very clear to the project that it doesn't fall into the scope of their security model.
And that's a very positive sign as well — that it's clear to the maintainers and the security team where the boundaries are. It can be that we disagree on that, but at least the project's policy is very clear. So I like that; I think there's sometimes a positive outcome to a project rejecting a security issue for the reason that it's not in their security scope. — Yeah, and that's exactly how we essentially resolve it. I would say it's positive when they refute it, because that shows the boundary. And then what we tell them is: just document it — specify in your threat model that you do not care about this, that you do not take any responsibility for it, that it should not be exposed, and so on. And then they're good. In general, in this case, we refer to the onboard threat model, which has excellent documentation that says very clearly: "we do not care about all of this; we only care about this." It's a somewhat simple threat model, but it's a really good one for showing how opinionated they are about where the boundary is. So that's how we often direct the conversation: plenty of times we have reported issues and the project says "this isn't an issue." And then we say: well, if it's not, then just document that this is not an issue, because of this and that, and that you don't consider it in your scope. The most common example is probably when we assess security vulnerabilities that require a little bit of privilege within the system. You can either have a completely external attacker, or someone who already has a little bit of permissions — and usually when you increase the permissions a little, it's still a security issue within some threat model, but a lot of maintainers might not consider it relevant: their position is that anyone who has crossed that level of permissions within the system should be trusted. That's where we then clarify these things. We usually make it explicit: either you should document it, or we should fix it. — Right, and if I can add: once it is completely crystal clear for the project, then the project can independently start to discuss and work on whether it's the correct threat model and the correct approach — but at least it is crystal clear what the security boundary is. — Thanks, Adam. Yeah — Richie, then one last one. — Okay, one last thing: I guess the key thing is we tell them to be opinionated. They should have an opinion on where the boundary is, and we leave that to them — they can make their own decision on what they want, but they should have a clear opinion. — Okay, thanks. Richie: Tying in with Justin's point, and also with what has just been said — again, speaking from experience: even if a project clearly documents its security boundaries, the auditors might not always agree with those boundaries. In the end, this leads to a lot of discussions — sometimes, again, as with the pprof stuff — and also just misunderstandings on the auditor side. The thing we came away with was making certain that everything is clearly documented in the report: disagreements and everything. But taking a step back, maybe instead of trying to lawyer precise boundaries and be super explicit up front, we should have a bit of an arbitration mechanism — a third pair of eyes, from neither the project nor the auditors, who look over the thing and give input from that level, to make certain that neither the project nor the reviewers are pushing too hard.
I think that's a good recommendation will have challenges finding individuals within the community that are capable of serving that capacity. The thing that comes to mind is tank securities security buddy program that they've run, or potentially a modification of one of their reviews that they have just to side saddle and review the draft report and understand kind of where some of the conflict or the challenges are and where some of the potential trade off or value add to the project because there could potentially be occasions where the recommendation that's coming from the auditors might be outside of what the project beginner's initial scope was but would be beneficial from an adopter's perspective or an end user's perspective if the project were to take on that remediation for instance. What other thoughts and experiences do folks have that we could as provide with a to logic to potentially improve those reports either as a to see member consuming them I mean I've shared a lot of my observations and my experience with reviewing these reports but I'm keen to hear from others because I know that my opinions are just one sided. I'm in a logic so you have any other questions for us. If there was anything that we didn't discuss here on the call. I guess a bit of a will be nice to have some. So I have one specific question and then a comment. Just mentioned something with NDAs or so the initial question 20 minutes ago, I didn't quite get that. Is that repeated perhaps for my notes? Didn't say NDA so I'm just trying to think what I did say. It was related to asking the project sorry the vendors about their experiences with asking the end users about what they want because they're the people not being represented at the moment in this discussion. That makes a lot of sense now. Thank you. My question would be. Before what we'd like to take some initiatives to adjust these reports and then we are coming up with reports in the in the immediate almost. 
And it makes sense to do that, I think. I don't know what you prefer, but what we could do is include what we think makes sense, send an email to Emily and the rest highlighting that we did this for this specific purpose, and ask whether it makes sense based on comments. I guess we could have a little bit of continuous, lightweight communication to adjust as we move along, if people are happy with that way of improving this. Yeah, I think that would be good; sending a summary of some of the changes that you're looking to make in an upcoming audit to the TOC public mailing list would be fantastic. Yes, and Amy, if we could look at getting an end user survey pulled together regarding the usability of audits by end users, and what features they'd like to see added or what clarity improvements would help their consumption, I think that would be beneficial, and then we can share that with the auditors. Maybe. The reason I'm saying maybe is because that's not my area of expertise; that's Taylor Dolezal's decision, so I'll bring it up with Taylor and see if there's something either currently in the works or already planned. Okay, awesome, thank you. No promises, but I see where we're going with this and I like the idea; you know, not stepping on toes here. Yep. I think there are potentially other options with end users as well, like inviting them to be part of the audit process, some of the customers of the project, for example. There are potentially other things we could do to involve them too. I just wanted to highlight that usually audits are quite condensed; there's a lot of work in a relatively short period of time. So it's not as if there's scope for introducing a lot more, as all the time is already allocated in a sense. We should also think about that.
Yeah, introducing new stuff might push out other stuff and even confuse the user more. Just a small note I thought about as we were going. Yeah, definitely, that makes sense, and if they're not available to be involved on the right time scale it's not useful either. We can definitely talk to Taylor about that end user survey, if he's willing to put it together, or even about collecting some concerns from adopters ahead of an audit, so that the auditors understand where the adopters want some focus to occur, or can at least address a small portion of it within the report for different projects. There's something in chat; somebody's talking about pulling security concerns as part of the end user review of a project. Can you speak to that, Duffy? Yeah, I was saying that part of the review of a project is that we'll interview people who are using the project and get their thoughts about it, and it may be useful at that time to poll for security concerns that the consumers of the project have, and feed that back into the loop. I'm not sure exactly how we could make that part of the process, but it seems like it would be useful. I think very useful, not only for auditors but even for the maintainers themselves. Especially useful as well would be information about how the project is being deployed and used: which environments, how open it is to the world, etc. It's not often, but it happens that we ask the projects what some typical deployment patterns are for a given project, and it's not super clear even to the maintainers. So that would be useful information for sure. Anything else? I think one last comment on this was suggested; it ties back to the security boundary, as that might be used for establishing where the boundaries should be.
If a lot of customers or users of the product come out and say "we use it this way," but the maintainers don't actually consider that usage something that should be secure, using it that way might have impact. As an anecdote, we have done a security audit where an issue was reported by another team a year before we did our audit, and we reported the same issue to the project. In that span of one year, the same issue had become a very severe security issue because the use case had changed: suddenly, data read from the local file system could no longer be considered trusted. So that's something that will help several parties. That's a good call-out. We've talked about highlighting the security features and functionality of different portions of the project so that they can be watched more closely when pull requests are reviewed by maintainers for inclusion, and so that the impact of those changes is understood. So that's a good call-out to have. Well, I would like to thank Ada Logics for their time and for talking through how they conduct an audit, and everyone for sharing their experiences. I think overall the discussion and some of the recommendations that have come out of it will greatly increase the usability of the audits for a variety of interested parties: maintainers, TOC members, and potentially end users and adopters as well. So thanks, everyone; we really appreciate your time today. Have a wonderful rest of your day. Thank you, everyone.