updates. So, security, I believe you are first. Who do we have? Oh, I didn't expect that — I'd have a little bit more time to wake myself up. No, we're going to move faster. Thanks, Liz. Yeah, so we have 56 members now, three more since the last time we gave an update, and now 40-plus affiliations. I stopped counting the number of affiliations — it's too hard. So, Cloud Native Security Day: thanks to Michael, Amy, and Emily, it's come together really nicely, and we have 150-plus people registered, the talks are full, and we published the schedule. We do want to see if we could encourage more diversity by giving out more diversity passes for attendees, so Sarah and I are working on that, and we'll circle back with any asks to the TOC. Governance: we've finalized the definition of the roles. We've put together tech lead, project lead, and assessment owner roles for some of the assessments, and I'd highly appreciate it if people could go in and chime in on any of that. We're almost done with the assessment for OPA — I think we're actually presenting it later today. We are presenting it later today, thanks, Sarah. That's going to be fun. So, we did learn quite a bit in that OPA assessment, and any input on where we can improve the process or where more clarity is required would be super helpful from this group. Policy working group: we've already folded that into the security SIG. There's more work going on in the policy working group — there's a proposal for formal verification happening there, and I'd encourage people to go take a look at it. It's trying to use a single language to define the security posture for the entire infrastructure, so it would be useful for people that care about distributed systems — which I think most people here do — to take a look, chime in, and help. We do have assessment priorities that we've published.
We sort of agreed on the criteria and conditions with Joe's to-do list. If there is anything that needs to change in how we think through the criteria for picking up an assessment, we'd be open to it, but right now we've pretty much finalized that, and we've put together the list of things that we are going to be assessing. The second ask is guidance on who the audience for the white paper is. I'm starting that process with the help of the TOC liaison, so any input there would be super helpful — you could either chime in here, ping me on Slack, or, best of all, chime in on the issue in the SIG Security repo. That's the update from SIG Security. Thanks. All right. Next we have storage. Erin and Alex are both on, but I'll be going over the review. And Quinton is on as well — apologies, Quinton. So, right now we're reviewing Dragonfly. We're looking to engage the tech leads from the storage SIG as well to help with the project's review, making sure that we're looking at the process of scaling going forward and how to best utilize all the team members. We completed Longhorn and ChubaoFS; Longhorn we're going to talk about later today on this call. Ongoing, we continue to update the landscape white paper along with the database updates, and we're also documenting different use cases — you know, what is commonly available — and then we're looking to put in some metrics around performance and benchmarking. And then the next steps, as I mentioned: we're defining a process for reviewing these projects. Since the current review criteria for sandbox, incubation, and graduation were based on a non-SIG process, we recognize that we need to work with the TOC in a way that makes sense to provide recommendations for projects, so that we're not doing the due diligence twice. So, we've started a very rough draft of this process; the link is on this slide.
And we'd love to talk about that in more detail, perhaps in the private TOC call, to figure out how we best go about that for all the SIGs. But this is our first run-through on that. Are there any questions? Florent? I think that process document looks like we need permission to access, so you might get a few requests for that. Okay. So, SIG App Delivery — do we have Brian? Brian's here. Yay. Yes. So, we're still ramping up and trying to get up to speed, but I wanted to call out a few items that we are touching right now. There is this cloud native app delivery dictionary that Harry from Alibaba started. What it's trying to do is bring a standard set of words — or maybe a consensus around a set of words — that we use to describe cloud native terms with respect to SIG App Delivery. That document is up for review right now, and probably in about a week and a half we're going to start really pressing people to get the words in so we can turn it into a deliverable. The next item is creating an application definition document. Really what we want to do there is start thinking, from an abstract point of view, about what it takes to describe an application, and get that down into words too. There's no link because the document doesn't exist yet; it will over the next week. And the final item is that we are working out logistics for KubeCon. We're going to have two sessions, and it's pretty helpful that I can actually affect that — we're making sure that we have two interactive sessions to introduce the new group. And as far as notes: Quinton brought up last week that the second and fourth Tuesday of the month was a little contentious at 11 a.m. Eastern time for meetings. So, we're going to move to the first and third Tuesday at the same time, because looking at the calendar, it's a lot better. We're working with Amy to get that taken care of. And then also, this is something we need help with.
We have a pull request to update the SIG App Delivery repo so that myself, Alice, and Harry can access it, and that PR has been sitting out there for a little bit. So, we just need someone to — we will get it fixed, I already know that. I was just bringing it up; Amy and I did talk about that before. Shame. Shame. What did you say? Shame. Your name's up there. Come on, hurry up. So, that's about it. Really, what's going to happen now is that Harry, Alice, and I have a cadence and can start moving on to more complex things. And something I didn't put on this list: there's no way we can do this all ourselves, so discussion around tech leads for some of the things that we're trying to do will be coming up, and we will definitely be discussing that on our next call. So, that's it for me. Thank you, Brian. And now the latest SIG to join the ranks: the SIG now known as Runtime. Hi, yes. Sorry about the very busy slide there, but basically just a public service announcement: we have a draft charter. Quite a few people have been through it, and I think it's getting to the final stage now, so there's a little bit of time left for chiming in if you would like to. We've provisionally changed the name of the SIG from Core to Runtime. There were some pretty valid objections to the name Core; Runtime was, I think, the best we could come up with, but probably still not perfect. So, if anyone has any better ideas, please feel free to contribute. And if you would like to get involved further in the SIG, please reach out to myself, Brian, or Brendan, who are the TOC liaisons for this SIG. The scope is everything to do with, you know, execution — so, Kubernetes-type things: workload execution, management systems, components, interfaces, general orchestration, autoscaling. I'm not going to read through the whole slide. But also specialized architectures of these things.
So, you know, for example, container orchestration systems aimed at edge computing, IoT, batch, etc., incorporating, you know, specialized computing elements. The projects that are in that scope at the moment are pretty much as per the original TOC specification of the SIGs: Kubernetes, containerd, Harbor, Dragonfly, Virtual Kubelet, CRI-O, KubeEdge, and the new KubeVirt. That's it for me, unless anyone has any questions or comments. Thanks, Quinton. And then, I like how we have "other SIGs, question mark" — SIG Network. You've got to read that question mark, right? Yeah, I've already learned to be on the right side of Amy. This is just a reminder. Oh, very good. So, this particular SIG has been, I don't know, embarrassingly long in coming. It's a bit of a reincarnation of the networking working group. It was our goal shortly after KubeCon EU in Barcelona to reincarnate and re-form, re-chartering with a bit of an expanded scope. We're finally doing it now. So, there's a draft charter that's been sent out for broad review, and there have been a number of folks who've signaled interest in this area. It makes a lot of sense when you consider how networking as a discipline is just part and parcel of every request that flows through a distributed system, through a distributed application. So networking, like some of the other SIGs, ends up touching a fair number of areas. In general, I think we consider topics and projects that fall within the cloud native network, API gateway, coordination and service discovery, service mesh, service proxy, and RPC categories within the landscape to be the foremost focus, and those will be topics of discussion. One of those that actually falls within coordination and service discovery is etcd, which I think is already the focus of SIG Storage, and so we've kind of pushed etcd out of focus. I think there's a lot of other backlog to go through.
There are open standards, open specifications emerging in the space for things at the proxy layer and at the service mesh layer, and this provides a good vendor-neutral venue for those discussions and for helping advance some of those initiatives. Initially, we're intending to be light on some of the governance and light on some of the roles; that's all TBD based on how many folks and participants descend upon the SIG — and everyone is encouraged and welcome to do so. There's a channel in the CNCF Slack, a new mailing list, and what will be an intro slash deep dive session at KubeCon. So, if you're interested, please do go review the charter. We're hoping to respond to comments and solidify within a couple of weeks. Terrific. Glad to see all these SIGs forming, and it looks like we're going to have a full complement by KubeCon, I think. I actually have a question, just related to the "other SIGs, question mark," and this may be a question particularly for folks in SIG App Delivery. I'm wondering whether it might make sense for a serverless SIG to form out of what is currently the serverless working group. I wonder if anybody has thoughts on whether that would make sense as another SIG? I think we did discuss that when we were formulating the draft SIG breakdowns, and at the time application development fell under SIG Apps, and we thought that serverless was a kind of application development. There's also the issue of serverless platforms and the kind of support that things like Kubernetes need to provide — you know, fast containers, etc. And so that was the thinking at the time: that it fell under SIG App Delivery. And are folks from SIG App Delivery happy to continue to cover that space? Does it feel like a natural home there? It does. But like Quinton said, there's the application side, and then there's — if you think about, say, Knative — the eventing side, which is under the covers.
So there are two ways of looking at it. We can start looking at it from the front side, and we actually are going to. But I think the other side does need some love. So, your decision there. There hasn't been a lot of movement in the serverless working group for a while, since that group moved on to the CloudEvents sandbox project and is trying to get that wrapped up. I anticipate that we'll switch back over to doing some work in the serverless working group — you know, look at updating the document and everything. But I agree that there are two sides to this: there's, you know, the more general "where is serverless going" question that the serverless working group is looking at, and then app delivery, which I think belongs in the current SIG. Yeah — when we completed the white paper, we presented it back to the TOC and made a decision not to do anything further with serverless at that point. So, to the point that was just made, we can definitely pick it back up. We moved over to CloudEvents and started working on that as the action we took away. And if there are other things we want to do, that definitely makes sense to do in the CNCF, I believe. Hey, Liz, this is Jeff. I think we should at least try to do a serverless SIG — I'm pretty passionate about it. Can we put that down as a potential, to kind of move the serverless working group to a serverless SIG? I personally think that would be a good thing to have, even if, at this point, the existing working group is relatively quiet. I think right now there's a PR or an issue that has the list of those things; you could have it there as a possible future SIG. Yeah — I worry that there's a bunch of underlying serverless infrastructure, serverless projects, that maybe don't quite fit naturally into app delivery.
Well, I can take that one and try to talk with the current serverless working group, and see if we can draft a pull request for that SIG. There's another option here, which is to keep it as a working group focused on serverless inside one of the SIGs. And the two obvious SIGs: one would be App Delivery, and the other would be what we've now called Runtime. Because — having been involved in some projects that are building serverless layers on top of Kubernetes, for example — there's quite a lot of general-purpose, useful stuff that needs to be added to Kubernetes to make it suitable as a serverless platform. And I think it would be useful to have those conversations in the Runtime SIG, for example, because they're not all serverless-specific problems; they're actually general platform problems. Just a thought. So maybe we could just re-energize the working group and drive discussions in those other two SIGs. Exactly — have the working group be the place where serverless-specific stuff is discussed, and it can be homed in either of those two SIGs, whichever the working group thinks is the best home for it. All right. As was suggested, let's have the existing serverless working group discuss amongst themselves for a while. I just wanted to flag that as a possible area that right now feels a little bit buried away from the other SIGs. But I think we should probably move on and talk about Longhorn, if that's okay. And Liz, if Doug is on — I can take the action to get back to you with the feedback from the serverless working group on that. Thank you, Ken. Great. All right, so Longhorn — Sheng, do you want to take it from here? Yes, thank you, Liz. So, from last time, the presentation of Longhorn to the TOC, we got some feedback that we currently don't have a very strong developer community driving development.
So the current situation is that we do have a pretty active user community, but we have to admit that there are not many contributions from developers outside Rancher Labs. And we have identified a few reasons. The first is that we think Longhorn's technology has a really high learning curve compared to some others, and in this case we need to invest much more in providing guidance and documents for developers — we mainly focus on providing documents and guidance for users at the moment. So that's one thing we need to change. The second thing is that the development process is mainly driven by our internal Rancher Labs engineers. From the outside, people can see what issue is being worked on, when it will be done, and when the release is going to happen, but they don't have the full view of the process and how to get involved. That's probably another barrier as well. And also, the project's awareness is not really high enough. I would think that if 100 people start using your project, probably — I don't know — less than one percent are going to contribute. So in order to get more contributing developers, we also need to increase the project's awareness by all means. And finally, we think that having Rancher Labs as the parent company may make people think Longhorn is biased toward, or only works with, Rancher. There are many people asking if Longhorn can only be used with Rancher, or whether it can be used on OpenShift and on others like AWS or GKE. The answer is that, in fact, Longhorn can be installed on any Kubernetes, but with it under Rancher's umbrella, that makes people think Longhorn is probably biased. Next slide, please. Yeah. So, based on what we observe and what we think may be the potential reasons, we have formulated the following actions to ensure the growth of the developer community. The first is that we are going to lower the technical barrier for new contributors.
And we are going to invest more time in providing the architecture design doc and all kinds of design docs, and documentation for developers to understand how Longhorn works and how the components interact with each other. Also, currently, Longhorn's development requires a three-node Kubernetes cluster; we normally do that on DigitalOcean or some other cloud provider. But we realize that not everybody has a cloud provider backing them up, so we are going to make it possible to complete the development setup on a laptop — probably by utilizing K3s or similar technologies to make a standalone, small-footprint Kubernetes setup and complete development there. And also some other small things, like labeling small issues as "help wanted," so users know which issues they can use as a gateway to get into development in the long run. The second thing is that we want to make the development process more transparent. From now on, all new feature design docs will be shared. Currently, we are thinking about using either a wiki or Google Docs as the format, because new design docs normally get modified a lot. We haven't decided to adopt the Kubernetes KEP style of contributing design docs — that's probably going to be too heavyweight for us for now, but we can see how it goes down the road. We are also going to hold a monthly community meeting with the latest design and updates of the project, and we have just decided the meeting will be held on the second Friday of each month; next week will be our first meeting. And speaking about how to raise awareness of the project: of course, we've already tried to speak at as many community events as possible, but we haven't done much with developer meetups and small conferences.
And we are going to spend more time on that and try to raise awareness on that front. And finally, regarding Rancher Labs as the parent company: that's also one of the reasons we are trying to donate Longhorn to the CNCF. With the CNCF serving as a neutral home for the project, I think the concern about bias toward Rancher will be diminished. So that's the feedback we got, and the things we are planning to do to address it on the community side. Can I say what I think? Sure. Thank you very much for doing this. Thank you. Well done. I think — Liz, confirm or not — did we already vote on Longhorn for Sandbox? Well, it's Sandbox, so it's more... My recollection is that there were some concerns raised about whether or not Longhorn had a sufficient community, or planned to engage with the community. I absolutely echo what you're saying: it's great to see this conscious effort going into making this work. So I think the concern that was raised was about whether Longhorn had that kind of breadth of community, or had a plan to achieve that breadth of community. If I try to remember, I think yourself, me, and Xiang might have been the people interested in sponsoring. Yeah — hi, it's Alex over here. It's probably worth just noting that the SIG did review the Longhorn project, and we had sort of given it the thumbs up and sent the information to the TOC. And I believe last time around, when we got to this point, it wasn't so much about the community; there were a couple of question marks around things like the CLA, which were things that we all agreed can be sorted out post-adoption as a sandbox project. So honestly, I think it's just a case of: if there are two TOC members that are ready to sponsor it, then it should be a sandbox project at this stage. Right.
So I would like to apologize to the Longhorn team, because we've put them through a lot, and you've done really well and been incredibly patient. And I think everybody now agrees that your project is more than worthy of the bar that is Sandbox. Would anyone like to dissent from that statement? I completely agree with that statement. Fantastic. Well then, Liz, unless you disagree, I propose that we move forward and Longhorn become a sandbox project — and thank you very much to Sheng and team for doing this. I think it's a good template for other projects that have come through this process, where they've been shepherded very much as a sort of single-vendor thing and are trying to open up in stages, slowly, little by little, as they come into the foundation. I think it does point to the improved process that we were trying to work on. I'm trying to open up the doc — I've got to move it to my Gmail account — but where we don't require poor people like this to present twice; I think we want to reduce the work, especially for Sandbox. Please contribute to that doc so we can harden the process. Amy, I wonder if we could essentially reuse some of this plan here as a kind of template, or have it somewhere as a reference, because I think it's great — some great ideas. Yeah, we can work on that. So I think that means we should now consider Longhorn a sandbox project. All right, thank you. Congratulations. Thank you. I also want to thank Xiang for guiding us on this community growth plan, and of course, thank you Liz and Alex for helping us get this in. It's been a long journey, but it definitely helped us see what we were lacking and what we can improve. And I definitely believe Longhorn will be a great sandbox project. And yeah, let's see when we'll reach incubation and even graduation — we have a higher goal now. Fantastic. Thank you. Thank you. Thank you. Right, what's next? Is it Jaeger? Yuri, hi. Hello. All right, thank you.
So, just as a brief introduction: the Jaeger project has several different parts, and I have a kind of diagram here on the left. We have seven official repositories implementing Jaeger clients, which implement the OpenTracing API — those are the libraries that you put in your application for collecting tracing telemetry. Then we have the main repository, which is the Jaeger backend; there's also another repository with the visualization front end. And we have several other repositories that implement various data mining tools, like aggregations, dependency diagrams, and all that. Next slide, please. So, as an overview of the project: development started at Uber in August 2015, and then in April 2017 we open sourced it. Red Hat came on board at that time and started actively participating in development, and they were actually the ones who encouraged us to apply to CNCF, which we did. We've been incubating since September 2017 — I think we missed our annual review last year. We have a number of users running Jaeger in production; there's an ADOPTERS file in the repository, although it's not super up to date — there are many more users than what's listed there. We recently published a number of case studies after interviewing some of these companies, including WeWork, so there are posts about how those companies are using Jaeger. And obviously Uber is probably one of the largest users of Jaeger in production. Next slide, please. For this one, Matt Klein, who is a sponsor for graduation, asked me to put together a couple of slides talking about Jaeger versus OpenTracing and OpenTelemetry, which are the other tracing projects in CNCF. So here, first, Jaeger versus OpenTracing, which is the current state, as we can see in the diagram. If you have a user application process, it can be instrumented in different ways.
You can directly instrument your code, or you can use some framework like gRPC, which comes with tracing instrumentation, or sometimes you use an automatic agent, which will do monkey patching and some other types of instrumentation. But all of those types of instrumentation ultimately talk to the OpenTracing API, and therefore the instrumentations are portable — you can swap in any other vendor's tracer. Then what Jaeger does, starting from the blue line, is implement the OpenTracing API. So all the calls from the instrumentation come to our library, and then we collect the data and ship it out to the Jaeger backend components. Jaeger itself — the Jaeger project — as a result does not provide any instrumentation whatsoever. If you want to use gRPC with tracing, you would go to the opentracing-contrib organization and pull some library which actually implements that; that's part of the OpenTracing project. And again, that's the instrumentation that's portable. Next slide, please. And now the question about OpenCensus and OpenTelemetry. OpenTracing and OpenCensus are merging into OpenTelemetry as, sort of, the next major version. And that project has a bit more overlap with Jaeger, but it's still a very synergistic overlap. As we can see here, OpenTelemetry also provides an API for instrumentation, and in the future it will provide the actual instrumentation code — so that part doesn't really change that much from the OpenTracing state of the world. However, OpenTelemetry will also come with the actual SDKs, the implementation libraries running in the application that collect the data. And so that will compete with the Jaeger client libraries that we have today in multiple languages. And we, as the project's leaders, are actually very happy to offload that work from the Jaeger project, because it was a lot of work.
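The portability argument above — write instrumentation once against an abstract tracing API, then bind any vendor's tracer at runtime — can be sketched in a few lines. This is an illustrative stdlib-only sketch, not the real `opentracing` package; the `Tracer`/`Span` names and shapes here are simplified assumptions.

```python
# Sketch of API-based instrumentation portability (not the real
# opentracing library): the application depends only on an abstract
# Tracer, so a no-op tracer or a vendor tracer can be swapped in
# without touching the instrumented code.
from abc import ABC, abstractmethod

class Span:
    """A unit of work; reports itself to its tracer when finished."""
    def __init__(self, operation_name, on_finish):
        self.operation_name = operation_name
        self._on_finish = on_finish
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self._on_finish(self)

class Tracer(ABC):
    @abstractmethod
    def start_span(self, operation_name):
        ...

class NoopTracer(Tracer):
    """Default binding: instrumentation runs, nothing is collected."""
    def start_span(self, operation_name):
        return Span(operation_name, on_finish=lambda span: None)

class RecordingTracer(Tracer):
    """Stands in for a vendor tracer that ships spans to a backend."""
    def __init__(self):
        self.finished = []
    def start_span(self, operation_name):
        return Span(operation_name, on_finish=self.finished.append)

def handle_request(tracer):
    # Instrumented application code: depends only on the abstract API.
    with tracer.start_span("handle-request"):
        pass  # real work would happen here

handle_request(NoopTracer())       # no telemetry collected
recording = RecordingTracer()
handle_request(recording)          # spans captured for export
print([s.operation_name for s in recording.finished])  # ['handle-request']
```

The key point mirrors the transcript: `handle_request` never mentions a concrete tracer, which is why the same instrumentation works with Jaeger or any other implementation of the API.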
And there's not that much that's unique in the Jaeger client libraries. So if OpenTelemetry can provide those libraries, we would focus our effort instead on the backend, visualization, and data mining, which is really the meat of the project today. The last part: OpenCensus — and OpenTelemetry by extension — has another two components, called the agent and the collector. And the reason they did that, I think, is that one of the challenges with OpenTracing was that if you are shipping a binary, then you, as the author of the binary, kind of have to make a choice about which tracing implementation you bundle with that binary — unless you provide some flexible plug-in framework — because in a prebuilt binary you can't change it out. And so that was always a friction: people didn't know which tracing library to choose. With OpenTelemetry, you can choose the default implementation of OpenTelemetry, so you don't have to configure it, and it will export data in the default standard format. And therefore the agent and collector — which are simply components that accept the data and forward it to the backends, whether a tracing or a metrics backend — can also be implemented as standard components in OpenTelemetry. So again, in the current state we have a duplication with OpenTelemetry, but in the future, if those components reach parity with Jaeger's, then we'll be happy to switch to them and not spend cycles on these two components. And again, our main focus will be the bottom box, which is the tracing backend, storage backends, visualization, and the data mining platform. Next slide, please. So, on to graduation — these are some of the stats about the project.
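The agent/collector idea described above — accept telemetry in one standard format and forward it to whichever backends are configured, so the shipped binary never has to pick a vendor at build time — can be sketched minimally. All the names and the span shape below are illustrative assumptions, not OpenTelemetry's actual API.

```python
# Minimal sketch of the collector concept: receive spans in one
# standard shape, fan them out to pluggable exporters chosen by
# configuration rather than compiled into the application binary.
from typing import Callable, Dict, List

SpanData = Dict[str, str]               # simplified standard wire format
Exporter = Callable[[SpanData], None]   # e.g. a tracing or metrics backend

class Collector:
    def __init__(self) -> None:
        self._exporters: List[Exporter] = []

    def add_exporter(self, exporter: Exporter) -> None:
        # Backends are a deployment-time choice, not a build-time one.
        self._exporters.append(exporter)

    def receive(self, span: SpanData) -> None:
        # Forward every received span to all configured backends.
        for export in self._exporters:
            export(span)

tracing_backend: List[SpanData] = []   # stands in for a Jaeger-like backend
metrics_backend: List[SpanData] = []   # stands in for a second consumer

collector = Collector()
collector.add_exporter(tracing_backend.append)
collector.add_exporter(metrics_backend.append)

collector.receive({"operation": "GET /users", "trace_id": "abc123"})
print(len(tracing_backend), len(metrics_backend))  # 1 1
```

This is the friction-removal the transcript describes: the application emits data in one default format, and swapping or adding backends is a collector configuration change.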
We have over 1,000 contributors — although I think that counts authors of commits, pull requests, comments, and issues; counting specifically authors of commits and pull requests, we have almost 400. Across different repositories we currently have, I think, 15 maintainers with official commit rights, and for the backend repository it's seven maintainers from Uber and Red Hat. Some other stats are also on the screen. Next slide, please. For the graduation criteria: we have a pull request for the proposal, which has a bit more detail than the slides. As I mentioned, we have documented successful production users, and Red Hat actually bundles Jaeger as part of their service mesh product, so they support it on that front as an actual commercial product as well. As I mentioned already, we have a pretty healthy community, although I would have liked to have a few more maintainers. We have a couple more people who are currently actively contributing to the backend, and they might become full committers if they meet the requirements we have in the guidelines. And finally, in terms of velocity, we do releases of the backend approximately every two months; client libraries release on a different cadence, as features are added and needed. In the backend, at least, we've had over 1,000 PRs merged last year, so I think the project has pretty good velocity. I think this is my last slide — any questions? My recollection is that Matt Klein is working on the due diligence document right now. I think Matt isn't on the call today. Yeah, he couldn't make it. Yes, he and I will be working on that document. I looked at the template; it has a bit more questions than what's in the graduation proposal. So yeah, that will be forthcoming. And obviously, subject to the due diligence, it does look like you're in very good shape from what I've seen. Anyone got any comments or observations or questions?
So I have a question on the process: once the due diligence document is available, what is the next step? Really, it's a vote from the TOC. I'm sure if we have questions that come out of the due diligence, or following on from this presentation, we might come back and ask them. But really, once we have the due diligence document, we can just take it to a vote. Well, thank you very much for the presentation. I think the fact that there are no questions probably means people are feeling pretty comfortable with what you've shown. So thank you. All right. What else do we have? Open Policy Agent security assessment — Sarah, is this you? Hello. Ash is going to kick it off with the left side, which is really about what OPA is, and then I'll follow up with the recommendations. Good morning. I'm Ash Narkar. I am a software engineer at Styra, and I'm a core contributor to the Open Policy Agent project. Thank you so much for this opportunity. So let's talk about OPA. The goal of the project is to provide consistent policy enforcement across the stack. OPA itself is a general-purpose policy engine that can be used to enforce custom security policies in disparate systems, using a high-level declarative language called Rego. As for the benefits of OPA: if you think about it, a single organization can have thousands of components that require authorization, and each domain, vendor, and product has its own authorization paradigm, expressiveness, and interface for administering authorization policies. So the challenge with achieving least-privilege authorization is the number, complexity, dynamicity, and heterogeneity of software systems that organizations are embracing. OPA provides a unified approach to authorization, giving organizations context-aware visibility and control over their authorization posture in dynamic environments.
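The "unified approach to authorization" described above boils down to one decision model: every system hands the engine a structured input document, and the engine evaluates policy against it and returns a decision. The following is a conceptual stdlib-only sketch of that model — real OPA policies are written in Rego and evaluated by the OPA engine; the role bindings and input shape here are hypothetical.

```python
# Conceptual sketch of OPA's decision model (illustrative, not Rego):
# a policy decision takes a structured input document plus policy data
# (here, RBAC-style role grants) and returns allow/deny the same way
# for every system that asks. Roles and paths below are made up.
ROLE_GRANTS = {
    "viewer": {("GET", "/reports")},
    "admin": {("GET", "/reports"), ("POST", "/reports")},
}

def allow(input_doc: dict) -> bool:
    # Deny by default; allow only if some role held by the caller
    # grants this method on this path.
    grants = set()
    for role in input_doc.get("roles", []):
        grants |= ROLE_GRANTS.get(role, set())
    return (input_doc["method"], input_doc["path"]) in grants

print(allow({"roles": ["viewer"], "method": "GET", "path": "/reports"}))   # True
print(allow({"roles": ["viewer"], "method": "POST", "path": "/reports"}))  # False
```

The point of the sketch is the decoupling: the services enforcing the decision know nothing about how the policy is written, which is what lets one engine serve HTTP APIs, admission control, and other use cases alike.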
Using mechanisms such as admission control, OPA provides guardrails so that organizations can give their employees enough power to promote rapid innovation without compromising on security and safety. Regarding OPA's maturity: Netflix is one of the earliest adopters of OPA, and they are using OPA for authorization of their HTTP and gRPC APIs. Companies like Chef use OPA for API authorization and auditing API access. And there are more than 20 companies actively using OPA in production for use cases such as ABAC, RBAC, admission control, risk management, and so on. Regarding OPA's community, we have around 78 contributors on GitHub right now. The project has been starred more than 2,500 times, and we have a healthy Slack community of around 1,200 people. OPA has integrations available with more than 20 open source projects, such as Kubernetes, Docker, Terraform, Envoy, and Istio, and more integrations keep being added. So that's pretty much it for the OPA overview. Sarah, I hand it over to you. So yeah, I also want to point out that on the bottom left is a link to the full security assessment, which includes a self-assessment of the project; everyone on the security review team chimed in on it, worked on clarifications, and contributed to the security analysis. And it's still in PR, so we welcome people on the call, or anybody, to review it in detail and give us feedback. So, coming to the recommendations, and highlighting part of the security assessment: taking these heterogeneous environments that are so common in cloud and unifying policy presents its own security risk. The risk is really two-fold: one, whether OPA is implemented correctly, so you could have a false sense of security in thinking you have policy controls that you don't actually have; and two, whether you've actually expressed the policy you intended to express. So it's important that people adopting these security measures don't consider them to be a panacea.
You also have to be attentive to whether the implementation is correct and whether your design is implemented and expressed appropriately. So the recommendations from our security assessment really fall into two buckets. One is what the CNCF, what we all, could do to improve the security of the ecosystem in helping this project: things that are maybe outside the scope of what the project itself could effectively execute on. One idea is a study of user practices. If we were to discover CNCF members or companies that have implemented OPA, that would be a great resource for finding where people might have inadvertently deployed something incorrectly. Are there common patterns there? Are there also common patterns in insecurities? Since OPA allows custom policies, every policy is different, right? However, we all believe that there are actually common policies. OPA has a rich set of examples; maybe we could expand that by discovering the commonalities emerging across its users. That's where the CNCF has visibility into a wide spectrum of users and could help this project. Another, analogous recommendation: it may be that the individual companies applying OPA all have common dependencies, and if OPA were to integrate with a common dependency, that might accelerate adoption of OPA as well as add security, by creating robust implementations of integrations that would then be used by many vendors. So that's one part of the recommendations. The other part is for the project itself. OPA has really vast documentation that is generally very good. We felt that the gotchas, the potential problems, could get more attention in the documentation. And we also brainstormed some ways OPA could be a little more secure by default; we had some ideas about how the implementation of the language or the tooling features could help with that.
And the most straightforward of these are already on the roadmap, in terms of improving testing and the playground so that people can validate their policies. But we also had some ideas about changes to the language itself. And then there's also a call to action for people who are using OPA: we'd like to see more companies represented on the security team. So if you're a company that's using OPA, maybe you could consider having one of your security experts join the group. There's also a link to the security assessment overview, and I'd like to invite questions and discussion. Thank you. Actually, one little question: is Rego used for anything else, or is it purely used for OPA? So Rego is OPA's high-level declarative language. It's at the core of OPA, so it's for OPA itself; it's made for OPA. So yeah, one of the things that I learned about OPA ages ago: it has some heritage in the likes of XACML, and there are predecessors that influenced its design. Yeah, I was going to say the same thing. There's a long history of use of similar types of tooling in security use cases, just not in cloud native ones. Just to add to that, it's inspired by Datalog. I'll just give some history on Rego. Just a comment: I don't think that assessment link exists. I can't find that document. Is it because the PR is not merged? Oh, it was supposed to link to the... I will stick it in the chat. So if you read the text, it's not linked; that's where it will be. I just put the link to the commit in the chat so that you can read it ahead of time, and then we can look at the PR. I suppose the following question is really for Ash: how does the OPA project feel about the assessment process? Were those useful recommendations, and is the project planning to follow up on them? Was it useful for you? Yeah, sure.
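The earlier point about improving testing so that users can validate their policies can be sketched as a table-driven test over a policy decision. The admission rule and test cases below are hypothetical; with OPA itself you would express the rule in Rego and validate it with `opa test` against Rego unit tests.

```python
# Hypothetical sketch of validating a policy before deploying it:
# assert expected allow/deny outcomes for representative inputs,
# analogous to what `opa test` does for Rego policies.

def admission_allow(pod: dict) -> bool:
    """Toy admission rule: every container must declare resource limits."""
    containers = pod.get("spec", {}).get("containers", [])
    return bool(containers) and all(
        "limits" in c.get("resources", {}) for c in containers
    )

# Table of (input, expected decision) pairs covering allow and deny paths.
CASES = [
    ({"spec": {"containers": [{"resources": {"limits": {"cpu": "1"}}}]}}, True),
    ({"spec": {"containers": [{"resources": {}}]}}, False),
    ({"spec": {"containers": []}}, False),  # nothing to admit
]

for pod, expected in CASES:
    assert admission_allow(pod) == expected
print("all policy test cases passed")
```

Catching a mis-expressed rule at test time, rather than discovering it in production, is exactly the "false sense of security" risk the assessment calls out.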
It's been really helpful to work through the assessment process. We identified certain issues through this entire process and opened GitHub issues for them, which we will definitely tackle, because we want to be as secure and user-friendly as possible. Some of these are valid concerns about users not being sure what the policy is they're authoring, because the policy language itself is so powerful. So adding more documentation, and whatever checks we can put in place to reduce user errors, is definitely helpful, and that came out of this process. So yeah, I would say it was very helpful for the project itself, and we hope to tackle these issues in the near future. That's really good to hear. Good to hear that the assessments are bearing fruit. So thank you, everyone who was involved in putting that assessment together. That's great. Any other questions for me? Yeah, I had a quick one. Sorry, I was just looking at the PR. It's called a self-assessment, and it's not entirely clear: did the project assess itself, or did the security SIG assess the project? Both, actually. There are two parts to the assessment process. One is that the project is asked to go through and fill out a bunch of information, and then afterwards the group doing the security assessment effectively writes its own document as well. There are a couple of reasons we have this structure. One is that the project may feel that their documentation on a certain item is an adequate way to address something, or that some security concern isn't under their purview, whereas the group doing the assessment can, in their own document, bring a different perspective on those items. In this case we didn't really find a strong need for the two to diverge greatly, but I think there were quite a few places where we stressed things differently. So it's helpful to have both documents, so that different perspectives can come through. Okay, thanks.
So it's actually done independently by the project and by SIG Security, it sounds like. Yeah. We do talk with each other a lot throughout this process, obviously, and interact and make suggestions on the other document, but they have ownership of their self-assessment, and we have ownership of the summary assessment that the assessment group did. This is actually quite nicely documented: the SIG Security assessments guide lays out the steps. I think that's laid out quite nicely. Yeah, and I'm also finding that the commonality in the breakdown of the sections helps with quickly scanning these documents. So it's my hope that having the same format for many projects will help people look across projects and pick the one that is appropriate for their use case and their risk profile. Okay, I guess one other follow-up action we could take here is to loop in Cheryl and the user community and see whether there's interest in putting together some more case study information for the CNCF-recommendation side of things. So yeah, I had an initial meeting with her a couple of weeks ago. There isn't currently a structure for how we would do that kind of outreach, but she's very interested in trying to figure out a way we could do this kind of user research, and Amy and I are trying to figure out the right set of folks who could execute on it. I mean, I think it would be an effort from SIG Security, but we tend to have security experts rather than the people who really know how to frame that type of research project. So we're still feeling our way through how we would execute on it, but I'm really excited about the opportunity of reaching out to that end user community. All right, so we're bang on the hour. Thank you, everyone who presented today. Perfect timing, and talk to you all again soon, I'm sure.
Yes, we'll be putting the next meetings' calls together. We've been prioritizing graduation reviews, so ping me if you are a project that would like to have your graduation review included. Thank you.