Good morning. Good morning, I guess. Yeah, we'll just wait a few minutes. Good morning. Or afternoon, as it may be. Or even evening. There's ten of us here. Good Lord. I'm sorry I'm a little late; I struggled to dial in a little. I just want to see if I can project here so long. It must be very cold where you are. It's snowing pretty hard. We just got back from running the kids to school. Let me know when you want to start. I think maybe we can give it one more minute and start at five past, if that's okay with you. I'm in no rush at all. I'm dialed in from one of these Facebook Portal devices, which work really well, but which you can't project a screen from. So I'm dialing in another way too. Hopefully that will enable me to project a document. Sounds good. We're looking very good so far. Okay, I think we can start whenever you're ready, Quinton. Yeah. Okay. Thank you. Maybe you can do an intro so long; I'm just going to get the projection working. Cool. So today we're going to do a review of the due diligence guidelines, specifically as they are required as part of the criteria for projects which are joining the CNCF and projects that are graduating from one level to another, say from sandbox to incubation, for example. Quinton, as well as Aaron, I guess, were among the original people who put together the guidelines and the criteria, so it's about as authoritative as it gets. And hopefully this will be useful. We'll be sharing this recording, and this will be useful for projects looking to learn more about the process and the criteria, and it will also be a useful aid for tech leads and other members of the SIGs that might be involved in due diligence reviews at different points. Thanks, Alex. I'm just struggling to... here we go. Maybe that will work now. Give me one second, I was having some permission problems. Ah, there we go. There we go. Can you see my screen? Good morning, everyone.
Yeah, as Alex mentioned, a few of us got together a few years ago, while I was on the TOC at the time, and tried to codify some of the principles and goals behind the admission of projects into and out of the CNCF and between the various different levels. I didn't prepare any fancy slides or anything for today. I was thinking it might be more useful just to treat this mostly as an overview of the process and drive it through Q&A from the audience. So this is the document that we wrote some years ago. First of all, I must mention that I'm no longer on the TOC, so I do not officially represent the views of the TOC in this regard. I did write this document, and it is the document that the TOC uses. But if there's any contradiction between what I tell you and what the TOC tells you, the TOC takes precedence, for sure. And if there's anything that I say today that any of you think is contrary to what the TOC might have told you recently, then please do bring it up so that I don't confuse people with misleading information. So I think the first thing, and I'm not going to go through all the bullet points: this document (you can see the URL at the top) is pretty straightforward and has links to all of the various bulleted lists of criteria and levels, etc. But I think it's important to understand that there are three different levels of involvement of projects in the CNCF. I'm sure most of you are familiar with these terms, but it's useful just to recap on what the overall intention of the different levels is. So the first one is called Sandbox. We have many, many projects in the Sandbox.
And for the most part, the Sandbox was explicitly created as an environment for essentially any project, nascent or otherwise, that was looking for a legal home: where the IP of the project would be well-defined, where the Linux Foundation (or the CNCF within the Linux Foundation) would have clear ownership of the IP, and where multi-company projects could easily collaborate within a well-defined structure, so that nobody could sue each other afterwards, and all that kind of stuff. So that was the overall goal. What that means is that essentially, if somebody says, I think I've got a good idea and I would like to collaborate with companies X, Y and Z to figure out whether this good idea can be turned into some kind of useful open source project, that could be a Sandbox project. And so any sort of indication that it has to have code that meets certain standards or certain commit rates or anything else is, I think, misleading. The intention is clearly that these Sandbox projects either gain some momentum within a reasonable period of time, getting all of the people together to collaborate on this idea or project (or maybe it's already a project that exists, with a bunch of collaborators who are looking for a legal home), or they die. And by dying, it might be that there's no interest in collaborating on this project, or not enough interest to actually get a viable community around it, or maybe they do get a viable community around it to go and explore the space, and decide that it's actually a bad idea and cancel the project. These are all reasonable and expected potential outcomes of Sandbox projects. Of course, another one is that the Sandbox project does become something useful, that we end up with multiple companies collaborating on a useful thing. It gets some steam behind it. People contribute. It starts working.
And then at some point it starts getting used in production by limited numbers of customers or users, and then we can say, oh, maybe it's actually reaching some point of 1.0 completion, and we can consider moving it into the incubation part of the CNCF, which is a different level. So does that make Sandbox somewhat clear to people? And please do interrupt me along the way if anyone has questions. Deathly silence. Is anyone here? I think that's very clear. Okay, good. So the next level up is what's called incubation. And again, you can go to the links and you can see there are specific criteria. They're intentionally vague: they are designed so that the TOC has a fair amount of latitude to exercise some judgment in deciding whether a given project should enter the incubation level or not. Oh, one thing, just to go back to Sandbox for a moment. One thing that's very important is that we do not want to create the impression that by gaining a CNCF badge, a project somehow becomes credible for production use, that it is somehow as good as Kubernetes or as good as any of the other graduated CNCF projects. So we've put some pretty strict rules around what level of promotion a project may receive within the CNCF while it is in Sandbox, because we don't want to create confusion by mixing up these nascent, not yet developed, not yet mature projects with clearly mature projects like Kubernetes and the other graduated projects. So there will be no big banners, there will be no press releases, there will be no socks with, you know, Sandbox Project X on them at KubeCon or other events produced by the CNCF, and to the extent that there are, these are essentially mistakes.
The intention is very clearly not to actively market and promote Sandbox projects, but rather just to create a legal home for them to get started. Right, so progressing to the next level, which is incubation. These are projects that are still relatively new. They might be around 1.0. They have a limited number of users, maybe a few who are starting to use them in production. They have an active development community; it may even (somebody correct me if I'm wrong) be a single company who has got this thing to a point where people are actually using it in production, albeit in some kind of limited form. They may not have all of their T's crossed and all of their I's dotted. They may not have all of the requisite committees in place and all of the requisite governance structures in place. They may not have enormous numbers of users or enormous numbers of contributors, but they do have a viable project that, in the opinion of the CNCF TOC, is consistent with the goals of the CNCF around cloud native technology, and which they believe has the potential to become something that could get the real stamp of approval from the CNCF, to say: this thing, you can bet your business on it, you can run your business on this project. It's not there yet. It doesn't have all of the things in place that the CNCF would like to see to be able to bet long term on a project, but it looks like it's getting there, and it's usable in production, at least for limited use cases. And I think that is the point at which we do the main bulk of the technical due diligence on projects: when they want to go into this incubation stage. And I'm going to use this as an opportunity to flip through this document quickly to give you an overview of the kinds of things that we want to do during that due diligence.
The one that happens later, when a project wants to move from incubation to being a graduated project, is a lot more lightweight. It's a lot of checkbox items around legal compliance: do you have a process for this and that and the next thing, have you got this badge and that badge, all of which are useful things, and I don't want them to sound unimportant. But usually, when a project transitions from incubation to graduation, we're not arguing about whether its architecture is correct, or whether it's cloud native or not, or whether it's a good idea or not. We're usually really checking all the boxes of things that we would expect to have in place in order to say this is a good long term bet for you to use in your company and run your production software on. Cool. Any questions so far? Deathly silence. Okay, please do interrupt me, and in fact, Alex, I'm going to ask you, just to keep the debate slightly lively, please do ask me questions even if you don't really want to, because I think it's useful. I will, as some questions come up along the way. Okay, so there's a bit of blurb there. Basically, if you're doing a technical due diligence, particularly for this CNCF SIG, your overall role is to try and gather all the information that the TOC would want in order to make a judgment call on whether a project should or should not be admitted at one of the levels (sandbox, incubation or graduated), or whether it can move from one level to another: sandbox to incubation, incubation to graduation.
You know, there's quite a lot of hard work involved here. It differs depending on the project, but the intention, particularly for an incubation level application, is that you should go and look at the source code, you should go and build the software, and, ideally as an expert in the particular area that the project covers, you should be able to form a fairly detailed and informed opinion about the quality of the code, the people that are using it and how they are using it, how the software development process works, and what kind of testing they have. Is the code of reasonable quality or not? Are there any people in the open source community that are arguing and fighting about things? All of these things are relevant, and they're not things that you're going to find by reading the application form that the project filled in to say, we want to be in incubation. They actually require you to go kind of behind the scenes: talk to people, look at the code, run the code, try it out. Go and eavesdrop on a few PRs and a few pieces of debugging work, or if there's a CI/CD process in place, go and watch it for a couple of days and see what's getting merged, what tests fail, what tests exist, and where things have to get backed out and why. All of these things are relevant to the health of a project. They don't all have to be perfect, but we do have to have visibility into them. So I think that's important to note. So the primary goal here is to enable the voting TOC members to cast an informed vote about a project. It's crucial that each member of the TOC is able to form their own opinion as to whether and to what extent the project meets the agreed upon criteria for whatever level it is wanting to enter.
As the leader of the due diligence exercise, your job is to make sure that they have whatever information they need, succinctly and readily available, in order that they can form that opinion. You've probably noticed most of the members of the Technical Oversight Committee, the TOC, are very, very busy people. They're on the TOC because they're typically very good at what they do. They typically have a lot of experience and know a lot about cloud native technologies, and such people tend to be given a lot of responsibility in the companies they work for. Hence they're very busy, and they don't have the time to do all the work that I just outlined to you. So that is the job of the SIGs, the Special Interest Groups (I drew a blank there). The Special Interest Groups are there for precisely that reason. They're more focused on particular areas of expertise, and this one is the Storage SIG, of course. So we gather more than one or two people that are very specialized or expert in a particular area, and we do a lot of that heavy lifting work. So that's the model that is proposed here. And there's a whole bunch of bullet points here where you can start to make sure you understand what the TOC principles are, what the project proposal process is, what the graduation criteria are between the various levels, and what the desired cloud native properties are. These are pretty foundational things. If you're not clear on what those are, then you won't be able to do the due diligence to any degree of usefulness. Make sure you've read the project proposal. Just as background as well, it's probably worth noting that the criteria are layered between each of the different levels.
So it's kind of implied that in order to achieve incubation, for example, you will have achieved all of the criteria for sandbox, and likewise, to fully graduate, you have to have met all the criteria for incubating too. Yeah, yeah, that makes absolute sense. And there are certain principles that apply across all levels, things like cloud native properties: the ability of the system to service its use case in a scalable way on essentially commodity hardware in a cloud computing environment. These are fundamental principles of the CNCF. And so if a project comes along and it has, you know, a large single-point-of-failure monolithic database in the middle of it, it's unlikely (not impossible, but unlikely) that it will fulfill the goals of the CNCF. So sometimes you can cut your due diligence exercise a little short: if there are obvious flaws in either an existing project or a plan which make it contrary to the goals of the CNCF, the Cloud Native Computing Foundation, then you could just stop right there and say, look, this looks like a nice project, but it doesn't fit in the CNCF; here are some suggestions as to other places you might go with that project or that idea. Does that make sense? And the Linux Foundation, by the way, has several different foundations within it, which cater to different areas: networking, storage, telcos, machine learning, etc. So it may just be that there's nothing wrong with the design of this project; it just doesn't fit into the sphere of the CNCF, and it fits better in some other foundation, either within the Linux Foundation or outside. And hopefully we will be able to guide you in that direction.
I think that point is a particularly good one, because I think we've seen that happen a few times, in fact, when the TOC has been voting on sandbox projects, or in questions that have come through from the TOC on incubating projects as well. So I think it is a good point: obviously, don't join the CNCF just for the sake of joining the CNCF; it needs to be a good fit as well. Yeah, that's true. And actually, maybe this is a good time to have a slight diversion. One syndrome I've seen a few times, more than once at least, is this: KubeCon in particular (obviously less so now that it's virtual, but hopefully we will resolve that at some point), and the CNCF in general, has been an incredibly successful vehicle for promoting open source projects, possibly the most successful vehicle available to any open source project, provided that it fits into this space. And the whole cloud native space, as all of you are aware, is on everybody's tongues. So there's a very strong motivation for open source projects to be seen to be strongly affiliated with the CNCF from a purely marketing and promotional point of view. It's a very good brand, a very strong brand. And a lot of companies and open source projects see a lot of benefit in coming to the CNCF and becoming part of the CNCF family of projects. On the flip side, the CNCF has created a foundation, to a greater or lesser extent, which helps the consuming environment, the people using and wanting to use cloud native technology, by providing them with a sane structure of projects where they can easily understand what is what, why these projects exist, how they fit together, whether they interoperate properly, etc. And so sometimes those two goals are at odds.
Sometimes a given project may not actually fit into the clarified vision of cloud native computing that the foundation tries to create for consumers so that they can understand what's going on. In other cases, some of the projects are just not mature enough, and so they create the impression that Project X is usable in production when in fact it's not, and people may try it out and find, wow, it's got a whole bunch of limitations that weren't documented, and now I'm having a bad experience, and ah, CNCF, you've made my life difficult. So there's this kind of tension between these two. And to a large extent this due diligence exercise is designed to resolve that tension, and to try, at the very least, to clearly explain to projects why they may not at this time fit within the CNCF well, or advise them as to what they could do to change things. In some cases, these are just honest errors, where maybe the architectural choices, or the choices in how they run their project, or whether they collaborate with other companies and how they deal with them, make it difficult to fit into the CNCF, and they're like, oh, wow, that's great advice, we're going to change that, and we'll come back in six months' time and then we'll fit in. So anyway, I'll stop babbling on about that, but just to point out that there is a bit of tension there, and part of the due diligence exercise is to try and resolve that tension. So, here's a bunch of questions. I'm just going to rattle through them. This is going to be kind of boring, but hopefully it will trigger some conversation and questions, because these are the explicit questions that you should be asking yourself and noting the answers to as part of your due diligence exercise. Is there an architectural diagram and feature overview? Can people, with the diagram, understand: oh, this is what this project does?
What are the primary use cases, and which of them are accomplishable now? You know, sometimes there are use cases that are envisaged for the future, but the project is not ready to service them today. Which of them can be accomplished with reasonable additional effort, and are perhaps even already planned on the roadmap (they'll be delivered later this year, or whatever the case may be), and which of them are explicitly out of scope? So it's entirely possible that you have a data distribution system that is specifically designed for particular use cases, either a rate of churn of the data or a size of the data or whatever, but distributing large blobs is explicitly out of scope, or distributing things that have millions of subscribers who want to know when the data has changed is just not part of the design of the project. So it's as important to understand what's in scope as what is out of scope, and make sure that these things are clear, because if somebody wants to use a key value store to distribute their movies, it's probably not going to work very well. And sometimes that's not obvious when you read the marketing blurb around the project; it sounds like it's just awesome for everything. What exactly are the current performance, scalability and resource consumption bounds of the software? So does the project actually understand exactly at what point the software breaks, performance-wise, scalability-wise or resource-consumption-wise, and has that actually been tested? If they claim this thing scales up to millions of rows of data, or whatever the metric is, has anyone actually tested that and proved it to be the case? Many of you might remember that Kubernetes in the early days only actually scaled to like 100 nodes, and it took a very, very long time, many years, to get it to even a few thousand nodes. And those limits were empirically derived by actually running tests at scale.
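The idea of empirically deriving a scaling limit, rather than trusting a claimed one, can be sketched in a few lines. This is a toy illustration, not a real benchmark harness: the workload (inserts into a Python dict), the tested sizes, and the per-operation budget are all invented for the example; a real review would measure the project's actual API against its documented claims.

```python
import time

def time_per_op(n: int) -> float:
    """Average seconds per insert for n inserts into a plain dict."""
    store = {}
    start = time.perf_counter()
    for i in range(n):
        store[i] = i
    return (time.perf_counter() - start) / n

def find_scaling_limit(sizes, budget_s):
    """Return the first tested size whose per-op cost exceeds the
    budget, or None if every tested size stays within budget."""
    for n in sizes:
        if time_per_op(n) > budget_s:
            return n
    return None

if __name__ == "__main__":
    # Hypothetical 1-microsecond-per-op budget, purely illustrative.
    print(find_scaling_limit([10_000, 100_000, 1_000_000], 1e-6))
```

The point is the shape of the exercise: a claimed bound becomes a measured one, and the due diligence report can say "tested up to N" rather than "the project says N".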
And I don't know where it stands at the moment. I'm guessing it's in the sort of 5,000 node region, but I'm not sure. But there's an example of that kind of thing. What exactly are the failure modes? What happens when things fail? Some architectures fail catastrophically: if you have a single-point-of-failure database and it goes down, and you have no mirrors and no caching, the system is typically completely down. If, however, you have a replicated, sharded system, it may degrade in performance in a somewhat more graceful manner. How well understood are these failure modes? Can you enumerate them: what happens when this kind of node fails, what happens when that kind of node fails, what happens when this overloads, when the network becomes unavailable, all these kinds of things? Does it fail gracefully, or does it collapse in a heap and corrupt all the data? There are very big differences there. What explicit trade-offs have been made? Yeah, just on that point: I think that's also a brilliant place, when you're doing DD, to actually investigate for yourself, because trying out failing components of the system gives you insight into the failure modes and a really good insight into the overall architecture of the project as well. So it's generally a really good place to start. Yeah, absolutely. One tension you'll find is that if you want to do all of this stuff yourself manually, it's a lot of work. So quite often what I end up doing is going to speak to the project architects or the leaders of the project and getting a sense of whether they've thought about these things, whether they can give me reasonable answers: yes, we've tried that, and this is what happens; or, we haven't tried that, but the architecture tells me that it would do the following. That's often a good enough answer.
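The contrast between the single-point-of-failure database and the replicated, sharded system can be made concrete with a little arithmetic. A minimal sketch, under the idealizing assumption that nodes fail independently and the service stays up as long as at least one replica is up (real correlated failures are worse than this):

```python
def availability(node_up: float, replicas: int) -> float:
    """Probability the service is up, assuming it survives as long as
    at least one of `replicas` independent nodes is up. Independence
    is an idealizing assumption for illustration only."""
    return 1.0 - (1.0 - node_up) ** replicas

if __name__ == "__main__":
    p = 0.99  # each node up 99% of the time
    print(f"single node: {availability(p, 1):.4%}")  # 99.0000%
    print(f"3 replicas : {availability(p, 3):.4%}")  # 99.9999%
```

Two extra nines from two extra replicas is the "graceful" end of the spectrum; a design whose availability formula is just that of its one database node is the catastrophic end.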
If the project leaders have not even thought about these things, then that's more of a red light to me, because it indicates that they designed the thing without even thinking about what the failure modes would be, and that often means that they're pretty bad. And again, it's not your place in life to decide whether a given architectural design is good enough. It's really your place in life to expose the details to the TOC in such a way that they can consume them, and perhaps give advice: this thing looked very good to me; this thing seemed like it needed some improvement in these areas. That is the mode of communication that I would recommend. Trade-offs around performance, scalability, complexity, reliability, security, etc.: it's almost impossible to build a system that is infinitely performant, infinitely scalable, very simple, highly reliable and fully secure. It just doesn't exist, so inevitably explicit trade-offs need to be made. We decide that we're going to make the system more complex in order to make it more secure, maybe. Or hopefully less complex, to make it more secure. Or we trade off performance for reliability; that's a pretty classic kind of trade-off. Certain centralized, single-point-of-failure relational databases are actually very fast: you can do amazing numbers of transactions per second per node on those things, but when they fall over, they fall over. So there's an explicit trade-off that often gets made: people use less performant databases, for example, to get better reliability, etc. What are the most important holes in the project? Like, do they know that they have no extensibility or integration points, or that they don't have a very good high availability story at the moment?
Sometimes knowledge of these things, particularly at the incubation level, is good enough: we know we have this shortfall, we're working on it, that's part of our incubation efforts. It's still usable in production, but there is this theoretical failure mode that we don't like, and we're working on solving that. What does the quality of the code look like? Yeah, just a question before you move on from those last two. Those are commonly the kind of backlog or issues that a good, healthy project keeps visible to all players. So is the check mark saying that there are some architects who know what those trade-offs and holes are, or is the check mark saying the project is good at giving visibility to all those involved of what those are? That's a very good question. First of all, there's no checkbox. And secondly, it depends very much on what level we're talking about here. If a project wanting to graduate has an obvious single point of failure that is not, at the very least, made highly visible in big flashing letters on the front page of the project, then it's almost definitely not going to graduate. Even at incubation level, you would definitely want that stuff made visible. Clearly, anyone wanting to use the system should know what the restrictions are and what risks they're placing themselves under in terms of availability or performance. To the extent that there is a checkbox, it's about exposing the stuff, and then it's up to the TOC to decide: okay, if we tell potential users of this open source project what these limitations are, are those reasonable limitations? Is the project still useful? Are they solvable problems, etc.? Does that answer your question, Tom? So the answer is that it's subjective, based on which gate they're going through. Yes, and it is definitely the case that being clear about what the limitations are is very important.
And it may be fine to not solve the limitations, provided that there are workarounds, or use cases where those limitations do not create a huge problem. It's not the case that every project has to work in every single use case perfectly, but it does need to at least work in some use cases well enough. Yeah, agreed. Some of the most successful projects I've seen are the ones that do a good job of deciding what they're not going to do. And so I guess I'm just curious how the CNCF and the Storage SIG can help present a best practice kind of idea: taking these concepts and modeling good, healthy examples of making those things visible to all the developers and contributors and users. Yeah, that's very good feedback, and I don't think we've necessarily tackled that specifically. Alex, correct me if I'm wrong. We have the white paper, which I think does a pretty good job of outlining the space of storage and the kinds of areas that one needs to think about. But I think we could distill that into what you've referred to as a bit of a checklist, or maybe the specific answers to these particular questions per project. And to some extent, that's what the due diligence report is supposed to be: to take all of the kinds of things that people might care about and create explicit answers to them. Make sense? Yeah. Thanks. Cool. Okay, so Alex, how much time do I have? Do we want to cover anything else in this meeting, or should I just carry on? Um, no, we have about 20 minutes left. Okay, cool. I will try and step it up a little bit. You know, code quality is pretty self-explanatory. But of course, just to state the obvious, if your code quality is pretty bad at the beginning, it typically doesn't get better, it gets worse.
So if you have a project that has horrendous code quality at the beginning, then you should raise that as a big red flag, particularly in combination with a poor choice of language: languages that don't scale well. If you don't think that it's possible to move from the point where the project is to graduation, ever, without rewriting the whole project, then that would be a pretty big red flag. And there are a couple of languages, I won't dis any in particular, but there are actually major open source projects that have failed because of relatively simple things like choosing the wrong language for the particular project they were doing, and running into a wall where it just could not scale to the size of the project: not so much scale in terms of performance of the software, but scale in terms of the size of the project that they were planning to build and the size of the teams, and the language may just not lend itself to large projects like that. Dependencies: this is a pretty big one. Make sure you understand what this thing needs in order to work correctly or well, and make sure that you understand how these dependencies couple with the system and what their license restrictions may be. You don't have to be a licensing expert, but if this thing can't run without some proprietary thing that you have to go and buy from somebody, then that's a pretty big red flag; or if it can't run unless you, you know, cut and paste some code from something into somewhere, that's obviously a concern. What is the release model? Do they have proper versioning? Do they do periodic releases? Do they have CI/CD (continuous integration / continuous deployment) systems? Is there automated testing, so that whenever anybody sends a diff, it gets tested, merged if appropriate, and caught if it doesn't work properly, etc.? This is all important.
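The dependency-and-license pass described above can be sketched as a simple allowlist check. Everything here is hypothetical: the dependency names, their licenses, and the allowlist are invented for illustration, and the actual rules come from the CNCF/Linux Foundation IP policy, not from this snippet. In practice the inventory would be produced by a license-scanning tool rather than written by hand.

```python
# Hypothetical inventory of a project's dependencies and their
# declared licenses (names invented for illustration).
DEPENDENCIES = {
    "some-http-lib": "Apache-2.0",
    "some-yaml-lib": "MIT",
    "some-crypto-lib": "GPL-3.0",   # copyleft: would need review/exception
    "vendor-sdk": "Proprietary",    # the red-flag case described above
}

# Licenses that are generally an easy pass; the authoritative list
# is defined by the CNCF IP policy, not here.
ALLOWLIST = {"Apache-2.0", "MIT", "BSD-2-Clause", "BSD-3-Clause"}

def flag_licenses(deps, allowlist):
    """Return dependencies whose declared license needs human review."""
    return sorted(d for d, lic in deps.items() if lic not in allowlist)

if __name__ == "__main__":
    for dep in flag_licenses(DEPENDENCIES, ALLOWLIST):
        print(f"needs review: {dep} ({DEPENDENCIES[dep]})")
```

The output is not a verdict, just a worklist: each flagged dependency goes to the CNCF's licensing experts, which matches the division of labor Quinton describes.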
It's not the case that every single project has to have all of these right from the beginning, but certainly by incubation you want a reasonable CI/CD system and decent automated testing. The CNCF staff have lots of lawyers and licensing experts, so, as I said, you don't have to be a licensing expert yourself, but you do need at least a rough overview of what the licensing requirements are for the CNCF, what this particular project depends on, and how exactly those dependencies are integrated. Do we import their code? Do we just install that thing before installing the project? And so on. Then there are the operational modes. So that's mostly the technical stuff. I was just going to add a couple of points, because over the last year, since I've been involved in a number of different projects, two recurring things come up fairly often. The first is licensing, which applies from sandbox and above. It's useful to familiarize yourself with the CNCF IP policy and understand which classes of licenses are an easy slam dunk and which cause problems. That's definitely worth considering. The other thing worth considering is the dependencies. This has come up often, partly, as Quinton mentioned, where there are dependencies on proprietary products, but we've also had circumstances where questions were raised about dependencies that effectively compromise some of the cloud native aspects of the product.
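The licensing point above lends itself to a simple automated gate in CI. The sketch below is purely illustrative, not official CNCF tooling: the allowlist is a hypothetical approximation (the authoritative list of approved licenses lives in the CNCF IP policy), and in practice the dependency-to-license mapping would come from a license scanner rather than being written by hand.

```python
# Minimal sketch of a dependency license check. Not official CNCF tooling;
# the allowlist below is illustrative only. The authoritative list of
# approved licenses is the CNCF IP policy.
ALLOWED = {"Apache-2.0", "MIT", "BSD-2-Clause", "BSD-3-Clause", "ISC"}
NEEDS_REVIEW = {"MPL-2.0", "EPL-2.0"}  # weak-copyleft: flag for a human

def review_licenses(deps):
    """deps maps dependency name -> SPDX license identifier.

    Returns (ok, flagged); a CI job can fail the build if `flagged`
    is non-empty.
    """
    ok, flagged = [], []
    for name, spdx in sorted(deps.items()):
        if spdx in ALLOWED:
            ok.append(name)
        else:
            reason = "needs legal review" if spdx in NEEDS_REVIEW else "not on allowlist"
            flagged.append((name, spdx, reason))
    return ok, flagged

# Hypothetical dependencies, for illustration only.
ok, flagged = review_licenses({
    "libfoo": "Apache-2.0",
    "libbar": "MPL-2.0",
    "libbaz": "Proprietary",
})
```

A check like this only surfaces questions; as noted above, anything flagged still goes to the CNCF's lawyers and licensing experts rather than being decided by the tool.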
For example, you might have a project which is scalable, scales horizontally, has multiple nodes, and that sort of thing, but depends on a single Postgres database in the background. Then you have to ask questions about what options are available for making those aspects of the system highly available or scalable as well. That comes up more often than you'd imagine. Yeah, makes sense, I agree. So, moving on to the project management side of these projects, away from the purely technical stuff. Are there good documented processes? How do people who want to get involved in the project figure out what they have to do? Is there an issue tracking process? With people using things like GitHub, most of these things get checkboxes pretty quickly, but things like release management are not always as obvious, so it's a good idea to understand what's going on. They don't all have to be perfect, and I want to be very clear about this: it's not the case that if you get a cross next to any one of these items you're somehow not eligible to enter the CNCF. That's not the intention at all. We just want to understand what goes on, what things, if any, need to be changed, what the current status is, and what plans there might be for improving it. Is there a documented governance model of any kind? For incubation it's quite often the case that there isn't. Even Kubernetes didn't have one until much later on, and that was one of the things that held up its graduation. So you don't have to answer yes to all of these questions, but you do need to know what the answer is.
Is there a code of conduct? Is there a license, and which licenses? Is there an automated way of checking that people who contribute to the project agree to those license restrictions? What is the quality of communication around the project? Usually projects have Slack channels, GitHub issues, PR reviews, and so on. Is the quality of communication good? Do people respect each other? Do people respond to things when they get reported? Or are people having flame wars in their PR reviews, or leaving reported issues to rot for months, and all those kinds of things? What does the core team look like? Who are the people behind this thing, and how committed are they? Is it somebody who works on the project for an hour after midnight once a month, or are these professionals who are paid, as their full-time job, to build and maintain it? There's a big difference between the two. Are there any areas lacking leadership? Maybe you have a good technical person without project management skills running the project, or a good project manager without technical skills. Maybe other areas are lacking, like skills around testing or release management, and maybe they need some help there. In some cases the CNCF can help; in other cases we can help the project identify areas where they might be weak and where they might want to recruit some people. Right, those are the project management items. Users: it's very important to understand who uses the project.
Can you speak to people who are using it and get an understanding? It's quite often the case that everybody wants to claim that all the big famous companies are using their software, so you'll see slides with lots of visible logos. I don't even want to mention any brand names, but you know the ones I'm talking about. It sounds impressive if these big names are using the software, but there's a big difference between some intern at the company having installed it once and the business actually running on it. Getting an understanding of exactly what use cases the software is really being used for at the moment, and talking to the people using it, is very important. All projects have both strengths and weaknesses, so if you get a bunch of people telling you that everything is awesome, perfect, and it changed their lives, it's probably not true. All projects have strengths and weaknesses, and you need to understand both. There's a huge amount of viral marketing out there on projects, and your job is to cut through all of that. Anybody can read the buzz and the Twitter feeds about a project; that's not the information that's useful in deciding whether a project is well used, whether it's technically sound, or whether its users like it for the right reasons. So it's very important to cut through the hype and get an understanding of what's really going on. And I think most of the remaining stuff is hopefully fairly self-explanatory: contributors to the project should feel welcome, there should be reasonable onboarding procedures for people who want to contribute, and it's sometimes useful to understand how the project came to be.
Again, one syndrome I have seen is that some projects originate inside big companies. They sounded like a good idea, or somebody got excited about something and created a project inside a company, and then they basically didn't find any users for it, didn't find any use cases or customers. The thing languished for a few years, and now they want to donate it to the CNCF to get some kind of good-karma press or something. It's important to spot those. Even if a project is only applying to the sandbox, which doesn't have any strong requirements for heavy use or stability, if it has been around for five years, and allegedly people have been working on it for five years, and it still doesn't have any users, you should ask some pretty pointed questions about why it has no users. Is there something wrong with the software? Is the use case not actually important to people? Did you think there was a need for this thing that nobody actually turns out to have? Those are questions well worth asking. You don't have to be rude or disrespectful in any way, but they are definitely questions that need answers, and don't be shy to keep asking until you get the answers you're looking for, or at least sufficiently in-depth answers. And sometimes you won't; sometimes people won't be able to give you an answer, and in that case, that is an answer in itself. If people can't explain why they have no users after five years, then this is probably not a project that's going to thrive in the CNCF. Right, on your point about users: we were just accepted into the sandbox yesterday, the Pravega project.
And one thing that became important to us, even though we started with an incubation application, whether we skipped over this definition or the definition clarified itself in the intervening months, is that at the different stages the types of users are important. For us as a vendor, our customers weren't necessarily end users according to the definitions. So anyway, I think that's an important thing for projects to understand at the various levels: vendors not necessarily being users, and their customers not necessarily being users, and what you're really looking for. Yeah, just to clarify on that point: I think what's important is that there are production users of the open source project as it's being submitted. So if, for example, the production users are only using a commercial version of the project that includes components that aren't available in the open source version, those users probably wouldn't qualify as end users of the open source project, as happened with Pravega. But I think we can probably look forward to remedying that. Yeah, I think we're happy about it. Yeah, these transitive users; it's good to understand that. And just to clarify your point, Derek: was your point that sometimes the people using the software are not what we would term end users, that they might be vendors themselves, or integrators, or whatever the case may be? Yeah, that, as well as that you can have really solid production use, but maybe that doesn't qualify as open source end users. Yeah, I think that's actually a point worth expounding upon.
You know, open source window dressing is not what we're after here, and to be clear, I'm not accusing you or your company of this, and I wasn't actually involved in that due diligence at all. But I have seen it before, where a small sliver of a big commercial project is open sourced, maybe even with a "community edition" label on it, but is not very useful in practice or in production, and in order to use the thing in any kind of reasonable production environment you have to go and buy the commercial license with a whole bunch of add-ons that are not open source. That's not a model we're promoting here. The open source components, the stuff that lives within the CNCF, should themselves form a viable production system. By all means there can be commercial add-ons and proprietary additions, but the core thing that is open sourced in the CNCF should itself be useful in production. Yeah, right, absolutely. And I think on our side we somehow overlooked these definitions. That's a good point, and we should clarify that. Alex, maybe you can make a note so we can make sure of that, because it is recurrent; we've seen that confusion more than once, let's put it that way. A lot of software companies' business models are built around this. I forget exactly what the commonly used term is, but there's a community edition or open source part, and then a bunch of commercial stuff added on top. Open core, I guess, is the term. Right. That's quite common, and it can confuse people. There's nothing wrong with open core as such, but if the part that is open source is only usable to kick the tires and play with, that's not the intention of the CNCF. The CNCF is for open source projects that can be used in production at scale.
Well, yeah, in our case we want the open core to compete with the product, you know, so yeah, we're happy to be here. Good, we can keep going. Cool, I thought that was worth bringing up. Yes, thank you, absolutely. Cool. I think we're pretty much out of time, so maybe I should stop waffling now. You can read the rest of the document, and feel free to discuss it on the Slack channel or email me. If you Google me, I'm sure you can find my contact details, or ask on the Slack channel, and I'm happy to answer any questions. Any further questions before we wrap up? Hi, yes. So I'm from the other side, one of the projects that didn't get into the sandbox, so congratulations to Pravega. I think we might need some more clarification about what exactly the bar is for entry into the sandbox in terms of adoption and project maturity. From the sandbox criteria, I see that the goal is to encourage public visibility of experiments or other early work, but, to take an example, some accepted projects don't look to me like experiments; they look like proper companies. So I think the messaging we get is quite mixed. That's my take: we might need to be clearer about what exactly the bar is for a sandbox project in terms of adoption, community, and maturity. Yeah, that's a good point you make, and I think I touched on it earlier. I don't think the CNCF documentation does anything to help clarify this confusion, but my personal opinion, and I was involved in actually concocting the sandbox when I was on the TOC, is that the sandbox is intentionally a vague space. It's a space where somebody's looking for a legal home where they can collaborate with other companies safely.
That may be an experiment; it may actually be a project that has been around forever, is now being open sourced, and wants collaboration with other companies. So it's not strictly speaking experimental; experimental is one of the use cases, but there are many others. All it really means is that the project does not yet qualify for incubation. Many projects skip sandbox and go straight to incubation, but there are a bunch of requirements for that, like multiple production use cases, and some companies and projects do not yet fulfill those. So if someone doesn't know that, and goes to see what the criteria and goals for sandbox projects are, I notice that you're focusing more on the legal aspect, but what you see on the CNCF website is that the CNCF sandbox has four goals, the first being to encourage public visibility of experiments. If that's not really the case, I mean, that's fine; it just needs to be clarified. Yeah, that's one of the goals; it doesn't mean that all projects that go into the sandbox have to be experiments. Yeah. Cool. Thanks. All right, thank you so much. I was just going to say one thing that might be worth avoiding, or at least this is a perception I have of Apache a little bit: sometimes Apache is a place for projects to go to die. I don't know that the CNCF is risking any of that, but you didn't mention what happens when an experiment in the sandbox doesn't work out. The Apache attic is almost bigger than the Apache roster, and I've been excited about a lot of projects that joined Apache and then failed, and I wonder, since you have some of that... Yeah, we do have some of that, and we should probably wrap up now. Yes, sorry. But it's a good question.
We do have processes for explicitly archiving projects. Some projects do die, and that's reasonable, as long as they're flagged accordingly and not presented in the same light as thriving projects; I think that's fine. And there are some admittedly vague guidelines around timelines: projects cannot sit in limbo forever. I think it's reasonable for a project to get stalled temporarily, maybe companies change direction and new open source contributors need to be engaged to get it going again, but it can't just sit there in limbo forever. At some point it will either get archived or removed from the CNCF, and there are processes for that. Right. Thank you very much, everyone. I have to run. It's been a great conversation; I enjoyed it, and I hope to have some more in the future. I hope it was useful. Thanks, everyone. All right, thanks.