All right, cool. Go ahead, we can move forward; I've got five people attending. Go ahead and steer, Ken. So, here's what's going on today; it's a standard agenda. We're going to talk about the Prometheus graduation, have a couple of community presentations, then talk about the backlog a little bit and have some open Q&A. So, on the Prometheus plus OpenMetrics graduation: Prometheus has graduated, and OpenMetrics is being spun out into the sandbox. Any comments or questions about Prometheus and OpenMetrics? Outside of "finally," awesome. And then we received a request from Fluentd, pull requests 69 and 145. And I think, in my opinion, we should go ahead and send this out for a vote. Yeah, I mean, it was a bit of an open discussion topic. I know Brian had some comments, but the idea is to open a discussion on whether we want to move forward on the vote, or potentially hold and review it more, and potentially also add a security audit requirement for graduation. Chris, I was just curious to add a question: it seems like that review has sort of been open, with no specific reservations expressed, since February. I was just wondering if there was any context on why it has been held up for so long; were there concrete objections raised that are not in the PR? The only concrete thing on our end was just a legal review on the CNCF side, which we've completed. So now it's just up to the TOC what you want to do with it; I'm just waiting on an answer from you. We should just put a deadline on that and then vote; I mean, we can't leave it sitting around for six months without taking a vote. Well, I think it may also have been the first project to apply for graduation. We hadn't graduated any, so we didn't really have a process, and we weren't really confident of how to apply the criteria and things like that. So we chose to focus on Kubernetes first, and then Prometheus came up after that. I did take a look in more detail at DevStats and some other information directly from GitHub and so on, and had a thread with Eduardo. Yes, thank you. And I don't really have any reservations. There's information from that thread that might be useful to post to the PR comments, explaining some things, like that a lot of the activity in Fluentd has been in the ecosystem rather than in the core, which is useful. They do have two active contributing companies as maintainers on the core project, though, and it seems to have a reasonable number of contributors for the size of the project. Fluent Bit, which is the smaller, newer project, doesn't have as much contributor diversity, but some projects in Kubernetes don't all have equal numbers of contributors either, so I wasn't that concerned about that. Also, Eduardo felt that the number of contributors is in good shape and is sustainable. So I didn't really have any red flags. Obviously it's been used in production by quite a large number of people for a long time, so I think it meets the other graduation criteria. Yeah, I did some evaluation of it here too, just to see if it was solid, and it held up on a lot of testing, so I don't have an issue with it either. Yeah, I didn't see any red flags like PRs pending forever or things like that. In that sense it seems in much better shape than Kubernetes. Are they asking for a security review, or not a review of it? Just wanting to add it as a general requirement. That's an orthogonal requirement. Okay.
Yeah, I don't know that we need to block on that for Fluentd. Yeah, I agree. Okay. Yeah, I mean, in particular, I think they were seeking to do this relatively quickly to align with an event; is that correct? Yeah, and given that they've been waiting for us for six months, it seems like we should do that. Yeah, and we have much better data now; DevStats has much more information and things like that. So from my perspective it looks good to go. Okay, no one opposes. I'm happy to put forward the vote later today. If you want, send out something on the TOC list just to make sure there's no major opposition, Chris. Okay, yeah, let's do that, since there are some TOC members not on the call. So I will do that. Good point. Thanks. And then, taking the auditing piece separately: I think I'm in favor of adding that as a requirement for graduation going forward. Chris, I don't know if we want to make it a requirement; I don't know if we want to force it on projects that have already graduated. Right, but I think it's a good capability to have available for projects to request, or maybe we frame it, like I suggested, as "we'd like to offer you this capability"; like you said, the CNCF pays for it, so it's a nice benefit for the community. I think if we are going to make it a graduation criterion, we do need to strongly encourage the existing projects to go through one; otherwise it looks like we've made some kind of excuse. Yeah, I'm happy to have Kubernetes go through it. CoreDNS went through it, which is technically, I guess, part of Kubernetes now, so it's up to the community really to make this request. We've piloted it with a few projects in CNCF and it's been actually super successful, in my opinion, so happy to do it. The only request I made in the PR was just that we be very clear about what we actually want projects to do; we shouldn't leave it so loose that anything can be done, otherwise it becomes a checkbox item. Yep. Let's have a little bit more discussion on that, and then hopefully we can close out that requirement by the next TOC meeting; it looks like there's generally broad support as long as we clarify the details a little bit more. Yeah, and on a general note, there have been some emails going back and forth the last couple of days that I've been reading, and I think, you know, Brian Grant brought up that we should review those requirements we have for graduation and make them a little bit more clear. Yeah, we don't have many opportunities to update that list, but we might as well take a second look at it after we've graduated a couple of projects. And I think the same goes for the sandbox, which I don't know if it made it onto the agenda, but all right, I'll mark that as a to-do item. I think that's good for now; since we have a few community presentations, we should probably move forward. Thanks. The next slide is just some of the upcoming project reviews. We're still seeking some sponsors from the TOC for Cortex, and we already have... yeah, TiKV is good to go, so we'll be announcing that next week. And we're going to discuss, I think offline, some of the other presentations on here as backlog items. Or online.
Yeah, I just sent an email about that. Yeah, I would like to have a discussion about what we're looking for in some of these categories, since similar projects, or projects in similar categories, have come up in the past or we expect them to come up more frequently in the future; just so that we're prepared a little bit with what to ask some of these projects, so we can be a little bit more deliberate about which kinds of projects we take in, as opposed to just being reactive to the ones that express interest. Yeah, I completely agree. And so with that we move over to the community presentations. I don't know if we have Terence or Joe or Stephen on. I'm on, this is Terence. Stephen, are you on? I should be on. Yeah, I'm here too. Cool. All right. Should I just start then? Yep, go ahead. Yeah, so, I hope you can see the screen; we have slides up. Yeah, I can see them. Awesome, thanks. Hi, so yeah, I'm Terence. I work for Heroku, a Salesforce.com company. I'm one of the co-creators of buildpacks. With me is Joe, who works at Heroku; he's an architect on my team and the Java experience owner at Heroku. We also have Stephen, who is a Pivotal product manager and the Cloud Foundry buildpacks project lead, and then we also have Ben, who works at Pivotal and leads the Cloud Foundry Java experience. So, for those of you who aren't familiar with buildpacks: buildpacks turn your source code or app artifacts into a running application on the cloud. They really aim to meet developers where they're at today, which is at the source code. And they've been around for over seven years, so it's an old project. At Heroku we used them to basically turn our Ruby-specific PaaS into a polyglot PaaS; this enabled us to support other programming languages on the platform. Since then there have been a number of other players in the space, Cloud Foundry being one of them, that have really endorsed and adopted buildpacks. And more recently, Knative added support for them as a way to handle source-to-container builds, or one of the ways that you can do that. Next slide. We have customers across both startups and enterprises, and the reason startups really love using buildpacks is that they enable this really quick time to production, these fast, rapid iterations, pushing out features and bug fixes. On the other end of the spectrum, with enterprises, buildpacks give them a security and compliance story. The way this is achieved is by separating the application concerns from the rest of the infrastructure operations concerns. So this allows application developers to focus on actually building the application, and the application operations people to focus on actually operating and managing that running application. Next slide. Another reason people choose buildpacks is that there's a community behind them. Between both Cloud Foundry and Heroku, we have 13 people who are paid full time to work on buildpacks, maintain them, and support them, to ensure that when you choose these buildpacks, they're well supported and up to date. But if you can't find something that we support, there's actually an entire community of buildpacks. Next slide. So here's a list of the buildpacks that both Cloud Foundry and Heroku officially support, and most of this is centered around languages. But since we've open sourced it and introduced it into the community, the community has really shown us the versatility of buildpacks.
So, beyond just adding new languages, which they've done, ones we don't necessarily support, they've added support for various tools as well as off-the-shelf products that you can just install without having to deal with the integration work it takes to get that stuff up and running. And these are examples of buildpacks that are being implemented; if you need to go off and build your own, it's not too complicated. But what we're proposing to contribute here isn't the actual buildpacks themselves, but the API and infrastructure that actually power the buildpacks. Next slide. And so, stepping away from the high level and digging more into the buildpacks themselves: simply put, you take your app source code, which is where developers start, and you run it through a buildpack or set of buildpacks, and at the end of it you get this tarball, which is a runnable artifact. On Cloud Foundry they call it a droplet, and at Heroku we call it a slug. And what you get with this tarball is an ABI compatibility guarantee with the underlying libraries and the operating system. So this allows application operators to provide underlying OS image updates without the application developers needing to really do anything or rebuild their application at all. And so if you need to patch a CVE, the app operators can do that in production across all the applications. This is what Heroku and Cloud Foundry have been doing over the last seven years without incident. If you compare this to Dockerfiles, the whole image has to be rebuilt for every single app across your entire fleet of applications. So again, this allows app developers to focus on building the application and the app operators to focus on actually operating their applications, and it allows people to quickly get stuff up and running and maintain these applications in the cloud. And now I'm going to hand it off to Stephen to talk about the shortcomings that we see in buildpacks today and where we want to take them. Hey, so thanks Terence. So I'm going to talk about some of the drawbacks we have with buildpacks right now. Each platform has sort of a custom buildpack interface, so Heroku buildpacks often don't work on CF, and CF buildpacks often don't work on Heroku. It's possible to get that to work, but it's a lot more effort. The contract between the platform and a buildpack is very basic, so it leads to a lot of uncertainty about how buildpacks behave. People feel like they're black boxes; advanced devs say this is maybe not enough control for me. The simple contract can also make authoring and extending buildpacks not the easiest thing, because they all work very differently; you may be working on something very complex or very simple. It's not an easy thing to make a buildpack really quickly and get it out there. Buildpacks often involve a lot of unnecessary rebuilds and data transfer. So if you have a thousand Java apps, you may end up storing a thousand JVMs on your platform just because of the way the model works, and you may end up transferring those JVMs back and forth a whole lot; there's not very much data deduplication, like I mentioned. It's also difficult to provide additional OS packages; buildpacks are sort of unprivileged, and the model works in a way that prevents that right now. Next slide, please. So in January, the buildpack leads from Heroku and Pivotal
sort of got together in New York and found that we had really similar problems and really similar future goals. So we sat in a room for two days, and out of that we got this idea of Cloud Native Buildpacks, which is sort of the idea that buildpacks should be universal, use container standards, run anywhere, and take advantage of more modern technologies that are more uniform across platforms. So the idea of Cloud Native Buildpacks is that they transform source code into images, just OCI images or Docker v2 images, that behave similarly to how they do today but work on those container standards and are a little more compatible with each other and with different platforms. We use a new technique that was sort of pioneered at Google. It's not kaniko, but it lets us manipulate images inside of a Docker v2 registry, sort of a new use of the Docker v2 image format, where we can just swap out individual layers to update them without having to re-upload, you know, layers that don't need to change. So this lets us really easily take advantage of, you know, ABI compatibility of OS libs, or other kinds of compatibility guarantees provided by different languages. It also really minimizes build time and data transfer compared to the previous model. We're targeting Kubernetes for this effort, but this should be compatible with any image-based platform. We really want this to work well with Helm and run on any OCI container runtime. The whole thing runs unprivileged, as just a series of images; it doesn't require kaniko or img or buildah or anything like that, and there are no nested-container hacks or anything like that needed. Next slide, please. So I have some questions about that. Sure. So the container image will be the generated artifact, as opposed to the slug tarball? That's right. And the build process itself runs inside of a container? Yeah, that's right. What is the interface for specifying how to build the container? Is it a script or some new declarative format? Do you mean, like, what creates the other container inside of the container? How is the build process specified? So I'll go into the details in a little bit; there's a detection process that selects a series of buildpacks, and those buildpacks determine how to build the application. Okay, I'll wait for that. Cool. Happy to talk more after if you have more questions too. So our goals for this effort: one is to alleviate enterprise app dependency management pains. You know, if you're a large enterprise, patching a critical vulnerability for hundreds and hundreds of applications can take a really long time and relies on a lot of green pipelines. With this you can update lots and lots of images simultaneously in ways that are safe. So if you have an OpenSSL CVE, a critical CVE, you can patch that really quickly, compared to, you know, hoping that your thousands of apps eventually get updated. We want to make all app developers' lives easier, not just enterprises'. A lot of app developers don't want to worry about the patch version of Node or the patch version of Ruby they're running, and we think we can make a system that manages that for you. We want to unify the buildpack ecosystems between Pivotal and Heroku and the community, so that all buildpacks run everywhere, and, you know, encourage more contribution to buildpacks and creation of buildpacks.
We want to sort of narrowly cover application build; we don't want to have a lot of opinions about how you deploy your images. There are great tools for that, like Helm. And we're not saying that we want to replace Dockerfiles; this is just an alternative to Dockerfiles to meet certain use cases. Next slide, please. So now I'm going to talk a little bit about how they work; I'll go over this quickly. There are four steps. The first thing that happens is detection, where a series of candidate groups, you know, process the app source code, and the first candidate group that says "yes, this works" gets to run. So in this example, there's an APM buildpack, a Node buildpack, and a Ruby buildpack. Say you have a Rails app that has Node.js on the front end and needs New Relic to do performance monitoring. These buildpacks would, you know, say: yes, I'm compatible with this application. And if they all agree on that, the group is selected, and they work together to come up with a build plan that has the dependencies they're going to provide; they come up with a list of dependencies and dependency versions that they'll install during the build process. The next step is analysis. During this step, the buildpacks retrieve metadata about the previous layers of the image on the registry, so that the buildpacks can decide whether they want to replace a layer because it needs an update, or leave the layer alone because it's just fine and there's no need to touch it. And layers only go in one direction, out towards the edge; they never come back. Next slide, please. So during the build process, the buildpacks individually take the build plan and metadata that you saw in the previous step. They run in order, and they decide whether or not they're going to replace layers. If they want to replace a layer, they just create a directory with the layer's contents; if they don't want to, they just don't create that directory. They can update the metadata about the layers in a series of TOML files that the metadata is stored in. They also have a local cache they can use to speed things up even further, and the cache can also be used to supply dependencies to other buildpacks; there's a particular contract for that. So in the end, you end up with a whole bunch of directories that replace the remote layers that need to be replaced, and the platform, as part of the API lifecycle, replaces those layers remotely in the remote image. This uses an image-layer rebasing strategy with the Docker v2 image format to replace only the layers from the previous build that need to be replaced, getting absolutely minimal data transfer and duplication, so you don't have a whole bunch of copies of the same Node.js. Next slide, please. So when a critical CVE hits, or any kind of CVE hits, in a layer that can safely be swapped out without changing application behavior, like OpenSSL, for instance, in the operating system layer, we can rebase lots and lots of images simultaneously against that new base layer, to update all of your apps very quickly without any rebuilding. And that's a big advantage of this.
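To make those steps a bit more concrete, here is a minimal Go sketch of the detection phase described above. Everything in it is hypothetical: the types, the file-based detect functions, and the candidate groups are invented for illustration and are not the actual buildpack lifecycle API. The only property it tries to capture is that candidate groups are ordered, and the first group whose buildpacks all pass detection wins and goes on to produce the build plan.

```go
package main

import "fmt"

// Hypothetical stand-in for a buildpack's detect phase; not the real API.
type Buildpack struct {
	Name   string
	Detect func(appFiles map[string]bool) bool
}

// selectGroup tries candidate groups in order and returns the first group
// whose buildpacks all report they are compatible with the app source.
func selectGroup(groups [][]Buildpack, appFiles map[string]bool) []Buildpack {
	for _, group := range groups {
		ok := true
		for _, bp := range group {
			if !bp.Detect(appFiles) {
				ok = false
				break
			}
		}
		if ok {
			return group
		}
	}
	return nil
}

func main() {
	// Toy app from the example above: a Rails app with a Node.js front end.
	app := map[string]bool{"package.json": true, "Gemfile": true}

	apm := Buildpack{"apm", func(map[string]bool) bool { return true }}
	node := Buildpack{"node", func(f map[string]bool) bool { return f["package.json"] }}
	ruby := Buildpack{"ruby", func(f map[string]bool) bool { return f["Gemfile"] }}

	// The first candidate group that fully detects gets to run.
	for _, bp := range selectGroup([][]Buildpack{{apm, node, ruby}, {node}}, app) {
		fmt.Println("selected:", bp.Name)
	}
}
```

And here is an equally rough sketch of the rebase operation at the end of the pipeline. It models an image as nothing more than an ordered list of layer digests; the real operation works on registry manifests, so treat the types and digests below as invented stand-ins rather than a real registry API.

```go
package main

import "fmt"

// Simplified stand-in for an image manifest: ordered layer digests,
// base layers first.
type Image struct {
	Layers []string
}

// rebase swaps the bottom baseDepth layers of app for the layers of
// newBase, keeping every layer above the base untouched. Only the small
// new manifest has to be pushed; unchanged app layers are never
// re-uploaded or rebuilt.
func rebase(app Image, baseDepth int, newBase Image) Image {
	out := Image{}
	out.Layers = append(out.Layers, newBase.Layers...)
	out.Layers = append(out.Layers, app.Layers[baseDepth:]...)
	return out
}

func main() {
	app := Image{Layers: []string{
		"os@sha256:openssl-cve", // shared base layer with the vulnerability
		"jvm@sha256:aaa",        // buildpack-provided runtime layer
		"app@sha256:bbb",        // application layer
	}}
	patched := Image{Layers: []string{"os@sha256:openssl-fixed"}}

	// Swap the OS layer under the image without rebuilding the app; in
	// practice this would be repeated across thousands of images.
	fmt.Println(rebase(app, 1, patched).Layers)
}
```

The point in both cases is the shape of the contract: detection decides who builds, and rebasing means only a small new manifest gets pushed while the untouched application layers are referenced as-is, which is what makes patching thousands of images without rebuilding feasible.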
This is similar to what Cloud Foundry and Heroku do today in their platforms, but you sort of take that model and shift it onto a Docker registry; that's kind of what it looks like. Next slide, please. So we have a bunch of planned contributions here, and a lot of these are already finished or near finished. We currently have a working document for an API v3 specification for buildpacks that's stable, but it hasn't been completely formally specified yet. We have a reference implementation of this that's finished already, called the buildpack lifecycle v3; it's just a series of images that don't require privileges. As I mentioned, you can plug them into any platform if you're willing to coordinate them yourself, but we'd like to have some tools to do that for you too. So we are working on a pack CLI that has just an early alpha out. You can barely do anything with it, but it does work with one sample v3 buildpack at this point, and it coordinates these images. The pack CLI does this locally on a Docker daemon for now. But we're also going to work on a controller; this hasn't been started yet. A controller for buildpack cloud builds on Kubernetes is the next step: we want a controller that'll automatically coordinate this for you on your Kubernetes cluster, and potentially update images too, or update deployments too, sorry. We also plan to provide cloud builder images, not just for sample v3 buildpacks but for the current buildpacks: we call the Heroku buildpacks v2a and the Cloud Foundry buildpacks v2b in their specifications. We plan to make cloud builder images for them as they are today, so that you can start using these quickly without us needing to port them to v3. That porting will happen over some amount of time, and we plan to do it soon, but we wanted to have this compatibility layer there so you can use those buildpacks right now until we finish that process. And you can use those today, right now, if you want to; the Cloud Foundry ones work in Knative and are already in that project. Heroku also has an open beta of a new curated registry for community-created buildpacks that they want to contribute to this effort, which is really exciting; we'd like one central source of buildpacks. Hi, this is Dan Kohn. Just a quick question on that that I'm unclear about: the existing Heroku buildpacks that are in production today, like the Ruby or the Node ones that are used by thousands or tens of thousands of folks, are those getting contributed into this project, or are you suggesting a model for how they can be built, while they would still live outside of this project? It's the latter; currently we're not contributing the Cloud Foundry or Heroku buildpacks. They're sort of large projects that have, you know, individual communities of their own. We may have buildpacks in this project later; it's just not an initial goal. The things I'm talking about with cloud builder images are just that we made a compatibility layer to run CF or Heroku buildpacks in images, in an unprivileged way, on these platforms. It doesn't quite use the v3 API yet, but it'll let you use those buildpacks right now outside of Heroku or outside of Cloud Foundry. So it's the compatibility layer that's the contribution. Thank you. Next slide, please. So we have two sort of key needs if we were to join the CNCF. One is that we need a neutral third party to foster collaboration.
We don't want this to be a Pivotal thing or a Heroku thing or a Cloud Foundry thing. We want this to be an open project that anybody can contribute to fairly; you know, we want buildpacks to run everywhere, not just on particular platforms. And two, we need adequate vendor-neutral infrastructure, you know, if we're going to host a registry of buildpacks, for CI/CD and all that stuff. We want to be able to ship buildpacks with new dependencies to people quickly. Next slide, please. And finally, we think there are a lot of benefits of CNCF inclusion for us and, hopefully, for the CNCF. We think a uniform buildpack interface specification as part of a CNCF project would allow the CF and Heroku buildpacks to run on any platform and would greatly improve compatibility and interoperability of buildpacks, including community buildpacks. This would also facilitate the adoption of container standards, because there are a lot of users using buildpacks right now, and buildpacks don't use all the container standards that we'd currently like to see; so, you know, moving those users onto this new buildpack specification that uses those standards will pull more people into this ecosystem. And we also think the association with Kubernetes and other CNCF projects will encourage wider contribution and hopefully dispel the myth that buildpacks are very platform-specific, that you have to make a Heroku app or a Cloud Foundry app; you just make a buildpack app, which has very opinionated configuration and doesn't need much configuration at all. And finally, we are looking for TOC sponsors for the CNCF sandbox. So, sponsor us. And that's about it. That's it. Yeah, thanks Stephen, thanks for the overview. Are there any additional questions? So when an image has some layers updated, does that affect all pullers of the image at the same tag or digest? It actually creates a new image SHA. So you can choose whether you want to upload it with a different tag, or you can replace the tag and point the tag at the new SHA. That means if you use a SHA-based deployment, though, you do have to update your deployment; or if you're using tags, you need to make sure the image gets to the edge somehow, right, so there are different strategies for that. That could be nice. Okay, thanks. Do we have any TOC members who were interested in sponsoring? I am. Anyone else? I think I might be; I'd like somebody on my team to look into this a bit more. It's interesting. I can't tell how much of this is similar to work we've had to do to get, you know, sort of rootless containers working, and how much of it is completely different. And we really rely on this kind of composability of the different components comprising an image inside of Google and inside of App Engine. So I think it's potentially really valuable. So I'll put down Brian and Camille, and you can definitely have people look at it. I'll kick off a thread on the mailing list for more community input. So thanks. Awesome. Right, we'll move on to the next presentation then, which is Jared on Rook and storage. Awesome. Thanks. And moving to incubation; it looks like that's still rendering correctly. Yes. And can you hear me well right now? Yeah, loud and clear. Thank you. Oh yeah. So my name is Jared Watts and I'm a maintainer for the Rook project. So let's go ahead and start with a quick introduction on Rook. It is already a CNCF hosted project.
So there may be, you know, a bit of knowledge in the community about it already, but we'll do a quick introduction. Rook can be thought of as a cloud native storage orchestrator. What I mean by that is that Rook provides a platform and support for a broad set of storage solutions to be integrated into cloud native environments. The way it accomplishes that is with a lot of automation, basically: a lot of the administrative operator tasks of, you know, deployment and configuration and management of a storage solution over time are handled by a set of operators in the Rook project. When we were accepted into the sandbox stage, Rook was exclusively focused on providing this orchestration capability for Ceph. Since then, Rook can now be thought of as a framework for providing those services and support for many different storage providers and solutions. Rook was accepted back in January as the first storage project to be hosted by the CNCF, and our TOC sponsor has been H. So next slide, please. So we're going to be talking mostly today about the growth and the progress of Rook over the last seven months since we were initially accepted into the sandbox stage. We're looking at, you know, a lot of the numerical metrics and data growth over the last seven months. What we see here is that a lot of these metrics have doubled or tripled in the last seven months, like the GitHub stars, the number of contributors to the project, and, you know, Twitter and Slack members and followers. There's also been some 10x growth, which I find very interesting, in the number of container downloads in the last seven months, which speaks to, you know, the community growing and the popularity of the project growing. Another thing to note in the growth of the project over the last seven months is that we have added another maintainer to the project, to bring us up to four maintainers from three organizations now. Next slide, please. So we can talk about some of the specific accomplishments since Rook was accepted. Since then we have done two releases: the 0.7 release in February, and the 0.8 release in July, about a month ago, which represent a total of 545 commits between the two of them. Another thing that we have done in the last seven months is that we have implemented a formalized project governance policy. You know, that covers things like adding and removing maintainers from the project, the specific responsibilities of the maintainers, conflict resolution, and the voting process; all the typical things you would find in a governance policy. And after that policy was implemented, we followed it to add our fourth maintainer to the project. As we talked about earlier, we had originally focused exclusively on Ceph, and now we have a framework for other storage providers as well, so you can start thinking about Rook now as a general-purpose cloud native storage orchestrator. Some of the aspects of that framework are normalizing the way that storage resources for a distributed storage system are declared; some of these patterns around operators, you know, software automation to deploy and manage distributed storage systems; and the plumbing for those operators to talk to the Kubernetes API.
There's also a bunch of common specs, policies, and logic that can be shared amongst the various storage providers, and then an integration testing framework and environment that these storage providers can all reuse. So with that framework, in the 0.8 release we have added support for both CockroachDB and Minio, and NFS, Cassandra, Nexenta, and Alluxio are all coming along; NFS is just about to be merged into master. That one was taken on by a student this summer as their Google Summer of Code project, so we're happy to have a contributor from that program as well. Yes, that's great, because that was definitely a subject of discussion earlier in the year, and early last year when we talked about Rook for the first time. Are people actually deploying on other storage backends? I assume that most of your production deployments are still on Ceph, but are you seeing interest in production deployments on some of these other backends? Yes, we have seen interest in CockroachDB and Minio, and then some of the other platforms as well. They're both, you know, very early alpha-stage support, so they don't have nearly the maturity that Ceph does, that the community built around Ceph. So the majority of our downloads are definitely Ceph right now, but we are seeing some traction growing on some of the other platforms. And something I find interesting as well is that a lot of the support for more providers that's coming along has all been community-driven. You know, folks from the community are bringing that demand to the project, we're discussing it as a group in the community, and new contributors are coming to the project to add support for other providers. So is there a consensus that that's the right call for the project, in terms of getting beyond just Ceph? Sometimes it can feel like you're generating kind of false genericism, and that you're creating a bunch of pain when actually people are only using one backend. Or is the consensus more that this is a direction people want to go and it's worth the genericism? I mean, as a question along those lines, how generic is it? How much of the API and implementation are shared between the different backends? I'm browsing through the code, and it looks like there are at least some specific APIs for each different backend. Yeah, that's a great question. So I would say that the, you know, kind of general abstractions across all these providers are still early in their implementation; we're iterating on that and it is growing. There is absolutely platform-specific logic that goes into the deployment and management of the various storage providers; you know, there's plenty of Ceph-specific logic that you would use to bootstrap and manage a Ceph cluster over time that does not necessarily apply to other storage providers. But I believe that there is a lot of value in this generic approach that we're taking here, to kind of abstract the commonalities out, and it is growing and I hope to see more of that over time.
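As a hypothetical sketch of what abstracting those commonalities out might look like, here is a small Go example. The type names and fields are invented for illustration and are not Rook's actual CRD types; the idea is simply that a shared deployment-time spec (node selection, placement, resource consumption, networking) gets embedded by thin provider-specific specs that keep their own knobs.

```go
package main

import "fmt"

// Invented types for illustration only; not Rook's real API structs.

// StorageScopeSpec holds concerns common to any provider the framework deploys.
type StorageScopeSpec struct {
	NodeCount   int               // how many nodes should run storage pods
	Placement   map[string]string // node-selector labels for scheduling
	CPUMillis   int               // shared resource-consumption limits
	MemoryBytes int64
	HostNetwork bool // networking setup shared across providers
}

// CephClusterSpec layers Ceph-specific knobs on top of the shared spec.
type CephClusterSpec struct {
	Storage  StorageScopeSpec
	MonCount int // Ceph-specific: number of monitor daemons
}

// CockroachClusterSpec reuses the same shared spec with its own extras.
type CockroachClusterSpec struct {
	Storage      StorageScopeSpec
	CachePercent int // CockroachDB-specific tuning
}

func main() {
	shared := StorageScopeSpec{
		NodeCount:   3,
		Placement:   map[string]string{"role": "storage-node"},
		CPUMillis:   2000,
		MemoryBytes: 4 << 30,
	}
	ceph := CephClusterSpec{Storage: shared, MonCount: 3}
	crdb := CockroachClusterSpec{Storage: shared, CachePercent: 25}
	fmt.Println(ceph.MonCount, crdb.CachePercent)
}
```

This mirrors the split described in the discussion that follows: the deployment-time concerns are shared across providers, while the ongoing, provider-specific management logic stays in each provider's own code.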
Well, what is shared amongst the different backends right now? So amongst the different backends, there are a lot of ways to specify how a deployment should be run and managed: you know, how to select and provision the raw storage resources, how to specify placement or resource consumption, or how to set up networking. So a lot of it is more deployment-time focused, I would say, and less of it is ongoing management. But are there clear layers in Rook separating the different concerns, like the infrastructure concerns from other management-oriented concerns? I would say that there is a separation there: the, you know, infrastructure sort of bootstrapping, or initial provisioning of resources, is definitely a different layer than the later runtime operations. And that kind of leads into the other, higher layers of provider-specific implementations as well, which are more of the ongoing management tasks. So there is definitely some clear separation of layers. Is it stacked that way? Okay, thank you. Thank you. So then, back to the progress and accomplishments since sandbox entry. The Ceph support that we have has been graduated from alpha to beta. What that means for us is that the community has put a fair amount of miles on the Ceph implementation, and its reliability has increased a lot. So going forward, the API for, you know, specifying and managing a Ceph cluster will remain stable, and any updates to it will be done in a way that honors backwards compatibility. I guess we can speed up a little bit. Another important feature for Ceph clusters specifically is that the Ceph operator has the capability of doing horizontal scaling, so it can grow and shrink a Ceph cluster with, you know, obviously no data loss, but also minimal impact to the cluster's performance. And then we revamped some of the security model in Rook, so that the pods that are deployed and set up by the operators can run unprivileged; the separation of permissions allows administrators that are running Rook and some of these storage providers to have more control over the specific permissions and operations that the operators will be performing; and we're also able to incorporate, you know, pod security policies and other environments, like support for OpenShift. Next slide. So let's go ahead and start talking about the adoption that we're seeing for Rook. This first slide here is about some of the bigger names that are in the process of evaluating Rook; they have deployments in their environments right now, and they're running internal workloads. You know, they're relying on them internally for some of their business needs, but they are not yet running customer-facing services on their deployments. One thing we have seen is that, you know, storage systems in general can tend to have longer evaluation periods, and they're usually put through a bit more rigorous validation when they're being evaluated to take them to production. So we're seeing folks get some good mileage on them and, you know, really vetting them for a good long period to get their confidence up. Something that I'd like to share too is that there is an upcoming CNCF survey that was taken at KubeCon, which is going to be released, I think, next week. And amongst the questions there were some about adoption of various cloud native storage solutions.
Rook had the highest rates amongst all the projects that were in the selections there, and it also independently confirmed for us that there is production usage out there; I think it was between 10 and 15% of deployments that are now seeing production usage. So let's go ahead into the next slide. These are some of the specific use cases from people in our community that we've reached out to, or who have reached out to us. We'll go into some of the details of these specific companies' deployments and their use cases; these are some of the adopters of Rook that we've had a lot of experience with, and we're interested in what their deployments look like. There's, you know, a note at the bottom there that there are also additional adopters of Rook, especially those with on-premise deployments running the solution on site, that are not ready or willing to share some of those details publicly right now. But there are more adopters out there. So let's go ahead to the next slide and start getting into a couple of the details of some of these specific adopters. So SAP Concur is, I believe, the biggest deployment that we're aware of for Rook right now, both in terms of the node count and the end users being serviced by it. They're evaluating Rook right now across 300 nodes in their environments, and about 10,000 or so users are being serviced, with Rook providing the underlying storage for about 400 apps in their environment. A couple of their experiences here speak to the ease of use and the reliability that people are seeing with Rook, where a lot of these administrative or operational tasks for storage systems are automated and, you know, they're able to take a more hands-off approach to running a fair amount of scaled-out storage. I also like the quote from Tiz, one of the senior systems engineers over there at SAP Concur, that speaks to the healthy community that we've built in Rook: you know, we have a lot of folks who are starting to help each other in the community. It's the healthy growth of a community where people are solving each other's problems and really helping each other out. So let's move ahead a little more quickly here, because I think I'm running out of time. Well, is Concur the only user you're aware of running on the order of hundreds of nodes? In the hundreds, yes, Brian, I believe that is the only user I am specifically aware of. But there are none in the thousands? No, I do not believe there are any in the thousands. Okay, thank you. Yes, let's get through this quickly; so, some of the other users here. The Pacific Research Platform is funded by the National Science Foundation, and it is a large platform being built for researchers amongst a lot of the University of California schools and other universities around the country to collaborate on large projects in machine learning, simulations, you know, image processing, all sorts of stuff. Let's go ahead to the next slide. Another one of our adopters is the Centre of Excellence in Next Generation Networks. It's in Canada; it's a consortium of member organizations, big telco and device companies like Bell Canada and Rogers, Cisco, Nokia, that are working to create an ecosystem to grow the Canadian IT sector.
They, you know, have a mix of both storage-focused nodes and compute-focused nodes, and the ability to select and isolate or use the storage resources, while also being able to run workloads on the compute-focused nodes that can access the storage from the storage-heavy nodes; kind of a hybrid, hyperconverged environment that takes advantage of both storage and compute. Let's go to the next slide. The folks at the Hamburg University of Applied Sciences in Germany also have one of our larger deployments, which services thousands of users at the university to run really large-scale simulations that produce lots and lots of data that needs to be persistently stored, managed, and accessed over time. What I like about their experience here is that they've taken that idea of storage resources being treated more as cattle instead of pets, where the administrators focus less on the node level and more on the cluster as a whole, because a lot of that automation is being taken care of by Rook at the node level. And the last slide: Gini is another German company, using object storage from Rook, with Ceph as their backing storage, for their end-user services. They have the world's most advanced digital everyday assistant, where, you know, there are documents, invoices, forms, all sorts of things; more than 10 million user uploads of those types of files are getting uploaded to their system and stored on this persistent storage. You know, it's the smarts, or intelligence, that the operators provide that give them the ability to manage their data, with all that automation, in a very easy manner; but they also have the ability to dig into some of the finer configuration options in more advanced scenarios as needed. So that flexibility of what the operators can provide is helping the Gini folks a lot. I think that was the last slide, so we have time for any questions, or we can move along to the next agenda item if we need to, since we're running out of time. But thank you for letting me talk about Rook today; I appreciate it. Awesome. Thanks, Jared. So I don't know, we asked most of our questions as we went; any other last-minute questions you want to bring up before we move on to the last presentation? I'll send a note out so we can continue the discussion on the mailing list regarding whether we want to vote or not. Okay, just in the interest of time. Okay, that's great. All right, do we have JJ or Dan on for SAFE? Yeah, I'm here, JJ. Thanks for the time. So, security has been a cross-cutting concern across multiple infrastructures; I think we brought this up before. We started pulling together a bunch of people who are aware of security, and then we started this working group about a year back, I think, or maybe less. The people that came together understood that this is a cross-cutting concern, and we have various folks from different organizations, big and small, from Google to, like, startups. And in addition to all these organizations, I think we've also tried to bring in an external perspective from NIST, and NIST has been grappling with this problem for quite a while now, whether it's their big data team or their security team.
So all of this has led us to understand that there is a pressing need here in security to understand and disseminate knowledge, and, I mean, obviously we see this across the many infrastructure projects that are part of CNCF. So one of the goals here is to enable cross-pollination, taking what's learned from one infrastructure to the other infrastructures that are going through the same problems. We have been running this for a while, so there's a lot of information on GitHub; the link is dropped there. And thanks again for signing up for the sponsorship of this. I'd be happy to answer any questions. And just before we move on to the questions, just so everyone knows: in the reference architecture discussions we're having, we have broken out security as its own category, which I know means we probably need to review the categories with the TOC at some point soon, just to get some feedback on them. But security is sort of its own area, I think its own area because of all the obvious, you know, implications of security in the industry. So that just kind of lets you guys know what our thought process is on why this is an interesting working group to create. But with that, I'll open it up to any questions. Brian Grant, I'm assuming you're familiar with some of the work that the Kubernetes working groups are doing; is SAFE coordinating with them? Well, I'm actually not super familiar with SAFE; I haven't had time to keep track of it. Within Kubernetes there are a number of efforts, and there is some overlap of people, so I assume people are talking, but I haven't been working together closely with them. Yeah, we actually just had a great conversation with Tim from SIG Auth about the opportunity to have a subgroup, combining members from each group, to sort of brainstorm some of the open questions. There are some identity concerns, you know, like how do we articulate how identity and authentication work, because that ideally has to work across something broader than the Kubernetes ecosystem, and if we can get this stuff to be interoperable, that would be ideal. So that's really just the beginnings of a conversation; so far we've just had people attend the different meetings to kind of informally cross-pollinate, and we want to formalize this a little bit so that we can have projects where we work on them together. Other than identity and authentication and various flavors of policies, what else is within scope for SAFE? So, at least those to start. Yeah, one thing to also reinforce is that the Kubernetes policy working group had a proposal to create a working group here at the CNCF level as well, and we've joined forces with that group to form one. So, you know, policy controls are in scope through that merger. To address Brian Grant's question a little further: it's not complete, but I think in terms of the layering of the security problem itself, there is identity, there's authentication, there's authorization, there's policy, which cuts across, like, authorization to some extent; then access control as a separate part; and auditing is another piece. Compliance is a big beast by itself.
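One recurring pair from those layers, RBAC versus ABAC, comes up again just below; as a toy Go sketch, with the caveat that nothing here reflects any real system's API, the two models can be contrasted by what the decision keys on: roles bound to a subject versus arbitrary attributes of the request.

```go
package main

import "fmt"

// Invented illustration of two authorization models; not any real API.

// RBAC: the decision keys only on the roles bound to the subject.
var roleBindings = map[string][]string{
	"alice": {"admin"},
	"bob":   {"viewer"},
}

var rolePermissions = map[string]map[string]bool{
	"admin":  {"delete-pod": true, "read-pod": true},
	"viewer": {"read-pod": true},
}

func rbacAllows(user, action string) bool {
	for _, role := range roleBindings[user] {
		if rolePermissions[role][action] {
			return true
		}
	}
	return false
}

// ABAC: the decision keys on attributes of the subject, the resource,
// and the environment, evaluated by policy rules.
type Request struct {
	User      string
	Action    string
	Namespace string
	OffHours  bool
}

func abacAllows(r Request) bool {
	// Example rule: deletes are only allowed in the "dev" namespace and
	// never off-hours, regardless of who asks.
	if r.Action == "delete-pod" {
		return r.Namespace == "dev" && !r.OffHours
	}
	return true
}

func main() {
	fmt.Println(rbacAllows("alice", "delete-pod")) // true: role grants it
	fmt.Println(rbacAllows("bob", "delete-pod"))   // false: no such role
	fmt.Println(abacAllows(Request{"alice", "delete-pod", "prod", false})) // false: rule blocks it
}
```

A request can pass one check and fail the other, which is one concrete way the two models end up complementary rather than competing.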
So there are various aspects of security, and some of them are touched on in some parts of the infrastructure bits, but even the understanding of which of these are layered problems versus overlapping problems is less clear to many people, and then there are endless discussions around the various models and the various overlaps that exist in this, right? So the objective is to at least have a common understanding, and to not talk about the same thing in a number of different contexts across a number of different infrastructures. Yeah, the main thing is that there isn't clarity around what it takes to have a secure architecture and what that even means. And we're seeking clarity around that, because for newcomers to cloud native who aren't deep security experts, it's not clear how they're supposed to secure their system. And, you know, we've had a lot of conversations about RBAC versus ABAC; they're complementary concepts, but it's really about clarifying when and how and why one would secure a system in different ways. And I think we're focusing on the big problems first, but there are these fuzzy edges. You know, we've had conversations about whether this includes physical security; probably not, but if, in order to move to cloud native, part of your infrastructure has to connect with systems that ensure physical security on-prem, well, maybe we have to touch those systems in some ways. So it's really about: how do you reason about whether your cloud native deployment is secure, and how do you keep your end users safe in that, right? We are explicitly touching on how applications become secure, how they ensure that their end users have the right security controls, and that developers have the tools they need. Does that make sense in terms of scope, Brian? Yes, thank you. I was reading through the use-case-oriented description as you were talking, too. Yeah. I just want to be sensitive of everyone's time. Since this has a TOC sponsor already, I'm going to just move the discussion to the mailing list and get a little bit more feedback before seeing if we call for a formal vote to accept this, okay? Other than that, we need to wrap things up, since we're a couple of minutes over, so I appreciate everyone taking the time to present today, and check the mailing list for follow-up. So thanks again, Ken, for taking the place of Alexis today. Thanks. Thanks, guys. Take care. Thanks, everybody. Thanks, all.