Uh, I mean here, let me post the deck first in the chat. It's kind of self-serve, but it looks like Taylor will run the deck. Are you on, Alexis? Hey, Chris. Hey. I hear you now. We've got Brian Grant on. Quinton. Jon Boulle. Sam said he can't make it. I'm sure he's busy over there at GitHub. Chris, was the agenda deck sent out? I didn't see it. Should be attached to the meeting invite. Maybe I missed it on the mailing list. It's in Slack, attached to the meeting invite, and in the chat. Thanks. No problem. Let's start in a couple of minutes. Hey, Chris. Hey. Hey, Ben. Is Brian or Camille on? All right. I'm just going to go straight into the agenda, which is on slide five. We've got a couple of presentations today and a few topics to discuss. First, there was a lot of activity last year in the serverless working group, showing that a working group can play an important transitional role in helping a community form around a topic, in this case the events passed around as inputs and outputs of functions, and then from that an actual open source project can be initiated. And I think the lifeblood of CNCF is open source projects rather than the sort of committees and standards that we see elsewhere. It's really important to see living code, something that can be supported by cloud providers as well. So well done to everybody who did that. And thank you to everyone who participated in the Helm debate and due diligence. That was important and threw up some issues that we want to talk about today. So here on slide seven, you can see the issue that I wanted to highlight, which is around the notion of a sub-project. Now Kubernetes has an existing notion of a sub-project, which I haven't linked to here, but we could. There is a proposed set of rules or principles for how to deal with sub-projects, which is linked to as a draft text. You'll see it's a post by me as a comment on the Helm TOC due diligence issue. And below and above that comment you'll see contributions from Brian Grant, Quinton, Bryan Cantrill, and also Matt Farina and a few others going back and forth on the issue that was thrown up, which is this. Kubernetes has grown to be a humongous beast in terms of its projects and sub-projects and SIGs and committees and working groups and what have you. And one of the things we'd like to do at CNCF is, how can I put this, avoid some of the mistakes that have been made by large projects in the past. There is concern that Kubernetes will get unmanageably big. I mean, it's already running into very significant issues of scale just in terms of its use of GitHub. I hope Microsoft can resolve those. But now we've got other issues like the number of sub-projects, and for Kubernetes to continue to maintain progress and velocity it probably needs to be very lean and well-organized, or as lean as it can be. And so it's great to see a project like Helm coming out from under the umbrella, so to speak, of Kubernetes and being its own thing. And I'm fully, totally supportive of that. But that raised the question, which is: if you're a standalone CNCF project, can you still be a project that only works with Kubernetes? And this has created a number of discussion topics, including, oh, does this turn CNCF into the Kubernetes organization? And things like that.
And if you've got a totally independent project, like an Envoy or a Prometheus, would CNCF be as welcoming to you as it would be to projects that work with Kubernetes, and all kinds of things like that? And I think we need to be very clear with users in the community and everyone else about our feelings in the TOC here. So I think if you read the debate, you'll see that at least the participants in the debate concluded that, yes, CNCF should accept that it will have standalone projects that, at least initially, only work with Kubernetes, and that's just how things are gonna be. Now, I would like to throw this open to the TOC people who've dialed in and ask for any strong opinions in either direction on this matter. Well, I would like to just provide a little bit more context that's also on the GitHub issue. If you look at what happened in Node.js, just as an example, the Node.js Foundation decided not to accept user-space JavaScript libraries into that foundation, and then those projects were sent to the JS Foundation instead. And I don't think that's the situation we want for the CNCF, that another foundation, for example either a new foundation or, like, the Apache Foundation or something, would take Kubernetes-specific projects because the CNCF didn't want to. The Kubernetes project is very large. It has pretty ubiquitous industry support, and it is inevitable that there will be projects that are perfectly happy supporting Kubernetes only, at least for quite some time. So, yeah, I don't think we want to create a situation where Kubernetes becomes a kind of sub-foundation where it feels it is compelled to take on these other sub-projects because the foundation itself doesn't want to. Thanks, Brian. I agree with those comments. Would anyone else like to reply to this? Yeah, I had a comment here. I think there's another issue here, which is just how many projects do we want to have in the top level of the CNCF? A very large number of small projects, for example projects which help to install and operate other pieces of software, typically on Kubernetes, which we may not... Well, it sounds like Kubernetes may not want them as sub-projects, in which case, do we want them as CNCF top-level projects or do we want them somewhere else? And it sort of overlaps with the question of what happens if projects are forced to go somewhere else because the CNCF doesn't have a home for them. I'm not sure how we deal with that. And would we be comfortable with hundreds of top-level CNCF projects, for example? So I didn't do a comprehensive count, but I estimated that the Apache Foundation has about 200 projects right now. That foundation is something like 20 years old, and that doesn't include their incubator or their labs or whatever other categories they have. That's quite a lot. I think this foundation is a little bit too young to say whether we'll ever have that many. I think one thing that came up in the discussion is we may need more flavors of categorization than the current tiers we have, graduated, incubation and sandbox, which are more oriented towards project maturity. And we may want to distinguish platforms, strategic technologies and other things from projects that are simply useful tools in the toolbox. I think the trail map is doing a better job of this than the landscape, but we may need some categorizations of the projects themselves.
Looking at Apache, they have categories of projects in terms of the domain, like whether it does builds or it does data processing and things like that, which is definitely useful. That's more of a landscape-style approach. They have more categories than we have projects, just of that flavor. So I feel like we may need some more coarse-grained buckets in addition to the landscape domain kind of categorizations or attributes. Yeah, I think if we have anything from user space, we will be compelled to categorize those separately. For example... okay, go on. How many projects are we actually talking here, realistically? Realistically, Camille, I think that if we start having user-space projects, and I'll give you one example that was doing the rounds at KubeCon, but it's just a representative example, it's Kubeflow, then we could have thousands of those things. By thousands, I mean literally on the order of magnitude of high hundreds, maybe over 1,000. Well, we probably can't even review that many projects, realistically speaking. But if we look at how, over the past couple of years, we took on one project a month, we can probably extrapolate from that over the next 20 years. Yeah, I've got a feeling it might accelerate. I mean, there is a perception out there that having a top-level CNCF project somehow has a higher profile than having some sub-project in one of the high-level projects. And so if we create a precedent where relatively smaller, and in some cases Kubernetes-specific, projects move from Kubernetes sub-projects to the CNCF, I think the rate of onboarding requests could accelerate tremendously. And if we don't have a mechanism for accommodating them, i.e. review bandwidth and categorizations and just ways of managing this at the top level of the CNCF, they're gonna end up homeless. And I don't think that's a desirable outcome. I mean, it may be in some cases, but I think in many cases it will not be desirable. Yeah, I think the main source of new projects will be from outside our current projects. There are a few examples where projects have been kind of spawned off. Like OpenMetrics is sort of an offshoot of Prometheus. Maybe Conduit is an offshoot of Linkerd. Fluent Bit, I don't know if that would ever make sense as a standalone project, but it could be an offshoot of Fluentd. So those are pretty low numbers. I think the vast majority will just be net new projects. Yeah, I think, I mean, for every application or infrastructural component that wants to run on Kubernetes, there is likely to be one or many projects which get spawned to make it easy to run those things on top of Kubernetes in particular, and in future probably other container orchestration systems. I had Strimzi approach me the other day. That is something to make Kafka easier to install, run and operate on Kubernetes. And for every project that wants to run on Kubernetes, there is another project that makes it easy to run that thing on Kubernetes. And there are hundreds of those at the very least. Is there anyone here from the Apache Foundation, or familiar with how they manage the scale of hundreds of projects? It is hard to kind of separate the signal from the noise there, and they do it with minimal resources. Yeah, not well is what I would say is the synopsis of Apache. I mean, they've got way too many projects. They've got essentially no budget. There's nothing coherent about them. There is no TOC.
I mean, I think when we initially established ourselves, it was with the explicit goal of not becoming the Apache Foundation. But does that mean we don't want to have... I, you know, my impression was that we didn't want to become the Apache Foundation and we felt that they had overdone some of their bureaucracy and maybe too heavily standardized parts of their process. Do we not want to grow to have a lot of projects, too, just to avoid becoming like the Apache Foundation? Because, I mean, cloud native, I think we all believe, is the future of computing in a big way. And that means that there's going to be a ton, and frankly, everyone builds open source software now, even more than when the Apache Foundation started up. So there's going to be a ton of potentially applicable projects. And I mean, I don't think we will scale to having the TOC, you know, hand-evaluate every single thing that wants to join if we really want to be the place for cloud native projects. And I guess we should, you know, contend with that at some point. Yeah, and I think what's behind that intention is that, in addition to not wanting to emulate some of the bureaucracy of the Apache Foundation, the Apache Foundation does have many projects that have very little activity and don't necessarily have a theme, don't necessarily have anything to try to unify them. And I think that we do want to avoid becoming a dustbin for projects, not that the Apache Foundation is a dustbin necessarily, and there are obviously very active Apache projects, but it also does have a reputation as a place where you kind of have an orphanage for projects that don't have another home. And I think we do want to avoid that; I don't think that's a desirable outcome. But I think you've got a fair point in terms of, like, there are a lot of projects out there and we don't want to give up the mantle of cloud native computing because we refuse to accept projects. I think that comes to figuring out a scalable strategy for managing these projects and understanding their health, whether they have users, whether they have contributors and so on. And maybe we do need something, maybe not like the sandbox exactly, because that's a maturity-oriented thing, but maybe like a toolbox for projects that are kind of smaller and more bounded in scope and have kind of a complementary role. It's also the case that, for some of these things, they may not necessarily have to live in the CNCF if they are related to another existing open source project but add support for Envoy or Prometheus or Kubernetes or whatever. They can live wherever the original project is. And that may be fine as CNCF projects become more widespread and more things integrate with them. And to your point, one of the questions that I have is what problem are we seeking to solve by making them CNCF projects as opposed to just open source projects? And having kind of a clear bar for that would be really helpful. I mean, it will never be totally clear, but in terms of what problems we're trying to solve for the projects. Yeah, sorry, carry on, Brian. Oh, you can go ahead, Quinton. No, I was just going to say I had a similar kind of question and proposed answer, which is I think we need to be clear who the CNCF is trying to serve. I think there are two kinds of groups. The one is consumers of cloud native technology.
And I think in my mind, it's desirable to have a kind of a place where they can go and shop for things and know that there is a reasonable way of figuring out which ones are high quality, which ones are active, maybe which ones are incubating, et cetera, and be able to have some level of confidence that what they're getting is consistent, interoperable, of a certain quality bar, et cetera. And then the other group that we're serving is the actual projects themselves, who want a home and ownership and legal structure and support for project management, et cetera. And as long as we keep that clear in our heads, then we can shape the rest of the stuff around that. If we believe that those consumers need access to hundreds of projects, then we need to kind of tool ourselves up to be able to support them. If we don't think that that's a requirement, then we shouldn't tool ourselves up to handle hundreds of projects. Yeah, there are specific services that Quinton mentioned that projects are looking for, but the most frequent issue that I hear about is companies not wanting to contribute to projects owned by their competitors, or owned by companies that are startups that might disappear or get acquired by a competitor, things like that. They're really looking for that neutral ownership of a foundation, and otherwise they don't feel comfortable contributing. So in order for the projects to really succeed and have a broad set of contributors, they're looking for a foundation. Not every project is looking for that, certainly, and there are trade-offs, but there are a number of projects in the Kubernetes ecosystem where that issue has come up. Well, I think Quinton has hit on the kind of fundamental tension about who our primary constituency is. Because my view is that our primary constituency is ultimately the user trying to navigate this, and I feel we do a disservice to them when we simply have every conceivable project as a CNCF project, and some fraction of them are related to Kubernetes and some fraction of them aren't. I think that we're offering them no clarity, but at the same time, we want to give choice and we don't want to be kingmakers, so I think it is a kind of fundamental tension that it helps to be explicit about. I think that's a good way of phrasing it, Quinton. Okay, so I agree. Thank you, everybody. Just in the interest of time, I'm gonna declare a halt to this discussion now. I would ask everybody to go and look at the language that I've linked to as a proposed set of principles on this. I don't think it covers all of the things that were said today by Bryan Cantrill and Quinton and Brian Grant and Camille and anyone else who spoke. There's also some good chat in the IM window here. So anyone who's participated in this, please go and take a look and see if we can improve that language. Let's come up with a statement of our opinion here and add that to the operating principles next time we update them. I think this is gonna be more important in the future. I don't think it's urgent, because I think that Helm is an early indicator of this rather than the beginning of a slew of projects. I think it is correct, as Quinton said, that for every X, there will be an X-plus-Kubernetes project in the course of time, and that will create a slew of things. Okay, so with that, I'm gonna move on to the next slide. We have CNCF project proposals, the regular call for TOC contributors to help review and volunteer. There are the links. You know what to do.
If you wanna contribute and you're not sure what to do, just contact Chris, okay? Thank you. Now we have a project presentation. I need to declare a conflict of interest here. So this is a project called Weave Cortex, or Cortex for short, which originated at Weaveworks. Weaveworks is transitioning it, or has indeed transitioned it, to a community-based project. So for the duration of this presentation, I will drop out completely. I'm actually gonna pass my computer to Bryan Boreham, who's gonna speak for Weaveworks as a project maintainer. I believe we have some people from other projects on as well to talk about it. Bob, are you there? Yes, I'm here. Good morning, afternoon, good evening, guys. I'll just shut up now for a while. Chris, you're in charge of time. Cool, go for it. So Bob, we thought you should give the introduction. Yes, good morning, afternoon, good evening. So today we're here to talk about Cortex. We're looking for two sponsors to bring Cortex into the CNCF sandbox. So what is Cortex? Cortex is a horizontally scalable, multi-tenant Prometheus. So when you get this thing up and running as a SaaS monitoring system, you effectively have the ability to provide Prometheus instances on demand to your users, whoever those are. From a single running instance of Cortex, you can provision these without having to provision Kubernetes clusters or storage or any of the infrastructure that you need underneath that. Cortex is a complete Prometheus monitoring system that is API and PromQL compatible with Prometheus. In fact, it vendors in Prometheus under the covers. It is highly available, in an architecturally fundamentally different way than Prometheus. It is a microservices-based architecture that allows you to fail components individually inside of the architecture. It is horizontally scalable: you scale this out as opposed to scaling this up. And it provides a value that a lot of people in the Prometheus community are looking for: long-term storage. Fundamentally, it is multi-tenant to the core. It is a single cohesive system; this is not a pod-per-client sort of architecture, and that tenancy is encoded throughout the architecture all the way into the data storage layer. And we're also cloud native. As I said, microservices-based, a distributed hash table for the write path, completely stateless on the read path, deployed and managed with Kubernetes, and it works with multiple cloud and on-prem NoSQL storage engines. Next slide, please. So there are two fundamental types of users who run Cortex. We have commercial service providers and large enterprises. Commercial service providers, like myself, like Weaveworks, Grafana Labs, OpenEBS: these folks are providing enhanced metrics and monitoring to their end customers, and they need a multi-tenant solution in order to do that. Large enterprises: on the call today I saw Ken, so Electronic Arts, and StorageOS, are using Cortex today in order to provide on-demand Prometheus instances for large Kubernetes installations or multiple Kubernetes installations. These are the two types of users that we've identified who are currently running Cortex today. Next slide. So the problems that Cortex solves are fundamentally the same for these two types of users. If you wanna run large installations of Prometheus, your choice is to manage thousands of Prometheus instances, and operationally that's hard from a storage and infrastructure standpoint.
Long-term storage is a fundamental source of value that users need out of their Prometheus instances, and Prometheus itself does not provide that gracefully. For the enterprise, having a global query view as an alternative to the Prometheus federation story is super compelling, and having an alternative to the Prometheus "run two of everything everywhere" approach to high availability is also very nice. So the architectural decisions that came out of Cortex were based in these motivations. Bryan, on to you, next slide please. Hi. Yeah, so we put up some logos here of the people on the left who have gotten involved in the code and in running Cortex. So Bob is from FreshTracks, which is a company doing machine learning with metrics. I'm from Weaveworks, using it as a part of our monitoring and metrics solution. We have Grafana Labs using it in quite a similar way. Electronic Arts is an interesting one: they are not intending to sell the thing externally like the rest of the people I just mentioned, but use it internally. Anyway, there's just a flavor on the screen of a bunch of people, either as end users or as people adopting the project from a development point of view. And we think in aggregate there's about 60 million time series being gathered by all these different people in real time. So that's that slide, next slide please. We wanted to spend a minute on the alternatives. You know, where does Cortex kind of sit against, if you like, the competition? So one that you might consider in the space is InfluxDB, which is a very big and powerful database for multiple kinds of data. Cortex is more focused on the Prometheus problem of time series, and with a much more powerful query language for doing that. Thanos is a very recent project that's come on the horizon, in the same problem space as Cortex but solving it in a different way, in much more, how would I say... it's about running... well, let's not get into the details. Anyway, it takes a different set of decisions about how to solve the problem. We think Cortex is much more manageable. There are a couple more on there; I guess we might leave them to the questions phase rather than getting into all the details right now. Next slide please. Tom, do you wanna just review the history of the project? Yes, yes, thank you, Bryan. Hello everyone, I'm Tom, one of the original authors of Cortex over two years ago and now at Grafana Labs running it there. So the project started two years ago at Weaveworks. We did a very rushed sprint to get it ready for PromCon 2016, launched it and gave a demo. We then spent the next few months making it more production-ready and launched it into production with a few customers. We added support for recording rules and alerts, and really kind of fleshed out the rest of the features to make it fully feature complete. And since then I've focused on broadening its applicability. So we've added support for Google Bigtable. We've added support for Apache Cassandra so you can run it on premise. And we've been focusing on building out the community, working with Bob, working with Ken over at EA, and generally trying to build a bit more of a community around this and improve the software. Next slide please. On the community, we are actually on 31 contributors now. So it's still relatively small, spanning around six companies. The code is licensed Apache 2, and there's a good cadence of PRs and a nice Slack channel that we all kind of chat on pretty regularly.
Reviews are happening, bugs are getting fixed; generally pretty healthy, I'd say. Of the people you saw on the adopters slide, we know four are definitely in production and three of them are kind of in the early stages of going into production. In February the community effort sort of kicked off in earnest. We started a community mailing list, and it's led to this outcome here. Bryan and the chaps at Weaveworks have drafted a governance process based on CNI's, which I believe is very close to being merged as a PR. Next slide please. So one of the questions we foresaw was: what's the relationship between Cortex and Prometheus? Well, as Bob covered, Cortex is API compatible. The two original authors, myself and Julius, are also Prometheus developers, and a lot of the code in Cortex is just Prometheus code; we vendor it as a library. We have upstreamed various fixes, and the whole of the remote write API, and the remote read API as well, in Prometheus was more or less motivated by Cortex. We did discuss upstreaming Cortex into the Prometheus org about a year ago, but the Prometheus team decided against it. I was part of that decision. Long-term storage is explicitly a non-goal of Prometheus. The Prometheus team has limited bandwidth to deal with newer projects, and they really don't want to be kingmakers; they thought the implicit blessing of adding Cortex to the organization might put off other projects. That's why we're here. That's why we think the CNCF sandbox might be the right balance. Next slide please. Back to Bryan as well. Yeah, so why are we asking to enter the CNCF sandbox? Fundamentally, to put Cortex on neutral ground between contributor companies, some of whom are natural competitors in the business market; we all get on very well, but it would be good to have that basis. Growing the Prometheus ecosystem, and the affinity with CNCF technologies: it is built to run well with Kubernetes, and we instrument it with Jaeger. There's a bunch of synergies there; things all fit together. So that's our presentation. Questions please. Do you have a more detailed comparison with the other systems? Thanos is brand new, so maybe not that one, but do you have more detail on your site or somewhere else? No, I think that... so the InfluxDB comparison you could certainly read on the Prometheus site, with the possible wrinkle that InfluxDB's high availability and scaling features are really only available in the commercial version. Cortex is fully open source. We can certainly take that away as an ask to put up more detail. Yeah, Timbala is a pretty new, small-scale project really. M3DB, probably very interesting, again quite newly announced. It's using a lot of the same ideas, like the so-called Gorilla compression, as Prometheus and Cortex, and it's explicitly aiming to be a highly scalable database. Yeah, I think one of the fundamental differences between Cortex and the rest of the players in the space is the multi-tenancy, the ability to host multiple segregated users in the same instance of Cortex. So does it have its own authentication and authorization mechanism to support that? A lot of that is bring-your-own, depending on the application of the end user, as they need it and how they apply it. Fundamentally, within the architecture, you provide your tenant ID on the request as an HTTP header and that is propagated through the architecture and into the storage layer.
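A minimal sketch of that header-based tenancy, just to make it concrete. The X-Scope-OrgID header name, the push path and port, and the helper names below are illustrative assumptions rather than the exact Cortex code:

```go
// Minimal sketch of header-based multi-tenancy: the tenant ID arrives as an
// HTTP header and is propagated via the request context, so every layer,
// down to storage, can key its data by tenant.
package main

import (
	"context"
	"fmt"
	"net/http"
)

// Header name assumed here for illustration.
const tenantHeader = "X-Scope-OrgID"

type tenantKey struct{}

// withTenant rejects requests without a tenant ID and stores it in the context.
func withTenant(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tenant := r.Header.Get(tenantHeader)
		if tenant == "" {
			http.Error(w, "missing tenant ID", http.StatusUnauthorized)
			return
		}
		ctx := context.WithValue(r.Context(), tenantKey{}, tenant)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// tenantFromContext is what the ingestion and query paths would call before
// touching storage, so all chunks and index entries are scoped per tenant.
func tenantFromContext(ctx context.Context) string {
	tenant, _ := ctx.Value(tenantKey{}).(string)
	return tenant
}

func main() {
	push := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "storing samples for tenant %q\n", tenantFromContext(r.Context()))
	})
	http.Handle("/api/prom/push", withTenant(push)) // illustrative push endpoint
	http.ListenAndServe(":9009", nil)
}
```

The point of the sketch is that deciding who may claim which tenant ID sits in front of this; the system itself just trusts and propagates the header, which is what makes the "bring your own authentication" answer above possible.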
So the multi-tenancy is cooked into the architecture, but authentication is up to the end user. So between horizontal scalability, long-term storage and multi-tenancy, would you say that there's a primary focus or objective, or are they all equal? Yeah, they're all equal, I would say. We focused explicitly in the original design doc on those three. I mean, the motivation for the high availability and scalability is because when you start running a multi-tenant system, these things become very important. And the multi-tenant focus is more from a business case: we wanted to offer this as a service. Okay, and for the long-term storage, like some of the other long-term storage solutions, do you offer summarization and reduced resolution for past queries, so you can do efficient queries over longer time periods? We don't, no. The approach there has very much been, sorry, go on. Well, I would answer that as not yet. I mean, that's something that's entirely possible within the architecture, but it's not been coded. Yeah. Okay, and how separate is the storage layer from the monitoring layer? Very separate. So the way it works is there's an ingestion engine which does the compression of the time series into what we call chunks, which are a small number of kilobytes, and then those are shipped off to the store, which is one of DynamoDB or Cassandra or Google Bigtable. So the split is basically that the metrics are turned into chunks, the chunks are sent to a store, which is itself scalable, and there's an index as part of that store. And then data collection is actually handled by a vanilla Prometheus instance that is configured to send the data to Cortex. So Prometheus does service discovery, Prometheus does the scrape, and Prometheus sends the data to Cortex, in effect turning Prometheus into the agent. And how does it look in production? I mean, you obviously have people that are using it, but who's actually using it in anger in production? Everybody who's offering a commercial version of this is also using it in anger for themselves, and then we are offering this to our customers to allow them to monitor their Kubernetes customers, their Kubernetes systems. Okay, so I mean, how is it actually being used? Because, honestly, it seems a little vendor heavy, and I'd be curious to know how it's actually being deployed and used and how big those users are. Can we get Ken from EA to say a few words? Yeah, I just unmuted myself. So we're probably a pretty large installation of Cortex. We use it exclusively internally to monitor all of our game servers; a good chunk of our infrastructure was using Prometheus previously, and now we're using Cortex to basically pull all that data up and monitor our game servers across the globe. So we're pulling in right now 25, 30 million time series, and we're not finished rolling out our infrastructure for the year. So I'd like to think it's a big installation that we're running. Yeah, you bet, that's great. That's exactly what I'm looking for. And so, from your perspective as a user of this, what do you see the value as, in terms of it being a CNCF project versus just being a project? That's kind of an interesting question. I hadn't thought about that one. Personally, and professionally in our team, we use all the tools from the CNCF already, so it'd be great to include Cortex in that sort of toolbox, if you will, so we can participate with this project along with the other ones.
Everything we deploy is on Kubernetes, we're using Jaeger tracing. There's a few other ones I'm just blanking on right now, but Helm is another one that we're using heavily. But you're already using Cortex though, but... Yes. Are you more likely to contribute to it if it's a CNCF project, or... I mean, I was starting to contribute before it was a CNCF project, but it sort of adds some extra weight to that, for sure. Okay, and from the project's perspective, from the project team's perspective, what's the value of being a... why does it feel it needs to be a CNCF project? I think it's more around standardizing the governance model. So as we have on the slide that's being shown here, clearly there's some marketing that comes along with that, and vendor neutrality. So Weaveworks wants to help expand the scope of the community around this, and bringing it into the CNCF will definitely help that, and because Prometheus does not necessarily want to touch it yet, we figured that the CNCF would be a great place in order to achieve all of those goals together. Hey, sorry, this is Dave from Spotify. Just to go back to the value to end users of it being a CNCF project: I've personally been trying to talk to a lot of people at Spotify to start looking at Prometheus and looking at Cortex. And presenting Cortex, to a group of people that aren't very familiar with this community, as a project owned by Weaveworks, which is a company they may or may not have ever heard of, gets me kind of ignored, whereas presenting something as a CNCF project gives me a much stronger voice at the table. So I think, as far as the ability to get Cortex used at a company that's kind of slow moving into the space but already ingrained in it, there's a lot of value in it being a CNCF project as opposed to a project owned by a company like Weaveworks. Okay, that's helpful. Thank you. And then, I guess another question for both you and for EA, from kind of a user perspective: do you view Cortex as kind of part of the way you think about Prometheus? In other words, does the project distinction here, does that help you or hinder you? I mean, I understand that it's the Prometheus team's decision to not include this functionality, but from your perspective, should you think of these as separate projects? From EA's perspective, I've looked at them as separate projects because they solve two different things for me. Prometheus is the piece of software or component that goes and grabs the samples, while Cortex is the aggregated storage and query layer. Yeah, I think for us, for me personally, I see them as separate projects, but I think for the purpose of convincing Spotify to switch from our own homegrown monitoring stack to something like Prometheus, they in fact become the same thing, because if Prometheus alone has some set of flaws, like, say, scalability, and Cortex solves those, then to most of Spotify it really just means, okay, I don't care whether you call it Cortex or something else; you gave me Prometheus and it fixes those flaws. But to me personally, I can see that they're separate. Yeah, and that's a good point. A lot of our users internally, in our various companies, probably wouldn't know the difference because they're just looking at a Grafana dashboard; where the data comes from isn't really visible to them. But to us who know, I see it as two separate projects.
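To put the split Ken just described into rough structure (Prometheus scrapes and forwards samples; Cortex compresses them into chunks and indexes them in a pluggable store), here is an illustrative Go sketch. The type and method names are assumptions for explanation, not the actual Cortex interfaces:

```go
// Illustrative sketch of the write/read split: a vanilla Prometheus scrapes
// targets and remote-writes samples; ingesters compress each series into
// small chunks which, together with an index, land in a pluggable NoSQL
// store (DynamoDB, Bigtable, Cassandra, ...).
package main

import (
	"fmt"
	"time"
)

// Chunk is a few kilobytes of compressed samples for one series of one tenant.
type Chunk struct {
	Tenant  string            // carried all the way from the tenant ID header
	Labels  map[string]string // series identity, e.g. {job: "game-servers"}
	From    time.Time
	Through time.Time
	Data    []byte // compressed samples (Gorilla-style compression)
}

// ChunkStore is the pluggable storage layer; DynamoDB, Bigtable and Cassandra
// backends would all satisfy the same kind of interface.
type ChunkStore interface {
	Put(chunks []Chunk) error
	Get(tenant string, from, through time.Time) ([]Chunk, error)
}

// memStore is a toy in-memory backend, keyed by tenant, standing in for a
// real NoSQL store purely for illustration.
type memStore struct{ byTenant map[string][]Chunk }

func (s *memStore) Put(chunks []Chunk) error {
	for _, c := range chunks {
		s.byTenant[c.Tenant] = append(s.byTenant[c.Tenant], c)
	}
	return nil
}

func (s *memStore) Get(tenant string, from, through time.Time) ([]Chunk, error) {
	var out []Chunk
	for _, c := range s.byTenant[tenant] {
		if c.Through.After(from) && c.From.Before(through) {
			out = append(out, c)
		}
	}
	return out, nil
}

func main() {
	store := &memStore{byTenant: map[string][]Chunk{}}
	now := time.Now()
	store.Put([]Chunk{{Tenant: "ea", Labels: map[string]string{"job": "game-servers"},
		From: now.Add(-time.Hour), Through: now, Data: []byte("...")}})
	chunks, _ := store.Get("ea", now.Add(-2*time.Hour), now)
	fmt.Printf("found %d chunk(s) for tenant %q\n", len(chunks), "ea")
}
```

Because the queriers only ever read from a store like this, the read path can stay completely stateless and scale horizontally, which is the property Bob called out earlier.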
Right, okay, but to be clear, there is no intent to make Cortex work beyond Prometheus. It is going to be Prometheus-specific. I don't think we're even... I should rephrase that as a question. Is there an intent to make this actually... I don't necessarily know that there's something else that you'd immediately consider, but to make this neutral with respect to the underlying instrumentation of the system, the underlying metrics gathering? We've not explicitly ruled it out, and it's something that we've discussed, or at least I've discussed with people, making it support Graphite as well. Because, honestly, we have kind of a similar conundrum to what we have with Helm, where you've got something that is important to the project and specific to it but feels it has a separate identity. And I've got kind of similar concerns in terms of adding complexity. I mean, on the one hand, I don't want to force a union where one does not exist. On the other hand, I'm loath to add more complexity to the CNCF landscape. So that's part of the reason... thank you for all the answers to these questions, but that's kind of the thrust of where all these are coming from. Cool. I just want to be sensitive to time since we have one more quick presenter after this. Finally, if you're interested in sponsoring the project, please let them know, but for now, I think that's it. Thanks for your time. Thank you, everyone. Cool. I'll kick off a thread on the mailing list too so people can ask questions, because there's a lot of questions in chat. So for right now, thanks, Bob, Tom and folks. Now let's have William talk about the Linkerd 2.0 plus Conduit plans. Hey, just before we do that, this is Alex. I'm going to actually drop out of the call because I've got to go run to a meeting. So thanks, everybody. Bye for now. No worries. All right, you there, William? Yeah, can you hear me okay? Yeah, yeah, go ahead. Yeah, okay. So I wanted to bring a proposal in front of the TOC, and it would involve a fairly large change to the Linkerd project. The proposal, as you can see, is basically to take the code behind Conduit, merge that into Linkerd, then do a little bit more engineering work, and release that as Linkerd 2.0. And at the end of that process, the Conduit project goes away and the Conduit brand goes away and everything is now back to Linkerd. And the reason I wanted to talk about this in front of the TOC is because I know when we started Conduit, we had some discussions with you folks about Conduit itself being kind of a top-level, being a CNCF project. At the time, we decided not to do that, but I'm sensitive to the kind of possible interpretation that we've managed to, like, sneak Conduit into the CNCF. So we can go to the next slide. So I just have a couple of kind of very quick FAQs and then I figured I would leave the rest for Q&A. But the goal, basically, for us is: when we started Conduit, the reason why we decided not to do it as a Linkerd sub-project, or to do it as part of Linkerd at all, was effectively that we weren't sure it was really gonna work. We were making a bunch of fairly risky decisions, and I think there was a non-zero probability that it would just fail. Now that we're eight or nine months into it, happily, none of that has happened; Conduit is actually in really great shape. And the goal of merging it into Linkerd is that basically Linkerd needs a future with a dramatically reduced resource footprint. Linkerd right now is on the JVM.
There's lots of cool stuff we can do, especially with things like GraalVM coming out. But ultimately we need to be able to operate Linkerd in a way that doesn't require a 200 meg proxy or a 150 meg proxy or even a 30 meg proxy. And like I said, the reason to do this now is that the Conduit code base has reached kind of viability. It's no longer a totally alpha, totally experimental thing. And then Conduit goes away at the end of this. And the existing Linkerd code base continues. This is the thing that's actually in production at all sorts of places around the world. And my suspicion is that the Linkerd 1.x code base will remain in production for many years, given how widely deployed it is. It's kind of totally different from Linkerd 1.x: it's a new code base and a fairly different UX. But a lot of the core team is the same. The goals of the project are the same. They're both service meshes. The value props around reliability and security and visibility are all entirely the same. The general architecture of control plane and data plane is all the same. And you'll be able to migrate; I think the devil's in the details there. The initial version, for example, the initial version of what would be Linkerd 2.0, will be Kubernetes only. And so the many Linkerd users, Linkerd 1.x users, who are not on Kubernetes will have a hard time upgrading or migrating. But over time, we'll start adding more and more to the project and capture all of these cases as well. And then finally, yeah, the idea is that the code would live in the Linkerd GitHub org in a separate repo. And yeah, we're gonna do a formal maintainer vote on the Linkerd maintainer mailing list, but there's a discussion that's been kicked off already, which you can go visit. And then, yeah, if this whole Conduit thing is totally new to you, here's the repo. At this point, I would like to open it up for questions. Yeah, I guess, William, I appreciate that Conduit's maintainers originally asked about the prospect of coming into the CNCF, and so in many respects this is just sort of a natural motion to follow up on that inquiry. But I appreciate the team bringing this in front of the TOC so that folks don't perceive it as being slipped in. In my mind, one of the more prominent concerns here is that, to the extent that migrations are treacherous, it sounds like you guys are working toward ensuring that they aren't, particularly for Kubernetes-based workloads. It sounds like, depending upon what workloads people are using Linkerd for, some of those might be more difficult to move over than others, depending upon whether they've got maybe proxy-per-node deployments versus kind of sidecar proxy deployments. Yeah, I think it'll be less around kind of DaemonSet versus sidecar and more around how complex your Linkerd 1.x routing configuration is and what environments you're running it on. Because you can get very, very complex with Linkerd 1.x, right? One of the strengths is you can have it kind of join a Mesos cluster and a Kubernetes cluster and a Nomad cluster all together into kind of this unified service discovery framework. And I think those super complex situations will be the hardest to migrate over, because we wanna be able to support all those environments. The easiest stuff to migrate over will be like, oh, we're using Linkerd on Kubernetes and we're not doing anything that's particularly crazy.
And we have kind of an engineering goal there. We have this thing, if you search for the Linkerd consolidated Kubernetes configs, you'll see this giant YAML file for configuring Linkerd in kind of the preferred way for Kubernetes, and the folks who are on that configuration will be the easiest to migrate over. So we have some engineering kind of milestones. Well, on the other side, Kubernetes users, or Linkerd users, stand to significantly benefit from all the learnings that you're building into Conduit, and really the UX around its simplicity and how you get up and going so quickly. I think if we reflect for a moment, there are other projects that have undergone similar, fairly massive re-architectures; if we think about Prometheus and the TSDB in 2.0, granted it wasn't necessarily marketed or branded as something else, but I view that as potentially almost as disruptive as the Linkerd 2.0 transition. So there's some prior art here. Awesome, any other questions? We have a few minutes. Any other comments from the community or TOC members? So what is the process for this, Chris? Are we going to have a proposal? Are we gonna have just a vote on the mailing list? So my feeling is, we always kind of defer to our projects in terms of how they structure sub-projects and so on. So if there was no strong opposition from the TOC today on the call, I was just gonna let this proceed and go on doing what it's outlined today. I don't think we need to do a full official vote to bless Conduit in this. Yeah, I mean, I feel that this is really the Linkerd project's call, effectively. So I've got no problem. Yeah, I also have no problem. I actually view this as a really good thing, responding to the needs of users by becoming in some sense more cloud native. As was pointed out, it's kind of similar to transitions other projects have made, whether it's TSDB 2.0 or Fluent Bit as a follow-on. I think it's a good thing. Cool. So we'll take that as a blessing. I mean, just to reiterate real quickly, one of the immediate concerns that was voiced in the community when Conduit was initially announced, not this last announcement but the one before, was that some people interpreted it as, well, there goes Linkerd, there goes the maintenance and support, and just sort of an implicit acknowledgement in that announcement of the deprecation of Linkerd. And there's any number of large users of Linkerd who now, with this path forward, will come away with a much improved feeling. Yeah, that fact is definitely not lost on us, honestly. Cool. Awesome, that about wraps it up for time. So thanks again, William, for presenting, and thank you everyone for your time. Take care all. Yeah, thank you, Chris. Yeah, thanks, bye.