Hi, and welcome to the Prometheus Conformance Program talk. You might never see this talk. In fact, I hope you don't, because if you don't see it, that means we figured out how to splice in a newer talk with pretty fresh results, results we don't have yet, because this is being recorded a month before KubeCon. So I'm optimizing towards a really short recorded talk, which you may or may not see, plus an extended Q&A where I can share the actual results.

The 101 of what we are doing here: as of today, October 14th, we will start publishing an overview of software (projects, products, as-a-service offerings) that is compatible with Prometheus. If something is not compatible with Prometheus but has claimed compatibility in the past, we will most likely publish those results as well, just to have a fair, level playing field between all the different offerings and vendors. As of this recording, I don't know who is making, or was making, the cut. It's simply too early.

The intention, obviously, is that end users, most likely you, can choose between different implementations and offerings with confidence, without any confusion about what is actually compatible with Prometheus and what is not.

We are, in a way, victims of our own success. Prometheus is the standard in cloud-native metric monitoring and beyond. That's great; I'm biased, but I think it's correct. We have hundreds of thousands of installations and millions of users. The market segment we are leading is worth billions, most likely even ten times that. It depends on who you ask and which analysts you read, but it's substantial.
As is commonly true, it's a lot easier to claim that you're compatible with something than to actually invest the work of being compatible. Some people honestly don't even realize they're incompatible; others may be more aware of it. Either way, the point is that there is confusion in the user base about who is actually doing things the Prometheus way and who is just claiming to.

How does this work? We created a suite of tests. You can find them in the Prometheus GitHub org, in the prometheus/compliance repository, and anyone can run them; that is part of the design. The CNCF, or more to the point the Linux Foundation, uses trademark law, which I found interesting when I first realized how this is structured, to grant the usage of a mark: in this case the mark "Prometheus compatible", with a licensed little logo and so on. With it, vendors and projects can showcase that they are actually compatible with Prometheus. To get the mark, they need to sign paperwork with the CNCF or the Linux Foundation, which includes clauses like good-faith testing, no cheating on the tests, and lots of other dry stuff. You need to sign it to get the mark, and by signing you agree to the terms of the mark's usage. That's a prerequisite: if you later tried to dispute those terms, you would have already signed a contract agreeing to them. This mechanism, this automation baked into the contracts, is quite nice; I liked it when I realized how it works.

You can either submit your test results yourself, or the Prometheus team runs them for you. With things like remote write, it's easy to write the tests once and run them fully automated yourself. With PromQL, for example, you need quite a bit of interpretation of the result sets. We'll get there, and hopefully there will be more automation over time.
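To give a flavor of the "interpretation of result sets" problem mentioned above, here is a minimal sketch, not the actual test suite, of comparing an implementation's PromQL instant-vector result against a reference Prometheus result. The function name, data layout, and tolerance are all illustrative assumptions; the real tooling is far more involved:

```python
import math

def results_match(reference, candidate, rel_tol=1e-9):
    """Compare two PromQL instant-vector results (illustrative sketch).

    Each result is modeled as a dict mapping a frozenset of (label, value)
    pairs to a sample value. Label sets must match exactly, but sample
    values are compared with a floating-point tolerance, since different
    implementations can legitimately produce tiny rounding differences.
    """
    if reference.keys() != candidate.keys():
        return False  # a missing or extra series is always a failure
    return all(
        math.isclose(reference[series], candidate[series], rel_tol=rel_tol)
        for series in reference
    )

ref = {frozenset({("job", "node"), ("instance", "a:9100")}): 0.25}
got = {frozenset({("job", "node"), ("instance", "a:9100")}): 0.25000000000000006}
print(results_match(ref, got))  # True: float noise alone should not fail a test
```

The design point this illustrates: a byte-for-byte diff of query output would flag spurious failures, so a comparison has to decide which deviations are acceptable, which is exactly why PromQL testing needs human interpretation today.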
Then you receive a time-limited grant of the mark to show that you are Prometheus compatible. If we on the Prometheus side update our test suite and you run out of the grace period without passing again, you lose the mark under the trademark contract, which again is a nice design.

You might stumble across three different words that look almost the same: conformance, compliance, compatibility. Conformance is the name of the actual program. Compliance means that a thing complies with one interface, for example Prometheus remote write, or alert generation, or what have you; just that one interface is being tested. Obviously there is overlap between them, but you get a result set for one set of tests, and that's your compliance score for that interface. The compatibility score is then computed from those: we simply multiply all the compliance scores that are relevant to you, and that is your overall percentage. If you reach 100% in all the relevant compliances, you have 100% compatibility, and then you get the mark.

Why do I say "relevant"? If you don't offer storage, for example in an instrumentation library, it would be unfair to have alert generation or something like that count against you. That's where relevance comes in: you might not implement all the interfaces of Prometheus and still have your thing considered compatible with Prometheus. If you're an end user, it's quite easy: look for "Prometheus compatible" and ignore everything else.

There is a little trick I pulled here in the structure of this naming. We're using three super common terms: conformance, compliance, compatibility. The nice thing is they all have defined meanings in this contract, under this program, which means that if you signed the thing, you implicitly acknowledged those specific meanings, and you cannot in some potential future get clever with how you use those words.
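The scoring scheme described above, per-interface compliance scores multiplied into one compatibility score while skipping interfaces a project doesn't offer, could be sketched like this (function and interface names are illustrative, not the program's actual tooling):

```python
def compatibility(compliance_scores):
    """Multiply the compliance scores (0.0 to 1.0) of all *relevant*
    interfaces into a single compatibility score.

    Interfaces a project does not implement are simply absent from the
    dict, so e.g. an instrumentation library is not penalized for
    lacking alert generation.
    """
    score = 1.0
    for interface, value in compliance_scores.items():
        score *= value
    return score

# A project implementing two interfaces, both fully compliant:
print(compatibility({"remote_write": 1.0, "promql": 1.0}))  # 1.0 -> earns the mark
# One interface at 90% drags the overall score below 100%:
print(compatibility({"remote_write": 1.0, "promql": 0.9}))  # 0.9 -> no mark
```

Multiplication (rather than averaging) matches the talk's rule that the mark requires 100% in every relevant compliance: any single interface below 1.0 pulls the product below 1.0.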
That is a bit of a trick, but it has a nice forcing function: it keeps everyone honest. There are also incentives here, a lot of them. As we in the Prometheus team have found over the years, again and again, there is a ton of informal knowledge among vendors about who is actually compatible and who might not be, but nobody really wants to raise a fuss or have a public fight, even if some people are aware that some things are not as they appear. But remember the huge amounts of money this market segment is worth. There is a direct monetary interest for people, for enterprises, for managers to improve the tests and keep everyone else honest. Their incentive is obviously mainly money, but even if they go into this for purely monetary reasons, the result is a more comprehensive test suite for everyone, covering more potential failures, more incompatibilities, what have you. I like this design.

The initial cadence of this (I mentioned we have time-limited grants of the marks) is super aggressive: 12 weeks or two minor releases, whichever is longer. That is a really short cadence, on purpose, because this thing will still evolve. It may well happen that as we introduce new tests or new aspects of testing, some products or even projects lose their mark of compatibility, and then they have to implement or fix something to retain or regain it. That's not nice for them, but again, this is part of the incentive to keep each other honest and to keep the whole ecosystem moving at a nice velocity.

If you need help convincing your manager to let you personally work on this, talk to us.
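The "12 weeks or two minor releases, whichever is longer" cadence boils down to taking the later of two dates. A minimal sketch, assuming we know when the second minor release after the grant shipped (function name and dates are hypothetical, not from the program's paperwork):

```python
from datetime import date, timedelta

def mark_expiry(grant_date, second_minor_release_date):
    """A grant lasts 12 weeks or until two minor releases after the
    grant, whichever is *longer*, i.e. the later of the two dates."""
    twelve_weeks_out = grant_date + timedelta(weeks=12)
    return max(twelve_weeks_out, second_minor_release_date)

# Releases ship quickly: the 12-week floor applies.
print(mark_expiry(date(2021, 10, 14), date(2021, 12, 1)))  # 2022-01-06
# Releases are slow: the mark stays valid until that second minor release.
print(mark_expiry(date(2021, 10, 14), date(2022, 3, 1)))   # 2022-03-01
```

The "whichever is longer" wording means a vendor is never punished by an unusually fast release train, while a slow train doesn't strand them without a valid mark between releases.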
We can most certainly make a good case why you, as that person, should be doing this or that work on this or that test suite. We can help you with design documents; we can help you with everything. On purpose, we want to structure this as openly as humanly possible. As a matter of fact, we already have two quite large cloud providers who committed to working on a Prometheus remote write receiver test suite. Maybe by the time you watch this, it's already done, I don't know. We just opened the design document so they have a place to write everything down. Everyone can give feedback, no matter whether they're from those vendors, from the Prometheus team, or from someone else entirely. Once there is consensus on how the tests should be structured and what needs to be covered, people can just get going. Everything is developed in the open, even designed and discussed in the open, to make this as honestly fair as possible.

Let's move to the results: there aren't any yet. It's kind of self-evident that Prometheus, Cortex, and Thanos will be compatible, and anyone running them in a proper way will be compatible as well, for equally self-evident reasons. Beyond that, it's just too early to tell. We will see, hopefully right now, in the Q&A. Or, if we're lucky, you're never even going to see this talk, and instead we have a full presentation with nice tables where I can show you the mark and everything; maybe we are lucky that way. If not, see you in the Q&A in a few. Thank you.