Hi, and welcome to making Prometheus even more open. That is quite the pitch, and there's a variety of different meanings of what open could actually mean. There are lots of definitions, and we'll look at all of them. Open source seems kind of obvious, and honestly it kind of is, in particular when it comes to Prometheus. Yet, just as a reminder: there are more and more players entering this field, and keep in mind that part of what makes Prometheus open is that it is open source. If you go with other vendors who might not be open source, at least consider what you're doing. So, open features; that will be the longest part of this talk. We had a lot of new stuff recently. We are deliberately opening up in a lot of ways, in a lot of feature ways. We lifted our service discovery moratorium, and a total of six new service discovery integrations were added over the last half year or so. We added TLS and basic auth to all the things. And we also have a new exporter toolkit, which makes it a lot easier to create exporters and keep them in sync. What do I mean by "in sync"? Well, if some new functionality comes along, it's a lot easier if all of this lives in the toolkit and you just vendor it in, as opposed to having to implement it yourself, or someone having to come to your repo and file an issue or a PR saying "hey, please add this and that, because it's a new thing." If you base your exporter on the exporter toolkit, or migrate it over, you just get all of this for free. We also had a few changes in PromQL. We have things like last_over_time, and topk over time. You know that thing where you say topk(3, ...) of whatever over a week, and you might get 10, 20, 100 different series which were at some point in time part of the top three? Ranking over the whole time range does what it says on the tin: you just get the actual top three over that range.
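As a sketch of what "TLS and basic auth to all the things" looks like in practice: exporters built on the exporter toolkit read a web configuration file. The file below is a minimal example; the certificate paths and user name are made up, and the bcrypt hash is a placeholder.

```yaml
# web-config.yml, passed to a toolkit-based exporter (or Prometheus itself)
# via --web.config.file
tls_server_config:
  cert_file: server.crt
  key_file: server.key
basic_auth_users:
  # passwords are stored as bcrypt hashes, not plaintext
  alice: "$2y$10$...bcrypt-hash-goes-here..."
```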
We have the @ modifier, where you can determine at what specific time, and with what alignment, you would like your queries to be evaluated. We have negative offsets, which are hidden behind a feature flag for the simple reason that they might break a few assumptions of caching frontends, and we obviously don't want to break those. So, at least for this major version, it's behind a feature flag. Yet it's super nice, super useful; try it out. We also have human-readable durations, which might not seem like a huge thing, but it's still super nice: you don't have to write 90m, or write 90 times 60, or do the math in your head. You can just write the duration naturally and it just works. We have a remote write receiver in Prometheus, which is brand new and something which historically we absolutely didn't do: you can actually send data into a Prometheus. This is not meant for super-high-volume production or anything, but it still enables new use cases, like Prometheus at the edge. You can just send data from one Prometheus to another without having to go through any long-term storage or such. One of the highlight features is exemplars. It might sound small, but it is actually quite large. At the end of a metric line, you see a trace ID with a certain value. What that means is you can emit information about a trace which falls into that bucket. So if you want to look at a trace with high latency or whatever, you can jump directly to a trace where you know the context of why it is bad, why it has high latency, or why it has that amount of HTTP requests, or what have you. You can jump directly to this trace and you know precisely that it is interesting for a reason. You have the complete context of all your labels and everything, and you can move seamlessly between metrics and traces.
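The query features above can be sketched in a few lines of PromQL. The metric names and timestamp are made up; the feature flag names in the comments are the ones used by the Prometheus releases of that era.

```promql
# last_over_time: the most recent sample of each series within the window
last_over_time(node_boot_time_seconds[1h])

# @ modifier: pin evaluation to a fixed Unix timestamp
# (behind --enable-feature=promql-at-modifier at the time)
topk(3, http_requests_total @ 1617889200)

# negative offset: look forward instead of backward
# (behind --enable-feature=promql-negative-offset at the time)
http_requests_total offset -1d

# human-readable durations: 1h30m instead of 90m
rate(http_requests_total[1h30m])
```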
Support for this already exists in Prometheus, Cortex, Thanos, and Grafana, so you can just use it; it's just there. If the trace ID format looks somewhat familiar, that's no mistake: in the OpenMetrics format, we used the W3C Trace Context spec as an example. We didn't make it mandatory, because we didn't want to tie the standard down. That space is relatively new, so maybe things will change, maybe there will be other best practices. That being said, what the W3C has is a good standard, it's useful, and it's precisely what we need, so obviously we use it as the example in the spec. There's also support for spans, so you can have both spans and traces, which is kind of obvious. We have a new UI, now with modern React if you want that. There is a super nice PromQL editor which has autocompletion, snippets, and all the bells and whistles; it's really, really nice and you should check it out. And, for vanity reasons, it has a dark theme, so try it out. All of this maybe tells you that, while historically we were quite conservative, we are deliberately trying to change this, and that's what we mean by this aspect of open. Historically, even features which were marked as experimental we basically treated as stable, which is on the one hand quite nice for someone relying on that behavior; on the other hand, it ties us and the community down, and that's not good long-term, so we are actively breaking this up. A lot of our old assumptions have been revisited and we are enabling more use cases. An old assumption would be that moratorium on service discovery: we had certain thoughts around what we would need or want to see before we take more service discovery integrations in, but we just dropped them and started taking those in. More use cases.
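The trace ID the talk describes sits after a `#` on a metric line in the OpenMetrics text format. A minimal sketch, with a made-up histogram and a trace ID in the W3C Trace Context style:

```
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.5"} 1442 # {trace_id="4bf92f3577b34da6a3ce929d0e0e4736"} 0.43
```

The exemplar says: one observation of 0.43 seconds fell into this bucket, and here is the trace that produced it, so a UI can jump straight from the bucket to the trace.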
With the agent, for example, there are use cases which are simply not recommended by Prometheus. Prometheus best practice says you should have your metrics endpoint on a distinct port per service. That doesn't really match how a lot of enterprises work, because their security teams tend not to like a team coming to them with "yeah, okay, we might have 30 different ports, maybe a hundred different ports, they're not even truly contiguous, and they might be open or they might not, and there's no way for you to tell." They tend not to like this. Having everything behind one single port makes things a lot easier in enterprise scenarios and such, but this is an anti-pattern when it comes to pure Prometheus. Yet it is a valid use case under certain assumptions, certain design trade-offs. So we are trying to enable those use cases in a very deliberate and careful manner. I would much rather talk about how you can structure this, how you can put everything behind one port and then use a path, say /snmp_exporter/metrics or what have you, than have everyone baking their own and people basically not knowing what they should be doing as a kind of standard. So opening up to those non-recommended use cases is a deliberate goal. Agents themselves are another great example, because agents are an anti-pattern in the pure Prometheus world, and for good reason: you wouldn't want to tie your node exporter version to your MySQL exporter version, because you might need to upgrade one independently of the other. Yet deploying one single binary, ideally already with packaged configuration and everything, is a plus for that kind of model. So again, we're trying to carefully and deliberately enable this kind of thing.
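The one-port-many-paths pattern described above maps to `metrics_path` in a scrape config. A sketch with made-up hostnames and paths:

```yaml
# prometheus.yml fragment: two exporters behind one port,
# distinguished by path instead of by port
scrape_configs:
  - job_name: node
    scheme: https
    metrics_path: /node_exporter/metrics
    static_configs:
      - targets: ['gateway.example.com:443']
  - job_name: snmp
    scheme: https
    metrics_path: /snmp_exporter/metrics
    static_configs:
      - targets: ['gateway.example.com:443']
```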
We're also trying to make our code more modular and easier to reuse, which is honestly not a direct benefit to us, the Prometheus team, yet there are projects which would like to take bits and pieces and reuse them, make them part of their data pipelines or what have you. So we're trying to make this easier and more consistent. Related to that: the exporter toolkit should ideally support mixins out of the box. We want to have the full scaffolding in the exporter toolkit to entice people to create more and more mixins, to have mixins as part of at least the default and official exporters, and to encourage others to also put them in their exporters and integrations. What are mixins, you might be asking? I should have explained this. Mixins are basically a way to package configuration. You might have certain dashboards, you might have alerts, you might have recording rules, and you package them as an opinionated thing which can still be modified. So it's not a lock-in into one specific way of thinking; you can actually modify it. Yet you get a sane default, something which should work and which you can build upon, and which hopefully has a lot of synchronization effects throughout the ecosystem, where people start thinking with the same terminology, or the same dashboards, or they have similar alerts, with thresholds already preset by the subject matter expert of whatever program you're running. It just makes it easier to push this kind of information into the ecosystem and reuse it. Another interpretation of open would be open documents, and we have already started making design documents open; that is kind of a logical evolution of the mailing list discussions. And by and large, we already had our design docs public. Of course, why not?
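To make "packaged but modifiable configuration" concrete, here is a rough sketch of consuming a mixin in jsonnet. The import path depends on how you vendor the mixin, and the override field is only an illustration of the pattern, not a guaranteed config key.

```jsonnet
// Render a mixin's packaged alerts and dashboards.
// The _config object is how mixins expose their opinionated
// defaults for modification, rather than locking you in.
local nodeMixin = (import 'node-mixin/mixin.libsonnet') + {
  _config+: {
    // example override of a packaged default (hypothetical key)
    fsSelector: 'fstype!=""',
  },
};

{
  prometheusAlerts: nodeMixin.prometheusAlerts,
  grafanaDashboards: nodeMixin.grafanaDashboards,
}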
Now we are making a deliberate effort to make all of them public, and to actually follow up if someone forgets to do it, so that nothing stays internal by accident: we just have everything out in the open. We have a shared drive on Google with a public share for the complete folder, and all of those design docs live in there, so you can quite easily find them. Which, again, is just a logical evolution, yet it is part of this aggressive push to become more open. Another interpretation of open would be open standards, and there are a few. We have OpenMetrics, which is the standard for how to expose metrics towards an ingester, and we have a specification for remote write, which is how you push data from your Prometheus server to something else, or from your agent to your long-term storage or what have you. Those two have been specified and finalized. We are considering doing the same for PromQL and the TSDB; we'll see. The next aspect is open testing. I think it was Oscar Wilde who said that imitation is the sincerest form of flattery, and this is quite true. If you look at the CNCF end user survey on observability, Prometheus is in place one and OpenMetrics in place five. So you can see that there are bound to be a lot of people who want a piece of the cake, or who just want to be compatible for their users' sake, so things just work and interoperate. While a few of them have just taken reference code and reimplemented it, or even just vendored in our code, that is not the case for everyone. And we want to make this easier for people to actually do. We have already released three test suites, or compliance suites: one for PromQL, one for remote write, and one for OpenMetrics. There is work being done towards a remote read compliance suite; maybe it's already released by the time this video is out. And we're also considering doing things around the TSDB and data correctness.
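Wiring one Prometheus to push to another over the remote write protocol, as described above, might look like this. The hostname is made up; the receiver's feature flag name is the one from the Prometheus releases of that era.

```yaml
# Sender: prometheus.yml fragment
remote_write:
  - url: https://receiver.example.com:9090/api/v1/write

# Receiver: start Prometheus with the receiver enabled, e.g.
#   prometheus --enable-feature=remote-write-receiver
```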
And maybe we'll also have some baseline performance tests, not super aggressive, just some sane baseline to give you a reasonable expectation of how a given thing will perform. Then we have the next interpretation of open: open ratings. That term is kind of fuzzy, I know, but it fit into the open theme. This is a big one. We will have official compatibility testing for Prometheus. This will be backed by CNCF legal and such, so it will actually be watertight, hopefully. We will officially bless and publish those results on prometheus.io. We will most likely have versioned results, where you can say that you're compliant with PromQL as of April 2021 or what have you. We might also have an umbrella above this which states that you're fully Prometheus compatible, to discern between a project which wants to be compliant with one aspect of Prometheus and something which, in its entirety, is fully compatible with a Prometheus system as it's supposed to be used. That's still kind of TBD. The other thing which is kind of TBD, but will most likely work like this: if you have a result for, say, 2021-04, that result stays valid for two or three minor version releases of Prometheus. As we are on a six-week release cycle, that comes out at 12 to 18 weeks, so vendors and projects have enough time to actually update their stuff and stay compliant. We don't want to just drop this on them so that they need to scramble; that would be highly unfair. On the other hand, we don't want someone to sit on a result for three years while their users are running into walls left and right. Having done quite a bit of certification work in my past, I can tell you this is a big problem: if you have certifications or stamps of approval which are valid for too long, the end users might actually suffer.
So there will be some middle ground, with quick enough re-certification or re-testing. And we will have logos, verbiage, things you can put on your website or wherever to show that you're actually compliant with all of this. Maybe those details are already published and finalized by the time this video is out; I don't know, we'll find out together. Then we have the next interpretation: open meetings. Basically all our things, dev summits, all the working groups and such, are out in the open. It's all public, it's recorded, it's free to join. We have a calendar, and we'll be linking it in these slides; all of those links are clickable, so you can just click on them and join. We publish the recordings on YouTube. We kind of realized this with the whole pandemic thing: previously we had one dev summit per year, and it was physical. We always invited people so as not to be just amongst ourselves, but obviously it doesn't scale to have that many people, that dynamically, in a room when you need to actually fly to a place. We started having virtual dev summits for pandemic reasons, and then we kind of forgot, I guess, that we could just make them open. Then we realized, and we did. And again, everything is just open. Of course, why not? It's on the internet; it's trivial to do this. So that's it. We should have about five minutes left, potentially for questions, which is great, and I hope you do have a few. Thank you very much.