So, okay, CloudEvents are everywhere. Now, in a world where life is really easy, where you have one event producer, like a tractor, sending events to an event consumer, and you know what the events look like, life is actually really easy, right? Because you know the producer, you know the transport, the schema, the format, you know everything about it, you really don't need anything special. Just send the message and you can process it. Life is easy. However, in a little bit more real-world scenario, you're probably gonna have lots of different event producers, all going through potentially some middleware, either destined for one event consumer, or potentially multiple event consumers, or through additional middleware, like Kafka kind of is, with other things, right? In this world, where you have varying formats, schemas, business logic, events that look all different from each source, all going through the same middleware, what do you do when you need to do basic processing of these events? Whether it's for routing purposes, or just some quick introspection about what the heck the event actually is, it becomes really, really difficult to do that in middleware because of the wide range of all the different data. In particular, to extract the data that really matters to you from the business logic, you have to understand the schema, the format of the message, a whole bunch of information, and that's really, really complex and quite expensive, especially if you have new types of events coming online quite frequently, okay? So what CloudEvents tries to do is something very, very simple. It just defines common metadata and where it appears in the message, independent of the business logic, okay? So it's really there to help in the delivery of events from the producer to the consumer, whether the consumer is your middleware or the consumer is the ultimate recipient of the message, doesn't matter, okay?
It's just there to help augment your messages so that you can do some basic processing without having to understand the business logic itself, okay? Now, it's very important to understand that CloudEvents is not defining a new common event format. Every now and then the industry says we're gonna define one event format, one to rule them all, and we don't need to worry about all this stuff anymore. Not realistic, and that's not what we're trying to do, okay? We're there to help you with your current flows, not try to rock your world, okay? So let's actually take a look at CloudEvents with a real live example, okay? So here we have an event flowing. All it says is, okay, I created, what is it, a new item, and item ID is 93. Very simple little message flowing, okay? Simple little event. This is that simple tractor-to-consumer thing. What CloudEvents does is define, as I said, extra metadata. In this particular case, there are four required attributes you have to define for every single CloudEvent. The first is the version of the CloudEvents spec itself. We're at 1.0, so that's the first thing in there. The type of the event, that is the action, the reason for the event or the occurrence, okay? In this particular case, it's from bigco.com and it's a new item, okay? ce-source: where did this event come from? If that's of interest to you, okay? This one's coming from bigco.com slash repo. That's the one that sent the event, that's the event producer. And then you have a unique ID per event just so you can do de-duping and stuff like that, okay? Now if you think about what's in there, the information there really isn't that exciting. Most of it you actually get from the event itself, okay, but that's the point. The key data that most events are going to include, to help for routing purposes, is what CloudEvents is all about, simple little metadata.
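To make that concrete, here is a minimal sketch of the "new item" event in binary mode: the four required CloudEvents attributes ride along as HTTP headers (prefixed with `ce-`), while the business payload stays untouched in the body. The type and id values are illustrative, mirroring the talk's example.

```python
import json

# The event's business logic stays in the body, exactly as it was.
body = json.dumps({"action": "newItem", "itemID": "93"})

# The four required CloudEvents attributes travel as HTTP headers.
headers = {
    "ce-specversion": "1.0",             # version of the CloudEvents spec
    "ce-type": "com.bigco.newItem",      # the action / reason for the occurrence
    "ce-source": "bigco.com/repo",       # who produced the event
    "ce-id": "1234-5678",                # unique per event, for de-duping
    "Content-Type": "application/json",  # describes the body, as usual
}

def is_valid_binary_cloudevent(hdrs):
    """True when all four required attributes are present."""
    required = ("ce-specversion", "ce-type", "ce-source", "ce-id")
    return all(h in hdrs for h in required)
```

A receiver or middleware can run that check without ever opening the body.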
And with this, I should be able to do some basic routing or error checking or something like that, right? Am I interested in the new-item things? Yes, send it through; if not, trash it, right? I don't need to understand the body, I don't need to understand JSON, the schema, the properties of the business logic, doesn't matter. These are the types of things that CloudEvents is for. It may seem really trivial, and it is, and that's the point. It is one of the most boring specs you could ever see, but that's a good thing. It's just there to help you get your job done. I can't tell you how many customers have come up to me and said, thank you for inventing this, because this makes our life so much easier from that middleware perspective. It's not there to completely revolutionize everything, it's just there to ease pain points, okay? Now this is what I call binary format, okay? Take an existing message, add a couple extra pieces of metadata in the transport extensibility place, in this case HTTP headers, to just add the CloudEvents metadata. Now, some people want something a little different. Some people, maybe they don't have an event envelope that they have predefined, so they're looking for someone to define one for them. Okay, fine, we do that here. Other people want all the metadata, including the CloudEvents metadata, to be included with the business logic in one gigantic payload, in the HTTP body in this case. So we say, okay, fine. For you folks, if you don't like binary for whatever reason, which is actually my preferred way, we do have structured mode. And notice, look, there it is. You can see the content type changed, right? From application/json to application/cloudevents+json, where the application/json now appears as a new CloudEvents attribute. So you know the content type of the data. But if you notice, it's basically the exact same data, right?
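The middleware routing described at the top of this section can be sketched in a few lines: the decision comes purely from the `ce-type` header, and the body is never parsed. The destination name and type strings are made up for illustration.

```python
def route(headers):
    """Pick a destination from CloudEvents metadata only; the payload is never inspected."""
    event_type = headers.get("ce-type", "")
    if event_type.endswith(".newItem"):
        return "item-service"   # interested in new-item things: send it through
    return None                 # not interested: trash it
```

That is the whole point: no JSON parsing, no schema knowledge, no business logic, just a string comparison on well-known metadata.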
The four metadata properties moved into the JSON object, content type moves in here, the business logic gets wrapped by data. But it's basically the same thing in a slightly different format. Pick your poison, it doesn't matter. This is all CloudEvents really is at its core. So let's talk a little about these attributes. We talked about the four that are required for every single CloudEvent. The spec itself defines a couple more that are what we consider to be really, really popular, that most events probably have. Subject: who is the event actually about? The time of the event. If you're doing the structured mode, we have the data content type, like I showed you in the previous example. Data schema: a URL to the schema of the business logic itself. That's a very popular one for schema validation of the incoming events, okay? Again, very small little spec, they're the minimum needed, okay? But the key ones that most events probably have anyway. Now we do have other properties that are extensions. These have no official standing, but they're in our repo as sort of a common workspace for people to work on these things, okay? One day, if they get really, really popular and completely stable and people use them, then we may move them into the spec proper as official optional ones. But for right now, it's more like a workspace set of extensions, okay? And you can see a lot of them there are very popular ones that you might expect to see. These are really coming from real-world usage. People say, hey, we really need these kinds of things in there because we're seeing these things a lot, okay? So in terms of status, we actually went 1.0, I think it's like two years ago now. We're up to 1.0.2 right now. Very minor changes so far, mainly clarifications more than anything else.
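Pulling the structured-mode description and the optional attributes just listed together, the same event might look like this. Everything travels in one JSON object and the HTTP Content-Type becomes the CloudEvents media type; the subject, time, and dataschema values here are illustrative.

```python
import json

# Structured mode: one transport header, everything else in the body.
http_headers = {"Content-Type": "application/cloudevents+json"}

event = {
    # the four required attributes, now inside the JSON object
    "specversion": "1.0",
    "type": "com.bigco.newItem",
    "source": "bigco.com/repo",
    "id": "1234-5678",
    # the popular optional attributes described above (illustrative values)
    "subject": "item/93",                    # who the event is actually about
    "time": "2023-01-01T12:00:00Z",          # when the event occurred
    "datacontenttype": "application/json",   # the old Content-Type, moved inside
    "dataschema": "https://bigco.com/schemas/newItem",  # URL to the payload's schema
    # the business logic, wrapped by data
    "data": {"action": "newItem", "itemID": "93"},
}

wire_payload = json.dumps(event)  # the whole object is the HTTP body now
```

Same information as the binary example, just a different packaging: pick your poison.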
For those of you who aren't aware, we actually do support quite a few different bindings besides just HTTP: AMQP and MQTT, NATS, Kafka, WebSockets, you can see it up there. We also support many different formats besides just JSON, okay? Protobuf is one of the newer ones. XML and CBOR, those are in draft mode, but those are the very newest ones. And yes, XML is still alive. Don't laugh. We do have a whole bunch of SDKs. You can see them listed up there. A little bit of interesting statistics I just found out today, because I'm really bad at looking at these things. Someone was telling me that the JavaScript SDK, in a one-week period in September, was downloaded 800,000 times. I was quite blown away by that. So this thing is quite popular, okay? Even though you probably haven't heard of it, people who find out about it really seem to like it. And it is getting more and more usage out there. In fact, there are quite a few cloud providers and a lot of open source projects like Knative, and, me being from Microsoft, I'm going to mention things like Event Grid has it. It's been getting more and more popular out there. It's just not making a lot of waves, but it is there. We do have a primer, so you can see what our thought process was and what we did and why. And in terms of what's next, obviously for this spec itself, we're just gonna continue to get customer feedback and keep looking at adding more and more of those extensions. We haven't come across anything major that would force us to look at a version two of the spec. Nothing big really wrong with it so far. Thank goodness, okay? So instead of just sitting around waiting, we are looking at what other pain points the community might be having relative to events. So, CloudEvents is there to help with routing. Well, that's great, and that's fine for the delivery of the events, but what about the rest of the lifecycle of the eventing?
In particular, what about things like discovering who actually produces the events, or what endpoints are out there that I can send events to? How do I find out more about which endpoints actually produce the events I'm interested in? What are the formats of those events? All those kinds of things, right? So there's this whole discovery part of the lifecycle that we haven't touched on yet. So, before I jump to that, I should mention, I keep talking about events. Technically all these things apply to messages as well, because events are really just specialized messages. So as we keep going forward, if I say the word event, think message, okay? Because for the rest of this talk, they're really interchangeable, okay? So, obviously the word discovery popped up a lot in what I was saying on the previous slide. Before we start talking about how we're gonna handle the discovery aspect, a little bit of definition: endpoint. All it is is a network-addressable URL that can send or receive messages, okay? Easy. So, in this world you have an endpoint, and you have a client or something that wants to talk to it, okay? What does the client need to discover? Well, it's all the things we kind of talked about before, right? What are the endpoints that are out there? What are the messages or events, and the formats of those messages, okay? You can also look for groups of messages. So I don't wanna have to go find all the endpoints that support these 10 messages that I care about and check for all 10 being there. Maybe I just wanna find one particular group by name or something like that, okay? So you have this notion of grouping those things. The client may also need to find out, for example, how you talk to that endpoint. What is your role when you talk to it? Are you a consumer of those messages? Are you a producer of those messages, right? Input versus output kind of a thing. Where the endpoint is, obviously, stuff like that.
And what protocols does it support? Besides just HTTP, Kafka and those kinds of things, okay? Now, if you're in this mode where you're gonna subscribe to this endpoint to get these messages, you need to find out, well, how do you do the subscription? Where do you subscribe? What are the options, right? Those kinds of things. So what we're talking about here is defining a discovery service that has all the information that I talked about, basically a set of endpoints that has all the metadata I just described, where the endpoint can register itself with the discovery service, and then the client can obviously query it and find the exact endpoint that it's interested in, okay? It's actually a very simple concept. It goes well, in my opinion, with CloudEvents itself. It's a series of very simple specs focused on one particular problem, okay? So let's look at what a discovery service endpoint might look like in real life. You have some common attributes; I didn't spell those out because the font was getting kind of small. But the first one of real interest is usage. Now, this is from the client's perspective: how do I use this endpoint, right? Am I a subscriber? Am I a producer or a consumer of messages to and from this endpoint, okay? Based upon the usage, I'm gonna have some extra configuration information. What protocol do I use to talk to it? What are the endpoints I can use to talk to it? And other properties go in there, so the usage and config go hand in hand, okay? Then you have the list of groups. Remember, you can group messages together. So in this particular case, I'm stealing from AWS and saying, oh, this thing supports S3 events. It's just a group name, so I can easily find which endpoints out there support S3 events or messages, right?
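A hypothetical sketch of one endpoint record in such a discovery service, built only from the attributes described above (usage, config, groups). The spec is a work in progress, so every field name and value here is an assumption for illustration, not the actual wire format.

```python
# Assumed shape of a discovery-service endpoint record.
endpoint = {
    "usage": "subscriber",        # the client's role toward this endpoint
    "config": {
        "protocol": "HTTP",
        "endpoints": ["https://events.example.com/"],  # hypothetical URL
    },
    "groups": [
        # borrowing the talk's S3 example as a group name
        {"id": "com.amazonaws.s3", "description": "S3 events"},
    ],
    "definitions": [],            # full message definitions would go here
}

def supports_group(ep, group_id):
    """The 'find endpoints by group name' query described in the talk."""
    return any(g["id"] == group_id for g in ep["groups"])
```

A client that cares about S3 events can then filter a list of such records with `supports_group(ep, "com.amazonaws.s3")` instead of checking ten individual message definitions.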
And when you then look at the definitions property, these are the actual full-blown, expanded metadata descriptions of every single message that is related to this particular endpoint. Now, just because I mentioned S3 events, that's not good enough, right? I need to actually understand what those events look like, and rather than you going someplace else to find their definitions, we want it all in one chunk, so we actually list out the full definition of those messages right there. Now, you could also have messages that are defined specifically for this endpoint, and those are the two down at the bottom right there, right? So definitions are a mixture of events that are brand new and events that I have pulled in from someplace else, like a group, okay? So let's dive a little bit into what a definition looks like. Some of those common attributes. Owning group: if this definition is owned, or was initially defined, by something other than this endpoint, I may need to know what group it was defined by, right? The format: what is the format of this message being sent? Is it just raw HTTP? Is it AMQP? Is it a Kafka message? In this particular case, it is a CloudEvents 1.0 message. Now, each type of message that can be sent is gonna have different characteristics to it. In this case, a CloudEvent has attributes, okay? So the metadata allows you to give further information about how this message is gonna look, relative to the format property, okay? So in this particular case, this is a CloudEvent, so it has a list of attributes. One of them is called type. It's required, its type is string, and there's its value. And you can see that's exactly what it would look like in a real CloudEvent itself. And then you have the other CloudEvents attributes there, okay? Just a very simple little list, okay? It's almost like a schema-definition kind of a thing. And I should point out, a lot of this is very much a work in progress. So if this seems incomplete, it probably is, okay?
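Following that description, a single message definition might be sketched like this: an owning group, a format, and format-specific metadata (for CloudEvents, a list of attributes). All field names here are assumptions, since the spec is still in flux.

```python
# Assumed shape of one message definition in the discovery service.
definition = {
    "owninggroup": "com.bigco.items",   # assumed name: where it was first defined
    "format": "CloudEvents/1.0",        # what kind of message this describes
    "metadata": {
        "attributes": {
            # exactly what these would look like in a real CloudEvent
            "type":   {"required": True, "type": "string", "value": "com.bigco.newItem"},
            "source": {"required": True, "type": "string", "value": "bigco.com/repo"},
        }
    },
}

def matches(defn, headers):
    """Check a binary-mode event's ce- headers against the definition."""
    attrs = defn["metadata"]["attributes"]
    return all(headers.get("ce-" + name) == spec["value"]
               for name, spec in attrs.items() if spec["required"])
```

That is the "almost like a schema definition" idea: enough metadata to recognize and check a message, without the business payload's schema.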
We're still in work-in-progress mode, but this gives a rough idea of where we're headed with this stuff, okay? But that's basically it for an endpoint. Not a whole lot to it. It's all the metadata you might expect from an endpoint, right? What the messages are, the groups it's in, definitions of the messages, those kinds of things, okay? So let's talk a little bit more about that usage, because I need to go a little bit more into that. So as I said, the usage describes how you interact with the endpoint. The spec defines three different ways. You could be a subscriber to it. You could be a consumer of messages, so you pull messages. Or you can be a producer: push messages into it. It is extensible; you can add more if you really want to, okay? The config gives additional information about how you actually use it, right? So in this particular case, HTTP talking to that endpoint. And obviously if you're a consumer, you're gonna be polling messages from the config endpoints. If you're a producer, you're gonna be pushing messages to the config endpoints URL. But if you're a subscriber, you may need more than that. And because of that, we actually are looking at creating a brand new spec just for the subscription side of this interaction model, okay? Now, the reason the subscription may need its own spec is because there's more to it, right? You need to understand not just how and where to subscribe, but there are additional options available to you. Things like filtering, right? Does the subscription endpoint support filtering? Is it a push or a pull model? Because even when you do a subscription, it may not always be push, right? There could be a push or a pull model associated with it. So as I said, we're defining a brand new specification that's gonna do a listing of all the various usages, along with the config options and what mechanism goes along with those various protocols, okay?
Now, some protocols actually define a subscribe operation as well, and that's fine. We're not gonna reinvent the wheel. However, there are some protocols, like HTTP, that don't have a standard one. There are lots out there, but that's the problem: there are lots out there, right? So what we're gonna do is define one here that hopefully takes the best of all those worlds in a very simplified fashion, and say, hey, we're defining one that you can use, but it's extensible; you can plug in a different one if you really, really want. Okay? One last point about the subscription. This URL down here is basically pointing to the subscription manager. And so then, inside the discovery service, you get the subscription information. I wanna point out that the subscription manager does not have to be the event producer itself. It's acting on behalf of the event producer, yes, but it doesn't have to be the event producer. So that URL can point to pretty much anything you want. Just wanna point that out. So these things don't have to be tightly bound, okay? You can have any kind of setup you really want, okay? So we have almost everything you need in terms of discovering what endpoints you wanna talk to and how you talk to them, but then it comes to the last mile of this whole lifecycle: okay, great, message delivery. Well, obviously you're gonna be transferring the messages. You're gonna have a push or a pull depending on the protocol, but when you actually get those messages, you're probably gonna do some sort of validation. And you can get the metadata for that validation from a couple of different places. Obviously the discovery service itself, which is why it's there. But then you have things like the message schema. We have the message schema URL as part of the CloudEvents metadata. You also have it as part of the discovery service. So you can get to the metadata about the business logic; that's not a problem, okay?
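That last-mile validation step can be sketched as follows: check the payload against schema metadata before handing it to business logic. A tiny hand-rolled required-field list stands in here for a real schema fetched from the dataschema URL; the field names are illustrative.

```python
import json

# Stand-in for a schema fetched from the event's dataschema URL or
# from the discovery service. The field names are assumptions.
schema = {"required": ["action", "itemID"]}

def validate(body_bytes, schema):
    """Return (ok, missing_fields) for a JSON payload against the schema."""
    payload = json.loads(body_bytes)
    missing = [f for f in schema["required"] if f not in payload]
    return (len(missing) == 0, missing)
```

A real deployment would use a proper schema language (JSON Schema, Protobuf, and so on); the point is only where the metadata comes from, not how rich the check is.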
But there is one last little bit that seems to be missing: the schema itself. Most people just say, oh, there's a URL, just go do an HTTP GET. And that's fine, and we like that model. We're not trying to change that. However, I jumped ahead, sorry. So let me back up a little. With the message schema, what are you gonna be doing with it? Obviously the most common case is gonna be message validation, okay? But you obviously could use the schema for lots of other things, like to help in the serialization and deserialization of the messages, and in particular with tooling. A lot of people like to say, hey, I wanna talk to that endpoint over there, or get messages from it. I wanna have tooling that generates code that will take those messages and create some sort of Java object for me, so I don't have to do the deserialization myself, and it'll do the validation, okay? So those are all the other possible reasons that we have the schema. But where I was going before I got distracted was: there is no standard for how to actually manage the schema itself. And for the most part, you probably don't need a whole lot there, because as I said, most people probably just say, hey, here's a URL, go do an HTTP GET, you get the schema, and that's fine. But what if you want some level of interoperability around actually managing it? How do you actually upload your schema to some place in a standardized fashion, because you wanna have commonality across all the various tooling for this entire lifecycle, okay? So what we're doing is we're also defining a schema registry spec. Again, the goal, like all the other specs, is to be as simple as possible: protocol neutral, schema neutral, and messaging neutral. Meaning, it's a schema registry for basically schemas of anything, for anything. It doesn't matter. It doesn't have to be events, it doesn't have to be messaging per se, anything you want, and we're not even forcing it to be HTTP specifically; we really don't want it to be.
Obviously that's where we're gonna start out with our main focus, okay? So in terms of HTTP, though, it should be as simple as HTTP GET and PUT, right? The GET that we talked about: just do a GET on the URL, you download the schema. But the PUT, for actually managing and uploading your schema to some place, that's where we are gonna be focusing our attention. So let's take a look at what that looks like. Very simple. You can see basically the beginning part of a URL there for your schema registry, okay? The first-level object in the registry is a group, okay? A lot of times schemas go together, right? You have groups of messages, groups of events, typically associated with one particular endpoint or a service or something like that, okay? People want to group these things together, so we let you specify a group. You can also do access control on those groups. Now, we're not necessarily gonna define the access control, but you'll be able to hook that into your schema registry if you want. Now, within each group, you have a set of schemas, right? Maybe one for each of the CRUD operations, that type of thing, okay? Now beyond that, you actually have versions of the schema. One of the things we're gonna be doing as part of the spec is saying, look, every time you do a PUT, we'll version this for you. So this actual numbering scheme is not something you have to manage yourself. You just do a PUT to upload your schema, and we'll add the slash-versions slash-one, slash-two, or whatnot, all right? So you can actually go and retrieve a particular version of the schema if you want, but if you just want the very latest, leave off the slash-versions part and just ask for the schema, and you'll get the latest, okay? So we'll manage some very basic level of versioning for you under the covers, okay? So that's basically it in terms of where we're headed with CloudEvents for the next round, right? It's to fill out that entire lifecycle.
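The versioning behavior just described can be captured in a minimal in-memory sketch: every PUT to a schema appends a new version automatically, a GET with an explicit version returns that one, and a GET without a version returns the latest. This is purely an illustration of the URL scheme's semantics, not the actual spec.

```python
class SchemaRegistry:
    """Toy model of the group / schema / version hierarchy described above."""

    def __init__(self):
        self.store = {}  # (group, schema_name) -> list of uploaded blobs

    def put(self, group, schema_name, blob):
        """Upload a schema; the registry assigns the version number itself."""
        versions = self.store.setdefault((group, schema_name), [])
        versions.append(blob)     # the blob is opaque: no verification, no checks
        return len(versions)      # the 1-based version just created

    def get(self, group, schema_name, version=None):
        """Fetch a specific version, or the latest when none is given."""
        versions = self.store[(group, schema_name)]
        return versions[-1] if version is None else versions[version - 1]
```

So uploading the same schema twice yields versions 1 and 2, and a plain fetch without a version number returns version 2, just as leaving `/versions/...` off the URL would.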
So we have the discovery API for how you find these endpoints, how you talk to the endpoints, and metadata about the messages that are being sent to or from those endpoints. The subscription API, because subscribing requires a little bit more logic; we're gonna have a spec that helps define those things without reinventing the wheel. If a protocol already has it, we're not gonna reinvent it, we're just gonna reference it. And the schema registry, to help you manage schemas themselves. Now, obviously, you don't have to use all of these together, right? If you just like the schema registry because your organization would like something a little more standardized for a schema registry, you can use that. They're all meant to be used together or independently, okay? So why are we doing all this? Not just because we got it done with CloudEvents, but because we're trying to help ease some of those pain points that we're seeing from people. Obviously, to improve interoperability, but it's also to help automation. There are a lot of people that don't want to have to deal with the low-level details of sending and producing these events or messages themselves, right? They wanna have SDKs, they wanna have the serialization and the processing of these events handled for them as much as possible, right? They just wanna get these Java objects or the structs and go, right? Just deal with that and let the infrastructure manage everything else for them, right? So we're trying to aid in this automation aspect, okay? Now, in terms of status, as I just said, these are all very much works in progress, but we are hoping to have release candidates by the end of the year.
Even if we do achieve that, it probably is gonna be a little bit of time before they actually reach 1.0 status, because, to be honest, we need to do a lot of testing, right? So we have to get implementations out there, different languages, different protocols, get it really, really tested, because we don't wanna put out a spec that doesn't work right out of the gate, okay? However, one of the things that we're looking for from you guys in the community is feedback, right? Are there use cases, in particular on the discovery side, that we're missing? To me, the discovery spec is sort of the linchpin of all this stuff. So if you think it's an interesting spec, but we're just missing the boat in terms of what people actually need to find out about how to talk to these endpoints, please let us know, okay? Because we're coming at it from our own perspective; we do have some users in the community, but we would love to have more, okay? And with that, I actually made it in time, that's amazing. Okay, so I'm basically done. Information here: the URL to our repo, or to the website, which has more information pointing to the repo. All the specs are being done within one single repo, okay? It's a mono-repo; we haven't split things out yet. We have weekly phone calls on Thursday, 12 o'clock Eastern, if you're interested. Anybody's welcome to join; you don't have to formally sign up for anything. You can join, you can add items to the agenda, you can open up issues, everybody's welcome, okay? Obviously if you need more information, you can open up issues in the repo itself, or drop me a line; Twitter and email are sort of right there. And that's about it. Very quick, but hopefully you realize there isn't much to these specs, and that's the point. It's not trying to completely rock everybody's world, just ease some pain points. And with that, I do believe we have time for some questions. Any questions?
This is a great presentation, so I have one question. The format for the discovered endpoint looks a lot like the semantic web, with the schemas and all that's already there. Have you ever considered going that path?

Sorry, say that one more time?

I mean the semantic web, like Facebook uses with the Open Graph, and so on.

Actually, I'm not sure we looked at the semantic web specifically, though I'm pretty sure somebody has looked at that. The question we get most often is, how does this relate to, I keep getting this mixed up, AsyncAPI? Because AsyncAPI has a lot of similar concepts here. The best answer I can give you is, we looked at those other things, because we don't want to reinvent the wheel, and they weren't actually addressing all the needs that we saw. Now, if you think we've missed the boat, please let us know and we'll go back and revisit them, because we don't want to reinvent the wheel. But the closest thing we've come up with is AsyncAPI, which is getting closer and closer to us, especially with version three, which I was just looking at the other day. We have had conversations with them, and we're probably gonna be talking to them more to see if we can try to bring those two worlds a little bit closer together. But in the event that we can't bring them closer together, we are trying to make sure that there's at least some similarity in terminology with what you see in AsyncAPI. That's the best we've been able to do so far. But if you think we're missing the boat, please let us know.

No, all the context resolution that you're talking about, the endpoint, the schema, it sounds very close to what we have with OWL as a schema, and then RDF, Linked Data, that kind of concept. So I was just wondering whether you considered that.

Yeah, there are a lot of things that are very similar, with little bits and pieces, because a lot of this isn't necessarily new, obviously.
It's just that we're trying to put it together in, we think, a simpler and more focused set of scenarios, put it that way. I don't want to get into the business of saying why one thing's bad or good. We did look at those things, and we just didn't think they met our needs, put it that way.

So you might consider that at some point.

Yeah.

So, it was a good presentation, very clear and nice. My question is about, we talked about the schema and all, but what about the format, like JSON, XML, or something like that? So many formats are out there. I didn't notice anything about what kind of data you can send.

You're talking just about the schema, right?

The actual payload.

Wait, the payload in the schema, or the payload of the message?

The payload of the message, but that should be defined in the schema, right?

Okay, well, for the schema itself, we don't care. You just do a PUT and we'll take what's in the body. So whatever format you want, just upload it. We don't do any verification, any checks. We don't care, right? All we're really managing for you is the URL scheme and the versioning, right? To us it's just a blob. So this is actually a versioned blob store more than anything else, right?

So basically you're saying that for the schema and the metadata, you're flexible enough to take any type of data, is that right?

Yeah, so we've got to go way back. Way back to the event. I assume you're talking now about the eventing side of things, right?

Yeah, talking about the payload. So basically one person might be interested in sending a payload which is JSON. Someone else may want to send something else.

Right, so in the binary format, this is why I like it so much, right? Yeah, I chose to use JSON, but notice that CloudEvents doesn't care, right? This could have been XML, could have been anything. Straight binary, we just don't care.
That's why I like this so much, because it doesn't require anything new. Now over here, in structured mode, yes, you can obviously take whatever format you want and JSON-ify it, right? You know, escape all the other stuff that you need to, and stick it in here. But if that doesn't work for you and you want to define a brand new format, go for it. We're not going to stop you; it's extensible. You can have whatever format you want, and we'll take the contribution back, right? Because we do have a whole bunch of formats in there, and we'd like to add more.

Thank you.

Yep.

I was wondering, do you have two time fields, or how would you handle a need to record the time that the event occurred within the system versus the time the event was actually published to the message bus? Is that part of the schema, or would you recommend putting that somewhere else?

So this time, hold on. The time here is the time the event occurred. If you want the time for something else, you can define an extension, right? Because we have some well-defined extensions here, but you can define your own. And what's nice about CloudEvents is they would actually appear over here with ce- in front of them, right? So as the receiver, as long as you understand what you're looking for, whether it's the time it reached the middleware or whatever it might be, go look for it. We're not going to stop you. You can add whatever you want in there.

So when you talk about these message discovery services and subscription management capabilities, this space comes out of the ESB community, right? Over the last 10, 15 years there have been a lot of open source projects in that space. And I'm not asking why you aren't doing those things. My question is, are you taking lessons learned from projects like Apache Camel and WSO2 and things like that?
Are there things you've taken from those communities to help inform these implementations and these specs?

Obviously, yes. I'm not going to say no. We are trying our very best. Let's put it this way: there are a lot of, shall we say, senior people in our group who have a lot of experience in this space. So it's not just what other people are doing in the community at large; there are also many, many years of experience having built enterprise-level systems around this stuff. I'm not going to pick on certain people and say how old they are, but we do have a lot of people with extensive, extensive background, and we're trying our best to take those things into account. And actually, in my opinion, that's part of the reason we're doing some of this. Every now and then things get bigger and bigger: they start out simple and get more complicated, and at some point you probably need to go back and simplify. We had XML, which was great for certain purposes, but then JSON came along, and YAML came along, to simplify it. I think this is a step in that same direction: we're trying to help simplify things and focus back on the core needs of the community.

Thanks for the talk. A minor request: a binding for Pulsar is something we're looking for. I know that's not something the CloudEvents team has to implement, but there's a feature request that's been out for a long time and it hasn't been done.

So Pulsar came to us in the past, and I believe, if I remember correctly, Pulsar is a proprietary protocol, right?

I think so. Yes.

I think it's open, sorry.
Yeah, I may be remembering incorrectly, but I think it came to us in the past and we said, sorry, it's a proprietary protocol, so we're not going to define a formal binding for it, but we do have a spot in the repo for pointers to proprietary bindings. I can't remember for sure whether Pulsar is in there or not, but if someone wants to write a Pulsar binding and have us point to it, we have no problem with that. We just feel uncomfortable putting it officially in our repo because it's proprietary. I'm pretty sure this came up before.

Okay. I actually did have a question, though. Similar to the previous question, but not about taking learnings from other projects in the space: what about actually working with other open source projects that are currently active, like Apicurio, an API registry which provides a lot of functionality pretty similar to what you're proposing as your schema registry? That would help companies or users that don't want to deploy multiple solutions but still want to be on the latest and greatest of CloudEvents. So have you considered working with projects like that?

Yeah, I have not heard of that one. If you can drop me an email about it, I would love to take a look, because that's one of the reasons I was looking at AsyncAPI again recently: I heard they were going through some changes in version 3, and they're getting closer and closer to what we're looking at. I don't want to reinvent the wheel, and if we can merge these things, I'd really like that. As I said, if you think we're missing the boat and duplicating stuff that's already out there, where with minor tweaks we could use something else, let us know. We don't want to reinvent the wheel, so please drop me a note.

All right, I think that's about it. Any other questions? All right, cool. Thank you very much for coming.
Thank you.