Hello, everybody. Hi, guys. How are you doing this Wednesday evening of KubeCon? My name is David Baldwin, and I'm going to do a maintainer update on CloudEvents Discovery and a little bit about xRegistry. This will be about 25 minutes or so, and if we have any questions, I'm happy to take them at the end. We can go into more detail, including maybe a demo if time permits, or depending on what the questions are. Just a quick background on myself, from an introduction perspective: I've been in product management for well over 12 years, working at different SaaS-based companies, VMware, Splunk, and a lot of the work I've done there has actually been related to doing integrations. Hundreds of integrations, which is kind of how I fell upon CloudEvents and the Serverless Working Group: trying to find easier ways to go through and do integrations and be able to consume data. I'm also a nine-time KubeCon attendee. I started in 2015, if anybody was there in San Francisco, so I've been heavily involved in the growth of the cloud native community and where it's been going. From an agenda perspective, we're going to break this down into a couple of different segments. A little bit of background on the current state, where CloudEvents came from and where it's going. Also an introduction to Discovery. That will branch out into the topic of event definitions and attributes, which is where the xRegistry part comes into place. Then I'm going to give a quick update on the developer resources. I want to point those out because a huge amount of work has been done, especially over the last year, on updating the tooling that's necessary for developers to integrate CloudEvents. Then we're going to tap into a little bit about going forward, in terms of what we're asking for from the community.
Now, from a timeline perspective, we can go back to 2017. This is when the Serverless Working Group started a new specification, trying to build a way to make it easier to consume events. From there, they started to create what was called the CloudEvents specification, and this continued to grow over time. In 2018, the project was accepted as a CNCF sandbox project. In 2019, it moved into incubation, and at the same time as incubation is when we actually released the 1.0 specification. That was a great milestone for all the work that had been done over several years. But at the same time, the community made a decision to change the path going forward and to focus more on event discovery, as well as subscriptions. Since then, there's been a significant amount of work done, especially in the last year: a huge amount of improvements adding additional bindings to the specification. We'll talk about that about two-thirds of the way through, along with the other bindings and formats that have been published. There's also been a huge amount of SDK work, which I touched on as part of the agenda. Recently, we've introduced new specifications related to the registry aspect and subscriptions, and there's been some supporting documentation recently as well: pagination, plus CloudEvents SQL, which will be very useful for creating CloudEvents filters. Finally, and this actually happened a little earlier in the year, but we're making more progress on it: a PR was submitted towards graduation. As the group has been maturing over the years, it was prudent for us to go through and actually file the PR associated with that.
So, a big step forward. We're not there yet, the PR has just been submitted, but it's a step forward in terms of our maturity as a group. Breaking it down, the next step is to talk a little bit about Discovery, and this is an extension of some of the talks that were done beforehand on the topic as well. We break Discovery down into three different areas: the event registry, the schema registry, and the endpoint registry. The first part is: what context are the events used in? Are they used as part of a product, or perhaps as part of a service? What events are discoverable for me to consume from an end-user perspective, for example with automation and tooling? Next, from a schema perspective, we want to understand what's really included in the payload. What does the payload look like? This is where the schema comes in, and it can be used for use cases outside of CloudEvents as well: code generation, validation, and also discovering whether there have been any behavioral changes in the events and the content included with them. Finally, there will be details on what endpoints are available to consume and produce events, which falls under the endpoint registry. These are three specifications that I'll call out again a little later as part of the xRegistry documents. Going back to the first part, the event definitions themselves: the event definitions can be summarized basically as a group. I used the context of a product or a service earlier, and coming from previous experience, these could be security events, really any context serving any purpose. The user community comes up with new use cases all the time.
We get people coming in asking how they can contribute and add new types and new use cases. In the sample event that was provided, there's a format actually defined as CloudEvents. Now, there are also other standard messaging formats that can be used outside of CloudEvents; some of those are listed to the side there. What I've done, and I've actually done this in a couple of different places throughout the presentation, is add links to these as well, so you can use the slides as a reference to go back, click on, and see. On the surface you see some of those, such as AMQP, MQTT, and HTTP, for example. Now, as we go into the attributes themselves: the CloudEvents contracts are defined basically by their attributes. The CloudEvents specification defines the required attributes and the optional attributes that can be used. There's also some flexibility in that you, as the person actually producing the events, can indicate that some of the optional ones are required as well. And there are options to use custom-defined attributes, specific to the event, the service, the product, et cetera, which you can define through the list. Now, the schema registry also uses the concept of groups, and this is basically where the documents can be stored. There's no specific language required in this particular context, but there are some examples already written in, such as ones that cover JSON and XML, and others are being developed and added to the specifications and documentation. As with the majority of the specifications we provide, and this is even more so the case with the registry, we provide guides and as much flexibility as possible. Outside of the event definitions, there is also support for multiple versions of a schema in parallel.
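To make the attribute model just described a bit more concrete, here is a rough sketch in Python. The required versus optional split follows the CloudEvents 1.0 spec, but the event values, the `partitionkey` extension attribute, and the `validate` helper are all invented for illustration:

```python
# A minimal CloudEvent represented as a Python dict.
# Required context attributes per the 1.0 spec: specversion, id, source, type.
# Optional attributes shown: time, datacontenttype, subject.
# "partitionkey" stands in for a custom/extension attribute; the event
# type and all values here are made up for illustration.
event = {
    "specversion": "1.0",
    "id": "a7f3c1d2-0001",
    "source": "/myservice/orders",
    "type": "com.example.order.created",
    "time": "2023-11-08T17:00:00Z",
    "datacontenttype": "application/json",
    "subject": "order/1234",
    "partitionkey": "customer-42",
    "data": {"orderId": 1234, "total": 99.95},
}

REQUIRED = ("specversion", "id", "source", "type")

def validate(ev):
    """Return the required attributes that are missing or empty."""
    return [a for a in REQUIRED if not ev.get(a)]

print(validate(event))        # []
print(validate({"id": "x"}))  # ['specversion', 'source', 'type']
```

A producer that wants to promote an optional attribute to required, as mentioned above, could simply extend the `REQUIRED` tuple for its own contract.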
Within the registry specification itself there are a couple of other examples of how this gets broken down and how to consume it, and if we have time, if there are questions on it, I can go back to that. Now, the last part of that group of three focused on the endpoints. The endpoints are an extension of the definition group. Going back to the group topic: they can add additional data points, such as configuration data for protocol settings. They can also leverage the definition groups to reuse already-defined events. This is a big win. There could be hundreds, thousands, even more endpoints, and having the ability to refer to predefined events, especially in very complex architectures, can make it much more efficient for you to deploy these CloudEvents. And there's another example of another attribute that we'll cover a little later in this presentation. Now, this is a basic concept, but we're going to cover the consumer, producer, and subscriber endpoints, just to talk a little bit about how they operate. Starting on the left, with the consumer and the pull model: the consumer wants to consume the events, so it initiates the connection to the consumer endpoint. Hopefully that's straightforward. Once that's initiated, the events will start to flow to the consumer. Some examples of this would include pub/sub, for example, or an HTTP GET. On the right-hand side, we have another model, a push-based model. In that case, the events are being sent from the producer to the consumer itself. Examples here would be AMQP and webhooks, and there are some examples in the documentation within our repos as well. Now, just to build on the previous slide a little bit: subscriber endpoints are used to create the endpoint subscription.
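For the push model just described, one way to picture the delivery is the CloudEvents HTTP protocol binding's binary content mode, where each context attribute travels as a `ce-` prefixed header and the payload goes in the request body. This is only a sketch under that assumption; the event values are invented:

```python
import json

def to_binary_http(ev):
    """Map a CloudEvent dict to (headers, body) for a binary-mode
    HTTP push delivery: context attributes become ce-* headers,
    datacontenttype becomes Content-Type, data becomes the body."""
    headers = {}
    for attr, value in ev.items():
        if attr == "data":
            continue  # the payload travels in the request body
        elif attr == "datacontenttype":
            headers["Content-Type"] = value  # special-cased by the binding
        else:
            headers["ce-" + attr] = str(value)
    body = json.dumps(ev.get("data", {})).encode()
    return headers, body

headers, body = to_binary_http({
    "specversion": "1.0",
    "id": "42",
    "source": "/sensors/temp",
    "type": "com.example.reading",
    "datacontenttype": "application/json",
    "data": {"celsius": 21.5},
})
print(headers["ce-id"], headers["Content-Type"])  # 42 application/json
```

A webhook receiver would read the same `ce-*` headers back off the incoming request to reconstruct the event context.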
This can be for a single event or really a set of events that would be sent to the producer endpoint. This is something, from at least my own experience, that would be very, very useful from a data ingestion perspective. Webhooks are an example of this, as I mentioned on the previous slide. And one of the attributes that I didn't cover, but that we'll talk about a little more here, is the channel, which was mentioned earlier in the endpoint definition. It's primarily used to correlate endpoints that belong to the same channel. An example here would be a queuing system, where the channel would allow the producer and consumer events to be easily correlated. So you have an example here where we cut out those two different segments and showed how the channel, as well as the producer and consumer, can be mapped later on. Now, what do all these registries have in common? They have a hierarchical structure that works together through a common core, and this is, again, repeated in xRegistry. That common core is that there are groups of metadata defined within the specification itself, and there's also a standard set of APIs to access them. And this is actually increasing as we go through and have additional conversations about it. Where that branches into is what's called the xRegistry repo. There are two repos actually involved: this part of CloudEvents is the CloudEvents repo, and there's also the xRegistry repo itself. This is where the draft specifications were pushed out a little earlier this year. And xRegistry allows for more than just CloudEvents, but also arbitrary message formats: a place to source schemas that can be easily extended for use with other resources.
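As a sketch of what creating such a subscription can look like: the field names below (sink, protocol, an "exact" filter dialect) follow my reading of the draft CloudEvents Subscriptions API and should be treated as assumptions, and the URL and event types are made up:

```python
# A subscription request body in the spirit of the draft CloudEvents
# Subscriptions API: where to push events (sink), how (protocol), and
# which events to select (filters). Field names are assumptions.
subscription = {
    "sink": "https://consumer.example.com/events",
    "protocol": "HTTP",
    "filters": [
        {"exact": {"type": "com.example.order.created"}},
    ],
}

def matches(ev, filters):
    """Minimal evaluator for the 'exact' filter dialect: every listed
    attribute must be present on the event with exactly that value."""
    for f in filters:
        for attr, want in f.get("exact", {}).items():
            if ev.get(attr) != want:
                return False
    return True

ev = {"id": "1", "source": "/orders", "type": "com.example.order.created"}
print(matches(ev, subscription["filters"]))  # True
```

The subscription manager would then deliver only the matching events to the sink; richer filter dialects (prefix, suffix, CloudEvents SQL) follow the same pattern.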
This is something that has been asked for in the use cases: we need to be a lot more flexible in how we use our events and how we define how we end up using them. Hence the name, the extensible registry. We still have the core of CloudEvents as a whole, but this is much more extensible, as I mentioned. So new discovery specifications will be created, and some of these will be moved into and expanded in this repo. As we go forward and create new specifications, you'll start to see them in here as well. Which brings me to this particular slide, which I mentioned I was going to cover a little earlier. We have two different pieces being built out. On the left-hand side, you have the new xRegistry repo and what's being put into that. There is the core specification for xRegistry, and then you see the endpoint, schema, and message definition registries as well. All of these are currently at version 0.5, so they've been slowly increasing over time and making good progress. I definitely recommend, if you have a chance, going in to take a look and see whether it meets your use cases, and giving us any feedback. Pagination and subscriptions were added a little later; there was a realization that these were the types of specifications and documents we would need after starting to work with the different types of data coming in from xRegistry, so those are actually going to be at 0.1. I'm going to cover it a little later, but there is also a primer, both for CloudEvents and for xRegistry. I personally find the primer to be very, very useful, especially for people who are just getting exposed to CloudEvents. The CloudEvents primer is a great place to jump off from and learn how it actually works.
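The specifications on the left all share that hierarchical group/resource layout. As a rough sketch of what the resulting API paths can look like, assuming the group and resource collection names used by the three draft registries (the IDs themselves are invented):

```python
# Sketch of the hierarchical group/resource layout the xRegistry core
# spec describes, rendered as URL paths. The collection names
# ("endpoints", "schemagroups", "messages", "schemas") reflect the three
# draft registries mentioned above; the IDs are illustrative only.
def resource_path(group_type, group_id, resource_type, resource_id,
                  version=None):
    path = "/{}/{}/{}/{}".format(group_type, group_id,
                                 resource_type, resource_id)
    if version is not None:
        path += "/versions/" + version  # parallel schema versions
    return path

# An endpoint's message definition, and a versioned schema:
print(resource_path("endpoints", "orders-in", "messages", "order.created"))
print(resource_path("schemagroups", "orders", "schemas", "order", "v2"))
```

The same group-then-resource shape repeats in every registry, which is what lets one standard set of APIs serve all of them.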
The xRegistry primer is still new and needs to be expanded, but it is something that I recommend taking a look at and keeping an eye on as it moves forward. Now, off to the right, you see all the different documents associated with CloudEvents. Some of these are new within the most recent year, and I'll be talking about them as part of the update. You have the core CloudEvents spec, and then you have all the additional domain-specific documents. These are docs for bindings, such as AMQP, Kafka, and HTTP. The event formats are listed there as well, such as Avro, JSON, and Protobuf, and there are others too. Then the additional new docs include the SDKs and the supported features, and also the new ones for subscriptions and CloudEvents SQL, all listed there to the right. Now, in terms of recommendations for how to work with CloudEvents: I mentioned the primer as a way to get started, but it's very easy to start small and build out from there. These are some of the same approaches we took when CloudEvents first got started: start small, build on that, and make adjustments as you learn more about the use cases and what the community actually needs. You as a user are able to take all these different discovery objects and use them within your own source code. You can include the schema inside your event definitions and the event definitions inside the endpoint. Another option, if you're interested, is to build automation and deployment tooling that would create the endpoint definition on the fly from code when it's actually being deployed. That's one aspect of it. Then you're able to link that to any static definitions and reuse them within your project, or your projects within your environment.
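As a sketch of that "generate the endpoint definition on the fly" option: the exact shape of an endpoint document is still evolving in the draft spec, so the field names below (usage, config, message references) are assumptions for illustration, not the definitive format:

```python
# Deploy-time tooling could emit an endpoint definition from code,
# referencing already-defined events instead of redeclaring them.
# The document shape here is an assumption for illustration.
def make_endpoint(name, uri, message_refs):
    return {
        "id": name,
        "usage": "consumer",
        "config": {"protocol": "HTTP", "endpoints": [uri]},
        # Links to predefined message definitions in the registry:
        "messages": message_refs,
    }

ep = make_endpoint(
    "orders-in",
    "https://api.example.com/events",
    ["/messagegroups/orders/messages/order.created"],
)
print(ep["config"]["protocol"], len(ep["messages"]))  # HTTP 1
```

A CI/CD pipeline could run something like this at deploy time and publish the result to the registry, so the endpoint inventory always tracks what is actually running.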
Another option, if you want to go even more advanced, and this is something we are exploring, especially from a use case perspective, because people within our own group as well as customers and other community members have asked for it: looking at the concept of a centralized registry, for example, where you would have centralized schemas as well as governance associated with them. Another way of thinking about it is interop between organizations: between your own internal organizations, within different partnerships in the organization, and with customers as well. Another option would be a federated use case, a situation where you might have different services that need to discover each other and sync between each other. These are all different use cases we're trying to expand upon, and we're definitely looking for other suggestions you may have and would be willing to contribute and provide to us. Now, where we are, and I've mentioned some of these things multiple times: everything we're doing is a work in progress, but we're very, very active. The community has been providing lots of content and lots of feedback. We meet every Wednesday, 9 a.m. Pacific, noon Eastern, so we have a standing meeting that's very active and very engaging from a dialogue perspective. But there are still some areas where we need additional input, and we're looking for feedback from people such as you on additional use cases. We're also looking for people to come in and provide different perspectives on the challenges they have consuming events, working with them, and subscribing to events. New ideas, such as those that came up around adding the discovery specifications, are a way to change how we do business. Sorry, I'm thinking about work.
But in terms of how we operate as a community. And as I mentioned earlier, we have submitted the CloudEvents PR for graduation, and any assistance towards that would be useful as well. Now, there are some other challenges in place as we continue to define the CloudEvents xRegistry spec, and we're always looking for ways to get past some of those challenges. But we also want to increase standardization across the other CNCF projects, and external projects for that matter. Right now, at last count when I looked, we had about 30 different projects within the CNCF using CloudEvents. This is a great start. People are basically using this foundation to help them manage events. It makes things much more tightly coupled, and now these projects can function from an interop perspective, making it much easier for you to go through and actually integrate. There are a couple of other areas that were called out before, and these are actively being worked on, and this list will continue to grow. One area is the enrichment and overloading of event definitions. Another is endpoints that generate enriched events; one example we put there would be the partition key. Another thing that turned out to be very, very useful with CloudEvents is when people publish and make publicly available their proofs of concept with their implementations. This enabled new users to come in and see how CloudEvents were operating. This could also be applied to xRegistry, which would be very useful. Now, I'm going to tap into one aspect: the CloudEvents primer itself. The registry one, as I mentioned, is still being developed, but this is a great place to start.
It has the key parts of the specification all in one location, and the concepts such as the structure behind the events, as well as the goals and why we did what we did. The architecture, which includes a model, extensions, the event formats and encodings, constraints, error handling, and security. And security has been a big topic this KubeCon; I've seen a lot of sessions really focused on cloud native security and Kubernetes security, and a lot of vendors in the showcase have been pushing that as well. Versioning, a quick guide to the attributes and how to use them, as well as the key protocols and how they fit in. In the end, what you'll get out of the primer is a reduced time to value for your CloudEvents, and the same for xRegistry as it matures: a much better journey and an overall better event experience. Now, the last part, at least of this particular segment, is to talk about the developer experience. I mentioned several times that the community has been very, very actively involved in improving the SDKs. It's been something we've put a lot of effort into, and over to the left you see the different languages, and over to the right I put a link that maps to the feature support page, which is continually being updated. I'm going to share that page here in a second. I'm not going to exit out yet, because getting back into this particular page was a challenge to set up, but I do want to provide a couple of other links first, and then I'll go back and we can cover that. Two additional ways to connect with the group on what we're doing: to the right is a link to the Slack channel for xRegistry, and this channel is going to be actively available within the CNCF workspace, but it's still in the process of being set up.
And then off to the left you have the link that takes you to the CloudEvents GitHub page. This would be in addition to the xRegistry one that was already there. I'm going to share the thank-you slide real fast, from myself and the CloudEvents community, for allowing us to provide this update, but then I'm going to exit out and just give an example of the feature support as part of the SDKs. Hopefully I do this correctly. Sorry, got it, cool, thank you. So this is an example of the feature support that's currently available within CloudEvents, and we spent some time over, I want to say, the last three weeks or so going through and updating a lot of the SDKs, in terms of the content that was in there as well as the feature support itself. Some of them are going to be very specific to the end users and the companies who are helping support certain SDKs and feature sets, but it's a great example of all the different things you're able to do from an SDK perspective. Yes, there are a lot of X's, but a lot of these are being knocked off, and there is still a significant amount of overall adoption and ability for you to use CloudEvents. I'm going to pause there, that was about 25 minutes, and see if there are any questions. Yeah, so I believe, of the projects that currently use it, Knative is one that's starting to do the integration. Argo is another one, Flux is another, but typically there is coordination, or somebody from their project will come over and ask questions, though they can use it independently as well. Are they going to end up using it completely at some point? It's a great question. I can go back and figure that out.
We are doing, so for example, there have been discussions we've been having, trying to see how we can better pair up with OpenTelemetry, for example, and that's actively ongoing. It hasn't been finalized, and there's still work that needs to be done from a PR perspective, but those are the types of active discussions. In the case of Knative, I can check who's actually aligned with that, but typically there's a person attached to each one of those projects who's actually working on it. I'll be happy to show you how to contribute or ask questions about that. We actually have a meeting tomorrow morning, again, 9 a.m. Pacific, noon Eastern, and we can ask those questions there, in terms of where to go with it. There are separate governance documents if you do want to contribute, or if you want to take what you have and bring it in, or if you want feedback on what you're doing. Other companies have done that, vendors, but there have also been other consumers that just want to bring it into their environment who have contributed as well. And so, okay, that makes sense. Is there another question? I actually might walk over, because my hearing's bad, if you don't mind. Oh, there's a mic. So, sorry, I'm new to this, but it seems like you're mostly defining the endpoints, the schemas, the messages, these kinds of things. I'm looking at some of the event-driven architecture stuff that we've worked with, and it seems like a natural fit, except that maybe it doesn't support some of the cooperative features like replay, things like that. Have you considered the suitability of this for event-driven architectures, or have you looked at AsyncAPI, which seems to be feeding into that as well? So, topics such as replay, which you just mentioned: that was brought up.
I don't know what the final resolution on that was. It was brought up about a month ago on one of the calls, but I don't know what the final execution was, in terms of how they were going to adopt it or how they were going to take feedback on it. Okay. I do recall it being, I want to say, early October or late September, on one of the calls when it was brought up. Okay. But I can get your name and go back and look at that. Sure, sure. And try to give an intro for that. Yeah, I'll come up at the end here, but this looks like an 80% overlap with the kind of technology contracts and things they're looking for, especially for inter-organizational cooperation, and your cross-organizational registries and things like that would be fantastic. But I don't know. It was something, similar to when I was doing a significant amount of integrations, for example; I mean, when we just started, it was something I wish I'd had beforehand. You can still do both at the same time. There are examples where you keep your old way of actually doing events and you start to implement CloudEvents for the new ones as they come forward. That can be a little bit challenging sometimes architecturally, and it can be expensive to some degree, trying to manage two different solutions at the same time. But that is one way to start making progress in changing things over. Okay. And then quickly, is there even a beta product, or something somebody has produced open source, that is a registry we could begin to use today, or is it still in the definition stages? So, we have our xRegistry, which is in the definition stages. One of the key maintainers, who's done a lot of work, especially with Event Grid, has a proof of concept he's actually put out already. That was shared, I want to say, in the June timeframe.
That should be available; you should be able to see it either in a video, or you can download it and use it from that perspective. Okay. Thanks. Oh, the person actually did a POC, I want to say in the June timeframe. So one of the items we actually have on our list is that we have a POC already in place, and we're always doing additional development to show ways of doing different implementations. This was one I think was done in the June timeframe, and I thought it was also done with Event Grid, in terms of integration, but that was just an example. So I'll just use Event Grid as an example. They have two different ways; I think this is still the case, and I probably shouldn't speak for Microsoft, but I think it's still the case that they have two different ways to consume the events. You can use CloudEvents as you typically would, and they have a separate option that has a wrapper associated with it. So you have both options available, depending on how the implementation actually is. With AWS, I would have to go back and read about where they're currently at, in terms of how they offer the ability to consume events. Typically with Event Hubs, for example, that's going to be a streaming service; it's a little bit different in how it operates, but we can take a look at it and I can give you some more feedback. So, are you talking about... I talked about one of the options for the event types themselves being to make attributes required versus optional. That was one aspect of it. I don't recall actually directly talking about the overriding part. So it's still focused on the events themselves. Also, the discovery part of xRegistry is really unique in that the specification enables anybody to go through and actually publish and consume events under a general specification.
You don't have to use CloudEvents per se. You can have another mechanism to go through and do that. You can still use the discovery specification itself to discover all the different event types and what the schema is like, and then figure out what you actually want to consume there as well. I'm not sure I'm answering your question. Okay. All right, but I can take notes; I think there were three people that needed a follow-up, so I can write that down and reach out to you after this event. So, are you interested in the type of data? You can still create a hierarchy of events, with all the different attributes associated with that. The data itself that's actually inside of it we're not going to modify, but you can still go through and classify the metadata and have that set as needed. So you can define that as necessary. We've got time for one more question here. I have two questions. We use OpenAPI for defining our REST interfaces; that has been a standard, OpenAPI for REST interfaces. When I look at our async messages or async events, I see two things when I go and search: one is AsyncAPI, and I also look at CloudEvents. So is CloudEvents going to be the standard for defining your event structure? So, as I mentioned a little earlier, you can have more than one methodology for how you want to define your event structure. We are trying to push the CloudEvents methodology just because it makes it much easier for people to integrate, with fewer hurdles. And will there be tools provided along with CloudEvents? For example, we generate code based upon the OpenAPI definition; that's how we share our OpenAPI with external vendors, and they create the clients based on the OpenAPI definition.
So will CloudEvents also provide those kinds of tools, where it can generate the consumers or the producers based on the definition? So, we provide the specification for how you should write that: what the message format is going to be, and what message queue, channel, or topic you're going to consume from. So it provides that definition. In terms of... will there be tools provided to generate those? So, typically the working group doesn't necessarily provide tools; we provide proofs of concept. I mentioned that a little earlier, in terms of how you could potentially use those, but it may not be the exact scenario you're actually looking for. I can take your information and we can go back and have a discussion about what you may need. Okay, one other question. One other requirement we generally tend to have is backward and forward compatibility of the messages. Right, I did see in your... I mentioned versioning. Required, kind of, but does it also support compatibility where some fields are optional, and can I test, when I rev up my version, whether it's compatible backward or forward? Data compatibility or format compatibility? So, there can be data compatibility, at least from a specification perspective. If a field is required as you're switching between versions, that'll be more of a challenge, but from an optionality perspective you do have the flexibility. Excuse me, I would have to go back and look at the error-handling aspects, because some of that has actually changed a little bit, in terms of clarifying how we handle the errors: if you run this scenario, what would happen? Do we give you a null back? Do we give you an error back? What are the conditions associated with that? So there have been some tweaks we've made recently that I would have to go back and check. Okay, thank you.
Thank you, but I can take your information and I can go back and be more specific. Okay, thank you.