All right, gang, thanks for sticking around. We're back — hopefully everybody is refreshed. Real quick: I promised everybody a demo, so we'll wait just a second for this to come up. Just to get us back to where we were... there we go. If you'll remember, we have two Knative services here. If you went through the actual GitHub repo, you should be pretty much to this point, and it's worth noting we have two services, right? We just saw the Knative Service representation of that. I'm using something called Camel K. We'll look at the actual code in a second so we can see why it's giving us event-bus-like capabilities, but Camel K hooks into Knative Serving: it'll go create things like the Knative Services we just saw, and it'll also create things like Subscriptions. We have a few other things sitting in the same space; I'll go over here to something that looks a little bit better. In Knative Eventing — you'll notice we're in our application namespace, service-mesh-con — we have a default broker. I'm more of a CLI guy, so we'll look at it this way. Cool. So we'll notice we have this broker; let's take a peek at the YAML that actually represents it, and we'll notice a few different things. It's being configured for this particular tenant with a ConfigMap in its individual namespace — a talk for another day, but we want to engage in notions of multi-tenancy with Knative Eventing; that's when we really start getting some great stuff out of it. Cool. A few other things we'll notice here: we're actually on a Kafka channel. So when we go look at the resources we're laying down into this cluster — for instance, the channels we just looked at — we have two channels. And here's what our particular demo will do: one Knative service will pop up.
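As a rough sketch of what that per-tenant broker wiring can look like — the resource names here are illustrative, not the exact objects from the demo — the Broker points at a ConfigMap in the tenant's namespace, and that ConfigMap selects the channel implementation (Kafka, in this case):

```yaml
# Illustrative Broker for one tenant; spec.config selects the
# channel implementation for this namespace.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: service-mesh-con
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-channel
    namespace: service-mesh-con
---
# The referenced ConfigMap tells the broker to lay down
# Kafka-backed channels for this tenant.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel
  namespace: service-mesh-con
data:
  channel-template-spec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 3
      replicationFactor: 1
```

Swapping the `channel-template-spec` (say, to NATS or the in-memory channel) is the governance knob: the tenant's workloads never see the difference.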
It's going to send messages into a Knative channel — this testing-db-events channel. Another guy is going to say, hey, I'm subscribed — that was the Subscription we just looked at — I'm subscribed to that channel, I'm going to do some stuff with the message, and then I'm going to send it on its way. True enterprise-service-bus behavior. Our particular implementation is totally unaware of what these channels are. It doesn't know that Kafka is underneath. It doesn't know whether it's NATS; it doesn't know whether it's an in-memory channel. The only thing it knows is that it's engaging in this PubSub behavior from Knative. That's super cool. What that means is we can engage in some multi-tenancy: in one namespace we could use Kafka, in the next namespace we could use NATS. Maybe we have a tenant that's not very important — they can stay in-memory and not use up our resources. But it also puts some obligations on us. As mentioned previously, we've got some governance there, right? We determine what the channel implies for a particular namespace. So you've gotten past the admin who's holding cluster-admin, and he says you're okay to use Kafka, so on and so forth — but that's not material to your particular implementation. So let's go over to Eclipse, where I like to live — hopefully this is big enough for everybody — and we'll look at the first guy here. This is something called Apache Camel; I was just having a discussion with Ram over there. It's a DSL that attempts to be an idiomatic easy button. You notice this is idiomatic, right? We're going to go transform something, we're going to go log something, then we're going to go to another thing. What this particular guy does is: he's got a timer — a typical quartz-y scheduler type of thing — then he's going to log some stuff out, and he's going to set a body real quick.
A body in Camel is just the thing we're carrying around from idiomatic thing to idiomatic thing. Then we're just going to go off to a channel, testing-db-events. You'll notice this doesn't indicate that it's a Kafka producer — we're not wiring up a Kafka producer at all. It's just going to go to a Knative channel called testing-db-events. Then this guy, the event bus transformation integration, is going to pick that thing up off of the testing-db-events channel, do some stuff to it, and then send it off again to another channel, which we'll notice here, right? So literally our second Knative service is just going to sit on the bus. It's unaware that we're speaking to Kafka. If we were to throw this into another namespace, the way we've wired up Knative Eventing, it could be in-memory, it could be NATS, so on and so forth. Again, it's this core central notion of governance we've gotten to in Knative Eventing: we get out of our developers' way, we let them do what they want, but we govern what these sources and sinks are in their actual physical implementation. For us in this particular case it's Kafka; in other cases it may be something else. Cool. So we'll notice the event source integration guy is running. Again, this is the guy with the timer, and then he emits off an event. He's running, but he's already done all this stuff. So instead of forcing you guys to sit through a build and stuff like that, I'm just going to delete this pod — once I'm able to type again — and we'll let that guy start deleting. Let's go over here, we'll do a get pods and see what's going on. That guy we just noticed was scaled to zero is now coming up.
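The route described above — timer, set a body, log, hand off to a Knative channel — could be sketched in Camel K's YAML DSL something like this. The channel name matches the demo; the period, message text, and file layout are assumptions, not the talk's actual source:

```yaml
# Hypothetical Camel K integration: a timer fires, sets a body,
# logs it, and sends it to a Knative channel -- no Kafka producer
# anywhere in sight.
- from:
    uri: "timer:tick?period=10000"
    steps:
      - setBody:
          constant: "welcome to Cloud Native Integration"
      - log: "emitting ${body}"
      - to: "knative:channel/testing-db-events"
```

The `knative:channel/...` endpoint is the whole point: the integration addresses the bus abstractly, and whether that channel is Kafka, NATS, or in-memory is decided by whoever governs the namespace.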
If we go interrogate this guy's logs to see what's going on, we'll notice it's taking in messages off the event bus — it received a message, "service mesh, welcome to Cloud Native Integration" — and it's saying, oh yeah, by the way, I'm going to add some more stuff in, and then we're going to go off to another Knative channel. What these are physically, underneath the covers, are actual Kafka topics. So somebody else in another namespace — another one of our applications — can now just pick up off of that particular Kafka topic. If we lose our runtime, so on and so forth — if we lose Kafka — hopefully we're replicating over to another data center or potentially another cloud. This allows us to get those cloud-native runtime capabilities while also injecting governance into the situation. Also, remember the white paper we were discussing previously: we're not looking to get in the way of developers. We want them to be able to do their stuff, and do it quickly, by providing a service bus implementation that is decoupled from the kinds of monolithic notions that old ESBs used to have — service component assembly, you had to use a particular message type, it had to do the following things, you had to adhere to a canonical schema. We get all those things, but we can still allow you to be a Kafka message, we can allow you to be AMQP, we can allow you to be NATS, we can allow you to do a lot more than that. So let's go check out what this graph looks like in Kiali — and you'll notice it's pretty busy. That's one of the reasons we went through our schematics previously: if I went over here and tried to point out all these points to you, it'd be pretty rough. We can notice where some of our traffic is going through here in Kiali, and the different ways it's heading. Now let's go over to Jaeger and see what's going on there.
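The subscription side of that flow — park the transformation service on one channel, reply onto another — could look roughly like this; all names here are illustrative:

```yaml
# Illustrative Subscription: deliver events from the
# testing-db-events channel to the transformation service,
# and send its output on to a second channel.
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: event-bus-transformation
  namespace: service-mesh-con
spec:
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    name: testing-db-events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-bus-transformation
  reply:
    ref:
      apiVersion: messaging.knative.dev/v1beta1
      kind: KafkaChannel
      name: testing-db-events-out
```

Because each KafkaChannel is backed by a real Kafka topic, anything else in the organization with topic access — or a replica in another data center — can pick the stream up without touching this wiring.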
All right. We'll look at this individual service — oops, which one is it? Here, we'll just look at this guy. All right, we'll look at one of our dispatchers, right? We've wired it up into the mesh. We'll notice a few different things going on. We have spans here, and we can see some of the things we were talking about previously: here the activator calls the event bus transformation; here our event source integration goes off to another dispatcher, so on and so forth. Because these things are all in the mesh, they can talk to each other, and I didn't have to do any further configuration. Remember, generally speaking, in the past we had loads of bespoke ways of handling this — things had to shake hands, and they'd all shake hands differently, so on and so forth. So that was the demo. It actually takes quite a while to get there. The upside, if you go through the GitHub URL, is that these are things we generally wouldn't change a lot, right? We're probably not going to significantly change the innards of our Knative Serving and Knative Eventing implementations. We may extend them — you'll see how we go about that. If you go to the GitHub URL, we wire ourselves up a little bit differently for different namespaces; you'll see it in the resources there. So that demo was attempting to get those cloud-native architectural capabilities via a set of Knative services talking over the PubSub abstraction that constitutes our cloud-native event bus. So, I promised you guys some advanced topics — because you'll notice we didn't really do anything too crazy, right? We do wire up everything that needs to be wired up for sidecars: we inject those, we watch those handshakes happen. I didn't have to do a lot to get there. I just had to set up my platform, and then developers can do their thing without having to worry about all of this.
But we do have some other things we probably want to get to, right? One of those is external authorization. Right now, if we think about what was providing a JWT, or some means of authentication and authorization, over the mesh: we have a dispatcher, we have a controller, and we can wire things up a little differently to go from namespace to namespace on the dispatcher. But generally speaking, these things are global across all of our namespaces, right? So the JWT they're presenting isn't necessarily from that same namespace, generally speaking. We can provide some bespoke authentication — and inevitably authorization — with whatever our event source is, sure. But we don't have a really great way to say, hey, this dispatcher should be calling a Knative service outside of that JWT, or this dispatcher should be talking to an event source outside of that JWT that's driven by a service account. That probably doesn't work for us, right? I probably don't want just one god identity that talks to Kafka, one god identity that talks to my myriad services. I need finer-grained authorization than that. And generally speaking — in the majority of enterprises — we've already figured this out. My organization already has these things; we've spent probably the last 20 or 30 years getting a handle on what fine-grained control means in our enterprise. So I probably need to hook into something external. What we really would like to do as we move past this is to leverage Istio for external authorization, and then say: hey, in this particular case, the in-memory channel dispatcher in the knative-eventing namespace — or, let's say I've used the option in Knative Eventing that lets me run my dispatchers in individual namespaces —
— hey, Knative Eventing, you're actually going to be this guy: you're going to be testing@secure.istio.io, and I can inject more fine-grained authentication and identity this way via Istio. This allows me to get back to the fine-grained RBAC I have in things like a Kafka broker, or NATS, or whatever that thing is I'm communicating with that ultimately constitutes the event store on my event bus. Oh yeah — and I've already got that stuff; I don't want to redo things. Remember, one of the things we were talking about with governance is that we don't want 30 of these things flapping all over the place, right? We want to centralize these notions of authorization and authentication, and we want technical consistency across our approaches. I don't want to say, hey, in this case I do something wildly different — because the same JWT from the same service account doesn't work all the time for all the authorization I need. All right. So I had one more thing, but I think we'll leave it there. I haven't stopped for questions. We've got about 10 minutes, I think — Nicolette, 10 minutes? Yeah, sweet. So we've got about 10 minutes. We can call it if you guys want. I would recommend this, please: go to that GitHub URL and file GitHub issues. If you have any questions, or you think we did something wrong — you're like, hey dude, whiskey tango foxtrot, why isn't this working for me? — remember, you don't have to use OpenShift, you don't have to use Maistra, you don't have to use OpenShift Serverless, and you don't have to use Camel K. So a great GitHub issue might be: hey, I'm using something else, can you show me how to do that? We'll totally get to you, don't worry about it at all. You can ask all your questions there, and they'll be preserved, for better or worse, in perpetuity. Or you're more than welcome to hang out for as long as everybody wants.
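The external-authorization idea above could be sketched with Istio's request-authentication resources, something like the following. This is purely illustrative: the issuer and jwksUri are Istio's own documentation samples, and the selector label and names are assumptions about how the dispatcher might be targeted, not the demo's actual configuration:

```yaml
# Sketch: require a specific JWT identity on calls to a
# channel dispatcher, using Istio sample-issuer values.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: dispatcher-jwt
  namespace: knative-eventing
spec:
  selector:
    matchLabels:
      messaging.knative.dev/role: dispatcher
  jwtRules:
    - issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.20/security/tools/jwt/samples/jwks.json"
---
# Only callers presenting that identity may reach the dispatcher.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: dispatcher-authz
  namespace: knative-eventing
spec:
  selector:
    matchLabels:
      messaging.knative.dev/role: dispatcher
  action: ALLOW
  rules:
    - from:
        - source:
            requestPrincipals: ["testing@secure.istio.io/*"]
```

The point isn't these particular values — it's that identity and authorization live in one mesh-level place, instead of one bespoke scheme per event source.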
Well, at some point we'll probably have to take it to the hall, but I'm more than willing to hang out and talk some shop if anybody wants. Mr. Murphy — specifically the autoscaler? Well, so Knative Serving and Knative Eventing out of the box — all depending on your operator implementation or distro — are not going to handshake securely, right? We don't have mTLS; we don't have a means of governing who and what those identities are, outside of, again, the basic JWT provided by a service account. We probably have something like an OAuth proxy in our Kubernetes distro, and that probably says, hey, get the heck out of here — who the heck are you? This JWT isn't able to do this, right? And we'd probably get something like an Unauthorized at that point. However, we would suggest that's really not enough. We want, one, to not have 12 different ways of doing this: we want one way of engaging in mutual TLS with our Knative services. So from an autoscaler perspective, you're absolutely right — we're not providing any extra autoscaler tricks. We're not saying, hey, don't do what you normally do; go over to Istio and collect some stuff that's sitting in Elasticsearch or wherever we decided to store that data. However, it provides us, one, a consistent, organization-wide way of making that mTLS handshake, and then two — I had a second point — yeah: at minimum, it provides us that initial means of not having 20 different bespoke authorization schemes. But yeah, it'd be cool if it did some other stuff. In fact, we could probably spend the rest of the day talking about things I'd like to see in Knative Eventing specifically around Istio. And yeah — you could replace your HPA or KPA autoscaler, right? There's nothing in Knative Serving or Knative Eventing that says thou shalt use the out-of-the-box HPA and KPA stuff.
So that's maybe how I would approach it: if I wanted some other sorts of metrics — something richer than, let's say, concurrency, or whatever I have wired up in an HPA, CPU or something like that, to scale up and down — yeah, I would probably hook into something that provides that. That could be something that interrogates Istio metrics. It sounds like you have something in mind; I would imagine you could plug that thing into Knative Serving, and that becomes the thing bringing pods up and down. I'm unfamiliar with an appliance that gets us metrics more fine-grained than CPU, concurrency, so on and so forth, but depending on the use case and what we're after, I think that's a very viable thing to do. It should work out of the box-ish — I'm guessing, ish. Yeah. I mean, with the move over to Kourier, we're all in this weird place where we all want to decompose everything, and then — pardon my French — we all bitch about the footprint it creates. Istio being a classic example: we literally went back to the monolith. So them deciding, hey, we're not going to lay out this big huge Istio system, you're just going to have this little plain old Envoy proxy answering things — I think that's an attempt to address that. Knative also had its own build system baked in back in the day — the thing that became Tekton — so it's gone through a few changes that I think are mostly good. Some of the decoupling — the fact that I don't have to use Istio, or I don't have to use Kourier, I could go use Gloo or any number of different service mesh distributions — I think is a good thing. The idea that I'm not forced to use Istio per se, I think, is a good thing. Knative Eventing isn't there yet, though.
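The "replace your KPA" point above is, as far as I know, just a pair of annotations on the Service's revision template. A minimal sketch — the service name and image are illustrative:

```yaml
# Sketch: opt a single Knative Service out of the KPA and into
# the HPA class, scaling on CPU instead of concurrency.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-bus-transformation
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
        autoscaling.knative.dev/metric: "cpu"
        autoscaling.knative.dev/target: "80"
    spec:
      containers:
        - image: quay.io/example/transformer:latest
```

A custom autoscaler that watches richer mesh metrics would presumably plug in the same way: declare its own class annotation and reconcile the revision's scale itself.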
You'll notice from the CRs we were looking at previously that, for Knative Serving, it's just: oh yeah, enable Istio — yay, mTLS enabled, sweet. With Knative Eventing we actually have to go a little bit farther, which you'll notice in the CR I listed: we have a lot more stuff injected. Yeah, so let's see. One of the things we're solving here is that we've changed our paradigm, and we're noticing a significant architectural shift. That implies we can't just say, hey, I used to have a web service here, I'll just throw it over the fence, and now it's over there, cloud native. In fact, we make the case — rather laboriously, if you go click on a link that's in the slide deck — that that's not even kind of enough to get to a cloud-native architecture. We need a lot more. So that demo in and of itself attempts to address that: the PubSub activity over an HTTP-based control plane is an attempt to address it. Now, we have a need, and that is ingress into that HTTP control plane. We can choose something that's not very sophisticated, but that doesn't really set us up for complex enterprise use cases later. Well — that's not quite true. I probably don't have great hooks to use an Open Policy Agent in just Kourier. I could do some Envoy filters and stuff like that, but now I'm into the innards and guts of Envoy, as opposed to using semantically meaningful constructs from Istio that I can then extend out into destination rules, various authorization policies like we saw at the end there, so on and so forth. So that's the problem we're attempting to solve for. If you don't really care about your ingress and what's happening between these components, you don't need Istio — just roll out with Kourier, we're good to go, we've got something that's going to serve up our Knative services.
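The "just enable Istio" bit on the Serving side, via the Knative operator CR, could look roughly like this — a minimal sketch, not the exact CR from the demo (which, on the Eventing side, needs considerably more injected):

```yaml
# Sketch: tell the Knative operator to use Istio for ingress
# instead of Kourier.
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    istio:
      enabled: true
    kourier:
      enabled: false
```

With Istio in the ingress path, the semantically meaningful constructs mentioned above — destination rules, authorization policies, external authz hooks — become available in front of every Knative service without per-service wiring.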
If you don't want serverless — and by that I mean, the scale-to-zero thing is, I shouldn't be saying this on camera, kind of a myth. Our use cases will almost never scale to zero, but we do want them to handle bursty behavior, and that's where Knative Serving becomes quite handy. As Eric mentioned, there are certainly other ways to do that — we have HPA autoscalers that come right out of the box in many of our Kubernetes distros — but we don't have a central, technically consistent means of doing that for just about everything. I've got time for maybe one more question, as long as it's very quick, or we can go out into the hallway if you want to chat more. Any takers? Silence is golden — thank you, everybody. I hope you got something out of that. I'm going to be around, so if you want to chat in the hallway — we probably have to let these guys go — we can just go chat in the hallway if you want. Otherwise, thanks, guys. I hope you enjoy the rest of the conference.
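On that scale-to-zero point: keeping a floor under a bursty service while still letting Knative absorb spikes is, again, just annotations on the revision template. A minimal sketch, with illustrative names:

```yaml
# Sketch: never scale below one replica, burst up to twenty.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-source-integration
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"
        autoscaling.knative.dev/max-scale: "20"
    spec:
      containers:
        - image: quay.io/example/event-source:latest
```

That gives the bursty-but-never-cold behavior described above without giving up the consistent, platform-wide autoscaling story.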