All right, gang, I suppose we'll go ahead and get started. We're a minute over, cool. So real quick, we're at ServiceMeshCon, I'm sure you guys know that. Congratulations on surviving the apocalypse as well. Anyways, this is a talk about Knative Eventing, and more specifically what I'd like to refer to as a Cloud Native Event Bus. Think old-school ESBs, but it's 2021, and we're going to present something a little bit different than a monolithic ESB. We're going to inject Istio into this to attempt to give ourselves some notion of governance that Knative Eventing may otherwise lack. About me: I'm a senior architect at Red Hat. I work in an emerging technologies practice in services. You can check me out at entropic.me/about, or you can go to mikecostly.com, so on and so forth. All right, let's get started. So we've got two sessions as a workshop. We're going to initially introduce the workshop. We're going to talk about the rise of cloud-native architecture and why that's important. We're going to get into what governance is — typically what I would call a bad word, but we'll see why it's important in this particular context. We're going to talk about how Knative provides governance, why that's important in and of itself, and then why we would use something like Istio. We're going to do a Knative deep dive, because we need to figure out how Knative works with Istio — Knative Eventing doesn't work right out of the box with Istio. So we'll definitely want to do a little bit of a deep dive, because we've got some things to do, which you guys will see in the repo I'll point you towards. Speaking of which, if you guys go to github.com — myCoslo, Knative Eventing examples — that's where everything is going to happen. I'll say that again in a second in case you didn't get it. We're going to talk about governing Knative. We're going to do a demo where we actually walk through some of this. We're going to discuss what happened and talk about some of the governance items. Then in session two, we'll come back and ask the existential question: how did it go? Then of course, there are probably some more things we could do with Istio that we're not demonstrating here, so we'll talk about some of those things as well. A little bait on the hook for you guys to come back to session two. All right. So, the workshop intro. Again, as promised, there is that GitHub org again. Please go there and follow the instructions. It's going to take a while — in fact, it's going to take quite a long while. It's going to take so long that we're going to get into theory and talk about why we're doing this to begin with, and then we'll check back in. What are we using? We're using a Kubernetes distro — in my case, OpenShift 4.8. You don't need to use OpenShift. In fact, you could really use any Kubernetes distro, as long as we're talking about a version past 1.19 or something like that. A service mesh distro: in my case, I'm using Maistra 2.x, the Red Hat operator. However, generally speaking, what you'll find is that the primitives we're using are Istio primitives. There's mostly nothing Maistra-specific that's going to happen, so you can probably get away with just about any Istio distro, as long as it's a recent 2.x-era one. We're going to use Knative — in my case, again, OpenShift Serverless. However, you don't need to use OpenShift Serverless; we're only using Knative Serving and Knative Eventing primitives, so really, you could use most recent distros of Knative.
So if you don't have OpenShift available, with all the OpenShift fun, you can really just use, like I said, any Kubernetes distro, most Istio operators, most Knative operators, so on and so forth. Then I'm going to use something to demonstrate Knative Services and how we get on the Cloud Native Event Bus: something called Camel K. It's totally immaterial to what we're doing here. As we'll note, what we're really after are the Knative Eventing primitives, such as subscriptions — we're going to be looking at channels, these sorts of things. So you don't need Camel K; I just happen to be a little too fond of it and it was lying around, so we're using it here. So real quick, let's talk about the rise of cloud-native architecture and why it's important. More specifically, let's take a little trip through history and find out how we got here. Initially, we had mainframe computing give way to client-server approaches. That was pretty cool, if you guys remember the mid-to-late 90s vividly. It was really cool because it democratized compute: you didn't need to go buy a million-dollar mainframe, you could go buy something much smaller. One small problem: these things — at least our initial client-server implementations — tended to be a little heavyweight. What we're talking about are things like fat clients, and it was really quite difficult to go out and distribute software like that. And so what we started doing is moving to more distributed software techniques. This became, as mentioned, unwieldy, right? And some of the distributed techniques we moved to, like RPC, CORBA, so on and so forth, were a little ugly. I think many of us stubbed our knees and toes and got quite bloody because of CORBA — really, really painful stuff if you were doing RMI in the late 90s and early 2000s. So we started producing a variety of tools to accommodate this, right? We started thinking about things like asynchronous messaging, which fit quite nicely with some of the mainframe concepts we'd had in the past. We started coming up with patterns that we noticed emerging everywhere. So 10 or 15 years ago, very buzzy, we came up with service-oriented architecture. We came up with things like enterprise integration patterns, where we would say, hey, we noticed some of these behaviors happen quite a bit, and we actually want to encode this in a practice and advocate usage of the pattern. So that was really cool, right? We got there, but — oh yeah, before we talk about that. Initially, we had these point-to-point remote invocations: thing A calling thing B, maybe web services or various other RPC things, one thing calling the next. We moved away from that, and as I mentioned previously, we moved to something called an enterprise service bus. This was hot, hot, hot in the mid-aughts. And we found, as you guys probably well know, over the next decade or so, that those types of monolithic implementations, where everything is coupled to a central bus, are great and all until we need to change. So we started looking at things like microservices and started advancing our paradigm a little bit more, where we got independent deployment pipelines. We could change a little quicker, a little faster, without necessarily causing ourselves a ton of heartache.
But over the last few years, we all had this ginormous edict — whether it came from on high from our CEOs or it was just something we thought sounded like a good idea — and we began to move to the cloud. That caused us a whole new list of concerns. Once in the cloud, of course, we can't have point-to-point communication per se. We have to expect our infrastructure to fail. There needs to be something to accommodate ephemeral compute, ephemeral storage, so on and so forth. We had multi-cloud and hybrid-cloud desires. Remember, from the little stroll through history we just took, we've still got all this stuff stuck in a data center. We didn't want to go to the cloud and not be able to exploit those things. So we had to start coming up with ways to say, hey, I want to be hybrid cloud. We also probably had some edict from, again, somebody on high saying thou shalt be multi-cloud — they didn't want to get stuck in AWS. They had dreams of being able to go off to Azure and all the rest of the cloud providers. We needed to distribute, and our compute density and efficiency were front and center. When this edict was made — thou shalt go to the cloud — nobody thought we were actually going to end up more expensive. As we decomposed things into microservices, we started to notice a big explosion of compute, and all of a sudden we were getting AWS bills that were quite high. I remember about 10 years ago having a discussion with our CFO, and he was like, hey man, I could have bought a new data center for the run rate we were doing in a quarter. Painful stuff, and that becomes front and center in this move to the cloud. We also want to distribute our architecture across availability zones. What we really mean by that is we don't want to be stuck in a single location. That was one of the problems we always had in the past: our DC goes down, we don't really have a great way to move to some sort of passive or redundant data center, and voilà — bang, there go our SLAs, there go most of our use cases, the business stops. So we started to take on container platforms as a means to abstract this move to the cloud, so that we could deploy things to the cloud that weren't necessarily specific to a particular cloud provider. If I am only using AWS managed services, perhaps I might like to call myself cloud native, but I'm not really cloud native — I'm AWS native. If I take these things off to another cloud and do the multi-cloud thing I'm being asked to do, or even the hybrid-cloud thing I'm being asked to do, I don't quite have a lift and shift; I have something far bigger than that, right? It's quite likely it'll be quite difficult to replicate what I was doing there in another cloud. All right, so what does that mean? We went from point-to-point — we had this RMI stuff happening in the late 90s and early 2000s — then we took on some SOA concepts, right? Web services kind of morphed into these SOA concepts like an ESB; we decomposed that, pardon the pun, and we found ourselves with microservices, right? And now that we've gone to the cloud, we are attempting to take on these cloud-native architectures, and some of the things we wanna take on with cloud-native architectures are things like being able to scale to zero. Remember that resource and compute efficiency thing, right? It became front and center. If I go say, hey, I'm gonna take the monolith and break it into 40 parts, right?
It's very difficult to go say to my boss or my checkbook, hey, I've added tremendous amounts of overhead and my bills are quite high — it's not a great look. We wanna optimize resource usage, and we also wanna avoid arbitrary workload prediction. What we really mean by that is, if I'm dealing with ephemeral compute, if I know that I may lose machine instances, if I know that I may lose an entire availability zone — i.e., what we used to call a data center — I need to be able to handle this, and that is kind of one of the tenets of what makes us cloud native. So what does cloud native mean, right, if this is the architectural jump or leap we're making? I've actually spent quite a bit of time defining this — it's rather laborious; if you need to go to sleep tonight, I would read from this URL, where we get really, really into why we're suggesting there are these cloud-native characteristics. But some of the cloud-native characteristics we need to take on to have a cloud-native architecture are things like elasticity: we need to be scalable on demand. We need to be resilient — remember, we have to survive the loss of an availability zone. We simply can't say, oh well, that's tough, my particular availability zone in US East is down, sorry, SLAs, can't do anything about that. We need to be observable and manageable, right? It's easy enough to say, hey, I'm going to decompose things and put them in the cloud, but if I lose the visibility and observability I had in my more traditional legacy data centers, well, that's not really a good place to be. We also need to be location agnostic. Remember, our compute is ephemeral, things are moving around; we can't necessarily say, hey, I'm going to send this HTTP request to 10-dot-whatever, so on and so forth — that's not really a viable premise. We need to be able to make an HTTP request in a way that's completely agnostic of the physical place that thing lives. We want to be API centric. One of the reasons we want to be API centric — again, this is in the cloud-native integration GitHub URL up there — is that we're in a container platform, we're in something that is API driven, API and event driven. For our things to come and go, they exist in this API-driven, API-centric world, right? So we need our things to do something fairly similar, and also, because things are coming and going, and because this can sometimes happen quite fast, we want well-defined APIs and we need to be able to handle event-driven premises such as asynchronous invocation, so on and so forth. Remember, taking on a cloud-native architecture doesn't just mean, hey, I plop this thing into AWS or GCP or DigitalOcean. If we rely solely on their APIs, remember, as we were just discussing, we've got a big headache waiting for us. We want to abstract that some way. The way we would generally recommend abstracting it — again, we get really into this in that URL — is via a container platform or some notion of abstraction. Whether that is Mesos, Kubernetes, so on and so forth, is really immaterial, right? It's the abstraction itself that's important. I would argue that Kubernetes seems to have won the day in this regard; however, let's not get too hung up on Kubernetes as much as we recognize we need this abstraction. But Kubernetes and containers alone aren't enough. We need a couple of things, right? We need things to care and feed for deployments.
One of the things I used to spend quite a bit of my time doing, once upon a time, was configuration. In fact, generally speaking, we'll find developers who are shipping microservices right now, without relying on cloud-native abstractions and cloud-native architectures, spend quite a bit of time figuring out how to configure their deployments and runtimes to work in this world. All right. So what does that mean, right? We're on this kind of stroll through history and we've come to the present, where everything is moving to the cloud. That next logical iteration isn't just a buzzword — this cloud-native term isn't meaningless; we actually are after something. And one of the things we — or I, the royal we — would argue is that serverless fits this paradigm quite well. In fact, serverless conceptually promises us things like scaling to zero and resource optimization. And again, we want to avoid arbitrary workload prediction. By that I mean I don't want to say, hey, I feel like we need three pods. I want something that actually scales with bursts, ideally algorithmically, right? Based on some common constructs. Cool. So, there are obviously a lot of serverless implementations out there. As you guys probably guessed from the title of the talk, we're going to talk about Knative. Knative is two kind of separate things, one of which relies on the other. The main thing is something called Knative Serving, and that provides us components that enable rapid deployment of serverless containers and autoscaling, including scaling pods down to zero. This is based on sampling, from either a KPA or HPA perspective: every so often it will determine, based on the samples it has, whether or not we've passed some threshold and it needs to scale up, scale down, so on and so forth, or potentially even scale to zero. It has support for multiple networking layers, such as Ambassador, Contour, and Kourier — that's what OpenShift Serverless ships with, but we don't want to use that, as we'll explain later — plus Gloo and Istio for integration into existing environments. We also have point-in-time snapshots of deployed code and configuration. In fact, I could spend the rest of the day talking about Knative Serving; we could get really, really into what that last thing means and why it's important. But if we look at the schematic on my right-hand side — I think your right-hand side too — we'll notice that as we take in revisions to our things that are running, Knative will actually spin those revisions up, see if they're viable, inspect them, see if they actually ran, whether they respond to health checks, so on and so forth, and then bring down our prior revision and bring up our current revision, so that we can engage in this sort of rolling change without really having to do too much. It's baked into the framework. I'll show a minimal example of what a Knative Service with those autoscaling knobs looks like in just a second. So what is Knative Eventing? This is what this talk is actually about, right? Knative Eventing is something that enables developers to use an event-driven architecture with serverless applications. As we take on our cloud-native architectures, one of the things we notice is we need to be event-driven, we need to be location-agnostic, so on and so forth, right?
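Before we dig into Eventing, here's that minimal sketch of a Knative Service with the autoscaling knobs dialed in. This isn't taken from the workshop repo — the name, namespace, and image are placeholders — but the annotations are the standard Knative ones:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                  # placeholder name
  namespace: servicemeshcon
spec:
  template:
    metadata:
      annotations:
        # KPA is the default autoscaler class and is what allows scale-to-zero;
        # swap in hpa.autoscaling.knative.dev for CPU/memory-driven HPA scaling.
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "5"
        autoscaling.knative.dev/target: "10"   # roughly 10 concurrent requests per pod
    spec:
      containers:
        - image: quay.io/example/hello:latest  # placeholder image
```

Every edit to spec.template stamps out a new revision, which is where that point-in-time snapshot and rollout behavior comes from.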
Event-driven architectures — specifically the sort of pub-sub behavior that you see on the right-hand side — are actually quite good at this. In the event-driven architecture that Knative Eventing espouses, we have event producers and consumers that can come and go. They are, of course, location-agnostic; we'll see that coming up. And our dichotomy, if you will, is neatly put as source and sink, right? Something is a source — we'll talk about the things that can be a source in a second — and something is a sink. Our event sources are primarily event producers. An event producer, however, may be something like Kafka, right? So what we do in Knative Eventing is create an event source that interrogates that producer and then emits HTTP. If we look at the schematic on the right-hand side, what we'll see is source one and source two. We have a logical abstraction we refer to as a channel, and then we have a logical abstraction we refer to as a subscription. Our sinks subscribe to channels, and the channels are a logical representation of our event sources. These event sources could be any number of things — I think we get into that in a second — but for the moment, we'll just say they could be something we just created in memory, a broker that lives in memory, or something like a real, actual Kafka broker, or NATS, or any number of different things. Knative Eventing, as well as Knative Serving, uses standard HTTP POSTs as we go create the thing we see on the right-hand side; we're informing the Knative Eventing and Knative Serving components over HTTP. Now you can kind of see where a service mesh might fit in. We've essentially created ourselves an HTTP-based control plane to engage in this sort of cloud-native architectural stuff that we want, like scale to zero, so on and so forth — or, put more succinctly, serverless. One of the things that I think is really, really cool — and I'm going to bang this drum really loud in a few slides — is that the events emitted over this event bus conform to something called the CloudEvents specification. The CloudEvents specification defines the type of message payload that we always have over our event bus implementation. That means that we have a canonical payload. Normally, generally speaking, I would say stay away from canonical payloads — bad sauce, change is really difficult. However, what we'll notice with the CloudEvents specification is that it's really just an envelope that holds any number of different other types. In fact, there's an event registry, so on and so forth, and I think it's a really critical part of knowing what's happening, what can happen, and what can flow across our hybrid and multi-cloud implementations. Cool. So there's a big word at the beginning of the title of this talk — governing — and it implies governance, right? Generally speaking, in software this is a dirty word. We all hate the governance guys who come down and say thou shalt not do everything you're doing because there's some magical governance theory that we've offended. MIT took this on, and I think this is actually a pretty good definition of what governance is. We want to centralize information about our digital initiatives.
So instead of having n bespoke systems all over the place, I actually want to give myself some way to cobble these things together in a centralized fashion. I want to move from centralized to decentralized governance of digital initiatives, meaning I can't necessarily run everything in the same place; I have to allow people to go out and do the things they need to do — these could be any number of different runtimes, different languages, so on and so forth. However, I still want that centralized information piece, and I still want some means of exercising control in a centralized way while handing power over to my developers and letting them do their thing. I want to decentralize ideation but centralize idea evaluation and prioritization. I want to make sure my KPIs are meaningful. I want to avoid siloed solutions, these bespoke things we see all over everybody's enterprise right now, where we have 7,000 things providing auth and just about every sort of way to go about it. We need some notion of technical consistency across how we do these things, most notably in a distributed computing context. We want centralized, consistent, and compatible wire protocols and schemas — or rather, payloads — going over the wire, and we also want some way to do handshakes and that sort of thing in a meaningful way. And of course, the idea behind governance, as this white paper points out, isn't really to say, hey, I've stopped you from doing something; it's rather to get all of the basic building blocks out of your way so that you can go deliver the business use case — or, if you're just having fun, the features you're after — while we deliver software. So one of the first things we might think to ourselves is, well, Kubernetes might be enough in and of itself. I have service accounts, the identities that my pods or runtimes run as; we certainly don't hand out service accounts willy-nilly, generally speaking, right? We also have some notion of RBAC there: we have cluster admins, we have developer roles, so on and so forth. We have OAuth proxies, right? Likely, in our Kubernetes distro, depending on which one you're using, we may go out to an external OAuth provider, or we may construct our own OAuth provider inside the cluster. This gives us a common way of authenticating our runtimes, ensuring they can actually do the things they wanna do in Kubernetes. We have some ingress governance, right? We've gone beyond the node port at this point; we're not just saying, hey, go find a port on one of your nodes and hook up to it — we have meaningful ways of doing ingress. We have mTLS capabilities, TLS capabilities, so on and so forth. We have centralized monitoring and logging capabilities in most distributions, right? Remember, some of the things that white paper suggested we should be after are mostly taken care of there. And we have an event-driven API control plane, all over HTTP. Generally speaking, we have some technical consistency here between our RBAC approaches, how we do things over the wire, who gets to do what, and how things authenticate and authorize with each other. But this provides a pretty basic level of governance and only rudimentary notions of authentication and authorization from a runtime perspective, right?
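To make that concrete: the runtime-level governance Kubernetes gives you is basically service accounts plus RBAC, something along these lines. This is a minimal sketch; the role, binding, and service account names are made up for illustration:

```yaml
# Let one service account read Knative channels in one namespace, and nothing more
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: channel-reader                 # hypothetical role name
  namespace: servicemeshcon
rules:
  - apiGroups: ["messaging.knative.dev"]
    resources: ["channels"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: channel-reader-binding
  namespace: servicemeshcon
subjects:
  - kind: ServiceAccount
    name: event-consumer               # hypothetical service account our pod runs as
    namespace: servicemeshcon
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: channel-reader
```

That governs who can touch the Kubernetes API, which is useful, but it says nothing about what those workloads do to each other over the wire.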
So for instance, a classic example of this is: well, yeah, my service account presents a JWT to the next guy. And the next guy may be hooked up to an external OAuth provider, and I may have been pretty good and got my service accounts in there. So yeah, I'm authorized, and we had something do that in a meaningful way. But the thing that I'm actually doing in my pod or my runtime is hooking up to NATS or Kafka, or I'm hooking up to a database, right? That level of authorization and authentication is not addressed by Kubernetes alone — we need something more than that. We have bespoke communication protocols, schemas, and standards. There's nothing per se saying that the things traveling over the wire will necessarily do anything but adhere to the Kubernetes API, if they wanna do Kubernetes stuff. But as I go, let's say, from one system to the next, there's nothing saying that I will be carrying a certain particular payload. There's nothing saying that I will adhere to any particular standards other than maybe some basic TLS-type stuff, so on and so forth. Generally speaking, I have bespoke runtime visibility, right? I see this nonstop, almost everywhere I go and talk to people. I'll notice that they've got some different means of monitoring just about every system that they have, right? Across their services, generally speaking, one group may be using Prometheus, one group may be using AppDynamics, so on and so forth. What we'll notice in many Kubernetes clusters is that this notion of runtime visibility is all over the shop. And then we also have bespoke care and feeding. We'll notice that some people are just doing a plain old Kubernetes deployment; some people are using an operator, and maybe that has some level of maturity in how it cares and feeds for things, right? But it lacks total consistency — there's no one way that we go about this. So we know that Kubernetes in and of itself isn't enough to go after those governance characteristics that our white paper stated. Well, another question — I probably didn't prime the pump enough for you guys to answer this one — that we probably have at this point is: is Knative enough? Well, Knative kind of is, right? Like, we probably get a good deal of governance just from Knative. We have service routing. We have revision visibility, right? We have load balancing, blue-green, A/B out of the box. This is all happening in a centralized way, in a technically consistent way; many of the features that we'd even get out of Istio are sitting there in Knative. We have autoscaling via common means, an HPA and a KPA scaler. I can definitely allow people to instrument HPA or KPA in ways that they want, right? But, generally speaking, I've at least said, hey, here are two constructs that we'll allow into our organization. I have central ingress into services. I have — we'll get into this in a second — an activator and an autoscaler; these things are actually gonna talk to my services, and that is how traffic is getting directed there, at least some of the time, right? And I also have the CloudEvents specification; we'll get more into that in a second — I promised you guys I was gonna bang that drum. But we just don't have enough to complete the governance picture here either. Something else is needed to guarantee mTLS between components.
It may happen to be the case that your Knative operator distribution has some means of wiring up mTLS, but you've probably got something else for the next thing you're doing, and something else for the thing after that, so on and so forth. We don't really have a consistent means of doing this, and we certainly don't have a centralized way of doing this. It just becomes another bolted-on appendage — Frankenstein's silo appendage. Service invocation between our components doesn't apply any authentication or authorization, so we're kind of left to ourselves to handle whether or not something can or should call us. We need something else for visibility into the performance of our components; we could hook up something like Prometheus, so on and so forth, but in and of itself we simply don't have enough there. Oh yeah, and Knative in and of itself does not provide a set of rules for who gets to do what, per se. Knative in and of itself will say, hey, you get to hook up to the event bus, you can send, and as a result things can go over here. There's nothing out of the box to say, hey, I laid down a particular channel implementation and I'm actually authorized to do that. So, one thing we do have — and this is the drum I keep promising to bang — is the CloudEvents specification. This is a core governance capability that we want to take on in our cloud-native architectures. With the CloudEvents specification — I think I described this a little bit previously — we describe event data in common formats to provide interoperability across services, platforms, and systems. We know what's going to come in our CloudEvents envelope. Even though we may have Avro, we may have Protobuf, so on and so forth, we may just have JSON, or heck, in some cases we may just have an encoded string, we have something around that payload — not to use the SOAP term, but an envelope — that describes the payload being shipped between these services. This is the canonical payload of Knative Eventing: everything is a cloud event. As we noticed previously, this could be AMQP, this could be Avro, JSON, Kafka, MQTT, NATS, WebSockets, Protobuf — the list keeps going. What this allows us to do, because of the event registry in Knative Eventing, is say, hey, we have particular payload types that are associated with particular channels. For instance, if I have a NATS channel, I'm going to have a cloud event spec type for NATS. This lets us move away from these bespoke, incongruous means of communication over the wire and say, hey, at minimum I have governance, I know what is going over the wire, and I shouldn't have to ask, hey, why are you guys doing this really strange thing over HTTP — because it's not possible, thanks to our Knative Eventing constructs, specifically the event registry. So why not just use Knative Eventing on its own, right? I kind of mentioned it doesn't take care of everything, but more specifically: the default ingress for Knative is Kourier. Kourier is a plain old Envoy proxy, and it lacks a lot of the advanced capabilities that Istio has. For instance, there is no concept of a destination rule, so on and so forth. Yeah, we could probably ship an Envoy filter there, but there's a whole lot of constructs and primitives from Istio that we simply don't have. We're not wiring up mTLS out of the box — remember, something else needs to handle that — whereas Istio, depending on how we go about things, will handle it out of the box for us. Knative Eventing also leaves some governance pieces wanting.
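To make that "out of the box" concrete, this is roughly the kind of mesh-wide policy we're leaning on Istio for. It's a sketch: the control-plane namespace and the policy name are assumptions, and your namespaces will differ:

```yaml
# Require mTLS for every workload in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system              # control-plane namespace (assumed name)
spec:
  mtls:
    mode: STRICT
---
# Only let workloads from knative-eventing call services in our application namespace
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-from-eventing            # hypothetical policy name
  namespace: servicemeshcon
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            namespaces: ["knative-eventing"]
```

Nothing comparable exists in Kourier, and with Knative alone every channel or source would have to solve this for itself.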
While many of the Knative Eventing sources provide means of authentication and authorization, it's bespoke by component. By that, what I mean is: sure, I can go talk to my Kafka brokers if I lay down a Kafka channel and configure it with some sort of auth, right? And, all depending, if I've wired up Kafka correctly to handle the thing that's calling it, I may have some notion of authorization there. But I have no consistent, centralized means of going about this; the next time I do this, it's going to be something different, so on and so forth. What we'll notice is an explosion of these bespoke activities across our enterprise without something like this in place. All right, and — oh yeah, some of the things we have in Istio are advanced concepts that we won't show here, but we will make a recommendation about later. There's no advanced authorization, such as OPA, right? We aren't able to leverage any of those things with Knative alone; we're kind of stuck with what we've got. Or we can go say to, let's say, AMQ Streams or Kafka: hey, you go figure out the OPA thing based on the OAuth token and JWT that was provided to you. Well, again, that's going to be the same JWT every time, because it's coming from the same Knative component. I can't really do much with that, right? I certainly can't give myself the ACLs that I would like to have in something like Kafka — again, a topic I could go on and on about. So let's do a Knative deep dive, right? Because that's what we're here to do — we're here to talk about Istio and Knative together. So, I'm gonna do this; if you guys will notice, maybe I'll come down here and point to some stuff. Hopefully my screen doesn't go blank on us. Here is kind of the lay of our land, right? We have a little legend over here, maybe somewhat useful, but here's what's really going on. We have an ingress gateway in our particular case; if we wanna expose ourselves to the outside world, we're probably creating a virtual service in Istio. The things that are really, really important in Knative — and this is a look at Knative Serving pretty much exclusively, but you'll see quickly why it applies to Knative Eventing — start with the activator. When an HTTP call comes into my ingress gateway, for instance here via Istio, one of the first things it's going to do is hit the activator, assuming I'm scaled to zero. That means it's gonna start bringing instances up. Our controller and dispatcher will continue talking to our autoscaler as well. So the activator is going to bring stuff up; it may say, hey, by the by, autoscaler, I'm at zero, I'm at one, so on and so forth. You'll notice that we also have a KPA represented here. What that means is I've got some metric, like concurrency or something like that, that actually drives this scaling for me. We're constantly referencing this deployment while our autoscaler is running, and as you can see here, we're gonna push metrics to the autoscaler, and the autoscaler will bring these guys up and down based on those metrics. Inevitably, as the activator and autoscaler are doing their thing — hey, do you need to come up? hey, do you need more guys? — these HTTP requests are sent into our actual deployment, and that's gonna be actually doing the stuff that we wanna do.
In our case, doing the stuff that we wanna do may be something as simple as a hello world — hey, I'm a REST service — or something a little bit more complex, which we'll show later. So, the Knative Serving components. We have our activator; again, that's what's responsible for receiving and buffering requests for inactive revisions, and it reports to the autoscaler, as we mentioned. We have the autoscaler, which is going to take in our metrics and adjust the number of pods required to handle the load of traffic, right? We can go up and down. The service controller is gonna reconcile the CRs that are coming in and do something with them. If I say, hey, here's a Knative Service, here's your image, so on and so forth, the service controller is gonna be the thing that says, okay, cool, I know what to do with that, and in Knative Serving it's gonna go ahead and create some revisions, test them out, mark them viable or not viable, so on and so forth. By that I mean: in Knative Serving, if I roll out a change and that change does not work, Knative Serving isn't gonna stop answering — it's just gonna maintain the current revision, right? Which kind of gives us a notion of a canary release out of the box. We also have a dynamic admissions webhook. What this does is take in that initial CR and say, hey, you gave me a Knative Service thing, and that's cool, I'll admit you — sorry, let me back up. There are two parts to our webhook: we have an admissions configuration and we also have a validation configuration. The admissions configuration will say, hey, okay, cool, I got the CR from the Kube API; yeah, you're allowed to do this, you're a particular JWT from your particular service account, that's cool. The validation configuration will then say, hey, wait a second — you kind of screwed up the configuration of your Knative Service, I'm actually gonna reject you. The CR will not be successful in the Kube API server; it will not get an admissions response that is meaningful and tells it, hey, I need to launch this pod. We're gonna start going a little faster because we're running out of time, and I wanna get to at least part of the demo before session two. This will help, though. So, Knative Serving integrates with Istio out of the box. What we'll see here is that right under the spec part of our YAML, we have Istio enabled, right? However, if we just remember what we were looking at a couple of slides ago, there are a few things that have to happen after ingress: I have to go to the activator, and I've got the autoscaler likely also involved, right? So for those to participate in the service mesh, we actually need to inject a proxy there as well. I'm, like I said, using Maistra. Generally speaking, this is a good approach, but we wanna opt into sidecar injection — we don't just wanna label a namespace and have sidecars injected all over the place. There's a reason for that: there's a bunch of other stuff in Knative Serving that doesn't actually need a sidecar injected, and we certainly wouldn't wanna add that sort of overhead, so on and so forth, to everything going on there unless we need to. So that was Knative Serving — and before we move on to Knative Eventing, here's roughly what that Istio wiring looks like.
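This is a hedged sketch of that Serving-side wiring — the exact CR layout differs a bit across Knative Operator and OpenShift Serverless versions, so treat it as illustrative and check the workshop repo for the real files:

```yaml
# Tell Knative Serving to use Istio (rather than Kourier) as its ingress layer
apiVersion: operator.knative.dev/v1beta1   # may be v1alpha1 on older operators
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    istio:
      enabled: true
    kourier:
      enabled: false
```

For the activator and autoscaler themselves, the opt-in is just the standard injection annotation on the pod template, applied through whatever override mechanism your operator provides rather than a namespace-wide injection label:

```yaml
# Strategic-merge patch for the activator's pod template
# (the autoscaler gets the same treatment)
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
```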
As you guys can see, it's good stuff, but we need to do a little bit more with our operator and our CRs to get to the point where we're using Istio, and Knative Eventing is what we're actually after. Knative Eventing is going to create a pub-sub construct for us based on this notion of a broker and a trigger. A broker, logically speaking anyway, is going to be something that might just be an in-memory broker, or it could be something like Kafka or NATS, so on and so forth. The trigger is what actually makes our subscription to this particular broker. The broker will define what the backing channels are that we'll be looking at — you'll see a better example of that in a second on the next slide, right? And we have a filter here: we may not want all the events being emitted out of this broker, and that's where our last few slides come in. Once we say, hey, I've got a subscription here going to your broker — maybe I'm filtering, maybe I'm not, maybe I just want you to give me all events — either way, that is actually going to call our Knative Service, right? And that is what we've just deployed. That particular Knative Service should be injected with a sidecar. Remember, we've also got a few other things interacting with it that are also injected with a sidecar and are performing mTLS and potentially authorization policies — all the goodness we would get out of Istio. So in the most simple sense, what that really implies is this kind of simple source-to-service delivery. What we're going to see in our demo is something a little bit more — we're not going to get too complex, sorry, but a little bit more complex. So that source — the broker, what it really is — actually, I shouldn't say that. The broker is defining what the channel implementation is; an event source is the thing that's emitting things into the broker, or into our channels. You'll notice here that we have a subscription, which we talked about previously. Logically and physically this is a trigger resource, but we also have the logical construct of a subscription. You can't wire one up without a trigger, but the two things aren't necessarily the same thing. What happens as a result, as our event source is emitting these things, is that our broker has wired up our particular channel in a namespace to handle things in a certain way. For instance, this could be, again, an in-memory channel, it could be a NATS channel, it could be a Kafka channel. The subscription is going to say, hey, you're my sink service, I'm going to subscribe you to the thing that's publishing events, and inevitably I have my source-and-sink dichotomy, or my pub-sub dichotomy, as well. Again, it's really, really important to note that this guy here is in our mesh and needs to be injected. That means that some of these other things probably need to be in our mesh and probably need to be injected too, as they can't talk to each other or complete the handshake without being injected. That's a pretty important thing, because we don't necessarily want just anybody being able to emit anything to any particular service. And if you guys will remember our governance talk, we really, really don't want to have 10,000 bespoke ways of doing this — it would be great if we had one technically consistent approach. Here's roughly what that broker, trigger, and subscription wiring looks like as YAML, and then we'll look at a physical view of what's happening in Knative Eventing.
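This is a minimal sketch with made-up names, using the in-memory channel as the backing implementation. The filter is optional, and the subscriber is the sidecar-injected Knative Service we keep talking about:

```yaml
# A broker backed by the default (in-memory) channel implementation
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: servicemeshcon
---
# A trigger: this is what physically gives us our subscription to the broker
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: transformation-trigger             # hypothetical name
  namespace: servicemeshcon
spec:
  broker: default
  filter:
    attributes:
      type: dev.example.order.created      # hypothetical CloudEvent type; drop the filter to receive everything
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-bus-transformation       # our sink: the injected Knative Service
```

Under the covers the broker lays down a channel and the trigger gives you the subscription — the same source-and-sink dichotomy we just walked through.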
Hopefully this is big enough for you guys to see, but again, we have our event source. Here we have Knative Serving, and here we have Knative Eventing. In Knative Eventing, I'm going to wire up a dispatcher that's going to talk to my event source. That's going to create my logical broker construct, which exists in a particular application namespace, right? When we lay a channel into our application namespace, that's actually going to call that dynamic admissions webhook I talked about, which is then going to say, hey, yay or nay, you're good to go. It'll let this guy in, and then our controller is going to start doing some stuff. Our subscription is based on the things that the controller has decided to do. And again, our dispatcher is going to do a little bit more than just talk to our event source: it's actually going to come over here into our Knative Serving ingress. The ingress controller then, at that point — this is drawn a little bit incorrectly, it should be like this — is going to activate, or potentially just send the traffic on to the service, right? And the autoscaler will also be talking to our Knative Service. This guy is going to be injected as part of our service mesh. And remember, we injected both of these guys, the autoscaler and the activator, because this guy is talking to this guy as well. So we know that the thing that showed up and talked to our Knative Service didn't just come out of nowhere; there was a handshake that was governed by us, and we did that in a centralized way via Istio. All right, cool. But Knative Eventing, unfortunately, unlike Knative Serving, doesn't work right out of the box. There's a bunch of different things we need to do. We need to inject some sidecars here. Notably — and this is well covered in the repo that we saw earlier — we need to inject our eventing controller, we need to inject our eventing webhook, so on and so forth. We have a couple of different flavors here of controller and dispatcher; specifically, in this particular CR, our in-memory broker is being injected. I'll show a quick sketch of what that injection boils down to in a minute. So this all comes — I think I'm gonna have enough time for this — this all comes down to our demo. What our demo is attempting to do is really, fully take on those cloud-native constructs that we saw previously. I'm sure everybody here is familiar with Alistair Cockburn — if you're not, Google him immediately and read everything he's ever written. This is something we refer to as hexagonal architecture, and that's why it's depicted in this fashion. Hexagonal architecture is also called ports and adapters, meaning that the way we would like to construct our software architectures, according to Alistair Cockburn, is with adapters out here on the outside and ports into the things that actually happen in our business. So here we'll notice we have event stream processors, we have an event sink, and inevitably we have an event store in the middle there. That is where our business is happening, right? The event store could be something like Kafka — heck, maybe it's a database, so on and so forth. But this is what we're after with this particular demo. So real quick, let's do it. I'm a CLI guy, as I hope you guys are aware. Can everybody see this, or is it too small? Pretty small, maybe too big.
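Before we dig into the cluster, for reference: that sidecar opt-in for the eventing components boils down to the same annotation we used on the Serving side. A sketch, using the in-memory channel dispatcher as the example — in the workshop this is applied through the operator's CR rather than by hand-editing deployments:

```yaml
# Strategic-merge patch for the imc-dispatcher pod template in knative-eventing,
# so it can complete the mTLS handshake with our injected Knative Service.
# The same idea applies to eventing-controller, eventing-webhook, and imc-controller.
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
```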
Cool, so if you've been working along and doing the demo, you'll notice we've got a few things here. Well, let's start maybe a little more simply. Sorry? Oh yeah, sorry — I'll stop being a Red Hatter; it's the origin client, oc. Cool, here is my Istio system namespace. You'll notice I have a few things wired up. We're using Istio 2.x with Maistra, so you'll notice istiod there. In my particular case, I called it knative-governance. There are a couple of differences with Maistra versus other Istio distributions; we have a couple of different objects, such as a ServiceMeshControlPlane. You'll notice this is Maistra-specific, but you can easily get to the same place with istioctl. What this does is a few different things. It wires up the things that we see here — the ingress gateway, Jaeger, Kiali, so on and so forth, Prometheus as well, and Grafana. We also have a few other things going on in the background: Maistra will go out and lay network policies down for us. If you get into the nuts and bolts of the actual workshop, you'll notice that we actually need to go lay some things down ourselves, so on and so forth. It also defines something called a ServiceMeshMemberRoll. Again, this is a Maistra-specific concept, but we don't need Maistra to get there — we could easily get there with istioctl and network policies, so on and so forth. Essentially, the ServiceMeshMemberRoll lists the namespaces we've said, hey, you are a part of our mesh. In our case, we have a few different things: AMQ Streams, Knative Eventing, Knative Serving, and our application namespace for this particular demo, which is gonna be servicemeshcon — as you guys can tell, I'm quite creative. Cool, so let's take a peek at what we have in Knative Serving. Well, as we talked about previously, we have an activator, we have an autoscaler, and there are a few other interesting things going on here. We'll notice that we also have an Istio webhook, right? Because remember, we didn't just take full-on Knative Serving out of the box with Kourier — we said, hey, we want you to use something else for ingress, we want you to use Istio. So we'll notice here that we have an Istio webhook, which is also going to provide these sorts of admission and validation controls over the things that are allowed to come in, right? We've got some networking things being done, as well as our standard Knative webhook. Noticeably, our standard Knative webhook is not injected. That doesn't mean everything can get to it — this is one of the drums we bang in the actual instructions — but what you may need to do, if you're using another Istio operator, is go lay down some network policies. What we'll notice here is that we have a bunch of different network policies. These are all laid out by our particular Istio operator, and what they say is, hey, some things can come here, some things can't. Generally speaking, that ServiceMeshMemberRoll object we just looked at is going to define the things that can talk to each other. We've actually blocked all traffic from outside of this namespace unless you're labeled in a particular way, right? We'll depend on Kubernetes RBAC to enforce some notion of governance there — hopefully not everybody in your org can label a namespace; if they can, you've got problems. All right, so we're up on the end of the first session.
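One more reference before the break: the member roll we just looked at boils down to something like this. A sketch — the control-plane namespace and the member namespace names are from my cluster, and yours will likely differ:

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default                    # Maistra expects the member roll to be named "default"
  namespace: knative-governance    # the demo's control-plane namespace (assumed name)
spec:
  members:
    - knative-serving
    - knative-eventing
    - amq-streams                  # assumed namespace for the AMQ Streams / Kafka install
    - servicemeshcon
```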
In the second session, we're actually going to see more of the demo. We'll walk through Knative Eventing — what we did there and why we needed to do those things in Knative Eventing. Maybe we'll do that real quick. Cool, and real quick in our last minute, we'll notice that we have a bunch of stuff in Knative Eventing that's wired up with Istio as well — injected, right? If we remember the schematics we were looking at, when we get an invocation via an event source, it's going to hit these guys first, right? Because we've injected these guys, and because we're all members of the service mesh, we'll actually be able to call each other. So what we'll see, inevitably, is that we have two Knative Services — give that a second, and it should spit out some stuff. Yeah, that's fine, I think. Cool, that took way too long. So, those two integration things that I just spoke about — as I said, I'm using Camel K, it's just making my life a little bit easier — are these two Knative Services. These are going to be sitting in our servicemeshcon namespace. What we'll notice right now, which we just saw up here, is that one of them is scaled to zero and the other one is scaled to one. We're over by a minute, so I probably should stop, but what we'll do when we come back is actually get rid of that event-sink integration and start it back up; that'll start sending messages to our channels, and then we'll notice that event-bus-transformation integration scale up from zero, right? All depending on how much traffic we're bringing in. And then we'll go look in Kiali — you'll notice, or actually, scratch that, we'll go into Jaeger and notice, hey, this actually got the IMC dispatcher called, the activator called, the autoscaler called, and it came back and called the Knative Service. Voilà — governance, so on and so forth. Anyway, guys, please come back for session two, because we'll be out of theory and into the actual bits a little bit more, and we'll also talk about some next steps once we get past these basic auth concepts. Cool, thanks guys, and see you later.