Oh, it does, okay, you guys can hear me in back, right? Very loud. All right, so my name's Doug Davis. I work for IBM. I am the, it's a little loud, I'm the offering manager for Knative Public, part of IBM Cloud. And I'm here obviously gonna be talking about Knative. So let's go ahead and get started. We got a lot to cover. So in order to talk about Knative in terms of, you know, why it's useful and why it was created, it's interesting to understand a little bit of history in terms of how we got to where we are today. Now, obviously everybody knows about, you know, cloud native technology in general. Let's just talk about some of the highlights here. So obviously, cloud native is all about, you know, containers for the most part, breaking up the monolith. Why? Not just because it's a cool phrase, but because you get better resource utilization by being able to scale just parts of your application. You have your teams focus on individual components as opposed to the entire monolith. And of course, by breaking it up, you're supposed to be able to reduce your costs, because then you don't have to scale the entire thing, you only need to scale one little aspect of it. And you can better utilize what goes on inside the actual virtual machines or the hardware itself. Now, at the same time though, we had this notion of abstraction of infrastructure, right? So as you look at this chart on the right-hand side, I know it's a bit of an eye chart, you might not be able to read it, but it goes from bare metal to virtual machines to containers to functions and serverless. So as you move up that chart, right, you get this notion of moving further and further away from managing the infrastructure, right? And things become more abstract for you. 
So as you move further to the right, away from bare metal, your developers should hopefully be able to focus more and more on writing code itself and not managing the infrastructure, because the infrastructure gets abstracted away from you. And of course, that means your developers can focus on writing code, not managing infrastructure, which gives you faster time to market, which should obviously result in more money for you as a company. But in terms of actually using this technology, you have a couple of choices presented to you. There's Platform as a Service, something like Cloud Foundry, obviously a very high-level abstraction for 12-factor applications. You have Containers as a Service, Docker, Kubernetes, that kind of stuff, right? So exposing the container technology as a native thing to you, whereas Cloud Foundry and Platform as a Service typically would hide that even if they use containers under the covers. And then you have Functions as a Service type of platforms and serverless. That's things like OpenWhisk, OpenFaaS, stuff like that, okay? All of these technically use containers under the covers, but which one you actually use is kind of up to you and depends on your philosophy in terms of how your application's gonna be deployed, right? If you're thinking, oh my gosh, I should use a PaaS because it's a 12-factor app, I should be looking at Cloud Foundry; or if I need more complexity and I need to be able to manage that low-level infrastructure stuff, maybe I should be looking at Kubernetes, right? You've got these choices put in front of you and it's kind of, in my opinion, almost a false choice, okay? So let's look a little bit more at why it's a false choice. So let's start out with Platform as a Service. So with Platform as a Service, I'm thinking Cloud Foundry here for the most part, you have things like a simplified user experience, right? 
Very simple command line, they hide most things from you, it's really great. They use containers under the covers. It's microservice-based, right? Every single application deploy should hopefully be relatively small in nature, right? You're doing microservices. It's supposed to be stateless. These are containers, they're ephemeral. As you deploy your application, it manages the load balancing endpoint for you, meaning you deploy your application, you'll get an endpoint that you could hit right away. You don't have to manage any of that stuff, it does it all for you. A lot of these things will actually do build. You provide the source code, they'll build it into a container, deploy it for you, great. If you're in a public cloud, you pay for usage, right? You're not paying for the entire infrastructure if it's not being used. On-demand auto scaling, a lot of these will auto scale as the load goes up so you don't have to think about it, right? And these are all wonderful things. They manage all these wonderful things for you. However, what they do do is they hide the infrastructure from you. So you don't necessarily get access to some of the advanced features, like volumes and networking and stuff like that. Now, Cloud Foundry, I know it kind of does, but it feels a little bit like an add-on. But in most cases, Platform as a Service is meant to hide these things from you, right? Give us your app, we'll deploy it for you. Magic happens under the covers. This is, by the way, just my opinion. I know your mileage may vary, but take it for what it is. So let's then compare that with Containers as a Service and Functions as a Service, starting first with Containers as a Service. So as you go down the list, what you'll notice is that there are some similarities here. Obviously containers, microservices, stateless, that kind of stuff. 
But some of the other functionality, things like endpoint load balancing, on-demand infrastructure auto scaling, while they are there in the environment in some way, you have to kind of do it yourself. It's not gonna be a self-managed thing for you, right? This isn't just magically there. And I would definitely not say, at least in the Kubernetes case, that it's a simplified user experience, and they definitely don't have build built in. You can add it, but it's not there natively in Kubernetes, okay? So that's Containers as a Service. Now let's look at Functions as a Service. What's interesting to me is Functions as a Service is actually very similar to Platform as a Service. There's a lot of similarities there, right? They try to make a simplified user experience, microservices, will do build for you in most cases, right? You give it a little snippet of code, it turns it into a function, deploys it. It's wonderful. And again, they hide the infrastructure from you, which is great. Now platform, I'm sorry, Functions as a Service also has some other things going for it that you may not initially get from Platform as a Service. For example, it's meant to be pretty much event-driven, right? It's supposed to be on-demand processing, right? Something happens, you get an event coming into the system of some kind and then you react to it, right? Scale to zero: many Functions as a Service or serverless platforms will scale down to zero where Platform as a Service may not, or at least usually does not. Meaning if you're not actually invoking the service, you don't have a single instance running, so you have zero cost in terms of CPU and usage of the infrastructure at that point. Asynchronous invocations: Platform as a Service doesn't have this, nor does Containers as a Service by default, right? You have a request going in, it processes in the background, you can query its status and results later, asynchronous processing. 
Now what's interesting is when you compare that back with Platform as a Service and Containers as a Service, Functions as a Service actually has some restrictions, right? You're not necessarily able to run your functions indefinitely, right? Take platforms like Amazon Lambda, OpenWhisk and stuff like that, they actually limit the amount of time that you can run these functions or these services. Platform as a Service and Containers as a Service can run pretty much indefinitely. Likewise, memory usage, right? It's not a free-for-all, right? They limit what you can do because they're trying to optimize their particular infrastructure to manage this very dynamic notion of running functions in a very quick fashion, right? So you don't have the freedom anymore that you get with Platform as a Service and Containers as a Service. So when you look at these choices as a developer, where you wanna host these things, you have to basically choose which things matter most to you, right? And again, I assert it's a false choice. You shouldn't necessarily have to make this decision and that's where Knative is gonna fit into the picture. However, before we get there, let's talk a little bit about Kubernetes, because Kubernetes has kind of taken over the world in terms of Containers as a Service, which is great, Kubernetes is wonderful, right? It's the most popular container management platform out there. It's advertised as a platform on which you deploy and manage your containers. That sounds wonderful. However, as I talked about earlier, it is not the simplest environment to actually manage, right? You have to almost become an IT expert at that point because here are all the various resources that Kubernetes exposes to you, right? You've got Pods, containers, ReplicaSets, Deployments, Services, Endpoints, Secrets, all that stuff to go along with it. You have to learn, you know, JSON, YAML, spec versus status inside the resources. 
You gotta start using other tools like Helm, understand the command line, kubectl, possibly bring in other technologies like Istio to manage the networking mesh for you. If you wanna do more complicated things, blue-green deployments, sure, it can do it. You gotta manage it yourself, though, right? This is not trivial for somebody, right? Wonderful from a technology perspective, but it's not easy for someone to wrap their head around, especially if they're coming from someplace like Cloud Foundry or even Functions as a Service, right? So think about what we've been promising people in terms of this abstraction, right? As you move closer to the right of that diagram I had over there, right? Closer to the right, you get more abstraction, developers focus on code. I would assert that with Kubernetes, you've actually gone backwards in this, right? Because you're now exposed to the guts and glory of everything because you want that flexibility and those features available to you, right? And that's wonderful, but it's all very, very complicated. And all I wanna do is deploy my code as a developer. So let's talk about Knative. So Knative is an opinionated and simplified view of application container management. What that really means is it allows you, the developer, to go back to focusing on writing code, but still allows you to leverage Kubernetes under the covers. So you still have Kubernetes there with all the features, it's just going to expose it in a more user-friendly fashion. However, because Kubernetes is still there under the covers, if you want access to Kubernetes features, they're still available to you. So the hope here is that 80 to 90% of the use cases out there are actually covered by Knative natively. And you only need to go around it the other 10 to 20% of the time to get to the more advanced features. And because of that, what you can also do is you can integrate your Knative applications or services with the rest of your Kubernetes workload. 
So you're not forced to necessarily choose a Knative-only world. You can deploy all your other stuff with Kubernetes if you really need the advanced features, but it can integrate and talk nicely with Knative applications because it's all running on Kubernetes. So in essence, what you get is the best of both worlds there. Now it's important to point out that, for those of you who are old enough to remember when Kubernetes was first presented to us, it was advertised as a platform on which to build platforms. It was not supposed to be the thing that end users interact with. And of course, we know that's not necessarily true. Everybody talks directly to Kubernetes and that's just the way it is. Knative is being advertised in a similar way. It's being advertised as a platform on which to build a serverless or function platform. However, as of right now, people still talk directly to it. So whether that plays out in the future or not, who knows. But it's interesting that Knative became one of those platforms that's supposed to sit on top of Kubernetes. So we are actually seeing a little bit of the truth in advertising come out here. Okay, so with all that introduction, let's talk about what is Knative. So Knative is made up of two components. First is the serving component. Obviously, that's the component that's going to host your application, okay? It's basically gonna use pods under the covers. As I said, it's still using Kubernetes. And we'll show a demo of this in action. The other component is eventing. Now eventing is basically a set of tools that allow you to manage the eventing infrastructure of your application, right? It gives you tools for subscribing to event producers, for managing the events when they get delivered into the Knative environment, and how they get sent between your various services. Basically, just a whole bunch of tools for you to sort of string these things together in kind of a workflow type of operation. 
And we'll demo that in a little bit. So let's talk about Knative serving in a little more detail. Now in serving, your application is called the service. It's a little bit of a poor choice of words because in Kubernetes, you already have the notion of a service. So this is something different. We have yet another notion of service. So don't get the Knative service confused with the Kubernetes service. So you deploy your application as a service. Now, every service has revisions. You can think of revisions as just versions of your application, right? As you make changes to your application, whether it's a change to the image itself or a change to the configuration, a new revision will get created each time, okay? So you deploy your application, you get revision number one. When that happens, Knative serving will automatically set up the networking for you, which means it gives you an endpoint that your users can hit immediately, and the incoming requests will get routed to that particular revision. So automatic networking management for you. Now, as your user base increases and you get more load on it, Knative will automatically scale your revision for you. So it will scale up as more hits come in and it'll scale down to zero if no one's hitting it at all, right? Then if I make a change to my application, say I change the configuration or point the image at version two because I made an update to my code, Knative will spin up a new revision, revision two in this case, and it'll automatically reroute that endpoint to revision two in a rolling upgrade fashion so there's no downtime for your application. It waits for the application to be ready, slowly migrates traffic over, okay? All this magic happens for you under the covers. As a user, all I've done is deploy my application and then version two. The rest is magic. Now, with that though, there are some tools available to you. For example, you can do traffic splitting. 
So earlier I talked about how when revision two gets pushed out, all traffic goes to revision two. That's true. I can tell it though to only send part of the traffic. So in this particular case, I can tell it to route only 10% of the traffic to revision two, 90% to revision one, so that I can test out revision two. And then once I feel more comfortable that it's actually stable and works, then I can slowly migrate up and get 100% going to revision two. The choice is yours as a developer. Finally, you can also have revisions that aren't part of that entire networking scheme. So I can actually deploy revision three, again by making a change to my application in some way, and tell it, don't participate in this load balancing stuff, but give it a dedicated tag, in other words, a dedicated name. And it will give it a dedicated endpoint that I can hit. So the scenario here is, imagine you deploy revision three, you don't know how good or bad it is at all, and I definitely don't want my customers to see it. So I tell my customers about the top endpoint. I tell my test team about the bottom endpoint. So my test team can hit that revision three all they want to make sure it looks okay. Then once it starts working okay, I can go modify my endpoint routing percentages and start routing traffic to revision three and slowly start draining things off of revision two and revision one. So you have all this flexibility available to you inside of Knative by simply specifying what you want in terms of networking, and then the magic happens under the covers automatically for you. And the important thing here is Knative manages all of this for you, right? All of this is still running on Kubernetes. We've introduced a couple of new resources to the Kubernetes model, but for the most part it's still using the core Kubernetes model under the covers. It's just you don't have to worry about all the infrastructure anymore. Knative does it all. 
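If you like seeing it as a resource, that routing setup can be sketched as a Knative Service with a traffic section, something like the following. The revision names and image are placeholders based on the example in this talk, not the exact YAML from the demo:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: echo
spec:
  template:
    spec:
      containers:
        - image: docker.io/example/echo   # hypothetical image name
  traffic:
    # 90/10 split between the two "live" revisions
    - revisionName: echo-v1
      percent: 90
    - revisionName: echo-v2
      percent: 10
    # revision three: no production traffic, but a dedicated
    # tag, which gets its own test-... endpoint for the test team
    - revisionName: echo-v3
      percent: 0
      tag: test
```

Bumping the percentages in that list (and eventually dropping the `tag`) is all it takes to drain traffic from the old revisions to the new one.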
All right, let's look at a demo because demos are a lot more fun. And I should point out that what I just talked about there is pretty much almost all of Knative serving. There's actually not a whole lot in terms of concepts, but it's actually quite powerful. So let's go ahead and run this. Now you guys in the back, you guys can see that okay? Okay, cool. Now I'm not gonna be doing the typing. The script's gonna do the typing for me, but this is live. So if it goes haywire or if things go funky with the network, it can go bad, but I just don't wanna do the typing because it's annoying. So what I'm first gonna do is create a service. Now kn is the Knative command line. So I'm gonna do a service create, very obvious. The third parameter there is echo. That's obviously the name of my service. Then I'm gonna pass in the image I'm gonna deploy. So here I'm specifying two things and that's it. The name of the service, and the Docker image or container image I want deployed. Two little bits of information and that's all Knative needs to get going. So if you look at the right-hand side way over there, what you'll see is sort of what's going on in the system. The top list, where it says echo, is just the Knative services I have deployed. Right now I have one. Below that you can see the pods in the system, one. So what Knative does by default is it brings up one pod, just to make sure it comes up okay as you deploy your service, and that's it. So I have one service which has one pod because it has one instance running, and it deployed it and it gave me a URL right there. So if I then turn around and do a curl against that URL, I get "Hi from echo", wonderful. Now just for your information purposes, inside the parentheses here, that little revision, that just prints out the revision name, and you notice the tfxfw, it matches what you see over there in the beginning. Okay, and the latest ready revision is there. 
I just print that out because later on as I do the demo, I will actually assign names to these revisions. By default it gives you random letters, but the names will show you that things are actually changing under the covers. But the key thing here is, hey, it worked, great. I have an endpoint, I can curl it, life is good, my application is up and running and all I did was give it a name and an image. Very similar to something like Cloud Foundry or old Docker. So another thing, I can do another curl against it. Notice that I'm doing HTTPS. So in the IBM cloud when you deploy Knative, we'll actually deploy an HTTPS endpoint for you. So you get security built in under the covers. You no longer have to manage this by yourself, it's done for you automatically. Okay, so let's go one step further. In this particular case, what I'm gonna do is update my service and set the container concurrency. What this means is I'm telling Knative to only allow one request per instance of my service at a time. If that guy's busy and another request comes in, it's supposed to spin up another instance. This lets you choose between a sort of multi-threaded service versus a single-threaded service. And the reason I'm doing this is, one, to show you the flag. But two is because I'm gonna generate a load against this puppy and I wanna show you it scale up. If this thing could support 100 requests coming into it, it makes it very difficult to see scaling in a demo. So, I've deployed it. I'm sorry, I've updated my service. Let's go ahead and generate a load for 10 seconds. I'm gonna do 10 different clients. Let's go. So, each of the clients you can see on the left-hand side is sending requests to it. But look at the right-hand side. Look at all those pods that are coming up there. You should hopefully see about 10 different pods pop up. Because Knative recognized you have all this load coming in, 10 different requests in parallel. 
He needs 10 different pods, or about 10. And he spun them up because each pod can only handle one request at a time. And I put a little sleep in there to make sure it doesn't process it too quickly. So, that demonstrated auto-scaling automatically. Again, I did all this with one command, one single Knative service create. All right, let's update the service one more time. Two things are going on here. First, set the container concurrency back to zero, which basically means infinite. So we're gonna have a multi-threaded application server here running and it can have multiple requests. I'm just doing that because I don't wanna see things flooding the system on the right-hand side anymore as we run through the demo. Other thing, I'm giving it a name, right? That random name right there is kind of funky if I'm ever gonna use it later on, which I'm gonna do as I do my percentage routing. So I'm gonna actually give a name to this next revision. I'm gonna call it echo version one, okay? So, I'm gonna update my service, change the configuration of it. And what you should see on the right-hand side, notice up here, it did deploy another instance. And notice the other ones went away because they weren't being used. But I'm left with echo v1. So, that's version one running. Let's go ahead and curl it, just to prove that it's real. So again, same thing, it still says "Hi from echo", but when you print out the revision name, which gets passed into it as an environment variable, it's echo version one. So great, I have version one of my application there. Wonderful. Let's go ahead and create a version two of my application. This time, what I'm gonna do is, in the configuration, notice I'm giving it the name v2, and I'm gonna set an environment variable, MSG equals "dogs rule". All that's doing is changing the "Hi from echo" message. So I can pass in a string to change it. So this time, I deploy it. 
What you should see is version two pop up there, which you do see, version two's ready. So now when I curl it, you see "dogs rule", echo v2. So now I have two versions of my application known to the system. Now, what we're gonna do is play around with that traffic splitting stuff I talked about earlier. So what I'm gonna do here is I'm gonna say, okay, I'm gonna update my echo service, I'm gonna set the traffic: version one is gonna get 50%, version two is gonna get 50%. All I'm doing is twiddling the load balancing aspect of my application by modifying the configuration. So that's done. Let's go ahead and send a load to it. I'm gonna do another load for 10 seconds with 10 different clients. What you should see is hopefully about a 50-50 split between "dogs rule" and "Hi from echo". It's not perfect, but it's about 50%. So again, what we have is a load balancer in front that does the split between the two, okay? So now the last thing we're gonna do here, we're gonna change a couple of things. First, we're gonna call this next version of the application version three. We are going to tag it, in other words, we're gonna ask for a dedicated URL called test just for version three, and we're gonna change the message to test, test, test, test, test, okay? And I need to speed up because we're running out of time. Ah, crap. This is what I meant by it's live. Okay, hold on, let's run this again and see if we can get there quickly. Some of the stuff I'm demoing here, in particular this tagging stuff, is a relatively new concept, and it's a little bit buggy, so let's get back to it. If you run it a second time, it usually works. Give me a sec. Any questions while we're going through this? Let me just admit it here: in this particular demo, yes, I am using Istio. Oh gosh, darn it. Okay, we're gonna really cheat this time. I'm using canned output now. I apologize for this. 
I don't know why, it's one of those things where Murphy's Law kicked in. It works every single time until you actually get to a live demo. But we're almost done. I'm gonna show you what's going on here. Yeah, but at least I have a canned demo. I could have faked it and not even told you, you know? I could, because you wouldn't know that this is canned. It looks fairly real. Okay, updated to test, test, test, test, test, generated a load. Now here, I send 10 requests to it, but notice the URL is test-echo instead of just echo, right? To show that all 10 requests go to that test revision I put out there. Which means when I turn around and send a load to just the echo service, what you should still see is 10 requests alternating between "Hi" and "dogs rule", because the load balancer doesn't know about that third revision. I excluded it from it. Okay, so now let's go ahead and clean up. Let's go back to the slides. If you think about what I did there, that's actually fairly complicated stuff. Those of you guys who understand Knative, or I'm sorry, Kubernetes, asking a Kubernetes guy to set all that up is non-trivial, right? But look at all the things that we did there with pretty much minimal command line usage in terms of interaction from the user, right? And I'm sorry, I need to speed up a little because I'm running out of time. But think about what it would take to actually do all that with Kubernetes under the covers. So a couple of things to understand here that I quickly glossed over. Your application must be running an HTTP server, by default on port 8080. That's the way Knative routes requests to you. You can change that if you want, but you have to have some sort of HTTP server running inside of there. By default, it is a multi-threaded model, meaning it assumes your HTTP server can handle multiple requests at a time, but it is configurable. 
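As a rough sketch, here's where those knobs live in the Service YAML. The annotation values and image name are just illustrative, and the exact annotation names can vary a bit between Knative releases, so treat this as a sketch rather than a definitive reference:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: echo
spec:
  template:
    metadata:
      annotations:
        # request-based autoscaling bounds (illustrative values);
        # minScale: "0" would allow scale-to-zero
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      # 1 = one request per instance (single-threaded);
      # 0 (the default) = unlimited (multi-threaded)
      containerConcurrency: 1
      containers:
        - image: docker.io/example/echo   # hypothetical image
          ports:
            - containerPort: 8080         # where your HTTP server listens
```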
So as I showed you in my demo, you could set the container concurrency to one to have a single-threaded application if you want. So only one request processed at a time, right? You can also set min and max values, which means you can tell it the minimum number of instances or the maximum number of instances of your application to have running at a time, including one. And I'll show you that in the next demo. Scaling: you can tell it to scale based on the number of requests coming in, or on CPU usage. And it's a very simplified resource model. For those of you who want to deal with YAML, you really can. There's a sample of what that YAML would look like if you actually did it using a kubectl command, right? I'm passing in just the name echo and the image name. The YAML is very simplified. It's not nearly as complicated as the entire pod spec, if you're familiar with Kubernetes. So it shows you, like I said, a very simplified user model over what's under the covers. All right, let's talk about eventing. As I said, eventing provides you the core primitives with which to sort of orchestrate the eventing infrastructure of your application. So at the most basic level, what Knative has is the notion of event sources. What an event source does is a couple of things. First is it will basically subscribe to an event producer for you. So it'll basically do the subscribe to say, hey, I want events of this particular type sent to me. It will create what it calls an adapter, which is sort of like the receiver of those events, and it knows where to send those events to, in this particular case, to my service. Now, it does convert those events into what we call a cloud event, and I'll talk about that in a minute, it's a different specification, but it does that to normalize things so it can do some funky stuff later to it. Now, what you can also have is a broker in the picture. Now, a broker allows you to basically do fan out, right? 
So instead of a single request coming in, going to a single service, you can route it through a broker, which can obviously have persistence in there if you want, if you want to make sure that you don't lose any messages because your service may be down for a period of time, right? At the most basic level, you can just have your services subscribe to it through triggers, where the trigger can not just scale out, so you can have multiple subscribers to the broker, but you can also do filtering through the trigger, right? You can say your service only wants certain types of events sent to it, while other types of events going through the broker go to other services. So very basic pub/sub-type operations you see in many places, but Knative gives this all to you very simply with basic building blocks. Now, if there is a response from the service, that can go back up to the broker and you can tell it where to send those responses. Now that's very interesting because then when you get to more interesting features like sequences, what you can do is you can have events come in to one event sink or service, and the results of that go on to another service. So you get to chain services together in sort of a workflow kind of a thing, right? You can also set up choices, right? Basically an if statement or a switch statement, where the first one that meets the condition of your filter basically gets the event. The rest of them don't get anything, or if it doesn't match any conditions it basically goes to /dev/null, okay? Again, basic building blocks with which for you to sort of string these things together in sort of a workflow type of thing. There's an event registry. Basically this allows you to know what events are supposed to be flowing through the system without having to go query every single event producer. You can just query the registry. 
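That trigger-based filtering can be sketched in YAML like this. The broker name, event type, and service name are all hypothetical, just to show the shape of it, and the apiVersion depends on your Knative release:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: push-events-only            # hypothetical name
spec:
  broker: default
  filter:
    attributes:
      # filters match on the standardized CloudEvents metadata
      type: com.example.push        # hypothetical event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service              # hypothetical Knative service
```

Events flowing through the broker whose `type` attribute matches go to this service; everything else is handled by other triggers (or dropped).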
It knows what the events are, what the brokers that are gonna receive those events are, and it will do the subscribe for you, or the trigger management for you, to the brokers. Now, cloud events, I very briefly mentioned that and I could talk a lot about it, but I mentioned the events that come in get transformed into a cloud event. Basically all that means is it adds a little bit of extra metadata that's standardized. So who sent the event? What type of event is it? That way when you do things like route the event through your various sinks, or the various conditions in the choice, it knows where in that event to look for that common metadata on which to do filtering, right? Because if every event comes from different event producers and they all look different, different formats, different transports, how the heck is that conditional thing supposed to work when everything's different coming into it? By converting it to a cloud event you get that standardization and you can do fun stuff like this under the covers. And if you want more information you can go look at cloudevents.io. All right, second demo in seven minutes. Here we go. Okay, this one is going to use eventing. Oh, I'm sorry. I should have shown you what we're doing first, sorry. All right, in this demo, we're gonna deploy a CI dashboard service. Think of this as your Travis or Jenkins UI. We're then gonna deploy a build service. This build service knows how to take a GitHub repo, build it, push the Docker image to a registry, get the application up and running, and send status to the CI dashboard, okay? Knative doesn't help with that, but if you build your own service, just deploy it as a Knative service and you can now play in this game. I'm then gonna create a GitHub event source, using the stuff I talked about before, to automatically subscribe to GitHub to receive push events. 
So when the push event comes in, it gets received by the event source, passed on to my build service, which does the build and sends the results to my CI status webpage. So the user has two different ways to interact with this: one, by doing pushes to GitHub, and two, by checking the dashboard, all right? And finally, as the usage scales up, because I'm gonna set up my build service to only be able to process one build at a time for better resource utilization, you're gonna see the build service scale out as I scale up the pull requests that are happening concurrently, all right? So now let's go ahead and do it. All right, we won't fake it yet on this one. All right, first we're gonna create the CI service. Notice I'm setting min and max scale to one. Two reasons I'm doing that. One is I was lazy. I didn't want to have to create a persistent store to manage all the state, right? Two, I wanted to show that you can actually deploy services to Knative that do not scale in any way, not even down to zero. This thing stays exactly one instance running all the time. If for some reason the instance dies, it will bring it back up, same as a normal Kubernetes operation, right? So you do not have to have a normal serverless scaling application if you don't want to. You can have a singleton out there, and just like normal, Kubernetes gives you a URL. Now we're gonna deploy the build service. Again, this thing is setting container concurrency to one, right? Because I can only do one build at a time per pod, to space out the CPU usage. Here I have a GitHub event source. Now unfortunately, kn doesn't support event sources yet, so we have to drop back to kubectl. Ignore most of the stuff. Basically, what I'm doing is creating a GitHub event source. I care about push events. I'm gonna substitute in the GitHub repo that I care about, and I have some secrets in there to talk to GitHub. But finally, down here, I'm gonna send it to a service called build, which is what I deployed before.
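(Editor's note: a rough sketch of the manifests behind this demo. The service names, images, repo, and secret keys are placeholders, and the `GitHubSource` API group/version in particular has changed across Knative releases, so treat this as illustrative, not exact.)

```yaml
# The CI dashboard: min and max scale pinned to 1, so it's a singleton
# that never scales to zero and never scales out.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ci
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "1"
    spec:
      containers:
      - image: example/ci-dashboard
---
# The build service: one request (build) per pod, so concurrent
# builds force Knative to scale out additional pods.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: build
spec:
  template:
    spec:
      containerConcurrency: 1
      containers:
      - image: example/builder
---
# The GitHub event source: subscribes to push events on the repo,
# sets up the webhook, and sends each event to the "build" service.
apiVersion: sources.knative.dev/v1alpha1
kind: GitHubSource
metadata:
  name: github-pushes
spec:
  eventTypes:
  - push
  ownerAndRepository: my-org/my-repo
  accessToken:
    secretKeyRef:
      name: github-secret
      key: accessToken
  secretToken:
    secretKeyRef:
      name: github-secret
      key: secretToken
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: build
```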
So what this thing's gonna do is it's going to talk to GitHub for me, do the subscribe, set up the webhook, and now whenever somebody does a push, it should automatically send that event to me. On the right-hand side, you'll see all the various services that we created: the CI system, the echo service, the build service. Now notice the GitHub events; I mentioned there's a GitHub adapter that receives events. That actually gets manifested as a service. So there's a GitHub service, right? Now there's another section down here. This is basically the output from the CI system. I don't have a fancy UI. You can see it there. There are no builds that have flowed through the system yet. So let's go ahead, whoops. Let's go ahead and simulate 10 different pushes all running in parallel. What you should now see on the right-hand side is, over here, the builds should start scaling up to 10. Oh, crap. I think it stopped, because I hit the button. This is not going well. There you go. What you should have seen is, live, all the builds scaled up, because each can only handle one at a time, but you can see the build results in the CI system are showing the results as it goes, right? So again, we had an automatic scaling system managing the builds for you and processing the events automatically under the covers. All right, let's go ahead and delete it. And what you should see on the right-hand side is everything start going away automatically. All right, yikes, three minutes. All right, so we already talked about all this. All right, so let's go back to this wonderful little eye chart. So you got all these choices in front of you. My assertion is that when Knative is done, everything in that column on the right-hand side should technically have a check mark. Now in fairness, there are some things that are still a work in progress, under discussion, right? Not everybody necessarily agrees we want to support the full functionality.
So for example, asynchronous operations are still up for discussion. IBM definitely wants that, so IBM will definitely have it in our version of Knative, but it's not necessarily part of the community yet. But my assertion is that all the things that you care about across the board here should be supported by Knative, so you shouldn't technically have to choose any longer which platform you want to deploy your application to. Knative should be able to do it all for you. So in summary, you should not have to choose which "as a Service" you want to deploy your stuff to. With Knative, you get the best of all worlds, in my opinion, including the simplified Kubernetes experience. Because remember, it is still Kubernetes under the covers. You don't lose the functionality of Kubernetes. You can still go around Knative if you want to in those rare cases you need it. For you as a developer, you can now go back to writing code and focusing on the things about your application that matter. How are you gonna split up the application into various containers? Where are those boundaries? Which things do you want to scale, and when? Those kinds of things, right? Forget about the infrastructure. That's the whole point of all this, right? And with that, I have a whole two minutes for questions if you have time. I was asked to put this up there so you can read it if you want. If you want to make $185, follow the link, answer some questions for us. No, I don't know anything about it personally. I don't know, back here. Oh, there you go. They're back there. See, I didn't lie. I said I'd put it in my chart deck. I almost did it. Anyway, there was a question. Yes, sir. So your question is basically, does Knative help with the scaling limitations that you're running into? No.
Basically, because Knative is using Kubernetes under the covers, if native Kubernetes has a problem scaling for you because of limitations or something else, I suspect this would have the exact same problem, because it does not do anything magical under the covers. It's still using Kubernetes for you. It still does deployments, replica sets, pods. So I don't think anything there would change. Now, you may be running into different issues with your load balancers and stuff like that. So if you have a different load balancer in play between your normal Kubernetes versus Knative, then there may be some difference. But in terms of the other stuff, it's all the exact same stuff, which I think is a benefit. We're not doing anything different here. Time for one last question if you want. Yes, in the back. On this one, I think I'm running 1.15, but that's my version. I think it supports as far back as 1.11. You may actually want to at least stick with 1.13 though, because there's some lack of functionality in 1.11 that I think they introduced in 1.13. So if I was going to play with Knative on Kubernetes, I would start with 1.13. All right, cool. And with that, I apologize for the glitches; otherwise, I'd have had more time for questions. But thank you guys very much for coming. Appreciate it.