All right, I think we'll go ahead and get started. We're very prompt. Thanks, everybody, for joining us. So we're the Bryans. I'm Bryan Friedman, and I've got Brian McClain with me. We're going to talk about Knative and serverless today. Just to give you a sense of where we're headed: first we'll talk a little bit about what Knative is. Rather than doing a whole bunch of hello-world demos of each component of Knative, we're going to take a real example application, look at how it's architected, make a few changes to it, and walk through Knative as we do that demo. And then we'll wrap up with a few thoughts about Project riff as well, which is a function-as-a-service platform from Pivotal built on Knative. So this is the attempt to define Knative. We've got a lot of buzzwords in there: platform, serverless, Kubernetes. But really, it's designed to give Kubernetes a better developer experience and to focus on some of the key serverless tenets, and we'll talk a little bit about what that means. What is serverless? There are three key components to Knative. Maybe you're familiar with these if you've seen other Knative talks or information, but it's important that we understand what each of the three components does. The first, and perhaps the most important one, is serving. This is how our code receives requests and how it does the scaling. If you attended Dr. Nic and Dr. Max's talk yesterday, they showed a lot of Cloud Foundry-like capabilities within Knative; a lot of that comes from this serving component. It's dealing with all the HTTP request-driven compute. It's doing all the scaling: the scale to zero, the scale out based on demand. It handles the deployment of a container, so you pass it a container image in a registry, and it will deploy it to the Kubernetes cluster. And it handles revisions. So again, if you saw that demo, we'll see an example of it today as well.
When you deploy a new version, it keeps revisions so you can do easy rollback and roll-forward, blue-green deployments, that type of thing. And it also handles all the routing for you. So the route to the app, just like Cloud Foundry does it, is handled by Knative; you don't have to create that kind of stuff like you would with Kubernetes by itself. So that's what serving takes care of for us. Then there's the build component. I talked about how serving deploys a container, but if we're talking about serverless, we don't want to have to think about containers. That's where build comes in: this is what takes our source code and turns it into a container image. The nice thing is that it's pluggable, so there are templates available. The one we'll be showing today, since we're at Cloud Foundry Summit, is buildpacks, but there are a lot of other options available depending on what your preferences are. This allows us to take our code, build it into a container, push it to a registry, and then the serving component can grab that and deploy it to our cluster. We'll see how that works as well. And then finally, we can't talk about serverless unless we're talking about eventing. This is one of the most unique and interesting pieces of Knative, and we're going to get a lot more into how it works and why it's important. But this is really where you can trigger code based on some event that occurs. There are a number of event sources that Knative ships with, and you can also write your own. So you can take in events, and we're going to see exactly how that works. So that's the overview. Quick plug: we're going to be using some examples that we also go into in much greater detail in a book that we just published with O'Reilly, thanks to Pivotal. If you get a chance to stop by the Pivotal booth, you can pick up a hard copy, or download it from the Pivotal website.
So for the examples that we're showing today, there's much more detail in there; we're cramming probably about an hour's worth of content into 30 minutes. So now I'm going to have Brian give us a walkthrough of the example app we're going to show, and then we'll step through it and make some modifications. Sorry, I'm shorter. So that's going to be fun. Yeah, so we were trying to figure out a really good application that could show off some of these different pieces of Knative. We were looking through different data sets, thinking maybe we'd do some sort of visualization, and we found that the US Geological Survey puts out a whole bunch of data sets for pretty much everything, one of those being pretty much real-time earthquake data. So if there's any seismic activity throughout the US, no matter how big or how small, you can hit this endpoint, totally free, and it returns a big JSON array of something like 8,000 objects. You can parse it and use it however you want. So we decided to build something to visualize that. But we did run into a couple of things, which we'll work through in a bit. This is how our application ended up being architected. It's pretty straightforward. If we look through it, I like to follow how the data flows. We have our API endpoint on the bottom left here, from the USGS, and then we have something that's just sitting there polling that data. It is just an endpoint you can hit that returns some data, so unfortunately we do have to constantly poll it. Then we send it to what we call our geocoder function. This was the first issue we hit: we want to show an address, a general location of where this activity happened, but the USGS only provides coordinates. I don't know about you, but I'm really bad at telling California coordinates from New York coordinates. So we actually do this by emitting an event.
Again, we're not really diving too deep into the technology here, but you can imagine this could be a message broker, something like Kafka or RabbitMQ. The geocoder function watches for those events, does the translation from coordinates to address, and then stores it to a database, which our front end will hit. So what we're going to be doing today is going from something like this to this: we're going to add in a new event source, we're going to look at flood data, and we're going to run it through the same geocoder function. We don't have to change our function at all. We're going to emit these events in the same way, and then expose that to our front end, which, I know, looks very similar, but we're going to add a dropdown up top. So if we want to just show flood data — it's kind of hard to see, but unsurprisingly, a lot more floods on the east side, a lot more earthquakes on the west side. I thought that was kind of a cool visualization there. But with that, I think we can actually start jumping into Knative. Cool, thanks. The height adjustment here, all right. So we talked about serving. As you saw from the diagram, we have to deploy a new front end, right? Because we have to handle the new flood data that's going to be coming in from our other event source. So we're going to use the serving component of Knative to do that. As a reminder, this is all the things it does. We're going to see a specific example of deploying a container. We're also going to see an example of creating a new revision. We're not going to have time to scale up and scale down, but just trust me, it scales up and scales down. And those revisions allow us to do the blue-green type deployments, that kind of thing. So this is a visualization of what we're doing: we're grabbing our image from a repository, in our case, Docker Hub.
We're shipping it off to Knative, and it's creating a new revision and setting up a URL for us there at earthquake-demo. We're actually going to be covering build at the same time, because we want to take our code and turn it into a container first. So we're going to, like I said, be using buildpacks. The cool thing about the build component in Knative is that it runs the build completely on cluster, so it's not local. And if you attended, again, Dr. Nic's talk yesterday, he made a great point, which is that it's not even building code from your local directory: it's grabbing it specifically from GitHub, taking a specific tag that you want, turning it into a container, and pushing it off to an image registry. So here's the visualization of that: code from GitHub, turned into an image in a Docker Hub repository. So let's actually do this. First, let's take a look at what we're doing. If you saw the Knative demo from Dr. Nic yesterday, they were using a Knative CLI that was developed, knctl. We're going to do everything with YAML, because we think YAML's super fun, and that's the native way to do it within Kubernetes. All these Knative components are just custom resource definitions within Kubernetes, so you can manipulate them with YAML. Under the covers, obviously, the CLI is doing the same thing by manipulating the API, but this is a pretty good way to see how things work. So first of all, just to show you, we've already got our service running. If we do a get on the Knative services, we can see we have our earthquake demo up here already going. We can see that we have some pods around it as well. We don't need to see all the extra pods, so we'll just scrap that. So you can see we have a pod here that's completed; that was actually the pod that was used to do the build. And then we have the actual earthquake demo itself running here, and you can see this 0001 — that's revision one.
So that's already running, but we need to make a change to it. If we look at our front-end service, we've got a revision in GitHub called floods; that's going to add the new UI. This YAML is basically specifying: this is the service, this is the name of the service, this is the namespace we're deploying it to. We're defining both our build and our deployment within the same YAML file. We could do this separately, but we're doing it all at the same time here. In the build, there's a service account. This is just a standard Kubernetes service account object that allows us to authenticate against Docker Hub so that we can push our image up there. In the build, we define the source repository; like I said, we've got this floods revision. We tell it which template we want to use — we're going to do buildpacks, because we love buildpacks — and then we tell it where to actually push the image to. So this is the Docker Hub URL, and we're going to tag it as floods so we know the difference between the new and old versions. So that's the build definition, and then down here we've got what we're deploying. There's a little annotation here: we're going to prevent it from scaling all the way down to zero. Since it's a UI front end, we don't want to wait for that extra time for the pod to spin up. If we didn't specify that, it would scale down to zero; with this, it's only going to scale down to one. And then we've got the image that we're pulling from — the image we're pushing to and the image we're pulling from are exactly the same in this case — and then any environment variables that we need. So this is — oh, sorry, we don't need to look at your calendar, Brian. Yeah, okay, so now all you have to do is apply this template, front-end-service.yaml, and what we should see here down below is some pods spinning up. And you can see the pod — there's the pod — that's actually doing the build.
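For readers following along, here's a sketch of what that combined build-and-deploy manifest looked like in the Knative of this era (the v1alpha1 APIs, with the pre-Tekton Build embedded in the Service). The repo URL, service account, and image names here are illustrative, not the exact ones from the demo:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: earthquake-demo
  namespace: default
spec:
  runLatest:
    configuration:
      build:
        serviceAccountName: build-bot        # holds the Docker Hub credentials
        source:
          git:
            url: https://github.com/example/earthquake-demo.git
            revision: floods                 # the revision with the new UI
        template:
          name: buildpack                    # build with Cloud Foundry buildpacks
          arguments:
            - name: IMAGE
              value: docker.io/example/earthquake-demo:floods
      revisionTemplate:
        metadata:
          annotations:
            # Keep at least one pod warm; don't scale the UI to zero.
            autoscaling.knative.dev/minScale: "1"
        spec:
          container:
            # Same image the build pushed above.
            image: docker.io/example/earthquake-demo:floods
            env:
              - name: DB_HOST
                value: postgres
```

Applying it with `kubectl apply -f front-end-service.yaml` kicks off both the build pod and, once the image is pushed, the new revision.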
If I do this quickly enough, if I can remember the command here — build-step-build — we can actually watch this build, except I need to add a -f to follow it. So now it's running through — this is just the buildpack, right? It's a Ruby application, so it's figured that out and it's finished that step. And you can actually see it's running through the four buildpack steps here; it's on the third one right here. And in just a moment it will complete — which it does — and then we've got our new pod spinning up. And you can see there are three containers within that pod. One of them is our application; the other two are containers that Knative uses to handle the scaling and do some other additional cool things. But one of those containers is actually our application. All right, so now it's up and running. So if we go back to our earthquake app — this is the old version — we can do a quick refresh on this. And now it says Natural Disasters, and we've got a dropdown. The map is taking a moment, actually, because the geocoder is also running on Knative, and it had scaled down to zero. And now you can see here the geocoder is spinning up, because the app itself is requesting that geocoder pod. So that's spun up, and now the map should be there — and it is. The problem is we have no flood data at this point. So even if we go look at "hey, show me all the floods," right now there's nothing there, because we don't have a flood event source. So we need to go add that, which Brian's going to do for us. Maybe I can stand on my tiptoes. That might be better. So, making you listen to that sound. Yeah, so someone messed up and let me talk about eventing in this talk. The good news is we originally thought this was an hour talk and it's only 30 minutes, so you're going to get out of most of that. I will say, though, I do think that eventing is often something that's overlooked, especially in a serverless architecture or serverless platform.
Not to say that one demands the other, or especially vice versa, but it's a really complementary architecture, right? I mean, you have these real lightweight, single-purpose listeners watching for a change of state. They can come up, scale back down when they're not needed, scale out if there's a big concentration of events. And luckily the people that designed Knative, the great community around it, agreed on that. Specifically, they made it really easy to consume events. So on the back end, there is a message bus like we talked about, but to your application, it just looks like an HTTP request. So no matter if you're responding to a browser request or some API call or responding to an event, you're just handling HTTP requests. There's no special code written specifically for that. That messaging layer is abstracted at the platform level as well. So if you have specific needs, or you're already very proficient with something like Rabbit or with Kafka, you can choose whichever one fits your needs the best. We'll look at this a little bit deeper, but it also introduces the concepts of channels and subscriptions. Again, if you've worked with messaging systems, that's something that's probably pretty familiar to you. And then — what I get excited about, and it's like this for build templates and builds as well — event sources are pluggable. They can be brought in ad hoc as you need, and it's really, really easy to build your own event source. Little disclaimer: there are a few ways to do it. I do it this way because it's the easiest, and why wouldn't you use the easiest? But it's actually pretty straightforward. You can say, here's a container image — it doesn't matter what's running in it, what language you wrote your code in, what framework you use — and Knative will just give you an extra command-line flag. And that flag will include a URL.
You can go through, determine what it even means to be an event in your context, and then you just do an HTTP POST to that URL. So, as you can probably guess where I'm going with this, that's how we introduce this flood data. We build a container — it's another Ruby script, because Ruby — and it's going to sit there and watch that flood data, just like we watch the earthquake data, and then do a POST to wherever Knative tells us to send it. This is what the events we're emitting look like. Super exciting, I know. And we're just sending them straight to our geocoder function. That's a very valid use case. It's a demo, so we wanted to have as few things to break as possible. But more realistically, you're going to be looking at something like this: I have an event source, and I have a lot of things that are interested in what it has to say. So with Knative, instead of sending those events straight to an application or straight to a service, you can say: I'm going to make a channel — again, backed by your message broker of choice — I'm going to send my events to that channel, and then my applications will subscribe to it. So again, introducing this loose coupling between event producers and event consumers lets us add this extra functionality with pretty little work. I mean, we're already completely re-leveraging that reverse geocoding; we're just bringing it in with a couple of UI changes. So, let's try it. Let's see what breaks. If we look at the YAML — my legs are getting tired from being on my tiptoes, I apologize — it's actually pretty straightforward. Like I said, we're going to give it a path to a Docker image, which is unsurprisingly called flood-source. We can give it some arguments specific to our deployment. You can see here — I don't know why I'm pointing at the screen, I can highlight — I can say, check every 10 seconds. And this is actually where Knative is going to add that additional command-line flag of where to send my events.
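On the container side, consuming that flag really is just argument parsing and HTTP POSTs. Here's a sketch in Ruby of the shape of such a polling source; the flag names, the FEED_URL environment variable, and the event fields are illustrative, not the demo's exact code:

```ruby
require "json"
require "net/http"
require "optparse"
require "uri"

# Parse the flags this container receives. Knative's container-based event
# source appends --sink with the URL events should be delivered to;
# --interval is our own argument from the YAML.
def parse_args(argv)
  opts = { interval: 10 }
  OptionParser.new do |p|
    p.on("--sink=SINK") { |v| opts[:sink] = v }
    p.on("--interval=SECONDS", Integer) { |v| opts[:interval] = v }
  end.parse(argv)
  opts
end

# Turn one record from the upstream feed into the event payload we emit.
# (These field names are illustrative, not the demo's exact schema.)
def build_event(record)
  { "type" => "flood", "time" => record["time"], "data" => record }
end

# Emitting an event is nothing more than an HTTP POST of JSON to the sink.
def post_event(sink, event)
  Net::HTTP.post(URI(sink), JSON.generate(event),
                 "Content-Type" => "application/json")
end

# Poll the feed and forward each record. FEED_URL stands in for the real
# flood-data endpoint; the guard keeps requiring this file side-effect free.
if __FILE__ == $PROGRAM_NAME && ENV["FEED_URL"]
  opts = parse_args(ARGV)
  loop do
    records = JSON.parse(Net::HTTP.get(URI(ENV["FEED_URL"])))
    records.each { |r| post_event(opts[:sink], build_event(r)) }
    sleep opts[:interval]
  end
end
```

Nothing here knows anything about the message broker behind the sink; the platform hands the script a URL and the script POSTs to it.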
And then down here on the bottom is where I define where I want to send those events. Again, we're sending them straight to a service, but this could be a channel, in case we wanted to propagate them out to multiple event consumers. So let's see what happens if we do that. You can see here on the bottom, we've got a new container coming up — if I can find my mouse, there we go. It errors, because I'm really good at coding, and then it comes up. And we see immediately our geocoder function coming up, and that's because this flood event source is already sending out events that it found. And since the geocoder is responding to those, it's going to scale up and start handling them. If we look at the logs, you can see these events starting to come through, and that's actually the URL that Knative provided to our event source to tell it where to send those events. And again, it's just doing HTTP POSTs — nothing specific to any sort of messaging system, no client libraries, no configuration. It's all wired up automatically. If you want to talk about the architecture — so actually we can show the UI real quick, right? Oh yeah, that's important, isn't it? Yeah, why don't we look at it now that we have data. You can see these events starting to come through. Something's going on in Minnesota right now; that's not great. But yeah, you can see we added a new event source and we updated our UI to handle that new sort of data. And that loose coupling between producer and consumer allowed us to do this really, really easily. And now I think we can talk about the architecture. Cool, so just to review at a little lower level what we did: we took a look at the high-level architecture before, but now that we know what all the Knative components are and how they work, we can double-click on that a little bit.
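The event-source manifest being walked through above might look roughly like this in the eventing APIs of the time (a v1alpha1 ContainerSource; the image name and sink are illustrative):

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: ContainerSource
metadata:
  name: flood-source
spec:
  image: docker.io/example/flood-source:latest
  args:
    - "--interval=10"          # our own flag: poll every 10 seconds
  # Knative appends --sink=<url> to the args above automatically,
  # resolved from the reference below.
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: geocoder             # send events straight to the geocoder service
```

Swapping the `sink` reference from the `geocoder` Service to a Channel is the one-line change that turns the direct wiring into the fan-out version.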
So we still have the Kubernetes cluster itself, of course, and we have a PostgreSQL database, which is actually also running as a container on the Kubernetes cluster, but could be anywhere. When we initially set this whole application up, we took essentially two code bases, the front end and the geocoder, and we pushed them to Knative through the build and serving components. Well, first we pushed them to Docker Hub, right, and then from there we deployed them both to Knative: one runs as a front end, one runs as a function doing the geocoding. And then we can actually hit our application via the ingress on the cluster. We did the earthquake event feed — we're showing it as a channel here, although as Brian said, we cheated a little bit and went direct, but it could easily be a channel. And then all we did today is we added the flood event source, in the same vein as the earthquake one, and we updated our front-end code so that it actually shows the new data as well. So this is a little bit lower level than the initial architecture diagram we showed, because we wanted to see all of the specific components within Knative that we're using — which is all of them, incidentally. So that's the Knative view of the world, and now Brian's going to talk a little bit about how this relates to Project riff. All right, I know we've got, what, eight more minutes? I won't make this too long, but we did want to talk about Project riff a little bit. If you've heard about Project riff before, it looks a little bit different than it did a year ago. If you haven't heard about it, it's this really awesome open source project being worked on inside of Pivotal that is really focusing on the developer experience of Knative. So we've worked with a lot of YAML; we've worked with these low-level custom resource definitions in Kubernetes.
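For the channel-backed version shown in the diagram, the channel and subscription objects would look something like this in the v1alpha1 eventing API (names illustrative; the in-memory provisioner is the simplest backing, with Kafka or others as drop-in alternatives):

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: disaster-events
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel    # swap for your message broker of choice
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: geocoder-subscription
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: disaster-events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: geocoder           # the function that consumes the events
```

Event sources then point their sink at `disaster-events`, and any number of additional consumers can subscribe without the producers changing at all.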
And what riff is looking to do is make both the operations and the development on top of Knative really, really nice. There are a lot of things that go into that, of course. There are two I want to highlight today. First, it ships with this really nice CLI. So if you're used to that cf push — I mean, that's our haiku, right? Here's my code, push it up, I don't care how. That's what they're aiming for with their CLI. And additionally, on the development side, it ships with something called invokers, which are kind of what they sound like: they invoke our functions. So we didn't look at a whole lot of the code that backs what we built today, but if we dove into it, we would see something that looks like this. I hope you can read that, but basically it's opening up a port, it's setting up our web server, configuring it, setting up our handlers, and then we're doing exactly one thing, which is concatenating two strings. And this isn't the most complicated code, but it's about a dozen lines of code or so to basically add two strings together. So what invokers do is take all that boilerplate code — opening the port, configuring the web server — and wrap your logic with it. So we can go from something like this to one line of code. It's literally just that exact same function that we were looking at. And if we push that up — again, I don't know how that looks with the contrast here — we can basically say: here's my code, here's where to send my container image, go ahead and push it up and get it running. I was trying to decide if we had time to do a demo, but I think it might be better to leave time for Q&A. It's a really awesome project; make sure you check it out. And of course, if you ask anyone, it's riff with a lowercase r; they get really mad if you capitalize it. So before we turn it over to Q&A — once again, I know we only had 30 minutes today, so we went through this pretty quick.
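To give a flavor of that before-and-after, here's a sketch in Ruby of the kind of boilerplate an invoker absorbs — a hand-rolled HTTP wrapper around a one-line function. This is an illustration of the idea, not the actual demo code or a real riff invoker, which handles far more:

```ruby
require "socket"

# The one line of logic we actually care about: concatenate two strings.
def concat(a, b)
  a + b
end

# Everything below is the plumbing an invoker writes for you: open a
# port, speak just enough HTTP to read a request body, call the function.
def serve(port)
  server = TCPServer.new(port)
  loop do
    client = server.accept
    client.gets                               # request line, e.g. "POST / HTTP/1.1"
    content_length = 0
    while (line = client.gets.strip) != ""    # read headers until the blank line
      if line.downcase.start_with?("content-length:")
        content_length = line.split(":", 2).last.to_i
      end
    end
    a, b = client.read(content_length).split(",", 2)  # body like "hello,world"
    result = concat(a, b)
    client.print "HTTP/1.1 200 OK\r\nContent-Length: #{result.bytesize}\r\n\r\n#{result}"
    client.close
  end
end

# Only start the server when explicitly asked, e.g. SERVE=1 ruby concat.rb
serve((ENV["PORT"] || 8080).to_i) if ENV["SERVE"]
```

With an invoker, everything except `concat` disappears: you hand riff the one-line function, and it supplies the server, the handler wiring, and the container around it.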
That book that Brian mentioned is actually free on the Pivotal website as well. So we're not trying to sell anything; we don't get paid for it. We should have brokered a better deal. But you can check it out at pivotal.io/ebooks. If you're interested in the code, I'll just leave this up here while we do any questions. We tried to document it as well as we could. There are a couple of things we still want to document a little better, but if you want to look at the code, you can find it up there on GitHub. Questions? No questions? You guys understand everything about Knative already? I could have done my demo. That's true. [Audience question.] I would say it doesn't right now. There's talk about — maybe you've heard of the Tekton project. They were looking at expanding the build component of Knative into more of a build-pipelines thing, in which case you would be able to handle exactly what you're talking about. I'm not sure exactly where that's going to land. Right now what they've done is they've spun that off into a different project called Tekton; how much of that is going to land back in Knative or integrate, I think, is still up in the air, but that's where that would get handled. Really, the original focus of the build component within Knative was just around how we go from source to image. So it doesn't really handle the test-case stuff. It's not a build in the traditional sense; you'd still probably want some sort of CI/CD process on the code itself, but that's where the Tekton project comes into play. So some of that is sort of in flight right now, still being developed and looked into. The other thing, which we didn't really dive into even as of today, is build templates, which are basically just a list of things to do whenever I say build.
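To make "a list of things to do" concrete, a build template in the pre-Tekton Build CRD might be sketched like this (names, images, and steps are illustrative):

```yaml
apiVersion: build.knative.dev/v1alpha1
kind: BuildTemplate
metadata:
  name: go-build
spec:
  parameters:
    - name: IMAGE
      description: Where to publish the built image
  steps:
    # Each step is just a container run in order against the source.
    - name: build
      image: golang:1.12
      command: ["go", "build", "./..."]
    - name: publish
      image: gcr.io/kaniko-project/executor
      args: ["--destination=${IMAGE}"]
```

A build then references the template by name and supplies the `IMAGE` parameter; adding a test step would just be one more container in the list.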
So, for example, there's a Kaniko build template, where it just invokes the Kaniko container image to build your code. You could imagine a Go build template, which just runs go build. So you can imagine a build template that would go through and run your tests as well, but I think that's what Tekton's trying to address in a more formal way. Yeah, I mean, and you don't need a build template, either. You could literally just give the build YAML steps to run if you wanted to, but build templates obviously make it a lot easier. [Audience question.] Yeah, so the question is around why there's so much YAML — is there anything that we can do about that? The reason for that is the way Knative is implemented: as custom resource definitions within Kubernetes, and everything is YAML in Kubernetes. I think the solution is really around the tool set. So again, if you saw Dr. Nic's talk yesterday, they showed knctl, which is a CLI, and Brian was showing off riff and how it can do it. So I think the answer is, rather than interacting with the Kubernetes and Knative API through YAML and kubectl apply, it would be interacting with it through a CLI and other tool sets. Right now the community, I don't think, has agreed on a CLI specifically — that's a working group I think they're working on — but today the YAML just allows us to show exactly how it works with Kubernetes. But it's a good point. I don't think we expect developers to be doing it this way, ultimately. So if you actually want to see that demo — I didn't plan on doing this, but like you mentioned, riff is one of the things that's trying to solve that. So if I have this example function here, which literally just gets the time, formats it, and sends it back, I can say — the command's already typed in here — riff function create.
I give it a path to that git repo and tell it where to upload the image, like I mentioned. And that's all I had to do. I didn't have to write the YAML, I didn't have to configure the YAML or pre-build that Docker image; it's going to go through and do all that for me. And we'll let this run while we answer another question, but you can see it'll go through that same build process and then give me a running function. But notice the command-line arguments that we passed are basically the arguments that we'd define in the YAML, right? So it's sort of making it a little easier to interact with. And this invoker — I mean, the invokers are effectively like a build template that it's using. And you can see there are eight steps that it's going through, right? Live demos, done dangerously. All right, yeah, another question? [Audience question.] Yeah, good question. So the question was around what event sources are available — is there good coverage? I think I'm going to let Brian answer that one. Yeah, so that's definitely something that's growing. I mean, Knative is still in a place where it's maturing very fast, but it is still fairly early days. I wish I knew exactly where the documentation is — I know we're already at time, so I'm not going to sit here and make you guys watch me look through GitHub — but there is a list of maybe a dozen or so event sources for things like GCP services and AWS services. I know TriggerMesh did a bunch of work to bring in a bunch of event sources for AWS. But I think that's why I got so excited about how event sources are implemented today: it's pretty easy to write your own, too. I hate that answer when I hear it at a conference, I know. But we've seen that already with companies like TriggerMesh, where they're like, all right, well, there's no event source for this AWS service — let's build it and make it available. And then it's one command to bring it into your environment.
Yeah, I mean, like Brian said, it's early days, but the nice thing is that rather than being vendor-centric or tied to one public cloud — AWS only, Google only — like he said, you can do GCP Pub/Sub or you can do AWS Kinesis. Take your pick, right? So there are different event sources out there, but it is still early, so I expect there to be a whole lot more as the community grows. And with that, we're actually over time, but we'll stick around. Awesome questions. Thanks. Thank you.