So, hot off the serverless event in the other room, we've dragged two of them out of that room kicking and screaming. William and Marcus are going to give us the state of serverless on OpenShift. So please give them your full attention. Thank you.

All right. Thank you, Diane. We have a lot of content to cover and about 20 minutes, so I'll try to speak a little fast, but not too much. Let's dive in. To start, I'd like to introduce myself. My name is William. I'm the product manager for serverless in OpenShift at Red Hat.

And I'm Marcus Hyde. I'm working on Knative for Red Hat.

Right. I'll start by talking a little bit about service mesh. Even though we're here to talk about serverless, the service mesh plays an important part in this architecture as well. To give a very brief bit of history: around 2014 there was this explosion of "you have to build microservices now." Microservices are a great way to build distributed systems, but quite often those services started to grow in size and weren't so micro anymore. You kept adding capabilities to them, inflating your source code and, if you were building a Java application, for example, your JARs as well. You had to build tracing, circuit breaking, routing, and all of these cross-cutting concerns into each service.

Fast forward to around 2018, and the idea of a service mesh came up, with a bunch of different projects implementing it. The idea is that you add a layer that abstracts most of those traits and concentrates their implementation in the infrastructure. At Red Hat we have OpenShift Service Mesh, which is built on top of Istio, but not only Istio. It packages a bunch of other components as well: Jaeger for distributed tracing, Prometheus for observability and monitoring, Grafana, and Kiali. The packaging of all these projects and technologies is what we call OpenShift Service Mesh.

The main goal, again, is to abstract those cross-cutting concerns into objects that are part of your infrastructure. They effectively become part of Kubernetes, and you can consume them from all of your services. It doesn't matter which programming language or technology you're using on top: as long as your application is containerized and running on OpenShift, you can apply policies to implement routing, logging, and all of those other concerns.

We have a bunch of live demos and tutorials available at that URL. I'll leave it up for a while, and it's in the slides as well, so you can play with service mesh and see how it applies to your use case.

To summarize, and to give a little bit of roadmap: service mesh on OpenShift is coming to GA in 4.1. Without going too deep into the technology, the main customer benefits for your applications are these. First, a reduced need for developers to have operational knowledge, because all of that is abstracted by the platform.
Second, you get a framework for service discovery, observability, and distributed tracing, again without adding anything to your code; it's all abstracted by the platform. You can inject all of these rules and traits through a policy-driven mechanism, and through those rules you can do traffic shaping, routing, A/B testing, canary deployments, and even some chaos engineering if you want to validate how those microservices will actually behave in a production environment (there's a sketch of what such a routing policy can look like at the end of this section). And you get Kiali, a very nice visualization tool that lets you observe what's going on in your mesh, across all the services composing your applications, even across the multiple versions you might have. There are more links there on how to get started, how to install, and how to run the demos. But as I said, we have a lot to cover, so now I'll dive into serverless.

Looking at the landscape compiled by the CNCF, and we were just over there at the serverless practitioners event, you can see that the serverless space is evolving and growing in the number of tools, frameworks, and platforms across the board. Some of those platforms only work in a hosted environment, available just as a proprietary implementation on a specific cloud provider. Others are installable: you can run them anywhere you want, and most of them install on top of Kubernetes. And because of the portability Kubernetes gives you, you can then run them pretty much anywhere, on-prem or on any cloud provider.

The other thing, whenever we talk about serverless, is the misconception that serverless is all about functions. One of the analysts we interacted with came up with a phrase I like a lot: function as a service is serverless in the same way that a square is a rectangle. Functions aren't the only way to do serverless; they're a specialization of serverless for certain use cases, but there's more to serverless than functions, and you can apply serverless to a bunch of different workloads. The same thing applies to microservices. They're important for promoting separation of concerns and building distributed systems that are focused on the business need, but they can be serverless or not; there's nothing inherently serverless about doing microservices. Containers, too, are a very important characteristic of these kinds of workloads, because they offer a standardized package that promotes portability: as long as you have a container, you can run it pretty much anywhere you have a container platform. But containers by themselves aren't providing anything specific to serverless either. They're just the packaging format that gives you the interoperability to run the workload wherever you want.
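To make the policy-driven routing and canary deployments mentioned above concrete, here is a minimal sketch of an Istio VirtualService, the kind of object OpenShift Service Mesh drives this with. The service name reviews and the v1/v2 subsets are hypothetical placeholders, not from the talk, and a matching DestinationRule defining those subsets is assumed to exist.

```yaml
# Hedged sketch: a canary-style traffic split with an Istio VirtualService,
# as packaged in OpenShift Service Mesh. The names (reviews, v1, v2) are
# placeholders; a DestinationRule defining the subsets is assumed.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # the in-mesh service this policy applies to
  http:
    - route:
        - destination:
            host: reviews
            subset: v1   # 90% of traffic stays on the stable version
          weight: 90
        - destination:
            host: reviews
            subset: v2   # 10% goes to the canary
          weight: 10
```

Because this lives in the infrastructure, shifting the weights, or adding fault injection for the chaos-engineering scenario, is a policy change rather than an application change.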
So if I were to summarize and position serverless in a different way that you can think about it, I'd say: think of it as a trait that you can implement and apply to all sorts of workloads. It's also a spectrum, a continuum; there's a great talk by Ben Cahill that goes deeper on that. Think about how serverless you are. For certain workloads you can be more serverless, giving away control because you want to gain velocity. For other things you'd rather keep that control and do them in certain ways yourself; that gives you the responsibility to write and implement them, which of course costs you some velocity. And it's also about writing less code: the more you can push to the infrastructure, the less you have to write in your application, which connects back to the service mesh story and how service mesh provides the same thing.

If we look at the different kinds of workloads, you have microservices, functions, and applications. That layer I'm calling the application framework layer: all the different ways you can write applications. But at the bottom, in the end, those applications get compiled and packaged as containers, and once you do that they become infrastructure and you can run them pretty much anywhere. That's where the technologies and projects come into play. At the bottom of the stack you have OpenShift and Kubernetes. Then you have things like Knative, which I'll talk more about in the next slides, the service mesh with Istio, and KEDA, the project we announced with Microsoft, which I'll cover in a little bit as well. And on top you have the layer with function frameworks like Azure Functions, microservices with Quarkus, or our own function as a service, OpenShift Cloud Functions, which is coming.

Looking briefly at Knative, there are three main components: build, serving, and eventing. Build is a little tilted on the slide, and the reason is that a lot of the facilities in Knative build are being ported to a new project called Tekton. As part of the evolution of the project, which was released roughly ten months ago, build started as a module to get a container out of your source code. Very quickly we saw the need for something more complex that could orchestrate the different steps of that build process, so Knative Pipeline started. And then, just as quickly, we realized its scope went beyond serverless and beyond what Knative was doing, so it was moved to a standalone project called Tekton.

Within Knative itself, then, you have serving and eventing. Serving is responsible for autoscaling: it lets your containers scale automatically with the requests they receive, and scale down to zero when no more requests are coming in. It also offers the integration with the service mesh, today with Istio.
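As a rough illustration of what serving manages, here is a minimal Knative Service manifest. The name, image, and concurrency target are hypothetical, and the exact apiVersion depends on the Knative release you are running (early releases used serving.knative.dev/v1alpha1).

```yaml
# Hedged sketch of a minimal Knative Service; the name and image are
# placeholders, and the apiVersion varies by Knative release.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # Aim for ~10 concurrent requests per pod before scaling out;
        # with no traffic at all, Knative scales the pods down to zero.
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: quay.io/example/hello:latest  # any containerized app
```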
The other important module is eventing, which offers a common infrastructure for connecting different event sources to those workloads, and for consuming the events that stimulate the applications running in serving. Now I'll switch over to Marcus for the demo so you can see this live.

Alrighty. So those were a lot of nice words about what serverless is, or what we think serverless is, and one of the most important bits I want to stress is that serverless is more than functions, and serverless is not only functions. When function as a service arrived, maybe two or three years ago, it looked like we now needed to rewrite everything we had as functions. People started doing that, and they quickly hit spots where it didn't work: can I run this or that program? Oh, it's not available in the runtime provided by the provider. And so on and so on. So today I want to show you, first, how easy it is to set up Knative serving on an OpenShift 4 cluster, and then how easy it is to port over a very, very old-school app and make it run serverless, with the traits William just presented to you.

Let me exit that. This will be awkward because I have to hold the microphone, but please bear with me. To install Knative, you can go through OperatorHub, which is part of the OpenShift 4 installation by default. You go to Catalog, then OperatorHub, and (now that I've killed my filter, let me filter by that) there's the Knative Serving operator up here. Click on it, click Install, say Subscribe, and there you have it. With that installed, you end up with a system with Knative serving running.

On to the next demo, which, as I said, is porting over a very old-school application and making it serverless. The old-school application is a PHP guestbook. We take its existing manifest, and, last but not least, we drop the container name, because Knative doesn't accept container names; it names the container itself. And that's literally it. I saved and applied it, and it worked. We see a pod coming up there again, and now I can get the URL of that Knative service. There you go, the pod is up, we go to the URL, and we see the same guestbook. Now if we wait a few seconds, which is actually 60 to 90 seconds, I think, we'll see this deployment scale down. I'll tell you more about that, but keep an eye on the display there and watch it scale down.

So what we've seen is that we've taken the application I talked about, that old-school PHP-and-NGINX thing, which is just packaged in a container (and the container is the common denominator here), and made it run on Knative serving, which makes it serverless. If we still lived in the 2000s and there were a huge traffic spike to that deployment right now, it would scale up to meet the demand, without me having to figure out how to set up an autoscaler, how to get at the metrics of that deployment, or how to wire all of that up and make it work in the end. And in just a short bit of time, this will scale down. That's the demo. That's how easy it is to make an app serverless these days.

Yeah, and like we were presenting before, it's not only about functions. This idea of migrating existing applications to serverless is super important; the manifest change is roughly what's sketched below.
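The demo manifest itself wasn't shown here, but based on the description, take the existing deployment, change it to a Knative Service, and drop the container name, the before and after look roughly like this. The guestbook name and image are hypothetical stand-ins.

```yaml
# Hedged reconstruction of the porting step from the demo; the name and
# image are placeholders. Before: a plain Kubernetes Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
        - name: guestbook          # Knative will reject this field
          image: quay.io/example/php-guestbook:latest
---
# After: the same container as a Knative Service. The replicas, selector,
# and container name are gone; Knative manages scaling, including to zero.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: guestbook
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/php-guestbook:latest
```

Applying the second manifest with oc apply is all it takes; Knative creates the route and revision for you, and oc get ksvc shows the service URL (ksvc is the short name Knative serving registers for its services).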
Quite often, when we talk to our customers, they're concerned: do I have to rewrite all of my applications in order to get the benefits of serverless? And as Marcus just demonstrated, that's not really the case. You can pretty much take the YAML file you use to deploy that application today, delete some lines, which is one of the best things you can do, and you're good to go; you're running that same application as serverless. And as you can see, the pod has now terminated. It scaled down to zero, because there's no traffic for that particular workload.

Going back to the slides to wrap up. Serverless and Knative are coming to OpenShift. It's available now as a developer preview, so you can try it out. It's going to be tech preview in 4.2, and our goal is to take it to GA by the end of the year. Of course, this is an upstream project as well, and that depends a lot on the stability of those APIs and how the community evolves upstream.

To summarize the main benefits: it's very familiar to Kubernetes users. If you're already using Kubernetes and OpenShift, you don't have to learn a complete new stack or install a bunch of other technologies on top; you apply a couple of CRDs to extend the Kubernetes you have, and you have serverless capabilities. It can scale to zero, just like you saw, and auto-scale out to n based on the demand you might have. It's not only for functions: applications, functions, pretty much any container workload. And although we didn't demonstrate it today, because we're short on time, there's also a powerful eventing model that can trigger those applications, those containers, from Kafka, Camel K, Fuse, GitHub, and a bunch of other event sources (there's a sketch of what that wiring can look like below). And, of course, it's based on an open source project, so there's no vendor lock-in; nothing we showed here is proprietary.

Very briefly, I'll also mention the project we announced with Microsoft, called KEDA. KEDA allows you to run Azure Functions on top of OpenShift, and we're also integrating KEDA with Knative; the idea is to reuse the event sources and the powerful eventing model we have in Knative together with KEDA. KEDA also enables Azure Queues and Azure Service Bus to trigger Azure Functions. And because we're talking about Azure Functions here, you can use the same CLI and tooling you're used to when deploying Azure Functions, the func CLI and the VS Code plugins, to create functions and deploy them targeting OpenShift. There's more about that at the link here; we just announced it at Summit last week, or two weeks ago.

And that's pretty much it. Thank you very much. Thanks.
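As a pointer for the eventing model mentioned above, here is a minimal sketch of a Knative KafkaSource feeding messages from a Kafka topic into the guestbook service from the demo. The broker address, topic, and service name are hypothetical, and the exact apiVersion and field shapes vary across Knative eventing releases.

```yaml
# Hedged sketch: a Knative eventing KafkaSource delivering events from a
# Kafka topic to a Knative Service. All names and addresses are
# placeholders, and apiVersion/fields differ between eventing releases.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: guestbook-events
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092  # placeholder broker address
  topics:
    - guestbook                              # placeholder topic name
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: guestbook   # events wake this service, even from zero
```

When an event arrives, serving scales the sink up from zero to handle it, which is exactly the combination of the eventing and autoscaling pieces described in the talk.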