Hi everybody and welcome to another OpenShift Commons briefing on OpenShift TV. I'm Diane Mueller and I'm really thrilled to be here. We're continuing our series of talks around 4.8, the latest release of OpenShift, and today we have with us Naina Singh and Lance Ball, who are going to talk about OpenShift Serverless and serverless functions: what's in the latest release and what's coming down the pike. I'm going to let the two of them introduce themselves. If you have questions, ask them in the chat wherever you are and we'll relay them back to the speakers and try to do some live Q&A at the end of the talk. So take it away, Naina. Okay, well, I am Naina, and today I am here with my esteemed colleague Lance Ball. Lance, would you like to introduce yourself? Hey, I'm Lance and I'm the functions architect on OpenShift Serverless. Today we are here to talk about OpenShift Serverless and OpenShift Serverless Functions, which will be released as a tech preview with OpenShift 4.8. Only the functions part is tech preview; OpenShift Serverless itself is already GA and ready for your production applications. So let's get on with our show and tell. What is serverless? We see lots of definitions of serverless today, and functions, which are snippets of code, are just one of them. So I wanted to start this session by saying how Red Hat sees serverless. It is a deployment model where you can run almost any container, and/or the functions we're going to talk about later, without worrying about the Kubernetes complexities, with the additional benefit of your workloads scaling up on demand and going back down to zero. You deploy serverless apps that are stateless, event-driven, distributed, and elastic, for all your modern-day challenges such as high availability and real-time response. In essence, we see this as a feature of your platform. So why serverless?
As we all know, the technology is no longer a novelty; it's like just turning on a switch. There are no complexities I need to know about, and I can focus on value creation, on what matters most to me. Serverless's simplified approach to Kubernetes, with its sensible defaults and dynamic scaling, gives you more room for experimentation and more granularity, and it makes the business-as-usual work of extensive up-front architecture and design, worrying about management and provisioning, capacity planning, and all that, a once-upon-a-time story. So with what serverless is and why serverless covered, I would like us to go a little deeper and see where serverless fits. The first thing that comes to mind is where an application has an unpredictable or bursty number of requests. Because it is a deployment platform, we also see it where you want maximum resource utilization and to reduce your carbon footprint. Event-driven is the heart of serverless, so you can use it for event-driven architectures that make your apps loosely coupled, reactive, and distributed at the same time. An elevated developer experience could otherwise get complex, so you can reuse the skills of your existing developers, and you can try out different deployment strategies and things like that. So to kick it off before we get to the show and tell, I would start with the serverless pattern. At its core it's simple: an event has occurred, and it triggers your application. When running on Kubernetes, that means starting a container to handle that event. Your app does the processing and produces results, and this event could come from a variety of sources: an HTTP request, or events coming from Kafka, Slack, Twitter, you name it. Depending on how many requests come in, your application scales up to handle the demand, and when the instances have been idle for long enough, they scale back down to zero.
Again, it makes your solution loosely coupled, using events, making it reactive and able to respond in real time. Since we are going to talk about OpenShift Serverless and I want to get to the show part of the show really early, I will just deep-dive into what OpenShift Serverless is. OpenShift Serverless, just like any Red Hat product, is built on an open source project, and in this instance it is Knative. That is what brings serverless to containers running on Kubernetes. It has three components: Serving, which allows your containerized app or function to deploy as a Knative service and autoscale up and down to zero, making it request-driven; Eventing, which is the infrastructure to send and receive the events that trigger your application served by Serving; and lastly the command line, which allows you to interact with these constructs, making it easy to create resources and connect your serverless applications. And you don't need to deal with YAML at all. So it's an execution model we are offering where resources are allocated on demand. It comes with a one-click install experience via an operator and provides a day-2 experience with monitoring and updates. There are no external dependencies, and the user experience, in addition to the CLI, has been augmented by providing developer and admin experiences in the Dev Console. We will see this in our demo today. One more concept I would like to cover, because we are going to hear it a lot when we talk about serverless, is the cloud event. When event-driven architecture is the key to our modern-day challenges, and when everything is an event, it raises the need for consistent, accessible, and portable events. Serverless uses the CloudEvents specification, which is a CNCF project.
This describes events in a common way, as messages that can be understood by all the disparate and heterogeneous components of your solution, running wherever they run, because hybrid cloud is key for Red Hat. One more thing I will cover is eventing. Since we are talking about event-driven architecture, Knative Eventing is the infrastructure for consuming and producing the events that trigger your serverless application. The most basic component is the event source, which provides the mechanism by which event providers connect to your application and send events. In this topology, the event sources that you can see on the screen get connected to a broker or a channel. The broker or channel, which we can think of as an event mesh, could be a sink, or you can connect sources directly to your application to trigger it. On the right-hand side, you see the revisions and the traffic split; that is the deployment strategy from the Serving part we were talking about, and you'll see it in our demo as well. After these concepts, that brings me to serverless functions, which we have added as a tech preview in OpenShift. I just wanted to remind you that we would love for you to try it out and give us feedback. So when every container can run in a serverless fashion, what do serverless functions bring to the table? We call it our programming model that lets you focus on just value creation. It's a single-responsibility, event-driven Knative service, where you don't need to figure out which API framework to use, how to build a container, how to configure and deploy a container, how to do readiness checks, how to open a port, and things like that. You can develop and test it locally, and you can deploy it into your cluster with just one command.
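To make the cloud event concept from a moment ago concrete, here is what a minimal event looks like per the CNCF CloudEvents 1.0 specification, sketched in TypeScript. The required context attributes (specversion, id, source, type) come from the spec; the `makeEvent` helper, the source URI, and the id scheme are purely illustrative, not from the talk.

```typescript
// A minimal CloudEvent, per the CNCF CloudEvents 1.0 specification.
// Required context attributes: specversion, id, source, type.
interface CloudEventV1 {
  specversion: "1.0";
  id: string;               // unique per source
  source: string;           // URI-reference identifying the producer
  type: string;             // e.g. "telegram.message"
  datacontenttype?: string; // optional, describes the data encoding
  time?: string;            // optional RFC 3339 timestamp
  data?: unknown;
}

// Hypothetical helper that builds a CloudEvent envelope around a payload.
function makeEvent(type: string, data: unknown): CloudEventV1 {
  return {
    specversion: "1.0",
    id: String(Date.now()), // simplistic id, for illustration only
    source: "/demo/telegram",
    type,
    datacontenttype: "application/json",
    data,
  };
}
```

Any component that understands this envelope, regardless of language or runtime, can route on `type` and `source` without knowing anything about the producer, which is what makes the events portable across the heterogeneous components Naina describes.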
Ultimately, serverless functions are always serverless apps, but not all serverless apps are functions, because you can run any app that you have containerized, which gives you the benefit of scaling up and down on demand. In the real world, we believe solutions are going to be a mix of legacy monoliths, modular monoliths, whatever you want to call them, microservices, and functions, connected together to deliver value. I would like to note here that Red Hat built the upstream project Boson, on which serverless functions are based. We have donated it to the Knative community, and we are very excited to take it further and define it as an industry-wide standard. We will cover this journey from Boson to the Knative donation in another talk in the near future, but today we are going to focus on what is in the product line. So just to reiterate one more time before I hand it over to Lance for our demo: we have augmented OpenShift Serverless, which is based on Knative, to offer these functions through a CLI. The command-line interface, which we consider the developer's favorite tool, provides project scaffolding in well-loved runtimes out of the box, including Node.js, Python, Quarkus, Go, and Spring Boot. It creates the project structure and even gives you boilerplate code so you can just start coding. And with one command, you can deploy to the cluster in a serverless fashion. These functions can be invoked using event sources, the eventing part I just covered, or simple HTTP. With that, I will hand it to Lance, who is going to take us through what OpenShift Serverless Functions comprises and how we can use it to create our solutions. Lance, do you want me to stop sharing? Would you like to share? Sure, I'll share my screen. I will end up doing a demo. Okay. All right, so thanks, Naina, for that introduction. I appreciate that.
I assume if there's any issue with seeing my screen, someone will speak up. So, right, serverless functions. Naina talked a lot about the programming model and the CLI that allows you to create a new project and deploy that project to your cluster. You're going to see all of that in a few minutes with the demo. But before we get into that, I want to quickly give a brief overview of the technologies in OpenShift Serverless Functions. It consists of basically four things. We've got buildpacks that know how to take a function project and turn it into a container that can run in the cluster. You invoke all of this through a plugin in the Knative CLI; we'll see that. We have runtimes that invoke the functions. And we also have templates for function project creation. The architecture looks a little bit like this: we've got the func binary, this plugin here, and it's got the templates built into it. When the user types func create, a new project is created. When they type func build, that project is combined through buildpacks with some invocation frameworks, and that all results in an OCI application image based on our UBI 8, optimized for Kubernetes. So creating a project is really super easy. Because those templates are built into the CLI, there's nothing you have to do as an end user other than just create, and a new project is created for you. Building is also very easy: it's a single command, func build, which takes your function project, combines it with some buildpacks, and you have a resulting image. And then deploy takes that resulting image, pushes it to a container registry, and then ultimately to the cluster, creating what is in the end a Knative service. So a function in OpenShift Serverless looks just like any other Knative service. You can feed event sources into that function, as you would for any event sink in Knative.
A function can communicate with event brokers, and also with other services, and we'll see all of that. I'm going to do a demo now. Let's go over to the terminal real quick. The first thing I want to do is show you how easy it is to create a project. The kn CLI is the CLI for Knative, and the plugin is func. I can type create, give it a runtime of TypeScript, tell it that I care about events, and give it a name, viewer. Sorry about that. No worries. Okay, so now I've got this project and I can cd into the project directory. If I look at it, it looks like just about any other TypeScript project: we've got a source directory that contains my index.ts, which is my function. Let's take a look at what that function looks like. Here we go. It looks just like any other typical TypeScript function: it accepts a context object and a cloud event object and returns a message. In this case, we're checking to see: is there a cloud event? If there isn't, we return an error; if there is, we log it to the console. So let's see what that looks like. Let's run it. I can do kn func build and use the -v flag to see all the output. I give it a registry destination, which is my personal registry, and it will eventually push the image there. You can see that we're using buildpacks to create a function image. Then I can run it with kn func run, and now it's just running locally. So how do I test this? Well, I could use curl or something like that, but we have a little tool in the plugin that allows you to test locally. I can run kn func emit and give it some data, hello world, basically, and then give it a sink of local, which is kind of a special flag that says: send this event to my function running locally. You can see now that the function received that event and is printing it out, just like you saw in the code. I can do an npm install.
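The scaffolded TypeScript function Lance walks through has roughly this shape. This is a sketch, not the actual template: the real project imports its `Context` and `CloudEvent` types from the function runtime framework, so the simplified interfaces and the explicit return type here are stand-ins.

```typescript
// Simplified stand-ins for the types the function runtime provides
// (assumption: the real template gets these from its framework).
interface CloudEvent {
  id: string;
  type: string;
  source: string;
  data?: unknown;
}

interface Context {
  log: { info: (msg: string) => void };
}

// The handler: receives a context and, optionally, a cloud event.
// No event means an error response; otherwise log it and echo it back.
function handle(
  context: Context,
  cloudevent?: CloudEvent
): { error?: string; data?: unknown } {
  if (!cloudevent) {
    return { error: "No cloud event received" };
  }
  context.log.info(`cloud event received: ${JSON.stringify(cloudevent.data)}`);
  return { data: cloudevent.data };
}
```

The `kn func emit --sink local` invocation Lance demonstrates is essentially delivering a cloud event like the one above to this handler over HTTP, which is why the logged output mirrors the data passed on the command line.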
And to test it, tests are built in: npm test. We can see that it does some linting and runs a few unit tests and a few integration tests, and everything passes as it should. That's pretty neat. So that's the viewer, right? And I want to deploy it. I'm already logged into my Kubernetes cluster, so I can just run kn func deploy. It will build the image again; there is a flag I should have provided to tell it not to rebuild. But okay, it's going to build the image and push it up to my Kubernetes cluster, and you can see that it's now deploying the function to the cluster. So let's go take a look at that. I've got my topology here, and here it is. I already had a trigger in the system created for it. This is just a little trigger that says: my function will receive messages of type telegram.message that happen to be in the Knative eventing system. And you can see here on the left, prior to this demo, I created what's known as a Kamelet; it's part of the Camel K project, and this is a Telegram Kamelet. So I have a Telegram bot, and I can send messages to it. This Kamelet receives them and pushes them as cloud events into the Knative event broker, and the function I just created and deployed will respond to those. So I can say hello, here I am talking to the bot in Telegram. And if I look at the logs for this, we should see that it received a cloud event. Yes, a cloud event of type telegram.message. I can say hello again, and we can see the new cloud event arrive. That's pretty neat. So that's all pretty cool. It shows you how quickly and easily you can take a TypeScript function, create it from scratch, and get it deployed into your cluster in just under a minute; if I hadn't been talking so much, it would have taken just a minute. One more thing I will show you here is how we get rid of it. Let's say we want that function to go away.
I'll do kn func delete. It doesn't delete what's on my local disk, but it will remove that Knative service from my cluster. Sometimes it takes a minute to reconcile, but it will go away eventually. While we're waiting for that: I want to do something similar to what I just did with that Telegram bot, but I want to have some cool interaction between a bunch of different services, with events moving through my cluster. I've got a project called telegram image analysis, and it's got a few different functions associated with it. One is called receiver, one is called processor, and one is called responder. The way this works is that the receiver receives Telegram messages, just like the viewer we just deployed. It checks to see: does that Telegram message have an image in it? If it does, it reformats the message, sets the URL for the image, and responds with an event of type telegram.image. If it doesn't, it reformats it as a text message and responds with an event of type telegram.text. Then the processor takes any telegram.image event and calls out to the Microsoft Faces API to examine that image and look for any faces that happen to be in it. And the responder responds to the bot. So let's do that. The first thing I'm going to do is deploy these, and then we can talk about what the different functions do as I go. Okay, so I'm going to deploy the responder first, I suppose. You see, I can give it a directory name with the -p flag. So we're deploying the responder. Let's check here to see. Yes, okay, the viewer was deleted. So while we are waiting, can you show us the architecture of this demo that you are presenting? Yes. Well, do I have an image for that? I may not have an actual image for that.
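The receiver's routing logic just described can be sketched in TypeScript like this. Note this is illustrative only: the demo's real receiver is written in Go, the event type names come from the talk, but the message shape, field names, and `route` function are assumptions.

```typescript
// Illustrative shape of an incoming Telegram message (assumption).
interface TelegramMessage {
  text?: string;
  imageUrl?: string;
}

interface RoutedEvent {
  type: string; // CloudEvents "type" attribute to emit
  data: { url?: string; text?: string };
}

// Route a message: images become telegram.image events for the
// processor; plain text becomes telegram.text events for the responder.
function route(msg: TelegramMessage): RoutedEvent {
  if (msg.imageUrl) {
    return { type: "telegram.image", data: { url: msg.imageUrl } };
  }
  return { type: "telegram.text", data: { text: msg.text || "" } };
}
```

Because the receiver only relabels and reshapes events, the broker's triggers can do all the actual fan-out: the processor subscribes to telegram.image and the responder to both result types, with no function holding a direct reference to another.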
But I can show you what the project looks like from the code perspective, if that works for you. Each of these is a function. We've got the processor, the receiver, and the responder. The receiver is a Go function; we support Go. It's a function that, again, receives a cloud event and, as I said, checks to see whether it's a Telegram message that has only text or one that has an image. Some other neat things I wanted to show. Let's go back here real quick. Now let's do the receiver. Another thing I wanted to show is configuration for your functions. A lot of times when you're writing an application, you've got some secrets that you don't want checked in to GitHub, for example API keys. If you have an API key for, say, a Telegram bot, you don't want to check that in, because it could be seen out in the real world. So what we have is the ability to specify things like environment variables in this configuration file called func.yaml. func.yaml is the primary configuration file for all function projects. It specifies things like what your buildpack builder is; as I said, you can specify environment variables; you can add annotations like regular Kubernetes annotations; and you can mount volumes from Kubernetes. I won't be showing you that today exactly. But okay, we've done the receiver and the responder. Let's do the processor. The receiver, as I said, is a Go function. The processor is written in Quarkus. This one is kind of nice, because all of the APIs are just a little bit different; they're idiomatic to the language or runtime they're a part of. With Quarkus, what you get is a function with a @Funq annotation on it, which maps cloud event types to that function; it accepts input and, again, receives a cloud event. You can see here we're calling out to the Microsoft Faces API to analyze the image.
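As a rough illustration of the func.yaml Lance describes, here is a hedged sketch with an environment variable and an annotation. The exact keys vary by func CLI version, so treat every field name below as approximate and defer to the file the CLI generates for your own project; the secret name and annotation are hypothetical.

```yaml
# Sketch of a func.yaml (field names approximate; the CLI generates
# the real file when you create a function project).
name: receiver
runtime: go
builder: default                      # which buildpack builder to use
envs:
  - name: TELEGRAM_API_KEY            # hypothetical secret, never checked in
    value: "{{ env:TELEGRAM_API_KEY }}"
annotations:
  example.com/owner: demo-team        # regular Kubernetes annotations
```

Keeping the key reference in func.yaml while the value lives in the local environment (or a cluster secret) is what lets the project itself stay safe to publish.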
Let's take a look at what we've got here now. We've got our receiver and our processor. We know the receiver cares about messages with a type of telegram.message, so I can add a trigger to filter all messages coming from the broker that are of type telegram.message; those will be sent to the receiver. We know the processor cares about messages of type telegram.image. I'm wondering where my responder is. In some cases, we can do things like use YAML files to deploy our triggers, and I'll do that for the responder. We've got a trigger for the responder here, and I can apply it, and now that will create a set of triggers for the responder, of type image and of type text. We should now have the architectural diagram you were looking for. As you can see, the little blue circles have gone away here for the receiver and the processor. That's because we're in a serverless environment, and these functions have scaled down to zero. Knative has determined that these functions haven't been receiving any traffic, and so, in order to conserve resources, they've scaled down to zero. But we got a pretty good response here from the responder; it very quickly said, send me an image with faces in it and I will analyze it for you. So I can pull up an image here and send it, and we'll see what happens. What happened was: the Telegram source received that Telegram message, reformatted it, and sent it to the broker. The broker sent it as a telegram.image event to the processor. The processor called out to the Microsoft Faces API and sent a response back to the broker, which the responder then picked up, responding with information about what it detected in the image: a very happy person. And so I think that about covers it. This is, I think, a really fun demo.
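A Knative Trigger like the ones Lance applies filters broker events by their CloudEvents attributes and forwards matches to a subscriber. A minimal sketch follows; the trigger name and the broker name `default` are illustrative choices, while the apiVersion, filter, and subscriber structure follow the Knative Eventing Trigger API.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: responder-text            # illustrative name
spec:
  broker: default
  filter:
    attributes:
      type: telegram.text         # only events with this CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: responder             # the Knative service to deliver to
```

One such Trigger per event type is what wires the receiver, processor, and responder together without any of them knowing about the others.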
I love to do this because I think it shows some of the cool and creative ways you can take event sources and tie them together. In this case it's just a fun little application, but you can certainly see how events can drive processes within your business with serverless functions. And I think that is about it for me. So while we are seeing if there are any questions, I'm going to act as an audience stand-in and ask you questions. Okay. So you showed us a couple of commands with the CLI, right? Create, run, deploy. And the emit one, that was really cool, that we can just test it without using curl or having an event source locally. What other things can I do with the kn func plugin? Well, you can run it locally; I think I did show that. You can deploy. It has the ability to set up configuration for your application through an interactive, text-based UI. It will examine your Kubernetes cluster for, for example, config maps that may be available to you, and allow you to set those config map properties within your function configuration. Volume mounts are similar. So it makes it very easy for me to do all those secrets you were talking about, right? The configuration, the mounts, and everything. Yeah, exactly. You don't have to know anything about the structure of it, the fact that it's YAML, or what that YAML looks like. You just use this nice little interactive UI. And it is a lovely UI. That's what I was commenting in the chat: this is a really cool UI for doing this, and I am getting really tired of typing and editing YAML, so I'm certainly grateful for that. I'm wondering if you can tell us a little bit about what's coming down the pike. We're obviously looking at the latest and greatest stuff here, but we're always heading forward.
What do you see on the roadmap for 4.9, 4.10, and beyond? For functions specifically, what we see is, of course, GA around the 4.10 timeframe. One very specific thing I would like to call out for functions GA is the on-cluster build. Right now we just have the local build available, because we made local development a priority for our tech preview. The on-cluster build will let functions plug into your pipelines and all that, and most enterprises are more security-conscious and want that control. So that is going to be one of the features. The TypeScript that Lance has shown is available, but I think officially it will be in our 1.17 release, I believe, and then there is Rust on the horizon as well. We're also working on the IDE experience, so through the IDE you would be able to create a function, deploy a function, and do all this work. That is about what I can think of right now for serverless functions. For serverless itself, what we are looking into is, again, security, which is paramount, so encryption of Knative services and things like that, with service mesh or without service mesh. We are also looking into cold-start improvements, because, as you know, when all the containers have scaled down and then have to come back up, there is that delay, so we are digging into Kubernetes and related work to figure that out. For eventing, right now we offer a Knative Kafka event source, but it offers only a channel at the moment, so we are going to offer a Knative Kafka broker; that's going to be around 4.10. We are hoping for 4.9, but it will probably be 4.10. Those are the big items, other than the API gateway story and things like that. In the meantime, we are working with the OpenShift Developer Console to bring as much as we can into the console UI, the cool UI that you just mentioned.
Nobody wants to deal with the YAML, so our focus is always on doing as much as you can with the CLI and then the UI, right? So, Lance, what do you think is the coolest new feature that people should check out in the 4.8 release we've been yammering about today, that you want feedback on? Well, I didn't show it, but that interactive user experience for sorting through volumes and config maps is actually really powerful, and we should probably add it to the demo. It's such a new feature that I don't have it included as part of the demo I like to do, so that is probably the biggest thing. I personally am also just excited about the TypeScript support; I've invested a lot of effort in it myself. Yeah, and just for users to be able to use serverless functions, because even though I mentioned in the beginning that serverless is more than just functions, that's how it is perceived right now, right? So we're very excited to have that perceived serverless actually be part of our Red Hat serverless story now. Another thing I wanted Lance to show is that, like I mentioned, we don't have to worry about liveness probes or all those things; all these things get done for you. You don't need to worry about container creation; you don't need to worry about anything else. So we are really excited to put this in users' hands and collect as much feedback as we can. From my perspective, the whole serverless functions part, I'm really excited about it. Well, I hope so, because you're the PM for it. So if people want to reach out, maybe share the screen with the best landing pages to find more information or to give you feedback. What resources do they have to get started? So, one thing that I also mentioned is that we were working on an upstream project, Boson, on which serverless functions is based, right?
We have recently donated it to the Knative Sandbox community. We can share the Boson URL; we are still maintaining Boson until the move is completed. That's a good place to reach us, and it's going to be the fastest, because you can reach the whole team there. Other than that, we have our support channels that anybody can use to reach us. We also have our documentation; because it is a tech preview, we have provided documentation for it, and I'll share the link in a minute. And there is serverless-interest@redhat.com, an email address where you can send us feedback. So this Boson project, have you officially donated it to the Knative group? I am going to let Lance talk about it, because this has been really exciting for us, and Lance, after all, is our functions architect. So Lance, pause for a second. How about you share your screen and go to that GitHub repo so people can see it? That would be great. And one person, while we're waiting for him to do that, is asking: does OpenShift Serverless exist in CodeReady Containers? I believe that it does; it depends on the version of OpenShift that is available in CRC, but it is available. OpenShift Serverless is available on OpenShift 4.6, 4.7, and of course 4.8. So if you are using those OpenShift versions, it will come with them. You need to install it via an operator: install the Serverless operator, and then it will be available to you. Those CodeReady Containers are getting more popular by the minute, and I can't wait for the new one, the single-node one that's coming out. I'm really excited about that. We all are. Yeah. All right. Tell us about Boson and where it's at right now. Okay. So I want to give a little bit of history first, because I am very excited about this, and the history is part of why I'm so excited about it. About a year ago, we engaged with the Knative community as part of a larger industry-wide effort to determine how Knative might move forward with functions.
There was a lot of disconnect during that period, and ultimately that whole effort was abandoned. But we had gotten very excited about the concept and the idea of bringing this technology to our OpenShift developers, so we continued to work on it on our own. Cut to a year later: we've got this technology that we think is really great, we shared it with some folks in the Knative leadership, and it was well received; we're now going to be part of the Knative Sandbox. It will be the de facto functions experience for Knative, so we're very excited about that. Probably this afternoon, this repository will move over to the Knative Sandbox organization. Right now it's part of the Boson Project organization, but GitHub is really good with redirects, so if you go here, you'll end up in the right spot. This is the code for our CLI, but the Boson Project consists of a lot of different technologies, like I said in those slides. We have our buildpacks here as well. The faas-js-runtime is the function invocation framework used for TypeScript and Node.js functions. Parliament is the function invocation framework for Python. And then we've got some other resources available; we're researching and prototyping some Rust runtimes as well. So that's the Boson Project, sort of the history of it. It started as a result of those efforts within the Knative community about a year ago, and it has now come kind of full circle. Well, that is awesome, and that is the Red Hat way. We love to get our stuff out there in the open and keep it there. So this is great. And actually, thank you for that update, because it wasn't on my radar, so I'm thrilled about it. That'll be another thing we can pull out in a future Knative upstream OpenShift Commons briefing. Yeah, I'm working with Simon on a dedicated show on this journey itself, from Boson to the Knative Sandbox. So I think this is going to be really interesting. Yeah.
I think that would be a great talk. So again, maybe I can coerce Naina into sharing her screen and showing where, in the pantheon of Red Hat landing pages on our sites, the best place is for people to go to learn more about Serverless and OpenShift. Let me open it up; just give me a second. This is always the challenge with everybody. I think I should end every briefing with: what is your landing page, or which of your landing pages would you like people to go to to find out more? Because I know things have changed a little in how we have our documentation now. Yeah, and it will change again; change is the only constant, especially in documentation. But we do redirect people nicely, usually. So, if you can see my screen, this is the OpenShift documentation page we have right now. We come here, this is docs.openshift.com, and we select the version, because we have several versions out there; the latest is 4.7 now, as you know, and 4.8 is still in the works. Once you've selected it and you are under the OpenShift Container Platform docs, Serverless is part of it. Serverless is here, Service Mesh and all of them are here. So the Serverless documentation is part of the larger OpenShift documentation, and you have just one place to go. Once you go in, it starts with the release notes, the Getting Started guide, the Serving and Eventing parts that we talked about, and the event sources we have: what kinds of event sources, the built-in event sources and the supported event sources that we offer. And I'm going to ask Lance, if he has Camel K installed, to show us how many event sources Camel K opens up for you to connect to serverless applications. And then the functions part is here. Of course, there is this message that it's a Technology Preview feature.
And then it talks about how you can get started with it, what the prerequisites are, and how to build it. Another important thing we do is language-specific documentation. So if you are developing a Node.js function, you can see what you get from the metadata file that we provide and the boilerplate code that we are providing: what can be done, what the return values are, how you write your handler. So it covers all the programming needs you have for the code that we are providing. We are in the process of making a new release, Serverless 1.16, that is going to be on 4.8, so the docs are going to be in flux for a couple of days until we get it settled. But docs.openshift.com is the perfect place for you to start looking into the docs. We don't have a direct live feedback channel on the documentation itself, so for that you would need to come through the support channel or use the email address that I was talking about. So the one other thing you mentioned earlier: if someone's going to set up Serverless, they need to use the Operator to do so. Where do they find that? So, Lance, if you have a cluster up, you could show the Operator installation. If not: on the OpenShift console, in the Administrator perspective, there is Operators and then Installed Operators. This Operator is available on OperatorHub, so you just type in OpenShift Serverless and it shows up, and then you just click Install and it installs automatically. You have a choice of whether you want automatic updates or manual updates. We would suggest automatic, of course, so that when a new version is available it will update, but it's your choice. I can show that if you want. Okay, that would be awesome, because that will really get people started rocking and rolling here. Okay. So you should be seeing my console here. I've got the Developer view up. 
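To make the language-specific docs discussion above a little more concrete, here is a minimal sketch of what a Node.js function handler can look like. The `handle` name, the shape of the invocation context, and the JSON serialization behavior are assumptions based on the common functions boilerplate described here; check the language guide in the docs for the exact contract in your Serverless release.

```javascript
// A sketch of a Node.js function handler, not the definitive framework API.
// The exported `handle` function receives an invocation context; what it
// returns becomes the response body.

function handle(context) {
  // The context typically carries the HTTP method, headers, and query/body
  // data. Here we read an optional `name` query parameter (hypothetical input).
  const name = (context.query && context.query.name) || 'world';

  // Returning a plain object is typically serialized as a JSON response.
  return { message: `Hello, ${name}!` };
}

module.exports = { handle };

// Simulated invocation, standing in for the framework calling the handler:
const response = handle({ query: { name: 'OpenShift' } });
console.log(response.message);
```

Locally, the function invocation framework (faas-js-runtime for Node.js, as mentioned earlier) is what wires handlers like this to an HTTP server and to CloudEvents.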
To install Operators, you're going to want to be in the Administrator view, and here's the Operators section. You can look at OperatorHub to see what Operators are available. And there's Red Hat OpenShift Serverless; you can see that it's already installed here, along with all the other Operators. I've got the Camel K Operator as well; that's where the Telegram event source came in. So yeah, this is the Operator. Naina, I know you asked me to show all of the Camel K event sources, but this is actually an old cluster, a 4.6. Oh, okay, so it's not available there. It can run them, it just doesn't show them. Doesn't show them, yeah. So that is going to be part of 4.8, where it actually shows them. As you know, it's hard to get a 4.8 cluster right now. But all those event sources, the 300-plus event sources that Camel K provides, show up there. Some of them are supported, some of them are in Tech Preview, but they are available and you can play around with them. Yeah. And we have a couple of past OpenShift Commons briefings on Camel K and Kafka and all of that. So if you're looking for them, just go through the briefings list and search on Camel and Kafka; we've done a whole series of them recently. So there's lots of good content out there. And I think you guys have done an amazing job showcasing the serverless side of OpenShift and its functions. I'm really appreciative of you taking the time to do this and all the work that goes into it, and of how passionate you both are about this topic. Now I can see why you're the PM, and why you're the functions architect. I love the title. So thank you both for joining us here today, and everybody who's been out there watching, thanks very much for spending the time with us today. We will have more 4.8 updates coming, more deep dives like this one on different aspects. 
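For readers who prefer the CLI to the console flow just described, installing the Operator through OperatorHub boils down to creating an OLM Subscription resource. This is a sketch under the assumption of the standard catalog names used by the OpenShift Serverless docs (`serverless-operator` package, `redhat-operators` catalog, `openshift-serverless` namespace); verify channel and namespace names against the installation guide for your release.

```yaml
# Hypothetical Subscription manifest for the OpenShift Serverless Operator.
# The openshift-serverless namespace (and an OperatorGroup in it) must exist
# first; see the installation docs for those prerequisite objects.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless
spec:
  channel: stable
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  # Automatic approval matches the "automatic update" choice recommended above.
  installPlanApproval: Automatic
```

Applying this with `oc apply -f` has the same effect as clicking Install in the console with automatic updates selected.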
So if you check out the events calendar on commons.openshift.org you can find those. And if there's a topic we didn't cover, let me know. Reach out to us on our Slack channels or wherever you reach out to the Commons folks, and I will endeavor to find someone to talk about it. So thanks again, guys, and have a wonderful day. I'm looking forward to the 4.8 release party. Almost there. It's going to be on Zoom and it's going to be virtual, but I'm still looking forward to it. All right, guys, thank you so much. Thank you. Thank you for having us. Yeah, thank you. Bye bye. Bye.