Hello everyone, welcome to another OpenShift Commons briefing. You've seen some of our other sessions on serverless, and we wanted to bring you more workloads and more ways to use serverless on OpenShift. Today we have Mary Cochran, who is going to present on Camel K and serverless and give us an intro and a demo. Really looking forward to this. All right, Mary, do you want to take it away?

Perfect, thank you so much. As I said, my name is Mary Cochran. I am a senior app dev solutions architect. I've been at Red Hat a little over seven years, working in the integration and application development space most of that time, and I live in Charlotte, North Carolina. Hopefully today you can learn a little bit about Camel K and serverless and how the two can work together, and also how Camel K can work without serverless, if that works better for your workloads.

To start off, I figured we'd go through a little overview of Camel. Camel is an upstream project that we use for our integration technologies here at Red Hat, and it has historically been part of what we call Red Hat Fuse. Apache Camel is a pattern-based integration engine. It is Java based and built on enterprise integration patterns. These patterns are really design patterns that have been used for years and years to do various data integration and data transformation, and have become fairly standardized. With that, Camel comes out of the box with over 300 components. With these components, which you'll see on one of the next slides, you can do everything from connecting to various AWS services, to reading a file, connecting with Salesforce, and calling or creating REST APIs. So there are a lot of different components right out of the box that you can use. Camel also has built-in data transformation: you can move from JSON to XML, and if you're in healthcare, you can use things like HL7.
One of the larger use cases I used Camel on, we actually took a CSV file and kicked off a much larger process based on it. Every line in the CSV file was really its own process instance. So we used Camel to read the CSV file, take each line, create a process instance from each of those lines, and also to validate that it was in a good format and the data was what we expected, and then go from there.

You can develop these routes either in Java or in XML; it really depends on the developer's background and what type of experience they have. Both ways work. There's also native REST support out of the box, both for creating REST APIs and for calling them.

I mentioned there are over 300 components. This slide isn't all of them, but it gives you an idea: there's everything from Docker and Cassandra to jBPM and Infinispan, tons of things out of the box. If you're interested, you can just Google "Camel components" and you'll find a big list.

Let me talk about integration patterns. Some examples of what these are: splitting up items. The CSV file use case maps onto this first example, where it says split orders and then send each order — it's very similar to how we split that file by line and sent each line to a different place. In this case, we're splitting a larger order into items, and each item then either gets sent to an electronics area to be fulfilled or to another area. You can also do things like aggregate data, enrich it, normalize it, and resequence things. These enterprise integration patterns as a whole are based on the book you see on the left, Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf.

As I mentioned, you can write Camel routes in either Java or XML. On the top, you have the Java example.
Here we're taking a file from the folder data — the file is named inbox — and then we're just sending it to a JMS queue named order. On the bottom, we have that same exact route written in XML.

Taking this one step further, we have a slightly more complex route where we say: from a file named inbox, we're going to split up the body by line. Then we're going to transform each of those lines into a custom XML, and send each of those XMLs to an ActiveMQ queue named line. All of these integrations are really helpful for connecting different systems to each other, for data transformations, and for creating new microservices.

Camel K specifically is a sub-project of the Camel we just talked about. But before we get into Camel K, I want to talk a little bit about Quarkus, because that is the runtime Camel K uses. Quarkus is what we call supersonic, subatomic Java. It's Java modernized: made to have low resource utilization and a really fast start time, while still being the Java we're used to when it comes to writing our code. It doesn't have the big learning curve that learning a new language sometimes does, and it's really made for containers first.

One of the big things with Quarkus is developer joy — the ability to do live coding. Instead of having to rebuild your code every time, you can edit a file, save it, and just refresh your browser. Out of the box, there are also over 90 extensions, so you can use various libraries with Quarkus. It unifies imperative and reactive programming, and it's made for containers first with that fast boot time. Quarkus takes Java, and then there's GraalVM, which is essentially a virtual machine created to be polyglot — able to run different types of languages and runtimes.
And so if you're using GraalVM with Quarkus, you can actually compile the code down to a native binary, similar to the way you might if you were writing in C, which results in very fast boot times and low memory utilization. It's a great fit when you're running serverless workloads.

A little bit about that developer joy — the cartoon says: "Wait, so you just save it and your code's running? But it's Java." "I know, right? Supersonic Java for the win." This is really a game changer for Java developers, and it takes the speed of development to a new level.

When it comes to resource utilization and startup time, if we compare a traditional cloud-native Java stack — something like a Spring Boot microservice — versus Quarkus plus OpenJDK, versus Quarkus plus GraalVM where we compile down to native, the startup time and memory utilization are drastically different. We're going from 218 megs, down to 130, down to 35. So even if you don't compile down to native and just decide to use the Quarkus libraries with your normal OpenJDK, you're still getting a significant cut in memory utilization and startup time. For time to first response, we're going from 9.5 seconds, down to 2.5, down to sub-second. A lot of bang for your buck just for switching runtimes.

So Camel K — how do these things fit together and work along with serverless? Camel K is a sub-project of the larger Apache Camel family, and it was really born on Kubernetes. It's made to run on Kubernetes, to run in containers, to be super lightweight, and to work well with serverless. It has an operator, and it actually started way back in 2018. When it comes to running on Kubernetes, you can run it on vanilla Kubernetes, you can run it on OpenShift, which you'll see today, and you can also run it on OpenShift with Knative and serverless.
So I'll actually show both options two and three today, and you'll see some of the differences between them.

When it comes to Camel K versus Camel Quarkus versus traditional Camel: here we have the Camel 3 projects overall. There's Camel K, which is made for Camel on Kubernetes and Knative. There's traditional Camel, which quite honestly gets utilized inside all of these other projects. There's Camel Quarkus — you can actually run Camel on the Quarkus runtime without using the Camel K technology at all — plus the Camel Kafka connectors, Camel on Spring Boot, and Camel Karaf.

If we look at the performance of Camel K — and this is without even going into Knative and the serverless technology — Camel K is the blue bar. Compared to building a binary on OpenShift through source-to-image, we get higher deployment and redeployment times, and if that binary lives somewhere remote — say we're deploying from my laptop to a remote OpenShift instance — even higher. What's interesting is that the initial Camel K deployment still takes a few seconds, but the redeploy is pretty much instantaneous.

So how does this work with OpenShift, Camel K, and Kubernetes? We've seen some Camel routes. Traditionally, Camel runs inside a Java project — whether your routes are in XML or Java, there's usually some sort of project structure around them. Camel K changes that: at this point, you only have your integration file. You just have a file with your Java code — you can also use Groovy, JavaScript, or XML. Today I'll show you mostly Java examples, because that's what I'm used to; I'm a Java developer as far as my background goes. So you have these files. In this example, we're saying: from a timer, every second — with a route ID, which is nice for identification and logging purposes — we're going to set a header to google.com, send the request, and then log it.
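For reference, that timer example looks roughly like this as a single Camel K integration file. This is a sketch, not the exact slide contents — the route ID and header name are my assumptions — and it is not standalone-runnable: the Camel K operator supplies the Camel runtime and resolves the components at deploy time.

```java
// Sample.java — one file is the entire Camel K integration;
// there is no Maven project around it.
import org.apache.camel.builder.RouteBuilder;

public class Sample extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:tick?period=1000")      // fire every second
            .routeId("google-timer")        // id shows up in logs/metrics
            .setHeader("MyHeader")          // illustrative header (assumed name)
                .constant("https://www.google.com")
            .to("https://www.google.com")   // send the request
            .to("log:info");                // log what came back
    }
}
```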
Camel K also has its own CLI tool, kamel — Camel with a K — and you use that to run your integrations. You've probably seen people use the OpenShift CLI tools before; we'll use those as well. Once I'm logged in with the OpenShift CLI — so oc login, and I'm logged into my OpenShift cluster — the Camel K CLI will use that session to say: OK, I want to run this integration on this OpenShift cluster, in this project, and deploy it from there. Then, using the operator, it knows how to deploy it, how many pods it needs, how to handle redeploys, all of that sort of thing.

It also resolves dependencies for you. In this example, there's no dependency beyond what we call camel-core. The camel-core library has a lot built in, but say you're working with a database and you need a MySQL connector — you'd need a specific Camel dependency. In the CLI, you would just add --dependency= and then the artifact information for that dependency. This also lets you use, say, another library that you've written that holds all of the data structures you need. You make that library available in something like a Nexus repository, give your operator access to that repository, and then when you use the kamel commands, the operator knows where to find those dependencies and can still resolve them.

When you deploy from that file, you can also do live updates. There's a dev mode you can run, or you can just redeploy — as I said, it's pretty much instantaneous, less than one second, so that's normally the approach I take. If I redeploy, it updates the Integration custom resource that was created via the operator, and then the operator updates the running pod: it creates the new one, scales down the old one, and your code is redeployed.
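The CLI flow just described might look something like this — a sketch, assuming a cluster is available; the token, server URL, project name, and artifact coordinates are all placeholders:

```shell
# Log in with the OpenShift CLI first; kamel reuses this session.
oc login --token=<token> --server=https://api.example-cluster:6443
oc project camel-k-demo

# Deploy the integration file; the operator builds and runs it in-cluster.
kamel run Sample.java

# Extra libraries are declared inline, e.g. a MySQL connector:
kamel run Sample.java --dependency=mvn:mysql:mysql-connector-java:8.0.28

# Dev mode: stream logs and redeploy automatically on every save.
kamel run Sample.java --dev
```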
As we look at Camel K alone, and then move through this discussion toward the serverless components, we can think about the architectural evolution toward serverless. Traditionally, we've had servers or monolithic services, then a move toward microservices that are autonomous and loosely coupled — single purpose, stateless, independently scalable, and somewhat automated — and then a further move toward serverless, which is more single-action and ephemeral. Camel K at its root sits toward the microservices edge of that spectrum; it doesn't have to go all the way into serverless. So the first example I'll show you isn't using all of the serverless aspects yet, and the second example will be full-blown serverless.

Once again, we'll have this file with our integration and deploy it. The operator says: OK, you want to run this — is there a Knative profile? If not, it's a regular deployment; that's what you'll see first. If yes, then it serves it as serverless, with Knative Serving.

Some use cases for Camel K, whether you're using serverless with it or not: integration on demand. Say you have a batch job that runs once a day at 2 a.m. — using Camel K with serverless is a great implementation for that. It only uses your resources when you need them, it scales up, it's lightweight, and it can really save some money that way.

Let's take a look at what Camel K looks like on OpenShift. Lost my mouse for a second. So here we have an OpenShift cluster — an OpenShift Container Platform instance I've been using. As I mentioned, Camel K is installed via an operator. I previously installed it, but if I hadn't, I could come into OperatorHub and search for Camel K. You'll see we have the community operator, a Knative-specific one, as well as the Red Hat Integration one.
So here I've installed the Red Hat Integration Camel K operator — if I go to my installed operators, you can see it. And here you have the provided APIs: an IntegrationPlatform is needed to run any Camel K integrations, but it will automatically be created as soon as you deploy your first Camel K integration. The operator page also gives you some descriptions of how to go about installing things or writing your first integration, if this is new to you.

In this case, let's go to Projects — I've created a project for my Camel K workloads. While that's taking its time, I'm going to go ahead and copy my login command, so that I can log into OpenShift and use the CLI tools. Here I just have a terminal, and first I'll paste that login command. Let's go back over here — not sure why that's not loading, let's give it another second. So here we have the Camel K project; I'll select it to use. There we go. Here we have the operator running as a pod, and I've also previously deployed this simple REST service. If I click on it, there's the route where it's available. There's nothing at the root context, but I did deploy something at /hello.

So what if we want to update this Camel K integration? Let's take a look at the file. This is the file I have for this simple Camel route. I have a REST API deployed at /hello via a GET request, I'm setting the header to return plain text, and I have it transforming to a simple response of "Hello World". This one looks a little more up to date because it already has a few more exclamation points.

Sorry to interrupt really fast — we have a request to make the terminal font a little bit bigger, if you could do that. Thank you.

Yep. So here I'm going to update it to say "Hello Mary" instead, and save it. Now our file is saved.
Go back to the topology view. Here we have the details: one pod. Let me make this a bit more narrow — keep an eye over here on the right side as I do kamel run on the simple REST file. This is going to redeploy the integration, so we should see it over here pretty quickly. We're already starting to deploy... and it's done. Come over here, refresh — and we see "Hello Mary". Done. That redeploy is really quick compared to building a whole new Maven project, creating the image through source-to-image, and going through all of that. It's much closer to instantaneous.

Let's look at what happens if we take the Camel K technology and add in serverless and Knative. First, what are serverless and Knative? Serverless is an execution model where code is run by dynamically allocating resources. You only utilize what you need, and you only pay for what you need, and it removes some of the traditional always-on server aspects — you can actually scale down to zero. As I mentioned, if you have some sort of batch job, you can scale down to zero and only scale up when you need to run it. Knative is the core of Red Hat's serverless offering, and it has two components: Knative Serving and Knative Eventing. For today's purposes we're going to focus on Knative Serving, which gives you that ability to scale to zero and do request-driven compute, scaling up as needed. But there is also Knative Eventing, more of a message-style integration.

Before we go any further, I do want to take a minute and actually deploy those operators. Here I have a different OpenShift cluster, and the reason is that when you install serverless, it installs cluster-wide. So I'm going to go over to OperatorHub, search for serverless, and here we have Red Hat OpenShift Serverless. Click that.
Go ahead and install it. We're on a 4.7 cluster. It says the target namespace doesn't exist, but it'll create it, so that's all good. While that's installing, we go back to OperatorHub, because we also still need the Camel K operator. Before, I had pre-installed it; this time you get to see me install it. You can see the capability level over here on the side — this one is able to do basic install and upgrades as well as manage the full lifecycle. Here I actually want it in a specific namespace, so I'm going to create one; I'll call it camel-k-serverless, and then install it.

Go to Installed Operators — you can see my serverless operator has installed. Over here, we're also going to instantiate the Knative Serving instance; that's what gives you the ability to scale down to zero and scale up accordingly. I'm just going to look at the YAML view. We also need to put it in the knative-serving namespace — if you try to create it outside of that, it'll just give you an error and tell you it has to be in the knative-serving namespace, so if you forget, that's also OK. Go ahead and create that, and back here, go back to that serverless namespace.

OK, so now we have our serverless operator installed and our Camel K operator installed. If I go to Projects, to the one I created, and look at the workloads, we see the Camel K operator is here and running. Since this is a different OpenShift cluster, I'll grab that login command again, go back to my terminal, log into this new cluster, and select the different project — camel-k-serverless. Now we can use that same kamel run command to deploy the same integration. Since this is the first time, I'm just going to leave it to get created while we talk a little more about serverless, and then we'll jump back and look at how it all works together. So: we've deployed this integration.
It's a REST API with Knative Serving. So there shouldn't be a container running when no one needs it, but as a request comes in, the pod will scale up, return the response, and the pod can also talk to various external systems if it needs to. From there, if more requests come in and there's higher load, it needs to scale up more, so you get more pods. It scales straight up and back down as needed.

Some benefits of serverless — you've heard about OpenShift Serverless before, so you've probably heard some of these, but to reiterate: you get faster time to market. You aren't trying to arbitrarily predict how many resources you'll need, which can lead to lower operational costs. It can also reduce your packaging and deployment complexity — you saw with Camel K that we're deploying individual files, versus having a more complex project structure and then worrying about various dependencies and lots of regression testing, that sort of thing. And you have the flexibility to scale on demand.

As for what that looks like: no more guessing how much compute you might need. Without serverless, you're often over-provisioned most of the time, and then under-provisioned around, say, the holidays when you have a spike. With serverless, you simply scale as you need to.

Jumping back to what Camel K with serverless and Knative looks like: we saw Camel K running by itself, which falls more into microservices — though leaning toward serverless, as microservices go — and it's using Quarkus under the covers. When we add in Knative, so Camel K, Quarkus, and Knative all together, we're firmly into serverless territory. As you can see, OpenShift covers all of this, and if you've heard about some of our streaming technology, or AMQ Broker for messaging, those can also span across the spectrum.
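Under the covers, when the Knative profile is active, the operator materializes the integration as a Knative Service rather than a plain Deployment. The generated resource looks roughly like this — a sketch, not what the operator literally emits; the name, image reference, and scale bounds are illustrative:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: simple-rest
spec:
  template:
    metadata:
      annotations:
        # scale to zero when idle, cap the fan-out under load
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: image-registry.example/camel-k/simple-rest:latest
          ports:
            - containerPort: 8080
```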
So let's take a look at serverless in action. And also, going back here... it looks like we are still deploying. OK. When I kicked off that kamel deployment earlier, what it's actually doing — it's hard to see full screen — is downloading the various dependencies from Maven. That can take a minute depending on where those dependencies are; this is a demo cluster, and it looks like it is taking a minute today.

While we're still downloading, look over here: since I installed the serverless operator, we now also have this Serverless menu. Going down and clicking on Serving — right now, we haven't successfully deployed anything yet (well, we're trying; it's taking a little longer to build than it normally would), so there are no services found at this point. But when you deploy them, you'd see them here. Then Revisions — if you deploy a second version of something, that shows up under Revisions — and the same with Routes. As these things get deployed, they'll start to show up under the serverless routing, and you also still have your standard networking menus for all of those sorts of things.

But here — looks like we actually hit an error. We might have to jump back to our other OpenShift cluster. We'll give this one a minute, and in the meantime, I'll show you what you can do if, say, you already have your Camel K deployment, like we did over here, but you want to add the serverless operator: you can still install it after the fact.

While you're doing that, I just want to remind everybody to drop your questions into the chat. We already have some coming in, so please add your questions. Thanks, Mary. And I'm hearing in the chat that the Red Hat Maven repository is down at the moment.

Well, that would explain what I'm seeing. Thank you to whoever said that.
Thanks. At least we got that first one deployed. Right — do you want to answer a question right now? Yeah, let's do that.

All right. So: is there a way to migrate from existing ETL tools like SSIS to Camel K and serverless?

You can always migrate; how much of a — I don't want to say pain, but how much of a process it will be varies depending on the tool. We don't have any tooling today to migrate from those specifically. Some tooling we do have in professional services is for TIBCO BusinessWorks — we have tooling to migrate from that to traditional Camel. And the path from traditional Camel routes to Camel K is actually pretty straightforward: most of the time your routes stay pretty much the same, you're just isolating them into their own files, and any dependencies, as I said, you deploy into some sort of repository — Nexus, Artifactory, wherever that may be — so they can be used as dependencies.

Thanks. And what's your preferred method to migrate?

With that type of thing, honestly, you're probably looking at a lot of rewrite, just to be candid.

Thanks for that.

Let me check back on this. Same thing — let's see if we can get this Knative Serving going. I've now installed it on the original cluster that didn't have serverless installed, so it's working on setting up the Knative Serving instance on that cluster; as you can see, the menu has now shown up. Going back to this project, I'm actually going to redeploy this so it deploys as a serverless service. Let me go up to my last login command — this is back on that original cluster, since the other one couldn't pull those dependencies. Then we're going to switch this back to "Hello World", save it, and redeploy it to that original cluster. And since we have serverless running, this time, when it deploys — it's kind of hard to see in this view...
...but there's a little Knative symbol that shows up to indicate that this is a serverless deployment now. It starts off with a single pod — it assumes that it will need to be utilized. I'll hit it now just so you can see that the route exists and does what it needs to, but then we'll give it a few minutes so that it can scale down to zero. So if there are other questions, bring them at me.

Well, I was wondering — Roland is on right now, and he is one of the serverless architects. Roland, I was wondering what you thought of this demo as we're going through it, if you are available.

See, this is where they should warn people before I come on. I mean, the fun part is really when it gets to scale down to zero, and then I can show you it scaling up. You just started echoing, and I don't know why — is that...?

Yes, sorry, I just found the mute button, sorry. Yeah, I think it's super impressive. I really like all the Camel stuff, and as you mentioned, seeing the autoscaling mechanisms now — I'm really looking forward to that as the next step.

Well, just in time: it looks like it just now autoscaled down to zero. That was maybe a minute of it sitting there idle. And then if I come back here and hit that URL, give it a second... we jump back over here and you can see it scaling up. So even when it scales down to zero, you can still hit that URL with a REST request, and it will scale up and still get you the response. You obviously want to keep in mind that there was a little bit of a delay — if it's something where you need an instantaneous response in a web browser, then you're always going to want at least one pod up.
But if it's more like those batch job use cases I mentioned, where it doesn't have to be instantaneous — you just need it to come up at a certain time of day, do its thing, and shut back down — this can be perfect. I'm glad I got that working even though our Maven repositories are not cooperating today. So, any other questions?

Not yet — you've been answering them as they come in. Go ahead.

Perfect. Well, as far as content goes, this is what I had around Camel K and serverless, but I'll be here if other questions come up, or if you think of something else; we'll be on the line.

Thanks, Mary. I love those demos, because I had not seen Camel K in action yet, so that was fantastic. What use cases are you seeing the most as you've been working with customers?

Yeah — as I said, batch jobs are a big one. For Camel K outside of the serverless aspect: a lot of customers already have microservices they've implemented with Camel, but they're in a Maven project structure, and maybe there are only a few Java classes in each microservice — they're still pretty small. So a lot of them are breaking those out into just the Camel K integration file, and then adding the dependency so it has, say, the model they're used to for all the data objects. Things like: if it just needs to read from a database and send that to a queue, or take from one queue, do some transformation, and send it somewhere else. A lot of the integrations that are pretty basic, and that customers already had as microservices, are now getting broken off even further into those individual files. And a lot of that has to do with the benefits you saw — that fast reload time when I redeployed was pretty much instantaneous.
That's a lot quicker than the traditional deployment, where I'd have to rebuild the whole Maven project and then rebuild the image.

Yeah, that was pretty impressive. I think we're getting spoiled with all these new technologies, right?

I know, right?

So Ali asks: can Kamelets run as Camel K serverless too?

Yes, they can. Kamelets are a newer part of Camel that are part of this larger Camel ecosystem, and a lot of the Kamelets — I was looking at it the other day — are actually built in as well now. I forget where that was.

Can you briefly describe what Kamelets are, for others that may not know?

Yeah. A Kamelet is basically — if you look at a route, there were different components for integrating with different pieces, and some of those pieces of code have become somewhat standard. A Kamelet is like that piece of code made reusable, so that instead of writing it over and over again, you can just use it — for connecting, say, to Amazon S3. That's a pretty standard thing, where you say: I just need to go to this S3 bucket with this authentication. You supply those parameters instead of writing the full Camel route. It's a reusable chunk of code taken a step further — Camel is already a domain-specific language when you use it in Java, so it has already abstracted some of this away, and a Kamelet takes that another step.

Thank you. And let's see: is Camel K usable with 3scale as an integration?

With 3scale — I know there was a component in the works for Camel with 3scale; I don't know if there is one for Camel K specifically. But any REST API that you create with Camel K, you can still manage with 3scale and do those aspects. As far as the component itself, I'm not sure whether that has made its way into the Camel K project.

Sounds like a good follow-up that we can do.
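To make the Kamelet idea concrete, consuming one looks roughly like this: a KameletBinding that wires a catalog Kamelet to a log sink. This is a sketch — the Kamelet name and its properties are assumptions based on the upstream catalog, so check the catalog for the exact parameter names:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: s3-to-log
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source          # reusable, pre-built connector
    properties:
      bucketNameOrArn: my-bucket   # just fill in parameters,
      accessKey: my-access-key     # no route code to write
      secretKey: my-secret-key
      region: us-east-1
  sink:
    uri: log:info                  # could equally be a Knative channel or broker
```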
Can we add tracing to Camel K services?

So, Camel itself has things like wiretaps, and you can add tracing to Camel K — for example, with OpenShift Service Mesh, utilizing Istio, Jaeger, and Kiali to get distributed tracing and be able to see where things are going. You can definitely add that on top of this. It doesn't really work any differently than it does for other services within OpenShift; it's just that separate installation piece.

Ali says: Jaeger kind of tracing.

Yep, so you can add that.

So Roland, you added some great comments in the chat. Do you mind hopping on again?

Sure. So actually, I just wanted to mention that Camel K is currently a hot topic in OpenShift Serverless and Knative itself. We are currently working on integrating Kamelets as a source for Knative. It's integrated at the CLI level — there are kn plugins; kn is the CLI tool for managing serverless applications from the command line — and there's also integration in the developer console coming. So I just want to mention that Kamelets are a really interesting technology, and I can really recommend looking into them, because they make things much, much easier. They're a very good fit for Knative Eventing.

So Mary, this is the perfect opportunity: if you had a wish list for Camel K and serverless — what are you seeing customers asking for? What are you seeing in the field? Do you have insights you'd love to share with the rest of us right now?

I mean, anything that has to do with our integrations with Kafka, and just getting that to a very streamlined place — which I know we as an organization are working on, figuring out the best way to do all of that.
But that's such a big story right now in the field: we have all this data, we decided we want to stream it with Kafka, but then we have all these things to connect to, and Camel tends to be a piece of that too, with the data transformations and REST APIs. So it's about getting that to a point where it's, I shouldn't say usable, but user friendly and easy to use, because I know that's something we run into with competitors a lot as well. That's probably the biggest area. And Roland, what do you see happening with Camel K integrations other than the Kamelets and the OpenShift Developer Console integration? Yes. There are tons of components, and I think only a subset of them is currently available as Kamelets. I would really love to see that portfolio increase, and I see there's a lot of work going on. I'm super curious, because I think it's one of the easiest ways you can create an eventing source for OpenShift Serverless and Knative, and we just need to make it more popular. Just imagine if you could use any of those 200 or 300 Camel components out there as a source; I think that would be super awesome. So we're working on that task, step by step, and I think we are on a good path there. Yeah, I'm really looking forward to that. This is pretty exciting, all the integrations. I see it on the inside of Red Hat and we all see it, but communicating that to your customers and partners matters too. A follow-on question to that: how do they give you feedback? How can they reach back to say, hey, we really want this integration? What would be the best way? Sure. Honestly, the best way is twofold.
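The Kafka story Mary describes, with Camel acting as the glue on either side of a topic, can be sketched as a binding whose sink is a Strimzi-managed Kafka topic. A sketch under assumptions: the `timer-source` Kamelet and the Strimzi API group are real, but the property values and the topic name `orders` are illustrative.

```yaml
# Every 10 seconds, emit a JSON message and push it into the Kafka topic 'orders'.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timer-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      period: 10000
      message: '{"status": "new-order"}'
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta2
      name: orders                   # hypothetical Strimzi-managed topic
```

Swapping the source Kamelet for a Salesforce or AWS one is the point of the model: the Kafka side of the binding stays unchanged.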
One, talking to your account team if you know them and letting them know, because then they can take the appropriate steps to create requests for enhancement and open JIRA tickets. If you're comfortable doing that, you're free to do those things yourself as well. And then support tickets are actually really big for helping us track which customers want which features. The more customers say they want feature A, the more likely we are to do feature A, versus feature B that was requested by, say, one customer, or maybe not even a Red Hat customer but someone from the community. If we have a bunch of customers asking for feature A, we know to prioritize it, and I know that's a large part of how we prioritize features. From the account-team perspective, any information you can give them, such as "we're looking for this feature or this connector so that we can do this, this, and this," really diving into a use case, can be very helpful, because we can then take that to engineering and to the business unit and talk about the use case we're seeing. That way, if there is a different way people are approaching that use case, we can help you with that, or if you really do need the feature, sometimes that can accelerate its development as well. And is there a community? I know you were just addressing some of that, but the upstream is camel.apache.org, right? Thanks, Chris. Yep, that is correct. It's a very active community with various mailing lists, and if anyone is looking to get involved and become a contributor, that's always awesome. A great way to start, once you've signed up and filled out all the appropriate forms, is to fix an issue or a typo you find in the documentation; that gets you used to the process as well.
Thank you. And I know the Knative upstream community is also really active. So Roland, are they discussing Camel K integrations upstream? Do you have insights you'd like to share there? Yes. At the moment there are certain Knative repositories dealing with a Camel K source. What I'm working on, in the client working group I'm associated with, is the plugin side for Kamelets. And we're not only thinking about Kamelets as sources, but also Kamelets as sinks, so you can leverage a Kamelet to take incoming CloudEvents and then perform a certain action. Both aspects are currently on the table for us. Christoph Deppisch, a contributor on the Camel team, is working on that, and we are working closely with the Camel community over this channel. And for those who don't know, when we talk about source versus sink: a source is where the information comes from, and a sink is where you push it to. Thanks for that clarification. The way I first understood it was with the Kafka integration: Kafka has source connectors and sink connectors, where you pull from, say, a database and then sink the data into another database, something like that. But here you see these AWS integrations, or Salesforce, or JIRA; that's just a list of some of the smaller Kamelets that were installed off the bat when I installed the Camel K operator. I know this is always a hot question, but it says Tech Preview. When do you think it might go GA, no pressure, for Kamelets specifically? I don't know; I would be hesitant to give a timeline on that. Camel K in general should be within the year, very easily. Because we are on 1.3.1, it should be at any point now.
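The source-versus-sink distinction Roland and Mary describe maps directly onto the two ends of a binding. A hedged sketch: here a Kamelet plays the sink role, consuming CloudEvents from a Knative channel; `log-sink` appears in the upstream Kamelet catalog, while the channel name `orders` is hypothetical.

```yaml
# Source: a Knative channel delivering CloudEvents.
# Sink: the log-sink Kamelet, which logs each incoming event.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: orders-to-log
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: orders                   # hypothetical channel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: log-sink
```

Compared with the S3 example earlier, only the roles are swapped: the same Kamelet mechanism handles either end of the flow.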
People have been using Camel K in Tech Preview, Developer Preview, all that, for over a year, which tends to be a good indicator. Nice. That's good to see that adoption, and it was really good to see the list of Kamelets, because there are so many and you're still working on more, so that list just keeps growing. All right. If anybody has any more questions, now is the time. Otherwise, Mary, is there anything you would like to end on, some last parting words for everybody out there? Sure. If you have access to an OpenShift cluster, definitely give serverless and Camel K a try. Play around with it, see what you like, see if it works for you. And if you don't have access to an OpenShift cluster, you can always go to learn.openshift.com and play around with some things, or talk to your account team about getting access as well. We're always happy to set up workshops, demos, all that sort of stuff. Thank you very, very much. This was a great intro and great demos, even with Maven not working right at the moment; we figured it out, it's all good. And Roland, thank you for jumping into the discussion. And thank you to everybody saying thank you in chat too. With that, we will end. Until next time, everyone. Chris, can you see us out?