Good morning, good afternoon, good evening, and welcome to another edition of the Developer Experience Office Hours here on OpenShift TV. I'm Chris Short, executive producer of OpenShift TV and technical marketing manager here at Red Hat. I'm joined by two of my favorite Red Hatters today from our developer evangelism team: Ryan Jarvinen and the one and only Natale Vinto. How are y'all doing today? Hey, doing great. As usual, we would love to hear your thoughts on topics for the show. If you have anything you're super interested in seeing, definitely mention it in chat. We also have a feedback form, and I may as well paste a link; I've got a new shortened URL for it. Actually, I'm not even in chat yet, let me join the chat here. Alright, I just posted a link to a bit.ly, DevXFeedback, camel-case capitalization, so capital D, capital E-X, capital F for Feedback. Give that link a click and let us know what topics you'd like to see here on the Developer Experience Office Hours. Natale, great to have you here today to help cover today's topic. We're going to go through some serverless scenarios from learn.openshift.com. Feel free to drop questions relative to that topic, or other topics, into chat, and we'll do our best to handle them. Thank you, it's always a pleasure to join you in this Developer Experience Office Hour. Good morning and good afternoon, everyone. Here it's around 5 p.m.; I guess the hours change on your side, Ryan. I think it's 8 a.m. for you, so early. Yeah, I mean, it's 11 here and it still feels early for me sometimes. So we're here to talk about... sorry, hang on, two things. Ryan, I can make you an even fancier shortened link; we can talk after the show.
Oh, cool. And then I want to point out that Joel Lord is in the house, everybody, so if you've got front-end framework questions, tee those up; now's a good time for that. And also a shout-out to JP Dade, who has been in firmware hell for the past 14 hours. Thank you for joining us; I hope this is a good break for you. So what are we here to talk about, folks? Right, so what is this serverless thing? That's the main topic today. I've personally tried to scope my area of focus on Kubernetes, and a couple of years ago I had a lot of folks talking about Istio to me, saying, hey, Istio is the hot new thing, you really have to get up to speed on Istio to see the future of Kubernetes. And then I also heard from other folks that Istio is maybe not fully stable: there are good pieces to it, but there are also areas that are still evolving, right? Like any good project. Yeah, like any good project. So I tried to focus down on just what is relevant and useful to me under my traditional developer scope of responsibilities, took a look at the Kubernetes API, and tried to figure out the minimum number of resource types that I need to be informed of. Deployments was one of them: I need to deploy a container, so I need to know what Deployments are. I need to route traffic to the container, so I need to know what Services are, right? And then for other higher-order concepts that were optional or not always included, I kind of said, I'm going to gloss over this until it becomes more of a mainstream, usable-for-everyone type of topic. Knative was in that group of things: unless it's applicable to a major proportion of the audience, maybe I can skip it short-term. Well, I have recently come up to speed.
Well, partially up to speed on Knative as a field of developer-facing work, and I think there is a lot in there that really extends the experience for developers. There are new resource types that you're going to need to learn. Also, a lot of these resource types may not be available on a stock cluster, so you may need to install Knative or an operator in order to get those abstractions into your cluster. Just some basics there. Yeah, go ahead, Natale. Just to add to that, it's also another methodological approach, right? With serverless, we have this function that exists for a limited time, so we are in charge of taking control of the input and then taking the value from the output, but all those functions are executed best-effort. It's another approach: we can execute multiple functions, multiple applications, in this serverless way in parallel, but we don't expect hard reliability guarantees, and we don't expect real-time behavior from this stuff, right? It's an on-demand approach, reacting to events, but on a best-effort basis. So it's not the solution for every use case, but it's really important in modern workloads; if you think about IoT or multi-cloud, it's terrifically important to deal with a framework and pattern like serverless. Yeah, and for folks who are looking to get up to speed on this topic, I have a link to a past OpenShift Commons video that I'm going to drop into chat. There was an OpenShift Commons briefing on the Knative project featuring a couple of the upstream contributors: Paul Morie, Roland Huss, Matt Moore, and Scott Nichols. A cross-company collaboration, so a lot of folks, not just Red Hatters, on this Commons briefing.
I just watched part of that this morning. Really good content there if you're looking to get up to speed on Knative and learn what's involved; I definitely recommend taking a look at that video. Absolutely. So JP brings up a great conceptual question for us in chat: Knative is serverless, but what is the OpenShift or Kubernetes comparison to Lambda? Does OCP have a FaaS, a function-as-a-service feature? And that's a good line of understanding, right? FaaS both is and isn't serverless at the same time. Ryan or Natale, do you want to dive into that a little bit? Whoever wants to, feel free. Yeah, if no one else is going to jump in, I'll... I mean, Natale was unmuted, so... Oh, go for it, Natale. I was just joining that discussion. Okay, go ahead, Ryan, sorry. I was just going to say that this video totally covers this topic, and I would not have been able to elaborate the differences between functions-as-a-service and Knative Serving if it were not for this OpenShift Commons briefing. I'm sure I could have found this information in plenty of other places, but they do a great job of splitting up what belongs on the functions-as-a-service side of the line and what belongs on the Knative Serving side of the equation.
So based on that video, I will attempt to summarize for that group, but I think the difference was: Knative Serving provides a lot of the underlying abstractions on the Kubernetes API, but it doesn't have functionality for automatically spinning up a service when it gets external traffic from outside. That's something that might need to be added as part of your platform technology, or you may have to code it up as part of your solution set. Other features in that group of not fully supported or not fully included in Knative are things like... I said scaling from zero to one, but not just to one: scaling up to maybe a hundred replicas, or however many you need based on the external traffic demands, and then also scaling back down to zero and idling the service when it's no longer needed. That's one example of something that is pretty common in a function-as-a-service or platform-as-a-service style solution. But usually function-as-a-service is a bit higher level and a bit closer to PaaS-style functionality, and Kubernetes is, by scope, trying not to be a full-blown PaaS; it's leaving those details to be implemented by higher-order solutions. So you can get a lot of it via Knative. If you have the Knative, or Serverless, operators installed, OpenShift can provide a certain amount of that, and we should see some of it in the demo today. Cool. And I will toss out there... go ahead, Natale. No, I was just adding to that, because there was the question: what's the difference between Lambda and OpenShift Serverless or Knative?
So that bit of Lambda is the function-as-a-service that Ryan was mentioning, and OpenShift Serverless is going to have that layer too, with a subcommand inside the kn CLI. So you will be able to do kn faas and launch a function from a source code base; you can create a container and run it in a serverless way. That would be the piece similar to AWS Lambda. But as Ryan mentioned, Knative is kind of the backbone of serverless, while function-as-a-service is something on top. We're going to put this something on top, and Knative is agnostic, right? It can talk with many function-as-a-service offerings as plug-ins. We're going to add this bit with kn faas to have function-as-a-service inside Knative. Cool. Another detail that has architecturally been moved out of bounds for Knative is the ability to build your source code into a container image. Way early on in the Knative story, that was in scope, something you could do using Knative. Now a lot of that functionality has migrated over to Tekton, so you can run your build in Tekton and then have Tekton hand off to Knative to serve up the resultant container image. We're trying to use the appropriate upstream abstractions, with a community of maintainers around them, and not duplicate functionality that exists in other parts of the cloud-native ecosystem. Yeah, and there are so many parts and pieces of the cloud-native ecosystem: if kn faas doesn't work, there's probably something else out there that does work for you.
Trust me. Another interesting piece that is really an attempt to align with upstream needs: earlier on, in order to do traffic splitting between multiple Knative services, you would have to install Istio as a requirement, right? Now a lot of that is behind a compatibility layer where you can plug into Istio and use Istio for traffic splitting, but there are also several other traffic-shaping providers. They're working on support for Ingress v2; I think that's alpha or beta currently, but it's in progress. There's a variety of traffic-splitting implementations, and Istio is no longer a strict requirement: you can use it, but it's not a requirement of Knative. Right, and the name of the new ingress for doing this is Kourier. This is the new tool used instead of Istio, so Istio is not a requirement, and you can use Kourier out of the box to get this functionality for routing traffic into your serverless application. Well, do we have any other questions in chat? If not, I'm going to start a screen share. Okay, so hopefully you all can see a desktop. I'm going to post a link into chat in case anyone else is interested in joining. We're at learn.openshift.com/developing-on-openshift/serverless, with hyphens between "developing on openshift".
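For the curious, the Kourier hand-off mentioned a moment ago is a configuration detail in upstream Knative Serving: the ingress implementation is selected in the config-network ConfigMap. This is a sketch against upstream Knative only; OpenShift Serverless wires this up for you, and the exact key name may differ between Knative releases, so treat it as an assumption to verify:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Select Kourier (instead of Istio) as the ingress implementation.
  # Verify this key name against your Knative Serving release.
  ingress.class: "kourier.ingress.networking.knative.dev"
```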
But you should be able to find it from learn.openshift.com. Yeah, we have a whole serverless folder there with a lot of different topics in it. There's an introductory serverless scenario, which is what I'll cover today, and then there are a couple more sections covering Camel K, which gives you some nice extensions to the eventing solutions in serverless. Let's see, this says estimated time 30 minutes; we'll see how long it takes us. And if you notice any issues with the content as we're going through it, let us know. We're constantly in the process of updating this content and making sure it is relevant, recent, and accurate. This is currently using OpenShift 4.4 for this scenario, so it's not our absolute latest, but it should still cover a lot of the concepts. So I'm going to hit Start Scenario. Uh oh, capacity limit; let me reload and see. I should have told everyone in the audience to hold off until I got a session. I posted the link... I don't think we have that many people in the audience that it would matter. Yeah, I got a capacity limit warning yesterday as well on some other scenarios, but then this one magically worked for me, so I guess I lucked out yesterday. Looks like I've got a session available. This does take a couple of minutes to start up; the work it's doing in the background is installing the Serverless operator, so it takes a little bit of time. While we wait, we actually have some screenshots here to demonstrate what's happening in the background, and if you're interested in seeing a lot more detail, you can log in with admin credentials and then go and install the operator. So these are the steps you would need to do as an admin in order to make the OpenShift Serverless operator available to developers. This is just done once, and it's installed in all namespaces.
Oh, it looks like it's just one namespace for this example... no, let's say all namespaces. Right, but it creates its own namespace for its resources, right? Yeah, that does make sense. So, another detail I picked up from that OpenShift Commons summary: if you are using a Kubernetes cluster that uses namespaces as part of its role-based access control and its multi-tenancy, then Knative works pretty decently with that type of approach for multi-user support. You can use namespaces as buckets for your Knative stuff, associate people with the RBAC rules, and hopefully that splits up access control in a reasonable way. So we're almost to the end of the install here. Someone asked, can I by chance increase the font size on the right-hand side? Yeah, that's a good opportunity for me to mess with the font while we're waiting. Eventually this will flip over to ready equals true; it looks like we've got a loop running in the shell here to poll the API every couple of seconds. The next step is to log in as a developer and create a new project. Here we go, tutorial ready. Let's try oc whoami: currently logged in as developer, cool. And oc project: looks like I'm already using the serverless-tutorial project. This one isn't linked as a clickable step, so it should be already completed for us, right? A lot of these scenario steps, if you click on the text, will automatically paste over into the right side and get you up and ready to go. So, first command.
So what we're going to do in this section is deploy our first Knative service, not a Kubernetes Service; we're kind of reusing the word "service", but hopefully we've set the context appropriately. Configurations, Revisions, and Routes will be set up, and it should automatically scale to zero when we're no longer contacting the service. Nice. So we can see that Configurations, Revisions, Routes, and Services are all available on the API. The OpenShift dashboard provides a lot of nice visualization for this as well. We can take a quick look at the schema. Actually, this looks like a plain Service, but the API version is serving.knative.dev. I was kind of expecting KnService or similar to be the official resource type name, but this API version distinguishes it. Cool. So, kn is the command-line tool that you can use for interacting with all the Knative resources. You can also use kubectl, but kn gives you a lot more specific functionality for Knative, right? So that should create the service... things happening. I'm going to see if I can run this in the background. Usually in our presentations, we cover the difference between writing a Knative Service and writing a Deployment and the other Kubernetes stuff.
We usually also present it as a way to write a shorter YAML file that defines multiple things, in the serverless way of course, but it's also a way to have shorter infrastructure-as-code for your services. The Service itself, as Ryan mentioned, is a specific API definition for Knative, but under the hood it implements the Deployment, the Service, the horizontal pod autoscaler. So it's also a way to write less YAML if you want to follow this serverless path. And I found a link for this, so maybe it's clearer; let me check so I can share it in the chat. Here we go. In a presentation we made with Ryan at the Cloud Native Italian events, we present in a general way what Knative is, and in the specific slide I linked there's the comparison between a Knative Service and what it does under the hood to create the Deployment, the horizontal pod autoscaler, the Service: from 70 lines you come down to about 20 lines, which is cool if you want to follow this serverless path. Nice. So it looks like we should have our initial serverless solution deployed and available. I ran an example curl command against the resulting auto-generated URL and we got a response back, and I'm guessing that this first number is probably a hash representing that initial service revision. Yeah, I think it's like a unique ID. If you hit it again... do the same curl and I'll get a different one, maybe, unless it detects you as the same individual somehow. I'm not accepting any cookies with this curl. Yeah, there you go. So are the numbers different? No, it looks like the same to me. Well, we'll see; the curl link is in chat, let's hit it and see what happens. But then when it needs to scale down to zero, my demo is not going to work, because everyone's going to be pinging it. Anyway, good.
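For reference, the shorter manifest being described is a Knative Service. A minimal sketch follows; the image reference is a placeholder, not the exact one from the tutorial:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:latest  # placeholder image
```

Applying this single resource is what causes Knative to generate the Configuration, Revision, Route, and underlying Deployment and Kubernetes Service on your behalf, which is where the 70-lines-down-to-20 comparison comes from.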
There's the URL in chat. We'll see whether my auto-shutdown works or not. But it's still the same UID string, so I guess it's just unique to the app or whatever. Yeah, and then I think there's a counter. Yeah, it should be a counter or something; the counter's at four for me. Okay, cool. So I got one the second time I curled it, so I wonder if it already idled and then spun back up or something. If we do oc... I got 12 now, okay. oc get deployment... let's pick the route. Okay, interesting, there's a Deployment resource as well. Nice. There's kn route list; that's always helpful. Yeah. Hey, nice. I'm used to doing oc route list and then having to do a little bit of explaining to upstream users about why I'm using oc instead of kubectl, but with kn I have less explaining to do: kn route list gives you your routes in a very upstream-compliant way. We're still using OpenShift terminology a bit by calling it a route, so hopefully we're clear on what type of route this is and how we ended up with it. Natale, do you mind if I grab those slides and put them on the channel's Speaker Deck site? Sure, sure. Okay, cool. Thanks.
They're publicly available. Yeah, I just want to make sure I can point people to them in the future. Most of the content is partially taken from the great William Markito decks; I adapted them for these events. Always great stuff from William. Mm-hmm. So it looks like our previous image was tagged knative-tutorial-greeter, with quarkus as the image tag, and we are doing a roll-forward, or update, to go to the latest tag. We can do kn revision list: now we have a second generation that has recently been deployed. We should be able to rerun the curl command with the same URL, and let's see, is that a different number on there? 845... this looks similar to me, the same 845 string. Yeah, it's the same. I was kind of expecting that number to change, but maybe it's an ID for the route or something else, rather than the revision or the generation. Maybe, yeah. I'd have to look into the source code for this particular example to be sure. The source code is publicly available if you dig into the examples here on Katacoda. That's the beauty of open source. Yes. Okay, so we ran the curl, and it looks like we've got some traffic there. We can check the number of pods, and it looks like we have two of two running, so we're scaled up to more than just one, which is interesting. The greeter service will automatically scale down to zero if it doesn't get a request for 90 seconds. So if you are running a curl or some type of refresh in your browser, if everyone holds off for 90 seconds, I should be able to rerun this oc get pods and we should see it automatically scale back down. Cool, that's pretty cool. Currently at two of two, and it looks like one's currently terminating, so we're already downshifting. Yeah, nice. So, a question for after the demo, but I'll ask it now since we're waiting: how easy is it to modernize existing apps to serverless? Right?
Like, what are the constraints that people usually stumble over? That's a good question. It's a great question, Chris. I think it's also hard to answer. Yeah, that's why I'm asking it. Maybe, as we said before: don't expect your application to be real-time. If you have any real-time application, that's not going to work, because of the scheduler, the internals of serverless. Serverless in general, as a paradigm, means your function, your application, runs for a limited time, say by default five minutes, or it scales down after 90 seconds if you don't use it, right? So don't expect your application to be critical or real-time; it's one-shot. If you keep this in mind and your use case can work with serverless, then, to be honest, adapting your application to serverless in OpenShift is dramatically easy. It's just flagging in the developer console: hey, my application is serverless, and it becomes serverless. Under the hood, the dev console writes the Service CR, the Serving API resource, the Knative Service. So it's very easy to write or deploy an application serverlessly. What is difficult is understanding whether your application is a good fit for serverless, and we've found that your application is a good fit if it can work asynchronously and independently, and can be linked into multiple applications, multiple functions. But if your application is long-running, let's say a one-hour-running application, then you are violating the paradigm; it's not going to work. The biggest thing is: think of it as, can your application work like if-this-then-that, like the ifttt.com thing? If an event happens, do a thing; the event happens again, do the thing; another event happens, and there's a different scenario now, right?
You have to break your app down to the point where it can just say: okay, I'm only going to run, and it's only going to take me a small amount of time to do this one run, and off it goes. Think of it as like the twelve-factor app, right? Yes, there can be state involved, but it's very much in that execute-and-then-continue kind of scenario. Right, yeah, absolutely. I think part of it depends on how much YAML you are currently dependent on, and how your app is architected. If you've already architected it in a twelve-factor style, you might just be able to run a Tekton build and then deploy the resulting image as a serverless resource, and hopefully it just works. If you are very invested in Helm charts or other advanced YAMLs, you might not be able to stuff all of those YAMLs inside Knative; you might need to re-adapt some of your YAMLs to be Knative. You can use the kn command line to help generate those initial YAMLs, and then you can store them in a Helm chart. But Helm doesn't have quite the same support for traffic shaping, so if you're using Helm's roll-forward and roll-back, it's not a direct match with the functionality that's available in Knative today. There's no explicit compatibility layer between Helm and Knative, but there is a lot of support for getting a standard twelve-factor app to work with very little effort. Right. So this section should be just about done. I tried logging into the dashboard using the developer credentials and had trouble finding the namespace, so I'm going to need to test that again.
I thought it worked correctly last time I tried it, so I'm not sure if this is just a bug in Katacoda or something else, but we'll step through the next couple of steps and see how far we get. I was able to log in as an admin but then didn't see a lot of the resources I was expecting to see. So, this next section should cover traffic distribution and blue-green deployment. Nice. I'm curious how many folks in chat are really involved from a development perspective, and if so, are they managing their own blue-green deployments, or are they just handing off to a build and CI suite, and that's the end of the road for them? In my past experience, I was generally a developer and an SRE at the same time. Developers in many places I've worked have been responsible for the uptime of their service, so we were able to roll forward new features to the public, and if we broke something, we would get paged and be expected to roll it back. That's what I'm used to. I'm curious if any folks in chat just hand off to a pipeline and that's the end of the story; if so, this blue-green stuff may be out of scope for you, but it's really useful from my perspective. So yeah, folks, if you want to tell us how you're deploying apps today, please feel free to drop it in chat, and we can try to see whether that could be a compatible path towards Knative. Most of the traffic splitting that I've done in the past is 100 percent covered by what's available in Knative. So, now that I've seen all the traffic splitting that's available here, I'm kind of curious: do I really need Istio?
It hasn't been proven to me, as a developer, that I need all of that traffic-splitting support. But then again, I wasn't doing mixing, and I wasn't making full use of all the features Istio tries to provide. So here we can run an update: we're going to set the revision name to greeter-v2 and set this environment key here. Cool. The revision list shows we've got a v1 and a v2 available. We can run another update: it looks like we are setting greeter-v1 to current and greeter-v2 to previous, and then setting a latest tag as well, and 100 percent of the traffic in this case is going to go to greeter-v1. Let me see if I can get a URL to paste into chat here... oops, copied a little extra, but that's fine; hopefully you got the main URL there. And it's still that 88451, the same one. Still interesting. Yeah, I'll have to figure out what that number is; it might just be auto-generated. So what's cool is, I just checked the headers on that request, right? It shows up as an Envoy upstream service type, and there is a cookie involved if you want it. It looks like it's coming from just any old web server, but it's got a couple of extra headers for Envoy. Okay, interesting. Envoy proxy, cool. Okay, I got number four in my response here. Nice. Let's see, I got five, six, seven, oh my gosh. And it just says "hi greeter". I thought we were setting an environment variable, so I guess we're on v1 currently. I didn't check the response earlier when we were on v2, but you should be able to roll between v1 and v2, change the traffic allocation, and off you go. So, canary releases: this was one of the key capabilities that I relied on as a combo developer and SRE.
Um, I would always deploy my code we'd use I used to use cookies and To to basically do the the traffic splitting And um, if you were cookieed one way you'd go down one path And if you were cookieed a different way you'd go down a different path and I could Set cookies on all the incoming traffic. Um, I would have a special Cookie id that I could set just for myself using um my javascript console or other things like that and um, then that would allow me to go down and and access a Uh, sir like a solution that was published But not given any percentage of the traffic Um, so that was how I would do kind of canaries in the past and then I could ping my service Verify that it was running correctly Maybe even run some test automation against it in production to ensure that it was Fully functional and then I dial up the traffic Once I was somewhat confident that it was working as I expected in in production So this will show you how to step through all of those hoops here What's interesting is that Together with this in overshift. It looks like we have three way to do AP testing the traffic splitting canary. So we have the routing system general routing system Or then we have a istio service mesh and we have uh, also serverless and the serverless one is more maybe more Smart because your application is not active until you call it. So it's not running you are Saving resources you are saving your application for consuming resources Until you invoke it through a route through a url. So maybe this is the smartest way to to deal with revision canary What do you think? Yeah, I think it's pretty cool that you can basically always have every version live in the system to some extent and then update your traffic Splitting rules to either expose or hide those services or have them shared behind a unique url. 
So, yeah, this gives me pretty much all I was hoping to get from Istio, without having to take on the added complexity of learning all the Istio CRDs. I need to do more research into Istio now to see how it compares and what the potential value is for me as a developer, but this has pretty much all I need as far as traffic routing, for my basic use cases at least. Nice, that's awesome. So we'll run a couple more examples here. It looks like my console is a little slow to respond; I wonder if I can do a reload on this... no, I don't see a reload button. There's a refresh on the browser side, but... I'm still waiting on something. Here, I got my terminal back. Alright, let's see if I have this curl working. Huh. I apologize if you hear my dog. I can barely hear your dog; it's not at all that bad. Thank you. We've got something running here, but I hit some kind of issue with my shell earlier, so I'm not sure how many of these commands actually pasted. Let me see if I can find the example for showing the different... let's do kn service... is there a kn service list? Yeah, so it looks like I have a greeter v2. I don't see what the traffic split is, but we should be able to list that per revision, I think. It's kn revision... yeah, you should be able to see it in the revisions. Let's try that. Cool, okay, great: this is what I was hoping to see. We've got a v1 and a v2; currently we've got 100 percent to current, and there are these other tags, latest and previous, that you can manipulate. So, more examples here. We're almost to the end, and I have one more section to go through: the scaling section. This is going to talk about, or just demonstrate, scaling to zero, why that's important, and understanding the grace period for scaling to zero.
That grace period is customizable. Then there's setting autoscaling strategies: concurrency-based autoscaling, a minimum number of replicas, and setting up a horizontal pod autoscaler so you can do more advanced scaling. Lots of nice options in this section. And the OpenShift dashboard is going to show a lot of really cool visualizations to go along with this as well, assuming it loads everything for you. I'm going to give that dashboard one more shot and see if I see any of this content. I'm still getting the catalog, and it's saying "no workloads found," so I have an odd situation going on. Maybe I'm in the wrong namespace? I think serverless-tutorial was the right one; that's the one it's claiming to be in from the command line, and I do have resources showing up from the command line. I am confident, though, that new releases of OpenShift (this should be available in 4.4, but I'm not totally certain) have a very nice, advanced topology view where you can see a lot of these resource types visualized and get a better impression of how eventing connects them together and causes different interactions between these services.

Yeah, absolutely. For instance, if you have a Kafka cluster, you can connect the Kafka messages to your application just by dropping a line from the Kafka event source to your application. Under the hood there's going to be a Knative Eventing API, like Serving but for events, and it's going to be set up automatically by the OpenShift web console. So the user experience has improved a lot, and it's much, much easier to prepare your serverless workloads in terms of serving and eventing. You're reacting to an event, which can come from a database, a streaming platform, anything in Kubernetes that can scale up your application.

There's also a wide variety of eventing sources and sinks available via Camel K. I have not learned very much about Camel K as of yet, but I know we do have several learning scenarios focused on Camel K and the additional eventing types that are made available through that solution. So if you're interested in learning more advanced eventing, and integration around eventing, definitely take a look at those follow-on scenarios that come up after this one.

Is anyone else currently going through this scenario? Any folks in chat, let me know if you saw something dramatically different from what I saw when you opened up your OpenShift web console.

So far I have created a new service, set up some max scale, and made a couple of other modifications. This last example, I think, generates prime numbers: you can input kind of a seed value, and it'll tell you what the next prime is following that. Interesting. Careful how big of a number you input! Yeah. I didn't know about that hey tool.

Yeah, so I have reported a bug about this. In the console I'm getting an error message back after running this hey command, and you can see it says "xml version... anonymous caller does not have storage.objects...". I believe this is actually a bug related to Katacoda. So if you're running this on your own local cluster — you can copy and paste these commands into a local CodeReady Workspaces instance, or minikube, or other Kubernetes offerings, rather than having them paste across into the embedded shell — hopefully you won't see this error message from the hey command. You will need to install the hey command, I guess, if you don't have it already. But this is actually an error with Katacoda, and I've already reported the bug, so no need to report it; I'll hopefully get it cleaned up. (Already done the effort for you!) Also, on that one I was expecting to see an issue there.
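For anyone who, like me, didn't know the hey tool: it just fires a configurable amount of concurrent HTTP traffic at a URL (for example `hey -z 30s -c 50 <url>`), so the Knative autoscaler has load to react to. As a rough, self-contained sketch of the same idea — not the real tool — here's a Python stand-in that spins up a throwaway local server and measures per-request latency:

```python
"""A minimal stand-in for the `hey` load generator (server and URL are throwaway)."""
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen


class OkHandler(BaseHTTPRequestHandler):
    """Tiny test endpoint that answers every GET with 200 OK."""

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging so the latency summary stays readable.
        pass


def run_load(url, requests=50, concurrency=10):
    """Fire `requests` GETs with `concurrency` workers; return latencies in seconds."""
    def one_request(_):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(one_request, range(requests)))


if __name__ == "__main__":
    # Port 0 asks the OS for any free port, keeping the demo self-contained.
    server = ThreadingHTTPServer(("127.0.0.1", 0), OkHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/"

    latencies = run_load(url)
    print(f"{len(latencies)} requests, "
          f"median {statistics.median(latencies) * 1000:.2f} ms, "
          f"max {max(latencies) * 1000:.2f} ms")
    server.shutdown()
```

Against a real Knative service you'd point this (or hey itself) at the route URL, then watch the pod count climb under load and drop back to zero after the grace period.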
Okay, prime generator is up. Just a reminder, we have about five minutes left. Five minutes left? Okay, so what's the next topic? Believe it or not, we will be talking more about serverless: functions. Excellent. Yeah, we've got an overview and demo with Nanya Singh and Lance Ball from Red Hat, so we'll be talking about serverless functions. Oh, nice. I enjoy chatting with Lance. Yeah, this show ties in very well with the next one; it's kind of all about serverless today. Save all your really hard questions for Lance and crew. Yeah, all the questions that we couldn't quite answer, bring them to that one and you can probably get an answer. I thought this was a good warm-up for the next session. Exactly, exactly.

So that should get you through the end of the serverless session. There are several more examples; you can click on the "more scenarios" button at the end. Let me go back to the main Learn page and see if I can find... here we go, OpenShift Serverless. So there's a whole section here: there's "getting started," and then these other five scenarios, or at least four of them, dive into Camel K use cases.

Camel K is cool. Apache Camel is a popular open source integration project, and Camel K is the serverless version of it. It's pretty new, but it has good momentum, because it helps you define what are called Camel routes. Those are routes, a bit like Apache routing rules, but written in a Camel DSL that abstracts away connecting multiple endpoints: a source can be a database or a queue, and you can connect it to another cluster or another Kafka partition. So it's an abstraction around this eventing part, and it's really cool. I suggest trying out the scenarios; it's something that's having a great moment. Yeah.
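Neither of us has gone deep on Camel K yet, so treat this as a loose sketch rather than the canonical syntax: a Camel route defined in Camel's YAML DSL and deployed with the `kamel` CLI might look roughly like this, where the Kafka topic, broker address, and endpoint URIs are all made up for illustration (the exact DSL keywords vary by Camel K version):

```yaml
# kafka-to-log.yaml — a single Camel route; deployable with `kamel run kafka-to-log.yaml`
- from:
    uri: "kafka:orders?brokers=my-cluster-kafka-bootstrap:9092"  # hypothetical topic and broker
    steps:
      - to: "log:info"                                           # sink: just log each message
```

The appeal is the same abstraction described above: swapping the source from Kafka to a database, a queue, or a timer is a one-line URI change rather than new plumbing code.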
Yeah, for sure, we've talked about Camel K a couple of times on the channel. Oh, yeah.

Well, I posted one last link to our topic survey, if you have topics that you would like to see coming up. One of the suggestions we had in there was how to debug and view logs with serverless apps, so we might hit that in the future, or you can ask the hosts on the next session. Thanks again for your feedback in chat; let us know if there are other topics we should hit on this show in the future. And I think that's it for us.

Yeah, awesome. Great work today, Natali and Ryan. Thank you so much for joining us here. Like I mentioned, stay tuned for the next show, coming up in just a few minutes: we'll be talking about serverless functions with the serverless functions experts. Lance Ball is a serverless genius, to say the least. Thank you all for joining, and I'll see y'all very soon. See you next time. Always a pleasure. Bye-bye.