Good morning, good afternoon, good evening, and welcome to another edition of Developer Experience Office Hours. I'm Chris Short, Executive Producer of OpenShift TV, and today I'm joined by three of my fellow Red Hatters: Serena Nichols, Brian, and Jay — I won't try to say your last name, but please start the introductions. Jay, go ahead. Hello, everyone. It's really nice to be here. My full name is Javerdan, but you can call me Jay. I work at Red Hat as a UI engineer on OpenShift, so it's a pleasure to be here. Thanks. We're happy to have you. Brian? Yeah, I'm a developer advocate on Red Hat's OpenShift team. I'll be keeping an eye on questions in chat, and if you have future topics for this show that you'd like us to address, definitely let us know in chat or otherwise and we'd be happy to try to cover them — or any Q&A you have regarding this specific episode. I'll be helping wrangle those in chat, so definitely let us know if you have questions about anything we're covering today. Awesome, thank you, Brian. And Serena? Hey, everybody. Serena Nichols, I'm one of the developer tooling PMs focusing on OpenShift developer experience in the console, so I'm glad you're all here. Jay is one of our epic owners, very frequently working on serverless features inside the console, so we're really happy to have him here — and my dogs all say hello as well. So, Jay — I was gonna say, Serena has some dog duty to take care of today, so if Serena disappears for any reason, that's because she's chasing the new dogs. All right, so Jay, you wanna kick everything off here, buddy? Yes. I hope you can all see my screen. Yes, I'll get there — actually, Serena would like to start.
Yeah, sure. So today what we're gonna do is demo some of the 4.7 features that we've added to the console, and then, if we have time, we'll even start demoing some of the things we're looking at — features we've already completed development on that will be coming post-4.7 — so some really exciting stuff here. I think Jay's got a couple of slides just to remind us what serverless is, and then on the next slide, introducing Knative. Oh, go ahead. I forgot to say — remember, folks, some of this stuff is future stuff and some of this stuff is now stuff, so keep that in mind. Right, yeah, thanks for saying that. No problem. Yeah, so we're gonna be talking about Knative, and specifically in the console: when the OpenShift Serverless Operator is installed, we have the ability to get serving features as well as eventing features. The first feature added in 4.7, which is now available, is that we'll be able to see some of these things in the admin perspective. So I'll hand over to Jay to demo what those features are. Yeah, so I'd like to start with the first one. This is the OpenShift cluster, and as you can see, I have the operators installed: Red Hat OpenShift Serverless 1.13, which is needed, and also Red Hat Integration - Camel K, which I'll talk about a bit later. So the first feature is that we're now providing options for admins as well, in the admin perspective. As you can see, we have a Serverless nav section which has both pieces — serving and eventing. If you click on that, it lists all the services, revisions, and routes. Likewise, for eventing it lists all the event sources, and it has features like filtering and searching for brokers, triggers, channels, and subscriptions, and we even have the ability to create from here — you can go ahead and create any of these, say an event source.
So that was the first feature. Now I think we should talk about the second one. Yeah, Jay, I was also going to say — you might want to remind people that if you go back into the developer perspective, they can search for those resources and then add them to the navigation, because it's always a nice thing to do. So yeah, you need to go to the dev perspective. That's a great feature, because many times when we're working we'd like to have something handy — for instance, I want my brokers handy. I can just select Broker and say "Add to navigation," and it will be preserved here. At any point in time we can come back and see the list of brokers available. We can pin a number of items to the navigation this way. Thanks, Jay, for showing that too. Even though that's an older feature, it's something that's pretty exciting for people when they realize it's there, so it's a good thing to show. So next, are you going to look at the event source catalog? Yes. If you go to the developer perspective, under Add you can see the cards here — you can go either through the catalog or to event sources directly. If you click on that, it lists all the sources. The objective behind this is to improve the user experience and provide scalability, because the number of event sources is growing, and we also support Camel connectors — we call them Kamelets here. So you can see Salesforce, Telegram, Slack — all of these are powered by Camel. And Jay, can you just go over — I don't remember specifically what happens as far as operators are concerned for this. Is there a specific operator we need to have installed in order to expose those? Sorry, yeah — so initially I spoke about the operator I have installed: Red Hat Integration - Camel K 1.2.1, provided by Red Hat. As a user, we need to have this operator installed, and we also need to create an IntegrationPlatform.
Whichever namespace you create that in, you'll see all the Kamelets — the Camel connectors — listed there, and you can also access them from the eventing side, just by clicking on event sources, with one click. And just to talk through this: if that Camel K operator was not installed, you'd still have event sources, but we'd be limited to — what is it, four or five of them? Yes: ApiServerSource, ContainerSource, PingSource, SinkBinding, and KafkaSource — if you've created the CR for that, which is provided by the Serverless Operator itself. Just to highlight, you can see KnativeKafka here; if you click on this and enable the source, you'll see it. I see, okay, excellent — thanks for walking through that. And just to note on the event source catalog: I think we mentioned some of this in previous office hours, but in 4.7 we've changed the experience of our catalogs a little bit, trying to make things more consistent across the sub-catalogs. So when you go to the developer catalog you'll see event sources inside the larger developer catalog, or you can go directly into the event source catalog and see just the event sources. So the next piece — you're gonna talk about channels and subscriptions? Yes. I think in previous sessions we've seen the different ways events can be consumed — one way is a direct connection, source to service. In this session we'll be talking about channels and subscriptions. That's a way we can have multiple subscribers for different events coming through different sources. As you can see in this particular diagram — or this visualization — let me take you to what we call the topology view. This is a sample project I've created.
If you guys can see here, I have a channel — an InMemoryChannel — and I have two sources sinking to it: one is a PingSource and the other is an ApiServerSource. Then we have these two services, which are receiving all the events coming through all of these sources. With a channel we don't have a mechanism for filtering, so all the events emitted by these sources will be received by the subscribed services here. This is a Knative service I created which basically prints the logs of whatever is emitted — so if you look here, you can see "hello channels," which is being emitted by the PingSource. And I'd like to add one thing: you can do all of these creations through the GUI — the web console — as well. For instance, let me show you: I'll go ahead and click on Channel. You can see there are options — if you had enabled the Kafka channel, you'd even see that here. I'll try to create a channel now. So we have our channel; next we need an event source. I'll go to the Add flow, which we've seen, click on Event Sources, and pick PingSource. And when you click on Create — this is the new thing we added in 4.7 — we have the ability for the user to switch between the form and YAML views. Earlier that was not the case, so the user is no longer limited, and all the data they've typed is preserved across the switch. I'll set the schedule to every second, sink it to the InMemoryChannel, leave off the app name, and click Create. What this does is sink the PingSource to the InMemoryChannel, which we can visualize in topology as well. Let me quickly go ahead and create one more event source — this time I'll go with ApiServerSource. Yes — I've already created a cluster role binding to watch events, and an associated service account.
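For readers who want the YAML behind the click-through, the PingSource created in the form view corresponds roughly to a manifest like the one below. This is a reconstruction, not taken from the demo screen: the resource name, message payload, and API version are assumptions (the `sources.knative.dev` group version varies across Serverless releases).

```yaml
# Hypothetical reconstruction of the demo's PingSource.
# It emits an event on a schedule and sinks it to the InMemoryChannel.
apiVersion: sources.knative.dev/v1beta1
kind: PingSource
metadata:
  name: ping-source            # assumed name
spec:
  schedule: "* * * * *"        # cron syntax; finest granularity is per minute
  jsonData: '{"message": "hello channels"}'
  sink:
    ref:
      apiVersion: messaging.knative.dev/v1
      kind: InMemoryChannel
      name: channel            # the channel created just above
```

The form and YAML views in the console edit this same object, which is why switching between them preserves the data.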
So I'll select the service account here — we're calling it events-sa — and there are different modes; since I'm going to watch events, I'll choose Resource mode. The apiVersion for events will be v1 and the kind will be Event, and if we click Create, you can see both sources are sunk to the InMemoryChannel. Next we need a service to subscribe with. I'll go ahead and use Container Image with an image I use frequently — it's basically a logger service that prints the event information coming through. I'll call it channel-display0, since that's what we're going to use for the showcase, and we have to select Knative Service as the resource type, because we're creating a serverless application, and I'll click Create. So we have one service ready; while it comes up, let me create the other one to showcase this. I'll go to Container Image again, use the same image, call it channel-display1, and again select Knative Service. And all of these operations we're performing through the web console you could also do through the CLI, if you don't like the console much — if you do prefer the CLI, feel free. As you can see here, our services are up, so let me try to rearrange them. I'm also gonna chime in here a little bit, Jay, because the other thing that's kind of cool in 4.7 is that the topology layout now persists too. As you rearrange things, the next time you come back into this project you don't lose that, which is awesome. I just wanted to mention that as well. Yeah, that's a very good point, and I think it's useful for the user that the layout persists. So now I'm going to create a subscription. If you guys can see, I just dragged and dropped from the channel onto a service, and it asks you for a name — you can modify it if you want; I won't for this session, yeah.
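Under the hood, that drag-and-drop gesture creates a Knative `Subscription` resource tying the channel to the service. A rough sketch of what the console generates (the subscription name is the console's default and is an assumption here):

```yaml
# Hypothetical Subscription created by dragging from the channel to a service.
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: channel-display0-sub   # assumed auto-generated name
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service            # the Knative (serving) Service, not a core Service
      name: channel-display0
```

A second, identical subscription pointing at `channel-display1` gives the fan-out shown in the demo: every event on the channel is delivered to both services, with no filtering.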
And I'll do the same to connect to the other service. So now my InMemoryChannel is subscribed to by two services, and soon you'll see the container logs showing the events being received by these services. Meanwhile, let me show you that. Yes — as you can see, all of these CloudEvents are coming through, and you can watch for them. The service could be anything; the use case is that you want to perform some action based on a CloudEvent. That's what I wanted to showcase here. And as I said earlier, with a channel we can't filter — all the events emitted by this ApiServerSource, where we're watching for events, and by the PingSource, which emits on a time interval, are received by both services. So that was channels and subscriptions. Anything to add, Serena, or shall we go back to the slides? No, I think that's great. You're gonna go over brokers and triggers next too, right? Yes. In this slide I have a visualization to demonstrate that, if it makes more sense here. So next is brokers and triggers. The drawback with channels — or rather, one of the benefits we have with brokers and triggers — is the ability to filter. We can filter CloudEvents based on whatever attributes we want, which is not possible with a channel. This is how the visualization looks, so let me take you back to the project I already created. If you look here, this is a broker, and we have three event sources sending to it, and then again two services. And if you notice this connector, there's a small perpendicular line on it — that denotes the filter. If you click on it, you can see the filter I've applied here: type equals dev.knative.sources.ping.
So I wanted this service to only get events from the PingSource — that's what it does. The other one has no filter, so it will still get all the events from all the different sources. Let me quickly show how we can create this through the GUI again. I'll go to the admin perspective this time, for a change, go to Eventing, select the project I was focusing on earlier, and click Create Broker. This is what a simple broker looks like — I'll call it default and hit Create. So we have our broker created. If you go back to the topology view, you should be able to see the broker here. Now, next, we'll try to create a PingSource again — or better, let me create one. I'll repeat the process from last time: I'll click on PingSource and set the data to say "ping broker." We're sinking it to the broker, but you still have options to choose any of these targets. I'll continue with the broker, call it ping-source1, set the schedule to every second, and click Create. Now my PingSource is ready. And just to show you, we can even point it to some other addressable object here with an easy drag and drop, if you want — you can point it to the IMC or to the broker itself. I'll go ahead and create another event source, an ApiServerSource. Again, I'll use the same setup to watch for events: Resource mode, the events service account I already created, and then click Create. As you can see here, I accidentally pointed my ping-source1 to the InMemoryChannel, so let me make a change and make this PingSource point to the broker instead. We did that, and it has updated. Yes. So next, again, a service.
So quickly I'll repeat the process from last time, just so you can visualize the difference. I'm going to create a Knative service again — I'll call it broker-display0 — and quickly create one more, broker-display1, also as a Knative service. Now we have all of these things here: two services and our broker. Let me adjust the screen a bit and keep the services on one side. Yes. Sorry — we have so many things on top of each other; I don't want these options. So next, what I wanted to show you is how to create a trigger. We have our event sources sending to a broker, and with an easy gesture again, if you drag from the broker and drop onto a service, you get the option to add a trigger. As you can see, this is the trigger name, and we can provide an attribute here. I'll set the attribute name to type and the value to dev.knative.sources.ping — I want only PingSource events to be received by this service, so that's effectively a filter. I'll say Add. Now I'll drag and drop another trigger to the other service — I don't want any filter over here — and hit Add again. So we have everything set up. We have to wait a little while, and slowly we'll see the pods coming up and the events being received, and we can see in the logs the filtering happening for the one service but not the other. So — hey Jay, I have a question for you. Yeah? If the user didn't want to drag and drop, is there the ability to right-click, or go to the options menu for the broker? Yes — if you don't want to drag and drop, you can right-click and say Add Trigger, and you'll get the same option. Like, if I want to subscribe a different service now, I can do that right away. Okay. And the same thing can be achieved from the admin perspective as well.
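The broker-and-trigger setup demonstrated above can be sketched in YAML roughly as follows. This is a reconstruction under stated assumptions — the service name matches the demo, the broker name `default` is from the demo, and the API versions are the standard Knative Eventing ones for that era:

```yaml
# Broker created from the admin perspective's Eventing page.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
---
# Trigger created by the drag-and-drop gesture: only events whose
# CloudEvent "type" attribute matches the filter reach the subscriber.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: ping-only              # assumed name
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.sources.ping   # the filter shown on the connector
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: broker-display0
```

The second trigger in the demo is the same object minus the `filter` block, which is why that service receives events from every source attached to the broker.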
Just to highlight that: if you go to Eventing and then, say, to Brokers, we have the option to add a trigger here as well. Okay. And that would also be the same menu — if you're in topology with your broker selected, there's an Actions menu in that right-hand side panel that would also allow you to do it from there, right? Yes. So if you click on any of these — I think you can see the pods are up now. If I click on the logs for this one, I should only see the logs from the PingSource, that is, the ping "hello" messages. And if you go back to the other service, you should see the logs from both event sources. It's gearing up, I believe. Yeah. And if you click on the broker, you can see the details of the subscribers — like, okay, these are the services — and you can click through to the filter and see it here. Likewise for channels: you can see all of these details in the side panel. And this Actions menu lets you perform the same things you can from the right-click context menu. So that was what I wanted to show about brokers, yeah. That's awesome. Let me ask a couple of other questions, if you don't mind. Could you go back to the Add page? Yes. So on the Add page we have a tile to create channels, but we currently don't have the ability to create brokers from the Add page, correct? Yes, yes. So broker creation in 4.7 is just through YAML currently — that feature is provided on the admin side. But yeah, this is something we'll be working on; I think we'd be happy to have it in the Add flow as well. Yeah, that's what I was gonna say. You already added your Brokers navigation item, so if you had that there, you could go there even on the dev side, right, and then use that Create Broker button up at the top right? Yes, yes — if you wanted to create it from there. Okay, okay, great.
I just wanted to let people know that you can still create it from the UI — it's just YAML-based right now. One of the things we've done a lot of work on in the last couple of releases, and continue to do, is providing form-driven experiences in addition to YAML-driven experiences, and we're just not there in all the spots yet. Okay. And would you highlight why you'd want to use brokers over the triggers — I'm sorry, brokers over channels? Yeah, the main benefit I see of using a broker over channels is the ability to filter. Assume you have N event sources — from Slack, Telegram, PingSource, the API server, or Kafka — and three or four different subscriber services listening to them, but we have a use case where we only want a particular service to receive events from, say, Kafka, and not from the other sources. A broker helps us achieve that filtering logic, by providing different attributes — one such thing I did here: in this particular trigger I added the type attribute for the PingSource. What this means is that this particular service only receives events emitted by the PingSource, not from the other sources sinking to this particular broker. And if you say Add Trigger, you have the option to add multiple attributes here. Awesome, okay, thank you. Chris or Brian, is there anything in the chat you guys wanna bring up about what Jay's demoed for 4.7 so far? Yeah, there's — go ahead, Chris. Oh, I'll take it, all right. There are a couple of questions that popped up in chat. One that I'm gonna jump in front of the line and ask, from the Add view — I'm gonna throw my own question in. So when you did Add, there was the catalog, event sources, and then channels.
Is there a way to filter these to get just the Knative things? I've seen in the past, like, a checkbox on the side for things that are only operators — do we have anything like that to help filter what we're looking at? So on the Add page itself right now, it's a tile view. Just so you know, in the future we're also going to redesign this so things are more aligned with everything that's Knative-based or pipeline-based, et cetera. But I think Jay was just going into the catalog itself. When you go into the larger developer catalog, we do still have these sub-types on the left-hand side. The difference in 4.7 — well, I'm sorry, let me step back: pre-4.7 we used to have checkboxes there that allowed you to see multiple types at once. Now, when you go to the developer catalog, you see all of them, and if you want to drill in, you only see a single type. Got it — if that makes sense. And when you're in the larger developer catalog, you do still see those labels on each of the tiles indicating whether it's a builder image, a Helm chart, an event source, et cetera — and event sources are there. Okay, great. And there was also — let's see, if I scroll back in chat a little — OpenPixel had a question about the difference between Knative Kafka and AMQ Streams. So the difference is: Knative Kafka provides us with the Kafka channel and the Kafka source, while AMQ Streams helps us set up the Kafka cluster itself — the bootstrap server, Kafka topics, et cetera — so that a Kafka source can consume from it. Hope that helps, yeah. Yeah, OpenPixel, definitely let us know if that doesn't cover that particular question. We had another one from OpenPixel: triggers are a basic Kubernetes resource, correct? Or is that a Knative resource? The triggers we saw are a Knative resource — Knative Triggers.
And I think the next question is: is there a way to trigger events from an external source running outside of OpenShift — say, a database service in the cloud? So to answer that: if it's a public URL that can be accessed — I'm not sure, but I think there's a source for that particular case, just like we have sources for Twitter and other things, so that can be consumed. I'm not completely sure how it can be done if it's something external, but if we have an external service that is an addressable object, then it can be used as a sink. What I mean is: if you go to the developer perspective again and open any of the event sources, we have the ability to sink to a URI. If you notice here, it takes any valid URI, which is useful if we have an addressable endpoint, even one outside the cluster. Awesome. So OpenPixel is asking a clarifying question: is AMQ Streams equal to Kafka, and then Knative Kafka equivalent to the integration into OpenShift? That doesn't sound quite right to me; I'm not sure how to answer that one. Yeah — I'm not sure about that one either; we might want more clarification. AMQ Streams — yeah, I think we'd need folks from that team in here. I think Kamesh answered in chat: basically, AMQ Streams is Red Hat's own version of Strimzi Kafka, as simple as that. Strimzi Kafka is a community operator. So if I take you to the admin perspective again, in OperatorHub you can see Strimzi, which some of you will be aware of — that's the community version. And likewise we have AMQ Streams, which is the productized version from Red Hat. And if you want to try out the Kafka source — let me show you right away. I'll go to the knative-eventing namespace and try to create a KnativeKafka resource.
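The KnativeKafka custom resource Jay is about to create is supplied by the OpenShift Serverless Operator. A rough sketch of what the form produces — the API version and bootstrap server address here are assumptions for illustration, not read off the demo screen:

```yaml
# Hypothetical KnativeKafka CR enabling the Kafka channel and source.
# The bootstrapServers value must point at a real Kafka cluster,
# e.g. one provided by AMQ Streams / Strimzi.
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true
    bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092  # assumed address
  source:
    enabled: true
```

Once this reconciles, KafkaSource shows up under event sources and KafkaChannel under channels in the console, as the demo goes on to show.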
So if you click Create, you should see the options — channel, if you want to enable it, and source. I'll enable the source, and the channel as well — I want the Kafka channel to be available now as a channel option. And if I hit Create, it brings in both CRs, basically. It will take a bit to instantiate, and after that, under event sources we should be able to see Kafka Source, and under channels we should be able to see Kafka Channel. Let me go to — sorry, the developer perspective. But note that with this, we still need AMQ Streams or Strimzi in order to have a bootstrap server and Kafka topics. So if I go to the developer perspective now, under the Add flow — let me select the namespace I was in — and select Event Sources, we can see the Kafka Source here, which wasn't there earlier; that's the CR we created. And if you click on it, this is what I mean: bootstrap servers. Once we have Strimzi, we need a Kafka cluster set up, which provides these URLs and the topics we want to subscribe to or listen to. Thanks for going through that. And I think Ben just answered one of the questions in chat as well, right — around AMQ Streams. Yeah, thank you: AMQ Streams helps you deploy and manage Kafka clusters inside your OpenShift cluster. Okay — nested clustering, good stuff. You know, as Jay has shown, as soon as some of these operators are installed, we expose additional elements inside the UI so that you can access their features, which is cool. Okay, so Jay — I wonder, is it time for us to flip over to the future? Okay. So the first thing we're gonna talk about, or show, I guess, is that if you're creating a serverless function, it's visualized inside the topology view, even though we don't currently have a way to create one inside the console, right?
So there's the ability to create them through the CLI — the Knative CLI — and I don't know if you were gonna go over that or just show the visualization, Jay. Yeah, so let me show you the visualization first, and maybe you can talk about the steps if someone wants to try it out. So — well, sorry, not this one, the upcoming demo. If you look here, we have two Knative services. One is powered by serverless functions, which is why it has this fx icon and a different background color — that's how you can identify anything created as a serverless function in topology. And if you want to create one, we can go to the CLI. Let me create a new project — I'll call it jaytest12. It's very simple: you can just say kn func create nodetest — a simple one I'll be using. That basically creates a project, nodetest. If you go into it and run code . — let me open VS Code — just to showcase what it does: it creates an index.js, and you can see we have a GET and a POST handler that return the query params and the payload. That's the simple thing it's trying to do. So let me go back to the CLI again and run kn func deploy. Basically, what that does is build the function and push it to a registry — I'm using Docker Hub here, sorry — and deploy it into the particular namespace. It will take a while to do that, so meanwhile let me go back to the slides, and I'll revisit that namespace and show you once it's up. The other thing we could do, Jay, if you don't mind — go back to the topology view and zoom in a little so people can see the difference between the blocks on the left a bit more closely. The background color for a Knative service that's a serverless function is a light purple, and to the left of the Knative service badge we're showing that fx icon. Like we mentioned, this is not in 4.7.
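The CLI steps from the demo, reconstructed as a sketch. The `kn func` subcommand names below are from the Knative functions CLI; exact flags, the registry configuration, and the project name are assumptions that may differ by CLI version:

```shell
# Scaffold a Node.js function project; this generates an index.js
# with GET/POST handlers that echo the query params and payload.
kn func create nodetest

# Inspect the generated handler in an editor.
cd nodetest && code .

# Build the container image, push it to the configured registry
# (Docker Hub in the demo), and deploy it as a Knative service.
kn func deploy
```

After deployment, the function appears in the topology view as a Knative service carrying the fx badge and the light purple background described above.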
This is not released yet — it's in the future. So that icon will likely change before we get this in the product, but it will be something similar, to denote that it's a serverless function. And if you look at the item on the right, that's just our regular Knative service: visualized similarly, but with a gray background and the Knative logo. And I think we've also talked about — we haven't figured out exactly what we're doing yet — when you select one of these items, in the side panel, Jay, probably in the details tab, also denoting that it's a serverless function, right? But we haven't gotten to that yet. Yeah. And just to add: being a Knative service, it has all the abilities of a serverless service — traffic splitting and everything can be achieved here. So if I go back — I think the function is deployed, so let me quickly validate. If I go to jaytest, you can see it's up, and if I click on this particular URL it says the query is empty. I'll pass a name param — say, name is Jay — and it echoes it back. So it just does what we saw in the code; it could be anything. I'll go back to the slide again. Excellent, thank you. Okay. So the next piece: we've added a couple of extra advanced scaling options for Knative services. Are we able to show that in an edit as well as in a create, Jay? Yes. So maybe we just demo the edit piece. Yeah, makes sense. Let me go back to the serverless function I created. If you right-click and say Edit nodetest, it takes you to the edit flow, and you have this Scaling option here. If you click on that, we can see a lot of options — the autoscale window and things like that, all of these pieces. Since we're editing, it's not populating the default values, because nothing was provided initially by the user.
But if you're going to create a new one — just to showcase, in Container Image — we even populate the current default values. So if I select Knative Service now and click on Scaling, we can see that 60 seconds is the autoscale window; you can change the units to minutes, up to one hour. These options let you configure the concurrency utilization, the concurrency limit and target, and the min and max pods, which you already had. So all of these capabilities are there. Excellent. I was just going to say — that's great. Now we have the ability to do that on creation, or even afterwards in an edit if you want to make changes, and that gives us some consistency with the level of support in the CLI, which we didn't have in 4.6. We didn't mention which version of OpenShift Serverless we're using here — we're currently on 1.13 of OpenShift Serverless. Awesome. Okay, great. And then I think there's one other feature you guys have been working on, on the dev side as well, right? Around "Knative-fying" something, as some of us like to call it. So again, just to make sure people know: this is not 4.7, this is going to be something in the future, but maybe you can talk about that one as well. Yes. The purpose, or idea, behind this is: assume I have a workload, I'm new to the Knative world, and I want to try it out with my workload — be it a Deployment or DeploymentConfig. So I'll quickly go ahead and create a Deployment using the basic, simple hello-openshift image, and if you click Create, it basically creates the Deployment for us. Once the Deployment is running, we'll see an option to migrate it to a Knative service. So let me take you back to the slide just to demonstrate: in YAML terms, it's something like this — when we create a Deployment, we also have to create a Service, and a Route if we want to expose it, and a lot of those things.
With a Knative service, even the YAML is just small; I mean, we see a lot of, how do I call it, reduction in the spec. And now, with the web console, with an easy click you should be able to achieve that. So I think it's taking some time to come up. Yeah, the container is creating. So if you click on this, you can see Create Knative Service over here. This option is shown in the context menu and actions for all such items. If you click on it, it takes us to a new view which shows that this is the image we are trying to use, which was part of the deployment. And we have the ability to again tweak any of the advanced options being shown over here, be it health checks, scaling, resource limits, et cetera. Or you can just choose to ignore them. So with one simple click we should be able to have our Knative service out of the deployment as well. The user can try it out. So this was the deployment, and we are not rebuilding the deployment at this stage. So it will take, I guess, a few seconds, not a minute, sorry. And yes, it is up. Even if it takes a few minutes, that's pretty amazing. Yeah. And if you see what hello-openshift is, it's nothing but "Hello OpenShift". So let me quickly, sorry, let me quickly go and show you our Knative service. Yes, it's up and running. And it has all the abilities of a Knative service: traffic splitting, scaling, and all the other features which it offers. So yeah, this is awesome. So just to mention too, like we said, this is future; right now we're calling it Create Knative Service. We're going to be doing some validation, I think, on whether that's really the name. So things might change by the time this comes out, but this is definitely a feature that people have asked for, like, how are we able to take an existing service and make it serverless? So I think this is really interesting work that the dev team is doing and has started off on. So it's pretty cool. Thanks for sharing all this, Jay. Looks really good.
And again in chat, we would love to hear any of your comments on what you've seen in 4.7, as well as some of the work that we're doing post-4.7; please let us know what you think. And Jay, I think you had another slide then. I think we are done with the slides. Oh, okay, okay. Awesome, awesome. So are there any other questions in chat? Oh, I'm sure there are. Let me track real quick. All righty. Can you see the nodes view in 4.7? I have an issue with the formatting of the table in my cluster on 4.7.0. Sometimes it loads correctly, sometimes the rows get misplaced. Ooh, just in the nodes view though. That's weird. Interesting. I think, Jay, you're running like a homegrown, not homegrown, but you're running a post-4.7 version. So you could still try, but I'm not sure if it's going to look exactly like what people would see on 4.7. Be Kind asks: what is the roadmap for the OpenShift serverless functions component? Serving and eventing are mature, but what about designing and deploying functions like in OpenWhisk or other functions-as-a-service type things? Yeah, I think we'd need to pull in Carina or Nina, the PMs that are aligned with OpenShift Serverless itself. I don't have that information, and I'm not sure if any other Red Hatters that are in chat might have that. Yeah. But I don't have it right off the top of my head. Yeah, I don't either. Oh, Ryan, wow, you did it. Okay. Yeah, yeah, I'm not totally sure on that one. I know we used to have a way of running OpenWhisk directly. I'm not sure if there was an operator or a template or something, but if you search for OpenShift and OpenWhisk, you might still find examples. I think that was back in the OpenShift 3 era though. And a lot of our functionality for serverless is now basically covered by Knative. That's my understanding; I'm not 100% certain on that.
And I'd be happy to defer to other folks, but yeah, I think Knative and Camel K are kind of the future for the cloud native community that's building directly on Kubernetes. I don't think OpenWhisk had anything specific to Kubernetes; it wasn't built out of CRDs, and I don't think it ran on the control plane in the same way that Knative does. So Knative gives you a little bit tighter integration and more community support around Kubernetes. Answering a long question in chat, so let's just talk about it: are the features demonstrated part of open source Knative, or are the console and developer experiences OpenShift value-add? I'm thinking they're asking where does open source start and OpenShift begin, you know? Yeah, and they also might be asking specifically about that Create Knative Service, because that's not something that's part of it. And Jay, maybe you could also explain the implementation of that Create Knative Service. That's not an API that was provided by the Knative community, correct? Yes, so Create Knative Service is something which we have handled completely in the GUI in this first version, just to get the feel of it, like how it works. So, to explain, basically what we are trying to do is get the image, whatever is being used, from the internal registry, and try to map the specs to the Knative service. So, for instance, if you have an HPA, you will get a min scale and max scale, and other things like environment variables, volumes, or whatever is there in the container, basically the pod spec. So everything can be reused in serverless, and we are providing the option as well to validate what is going in, because in the form they can see: okay, what is the image? What are the advanced options being used? So they can tweak it or just leave it. But there is a plan to have it completely driven by APIs.
So I think that will be more consistent, and it will let us support things like failover or rollback, call it what you will, in the future. Cool, chat is pretty active, but Paul is in chat answering lots of questions. So yeah, Spring Boot support has either just merged or is about to merge, by the way, Be Kind. Yeah, good call, Serena, on that one. Be Kind, you were asking, Be Kind has a question that says: how are these runtimes being packaged? And I'm not entirely clear on what they're speaking to as far as runtimes. I know for my code, I've still been using Source-to-Image to compile some of the application code and then just deploy it as a Knative service. So it can be pretty simple from the UI, but I'm not entirely sure if that's what you're asking about in terms of runtimes. Yeah, like which one; so many layers. Yeah, yeah. From my perspective, a runtime is which version of Node.js you are using: are you on Node v12 or Node v14? That'd be my application runtime, but there's a lot of ways to define runtime. And serverless functions are using Cloud Native Buildpacks, if that answers the runtimes question. And in serverless functions, we have options; you can pass a parameter like -l, as you can see. I mean, it can take these runtimes, Go, Node, Quarkus, or Spring Boot, if that helps. Awesome. Oh, good. Yeah, that's good feedback, with folks like B Browning chiming in in chat, saying that serverless functions are using Cloud Native Buildpacks. And that's a good point. So we had a Source-to-Image build strategy; a lot of that build process got moved into what was Knative Build and is now Tekton. Some of that functionality used to be covered by Knative and has been migrated into the Tekton feature set. So that's usually how the build would be run for packaging up your runtime services and making them available on the cluster. Yeah, it sounds like we ought to have someone chat about buildpacks on here sometime, for sure.
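For context on that -l flag: the function CLI records the chosen runtime in the project's func.yaml, which the buildpack-based build then uses to pick a builder. A minimal sketch (the field layout follows the upstream func project as I understand it; treat the specifics as an assumption):

```yaml
# func.yaml, as generated by something like: kn func create -l node
name: my-function        # hypothetical function name
runtime: node            # could also be go, quarkus, springboot, etc.
image: ""                # filled in on first build/deploy
```

Swapping the runtime at create time is what makes the same workflow serve Node, Go, Quarkus, or Spring Boot functions.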
And buildpacks.io is the website, if you want to go there; they have a demo right there, an off-you-go kind of deal. And also to mention, it is a CNCF project; I forget if it says it's incubating. So yeah. JP Dave, hit me up email-style. Great chat today. Yeah, lots of folks are still scrolling back through the chat. So yeah. Cool. I was wondering, Ryan, do we want to put in that survey that we had the last few weeks? I think we've run it for a few weeks, right? Sure, yeah, happy to post that. If you have a link, jump on it, and if not, I'll go look for it. Yeah, okay, great. I got it, yeah. Maybe you can just quickly explain: we're asking people who haven't already filled it out to fill it out, to indicate what they'd like this developer experience office hours to cover, I guess, right? Yeah, we had a lot of great suggestions today. I know there was one on, let's see, what was it called? Fungi? Fun thing? Funqy, maybe? Yeah. Yeah. And was that linked to? Did we link to that? I'm going to scroll back in the chat and see if I can find it. Plus one to a Quarkus Funqy and OpenShift Serverless demo. Okay, and then, yeah, as always, go to learn.openshift.com. And actually, I think you can install the Serverless Operator in our sandbox now too. Your mileage may vary; it will probably be pretty slow, and I'm not sure if it'll actually fully work, but you can definitely get started with CRC on your local machine at the very least. Cool. Yeah, the Developer Sandbox today, pretty sure there's no serverless on that yet. There is no, okay. I'm not sure about that project; I could be wrong, but I don't think so, yeah. Yeah, I just looked and I didn't see it in there. Okay, cool. But yeah, definitely the learning channel, and you can definitely run this stuff on CRC. CRC for sure. Yeah. Yup. Yeah. And CRC is CodeReady Containers; sorry, I'm using vernacular that people might not know. Shocking.
I guess we could also point to the Red Hat developer page, but oh, I guess maybe you've already done that. Okay, perfect. Yeah. Excellent. And like I said, there are two Knative books we offer for free. I will re-drop those in chat: the Knative Cookbook and Knative Patterns. I need to make that Knative Patterns description much shorter, but yes, they are both available for free on the developers.redhat.com site. Just head over to the ebook section if you're navigating along, and there's a link directly to them if you need it. Great, awesome. This has been great, a really deep dive. I am now intrigued by Knative more than I was before, so good job on that one, Jay. Do you still need 16 gigs of free memory to run CRC? No, they are actually working on that. We're trying. It's at 12 right now; I think the next step down is nine, right, Ryan? Yeah. Not sure, I don't know. I got a 32-gig machine just so I can run CRC, which I know is not realistic for the general population, but the sandbox definitely does something to help make those environments available. There are definitely limitations on that: you're not admin and you can't load all your operators, but that's part of the trade-off there. CRC does seem to be pretty active. I think they're just about to do a 1.23 release, which may have the newest version of OpenShift in there. Cool, I'm keeping my eye out for that. I saw a beta release, but I don't think it's official yet. We've got it on our internal CI servers, but not out for the public yet. Yeah. I have, what is it called, like cloud envy or whatever, where I have a server in my basement and I run stuff on that, so I have that luxury. So yeah, you can definitely get your hands dirty on this with CRC, and definitely look at the Knative Cookbook that Jay is showing on the screen. And if you have any questions, as always, feel free to reach out.
I am cshort@redhat.com, @ChrisShort on Twitter. Feel free to hit me up anytime. I can connect you with any of the folks on this call or otherwise. Like I said, JP Dade has IDM needs, so he has my email and knows where to find me, and I'll get him hooked up in the right place. And if you have questions, I can do the same for you. So thank you, everybody, for showing up today and walking through this. Great job, Jay, as always: a wonderful demonstration and a very good deep dive. And thank you all for joining out there and asking your questions. We couldn't do this without you, so thank you very much. And just a quick heads-up on what's next on the channel: today at 1:00 Eastern, 19:00 Central European time, Red Hat Advanced Cluster Management presents Cluster Pool Scaling and Automated Dev Clusters, which, if you have to manage dev environments, is going to be awesome for you. So definitely check that out; again, that'll be in about an hour. So stick around, or click that subscribe and follow button wherever you may be. With that said, we can safely say goodbye. If you have further questions, again, reach out, don't be afraid. And thank you, everybody. Stay safe out there, stay healthy, and we'll catch you next time. See you later. Thanks, all. Bye-bye. Thank you.