Today, we're going to do something kind of interesting from my perspective. We have members, actually almost the entire team, of developer advocates for OpenShift with us. We're going to talk about OpenShift 4.4, the latest release, from a developer's perspective. So, Jan Kleiner, who leads this team, is going to introduce everybody. We're going to have demos galore and insights galore, as well as Q&A and AMA at the end of this. So, ask your questions in the chat, we'll relay them back, and we'll open up a conversation at the end. So, Jan, take it away, and thanks for coming, everybody. Good to see you all. Thanks, Diane. So, as Diane mentioned, you've got a few members of the OpenShift Developer Advocates team here. Brian, Joel, and I will be demonstrating some of the different features that were added in OpenShift 4.4 that are primarily focused on developers. I believe Jay Doves is also going to be joining us, and he may be participating in some of the commentary and questions as well. So, I will just go ahead and get started. I'm going to be covering some of the updates to the developer perspective and the developer catalog. I'll come out of full screen mode here and hop right over into the developer perspective in the web console. So, if you're not familiar with the web console in OpenShift 4, there are these two perspectives. By default, you often will land in the administrator perspective, but you can toggle over to the developer perspective here. This has been around since, I believe, 4.2, but there have been a lot of features added to make application deployment even easier in 4.4. These include developer catalog updates to allow developers to filter and group items in the catalog, with labels to visually distinguish item types. I'll show you all that in a moment. We also have operator-backed services in the developer catalog now, and that allows developers to run a variety of workloads that are installed and managed by Kubernetes operators.
We'll also look at Helm 3 a little bit as well. So, let's go ahead and get started. We look at the catalog here. This will open up the developer catalog, and you can see that in addition to the filter options that were here to begin with, we now can filter by type. Right now, I have all of the items available, but let's say that I just wanted to look at builder images and operator-backed services. You can toggle here, check these checkboxes on and off to narrow down the list of items that are available. If you are looking at the operator-backed services, of which we happen to have nine installed on this cluster, you can see that there's also this group by dropdown menu. So, if you choose group by operator, what this will do is just visually clump together the items in the catalog that are related to different operators. And this can just make it a little bit easier to find what you're looking for and find the items that are related to particular operators. Now, I'm going to install this one, the Jaeger operator here, just to show you what that process looks like. Also, you may have to bear with us; there are some issues going on with Quay at the moment, which may cause us to have some problems deploying certain things, depending on whether they need to pull images or not. So, when you are installing these operator-backed services, you have the opportunity to manually edit this YAML here if you want to. I'm just going to click create. And then that's going to take me to the topology view. So, in topology view, you can see that these are visually distinguished as being operator-backed services. The little O here stands for operator-backed, and it has this dotted rectangle outline around it. In this particular case, there's only one item in this block, but some operators may have multiple components. They would all be there in that rectangle so you can see what is all grouped together.
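For reference, the pre-filled YAML for an operator-backed service like Jaeger looks roughly like the following sketch. The exact fields and defaults depend on the operator version installed on the cluster, so treat the name and strategy here as typical values rather than exactly what the demo cluster showed.

```yaml
# A minimal Jaeger custom resource, the kind of YAML the developer catalog
# pre-fills for this operator-backed service. Name and strategy are
# assumed typical defaults.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-all-in-one-inmemory
spec:
  strategy: allInOne
```

Once this resource is created, the operator watches for it and creates the deployment, service, and route, which is why topology view can show the result as "managed by" the operator.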
Now, if I were to click on this, you can see there's also this managed-by link here. This is helpful for those operator-backed services so that you can see what is actually managing this particular item. Going back over to that developer catalog, I'm going to show you a couple of the other things here. So, when we have all the items showing, you have these little labels here now so that you can, at a glance, see if something is a builder image versus a template versus a Helm chart and so on. That, in addition to being able to filter, makes it a little bit easier when you've got a lot of items on this screen to find what you're looking for. Now, Helm charts were added to the developer catalog in this version of OpenShift. Right now, the Helm charts that are visible are coming from a specific repository. In future releases of OpenShift, you'll be able to specify which repo of Helm charts you want to have loaded in the system. But I'll go through the process here of showing you what it looks like to install a Helm chart from the developer catalog. This is a Node.js example one. So, when I click in there, you can give it a release name. You can also see this form comes from the values.yaml file, if you want to make any changes here. I'll click install. And then, similar to what we saw with the operator-backed services, Helm releases are visually distinguished as well. And you can click on here if you want to follow the builds or any of those steps as it's getting deployed. You'll notice over here there's also a link for Helm in the left navigation that is also new. And this will allow you to see all of your Helm releases here. Now, you've been able to use the Helm 3 CLI with OpenShift for a while. And if you need the CLI and you don't have it, you can get to it here under command line tools. You can go here to download the Helm 3 CLI if you need it.
I'm going to, if screen sharing cooperates, switch over to my terminal and install a Helm chart that way so you can see that they show up in this list whether they're installed from the command line or the web console. Let's see if we can get that working here. Let me make it just a little bit bigger. Okay. So let's do helm install example-mysql stable/mysql, and if I type that right, we will get that installing for us. Okay, it's good. So now I can run helm list. And here you can see, hopefully, both the example-mysql one that we just installed from the command line as well as the Node.js one that we did from the web console. So switching back over here, you can see that coming up as well. And it's also here. You can also see the revisions and any of that information; you can click into it to see more. All right. So let's deploy something using one of the builder images from the catalog, if Quay cooperates at the moment. Let's see. We'll give it a shot. So I'm just going to use the sample repo here for simplicity and create. I wasn't paying attention, so I added that into this application grouping here for Jaeger. That's not what I actually want. So I'm going to take that and we can edit the application grouping. So instead of having it grouped with Jaeger, I'm going to create a new application grouping. We'll call it node-example. Oops, I gave it a bad name. That's better. So now it's sitting here in its own application grouping, which is helpful. You can also, I believe, shift-click to drag this around if you want to move it that way too. So what I wanted to show you next was these connectors. If I hover over this item here, you'll see the little dotted line connector show up. You click on that, and I can use that to make visual connectors between different items. And that is exactly what it says: it's just a visual indicator that there's some connection between two items.
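The commands typed in the terminal here were roughly the following. The release and chart names are reconstructed from the demo, and the `stable` chart repository must already be configured on the machine for the install to succeed.

```shell
# Install the MySQL chart from the stable repo as a release named
# "example-mysql" (Helm 3 requires an explicit release name).
helm install example-mysql stable/mysql

# List Helm releases in the current namespace; releases created from the
# web console and from the CLI both appear here.
helm list
```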
In certain cases, you can create a service binding using these connectors. But here, let's say, for example, these two components communicate; I can drag that, and then anyone looking at topology view will be able to see that there's some association between those two. If you do that by mistake, you can delete it pretty easily. And then the last thing that I want to show you here with the developer perspective is adding items to projects or applications in context. So now if you right-click on the topology view, you have this add to project option where you get basically the same view that you had from the plus add menu, right here in context. So we'll do another one really quickly just to show an example. I'll pick this one. This time we'll create our new application right from here. Java example. And click create. And then that's created here. So that can be a time saver if you are trying to add something into a project or application, to do that straight from topology view. So I think that was most of what I wanted to show you. Basically, in summary, these are some new features to the developer perspective and developer catalog to make browsing and finding items easier, and managing and adding to your deployments easier from the topology view. All right. So I'm going to stop sharing my screen and pass it over to, I believe Brian is next, but correct me if I'm wrong. And it's Joel. I don't mind. I can go next. Great. Either way. Doesn't matter. Joel, if you want to go ahead, go for it. Perfect. See if I can find the screen. And I will. This one. Perfect. All right. So I'm going to talk about my favorite feature in 4.4, which is OpenShift Pipelines, but most importantly the Pipeline Builder. OpenShift recently introduced OpenShift Pipelines, which is built on top of Tekton, which is what you see right now on my screen.
So Tekton is a Kubernetes-native CI/CD framework, as you can read; it's a powerful and flexible open source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems. What's really nice about Tekton is that you have all those little basic building blocks, and you can build your big pipelines for CI/CD, and everything runs inside of your Kubernetes cluster. Just to make sure we're all on the same page here, this is what a Tekton pipeline looks like. Basically, you have your pipeline, which will contain a bunch of different tasks. Tasks can be run either one after the other or in parallel depending on what your needs are at the moment, and each task has one or more different steps. Once you've created those pipelines, you can actually attach some resources to them, so you could have input resources and outputs. So as an example, you would have a Git repository as an input, then you would have, through your pipeline, a series of tasks and steps that would build an image from that source code, and eventually you would have an output, which would be an image that you can then deploy to your internal registry on OpenShift. So once you have all of that, you will be able to trigger a pipeline run, which basically is the execution of a pipeline. This is what we're going to take a look at. To build all of your different tasks, there is a catalog available in the tektoncd/catalog repository on GitHub. And you can see that there's a bunch of different tasks that you can start from. So say you want to have a task that will perform something with the OpenShift client; you can just find your YAML file, which is right here, and you can just import it into your OpenShift cluster. You can also use the v1alpha1 or v1beta1 API version; v1beta1 was just released a few days ago, but a bunch of pipelines still use alpha. I think this is going to be changed sometime soon.
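Importing a catalog task like the one shown here is a one-liner against the cluster. The raw URL below is an assumption; check the tektoncd/catalog repository for the current path and API version before using it.

```shell
# Apply the openshift-client task definition straight from the
# tektoncd/catalog repository into the current project. The exact
# path is an assumption and may have moved in the catalog repo.
oc apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/openshift-client/openshift-client-task.yaml

# Confirm the task is now available in the project.
oc get tasks
```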
So let's take a look at our pipelines and how they would look. If I want to take a look at my cluster, I've got a brand new project here, and I can take a look to see if I have any tasks. And currently I don't have any tasks. So instead of importing one directly from the catalog, I'll actually go ahead and create a task manually. And we'll give this task a name. So we'll call it the hello task. And you might guess what it will do. What it will do is an echo. And we'll use hello $(inputs.params.name). And it will echo whatever we pass in as a param. We'll probably want to add a default value as well, so we can just add that. And this is a task for Tekton. So we have it right here. I can now create that. And now that I have a task, I can actually go ahead and create a pipeline with that task. So if I go in pipelines, which is part of your navigation bar now, I can create a new pipeline. We'll give it a name; we'll call it the hello pipeline. And from here, I can select the first task that I want to run. So you'll notice that I have a bunch of prepopulated tasks that were defined by my cluster admin. I could use one of them or the one that I've just created. So I'll use the hello task for now. If I click on it, I can see all the different details. And you can see that the name parameter was already pre-filled with world, because that's my default value. Why don't I just go ahead and create this pipeline? So this is a pipeline: one pipeline can have multiple tasks, well, only one in this case. And each one of those tasks could have multiple steps, but that one only has one. So if I go to my pipeline, I can actually go here and start this pipeline. And you can see the task is now running. If I click on it, I can see the logs. I'll have to be really quick because that shouldn't take too much time. And here we have it, so we can see that it was successfully completed. This was changed to a check mark and we see the output hello world.
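The hello task built here would look roughly like this in YAML. This is a sketch using the v1alpha1 syntax, which is what OpenShift Pipelines generated at the time; the step image is an assumption.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: hello-task
spec:
  inputs:
    params:
      - name: name
        description: The person to greet
        default: world          # used when no param is passed at run time
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal  # assumed image
      command: ["echo"]
      args: ["hello $(inputs.params.name)"]
```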
So if I go back to my pipeline, I can select the pipeline and go ahead and edit it. And you can see that I can change my parameter to say hello Joel instead. When I said all those tasks are reusable: you can very easily change them by using different parameters. So if I go ahead and trigger this pipeline now, well, as you might guess, the output will have changed a little bit. So now we have it, we have the hello Joel. So the same task was performed, but we've used different parameters to actually change the output here. So if we go back to our pipeline once again, there are a few things that we can add. So why don't I go ahead and change it just a little bit? And I forgot to mention, but you can easily add more tasks if you need them, by adding them and chaining them into the pipeline. So you can really decide: start by doing your hello task, then run some Maven, you could produce some other output, and then use Buildah, for example. Now, typically you would use parameters as part of the pipeline and not as part of your tasks, so I could go here and add a pipeline parameter. I'll call it name again, and it'll be the person to greet, and we'll still keep world as a default value here. I can save this parameter. And now I'll go into my YAML and I'll change my task to use not the value that was hard-coded in here, but instead $(params.name). What this will do is that my task will now use this pipeline parameter as its value across my full pipeline. So now if I start my pipeline again, you can see that I am greeted by this nice little modal. It says person to greet. I can leave it at world for now; we'll just run this task. And if we look at the logs, we will see the hello world once again. But now, because this is set by a pipeline parameter, I can use it for each one of my tasks inside that pipeline, of course.
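After moving the parameter up to the pipeline level, the pipeline would look roughly like this sketch (same v1alpha1 syntax, names taken from the demo):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  params:
    - name: name
      description: The person to greet
      default: world
  tasks:
    - name: say-hello
      taskRef:
        name: hello-task
      params:
        - name: name
          value: $(params.name)  # forwards the pipeline param to the task
```

Because the parameter now lives on the pipeline, every task in the pipeline can reference it, and the start dialog prompts for a single value.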
But if I start my pipeline again, I can either start the last run, which will reuse the same parameters again, or I can start it again and just specify a new parameter here. This can be very useful if you have, say, a GitHub repository with multiple branches. So each time you wanna start that pipeline, you might wanna use a different branch or some sort of option that will change each time that you wanna run that pipeline. You can easily change those when you have them set as parameters. Resources can be used kind of in the same way. Why don't I go ahead and create a little bit more complex pipeline. We'll look at something a little bit more complex. What I wanna do is something similar to this pipeline here. So I wanna start with a Git repository. I'll use a task to create an image out of this. And then I will output an image that I can then deploy to my internal registry. Before I start that pipeline, I'll actually need to create an image stream inside my cluster. So I'll just paste it in here. I can go back to my pipelines and I'll create a new one. So we'll call it our deploy pipeline. And now we'll use our S2I Node.js task; this is a Node.js application that I'm going to use. You'll notice that in my pipeline builder, I have this little exclamation mark telling me that some things are not ready, some required fields haven't been filled. And that's because I don't have any resources available right now, so I'll need to start by adding resources. I'll need to tell my pipeline what I'm expecting. So I'll be expecting a Git repository; we'll name that resource git repository. And we'll be expecting an image as well, so we'll just call it image name, and it will be a resource of type image. So now I can go back here. I can specify the Git repository that I'm going to use as well as the image name. So you could have multiple tasks that need multiple Git repositories. In this case, it happens that I only have one task and it's only using that repo.
I'll just need to change this to false as well, and we're going to leave it at Node 8. It doesn't really matter for this demo. And once again, we have our pipeline. So everything is ready. So now that I have this, in theory I could just take a GitHub repo, add that as a resource, and it will be able to create our images and deploy them inside our cluster. Now, as Jan mentioned, we've been having a few issues with one of our servers right now. So let's see if this will actually work. So I can go ahead and start my deploy pipeline. And you'll notice that now I'm being asked to fill in those fields, because I never specified the resources. So I can actually create my resources on the fly. So I'll tell it to use github.com. I will use the Software Collections Node.js application, also known as sclorg nodejs-ex. Perfect. So this just created a resource that I can then reuse in multiple tasks if I need to. And my image registry, which I can never remember. And I can't remember which project I'm using. Pipeline demo. Let's start again. All right, so this one was pre-filled because now it's created, and I'm using pipeline demo, and we'll use demo app. So let's also create this image resource. Perfect. And we can now start our deploy pipeline. Let's take a look. And you can see that this task has multiple steps. You can see the generate, the build, the push, and they will all be executed one after the other. Let's just cross our fingers and hope that this might work. It seems like it's taking a little bit too long. So what would happen is that it would actually create an image now, and I would be able to create my application, specify that I will take an image from the image stream which I've created earlier, so the demo app. And once it's actually been deployed, I would have a latest tag here that I would be able to use to create my application. I would then give it a name.
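The two resources created on the fly here correspond roughly to these PipelineResource objects. Names and the internal registry path follow the demo; PipelineResources were still an alpha API at this point, so treat this as a sketch.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: git-repository
spec:
  type: git
  params:
    - name: url
      value: https://github.com/sclorg/nodejs-ex
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: image-name
spec:
  type: image
  params:
    - name: url
      # internal registry path: <registry>/<project>/<image-stream>
      value: image-registry.openshift-image-registry.svc:5000/pipeline-demo/demo-app
```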
I would use a deployment config so that each time this pipeline is run, each time a new image is pushed, it would actually redeploy my application and create those routes. This one is still running. Oh, something's happening. Look at that. Let's actually wait for it to happen. So as you can see, we have all the different steps that are happening right now. So we were creating the image: we pulled the Git source files, it then generated the Dockerfile, and it's now building the actual image. This should just take a few seconds, thanks to this amazing cluster that we have, which goes really fast actually. There it is, step seven. I think there are nine steps total, so it shouldn't be too long. And there's the thank-you-for-installing message. That's good, cleaning up. Perfect, perfect, perfect. Everything is there; it's now being pushed. So in the next few seconds, we should see it being completed. And there it is, perfect. So it actually successfully completed. And now I can go ahead and create my application. We'll use an image stream. So as you can see, I now have the latest tag that I can use. Let's keep all the defaults and we'll create a route for this. And we have our application, and in just a few seconds, we'll see it being deployed. So I never actually created that image; everything was taken care of by my pipeline, and I can now access our demo application here. If I were to start this pipeline again, no, the deploy pipeline, we could start the last run. So it'll use the same defaults once again. So it'll use the same GitHub repository as well as the same image, or the same image stream, yeah, the same image name. So it'll be published in the same image stream. Then you would actually see that application being redeployed as soon as this task is completed. Now this was also a relatively simple task. So with the pipeline builder, there are many things that you can do. Why don't I go ahead and actually use this deploy pipeline, and we'll just tweak it a little bit.
So in most cases, you probably won't want to systematically deploy your application. As a developer, you probably have some sort of process; you probably have some unit tests that you want to run. You probably have some security auditing that you want to do. So I'll just go ahead and create a few tasks that I already have in here. I'll just need to copy and paste them. Okay, I'm back. So I have two tasks. This is a Node.js application, so I'll use npm to run an npm audit, just to make sure that we don't have any security vulnerabilities in there. Let's just create this task. And my second one will run npm run test, which will run all of my unit tests for my project. And I can go back to my pipeline. Let's just see where it is, to see if we can actually see the new application being deployed. Oh, we're almost there. But remember, I just triggered that pipeline as soon as the first one was completed, as soon as this one was deployed. But you should see it in just the next few seconds. It'll have redeployed a new version. There it is. It's now been pushed to the internal registry, so we can automatically redeploy our application. So that was actually very fast. And it's the same version as before, same source code, because I didn't change anything in the meantime, but you saw that it redeployed the application automatically. So as I was saying, you probably have some sort of different processes in place. So we can go back and we add our npm test to make sure that all of our tests pass. And we can also run them in parallel, because the npm audit doesn't rely on the npm test. So we can run the testing and the auditing at the same time, just to save some time on the build. We will use the repo that was provided as our source code. And let's just save this. So now we're getting into a little bit more complex pipeline, so you can see that we have multiple actions going on. Why don't I go ahead and start this pipeline again.
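The npm-audit task pasted in here would be shaped roughly like the following; the Node image and the workspace path are assumptions, and the npm-test task would be identical except its second step runs `npm run test`.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: npm-audit
spec:
  inputs:
    resources:
      - name: source
        type: git          # Tekton clones the repo into /workspace/source
  steps:
    - name: install
      image: node:12       # assumed image tag
      workingDir: /workspace/source
      command: ["npm"]
      args: ["install"]
    - name: audit          # fails the task (and cancels downstream tasks)
      image: node:12       # if npm finds vulnerabilities at the set level
      workingDir: /workspace/source
      command: ["npm"]
      args: ["audit"]
```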
We'll use the same repo and image. And you can now see that we have two steps. So it'll run the npm install and then run the npm audit. And the other task will do the same thing. When you have multiple tasks like this, they're actually running simultaneously in two different pods. So we can see the results coming in. And you can see that I've just got an error here in my npm audit, and the npm test succeeded. So if I go to the npm audit, I can see what's happening at the bottom. And you can see that I have one critical package which caused an error in our pipeline. So if I look at my pipelines again, you'll notice that the status here, instead of being all green, we've got one failed task, and the other one was canceled because we had a task that failed before. It actually canceled the next tasks in line; that's why this one was never triggered. So now at least we know we have some vulnerabilities in our code, but this was actually never released. So we can actually fix those before we deploy a new application. And that's pretty much it. There's one last thing I wanted to show. Let me just see if I can switch to my VS Code instead. And here it is. The last thing I really liked, and this was just released a few days ago actually: if you go to your VS Code, if you're using VS Code obviously, and you go to your extensions, you can search for the Tekton extension. And there's a Tekton Pipelines extension by Red Hat that you can install. And once you have it installed, let me just bring this up a little bit, I can actually see all the different pipelines that I have installed on my cluster. I can see all the different tasks. So we can see that we have this hello task that I've shown you. We can see the details of the last run, which was 13 minutes ago. The npm test was run around two minutes ago. I can actually see all the YAML related to that. And the same goes for all of my tasks.
I can actually see all of the details of my tasks, like the hello world task that I had earlier; this is what it is. We also have all of our pipelines and so on. So everything on the cluster that has to do with Tekton is available right there inside your editor. But what's really neat about this is that if I actually open up a pipeline, I now have access to the pipeline preview. So I can see that this specific pipeline has two different tasks, and I can jump from one to the other in my code. I can jump to my build task here, and I can see how they are dependent, and that they will both be executed before the next one. So you actually have all of that preview, very similar to the pipeline builder that you have in OpenShift, but in your VS Code as you are doing that development work. So that's all I had. I will be monitoring the Twitch chat if there are any questions. So please go ahead, fire them away. And I will now hand it over to Brian to talk about serverless. Okay, there's also a whole lot of questions in the BlueJeans chat too. So check that out. And Brian's been doing a good job in the interim trying to answer them, but I think you can weigh in. So thanks. Thanks Brian, I'll take over. Yeah, there's quite a bit of interest in Tekton. So that's pretty awesome. You got quite a few questions, but hopefully most of them are answered. All right, so let me share my screen real quick. If somebody could just say that it looks good on their end. Looks great. Okay, cool. So yeah, I'm gonna talk about one of my favorite things that just came out with OpenShift 4.4, which is pretty much that serverless is now GA within OpenShift. So now it's generally available; previously, serverless had been in tech preview and developer preview releases. And with 4.4, it's now GA. We can consider it stable, at least for the serving aspect of serverless. And I'll get into some of the details here, but we do have a blog article that is pretty good that goes through a lot of the details.
Highly suggest you check out that information. So one of the things with serverless is it allows you to deploy applications and have them do things that are generally pretty nice to have, things that would be good recommended practices when deploying an application, maybe at scale. Maybe we want a particular application to be able to scale down to zero pods running, so we're not wasting resources, instead of having maybe a deployment config with one pod that's always up and always running. There are reasons to do both, but serverless allows us to do something where we could scale that down to zero, and it's pretty cool, it's pretty neat. And on the screen right now, you can see that I'm logged in already; I'm looking at the topology view on the developer console. And in here, I already have a serverless application, a serverless deployment, already done. And I'll show you some examples of deploying a new one in a second. But first I want to see what are some of the new things that we have within the console to make it easier to work with serverless applications. The developer console has been getting better and better, and it's still improving, and it is pretty awesome so far right now. And we can see that we have in this view the ability to actually look at our ksvc, our Knative service. OpenShift Serverless uses Knative, and the Knative service is the main aspect of a service that's running within serverless. We can click on that Knative service within the developer console and we can see some of the details. The stuff that's really important to me: I can see that this hello service has nothing running right now, and it tells me that, hey, all these revisions, they're scaled down to zero. Maybe if I make this a little bit bigger, it might be a little easier to see. But I can see that, hey, everything's scaled down to zero, so there are no pods running.
If I go and try to access this, I can click on the open URL, and because there's nothing running right now, we can see that, hey, OpenShift is saying, go start this pod, and now we have a pod running. So serverless allowed me to not have anything running, and it waited for a request to come in. Once a request to that app came in, it then hurried up and scaled up that pod, got it running, and me as the client, I just saw the application. I didn't see anything crazy; it just took a second to load up. But I can see hello world version one, et cetera, right? So that's cool. I can see some of the information here that's within the developer console. I can see that, hey, 80% of my traffic is going to this hello v1 revision and 20% of the traffic is going to the hello v2 revision. And I set this up beforehand, and I'll show you how to do it later on. But this allows us to do something like a canary release. You can see right now the pod's terminating and scaling back down to zero. There hasn't been any traffic coming in for about 60 seconds, give or take; that number is tunable. But that basically is the timing that says, hey, scale down because nothing's coming in. So, going back to the traffic distribution, serverless allows us to do things like canary releases. That typically might be a little harder to do when we're working with Kubernetes or OpenShift: when we roll out a new version of our application, how do we vet that that application is actually good and stable and working the way that it should? Well, serverless allows us to go and set some of these percentages. I could roll out a new revision right here, and a revision is a point-in-time snapshot of the configuration of the service. So an example would be: the new revision would reference the new tag of our container, because these are just containers. And with this, we could say that 80% goes to hello v1 and 20% goes to hello v2. And I could change that if I want.
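Under the hood, that canary split is just a `traffic` block on the Knative Service. Here is a sketch of the 80/20 split with the named tags from the demo; the revision names and the image reference are assumptions.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:v2   # hypothetical image reference
  traffic:
    - revisionName: hello-v1
      percent: 80
      tag: current     # also reachable directly via the current-... sub-route
    - revisionName: hello-v2
      percent: 20
      tag: previous    # also reachable via the previous-... sub-route
```

Changing the percentages (say, to a 50-50 split) only edits this block; Knative handles the routing, so no separate route objects need to be managed.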
So I could go in here and I could say that, hey, this is a little bit better now. I vetted it, I looked at the logs, things seem to be okay. Let's make it a 50-50 split. We can see that I've got a couple of different other tags here, and they're referencing different things. And the tags are pretty important, because these allow us to access these revisions outside of the normal everybody-goes-to-this-route. I could go to current-dash-whatever-the-rest-of-the-route-is, or previous-dash-whatever-the-rest-of-the-route-is, to hit a revision that's specific to that one. And I'll show you an example once I set this up. Let me go ahead and set these. It looks like there might be a bug in this right now; we'll figure that out. It wouldn't let me save that with nothing there, and zero also wouldn't work. So maybe we'll figure out that traffic distribution, or maybe I got an update that I need to get to my OCP console. But either way, I switched out the percentages here, and I could click on open URL for hello v1, and this goes to current-dash-hello. Whenever I go to revision two, I could go to previous-dash-hello. So that's pretty neat. I could define a route, or a sub-route, or a traffic tag; those are interchangeable terms. But I could specify those and go directly to one of the revisions, or I could go to the main hello serverless tutorial and I could get all that stuff. So it's cool that serverless gives me the ability to do this complex traffic distribution and networking on the cluster for new deployments and new revisions without me even having to really think about it. All I do is specify I want 20% to go here, or 40% to go here, or what have you, right? That's pretty neat. The other thing that this topology view gives us for a Knative service would be the route, right?
We could go and drill into and understand what's going on with the route, check out some of the configuration if we wanted to look at the YAML, and we could see how that stuff's built out. But one of the things with serverless is it doesn't really require me to think about YAML. It makes it easier to deploy applications. I don't have to worry about the YAML aspect of Kubernetes, right? I don't have to think about it. All I need to know is some of this stuff around serverless. So anyway, let me show you how easy it is to add an application and make it run as a serverless app. If I go into the developer console and hit add, I could again choose, just like Jan showed, from git, container image, et cetera, right? I could just point this straight to my source code and it would build out an application for me, but I already have an image made, so I'm gonna go ahead and hit deploy image, and let me grab the image name. So here we go, I just pasted in a simple hello app. This is version two of that app. I'm just gonna leave the default name here — I'm not gonna change that, but I could. Jan showed you what the application context was in the developer console, and then hello app is what it's gonna be called. And then under resources here, I'm going to choose Knative service. Now, you might say, hey, this still says tech preview. Yeah, that badge should be gone — this is GA now, so there's an issue with that, but that tag should not be there. This is definitely GA now. But either way, it's there, and we could do Knative service. And I could go and define some of the details around here, right? I could specify some of the scaling information — say that I didn't want this to scale down to zero. Like, that property for this particular serverless application doesn't make sense for me, but other aspects of serverless do, such as the autoscaling that's already set up. 
I could go ahead and say always run one pod of this, and I could change some of the concurrency details. So serverless gives us the ability to scale up whenever — I think it's a hundred requests by default — are concurrently coming into our application. So if we have more than a hundred, then it could go ahead and scale up the application automatically. I don't have to think about it, and I could change those limits here if I wanted to. But I'm just gonna go ahead and specify one pod always running and click on create. What this is doing is kicking off a Knative service, right? So this is basically pulling in that image and it's gonna build out. We could see that it's running — I've got one available — and I could drill into this and see I've got one pod running, and there we go, just like I specified, one always running. And I could go click on the route and I could see this is version two of that app. So that's pretty neat. I could deploy an application just like I would deploy any other app on OpenShift, specify use OpenShift Serverless instead of the standard DeploymentConfig, and there we go, we're good. And I could see the details within the developer console to get the information that I really need. If I want to, I could drill into the pod just from here, I could look at the logs really quickly — I have access to debug things if I need to from the developer console really quickly and easily. One of the other big things with serverless, like I said, is the ability to use this stuff without touching YAML. And I showed you how to deploy an app without touching YAML in the console, but what if I like the CLI better? There are command line tools that allow us to work with OpenShift Serverless on the command line. So in my OpenShift console, I could click on the question mark at the top and I could see command line tools. 
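The scaling knobs set in the console above (don't scale to zero, concurrency target) map to annotations on the revision template of the Knative Service. A rough sketch, where the annotation values and the image reference are illustrative rather than taken from the demo:

```yaml
# Sketch of the autoscaling settings on a Knative Service's
# revision template; values are illustrative.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # Keep at least one pod running, i.e. never scale to zero.
        autoscaling.knative.dev/minScale: "1"
        # Target concurrent requests per pod before scaling out
        # (Knative's default target is 100).
        autoscaling.knative.dev/target: "100"
    spec:
      containers:
        - image: quay.io/example/hello:v2   # hypothetical image
```

The point of the demo stands either way: you can drive all of this from the console form without ever editing this YAML by hand.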
I click on that, and this is the repository running on my OpenShift cluster that has the Red Hat-signed command line tools. I could go and download the Helm one, or the oc command, or odo (OpenShift Do) — pretty cool. But kn, right? I've got kn available here. This is the OpenShift Serverless command line interface. It works on Linux, Mac, and Windows and allows me to work with OpenShift Serverless on the command line. So let's see that in action. Let me switch over to that app. I think that's a little too small — well, I'm all set up now, let me make it a little smaller. So I could do oc whoami, just to double check I'm logged in. I'm logged in, I'm good. And oc project — I'm on the serverless tutorial project, so I'm good. I do all that stuff to make sure that, you know, everything's set up. And it's pretty nice to get this stuff working, by the way — maybe there's a blog article we could do about how to get this in your prompt. That might be useful later on. So kn is the command line tool, and I already have it installed, and I could do command line completion — I have that set up as well. I just hit tab to get some of this detail. I could see what the kn command line tool allows me to do. I could work with the same things that you just saw me do within the console, right? I could work with plug-ins and stuff like that. I could work with revisions and routes and services, and those are things that are really important as far as OpenShift Serverless and the serving aspect goes. One of the things that I'm not talking about a whole lot within OpenShift Serverless is the eventing aspect. And that is, I think, in preview — I'm not quite sure — with OpenShift. It's coming pretty soon. And it allows me to act on events that happen, right? So I could do things whenever a database gets updated, or whenever a file gets added to an S3 bucket, or something cool like that, right? 
And the eventing aspect, using Camel K and all that — that's stuff I'll talk about later on, not on this call, but once it starts getting a little more stable — and that's what really makes the serverless aspect shine, right? But what we're talking about here is the serving aspect and the service stuff, so kn service. And from here, I could see that I could create something, I could list something, I could delete it, et cetera. So let's just list and see what we have. Got a little smaller so it all fits, there we go. So you can see that I've got that hello app that I just created, and it's running version two — or sorry, the hello app is running a revision name that's automatically generated — and then I have hello version two, which is the existing one that I had before. And I could do stuff that's pretty neat here. I could do kn service describe, say, hello, and in here I could get some of the details for that. I could see that, hey, I've got those percentages set, right? 47% goes here, 1% goes here, et cetera. I could see that information. I could see the tags that I talked about — hello, latest, preview, et cetera — and some of that detail. I could do kn revision list, and I could see all the revisions that these services are pointing to. So I could see I've got revision version two and then version one — I specified those names specifically, and I'll show you how to do that — and then I've got the apps one, which is the one that I created within the console. So we could see all that stuff there. And I could describe some of those and get more detail, but we've seen most of it in the service, so I don't think that we need to. I could do kn route list and see all the routes that are available to me, and maybe describe the hello route because it's a little more complex, right? I've got the same stuff — I could see traffic targets and I could see the URL for each individual traffic tag. 
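The inspection commands walked through above look roughly like this — a sketch only, since running them requires a cluster with OpenShift Serverless installed, and the service name `hello` is the one from this demo:

```shell
# Illustrative kn commands for inspecting Knative serving resources.
kn service list              # all Knative services in the current project
kn service describe hello    # traffic percentages and tags for one service
kn revision list             # every revision the services point to
kn route describe hello      # per-tag URLs, e.g. current-hello..., previous-hello...
```

Tab completion (if you've set it up, as mentioned above) makes discovering these subcommands fast.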
So I could go directly to the previous one, for example. If I wanted to, I could curl that — it'll take a second again to spin up that container, but there we go, it's not too long — and I can see hello world, and there we go, that stuff's pretty neat. So if I wanted to, say, create something, let's go and look. So I've got kn service create, and let's just do hello1, right? I could specify a container image in here, and then specify the namespace that I want it to go into, and the revision name, and all of these things, right? Let's go ahead and create it — that's going — but if I wanna see the list of all the stuff that I can do, I could do kn service create -h. And in here I could see, hey, this is the help, and you could see there's a ton of different flags and examples, like creating a service with multiple environment variables. This help page for the kn tool is really nice, and it gives you a lot of the stuff that you would want to create a Knative service on the command line really easily. And it's pretty neat, right? This one command, kn service create, deployed an application using an image and let me specify some of the details — like, I know in Kubernetes I need a namespace, and I wanna give it a specific revision name, so I know I want that, right? But if I wanted to specify the minimum scale, I could specify that in here too. And I could specify environment variables using the -e flag, and the help tells me what I need to do to do that. So instead of me having to know all of the YAML that's associated with that — which is, well, a lot — I could do oc get, let's see, get service... oh, whoops, not that. So oc get services.serving.knative.dev hello1 -o yaml, to see the YAML output of what that created for me, right? 
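Putting those flags together, a `kn service create` invocation like the one described might look like this — the image, namespace, and values are illustrative assumptions, and it needs a live cluster to actually run:

```shell
# Hypothetical kn service create invocation showing the flags
# discussed: image, namespace, revision name, minimum scale,
# and an environment variable.
kn service create hello1 \
  --image quay.io/example/hello:v2 \
  --namespace serverless-tutorial \
  --revision-name hello1-v1 \
  --min-scale 1 \
  --env TARGET=v1
```

One command, no YAML — kn generates the full Knative Service manifest behind the scenes.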
We could see that, hey, this is all the stuff it created — some of the details — but I don't really care about most of it. If I were creating it from YAML, I would work with the stuff under the spec, and I would see that, hey, this is the image I specified, I need to specify a name in here, give it a spec. But I don't need to know any of this to deploy my application — it doesn't matter. I'm just using the kn tool, specifying things on the command line, and tab completion helps me set up these flags. It's pretty nice. And the kn tool is really robust — it does quite a bit, and it's improving with each version of OpenShift Serverless. And like I said, as eventing gets more and more out there, I'd imagine there are gonna be a lot of really cool things we could do just with the kn tool and eventing — well, serverless in general. So anyway, with that — yeah, any questions or anything over any of the stuff we talked about? I guess back to you, Diane, at least. Jordy's asking, do we need to install the serverless operator to use the serverless features in OCP 4.4? Definitely, yes. So with Tekton, or OpenShift Pipelines, as well as OpenShift Serverless, both of those require a cluster admin to go onto that cluster and install the operator. You could look into the OperatorHub that's on the sidebar of OpenShift in the admin view, and go and install the serverless one or the pipelines one into the openshift-operators namespace. There are instructions in our docs on how to set that stuff up. With Pipelines, you're done at that point — you could start using it. With OpenShift Serverless, you then have to create a knative-serving project and deploy an instance of Knative Serving. You just look into the installed operators, look at OpenShift Serverless, and deploy that. And you can specify and customize the installation, but by default, the defaults work and they're pretty nice — in what I showed, I didn't customize anything, and everything worked. 
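Those install steps can also be done from the CLI. A sketch of the second step, assuming a cluster admin has already installed the OpenShift Serverless operator from OperatorHub (the API version shown matches the operator of this era, but check your operator's docs):

```shell
# Create the knative-serving project and a default KnativeServing
# instance — requires cluster-admin on a cluster with the
# OpenShift Serverless operator installed.
oc new-project knative-serving
cat <<EOF | oc apply -f -
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
EOF
```

With no spec fields set, you get the defaults — which, as demonstrated, work out of the box.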
So once those two steps are done, any user could deploy a pipeline or an OpenShift Serverless deployment. And I'm gonna ask one question, which is sort of a setup, because you kind of asked it and I think you wanted Joel to answer it. Are there plans for following the Kiali approach and showing usage-based topology, not just a static topology? Because we've been talking about that a little bit, and the Kiali view is pretty nice — can you elicit an answer for that, Joel? I think you might have answered it a little bit in the chat there as well, but it's about the difference between the Kiali and the service mesh approach to things. So there was that — and that was within the topology view; sorry, not Joel, but within the topology view, you could connect applications and whatnot together, and it would be nice to maybe see the traffic flowing through there if service mesh is installed or whatnot. That's what I was thinking. I don't know, maybe there are other questions. I remember hearing some talk about some of that earlier this week, and I'm not quite sure — we don't do that now. One of the other things that I was thinking about, Joel, is when somebody asked about using Tekton, or OpenShift Pipelines, with other CI/CD systems like Azure DevOps and stuff like that — is that something possible too? Those two questions, I'm not quite sure, so anyway. Yeah, I saw that one. So using Pipelines with those, I don't have the answer for that, actually. So I will need to do a little bit more research, and yeah, if whoever asked the question wants to get in touch with me, maybe on Twitch, we can establish a way to follow up afterwards — I'll definitely follow up on that. Right, there was one other question which you may or may not have answered, but I'm just gonna repeat it. How do you set up a webhook trigger for Bitbucket? Anyone wanna take the bait on that one? 
Yeah, yeah, so with that, I posted a link — hopefully you got it, I'm not sure if it was on Twitch or whatnot, but if not, feel free to reach out to me or Joel or Jan, we all could help you out. But basically, look at the documentation for setting up a webhook on OpenShift, and there are instructions in there on how to do it with Bitbucket specifically. So in there, you definitely could set up a Bitbucket webhook. So there was a link to something on Stack Overflow, or was that the link in the docs? The docs one — I originally found a Stack Overflow link and then I looked at the docs. I should have looked at the docs first. Yeah, there's that whole thing about documenting by blogging or FAQs and stuff like that as opposed to the documentation, and for once we actually did it in the docs. Right. That's a really cool thing. We have a tendency at Red Hat to document by blogging, and it's the bane of my existence trying to keep those blogs fresh and up to date and current as each release goes out. So the more we can transition stuff into the docs, the better off everybody is. I'm looking to see if there are other questions coming in. I think you guys have done a really great job exploring some of these new features, and I'm really looking forward to getting you guys back again on a regular cadence, because I think this is a great way to educate the OpenShift community and get you guys some recognition for all the hard work you do demoing everything and making things understandable and comprehensible, especially with all the new features coming out in each of the new releases. It's a lot to chew on. So if you look at our roadmaps and things like that, these are the folks who are on the front line all the time, trying to get people up to speed and educate them. So we're really grateful for all the work that you guys do. And there's — oops, let's see. Is that another question? Oh, there was one other question. 
While I'm here thanking you — and I think it's an interesting one — it was about the pipelines: are they meant to be used with other CI/CD systems like Azure DevOps, GitHub Actions, et cetera, or are these Tekton pipelines meant to be used just by themselves? I don't know — I didn't answer that question. That's the one for which I'll have to follow up afterwards. So I'm not sure who asked the question. If it's internal, you can find me internally. Or if it's — It was out in the real world. It was out in the real world, where people are trying to do hybrid cloud all the time and figure out how to make all of these systems mesh, which adds another layer of complexity on everything. Lots of new features, lots of new platforms, and all the platforms have their own approaches and tools. So again, we'll have you guys back many times, I'm sure. It's wonderful to have you here today. Thanks, Diane. For follow-up, I think you can all see the slide here. This has all our Twitter handles on it, so if you need to get in touch with somebody, this is one way to do it. Cool. All right. And with that, we'll thank our producer, Chris Short, again for backing us up and streaming us live everywhere we can possibly find a stream to be on. And we'll be back again tomorrow. Tomorrow we have Andrew Clay Shafer from the Global Transformation Office, who's gonna talk about cloud native operating models. He's one of our gurus on DevOps. So if you're around, join us again tomorrow at 9 a.m. Pacific, 12 noon Eastern — and I think it's 1600 UTC somewhere in the world, but you can check the calendar and we'll be there too. So I'm looking forward to hearing other things that you guys wanna talk about, Brian and Jan. And then maybe we can drag Jason Dobies out with Joshua Wood to talk more about Kubernetes operators, as they want to do as much as possible, and showcase the work that they did in that wonderful book. 
So yeah, I'm gonna get a couple thumbs up on getting them to talk more about operators. You can never talk enough about operators. Oh, there's the book, there's the plug — you can download it now and get it, there you go. So all right, with that, Chris, I think we're gonna hang up our hat — the red hat — for the afternoon and let it rip, and we'll see you all again soon. I will also post this up on YouTube with some of the ins and outs edited out, so if you wanna watch it again at your leisure, we'll have it all there, probably later this afternoon.