Hi, everybody. As you probably know, OpenShift is Red Hat's distribution of Kubernetes. Today we're going to take a look at the latest release, OpenShift version 4.4, for developers. I'm Josh Wood, co-author with Jason Dobies of O'Reilly's recent Kubernetes Operators book, which will tell you all about operators; I'll be posting a link a little later in the chat. Now, while a shameless plug for my book is compulsory, that's not what today's broadcast is actually about. Today I'm joined by three experts who are going to take us through OpenShift 4.4 features focused on developers, and how those features respond to and participate in broader trends in the industry. First, Jan Kleiner will tell us about the new developer perspective in the OpenShift web console. Joel Lord will tell us about CI pipelines within OpenShift and what comes with OpenShift out of the box for building those pipelines. And Brian Tannis will outline features for function-driven serverless development on the platform. So, with a reminder that you can visit try.openshift.com for options on how to try anything you see today for yourself, I'll hand it directly over to Jan with news about OpenShift's developer perspective.

All right, great. So in the 4.4 release, some additional features were added to the developer perspective, and in particular the developer catalog, to make application deployment easier. These include updates to the developer catalog that let you filter and group items more easily, labels to visually distinguish different types of items, and operator-backed services. Let me go ahead and share my screen and I'll go through each of these in more detail. Hopefully that's working.

So in the developer perspective here, I'm in topology view right now. I'll start by showing you the developer catalog changes. Any time you want to add something to a project, you can click over here on this +Add menu. And this here is the developer catalog we're talking about. When I click into it, there are these filters. Right now I just have operator-backed services enabled, but I can choose which of these different types of items I want to see, and this area of the screen will be filtered to just those types of items. When I remove them here, you can see they go away there. These little labels also help you, when you're looking at the results, to see what you've got in front of you and distinguish whether something's a Helm chart versus an operator-backed service.

Let's look first at these operator-backed services. There's also this group-by filter here; you can choose to group by operator. All that does is bunch together the different items you can install based on which operator they're related to. So for example, these are all ones related to the Kubernetes build API, and you can see ones down here that are all related to AMQ Streams or Kiali and so on. So let's go through the process and see what it looks like if we want to install one of these. You can click on it, you get some information about it here, and when you click create, you're able to change these defaults if you need to. I'm just going to click it and see what happens. You can see what I already had deployed in my project here, and then here is the operator-backed service that is being brought up for us. It looks different in the topology view than your standard, you know, application and deployment config.
This O, I don't know how well you can see it on my screen share, but there's an O here to indicate that this is one of those operator-backed services. And this, whoop, that happened really fast, this dotted line around it is going to encompass all of the things that are related to that operator-backed service. So as this is coming up, I'll just click into one of these. If I click into the stateful set, you can see information about it, and for these operator-backed services you'll also see this "managed by" label, so it'll tell you what this resource is managed by if you want to drill in and look at that. I'm not going to sit here and wait for that to finish. What we'll do instead is go back to the developer catalog and take a look at another type of resource we can install.

So in OpenShift 4.4, Helm reached GA status; Helm 3 is available now. You can filter for Helm charts in the developer catalog. If you search for node, for example, here's an example Helm chart that just launches a Node example app. So let's take a look at that. Once you've clicked install, you can modify these default values; I'll just click install for now. And then down here at the bottom, you can see that coming up. Just like we had the O for the operator-backed services, you have an HR to indicate this is a Helm release, and you can click into that to see information about the Helm release. You can also see in this more menu there's now an entry for Helm, so if you want to see all the Helm releases that you have set up, you can go there to see them as well.

Now, installing a Helm chart via the web interface is just one way of doing it. If you want to use the Helm 3 CLI but you don't have it, you can get it from here: if you click the question mark, you can go to Command Line Tools, and there's the option to download Helm if you don't have the CLI installed already. So with a little luck, I can show you what that process looks like, installing a Helm chart from the CLI. Let's give it a shot. Okay, let's see. If we do helm install... I can't see what I'm typing because of the little Zoom controls in the way. Let's see if this moves. Yes, okay, that's better. So helm install, we'll do example-mysql, and then the chart, stable/mysql. Let's see if I typed that right, and let's see if I'm even logged in via the terminal. Okay, so it did work. All right. Now if we type helm list... you probably can't see that, let me clear the screen so it's at the top. helm list. You can see both the one we installed from the web interface and the one we installed here, both on the cluster. And over here, in the Helm releases view, we should now be able to see both of them as well, because they're both available. And in topology view, you can see them both there too.

The last thing I wanted to show you, as far as the features that have been added to the developer perspective in this release, is the ability to add items to an application or a project in context. This background part of the topology view here, this light gray area: if I right-click on it, I get this add-to-project menu. By doing this, I can add an additional deployment, application, whatever I want to add to my project. Let's do from catalog again, but this time I'm going to look for a builder image. We'll just do Node.js as an example, create an application, and I'm going to use this example repo to launch an application here. Click create, and now you'll see that start to get deployed.
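(For reference, the CLI portion of that Helm demo boils down to a couple of commands. This is just a sketch: it assumes you've already logged in with oc, that the Helm 3 binary is on your PATH, and that the "stable" chart repository has been added; the chart and release names are the ones used on screen.)

    # Add the stable chart repo if it isn't configured yet, then refresh the index
    helm repo add stable https://kubernetes-charts.storage.googleapis.com
    helm repo update

    # Install the MySQL chart as a release named "example-mysql" in the current project
    helm install example-mysql stable/mysql

    # List releases in the current namespace -- shows both the release created
    # from the web console and the one created here
    helm list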
So back to that Node.js app: it's putting itself inside this application grouping that I already had. I could have changed that if we wanted it to be its own separate application grouping, but now that's been added to our project from that menu. You can also do visual connectors. If I hover over this, you can see this little dotted-line arrow show up, and if I do it right, I can create a visual connector. Let's say these two, this deployment and this deployment config, are related in some way; I can do this just to give a visual indicator that those two are related. For certain other types of resources, you can also create service bindings this way. And if you decide you don't want that, you can click delete to get rid of it. So those are just a few of the things that have been added in this release related to the developer perspective and the developer catalog to streamline the process of deploying applications. That's pretty much it for me. I'll stop sharing.

All right. Why don't I go ahead now and talk about pipelines, and specifically the pipeline builder that was added in 4.4. OpenShift Pipelines are based on Tekton, and just to give you a little bit of context on what exactly they are and what they do, let me introduce what those pipelines are. Tekton is a powerful way to create CI/CD systems, and it allows developers to build, test, and deploy directly inside your Kubernetes cluster. OpenShift extends Tekton by adding a very helpful interface to manage all of that.

So what does a pipeline look like? Well, you have that on the screen right now. A pipeline is made up of one or multiple tasks, and each task has multiple steps in it. I already have some tasks deployed here, and I'll show you how we can use the pipeline builder to create those pipelines more easily and how you can manage all of them as well. If I go to my OpenShift cluster, I've already installed the Pipelines operator, so you can see that I have this Pipelines entry in the navigation here, and I can see all the various pipelines that I already have installed, when they were last run, and the status of that last run.

To demo some of that in action, I have an application that was deployed here already. It's just a Node.js application. I should probably have put something here at the root level, but we've got this API: I have a hello route, and I have an add-two route as well, so we can get the result of whatever number we add two to here. So this is my current application. Like I said, it's already built in Node.js, and it's a very simple Express application; I just have my different routes that were added here.

Now, if I go into one of my pipelines, I want to get ready. I want to make sure that I can easily deploy that application whenever I make a change. I can create this new pipeline using the pipeline builder. Let's call it demo-pipeline. I might already have one called that, because I call all of my pipelines demo-pipeline. So this is my new one here, and I can select from the multiple tasks that I have. In the administrator view, you can see there's a lot more to pipelines that you can configure: you can specify all the different tasks, you can specify different pipeline resources, and we'll use those in a few minutes. But I already created all those tasks that I can use inside my pipeline builder.
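(To make the task/step structure Joel is describing concrete, here's a rough sketch of what a Tekton Task, and a Pipeline that uses it, look like as YAML. This is illustrative only: the names are hypothetical, the apiVersion and field names vary between Tekton/OpenShift Pipelines versions, and the git and image PipelineResources that the builder wires up in the demo are omitted for brevity.)

    cat <<'EOF' | oc apply -f -
    # A Task is a sequence of steps; each step runs in its own container image.
    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: npm-test                # hypothetical task name
    spec:
      steps:
        - name: install
          image: node:10
          script: npm install       # install dependencies
        - name: test
          image: node:10
          script: npm test          # run the unit test suite
    ---
    # A Pipeline strings tasks together; runAfter controls ordering.
    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: demo-pipeline
    spec:
      tasks:
        - name: test
          taskRef:
            name: npm-test
        - name: build-and-deploy
          taskRef:
            kind: ClusterTask       # the S2I task shipped with OpenShift Pipelines
            name: s2i-nodejs
          runAfter:
            - test                  # only runs after the test task succeeds
    EOF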
So why don't I get started. I'll just use this S2I Node.js task, and if we go into the details of it, we can see what this task will actually do. There's a little bit of YAML involved here, but you'll see it uses S2I to create a container image and then uses Buildah to push that image into our registry, our internal registry. By doing so, I'll be able to redeploy my application as soon as there's a change in it. So I can use this.

I can also add multiple tasks, either sequentially or in parallel. Before I actually deploy this application, I want to make sure that I can run some tests on it, so I'll just run my Node.js test suite. Once again, if we look at the task, we have a similar YAML file here: first we'll run npm install to install all the dependencies, and then we'll run npm test, which takes care of running that unit testing suite. Now, this test can be done in parallel to another task that I already have created here, which just makes sure that my code is clean, that the code I've just committed to my GitHub repo actually follows the coding standard, the linting standard, and so on. So those two can run in parallel; the order doesn't matter, but I want to make sure that both are successful before I move to my S2I Node.js task here.

Now I'll need to add different resources. My pipeline will take an input: it'll take a git repository as an input. So I'll just call it git-repo, and I'll specify that this is a resource of type git. I could use many different types of resources here; as you can see, we can use git, image, cluster, storage, and so on. So I'll have two resources: my git repository as the input, and as the output I'll have my image, which will then be pushed into my registry. If I click on each one of those tasks, I can specify the required inputs and outputs. This one shouldn't need an output, but let's set the input here, and inside our S2I task we'll have both an input, my git repo, and we should have an image available too. It's not available because I didn't specify the image here, so I can go back here, now I have that image, and that's it. I've got my pipeline. It was all created for me; the builder took care of creating all of this YAML for me. And this can be a little bit tricky when you start playing with Tekton pipelines, or OpenShift Pipelines, on your own: there are a lot of ways to figure out how to run one task after another and so on. Using the pipeline builder is a lot more visual and a much easier way to figure that out.

Now, if I want to run this, it's just a matter of going back to my dashboard. I can start this pipeline and specify which GitHub repo, which one of my resources, I want. So I'm going to run this one here, which will add a new feature to my API, and I will output this image name. Now this will start. You'll see here that it actually runs, and you can see each one of the steps running and what's going on. If you click on them, you can see all of the logs and watch in real time what's going on with those pipelines as well.

Now, I forgot to actually make a change in here, so I'll stop this pipeline, and you can see here the status of this task because I've just canceled it: it tells me that it was canceled. Let me go ahead and add a new feature to my application. I'll just add a new route, which will add three to whatever parameter we pass it.
Here it is, and I will just send back a value. I will parse it... I don't see what's going on here. All right, perfect. I'll add three to this, and I'll return a status of 200. There it is. That should be good. I mean, obviously I should probably write some tests, but I won't do that now; you could also run some tests, and those would validate everything. So now that I have my code changes, I still need to commit them, of course, so that OpenShift knows where to find those changes. So why don't I go ahead: git commit, "add a new add-three feature", and now I can push this.

All right, now that I have this, I can rerun this pipeline. It's not cooperating, so why don't I just go ahead and rerun from scratch here: start, I will use this git repo and this image, and start. So now, as you can see, it's actually running. Once again, I can go back in here, and we should see each one of the steps as they're getting processed and as my CI/CD pipeline manages all of them. This should take just another second; we should have a result soon, and there we go. We've got our test that ran successfully, and we've got our second task which... oh, we failed. So we can go back in here and see exactly what happened. It actually failed in the linting phase, and we should see it here. Well, in theory we should see it; I'm just having a hard time scrolling here. So we can easily find out where in our pipeline it failed, and we can see that we prevented the new application from being deployed in this case. If I go ahead and fix that issue, you'll see that I can trigger that again. So that failure was in there because I didn't leave some spaces here; let me just fix that. I'll recommit my code with the same message, because why not, push that straight back into the repository, and now we can rerun this pipeline, and this time it should run successfully.

So there are a lot of different things that I've quickly shown. We'll see that result in a few minutes; it takes about one or two minutes to run this one. But you can see that the pipeline is actually running, and we've got all the different pipeline runs that we've had in the past. You could also connect some of those to GitHub triggers, so that they would automatically be triggered as soon as you have a change in your code base, rerun all of those tests for you, run that linting to make sure that all of the code is clean and ready to be deployed, and then automatically deploy that application into your cluster. So apparently I did something wrong again, so it failed again, but you get the idea of how it should work. If everything goes successfully, you'll see that it worked, and you'll see a new application being deployed in your topology right here. So that's pretty much all that's new in pipelines, and I guess I'll leave it to Brian now to talk about serverless.

Cool. So let me share my screen real quick. You should see it now. Yeah, cool. So I've been reading some of the articles that we've talked about and saw that, hey, serverless is now GA, or maybe I wrote that article, but either way, serverless is now generally available within OpenShift. Serverless allows you to deploy applications and have them do pretty awesome things, and I'm going to talk about a couple of those things.
One of the main components here is that serverless gives you a way to handle different revisions of your application, and you can determine how you want to release them appropriately. So whenever we have a new release of an application, we can handle that in a canary style, where our existing app keeps working and maybe has 100% of the traffic, but when we release a new version, maybe 10% of the traffic goes there, and we can adjust that as we go to watch the new version grow and make sure things are working appropriately. One of the other big aspects that I was reading about, or wrote about, in this blog is that these applications can scale down to zero. So if I want to manage my resources effectively on my cluster, I can do that using serverless: whenever an application isn't being used, it can scale down to zero automatically instead of idling and consuming memory and things like that. Whenever a request comes in, that application will automatically spin up and start working, and we'll see some of that as I demo.

One of the other things I've been doing is looking at ways to organize my life a little bit better, so maybe a to-do app, or looking at different to-do apps, might be helpful. I was talking to some friends, and they haven't ever used OpenShift or Kubernetes before, so that's another angle: they're not familiar with this, but they work in React and they want to know, how do I deploy an application? I already have an existing React app; all I care about is deploying that app. So all of these things come together: we're going to deploy this application using serverless on OpenShift. I found this to-do app written in React that's pretty awesome. It's simple, but it shows the ideas, and we're going to see things scale and do all this stuff.

So to begin, I have an OpenShift cluster here. This is the same one that Joel was working on, so the project that I have here is isolated from what his project was doing. I don't have access to that; you can see I don't have the same projects or any of that stuff. Because of OpenShift's multi-tenancy abilities, I can see my own things and I cannot access his, which is awesome. We're two separate users doing two separate things. Now, if we wanted to share, we could do that. But anyway, to begin, I'm going to just go ahead and add an application, and I'm going to import it from Git, because you saw I had that repository within Git. So, similar to what Jan showed, I'm going to go here, clone this guy, and copy the Git URL in here, and we can see it's detecting which builder image it should use. The reason this works is that this application has no idea it's running in OpenShift or Kubernetes or whatever; it's just a React application, and there's nothing extra that I've done. I could do extra things to make this build more custom if I wanted, like use the pipelines that Joel talked about to handle different steps in processing; maybe I want to lint this and make sure things work, and I would use Tekton, or OpenShift Pipelines, to do that. But in my case, I'm just deploying the application; I didn't know anything about it to begin with, for example. I just want to deploy my React app here. So I'm going to import from Git. I pulled that in.
You can see that OpenShift automatically detected that this is a modern-type JavaScript web app using React. We could go and choose different builder images, but I'm going to leave version 10 here. We're going to choose an application grouping; Jan showed the boxes that are in that topology view. This is a new application, so I'm going to create a new application here and give it a name: bt-test-react, maybe, 0. And I'm going to give the service the name bt-test-react0 as well. So the application grouping is going to be that, and then this particular instance of this service is going to be that.

Under the resources here, we could deploy with a traditional deployment that's typical in OpenShift; you might be familiar with that. You could use deployment configurations, those types of things. But what's new is OpenShift Serverless, and we want to use that, so we deploy this as a Knative service. Doing this is going to deploy this application as that Knative service, that OpenShift Serverless service, and it will get all of the benefits we're able to achieve with serverless, just by choosing this option here. That's it.

One of the other things in here is some scaling configuration, and this is somewhat unique to OpenShift Serverless. We can specify the minimum number of pods that this container, this application, can be at whenever no traffic is coming in. In our example, we want this to scale down to zero; we don't want to use any additional resources. So I'm leaving that as the default, I'm not going to change it. But maybe your use case is a little bit different and you want to leave that application up while still getting some of the other benefits that serverless gives you; you could scale that up and specify one or two or more. And the reason you would do that is that if an application is scaled down to zero, whenever a new client comes in there will be a little bit of a lag or delay while that application comes back up and gets going. So you have a little bit of a waiting game on the user side, because you're saving resources by scaling down to zero. But we're going to leave it at zero and see how that works. We could also specify different concurrency targets. By default, serverless scaling is handled by concurrency, so if we have, I think it's maybe 10 concurrent users, or maybe it's 100 concurrent users, the number trips me up, I might have to look that up, but once you have that many concurrent requests, by default the pod will automatically scale up. So that's a little bit different than, say, horizontal pod autoscalers that are based on CPU; by default we're getting concurrency-based scaling, which is pretty cool.

So I'm going to go ahead and just click on create here, and you can see that this application has been created. bt-test-react0 is what I gave it as the name, and we can see in this application we have the one ksvc, or Knative service. Let's go back to the topology view, and we can see it in the sidebar. If I click on that, I can see the different resources that are related to it. There are pods; currently our application is being built, and I'll go and look at the build, and you'll be able to see that, and once that's done it will automatically show up in this pod section. We have revisions, and a revision is a specific snapshot, a point-in-time configuration set, of this application.
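(What that form produces is, roughly, a Knative Service object. Here is a hedged sketch of what the scaling settings Brian just described map to; the names and image reference are hypothetical, and the exact annotations may vary by Knative/OpenShift Serverless version. For what it's worth, the default soft concurrency target in upstream Knative is 100 requests per pod, which is the number he couldn't quite recall.)

    cat <<'EOF' | oc apply -f -
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: bt-test-react0
    spec:
      template:
        metadata:
          annotations:
            # minimum pods to keep around with no traffic ("0" = scale to zero)
            autoscaling.knative.dev/minScale: "0"
            # concurrency-based autoscaling target (requests per pod)
            autoscaling.knative.dev/target: "100"
        spec:
          containers:
            # hypothetical image reference in the internal OpenShift registry
            - image: image-registry.openshift-image-registry.svc:5000/my-project/bt-test-react0
    EOF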
So later on I'm going to go ahead and edit this app, and I'll have to create a new revision to pull in those changes. And then we have the route. By default we get a route that is able to scale down to zero and do all these things; it handles all the traffic coming in and all that stuff.

So I'm going to go into builds, and in here we're able to see kind of what's going on. I can click on builds, see that, hey, this is running, and I can see the logs, see what's going on, see the standard stuff that I would see in a normal OpenShift-type build: that this is pulling the last commit that was part of this repo, and all the various things that are happening here.

But what I want to do is, I've started to deploy an application, but I want to be able to work on it a little bit more easily. One of the things that I typically would do is configure a webhook to GitHub. It's a relatively simple thing to do, and it lets me push code and have this pick up those changes and all this stuff. To do this, currently we have to go ahead and create a new secret that has a unique name in here, and I'm going to create that secret. We're good. Now I can actually create that webhook. I'm using GitHub, so I'm going to copy the URL with the secret, go back to my GitHub repo, go in here, and configure the webhook real quick. Of course, looks like I need to log in; give me a second. All right. So, paste in that webhook URL with the secret. There we go. And I am going to disable SSL verification for this, since this is running in my local cluster using self-signed certs and all of that stuff, so I don't need that; it won't work otherwise in my example. And there we go, we're good, it has a green check, the webhook is now set up. So whenever I push code to the repository, it's going to automatically trigger a build. But again, because of what I talked about with Knative serving, a revision is a snapshot point in time, so I'm going to have to create a new revision once that happens.

So anyway, let's go back to the application and see what's going on here. The build is now completed, we can see that it's running, we're good there. And I could drill into a couple of things, but first I just want to click on the link here. And I can see: oh hey, look, my app works. I got a little too excited when I put too many exclamation marks here, but we can see "test", and I can do a simple ephemeral to-do list. There's no database, nothing set up here, but you could go and configure all of those things. And this is running with Knative and OpenShift Serverless.

One of the things that I want to do, because I specified this as a Git-type repository to pull in all that stuff, is I can just click on "edit source code" from my UI here. And this is going to recognize that CodeReady Workspaces is installed on my OpenShift cluster; I already had that configured, I installed the operator and it's set up and ready to go. This is a web IDE: it lets me edit the code within my browser, all running on the same OpenShift cluster. So it's going to go ahead and pull in a couple of the different things that it needs, it's going to spin up my new project to be able to work on this, and it's going to pull in the image for Eclipse Theia, which lets me edit the code on the fly. So you can see it's going through and it's loading, and here we go.
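(A quick aside on the webhook Brian just configured: a similar setup can be wired up from the command line against the app's BuildConfig. This is a rough sketch with hypothetical names; the exact webhook URL format and the way the secret is referenced can differ between OpenShift versions, so treat it as illustrative rather than exact.)

    # Create a secret to use as the GitHub webhook shared secret
    oc create secret generic bt-test-react0-github-webhook \
      --from-literal=WebHookSecretKey=$(openssl rand -hex 20)

    # Add a GitHub trigger to the BuildConfig (the console does this for you)
    oc set triggers bc/bt-test-react0 --from-github

    # Describe the BuildConfig to find the webhook URL to paste into GitHub;
    # it looks roughly like:
    #   https://<api-server>/apis/build.openshift.io/v1/namespaces/<project>/buildconfigs/bt-test-react0/webhooks/<secret>/github
    oc describe bc/bt-test-react0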
Back in the editor: this looks pretty familiar to me. I use Visual Studio Code locally on my machine, and while this isn't Visual Studio Code, it looks very, very similar. So this is pretty neat: I just clicked on that one button from the topology view, and now I can go ahead and edit some stuff. So let's get rid of those exclamation marks; there are too many there, we don't want to be too excited. Let's just get rid of them all, save that, and push it to version control, push it back to Git. Not a very great commit message, but it doesn't have to be. So I committed that; let's go ahead and push it. I need to specify my GitHub login, give it my password, let me copy that over. There we go, you can see it's working. It's asking, hey, do I want to periodically fetch so that it can check for changes, so I don't have to think about it later. That's pretty cool; these are things that help me be a little bit more efficient. I can go back to GitHub and see that, hey, this "updated app" commit was the last thing, where I got rid of those exclamation marks. And I can go to my OpenShift console, go to the builds, and see under here that there is now a build running; it's currently in the running state. That webhook got set up, and it's going to go ahead and update the image. Once that's done, I'm going to create a revision to be able to see the change in action. But we're going to let that run, and I'm going to talk a little bit about this other application that I have available, which has multiple revisions already configured under it.

So in here, we can see that we have multiple revisions, and we can see from this topology view that we have a certain traffic distribution already configured. Version one of our application is set to 80% of the traffic, version two has 20%, and then our latest revision has zero, because that was just a development-type revision; I'm not doing anything crazy with it, but I don't want to automatically overwrite everything and send all the users to it. I want to vet it and validate that it's working appropriately. Whenever users come into the application, they just hit this standard route, the greeter-bt-task, et cetera, right? That's what this URL does here. But with OpenShift Serverless, one of the things that we get is the ability to have these sub-routes. So in here, I can go directly to greeter version one. I clicked on the button, and we can see that it sits here waiting for a second; it spun up really quickly, and you can see in the OpenShift console that it happened: that spun up, and now there's a blue deployment, so there's a pod that's running. And you saw it took a little bit of time. It's not really slow, but there's a little latency there. So one of the reasons you would maybe not scale down to zero is if you have that kind of requirement where, hey, we want it to always respond that quickly. But there are those trade-offs, right? I want to handle my resources a little bit more efficiently, so maybe I am okay with the initial lag whenever a user comes in. And if the app has been sitting for a while, you can tweak the period of time before that pod spins down, terminates, and scales down to zero if you want. But anyway, we can see that here. But say I wanted to go directly to that one revision that was the latest one of this greeter.
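(That 80/20/0 split Brian is describing can also be managed from the terminal once the kn CLI is installed, which he gets to in a minute. A hedged sketch, with revision names made up for illustration:)

    # Split traffic across named revisions of the "greeter" service:
    # 80% to v1, 20% to v2, and keep the latest revision at 0% while it's vetted
    kn service update greeter \
      --traffic greeter-v1=80 \
      --traffic greeter-v2=20 \
      --traffic @latest=0

    # Check the resulting distribution across revisions
    kn revision list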
So, clicking on that latest revision's route, we can see that, hey, it's going to start spinning up; we have one requested, now it's available, and now it's loaded. So that's pretty neat.

Let's go check on the builds real quick and see what's going on there. We can see that that build is now complete, and what I need to do is specify a new revision for the app that I was building, right? So if I go in here, go to the ksvc, and edit this service, I go into the spec, the metadata, and I'm just going to give it a name of bt-test-react0-v2. That's all I have to do, because looking in here, we can see that the spec is using the image stream within our OpenShift registry, and there's no particular tag, so it's just going to pull in the latest one for this particular revision. Now, if I wanted to pin specific tags to specific revisions, which is probably a best practice, I would go ahead and specify those tags and all of that stuff. But I'm not going to do that; I just specified, hey, give me the latest, and this will give me a whole new revision of this application. And now you can see: no exclamation marks. That got edited, I did it all in the browser, and that's cool.

Now, that's awesome: a React app spun up using OpenShift Serverless, and we showed some of the automatic webhooks and all of that stuff that we can do. One of the other big things that happened with OpenShift Serverless becoming GA is having a command line tool that lets us interact with our Knative services, our OpenShift Serverless services, a little bit better and more easily from the command line. OpenShift gives us that tool packaged as part of the OpenShift release, so I know that the bits that I'm downloading on my machine, or that my developers are downloading on their machines, are the official signed code and all of that stuff. To get it, I just log into any OpenShift 4.4 cluster, the one that I have available, click on this question mark here, and go into the command line tools. In here, I can see there's the Knative, or kn, OpenShift Serverless command line interface, and I can download it for Linux, macOS, Windows, what have you. I already have it downloaded, so I'm not going to waste time doing that, but I'm going to show you the terminal real quick as the last thing.

So let me switch over to that, and you should see it here in a second. In my terminal window, I already have the kn command line tool up and running. I ran kn service help, because we're interacting mainly with kn service. And I can see that there are a couple of different things I can do: I can create, I can delete, I can describe, I can list all of the things that are running as Knative services on my cluster. So I'm going to just go ahead and create a brand new OpenShift Serverless service. To do that, I just do kn service create, give it a name, and then I'm going to specify a container image this time. So instead of pulling in the Git repo, I'm going to give it an image. Click enter, and we can see that, hey, this is creating the service named greet-test. And you can see it took... oh, look, there's an issue. Oh, I see what happened: I didn't specify the tag, or the name of the tag, of that image properly. So there we go. I already have that name, so, just for the purposes of the demo, what I would normally do would be kn service delete greet-test.
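(For reference, the kn commands in this part of the demo look roughly like the following; the image reference is a placeholder, so substitute whatever container image you want to run.)

    # Create a Knative (OpenShift Serverless) service from a container image
    kn service create greet-test --image <registry>/<namespace>/<image>:<tag>

    # List serverless services, revisions, and routes in the current project
    kn service list
    kn revision list
    kn route list

    # Delete a service when you're done with it
    kn service delete greet-test

    # Generate shell completion (bash or zsh) and source it from your shell rc file
    kn completion bash > ~/.kn_completion
    echo 'source ~/.kn_completion' >> ~/.bashrc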
So let's do create, greet-test, and give it a number in here; that way we don't have to wait for the other one to delete. Anyway, we can see that now everything is good, right? It took seven seconds, the time to pull down the image and create all of the routes, revisions, and all of that stuff. I can just go ahead and curl this guy and see that it worked. That's cool. I can do kn service list, if I can type, and I can see all of the services that are running. I can see my React app that's here, I can see the original greeter that has the multiple revisions that you saw, and then I can see the new one that I just created there too. So that's pretty neat.

And the kn tool does quite a bit of stuff, right? It lets you set up command line completion in the terminal so that I can work with the tool a little bit more easily, which is pretty awesome. I can look at the different revisions: if I do kn revision list, I can see all the revisions I have available, as well as the percentage of traffic associated with each, and if I wanted to, I could change that relatively easily. I can also do kn route list, and in here I can see all of the various routes that are available for my OpenShift Serverless services. I see one of the questions in the chat: does that command line completion offer bash or zsh or all of that? So, there we go, kn completion accepts either bash or zsh. I can go and specify, like, bash, and then it's going to give me the stuff that I would drop in as an include into my .bashrc or my .zshrc file. So I could just pipe that into a new file and include it if I wanted, and that's how you get command line completion for the tool. And completion should also, I think, work with Windows; it definitely works with Mac. As far as the command line tool itself goes, it definitely works on all three, so those are available there.

So anyway, that's the quick "let's launch a React app using OpenShift Serverless" and, you know, the workflow of how to edit and work on those applications. Now, in a production environment or whatever, maybe I would want a little bit more robustness for each step in that process, and I would tie OpenShift Pipelines to that build step so that I could do extra things like what Joel was talking about in his spot. But with that, there we go, back to you, Josh, I guess.

Right on. That's great, Brian. So we wanted to reserve a few minutes at the end here today for questions that we'll be taking in the Twitch chat stream, so if you have questions, pop them in there. Reading that stream and listening to those three presentations, I definitely had a question of my own come up, for Joel. Like, how do I integrate OpenShift Pipelines into my existing developer workflows? Are there IDE plugins? Are there easy ways to work with pipelines if I'm new to them?

Yes, that's a great question, Josh. I'm typically using VS Code as part of my development process, and in here I already have installed the pipeline... oh, actually it's right there. So I've already installed the Tekton Pipelines plugin, which is available, and you can see here that I have all my different pipelines available. It connects automatically to my OpenShift cluster, and I can see the pipelines that I have, I can see the latest results from my latest runs, and so on. If I look in here, and I open this one up, I can see...
Do you want to share your screen with us, Joel? Are you showing things that we should see? Of course. Of course. Okay. There you go. Is that better? Yes. Thank you.

So yes, you can see here that I can connect and see all the different resources that I have, all the different pipelines that I have inside my cluster. And I can see all the different runs that were performed as well. So, my latest pipeline runs: you can see the one that failed during the demo. I actually fixed it, so you can see that the latest one actually worked. So it connects, it uses the tkn command line tool under the hood, and you can visualize a lot of different things in here.

Right on. Cool. So you may have mentioned this, and I might have missed it while I was worried about your screen sharing, but this is... I can get this from the usual VS Code plugin or extension catalogs? Like, it's easy to find? Exactly. While I'm still sharing, I can actually go in here and search for Tekton Pipelines, and it's just a matter of installing it from the marketplace, the VS Code extension marketplace. Cool. Very cool. It's even got a cute logo. I like it.

So Brian, somebody in the chat is asking what I think is a pretty interesting question, which is: can you give us any information to distinguish between when you would use a minimum pod count on a serverless service that you set up in Knative, and what the function of that is, as opposed to just scaling it as a deployment in a standard or direct Kubernetes way?

Yeah. So by default, with a standard type of Kubernetes deployment, you specify, hey, I want maybe three replicas of this particular pod, this instance, right? And maybe I realize that I need to scale that up because it's not meeting the load. Well, by default, there's no automatic scaling or any of that stuff; that's something I need to handle. And I could easily do that: I could just do a kubectl or oc scale on that particular deployment, plus one or whatever. But one of the big things that serverless gives you is the ability to not even have to really deal with that. It's all handled in a Kubernetes way, it's all handled really nicely, but I can go ahead and deploy my application and, by default, without me thinking about the ops point of view, all I really did in my example was push an application. If a lot of people start hitting that URL, what's going to happen is that application is going to scale up. So by default, I'm getting awesome scaling that's built into OpenShift Serverless, and by default you're getting a deployment that scales to zero whenever nobody's hitting that application. And the reason you do that is: I don't want to consume resources for an application that I'm not using. Why do I need to do that? I don't. So I can scale down to zero.

And maybe my workflow is something like this: this application handles logins for our web portal or something, and a lot of logins happen in the morning, but overnight nobody's working, because I only have users or employees that work during the day. So during the night, that could scale down, and maybe IT or whoever could go run a lot of parallel tasks and a lot of other stuff, using the cluster for work that doesn't affect the login tool. But in the morning, that first user, whenever they come in, might have to wait that little bit of time for the application to scale up to one pod, or multiple pods, or whatever.
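(As a concrete contrast to what Brian is describing: with a plain Deployment you scale by hand, or you create a CPU-based horizontal pod autoscaler yourself, while a Knative service scales on request concurrency and down to zero without any extra objects. A hedged sketch with hypothetical resource names:)

    # Plain Kubernetes/OpenShift deployment: scaling is on you
    oc scale deployment/login-portal --replicas=3

    # ...or you create an HPA yourself, typically driven by CPU usage
    oc autoscale deployment/login-portal --min=1 --max=10 --cpu-percent=80

    # Knative/OpenShift Serverless service: concurrency-based autoscaling and
    # scale-to-zero are built in; you only tune them, e.g. via annotations like
    # autoscaling.knative.dev/minScale and autoscaling.knative.dev/target
    kn service describe login-portal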
But once that's happened, everybody else gets the really instant response you would expect; that's really fast. And I'm using the resources available to me more efficiently, and that's one of the reasons you would do it. One of the other things that serverless enables is event-driven architecture, and I didn't even talk about that very much, but there's a whole eventing aspect of serverless that lets you do things based on events that have happened in the cluster. Maybe somebody uploaded a file to an S3 bucket, or maybe somebody created a user account, and that fires an event to my Kafka stream or what have you, and that then goes and wakes up some services that need to handle processing for that user. But they don't need to happen immediately, they just need to happen, and they don't need to always be running; they just need to say, hey, I got a user, let's go do what I need to do, and that's all that app cares about. So event-driven architecture is a really big reason why I would want to scale down to zero, because I might have many microservices that are all available and do important things, but they don't always need to run at the same time.

Right on. Very cool. Thank you. And we'll look for any feedback on your discussion of that from the folks who had questions about it in chat. So, by the way, I've posted in chat, while Brian was talking, the promised link to the free edition of the Kubernetes Operators book. But more germane to today's discussion, I want to make sure to briefly share my screen and let you take a look at the URLs for blog posts covering the topics that all three of our folks have introduced and highlighted in OpenShift version 4.4 today. We will also post these links for your future reference in the eventual VOD listing for today's broadcast, so you'll be able to find them there as well as on your screens here.

And with that, we'll have a little bit of open discussion among the folks who presented, to make sure that we've caught questions coming in, or follow-ups to the couple of things that Joel and Brian were able to touch on in detail. Jan, I was super interested in the visual connection properties in the developer perspective. That's something I'm definitely going to play with; I haven't tried it very much so far. It reminds me of connecting things in WebObjects a long time ago, which is one of the first drag-and-drop service specification interfaces I ever saw. Yeah, I think it's cool because, depending on how different people want to use topology view, if you want to use it as a visual representation, almost like a map of how your application works, it allows you to do that. But you can also use it in a more functional way too, if service bindings are something that you need.

Right on. So I'm going to hand things back over to line producer and Twitch master extraordinaire, Chris Short, and I'm going to stop sharing my screen so that we can have a human face up here while we're talking. Oh, I see we've made it to Brady Bunch view, which is my favorite thing about these sessions. And Chris, you can probably take us toward a wrap-up. I think we've hopefully answered people's questions. All of us are available at our Red Hat email addresses, all of us are on Twitter for follow-ups, and you can get to those addresses through our IDs on Twitch, where you've been watching the stream. Chris, take it away. Awesome.
Thank you so much for joining us today, everyone. Thank you to Jan, Brian, Josh, Jason, Joel, and everybody in chat for asking questions and participating. Thank you again. We also have our friends over at DevNation, so head on over to dn.dev/upcoming and you'll see some of the folks from this chat and some potential future sessions. Also coming up later today at 3 p.m. Eastern, you'd think I'd figure out the UTC time by now, 1800 UTC... nope, that was last week. Sorry, 3 p.m. Eastern, we'll just go with that. Josh, or rather Eric Jacobs, and I will be breaking down the machine autoscaler. That'll be interesting, so hang on tight for that one. So thank you again, and we are going to sign off for the day. Great to see everyone, and talk soon.