This demo will give you an introduction to using OpenShift from the perspective of a developer. Throughout this demo we'll be deploying a ParksMap application. It consists of a front end, which is a map visualization tool, and two backend services that supply geospatial data to the application. There are multiple ways you can interact with an OpenShift cluster. There's the web console you see here, which has both an Administrator and a Developer perspective; we'll spend a lot of time there in this demo. But you can also interact with OpenShift through the command line. Since OpenShift is a Kubernetes distribution, you can use the kubectl command line tool, or you can use oc, the OpenShift command line tool. The syntax between kubectl and oc is the same, and they behave the same, except oc adds a few additional commands for convenience. Let's check that we're authenticated to our cluster. Okay, we're signed in as user6 right now. The first step in deploying our application is to deploy the ParksMap front end, and we'll do that from an existing container image. Let's toggle to the Developer perspective. We land in Topology view, which is empty because there are no workloads deployed yet in this project. You can see there are six different ways to deploy application services or components. Since we have an existing container image, we'll click that one. I'll paste in our image name, and then I can either hit Enter or click the magnifying glass, and this will pull in the metadata for our image. We'll give our application the name workshop, and our component is called parksmap. You can choose either a standard Kubernetes deployment or an OpenShift deployment config; we'll choose deployment config since we're using some of the features it includes. This box here is checked by default; it will create a route for our application, which allows us to access it from outside the cluster with a URL.
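The same first steps can be sketched on the command line. This is a minimal sketch, not what's typed in the demo: the image reference is a placeholder for the image name pasted into the console, and the label is one of those added in the next step.

```shell
# Confirm which user we're authenticated as
oc whoami

# Deploy the front end from an existing container image.
# <image-reference> is a placeholder for the image used in the demo.
oc new-app <image-reference> --name=parksmap -l app=workshop

# Expose the service with a route so it's reachable from outside the cluster
oc expose service parksmap
```

Note that `oc new-app` creates a standard deployment config by default, which matches the choice made in the console here.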
You can add additional advanced options here. We'll add a few labels, because this application uses labels for discovering backend services, which we'll talk about later. So we have app=workshop, component=parksmap, and role=frontend. We'll click Create. In Topology view, we now have this gray circle indicating our workshop application, and this circle here indicating our parksmap deployment config. You saw this circle turn from light blue to dark blue, indicating that the pod is up and running. When we click this, you can see our deployment config has several resources: a pod, a service, and our route. When we click the Overview tab, you can see other information about that deployment config. Now, to show you what the application looks like, we can click this icon here, which represents the route that we created. You could also get to that route from here. So we'll open that up, and you can see it's an empty map visualization. Next, let's take a look at some of the self-healing capabilities of Kubernetes. Right now we have one pod running. If we wanted to scale up to two pods, there are a couple of ways to do that. One would be simply to click this up arrow. I actually clicked it twice accidentally, scaling it to three, but that'll let me show you how we'd do this on the command line. You can run oc scale with the --replicas flag; we'll say --replicas=2 and give it the name of our deployment config, parksmap. That scales us to two replicas, which is what we intended, and you can see it happening right here. But what I want to show you next is on the command line. To simulate something going wrong with one of the pods, we'll kill one of them and watch what happens. So I'm going to grab this pod name, run oc delete pod with that pod name, and then run oc get pods.
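The scale-and-delete sequence above looks like this on the command line; the pod name suffix is a placeholder, since pod names are generated per deployment.

```shell
# Scale the parksmap deployment config to two replicas
oc scale --replicas=2 dc/parksmap

# Simulate a failure by deleting one of the pods
# (the suffix is a placeholder for the generated pod name)
oc delete pod parksmap-1-<pod-id>

# Watch the replacement pod get created automatically
oc get pods
```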
All right, so that pod is deleted, or marked for deletion, that is. Once that completes, we should see a new pod running in its place. So now you can see that the pod we had before is still there, but there's a new one in place of the one we deleted. Kubernetes brought that back up for us without us having to take any additional action. Now we'll scale back down to one before we move on. When you have an application running, you may want to look at the logs for your pod. We can do that here by clicking this link to view the logs, and you can see in this window a streaming view of the logs for your application. If we scroll here, you'll notice there are some errors in our logs; we'll take some actions to handle those in an upcoming step of the demo. You can also get to the logs from the command line: if you run oc get pods again, you can then run oc logs -f with our pod name, and you'll get a view of your logs that way as well. Now, this is fine if you have just a single pod, or if you're not really concerned about your pod restarting. But if you want to look at logs across a larger-scale application, or view historical log information, then looking at them this way isn't going to be sufficient. So aggregated logging is built into OpenShift. You can click this link here, and it brings up our Kibana dashboard. We log in with the same credentials that we use for OpenShift, and we have to grant permissions here as well. So here's our Kibana dashboard. We already have a query in here; we have to set this to our user6 project. Okay, so the query says we want to see logs for a pod with this name, which is our current pod, in the user6 namespace, which is our project, for the container named parksmap. We got this context because we clicked over here directly from the pod logs.
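The command-line log workflow described above is just two commands; again, the pod name suffix is a placeholder.

```shell
# List pods to find the one we want
oc get pods

# Stream (-f, follow) the logs for that pod
oc logs -f parksmap-1-<pod-id>
```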
So it brings us right into the context that we want to look at. If we search here for the error that we saw, we should see the two errors that came from our log. Again, these are related to permissions for a service account, which we'll fix in a later step. You can also get to these logs from the Administrator perspective by going to Monitoring and then Logging. So let's take care of that error related to the service account. OpenShift automatically creates a few special service accounts for every project, and the default service account is the one responsible for running pods; OpenShift injects this service account into every pod that's launched. By changing the permissions for that service account, we can fix this error and allow our application to do what we need it to do. We'll go back over to the Developer perspective, click here, and go into our user6 namespace. From here, we'll click on Role Bindings and add a new one. We click Create Binding and give it a name of view, because we want to grant the default service account view access to our project. We select the role name view, scroll down here and select it. The subject is a service account in our namespace, user6, and the subject name is default, for the default service account. We'll click Create. Now, if we come back here, you can see we have a new role binding with the view role for a service account named default in our user6 namespace. You can also add those permissions for a service account via the command line, if you prefer to do it that way. Now that we've done that, we need to redeploy our application so it can take advantage of those changes. We can go to Actions, Start Rollout, and that triggers a new deployment. You can see the latest version is 2 now.
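The command-line equivalent mentioned above is a one-liner, followed by a rollout so running pods pick up the change:

```shell
# Grant the 'view' role to the default service account (-z) in the current project
oc policy add-role-to-user view -z default

# Trigger a new deployment so the application picks up the new permissions
oc rollout latest dc/parksmap
```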
You can also grant other users access to your projects in OpenShift. One way to do that is here, from Project Access. Right now, user6, which is us, has admin access to this project, and the default service account has view access. If there were another user, for example user1, I could give them access to our project as well, with view, edit, or admin access. Let's say I want to give them view access; I can do that here. Now, when user1 signs in to OpenShift, they'll be able to view this project as well. Next, we're going to deploy our National Parks backend. We're going to deploy this one from a Git repository, and the way OpenShift does that is using the Source-to-Image (S2I) project, which takes source code and a builder image and creates your application image. So let's try that out. We have a Git repo here, my Gogs Git repository for this National Parks app. I'm going to grab that URL, and we'll go to this +Add menu. Like I said, we're going to deploy from Git, so I'll paste in that URL here. This is a Java application, and it's Java 8. Again, our application will be named workshop, since we want to group these together, and we'll name the component nationalparks. Once again, we'll just use a deployment config, and we do want a route to the application. As we did before, we'll apply some labels: app=workshop, component=nationalparks, and role=backend. We'll click Create. What's different here is that you can see this icon for a build. So now our build is running. If I click on this, you can view the build logs. Right now it's pulling down that source code, and you'll be able to follow the whole process as the S2I build happens. Once the build is complete, you'll see this turn from the circular arrows into a green check mark to indicate the build was successful. You can also view build logs from the command line, for example with oc get builds.
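An S2I deployment from Git can also be done from the CLI. This is a sketch: the Gogs host and repository path are placeholders for the repo URL grabbed in the demo, and `java` stands in for the Java 8 builder image selected in the console.

```shell
# S2I: builder image ~ Git repository URL (both placeholders here)
oc new-app java~http://<gogs-host>/user6/nationalparks.git \
  --name=nationalparks \
  -l app=workshop,component=nationalparks,role=backend

# Create the route to the backend
oc expose service nationalparks
```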
Or oc logs build/nationalparks-1, which will get us the logs for that build as it continues. This is a Java-based application that uses Maven as the build and dependency system, and for that reason the initial build can take a couple of minutes as Maven downloads all the dependencies needed for the application. We'll go back over to the web console and wait for this to complete. Okay, you can see the build is complete now, with this green check mark, and we have a blue ring here indicating that a pod is up and running, as you can see there. And we can click this route. Now, this is a backend application, so it's not meant to be accessed at this URL, but there is an endpoint, /ws/info, that gives us some information about the application. As I mentioned before, this backend is going to connect to a MongoDB database that will contain some of this location data, so let's go ahead and create that now. To deploy this database, once again we'll go to the +Add menu, and this time we'll use the Database option here. This brings us to the Developer Catalog, where one of the sections is Databases. You can filter here, or search, but I see the one we're looking for, which is MongoDB (Ephemeral). That will work for our demo purposes, because we don't need persistent storage for this demo. Here we're going to name this database service mongodb-nationalparks, and for each of the username and password fields we'll enter mongodb. Notice this says "generated if empty": this is a feature of the template we're using, where if you leave a field empty, it will auto-generate a string for you. But as I said, we'll use mongodb for everything and then click Create. While that's coming up, we'll take a look at what's here. Notice the secret right here in the parameters section; let's click on that.
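The same template can be instantiated from the CLI. A sketch, assuming the standard mongodb-ephemeral template and its usual parameter names; check `oc process --parameters mongodb-ephemeral -n openshift` for the exact set on your cluster.

```shell
# Instantiate the ephemeral MongoDB template with explicit values
# instead of letting the template auto-generate them
oc new-app mongodb-ephemeral \
  -p DATABASE_SERVICE_NAME=mongodb-nationalparks \
  -p MONGODB_USER=mongodb \
  -p MONGODB_PASSWORD=mongodb \
  -p MONGODB_DATABASE=mongodb \
  -p MONGODB_ADMIN_PASSWORD=mongodb
```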
This is a secret that was generated for us, which we can use, for example, with our backend application for authentication with the database. So we'll click Add Secret to Workload. We want to add that to the nationalparks backend, and it'll be added as environment variables. When we add that secret to the workload, it triggers a new deployment, so that's happening right now. We also want to add this mongodb-nationalparks to our workshop application grouping. You can right-click, choose Edit Application Grouping, and select workshop, and it will move over there. When we deployed this database, we weren't presented with an option in the template to add the labels that we've been adding to our other components, so we can do that through the command line now. I'll paste that in and run it. This adds labels to our mongodb-nationalparks deployment config and service: app=workshop, component=nationalparks, role=database. If we go to /ws/data/all, we've got an empty set of data here. We have another endpoint, /ws/data/load, which loads a bunch of data into the database. Then, if we go back to /ws/data/all, we can see it here. So now we have the database all set up. Now, the way this ParksMap application works, it uses the OpenShift API to query for routes and services in the project. If any of them have the label type=parksmap-backend, the application knows to talk to those endpoints for map data. So what we need to do is label our nationalparks route with type=parksmap-backend. Let's do that on the command line now. The way we're doing this isn't a requirement, but it demonstrates that you can use this type of discovery for backend services. All right, with that route labeled, if we come over to the application and refresh, we should see that all the data is showing up now.
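The two labeling steps described above would look roughly like this; the label values are the ones stated in the demo.

```shell
# Label the database deployment config and service to match the
# rest of the workshop application
oc label dc/mongodb-nationalparks svc/mongodb-nationalparks \
  app=workshop component=nationalparks role=database --overwrite

# Label the route so the ParksMap front end discovers this backend
oc label route nationalparks type=parksmap-backend
```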
So now we have our ParksMap front end, the nationalparks backend, and the database all connected. Next, we're going to take a look at application health checks via readiness probes and liveness probes. A liveness probe checks if the container in which it's configured is still running, and a readiness probe checks if a container is ready to service requests. We'll set those up on our nationalparks backend. We'll click here and go to Edit Deployment Config. In a future version of OpenShift, you'll be able to do this via a form rather than editing the YAML directly. I'm going to copy some YAML for these readiness and liveness probes, and we'll insert it here, after imagePullPolicy. There it is. So we'll save that, and that should trigger a new deployment; once again, you can see that happening here. Now let's set up a pipeline in OpenShift to take care of our application lifecycle. There are many different ways of doing this; we're going to use a Jenkins pipeline, and we're going to set it up so that the Jenkins pipeline controls when builds and deployments happen, rather than the triggers we have set up now for configuration changes or image changes. To do that, let's go ahead and remove those triggers from our nationalparks deployment config. Go here and edit that deployment config one more time, and what we're going to do is just remove this triggers section completely. So our image change trigger and config change trigger are now gone. Back in that Git repository in Gogs that we looked at before, we're going to create a Jenkinsfile for our nationalparks application. So come in here and sign in as user6. We'll create a new file; I'm going to call it Jenkinsfile.workshop. Okay. We'll create our pipeline next. Go to the +Add menu, into the catalog, and search for Jenkins.
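The probe YAML pasted into the deployment config would look something like this; the /ws/healthz/ path and the timing values here are assumptions for illustration, not the exact values used in the demo.

```yaml
# Inserted into the container spec of the nationalparks deployment config,
# after imagePullPolicy
readinessProbe:
  httpGet:
    path: /ws/healthz/
    port: 8080
  initialDelaySeconds: 20
  timeoutSeconds: 1
livenessProbe:
  httpGet:
    path: /ws/healthz/
    port: 8080
  initialDelaySeconds: 120
  timeoutSeconds: 1
```

Probes can also be added without editing YAML, e.g. `oc set probe dc/nationalparks --readiness --get-url=http://:8080/ws/healthz/`, and `oc set triggers dc/nationalparks --remove-all` would remove the triggers from the command line.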
For the most part we'll keep the default settings, but I'm going to set this to true, just to speed things up for the purposes of our demo. Back to Topology view: we have our Jenkins service here. I'm going to right-click and add it to our application, and we'll wait for this to come up; it may take a couple of moments. There we go, looks good. Next, we'll click this +Add menu. This time we'll choose YAML: we're going to paste in some YAML directly here and then click Create, and this creates a pipeline that uses our Jenkinsfile.workshop from the repository. Now, you'll notice there's a deprecation message here referring to OpenShift Pipelines. OpenShift Pipelines is a new feature of OpenShift, currently in developer preview, that's based on top of Tekton. It will allow you to do more cloud-native CI/CD in the future, but for the purposes of this demo we're using this Jenkins template. So go to the Builds tab here, click on the nationalparks pipeline build, and then build #1. You can see that our build has started. We'll go view the logs, which opens up Jenkins for us. We log in with our OpenShift credentials, and you can see the logs there as the build progresses. Soon that will finish. Okay, so our build has completed. We can go back to Topology view, and if we click on it here, you can see the version has incremented. Next we're going to configure a webhook to trigger the pipeline execution every time there's a change in our nationalparks Git repository. Once again, let's go into the Builds menu, and if we scroll down here to the webhooks section, I'll click Copy URL with Secret for our generic webhook. Then we'll go over to Gogs, go to the Settings link here, choose Webhooks, and under Add Webhook select Gogs, then paste in that URL. We'll switch the content type to application/x-www-form-urlencoded, and everything else can stay as is. We'll click Add Webhook.
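The pasted pipeline YAML would be shaped roughly like this; the Git URL is a placeholder, and the exact YAML used in the demo isn't shown.

```yaml
# Sketch of a JenkinsPipeline-strategy BuildConfig that runs the
# Jenkinsfile.workshop from the repository
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: nationalparks-pipeline
spec:
  source:
    type: Git
    git:
      uri: http://<gogs-host>/user6/nationalparks.git
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile.workshop
```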
Now, back in the code, we're going to get to the source file we want to modify. Okay, it's this BackendController.java file, so we'll click Edit, and we're just going to change some of the text that's returned here: instead of "National Parks" we'll say "Amazing National Parks", and commit that change. What you see here is that as soon as that happened, a new build was kicked off based on that webhook. So it triggered a new pipeline build. Now, if we type oc get dc to get all the deployment configs, you can see several things listed here. Our nationalparks deployment is at revision 5. Let's roll back to revision 4. We can do that with oc rollback, giving it the name of that deployment config, nationalparks. We get a message back that it was rolled back to nationalparks-4, and that takes us back to the version we had prior to the change; you can see a new deployment happening. You can also roll forward, similar to how we rolled back: we can roll forward to nationalparks-5, and you'd do that the same way, with oc rollback. That brings us back to the version that was triggered by the webhook. In the next section of the demo, we're going to deploy a backend service that consists of a REST API and MongoDB, but this time the application will already be wired together and described as a backend for the visualization tool, so once the application is built and deployed, you'll be able to see it in the map. We're going to use a template for this. Let's take a look at what's in that template: it's basically all of the information that's needed to generate and deploy this application. From the command line, we can run oc create with the -f flag and pass in the URL to that file, and we get a message back that the mlbparks template was created. Now, there are two different ways you could instantiate the template from here.
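The rollback sequence above can be sketched like this; the -5 suffix form for rolling forward names a specific deployment and is an assumption based on how oc rollback addresses individual deployments.

```shell
# List deployment configs and their current revisions
oc get dc

# Roll back nationalparks to its previous deployment
oc rollback nationalparks

# Roll forward by naming the specific deployment to restore
oc rollback nationalparks-5
```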
We could do it from the command line with oc new-app, or we can do it in the web interface: we go to the +Add menu, go to the catalog, and search for MLB. Now we have our MLB Parks item here in the Developer Catalog, so let's do it this way. I'll click on this, click Instantiate Template, and now you see all the parameters here that we can fill out. For our application name, we'll use mlbparks; everything else can stay as is. We'll click Create. Over here, you can see the mongodb-mlbparks database was spun up, as well as this mlbparks deployment. We can put each of these into our workshop application grouping. When we deployed our nationalparks backend, we saw how S2I helps you get from source code to a container. But when you want to do fast, iterative development, that may not be the best option. For example, if you want to change some CSS or one small method, and you don't want to go through that full commit-and-build cycle just to see if the change worked, you may want to do this a different way. There are a few ways of addressing iterative development. One of them would be to use the command line tool called odo, but we'll show you another way: with the OpenShift CLI, you have the ability to do a deployment from your local machine. In this case we're going to use S2I again, but we're going to tell OpenShift to just take the WAR file from our local machine and put it in the image. Doing this type of development lets us do quick builds on our local machine and then quickly send up the WAR file. Over in the command line, we'll do a git clone of this repository, go into mlbparks, and run mvn package. We need to pay attention to the location of this ROOT.war file, because we'll need that directory later. So let's go in and make a code change.
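The CLI route for the template would look roughly like this; the template URL is a placeholder, and APPLICATION_NAME is an assumed parameter name standing in for the application-name field filled out in the console.

```shell
# Register the template in the project (URL is a placeholder)
oc create -f <template-url>

# Instantiate it from the CLI instead of the catalog
oc new-app mlbparks -p APPLICATION_NAME=mlbparks
```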
So we'll go into the source tree, down to the roadshow package, and back in BackendController.java, where it says return new Backend, similar to what we did before, we'll change this message to say "amazing MLB parks" and save that. Okay, we'll go back up to our mlbparks directory and run mvn package again. So now we have our WAR file built, and we want to kick off a build using it. What we can do is run oc start-build, pass in the build config for mlbparks, say --from-file and tell it to use target/ROOT.war, and then pass the --follow flag so we can see what's happening. As soon as this is done and deployed, the map should have a new label. Let's watch what's happening from the web console. You can see our build is running here. The build is complete now, the deployment is rolling out, and as soon as that's finished, we'll pull up the info page for our mlbparks backend, and we have the new message here now. So that's one way you can speed up your build-and-deploy process. Again, as I mentioned before, the odo command line tool is also very useful for fast, iterative development. And that completes our demo, walking you through OpenShift from the perspective of a developer.
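The local-build-plus-binary-build loop above comes down to two commands, run from the mlbparks directory; the repo URL in the clone step is a placeholder.

```shell
# One-time setup: clone the repo and enter it
git clone http://<gogs-host>/user6/mlbparks.git
cd mlbparks

# Iterate: rebuild the WAR locally, then start an OpenShift build
# directly from the local artifact, following the build output
mvn package
oc start-build mlbparks --from-file=target/ROOT.war --follow
```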