Well, hello and welcome, everybody, to this OpenShift Commons briefing. Today we're going to hear from Nigel Brown, who's a developer advocate at IBM, about Java application deployment options for OpenShift on the IBM Cloud. We're really excited to have Nigel here today. And today is also our first experiment in live streaming to Twitch, so if you see me twitch a few times today, it's just because this is a new experiment. As well, Nigel will be available for some Q&A at the very end of this. So if you're joining through BlueJeans, ask the questions in the BlueJeans chat. If you're joining through Twitch, ask the questions on Twitch, and our other moderator over there will forward the questions to this session. So without any further ado, Nigel, thank you very much for coming, and we're thrilled to have you. So take it away, and let's see what live demoing does on Twitch and streaming at the same time. This is going to be great. Thank you all so much for having me. My name is Nigel, as has already been said. I'm a developer advocate at IBM. I do advocacy around IBM Cloud, working a lot on cloud-native deployments, working on containers, Kubernetes, and OpenShift. So one of the things that we noticed when we were starting to do a lot of these workshops is that people were happy with the demos that we were giving in Node.js, but a lot of people were coming from Java backgrounds, or have their monolithic applications or other types of applications written in Java that they want to be able to get into OpenShift, and we weren't doing them enough of a service. So what we have here today is a workshop that's going to go through a more entry-level overview of OpenShift and how we can get our Java applications, specifically those that are written in a microservice way, to deploy onto OpenShift.
And if the demo gods are willing, we'll be able to show you a few different ways of deploying, show off some of the features of OpenShift, as well as this very lightweight, bare-bones example of a Java microservice that you can take and modify for your own uses. So we're going to switch off the slides and bring in where we're going to be working from today. Okay, let's see. And then a quick question: can people see my camera as well, or are they only seeing my screen? We're seeing both your face and your screen. Okay, I will try not to pull any weird faces while we're live. No promises. Okay. And then I will share this chat. So this is a workshop that I gave last week on the 13th, and I set up a gist at the time with a lot of helpful links in it. So I'm going to drop the gist there so you all can follow along with everything that's going on here. What I want to show is this link here. It's an OpenShift on IBM Cloud workshop that was written to show Java and OpenShift 4. The application that we're showing today is a web app that would be a sort of author lookup system: you'd send a GET request to figure out who an author is and get information back on that person. So we've got the web application front end that requests data from the web API. The web API retrieves a list of articles, title and author's name, from the article service. And for every author, it retrieves the details, the blog URL and the Twitter handle, from the authors service. In this lab, we only use the authors service, but again, this is built as a very bare-bones skeleton that you can build on top of for your own application. So I want to give a huge shout-out to Niklas and Harald, whose names you'll see a lot through this workshop, because they're the ones that wrote this content. So in this workshop, they have the prerequisites and then the optional parts.
And then the rest of it, the fourth through the eighth exercises, deals more with getting the IBM Cloud deployment working and hooking it in with other services and things of that nature. But I think it's important that we do spend a little bit of time in the optional parts, because that's where all of the content about Java is. And this lab is beautifully documented, so whenever you do want to go review it, it'll all be there for you. I'm going to move this over so I can see that as we're going, in case people write things there. All right, perfect. Close that up. Okay, where were we? So we're going to start off with the prerequisites, of course. One of the beautiful things about working in IBM Cloud is that we have an IBM Cloud Shell now. So if I go into IBM Cloud and I hit this little button here that looks like a shell, we'll get a shell that loads up. And if you don't want to install packages locally, especially with keeping track of versioning and everything else, you can definitely work in the IBM Cloud Shell. You can do most things that you need to do there. Anything that's Docker related, you'll have to have a local Docker and Git installation. But for everything else, dealing with oc, the OpenShift CLI, or dealing with kubectl (kube control, kube C-T-L, kube cuddle, however you pronounce it; I'm not going to die on that hill today), you can use all of that inside of IBM's Cloud Shell. The rest of the prerequisites... oh, I should probably put the link in for creating an IBM Cloud account. I did not do that. I will get that and put it in later. Okay, so for the rest of this, we're going to go and grab the GitHub repo that we're going to be using. So in the Cloud Shell, we're going to clone a repo. So, this one. Then we're going to move through the order of these.
And then, yeah, if there are any questions that anyone has, especially Diane, if you want to ask anything or say anything while we're going through the more mundane parts of me copying and pasting things, I'm happy to do that. And then we're going to change into that directory. The last step in the prerequisites is to create an environment variable called ROOT_FOLDER. We're going to be passing that in as an argument to some commands later. So, yeah, if you're not familiar with bash, this essentially sets an environment variable called ROOT_FOLDER by running the command pwd (print working directory); it sets ROOT_FOLDER to the output of that command. We can make sure that it went through by echoing ROOT_FOLDER. Cool. We are in the right place. Great. And then I already have a cluster provisioned in IBM Cloud. This is me, PNB Dev. But if you were creating an OpenShift cluster, you'd go to the catalog. Actually, let me do that in a different way. So once you've signed into your IBM Cloud account, you go to the catalog, and they're going to have some featured, recommended things. But if you search for OpenShift, you get this Red Hat OpenShift on IBM Cloud. And then you have a choice between 3.11 or 4.3. And, yeah, go through, set up your cluster however you'd like. Pick out your infrastructure and how many worker nodes you need. And then, yeah, it'll take about half an hour-ish to get the cluster up and running. And when you do, you'll get to a page like this that tells me I have a cluster set up with two worker nodes that are working normally. Let me close my Slack so that we don't get interrupted. And if I go to the OpenShift web console, which I've done here on this page where the topology shows up, this is what we get. I'm in the developer persona; we're going to talk a little bit about that in a minute. But, yeah, this is how my cluster is doing right now. And that gets us through the prereqs.
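That last prerequisite step can be sketched in bash like this (a minimal sketch; the variable name ROOT_FOLDER follows the workshop):

```shell
# Set an environment variable called ROOT_FOLDER to the output of
# `pwd` (print working directory)
export ROOT_FOLDER=$(pwd)

# Verify that it went through by echoing it back
echo "$ROOT_FOLDER"
```

If the echo prints the path of the folder you cloned the repo into, you're in the right place.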
One of the things that's cool to see is that when you're inside of the OpenShift web console, you can grab this command here. So if you hit the dropdown on the name and you copy the login command... generally a bad idea to broadcast your tokens. I'm going to do it quickly, and they're going to reset in probably an hour or so, so you won't be able to hit my cluster after this. But, yes, reset. Oh, cool. Great. And then if I open up my Cloud Shell and just paste that command in, it logs me in. And, yeah, now I can run oc version just to see that I'm logged into my cluster, and it tells me all about my OpenShift cluster that's running. Okay. And I think that brings us to the end of the prerequisites. Nothing in the comments yet. Hopefully people are still okay, still hanging out with us. And so this part would skip ahead to lab four. We don't want to do that, because we want to look at the earlier labs; we want to talk about what's going on with Java. So, lab two, the first part. And there are helpful videos to get you through all of the content that's here, of Niklas and Harald working through all of these different examples. But it's important to bring up the infrastructure of our Java code. So we created microservices implemented with Java EE and Eclipse MicroProfile. The microservice has been kept as simple as possible so that it can be used as a starting point for other microservices. It contains the following functionality. You've got an image with OpenJDK, Open Liberty, and MicroProfile. We've got a Maven project for all of our project management needs. We've got an Open Liberty server; that's how our web server is working. We've got a health endpoint, which is going to be important later when we start talking about how OpenShift and Kubernetes work to make sure that our applications are running healthily.
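The login flow just described looks roughly like this; the token and server address below are placeholders for whatever the web console's "Copy Login Command" option gives you, not real values:

```shell
# Paste the command copied from the web console dropdown
# (token and server are placeholders)
oc login --token=<your-token> --server=https://<your-cluster-endpoint>:6443

# Confirm the login and show client/server version details
oc version
```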
We've got YAML files for folks who may be used to already deploying to Kubernetes and want to do a similar style of deployment to OpenShift. And then we have our sample REST GET endpoints for the authors application, for getauthor and for the authors. And then, again, the service provides the REST API getauthor. Normally, we'd use a database, but we're storing the data locally just so that we can show how all of these things fit together. And I would encourage you, definitely, if you're working through this on your own time, take a minute, read through this. This is really good. It even links out to a blog that Niklas has written about how to run the Hello World Java microservice. And then we've got, inside of our repo as well, because we're going to talk about different methods of deploying to OpenShift, one method we're going to talk about: deploying with a Dockerfile. If you already have a Dockerfile defined for the application that you're trying to run, you can do that really simply with OpenShift. And the Dockerfile is interesting because it's a multi-stage build. At the beginning, we build an environment container with all of the requirements for Java, and that sets up everything that we need to build the application. It makes sure that the container that we're deploying doesn't have anything in it that it doesn't need, because if you set up one container, for example, that has all of the build tools in it, then that's just a bigger container than it needs to be. Instead, we set one up first to handle all of the build, all of the environment variables, all of the environment setup, getting Maven installed, everything like that, and then we start another container that actually runs our application.
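A multi-stage Dockerfile in the spirit of the one described might look roughly like this; the image tags, paths, and file names here are illustrative assumptions, not the exact contents of the repo:

```dockerfile
# Stage 1: build environment with Maven and a JDK (tag is illustrative)
FROM maven:3.6-jdk-8 AS build
WORKDIR /usr/src/app
COPY pom.xml .
COPY src ./src
RUN mvn clean package

# Stage 2: lightweight runtime with only Open Liberty plus the built
# artifact; none of the build tools above end up in this image
FROM openliberty/open-liberty:kernel-java8-openj9-ubi
COPY --from=build /usr/src/app/target/*.war /config/apps/
COPY liberty/server.xml /config/
```

The point is the split: the first stage carries Maven and the JDK, while the second stage copies out only the built artifact, so the container that ships to production is much smaller.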
The container that gets sent out to production is a lot more lightweight and easier to deploy, and that's something that will become a concern as we're deploying more and more microservices and looking at the resource requirements of what we're building. So, the example will take it line by line to say, okay, here's what's going on in each line of the Dockerfile. And then there's an option here to run the container locally to see what's going on. I'm going to check how we're doing on time. You've got as much time as you need to make sure you cover everything you need in as much depth as you can. Awesome. All right. Well, yeah, let's do this locally. Let me make this a little bigger. So, I've done this before, so I'll already have the repo down here. Oh, yeah. No, wrong repo. It's the OpenShift on IBM Cloud one. That's the one we're looking at. Cool. And then if I go to... yeah, why not use S2I? Thank you. I'm so glad you brought S2I up. We are going to be going into it. Honestly, S2I is one of my favorite features of OpenShift. For those who are unfamiliar, S2I stands for source-to-image. It's an open source project that's worked on by Red Hat. And what it allows is for us to create these custom builder images that will take our code that's already been written, package that into a container for us, and run that. Because essentially a container is just a process that has all this isolation around it. So if we create this automated way of outputting a process that can access all of the libraries and features that it needs, then it's a lot easier to get people who have never used containers before to be able to use them. And one of the really cool things... actually, let's hold that thought. That is something that we do look at in this workshop. We will absolutely do that.
But the reason I don't bring it up early is because, for the people who are doing a lot of Java deployments, what I've found is they often want to get a sense of what's happening with the Java first, before we show them: okay, hey, actually, there's this really cool thing that you can do, called source-to-image, where you don't have to deal with these build containers and everything; we have the containers already built ahead of time for you. But yeah, let's look at it. So if I go to my developer view in OpenShift and I look at the topology, because there's no workload running in this default project (we're going to create a project later for what we're going to do), if you go to the From Catalog option here, you have a huge list of ready-built builder images to set up builds inside of OpenShift. We use the Node.js one a lot, but if you wanted to do an NGINX server or Perl or anything like that, you have a ton of builder images here. And if you don't see what you need, there's really good documentation for setting up your own builder images, and even having them show up in the catalog as well. But yeah, we will get to source-to-image builds. The reason that I didn't show it first was to show how the Java implementation works, and how we set this up to show all of these different options for deploying in OpenShift. But yeah, source-to-image is amazing. I love it dearly, and we will talk about it. So, back to what we have here. We've got a Dockerfile here. If we look at the Dockerfile, it's the same one that we saw in the first exercise, the one we just went through. Let me pull up the instructions on a different screen so that I'm not continually switching windows back and forth. Cool. Got it here. Great. So we want to change into ROOT_FOLDER; I haven't set that variable locally yet.
So I'm going to go up one directory and then set that ROOT_FOLDER variable like we did before. Okay, and then we're going to change into the deploying-to-OpenShift folder under ROOT_FOLDER. And in this folder, we're going to go ahead and docker build. We're going to tag it as authors, and we're going to build from this directory here that we looked at before, the one with the Dockerfile in it. It's really fast because I have built this container before. And then I'm going to run it. So, docker run: we want it to be interactive, we want to remove the container when it's done, and we want to map port 3000 to 3000. And we're going to run the authors image; that's what we just tagged the build we just did as. And we're going to give that a second to get up and running. The thing I've noticed about this container, as opposed to other ones, is that it does take a little bit longer to get going; it's a little bit more heavyweight. There are a lot of things happening in the background to get what we're about to see working. So all of this logging is normal, and as soon as it tells us it's ready, then we're going to go look at localhost on port 3000 and look at the application that we're building. Are there any other questions while we're getting this set up? And, yeah, Diane, I don't know if there's anything else you want to say about source-to-image. I see that you dropped the link there to the docs for S2I. Yeah, no, the S2I stuff has been around for a long time. It's really, really useful. I'll share the links at the end of this session and people can do it. You can go to docs.openshift.com and just search on S2I and you'll find it, if you're listening in. But it's a really handy tool to have. It's all open source too, so it's pretty cool. Yeah, and it is something that, like Diane mentioned, is open source.
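The build-and-run sequence just walked through is, as a sketch:

```shell
# Build the image from the current directory (which contains the
# Dockerfile) and tag it "authors"
docker build -t authors .

# Run it interactively (-i), remove the container on exit (--rm),
# and map port 3000 on the host to port 3000 in the container
docker run -i --rm -p 3000:3000 authors
```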
So you can use it outside of OpenShift if you want to set up your own build paths with S2I, even for Kubernetes builds that you aren't doing with OpenShift. That's totally allowed. And everything we do, all of OpenShift itself, is open source as well, along with all of the operators and anything else that we use in these apps. OKD is the name: you can go to okd.io to find OpenShift itself in all its glory, and operatorhub.io for any of the operators that we use as well. So it is Red Hat; we do everything in the open. Yeah, of course. So yeah, if I hit port 3000, we see that I do have an Open Liberty server running. And let me just grab the path that I need to be able to look at what we're trying. The first path that we need is /openapi/ui. And we get a graphical view of the API working, so it makes it a lot easier to debug what's going on with our calls. But yeah, if we run this getauthor call, the model is that it returns a name, a Twitter handle, and a blog address. And, yeah, we could curl this endpoint and get the same information out. But what's important to see is that this is what we would expect to see once we have the deployment up on OpenShift; if it's not working there, then we know something is going wrong. So I'm going to jump back over. Yeah, we just wanted to see it running locally, and we looked at the UI. And yeah, if you can't open a browser, you can hit it with a curl. And then we're going to go over to lab three. I think I'm finished with running it locally, so I'm just going to shut that container down, let it do its thing, and then we'll hop back over here. All right, great. So, understanding the Java implementation. So yeah, we're using Maven. Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information.
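Checking the endpoint from the command line instead of the browser might look like this; the exact endpoint path and query parameter here are assumptions based on the getauthor API described above:

```shell
# The graphical API view lives at /openapi/ui; the raw endpoint can
# be hit directly (path and parameter are illustrative)
curl "http://localhost:3000/api/v1/getauthor?name=Niklas%20Heidloff"
# The response model is a JSON object with name, twitter, and blog fields
```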
So this is the pom file (pom.xml) that we have set up for our Maven implementation there. How the Open Liberty server is configured is in this block here, and then how the endpoint is implemented as well. So: microservice architecture is a popular approach for building cloud-native applications, in which each capability is developed as an independent service. It enables small, autonomous teams to develop, deploy, and scale their respective services independently. And Eclipse MicroProfile is a really good way of doing that with Java. So these are the classes needed to expose the author service, and the author implementation is here. Yeah, as I said, all of this is really, really well documented. I encourage you, if you want to have a look at it, please take the time to do so. But the important part that we're looking at here is what we saw before: each author has the attributes name, Twitter, and blog. And when you're implementing this yourself, of course, do whatever you need. And then for getting the author, we want to set what our responses are going to be, what our 404, 200, and 500 responses are going to be, and what we'd expect to get back. So this is the data that's being held locally. So for what we want to change, we want to make edits to this block of text here, to be able to see the application updating as we're building it. And I think in the local implementation, I've already made updates because, yeah, I did this a week ago and I didn't nuke the repo. But for what we're going to deploy, we'll be able to make some changes as we're going and see them there. So let's go, yeah, to how we're going to support the liveness and readiness probes in Kubernetes with health checks. So one of the things to understand, if you're not familiar with Kubernetes and OpenShift, is this way of applications being described in an imperative versus declarative way.
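A minimal sketch of the kind of JAX-RS resource being described; the class name, path, and sample data here are illustrative, not the exact code from the repo:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Illustrative resource in the spirit of the authors service
@Path("/getauthor")
public class GetAuthor {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getAuthor(@QueryParam("name") String name) {
        // The sample keeps its data locally instead of in a database
        if (name == null || name.isEmpty()) {
            return Response.status(Response.Status.NOT_FOUND).build(); // 404
        }
        // Each author has the attributes name, twitter, and blog
        String json = "{\"name\":\"" + name + "\","
                    + "\"twitter\":\"https://twitter.com/example\","
                    + "\"blog\":\"https://example.com/blog\"}";
        return Response.ok(json).build(); // 200 with the author payload
    }
}
```

An Open Liberty server with the MicroProfile feature enabled would serve this at the configured context root; the 500 case is left to the runtime's default error handling in this sketch.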
When we work with imperative systems, what we often do is write a clearly defined algorithm for how our application should work; that's what we, as programmers, usually write. With declarative programming, or declarative operation, which is how Kubernetes and OpenShift work, you instead say: when my application is healthy, these are the attributes that it has. And then with the control plane in Kubernetes, the control loops are similar to how a thermostat might work. We have a thermometer that's telling us what the temperature in the room is, we have a desired temperature, and it's constantly checking and making the changes to make our application be what we say it should be. And the way that we're enabling this in our application is with this health endpoint that's returning values to say: hey, I'm healthy, my data is ready to go. And if it's not, then we're going to get errors out of our system. And that's gone into in a bit of detail in the MicroProfile Health documentation on GitHub, so definitely check that out. So I've already changed the data and run the container locally, and we're going to move on, yeah, to see it update. But now let's get into the deploying-to-OpenShift part. I see a comment here: "I understand this is an introduction, but since the door was opened by Nigel, I hope he covers binary builds versus just from source, and delta builds. Starting from a Docker build is probably the slowest and worst option available with OpenShift Container Platform." Yeah. And one thing also to know with OpenShift is that Docker builds aren't going to be available in every cluster, because running a Docker build requires root, and one of the things about managing security is not giving everyone root access.
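The thermostat analogy can be reduced to a tiny control loop: declare the desired state, observe the actual state, and keep applying corrections until they match. This is purely a conceptual sketch, not Kubernetes code:

```shell
# Conceptual reconcile loop: desired state is declared up front;
# the loop keeps correcting the observed state until it matches.
desired=3   # "my application is healthy when it has 3 replicas"
actual=0    # what the control loop currently observes

while [ "$actual" -ne "$desired" ]; do
  actual=$((actual + 1))   # one corrective action per iteration
  echo "reconciling: $actual of $desired replicas"
done
echo "reconciled"
```

Kubernetes controllers run this kind of loop continuously, with the health and readiness probes feeding in the "observed" side.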
So it may be the case, especially with clusters that have been set up and access given by sysadmins to developers, that you may not even be able to run Docker builds in your OpenShift cluster. I want to make a plug for folks: if you have a chance, Red Hat has a great book that covers everything that's mentioned in the comment by Peter; we don't have time to do it all here. Check out this book, Deploying to OpenShift. It's free online. I think that I put the link to it in the gist, but it covers all the different ways to build and deploy OpenShift applications. All the information is for OpenShift 3; there hasn't been one that came out for 4 yet, but the information is still great. It gets into a lot of specifics about how you might set up more specific builds, and the differences between a lot of the build strategies, which we won't have time to go into today. But yeah, that's a great point that you're making, that there are a lot of different ways to build, and some are better than others. But they all have their place; they all have their uses. And yeah, if there's anything specific, at the end we'll maybe try to cover that. But I'm going to plug along with this because I'm watching the time. In this lab, we'll work with the OpenShift web console and with the OpenShift oc CLI in IBM Cloud Shell. The following image is a simplified overview of the topics in the lab, and keep in mind that OpenShift is a Kubernetes platform. Yeah, so Kubernetes is the engine that makes the OpenShift car run. A lot of the commands carry over; I mean, every kubectl command is a valid oc command. So if you're familiar with Kubernetes, there aren't a lot of changes.
I think that a lot of times when we give this workshop, and it probably won't be the case here because this is for folks in the Red Hat OpenShift ecosystem, we spend time making the differentiation between Kubernetes and OpenShift, and helping people understand that they're different, but it's Kubernetes underneath. And yeah, sometimes we talk about how OpenShift was like a Rails thing initially and then it was rebuilt with Kubernetes underneath. But I'm sure you all already know all that, so I won't belabor the point. So the lab has two parts. First we build and save the container image: we create an OpenShift project, we define a build config, we build the image inside of OpenShift, and we save the container image to the internal OpenShift container registry. And then we'll deploy the application and expose the service. And yeah, there's a lovely GIF here that you can check out when you have time to view this on your own. So the first thing we want to do is create an OpenShift project. If you're not familiar with projects in OpenShift, you can think of them as a Kubernetes namespace that has some extra cool features built into it to make it work a little bit better. That's just kind of what OpenShift does: it takes everything that Kubernetes does well and makes it a little bit better. So we're going to... actually, I might have already done this, so I'm going to make a different project. I'll have to be careful about any project names that are in any of the commands, because, yeah, I'm going to pretend my first name and last name are something else today. All right, let's open up our Cloud Shell. And I'm going to pull up the instructions over in a different window. Yeah, I know how this works. I know how to do it, but whenever I'm doing it live, it's always that I'm working a little bit too fast, and I get lost and something breaks. I'm going to do my absolute best to not do that. So if I'm working a little slow, that's why.
So we want to make sure that we're in that ROOT_FOLDER, and we're also going to go to the deploying-to-OpenShift folder. Perfect. We want to create a new project, which is done with the oc new-project command, and then we give it the argument your-first-name-your-last-name. Say my first name is Nigel and my last name is Nigel: oc new-project nigel-nigel. And that creates the new project. You can add applications to this project with the new-app command; for example, the oc new-app syntax. I'm not going to do that, because we have steps here. So what we're going to do is a binary build with the Docker build strategy. There was some mention of binary builds before. Yeah, here we go. So we're going to do oc new-build, so we get our build config created. We're going to name it authors-bin, because it's our binary. We're going to pass the --binary flag, and we're going to use the Docker build strategy: --strategy docker. Okay. And cool. Nice. So for the resources that were created there, we've got an image stream, we've got a build config, and they were both successful. So we're going to start the build now: oc start-build authors-bin, with the from-directory flag pointing here. "Uploading directory . as binary input for the build." And what we're going to do is pop over to our OpenShift console. All of the things that I've done here are available to be done from the console. If we jump into our administrator view, then let me look at our builds. I'm going to look at the project nigel-nigel. There we go. Here's our build that's going right now. It is running. It's a Docker build, as we already said in the flag that we passed. And yeah, we'll let that plug along. Okay, so the build has started. One of the things that we're going to have a look at is working with logs on IBM Cloud. But what we're waiting to see here is the part that says that our image was successfully pushed to our internal registry.
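Gathered in one place, the project and binary-build commands just run look like this (the project name is the placeholder used in the demo):

```shell
# Create a new project (think: a Kubernetes namespace with extras)
oc new-project nigel-nigel

# Create a binary build config named authors-bin with the Docker strategy
oc new-build --name authors-bin --binary --strategy docker

# Start the build, uploading the current directory as binary input;
# the resulting image is pushed to OpenShift's internal registry
oc start-build authors-bin --from-dir=.
```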
So people may be familiar with container registries from dealing with Docker Hub, or maybe you have a private registry somewhere. And I think, well, I know that we have a service on IBM Cloud as well if you want to set up your own registries. And inside of OpenShift there is an internal container registry, just like Kubernetes has an internal registry, that stores all of the container images that become the pods being used in our deployments. The build is taking a little while. But yeah, when it's built, we'll see that that image was pushed over to the registry, and then we'll be able to deploy it. So we're going to hang out for a second. How are you doing, Diane? Wonderful. If I stop hitting mute all the time, it's really good on both channels, so you're flying along fine. Cool. Great. Awesome. Well, maybe let's push ahead a little bit. I don't want to sit and wait for this thing to build, because we have so much to cover, so much to talk about. The next thing that we want to do is to verify the container image inside of OpenShift. So if we look at the image streams inside of Builds, instead of the builds themselves, we see that this image stream was created three minutes ago. There's one image in it. And yeah, it's from a pushed image. And we want to look at the information under the image repository. That's the repo, internal to OpenShift, where our container lives. And we're going to need this in a minute, because the first thing we're going to look at is, if you're used to Kubernetes already, how you might mimic the same things in OpenShift. We're going to need this stream, so I'm just going to copy it now to update our YAML. A little bit of foreshadowing. So we're going to deploy the microservice; that's the next thing we've got to do. And pods are the basic building blocks in Kubernetes.
That's the smallest deployable unit; you don't really deal with containers directly. Pods are just a group of one or more containers, and they represent the processes running in your cluster. So let's start with the deployment.yaml. Again, inside of our repository we already have the YAML there for you for the deployment. And yeah, if you're unfamiliar with how the YAML works, I'd encourage you to check this out. We could spend whole workshops talking about YAML. But the important things that we want you to see here are just the name of our container, the image, which ports are running, and the liveness probes. So in the full deployment.yaml, we're going to have to change this line to put in the right image for our deployment there. We're going to first edit the YAML. Let me get back over to our Cloud Shell. Perfect. And yeah, we're going to look in our deployment folder, and we're going to copy the template deployment YAML that's already there; we're going to call it deployment.yaml. And then we're going to edit that deployment, and all we're going to want to do is change this line here. So, oops, I did that wrong. Vim is not working with me today. It's okay, we'll do this the slow way. I'll remember what I did wrong right after this is done, of course. The tab messed my stuff up too; be careful of your indentation in YAML, just like Python, except you don't get a warning. Then I'm going to save that, and then we're going to apply that deployment. So: oc apply -f, because we're going to pass in the file as an argument, deployment.yaml. And our deployment was created. So let's go look in our topology in OpenShift to make sure that it's showing up the way we expect it to. We're going to jump over to our developer view and look at the topology there.
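The parts of the deployment YAML being pointed at look roughly like this; the names, port, and registry path are illustrative, and the image line is the one you replace with the path copied from your image stream:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authors
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authors
  template:
    metadata:
      labels:
        app: authors
    spec:
      containers:
      - name: authors
        # Replace with your cluster's internal image repository path,
        # copied from the image stream page in the console
        image: image-registry.openshift-image-registry.svc:5000/<project>/authors-bin
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
```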
A brand new shiny deployment of our authors-bin. The container is creating. We're done. We're going to look at how to do this from the console too, but a lot of the time folks who are shelled into their resources don't have the luxury of looking at this beautiful interface that we have here in OpenShift. All of this stuff is available from the command line; we're showing it with the cloud shell, but it works just as easily when you're shelled in. We're going to give that a second to go. Let me look at what comes after this part. Yeah, we're going to also apply a service. And this is another thing that, if you're used to dealing with Kubernetes, you'll be familiar with. If you haven't been, and you're just dealing with OpenShift, count yourself lucky, because we're going to show how to do this in a much simpler way by checking some boxes when we're deploying in OpenShift. I want to at least draw your attention to the YAML here. In the service we see a selector for the pod using the label app: authors. Again, this has already been created for you, but you can update it as you wish, and we're just going to apply the service. It doesn't have to be changed, because the port's the same regardless of which registry your container is stored in. So we're going to go ahead and apply that service right now. So oc, oops, apply, and we're going to pass in as an argument that file that's in this folder, service.yaml. Cool. And the service was created. So we're going to check in the web console again, in that topology view; before, there wasn't a service under Routes, but there will be now when we go look. So, in the topology... now we have a service, and, well, the route is not there yet. We'll figure out what's going on, why it's misbehaving. But... oh, I didn't expose it. That's my fault. So after you apply the service, then you have to expose the route. Let me do that right now. oc expose...
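For reference, a minimal sketch of what a service.yaml like this one looks like; the selector matching the app: authors label is the key bit called out above, while the port numbers are assumptions on my part (Open Liberty's default HTTP port), not necessarily the workshop's exact values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: authors
  labels:
    app: authors
spec:
  selector:
    app: authors        # matches the label on the pods from the deployment
  ports:
  - port: 9080          # assumed port; stays the same regardless of registry
    targetPort: 9080
```

Because the service only references the label and the port, nothing in it changes when the container image moves between registries, which is exactly why this file didn't need editing.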
authors-bin, that's our binary build. Now that that's been exposed, let's have a look. Okay, there it is. So it was my fault. It's usually my fault. Everything that goes well is because they wrote this workshop beautifully, and everything that broke is absolutely my fault. So we can look again at the routes. This is what we saw in our browser before, and if we then go to that same UI, which was at /openapi/ui... perfect. We see what we saw locally, which means it's working. So that at least gives us some idea of what it might be like to develop locally with Docker, using containers and everything like that, and then deploy to OpenShift and have our environments more or less match, the only difference being this long string of text at the beginning here for where our application actually is. Checking the chat: yes, you can expose the deployment to create an internal service, just like you can expose the service. All right, cool. And if we run the GET, you can run it in the UI. I'm checking it with Niklas's name, because that's what's in this one, and then we get the output of his Twitter handle and blog. And we can issue that as a curl command as well. But I want to move a little bit ahead. When we're doing this as a workshop where everyone's following along, we have you create a LogDNA service to capture the logs from your OpenShift cluster, and I definitely encourage you to do that when you get a chance, if you're doing this on IBM Cloud, so that you can get a more intuitive look at your logs and everything. But let's move on to the other deployment options in OpenShift. So if you have an image that exists in Docker Hub already, you can use that. We're going to be dealing more with the OpenShift UI, as opposed to the command line, for these next couple of sections.
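Pulled together, the expose-and-test flow described above comes out to roughly the sketch below. The service name and the /openapi/ui path come from what's said in the session; the hostname placeholder and the exact query endpoint are my assumptions, so verify them against the workshop:

```
# expose the service as a route (the step that was missed above)
oc expose service/authors-bin

# find the generated hostname
oc get route authors-bin

# open http://<route-host>/openapi/ui in a browser, or hit the
# GET endpoint directly with curl (endpoint path is an assumption):
curl "http://<route-host>/api/v1/getauthor?name=Niklas%20Heidloff"
```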
So I'm going to jump back over to our developer view and we're going to add... wait, I clicked the wrong thing, one second, back. There you are. Okay, we're going to deploy a new container image. So in our Add view, under container image, we're going to use nheidloff/authors:v1, which is the same image that we had before, but this time we're going to get it from Docker Hub, as opposed to providing the Dockerfile for our binary build. We're going to change the name to authors-image, because we're deploying it as an image this time. We're going to create a deployment config, and then we're going to automatically create a route to the application, which prevents that whole YAML fiasco I just ran into before. I'm going to go ahead and create that now. And if we look at it, we can see the same things that we had before, the things we had to set up with all of the YAML, automatically handled for us when we do it with the UI in OpenShift. And when it's done, we're going to go ahead and check that same... it's not ready yet. Give it a little bit. Not ready yet. Should be ready soon. There was a fair amount of work to get the Java application up and running before; it wasn't immediately accessible... there it is, it's up and running now. Then we're going to go to that same OpenAPI UI and see exactly what we expect to see. So that deployment took a lot less time than it did before, when we had to build it and then apply the YAML files and everything else. If you have an image that's pre-made, as long as it's OCI compliant, you can run it in OpenShift, with the exception that your containers can't be running as root. There's a flag that you can change in OpenShift for that, but I don't recommend it.
But yeah, by default your containers can't run as root, because if your process escapes its container, then it can execute commands on your cluster as root, and that's generally not a good thing. So if you have your own image that you'd like to deploy to OpenShift, that's all you have to do. Right there, that's the same Java application that we had before, containerized and deployed to OpenShift. So that's two deployment strategies: we did a build with the binary, and now we have an image. Next we want to look at the source-to-image build, excuse me. But first we're going to do it from a Git repository, and the exercise has us doing that from the command line, so let's do it that way, because otherwise I could just click through this instead, break something, and then spend all of our time troubleshooting. We're going to make sure that we're in the right project. Oops, oc, not pc. This is a Mac. oc projects. We're using project nigel. Great, that's the one we intend to be using. And we're going to use the command oc new-app. I'm going to copy and paste in this long string here, and we'll talk about it in a second. So we've got a string for the repository, this openshift-on-ibm-cloud workshop, the context directory, so where the actual application is, which is in that deploying-to-openshift directory, and we're going to name it authors-git, because this is our Git deployment. That started it, and we're going to watch the logs here. Great, everything's built. Amazing. What you're going to see here is that, because inside of our directory on GitHub we have a Dockerfile, it just takes that Dockerfile that's in our Git repository and uses it to build and expose our application. So we didn't have to create any YAML files or anything like that; we just pointed it at a repository that had a Dockerfile in it.
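That oc new-app invocation looks roughly like the sketch below. The repository URL and the context directory name here are assumptions based on the workshop names mentioned in the session, so check the workshop's gist for the exact string:

```
# build and deploy straight from a Git repo that contains a Dockerfile;
# --context-dir points at the subdirectory holding the application
oc new-app https://github.com/IBM/openshift-on-ibm-cloud-workshops \
  --context-dir=<deploying-to-openshift-directory> \
  --name=authors-git

# follow the build logs as it runs
oc logs -f bc/authors-git
```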
And if our repository didn't have a Dockerfile in it, that's where we'd get to source-to-image, so I got a little bit ahead of myself there. We're still using a Docker build strategy, because we have the Dockerfile in our repository. Excuse me, let's go have a look at that. Jump over to Builds, and we see that our authors-git build is here, and the Git repository is there, the IBM openshift-on-ibm-cloud workshops. And if we look, all of the YAML is already generated for us there for the build config; we don't have to deal with any of the YAML stuff, which can be stressful — even if you are used to dealing with it, it can be really stressful. And after the build status is complete, it'll be ready to hit. So authors-git build 1 is still running; it's going to take a minute, as it did before. And then we'll have to create the route again. Oh wait, we've got the service there, so we'll have to expose that service and then get the route. All right, so we'll jump back over to our cloud shell. oc expose the service for authors-git. All right, now we've got that exposed, and then we're going to grab the route from the CLI, as opposed to from the UI this time. So oc get route authors-git. We've got our route. Same thing that we expect to see: /openapi/ui, same as what we had locally. So far we've done three different build strategies, not counting the one locally. We've got a build that we did from an image that existed in a public Docker registry, Docker Hub. We've got this Git build that we just did. And then we did our binary build as well. Let's see if we can fit another one in here. So we've got that built. I think I did all the things here. Yeah, we've got three different deployments there. Amazing.
And now we get to the source-to-image builds, which were mentioned at the beginning. So we have a custom builder image set up, that same image that we were talking about before, because the Open Liberty builder image is not in the catalog already. So we're going to pull in the one that we had before, and we're going to use the CLI to make it available to us in OpenShift. So I'm going to hop over to our CLI. We're already in the right project; that's the first step here. The builder image that Niklas made is available in a public repository, so I'm just going to copy in that command. So we want to import an image: docker.io, nheidloff, then s2i-open-liberty, latest. Confirmed, yes, we want to pull this image and add it to our internal registry. Great. Great, great, great, great, great. It worked just fine the first time. Of course it did. I 100% expected it to work, no problem. Not a surprise at all. So we're going to go check it out in our admin view, have a look at the image streams, and we can see that our s2i-open-liberty image stream is here. And the cool thing about these image streams is that if the image gets updated, it triggers the update here in OpenShift, and then all of the images that were built from it are updated again. So when you're dealing with security vulnerabilities, anything like that, you don't have to go through and rebuild every single container that has the flaw in it. You just look at your builder image, patch whatever needs to happen, and then your builds automatically roll out from there. You can keep everything up to date and secure. So we're going to move ahead to what comes next: deploying the microservice.
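That import step, as best I can reconstruct it from what's said here (the image name is my reading of the audio, so verify it against the workshop), looks something like:

```
# pull the custom Open Liberty builder image from Docker Hub
# into the cluster's internal registry as an image stream
oc import-image s2i-open-liberty:latest \
  --from=docker.io/nheidloff/s2i-open-liberty \
  --confirm

# verify the image stream exists
oc get is s2i-open-liberty
```

The --confirm flag is what answers the "yes, we want to pull this image and add it to our internal registry" prompt mentioned above.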
So the previous step installs the Open Liberty builder image, and it only has to be executed once. After this, multiple Open Liberty applications can be deployed without Dockerfiles and YAML files. The image builder expects a certain directory structure for Open Liberty projects, with two files: the server configuration and the war. So before the code can be pushed to OpenShift, the war file needs to be built with Maven. And Maven is in the IBM Cloud Shell; if you're in a different shell, then you may have to set it up differently. But this is how the builder image is set up, so we have to do exactly this. If you wanted a slightly different build strategy, you could set your builder images up to behave in whatever manner you'd like. So I'm going to go ahead and follow these steps very closely. The first thing we want to make sure of is that we're in the right directory, which is the root folder variable we set up before, slash, the deploying-to-openshift directory. Cool. And then we're going to run the Maven package and let that run. After those commands, the file authors.war will exist in the target directory, and you can check by doing a listing of that target directory. So when this builds, we'll see what we expect to see, and then we can set up a new build with that custom builder image we just imported into OpenShift, to support this Java Open Liberty application. Lots and lots of text. Let me check the comments to see how we're doing. We're doing pretty good here. I think you've answered almost all of the questions people have asked. Nice. And we're all trying to read the fine print here, so this is great. Okay, so that was successful, so I'm going to just check that target folder. ls target. Nice. So we have that war file that we're expecting to see, and we can press on with deploying it using the custom builder image that we just brought into OpenShift. So let's set up our new app.
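The build-then-check step described above, sketched out. The exact layout the builder image expects (the server configuration's location and the war's name) is my assumption based on what's said here, so treat it as illustrative:

```
# build the war with Maven from the project directory
cd "$ROOT_FOLDER"/<deploying-to-openshift-directory>
mvn package

# the builder image expects roughly this layout:
#   server.xml          - Open Liberty server configuration
#   target/authors.war  - the application, built by Maven
ls target               # should list authors.war
```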
So, oc new-app: we're going to use s2i-open-liberty — I should have copied and pasted this — colon latest, tilde, dot, and then we're going to name it authors-s2i. And if I broke something, then we're going to copy and paste in this command. All right, great. So we've got our new build set up, and we're still going to have to... well, let's see how it's doing. Jump back over to our cluster, check our topology, let's go have a look. All right, this s2i one here is the one that we just did. Build 1 is running. So there are a few more details in the documentation here. We got the output that we expected: we'll see that the build failed, which is to be expected for the first build. Before the microservice can be deployed with the image builder, the code needs to be uploaded to OpenShift. This is done via oc start-build. So in the oc start-build command, we refer to the code of our Java microservice in the current directory. We'll execute that in the cloud shell, so let me jump back over. oc start-build, from the directory here, authors-s2i. All right, now our build has begun. And then... yeah, so build 1 did fail, which is fine; we needed to give it access to the war and the server files that we created when we ran the Maven build. So have a look here. But this one should work, no problem. So let me jump back to the topology view. Wrong one. First image. Okay, when build 2 is complete, a pod will be started and eventually be running. We'll need to expose the route again once it's complete. Oh, it is complete. So we need to expose the service first and then get the route. So let's expose that service: oc expose service authors-s2i. All right, that's exposed, and then we can get the route here. Or we can get the route from the topology view, our route here. And we see exactly what we expected to see before: the OpenAPI UI. So we have used four different ways of building and deploying our Java applications to OpenShift.
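End to end, the source-to-image flow narrated above comes out to roughly the following; as before, the exact names are my reconstruction of what's on screen, so double-check them against the workshop:

```
# create the app from the custom builder image;
# "builder~source" syntax, with "." meaning the current directory
oc new-app s2i-open-liberty:latest~. --name=authors-s2i

# the first build fails without the source, so push the
# local directory (server.xml plus target/authors.war) up:
oc start-build authors-s2i --from-dir=.

# once build 2 completes, expose the service and find the route
oc expose service/authors-s2i
oc get route authors-s2i
```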
We started by doing our binary build. Then we grabbed an existing Docker Hub image. Then we built from a Git repository that had a Dockerfile in it. And then we did our source-to-image build using our custom Java builder image. But if we were doing a different language, a source-to-image build with something that was in the catalog, for example, it would have been a lot more straightforward. Let's say we were doing the Node.js one and we created an application: we would just have to point at a Git repo there that had a Node.js app in it, and it would run through everything, and we wouldn't have to do all of the work of exposing the routes and everything else. But because we set up this custom builder image, we had to do a little bit more configuration than normal. But that's Java for you, you need extra configuration. I'm sure you Java developers out there are familiar with that. So if we go back to our topology, we can see the four different builds. Oops, did not mean to do that. So there you have it, folks: deployment options for Java with OpenShift. Woo-hoo! Amazing. Thank you so much for this. This is a tour de force, shall we say. I'm looking forward to walking through the workshop myself again at a slower speed, in a bigger font for my eyeballs. Oh, I'm sorry. It's perfectly fine, I just need to make a bigger screen, and I'm doing good here. So this has been really wonderful. And if people want to get hold of this, we will put all of it up on YouTube on the Red Hat OpenShift YouTube channel shortly. And we're really thrilled to have you here. We'll make you do this again in other places when new features and functions come out. And, you know, I'm very impressed. I do see, when your screen gets a little bit smaller, there's a little strange green bar on the side. But I think this actually streamed nicely on Twitch and BlueJeans simultaneously.
So next time we're going to do Twitch, BlueJeans, YouTube and Facebook. Oh, man. So thank you for being game and trying this all out with us, and we really look forward to having you back again soon. Yeah, thanks so much for having me. Thank you all. I hope you all are taking care of yourselves out there, being safe. You know, these are troubling times, and I'm glad that we're finding ways to stay connected, to still be able to deliver content, to help you deploy your applications, learn something new. So thanks so much. I'm glad you all spent your time with me, and hopefully we'll see you soon. All right. Stay home, stay safe, be well, and be kind to each other. So take care, everyone. Thank you.