Hello, my name is Daniel Helfand and I'm a developer advocate for Red Hat's OpenShift Container Platform. In this video, I'm going to show you how to use OpenShift Do, or odo. odo is a developer-focused CLI for OpenShift: with just a few commands, it lets you deploy the local source code you're working on to your OpenShift environment and have that code running in a container. To show what we'll actually be deploying with odo, let me start with two local directories. The first is called Wild West Backend, and it contains Java source code that serves as the API for a Wild West style game. The second is called Wild West Frontend, and it contains Node.js source code that serves as the frontend for the game. I'll start by using odo to deploy the backend portion of the application. Just to show you the basic structure here, it's a normal Java project, so I'll begin by compiling the code in this directory. Once that completes, I'll create a project using odo. With the odo project create command, I specify a name for the project I'd like to work with, and odo creates that project in our OpenShift environment. For the rest of this video, we'll be working with a project called odo-ga. The next thing I can do with odo is run odo catalog list components, which shows all of the language options we have for taking local source code and deploying it to OpenShift. You'll notice that the first option available is a Java option.
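The opening steps can be sketched as follows (the directory name and the assumption of a Maven build are mine, based on this walkthrough; adjust for your own project):

```shell
# Compile the Java backend (assuming a standard Maven project layout)
cd wildwest-backend
mvn package

# Create the OpenShift project used throughout this video
odo project create odo-ga

# List the component types (language/runtime images) available to odo
odo catalog list components
```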
All of these language options correspond to container images that support different languages and run your code in a container on OpenShift. To use the Java option for the code in this local directory, I run odo create, specify the java component type, and give the application a name. The name is optional; you don't have to specify one. When I run this, the output says that odo has validated the information I specified. At this point, all we've done is create a local configuration stating that the source code in this directory is Java source code. I can check on this by running odo config view, which shows the local configuration and some details about how this application will be hosted on OpenShift. The first parameter you'll see is the type, with the value java, specifying that we're working with Java source code. You'll also see the project we're working with, odo-ga, the port information associated with the application, and the name of the application we specified, backend. The next step is to actually deploy the source code to OpenShift, and to do that, all I need to run is odo push. odo push controls when you deploy your source code to OpenShift using the local configuration you've specified.
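The create-and-inspect steps just described look like this (component type and name are the ones used in this walkthrough):

```shell
# Mark the source in this directory as a Java component named "backend"
odo create java backend

# Inspect the resulting local configuration
# (expect fields such as the component type java, the project odo-ga,
#  the application name backend, and port information)
odo config view
```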
The first step of odo push is to validate the information in your local configuration and check that everything specified is correct. It then checks whether we already have a pod running in our OpenShift project corresponding to the source code in this directory. Since we don't have one yet, it sets up a pod for us. The last thing that happens is that it sets up our source code to run in a container on that pod. While this runs at the command line, we can visit our OpenShift project in the web console and select odo-ga. You'll see that we have a pod running in our project: the dark blue ring around the pod indicates that it's running, and the OpenJDK symbol in the center indicates that this is a Java application. You can also click the center of the circle to get information about the component you've just deployed. You'll notice there's a link that lets you view the logs of the application running on this pod, along with the port information that was specified in our local configuration and the name of the application itself. Heading back to the command line, we wait for the output to finish up. It's now saying that our container is running on the pod and that our application should be available. To verify this, we can run odo log, which shows the logs of our application running out in our OpenShift environment.
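The deploy-and-verify steps, as described above:

```shell
# Deploy the local source to OpenShift:
# validates the local config, creates the pod if one doesn't exist,
# then syncs the source and runs it in a container on that pod
odo push

# Stream the logs of the component running in the cluster
odo log
```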
You're seeing output here that says "Started Wild West application," so everything should be running in our OpenShift environment. With just those two commands, we deployed our local application to OpenShift. But it would be nice to have the frontend piece of the application so we could see the game itself. To do that, I head over to the Wild West Frontend directory. If you remember from the odo catalog list components output, there's a nodejs option available under the Java option. To use it, we run the same odo create command we ran for the Java component, but this time specify nodejs and give the component a name: odo create nodejs frontend. At this point, all we've done is specify that we have Node.js source code in this directory. But we'd also like a URL to be associated with this application. To do that, we run odo url create and specify that the URL should be associated with the frontend component. With these two commands, we've said that we're working with Node.js source code in this directory and that we'd like this component to have a URL associated with it. Once again, to deploy this to our OpenShift environment, we run odo push. It gives us basically the same information as we saw with the Java component: it validates what's in our local configuration and sets up the pod in our OpenShift environment, because we've never deployed this source code before. One major difference, though, is that it prints the URL information directly at the command line.
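The frontend steps above can be sketched like this (the frontend directory name is my assumption; the component and URL names come from the walkthrough):

```shell
cd ../wildwest-frontend

# Mark this directory's source as a Node.js component named "frontend"
odo create nodejs frontend

# Ask odo to expose the component via a URL
odo url create frontend

# Deploy; the generated URL is printed in the push output
odo push
```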
So you don't have to go to the web console to find the URL; it's right there in the output. Once again, odo is going to set up a container with our source code to run on the pod being created in our OpenShift environment. Heading back to the web console, you'll now see the Node.js component. The light blue ring that was originally around the pod indicated that the pod was starting up; now that it has a dark blue ring, the pod is available and running. All that needs to happen now is for the container with the source code from our local directory to start running on that pod. You'll also notice an "Open URL" icon here, which lets us access the URL we created via odo. If we click the center of the ring, we can view the logs associated with this running component, and based on what we're seeing, the application is available. One thing you'll notice in the logs is a message that literally says it's listening on "ip" and "port," with no actual values filled in. It would be nice to change that so we could see the address and port the application is actually running on; that's something we'll use odo to correct later in this video. For now, everything should be available, so let's click the URL to see what we've deployed to our OpenShift environment. The frontend is available, but it's missing something: the graphics that should display in the center of the screen.
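If you miss the URL in the push output, odo can also list it on demand from the component's directory (a small aside not shown in the video):

```shell
# Show the URLs associated with the component in this directory
odo url list
```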
What we can do to correct that is use odo to link the two components in our project, the frontend and the backend, so the frontend can pull the graphics from the backend and display them to the end user. All we need to do is head back to the terminal, clear the output, and run odo link backend: we specify that we want information from the backend to be linked to our frontend component, along with a port. I could type this out in long form, and what it does is take information from the backend and share it with the frontend component about where the backend is actually running, so the frontend knows how to communicate with the backend. But I don't have to type it out that way, because I'm working in the frontend's directory, so odo already knows I want this linking to apply to the frontend component. You can use the component flag to specify which component you'd like to link with. Running this command, you can see from the output that our backend has been successfully linked to the frontend, which means odo is sharing information about the backend's host and port. Our Wild West frontend will now be able to communicate with the backend piece deployed in our OpenShift environment. To see what that looks like, we can head back to the console and see that our Node.js application is restarting: the pod has a light blue ring around it and is in a pending state, meaning the pod is being restarted.
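The long form of the link command might look like this (the port value 8080 is an assumption for illustration; use the backend port shown in your own configuration):

```shell
# Share the backend's connection info (host/port, injected as
# environment variables) with the frontend component.
# --component can be omitted when run from the frontend's directory.
odo link backend --component frontend --port 8080
```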
When the pod restarts, it will have the environment variables that odo shared with the frontend component, so it will know where to access the backend application. If we go to the logs, we can see that it's available. And if we come back to our application now and refresh the page, we see the graphics in the center of the screen, since the frontend is now communicating with the backend. It's really cool to see that with just a few commands, we can deploy a whole application and have the components communicate with one another. But remember that logging message associated with the frontend piece: it would be nice if the logs showed better information about the application starting up. To do that, all we need to do is make local changes to our source code and redeploy the application using odo. Back at the terminal, we could run odo push again each time we make a local change, and that would redeploy our source code. Instead, I'm going to run odo watch. odo watch waits for local changes, and each time you save one, it automatically runs odo push for you, so you don't have to run odo push every time; you just save your local changes and a redeployment happens. To change the logging output we saw, I head to my IDE, find where the log message is produced, and change it to use the actual IP and port information.
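The watch loop described above is simply:

```shell
# Watch the current directory; on every saved change,
# odo automatically re-runs a push to redeploy the component
odo watch
```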
Now if I save this, odo watch will pick up on it. But I also want to make one other change to my application: the header that currently says "OpenShift." odo will pick this up too, because I'm saving it along with the change to our server.js file. I'll change the header so that instead of "OpenShift" it says "OpenShift Change," and save. Back at the terminal, you can see that odo automatically detects that I've made changes and runs an odo push that takes my local changes and pushes them up to our OpenShift environment. Heading back to our project, if I click on the logs now, instead of just "listening on ip and port," they show the actual address and port where our application is running. The other thing to check is whether the header I changed is different, and if I refresh the page, the header now says "OpenShift Change" instead of "OpenShift." So with odo, we can not only deploy our source code to OpenShift initially and check from the beginning that everything is running correctly in our project, we can also quickly make local changes and keep working on feature requests or bug fixes while making sure everything runs in our environment as expected. I hope this video was helpful in explaining some of the concepts behind OpenShift Do.
If you'd like to get started, head to the OpenShift Do GitHub repository. There's information on installing odo, a short demonstration video, and a really simple scenario walking through how to deploy an application using odo, so it's easy to get started. What I'm really excited about with odo is that I think it's a better workflow for how we expect developers to use OpenShift, and I hope everyone has that same experience with it. Of course, if there's anything you'd like to report about OpenShift Do, come to the GitHub repository and open feature requests or bug reports for anything you'd like to see in odo. I'm really excited to see what people are able to do with this tool when working with OpenShift. Thank you so much for your time, and I hope this video was helpful in explaining why we're excited about where OpenShift Do is headed.