Good morning, good afternoon, good evening, and welcome to another edition of the Developer Experience office hours here on OpenShift TV. I am Chris Short, executive producer of OpenShift TV. I am joined by the one and only Ryan Jarvinen. Ryan, how are you doing today, man? Hey, hanging in there. It's a chilly week for a lot of folks, but I brought some warm coffee to keep myself going. Yes. I have my Git Cheat Sheet coffee mug today, just in case. Yeah, this is fine, even better. Awesome. So welcome, everyone. We are doing fun things today with what? With odo. I've heard some folks pronounce it "oh-doh," some say "oh-dee-oh." I don't really care. You could literally just spell it out, o-d-o. Yeah, as long as it parses on the command line, that's the main thing. And so, yeah, we're going to talk about using the command line to set up quick, iterative, inner-loop development scenarios. We have a scenario on learn.openshift.com where you can follow along and try out all of these learning examples. I will post that into chat in a minute. And I also have a couple of other URLs for folks. Let me paste one. Let's see. Figure out where my mouse is. Where did the mouse go? Okay, here is Twitch. I don't see the Twitch chat. All right. You can just chat it to me and I can chat it to the audience. All right, I'll chat it to Chris and figure out how to do the — wait, this is a sidebar now. I didn't do it. Watch as Ryan learns how to use a computer. Yeah, exactly. I should turn on screen sharing just for the added entertainment. Just for the bonus. Yeah, exactly. Here's a chat to Chris. And it was so short, I should have just said it out loud: it's odo.dev. Oh, my gosh. Yeah, all that effort. There you go. I'm just going to hit the screen share and show you what this site looks like. And let's see this desktop. Okay. This is odo.dev.
So there's some notes on this site for installing odo. You can go to the GitHub repo and check out issues, file pull requests, and watch some of the activity there if you're interested. We've got a short video here that you can watch, but I'll be doing a demo on learn.openshift.com. And there's more documentation on a couple of these subtopics — how to install, what the basic concepts are — and we'll cover most of that today in this stream. There's also a section about deploying a devfile. If you're interested in the details of the Devfile 2.0 format, there's a whole index on that site of all the attributes you can set, what they're for, and how to configure them. I'm going to be going to this "Developing with odo" scenario on learn.openshift.com. You can find it by going to learn.openshift.com and clicking on "Developing on OpenShift." That'll take you into a subset of topics, and underneath there's one called "Developing with odo." That's the scenario we're going to run today. It should boot up a VM that gives you one hour of free access to OpenShift, so you can try it out and figure out how it works with your particular toolchain. I'm going to be using the embedded console that's provided there, but you could run the exact same commands from your local laptop if you had an IDE or some other set of tooling that you wanted to link into a remote Kubernetes cluster. The Kubernetes cluster for today's purposes is going to be an OpenShift cluster, but the goal with odo is to be compatible with any kind of upstream Kubernetes option. Yeah. So what we're going to set up today — I have my command prompt, and I had one extra, totally optional step.
Just in case, I was going to run one additional command and update my odo CLI. You can find this command on that odo.dev site under the installation step. This is just pulling a fresh odo binary off of the openshift.com mirror and saving it as /usr/bin/odo. Nice. Is there a way you can blow up that font? There you go. That's big. Okay. Is this still readable here? Yeah. Let me know if that's readable, audience. I can go one step bigger if need be. I mean, I can read it and I'm looking at the Zoom in a tiny window. So I'll go one bigger just in case, and then tell me if things aren't lining up and I can take it down a step. So the two pieces we're going to be looking at: we have a backend component. Part of what we're trying to model is not just an outer-loop workflow, which is still important to me as a developer — I want to make sure that once I've made all my commits and the code has passed its CI tests and been fully vetted for production, I can do a rollout to prod. Yeah, that's valuable. But to me, where I spend 90% of my time is usually on refining those changes before I make my commit and before I do my push to production, because I want to find all the edge cases, all the bugs, and work through all those conditions in pre-production environments as much as possible before I do my big rollout. That's usually the end of the line for me. So we're going to try to show you how to do that whole inner loop — how to make changes and test them quickly, but test them against a very production-like environment. We'll be using an OpenShift cluster to host our microservices, and we'll be dealing with multiple microservice components. It's easy to just do a single web service, but this is going to show one that's a front-end web service, and then we have a backend that's a Java Spring Boot application.
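The optional update step looks roughly like this — a sketch based on the install instructions on odo.dev. The exact mirror URL and install path are assumptions here and depend on your OS, architecture, and release:

```shell
# Hypothetical mirror URL and install path -- check odo.dev's install
# page for the current link for your platform.
curl -L "https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64" \
  -o /usr/bin/odo
chmod +x /usr/bin/odo

# Confirm the fresh binary is the one on your PATH
odo version
```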
So that's going to be our web services layer. We could have a DB behind that as well, but this is actually going to connect to the Kubernetes API and read events from the Kubernetes API as our database — our final data store behind the web service layer. Nice. Cool. And there's some nice separation. The front-end app doesn't have direct access to the API credentials — well, I guess it can access the same service account token, so it could potentially perform similar operations, but we have a bit of a separation of concerns between who accesses the Kubernetes API and who's connecting with clients. Cool. So we have our odo command ready. I'm going to run odo login with a -u username option and a -p flag for the password as well. Ideally you probably don't want to put your password on the command line unless you want it in your bash history, but this is just for demo purposes, to make it easy to log in. We're also going to create a default project called myproject. And now that we have our login credentials, I'll open up a new tab and log into the web console as well. Okay. The next step here is actually an RBAC type of use case. I said our backend was designed to connect to the Kubernetes API and read events. Every container in a Kubernetes system is usually given some kind of service account token that's mounted into the container — it's called the default service account. So pretty much all containers have this default service account. The default service account might not actually have any permissions granted to it, or it may have root permissions, cluster admin — I'm not sure how locked down your particular environment is, but OpenShift clusters are usually pretty restrictive on an individual container basis. Unless a container needs elevated privileges, it doesn't have them by default. In fact, you don't even have view access on the Kubernetes API.
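The login and project steps sketched as commands. The cluster URL and credentials here are placeholders; as noted, passing -p leaves the password in your shell history, so this is demo-only:

```shell
# Placeholder API endpoint and demo credentials
odo login https://api.example-cluster:6443 -u developer -p developer

# Create (and switch to) the project used in the demo
odo project create myproject
```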
So this first step is setting up view access. There's a graphic on how to do it via the web UI, and I also have an example command — you can run the same thing from the command line with oc adm policy add-role-to-user. The role we want to add is the view permission, and we could name the role binding something like default-view. Then -z says which service account we want to add it to, and we'll add it to the default service account. We could enter all of these details through a web-based form, but I'm going to skip over that part and just use the command line to get the same thing done. Awesome. Let's paste that in. Okay. That should give our backend access to view events from the API. Cool. Feel free to follow along if you're interested — you can join me at learn.openshift.com and go through these same steps on your own, using the command line like I'm doing. I'm planning on adding some notes to the content later today to make that command line option explicitly noted in the content here. Nice. So if you come through later, you'll see that alternative option. The next step is actually setting up our backend service. The assumption with this odo setup is that you possibly already have a toolchain that you've found a lot of value in — maybe you have an IDE with a particular set of plugins or other features that let you really quickly iterate on whatever source code you're working with. We'll take a look at the odo catalog, and this should pull information from OpenShift to see what's been added in the devfile registry there. So there's a bunch of devfile components — these are using odo and that Devfile 2.0 format. There's Java Maven, Open Liberty, a couple of different options. This WildFly bootable jar might be one that I swap to, or — let's see — there's a Spring Boot option in here. So I could try one of those for the backend component.
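The command-line version of that RBAC step looks like this (myproject is the demo project name; the --rolebinding-name flag is optional):

```shell
# Give the namespace's default service account read-only ("view")
# access to the Kubernetes API; -z targets a service account
# rather than a user.
oc adm policy add-role-to-user view -z default \
  --rolebinding-name=default-view -n myproject
```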
But today I'm going to use the supported Java S2I image, and I think I had selected OpenJDK 11 on UBI 8. We'll take a look and see what the text says here. There's also a Node.js source-to-image build that you can use. In today's demo, I'm going to attempt to use a Node.js devfile, so we'll see how that goes when I set up the front-end component. Notice you said "attempt." Yeah, attempt. I did a couple of dry runs last night. A couple of them broke, a couple of them succeeded — well, actually, it was me experimenting with different command line syntax and finding out what the edge cases are. So if it works flawlessly with no errors, this will be my first flawless pass through. Wish me luck. We appreciate failure on the channel, so don't forget that, right? Failure is embraced here, and we like to teach people how to troubleshoot things when they go bump in the night, too. So don't think you have to do this successfully. I mean, we all want you to succeed, but it's not required. All right. Cool. Pressure's off. So I stepped into our backend directory. I'll do another ls in here. It looks like there's a pom.xml available. And I ran the Maven package command to package up a local jar file — that'll be available in the target folder. Like I said, with this backend image we're going to use source-to-image to push this binary directly into a container. That'll allow me to use my local toolchain to produce jar files, and then use odo push to synchronize and promote that jar file up into my production-like environment. So let's run this odo create command using source-to-image and our Java binary. And then, to send those changes to the server, I can use odo push. I think we have one other step in here: we can take a look at the config. Running odo config view should print out the basics of what we've set up for this application so far.
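The backend steps as a sketch. The component type, name, and jar path are taken from the demo; the exact image tag in your catalog may differ:

```shell
# Build the jar locally with your own toolchain
mvn package

# Create a binary S2I component from the packaged jar
# (java:11 stands in for the OpenJDK 11 on UBI 8 builder image)
odo create java:11 backend --s2i --binary target/*.jar

# Inspect the local component config, then sync everything up
odo config view
odo push
```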
The source type is binary. There should also be a local file, .odo/config.yaml, so we can take a look at that. It has the same information in a YAML format. It looks like there have been some ports allocated here, so that should give us some assurance about what we're about to send to OpenShift. So we can run odo push to promote not just the jar file, but also to create all of these changes in the OpenShift console. We can log into the developer view and see what the result looks like. There is a dev perspective welcome tour — I'm going to skip the tour for today and jump right into my project. And here is the Java container that is starting to spin up. It's in light blue right now because it's still pending. If we wanted to tail the logs while this was being created — I guess there's some status coming back here, but you can also run odo log -f to follow the logs from your command line. So if you want access to your server output: you're not running it locally anymore, you're running it in a hosted environment, which should give you more production-like feedback, but then you need a way to access that feedback. So here you go — you get access to the remotely hosted log stream while still being able to do your development locally. Yeah, you know what? My kid has a harmonica in the front room. I can shut my office door here if he keeps going. No, I thought it was a UPS or something beeping in here for a second. Yeah, totally fine, kids playing music in the background. If he keeps jamming, I might shut my office door in a minute. He usually gets bored of it pretty quick. I'll be right back. Go ahead. I can entertain if you want. Okay. Right back. So folks, if you're looking to kick the tires and light the fires on your instance of OpenShift, yes, we have CRC, which you can run locally.
I dropped a link to it in chat, but we now have a real Developer Sandbox, which gives you an environment for two weeks, I believe is what the limit is now. I would highly encourage you to check that out so you can get a good feel for OpenShift and all the things that come with it, as well as the ability to add on with OperatorHub, test your deployments, and check out that developer view to see how your iterations of working on the platform would go. So please feel free to log in and give it a try — it's free. All you need to do is create a developer account on that site and you'll be off and running. Head over to red.ht/dev-sandbox and you'll be kicking the tires on OpenShift in no time. I think I found the chat, finally. Good for you. Yeah, I know. Thank you. Quite the accomplishment. Well, doing a demo and chatting at the same time is no simple task. So, all right. Thanks for stalling for me, and thank you, audience, for tolerating my kid playing the harmonica, doing his own Folsom Prison Blues out here in Folsom, California. We've got to get that harmonica jam. Okay, so here's the front-end instance that we want to set up to communicate with our backend. I've changed into a new directory. The config.yaml is still in our backend folder, so it's in a different directory. And whenever we change directories, odo is going to try to switch context for us and make sure that the next time I run odo push, I'm pushing my Node.js source code instead of my jar file. So just changing directories should be all I need to do for that step. I'll list the files here, and we can see we've got a package.json — so this is a JavaScript application. I could run odo create nodejs frontend with the --s2i flag; that would totally work using our source-to-image solution. But I have also recently added a devfile.yaml to the repo.
If you don't have a devfile.yaml in your repo currently, you can generate one by running odo create nodejs frontend without the --s2i flag, and that'll print a new devfile for you. I'm not going to run that here because it'll stomp on the existing devfile that I've added to the repo. I made one small modification — I changed the port number in the devfile to bind to a different port — but other than that it's basically stock. Here, let's take a look at that devfile, actually. So there's a bunch of commands that this devfile has staged up that you can run. This first one does the install and builds the base of the application. This next one boots the application and runs it. There are a couple of other options in here. And here are the components. This is the front-end component, and there's a single container in it. Sometimes there's port information listed in here, and I deleted about three lines of port information. So with those changes, I should be able to run — let's see, I know the command — odo create frontend, and I'm going to give it an app name as well, just setting the app name to app. I'm not saying which image to use, because the container image is already specified in the devfile to use Node.js 14 from OpenShift's catalog. So I don't need to specify the component type — it's going to read it from the devfile. frontend will be the name of the component, and I can specify an application grouping to go along with it. So let's try that out. That was a much shorter output — I think it expects there to be a little bit more. Oh, maybe that comes after the push. Let's try odo push. Okay. This should be a little bit different from the output we'd expect with the source-to-image style build.
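A minimal sketch of the shape of a Devfile 2.0 like the one described here — illustrative only, not the repo's actual file; the image, command lines, and the deleted endpoint stanza will differ:

```yaml
schemaVersion: 2.0.0
metadata:
  name: frontend
components:
  - name: runtime
    container:
      image: registry.access.redhat.com/ubi8/nodejs-14   # illustrative
      memoryLimit: 512Mi
      # an `endpoints:` block with port info would normally sit here;
      # the demo deleted those lines and bound the port elsewhere
commands:
  - id: install
    exec:
      component: runtime
      commandLine: npm install
      group:
        kind: build
        isDefault: true
  - id: run
    exec:
      component: runtime
      commandLine: npm start
      group:
        kind: run
        isDefault: true
```

With a file like this in the directory, odo create reads the component type and image from the devfile rather than from a command-line argument.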
Source-to-image does a slightly different line of work here. This is going to start with validating the devfile that we've added on our own, making sure it's compatible, that it's a version 2.0 devfile — or if it's a version 1.0 devfile, it can maybe make some accommodations for the older file format as well. That should start up our Node.js process. I'm going to hop over to the console view again. The screen sharing hover deal is right in my path. All right, so here we go: the Node.js container is already up and running. Let's jump in and take a look at the logs. We showed how to view logs from the command line before, but you can also access the logs via the web console. And you can see there's an error printed in here. This one is actually intended. This app expects some environment variables to let it know how to contact the backend service. We could potentially have it discover this information instead — it could ask the Kubernetes API what other services are available — but then you have an application that's more tightly bound to only running on Kubernetes clusters, which is probably a safe assumption, but why tie it to any platform if you can avoid it? Environment variables are a clean and easy way to configure how you're going to connect to another service. So we've got two variables that we need to set in the container, and then we'll be able to connect to that backend. While I'm on this — ah, man, pop over, okay — while I'm on this, you can also access the live terminal, which is kind of nice. I can run whoami: I've got a really high random user ID. And I can see process one is supervisord, our primary process, so I think that should automatically restart our app on change or if it crashes. Okay. Next step: we have our front-end app, and we found our config error in the logs.
So the next step is setting the config. We can do that via odo; we can also manipulate the devfile, or there's a .env file. So let me cat the devfile and see if the changes are in there yet. Yeah, it looks like we already have these environment keys added to the devfile. We can push those keys up by running odo push, and I'm going to add one extra flag here: odo push --config. This does a slightly faster push, so if you're really in a hurry to get your work done and you're only pushing configuration changes, odo push --config might work a little quicker for you. Still taking a little bit of time here, but anyway, that should fix our issue with the backend service. Since we're publishing new environment keys into the environment, the container will need to be restarted in order to read them — so that's probably what the additional pause is, waiting for that container to run its reboot cycle. Last step — let's go back to the topology view and see where we're at. So we have our two component services, shown grouped together in an application grouping to indicate they're a single app. There's a nice remove-from-application or add-to-application grouping control in the UI that mirrors a bit of what you can do with odo on the command line. And this container — I think we're ready to connect to it and run our tests. So let's go back to the command line. The command for this is going to be odo url create frontend, since we want to add the URL to our front-end component, and we're going to publish port 8080 externally, then run odo push to promote those changes. And that should add a little launch icon — here we go — that'll allow us to open up the URL in a new window. So here's our default application. Looks like it's up and running.
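Those config-and-URL steps sketched as commands (the URL name and port are the demo's; the env keys themselves live in devfile.yaml, as described above):

```shell
# Push only configuration changes -- faster than a full source push;
# the container restarts so it picks up the new environment keys
odo push --config

# Expose the front end externally ("frontend" here is the URL name)
odo url create frontend --port 8080
odo push
```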
And I think if we stay with this a while, it should pull objects from the Kubernetes API and show them on the screen, and then you can kind of blast them — delete certain resources. The service account we created earlier only had view access. If we gave it edit access instead, then when we click on these resources, it would actually do a delete. And since these resources are used to host the game, the game might break. So you typically don't want to give it edit access unless you had assets in the game that you were prepared to lose, or that were replicated, like a pod — you could set the replica count to five and then blast as many pods as you like, if you have the right kind of high-availability settings. So this allows you to test a little bit of that. We set our service account to just view, so it's a little bit more limited — sorry about that — there we would have taken out a very, very loud piece of feedback from the app. Nice. Yeah. Ooh, I need to change that demo. But anyway, that's our setup with a backend component that can read from the API, and potentially modify content on the API as well. Now that we have the whole front end, back end, and data store set up and implemented in our production context, I want to flip back to my development toolchain, right? I want to see what changes I can make without having to stop and make commits, or stop all my work and do a deployment, or wait for a CI build. If I want really quick iterative feedback on how changes to either the front end or back end are going to impact the system, this gives me a workflow where those commits are now decoupled. I'm not doing my build just off of a git push, which is what you'd expect from a traditional platform-as-a-service: you do a git push and then changes show up somewhere on the platform, right? This has the same type of workflow, but now decoupled.
So you don't have to make a commit in order to test your changes. We made sure we're in our front-end directory. Instead of odo push, I'm going to run odo watch, and I'm going to background the task with this ampersand here, so we get our command prompt back. That's going to wait for something to change in our front-end source. Since it's built from source, any change to the source should automatically get replicated into that remote container. This should give me a really fast feedback loop for changes. So, the editor — the high-powered IDE that I'm going to be using today — is sed. We'll run sed -i to do a quick search and replace of "Wild West." You can see it currently says Wild West up here, and let's change it to something else. Okay, so it automatically found the changes. Flip back over and see how quickly — refresh, and already the changes are reflected. You might have also noticed we were using supervisord to host this process, but if you were coding with React or some more responsive development kit, it might automatically promote the changes right to your browser using a WebSocket, so you wouldn't even need to hit the reload button, potentially, right? But that kind of depends on your development stack and how close to a production environment you're expecting in your hosted environment. Nice. So depending on how you have things set up, you could have even faster iteration. But that just illustrates a quick change on the command line that was immediately reflected in the browser. If I'm happy with the change, then I can go do my git commit and git push and move the changes to the next stage of the release pipeline. Awesome. Any feedback or questions from folks in the audience today? Not really — kind of quiet today in chat. Oh, it looks like someone asked in there. Yeah, Marcus asked if odo was OpenShift or Kubernetes, and technically it's both.
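The watch-plus-edit loop can be sketched like this. odo watch needs the running component on the demo cluster, so it's shown as a comment; the sed edit itself is runnable, with the filename and markup standing in for the demo app's source:

```shell
# In the component directory, watch for local changes and sync them
# to the remote container (requires the demo cluster, so commented):
#   odo watch &

# A stand-in source file, then the same in-place search-and-replace
# style used in the demo:
echo '<h1>Wild West</h1>' > index.html
sed -i 's/Wild West/Mild West/' index.html
cat index.html   # -> <h1>Mild West</h1>
```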
You should be able to use either one. I'm kind of interested in doing a comparison versus Skaffold from Google. Skaffold seems to try to do some similar things. I don't know if it has features that are tied into GCP specifically, or if it's more neutral — I'm assuming it's more neutral. I think it's more neutral, to be honest with you. Well, part of the positive spin I can put on odo: you should be able to use it in fully neutral environments, any Kubernetes or OpenShift. It should be fairly feature-complete in terms of what Skaffold offers — I need to do more testing there before I make statements like that, but I would assume odo covers a lot of the same use cases as far as doing that push and watch. odo does not provide things like a terminal or an IDE; a certain amount is out of scope. And being able to tail the logs kind of implies that you have a distributed logging setup running on your cluster. I don't know if Skaffold has the ability to tail from aggregated logs, and I would assume that type of thing would be prepackaged as part of your platform solution running on top of Kubernetes. With Google Cloud, usually you've got to sign up for a separate logging service, and whether it's integrated with all your applications and available to Skaffold is maybe a separate issue — I'll need to do a follow-up on that topic. So I think that's just about it for the demo portion. Hopefully that shows how you can make rapid changes. I could also potentially pop into a terminal and make changes directly, though that's not the best way to do it. Another kind of decorator you can have on here: we have this link to open the URL, and if I had CodeReady Workspaces installed, there'd be a link to open the source code in a hosted IDE.
So these devfiles, especially the 2.0 devfiles — if you're using Eclipse Che or CodeReady Workspaces, they use the devfile.yaml content as a way of setting up new workspaces in CodeReady. That's definitely something we're trying to have well aligned: the hosted coding experience and the bring-your-own experience — use the command line, use vi, bring your own editor, basically. We're trying to make sure that they're both equally available and valid ways to interact with your code. Amazing. That's great. Good stuff. Cool. Well, let's see, I think we've got another question: is it possible to do this with any language? So I can go back and we can take a look at what's available. Let me see what shows up in the catalog. This developer catalog is going to show — let's see, languages. Let me unselect operator-backed. So here's a list of language support. A portion of this is going to be using source-to-image. I showed you how the odo --s2i flag can set up source-to-image; you can also use CodeReady Workspaces with source-to-image builds. Most of these that are marked "template" are probably templates wrapping a source-to-image builder image. Yeah, this template is going to combine Node.js and Postgres together, and there are a couple of different configurations of that, with an ephemeral data store or an actual persistent disk, right? And these ones are not operator-backed, so these are our older Postgres installs, but it just shows how to quickly template things out. Are you sharing your screen? Oh, I am not. Okay, I need to jump back to your screen. I thought I was still on screen share. No, you're cool. All right, cool. There was also a question I saw in the chat: can we do ingress with odo? There's actually an example on setting up ingress. Let me hit share screen.
And we will get to that. Okay, so here is the list of languages you can find in the developer catalog. There's a wide variety of languages to choose from. Some of this is based on source-to-image. I'll jump back to the command line and run odo catalog list components. Nice. Wow. And this should show what odo has baked in there. Yeah. So this lists Ruby, Python, PHP, Perl, Node.js — you have the language and also the image tags. Usually we put the different releases or runtime versions for a language as tags on one image until things get too complicated, and then we switch to a different image name. But there are also these options up here — Python, Django, this Node.js option — these are using the new devfiles instead of the older source-to-image style. There are a lot of different options for Java in there. I need to take a look at some of these and try swapping out the backend — currently that backend demo was based on source-to-image. I need to try it using a devfile as well, and then see if I can pop open the backend in CodeReady Workspaces. Awesome. And that should be pretty easy to set up on the cluster. I don't know if I have time to do it, but that would basically just be switching to the administrator view — I'd have to log in as an administrator, and then it should be available in Operators. I could try that out if we've got extra time. But the question we had in chat was around ingress, and I'm pretty sure I saw a link in here specifically about ingress. Maybe it was here — production — I know it's somewhere. Here we go: "Deploying your first devfile," and ingress setup is part of the recipe. Like we said, this should support OpenShift and vanilla Kubernetes, so you can see the ingress examples are actually using minikube instead of an OpenShift environment. So you can try that out.
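The catalog commands used here, for reference — the output depends on which cluster and devfile registry you're pointed at:

```shell
# Devfile components plus the older S2I component types
odo catalog list components

# Available services (e.g. operator-backed options) are listed separately
odo catalog list services
```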
It works with even minikube. Yeah. Even minikube. I have not tested it with kind. You'll need to enable the minikube ingress add-on, and then it walks you through the rest of the steps there. I shared a link in chat for everybody so they can walk through it if they so desire. Excellent. Well, since we have a little extra time, I might try logging out and logging back in as an admin. Let's see if I got this spelled correctly. Oh, no. That's an interesting password feedback. Okay. All right. Oh, yes. Okay. Here we go. So I can jump into Operators, go to OperatorHub, and we'll see how quickly I can get CodeReady Workspaces. Here it is — I'll just click on install. It's going to handle all this by itself. Enable the operator-recommended cluster monitoring. Okay. Always, always, always do this checkbox. All right. And I'll leave it on automatic approval, automatic upgrades, and just trust that this will continue running appropriately for at least the rest of my demo. Question: is it necessary to configure something in SELinux to run all of this, or does it handle it smoothly? I have not seen any specific SELinux callouts anywhere in any documentation. I believe it's handled at a higher level than that. Yeah. Those types of concerns are usually dealt with by architect-level folks who maintain the source-to-image containers, or who set up a layered build strategy for the base images that the application code is then added on top of. So when we're running a build here, we're really basically just doing a docker copy — copying in a new layer on top of a preexisting Node.js container. That Node.js container that OpenShift provides has already been adapted to work really, really well with SELinux. We've taken additional precautions — like when I ran whoami and it returned a high number instead of a username.
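On minikube, the ingress prerequisite from that recipe looks like this. The add-on name is minikube's own; the URL flag usage is a sketch of odo's ingress support with a placeholder host, so check the odo.dev walkthrough for the exact invocation:

```shell
# Enable minikube's NGINX ingress controller add-on
minikube addons enable ingress

# With ingress available, odo can expose a devfile component through
# an ingress host instead of an OpenShift route (placeholder domain):
odo url create frontend --port 8080 --host example.local
odo push
```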
We try to make every application, or every project namespace, have a unique UID. And then we tie RBAC and multi-tenancy around that UID. So it's somewhat shared within the namespace context. But if your UID were to try to take over the system, it's just a standard user. It's not root. You don't get any elevated privileges. In fact, a lot of the privileges are really knocked down quite a few notches. So if you try to run a docker container that expects to run as root, you might run into a couple of problems. But if you take your source code and run it through source-to-image, you don't need to make any modifications at all. Hopefully that answers the question well enough. I believe so. We'll know in a second. All right. Let's see. It looks like there's a question about whether this is 4.7. This is an OpenShift 4.6 environment right here. 4.7? I think we've got release candidates cut for 4.7 that are available. I need to find out where I can get access to 4.7 CodeReady Containers. But yeah. Let's see what... Is it going? Is it working? I believe we have... Let's see. So that should give us the... I'm logged in as admin. Let me see what else I need to do to get CodeReady up. I don't know if I need to add a CR, a custom resource for the operator. No? No. I thought when you did CRW, it was CRW. Yeah. Right. You just install that and then go back to the... Let me log back in as developer. Yeah. Maybe deploy the custom resource from the... I'm still logged in as admin. We've got some credential caching going on. Aggressive coconut says it might take about five minutes. So I don't... Oh, okay. Makes sense. Thank you, Aggressive coconut, with your one-up mushroom. That's a very, very nice username and avatar. Okay. So we're back in. I don't see any extra decorator on here yet. If you go back to the administrator view where you logged in, check the installed operators. See if it's at least installed. Yeah.
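To make the arbitrary-UID point concrete, this is a sketch of the constraints OpenShift's restricted security context effectively applies to a plain pod; the pod name and image are hypothetical, and on OpenShift the actual UID is assigned from the namespace's range rather than set by you.

```yaml
# Illustrative pod showing the non-root posture described above.
# Running `id` inside the container prints a high, namespace-assigned
# UID rather than root.
apiVersion: v1
kind: Pod
metadata:
  name: uid-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/nodejs-14  # assumed image
      command: ["id"]
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
```

This is why images that hard-code root can break, while source-to-image builds on top of OpenShift's prepared base images work unmodified.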
Still hasn't shown up. Yeah. Hasn't appeared in installed operators yet for my developer context. That could be that the operator has more work to do. We installed it in a way that makes it available for all namespaces. So I was kind of expecting maybe I'd need to install a CR or something in the local namespace. The decorator is via an annotation, according to Natali in chat. Oh, okay. Interesting. Well, that ought to be pretty easy to do, theoretically, but I don't know exactly which annotation to add. So go to all projects in this view if you can. All projects. I don't know how to do all projects. Okay. That's weird. Oh, here's the annotation. I can drop it to you in chat. How about that? Okay. Let's see, annotate, deployments. I can try to run something like that. Yeah. Give it a whirl. All right. Let's see if this works. And let me find my command prompt. So this environment is about to self-destruct at the top of the hour, or a couple of minutes after it, but we've got a little bit more time to run. So we want to annotate a deployment. The example here is definition labs dashboard. Let me find the beginning of the command line and then I can modify some of this. I bet it's frontend instead of definition labs dashboard. Let's get deploy. Whoops. So we've got one deployment named frontend. And I'm assuming that's where the annotation needs to go. So let's do annotate deployments frontend. And then we'll add this annotation. I think I have this copied correctly. We'll find out pretty soon. Okay. The annotation has been added. Flip back to the developer topology view. Aha. I see an extra edit source code decorator. Nice. I hadn't even planned to get this far along in the demo. It looks like that's just a link to the repo. Refresh this view, or maybe it's that second annotation. Looks like there were two of them. Yeah. Let me add that second annotation as well. Carlos, we're trying to add the CRW link in the dev view. Yeah.
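The exact annotation pasted from chat wasn't legible on stream, so as an illustration only: the developer console's edit-source decorator is driven by annotations on the deployment, and `app.openshift.io/vcs-uri` (with `app.openshift.io/vcs-ref` for the branch) is the pair I'd reach for first; verify the keys against your console version. The repo URL here is hypothetical.

```yaml
# What the deployment metadata looks like after annotating, roughly
# equivalent to:
#   oc annotate deployment frontend \
#     app.openshift.io/vcs-uri='https://github.com/example/frontend'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  annotations:
    app.openshift.io/vcs-uri: "https://github.com/example/frontend"  # assumed repo
    app.openshift.io/vcs-ref: "main"
```

The topology view reads these annotations and renders the decorator, which is why no redeploy was needed for the link to appear.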
This was kind of out of scope for my intended demo. Let's see. OpenShift. This is interesting. It's a pretty short annotation here. Okay. Add that back on and reload, and see if we have a CodeReady link. Well, we've at least got the new edit source link from Git. But it looks like it's pretty easy to add annotations in order to mark up some of these features. I know we have past office hours sessions that focus a little bit more closely on CodeReady Workspaces. Those will tell you exactly how to set up the linkage. For our purposes today, we were expecting the backend to be producing jar files somehow, maybe via a thick IDE that I have running on my laptop. And the frontend was going to be a lightweight, command-line-edited Node.js application. I usually do those with vi. But theoretically, either one of these environments could also be accessed via a hosted IDE. And then I can have all of my toolchain available on demand, hosted in the cloud. So I don't have to worry about whether my laptop is having issues, or whether I've installed the right version of Node.js locally; any of those kinds of issues should already be handled if I have a cloud-hosted IDE, with less to fail on my laptop side of the equation. Yeah. Cool. So are you happy with things, or do you want to... I think this is farther than I was planning on going. If there's anyone in the chat that has things I should give a try while we're here, shout it out. Otherwise, if you have topics that you want to see next week or in subsequent weeks... I think for next week, Serena was going to be back with some news about 4.7, I believe. Nice. That'll be fun. Some 4.7 stuff. That'll be great. Yeah. 4.7 serverless. And Jai... I think Jai Kumar's planning on joining. Awesome. That'll be fun. Cool. Yeah. All right. So we got farther than we thought. All the way, as far as we could. That's fine. No big deal, Ryan. But thank you for the demo, as always.
What's up next on the channel? Is The Commons on today? No. I think Karina is on vacation, to be honest with you. So I will actually be working for the next few hours, like, real work. So that's kind of fun. But then tomorrow, bright and early, The Level Up Hour with, I believe, the one and only Dan Walsh. So definitely stick around for that. And if you want to subscribe to our calendar, there's the link to do so. And as always, The Level Up Hour is here for you to help level up your skills. And we'll be on at 9 a.m. Eastern, 1400 UTC, tomorrow. Boom, boom. And that's how it goes. So thank you, Ryan. Thank you, audience, for all your questions. Yeah. So Ryan, if there's nothing else you've got, sign off, man. Yeah, cool. Thanks, all. Thanks again, and chat... chat's picking up activity at the end here. Yeah. Thanks to everyone in chat for following along. Hopefully this was useful. And yeah, let me know if you have other topics you would like to see. I'm Ryan Jay on Twitter. Feel free to ping me there and let me know what you'd like to see on the Office Hours channel in the future. And feel free to ping me anytime at Chris Short on Twitter. Two S's. So thank you, everybody. Stay safe out there. Stay warm if you're in the middle part of the U.S., because Lord, is it cold. But yeah, stay safe, and we'll see you tomorrow, folks. Have a good one. See you next time.