Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another edition of the Developer Experience Office Hours here on OpenShift TV. I am Chris Short, executive producer of OpenShift TV, and I am joined by several friends from Red Hat here. We're going to be talking about CodeReady Workspaces kind of in depth, and I would like to hand it off to David first. David, please introduce yourself to the audience. Hi folks, I'm David Harris. I'm the product manager for CodeReady Workspaces. I also look after a bunch of other tooling that we have for the inner loop experience. So it's great to be here. Have any questions? Stick them in the chat, and I'd be more than happy to answer them as we go. Yeah, awesome. Thank you. Angel, you want to go? Yeah, my name is Angel Misevski. I'm a senior software engineer working on CodeReady Workspaces, on Eclipse Che, and stuff like that. So I work on the stuff that we're going to be talking about today. Based up here in Toronto. Oh, cool. Not much more to add there. I have a very small window of focus, but it is on what we are doing, unlike David, who touches many things. Awesome. It's good to hear people from Toronto on the stream, because being here in Detroit makes me feel a little closer to the folks that I'm talking to. Awesome. So, CodeReady Workspaces. Or Ryan, did you want to introduce yourself, since you keep popping in and out? I mean, I'm going to be around in the chat, helping with questions, and in case I pop up later I just want to say hi at the top of the show here. Yeah, say hello in chat, folks, and I'll do my best to keep track of the activity there. Sounds good. So, CRW. It's awesome. Tell me more. I think you've said all there is that needs to be said. All right, we're done here, out we go. Let me play the outro.
Yeah, I mean, I can talk about it a bit. So CodeReady Workspaces, CRW, is basically a cloud IDE, where the idea is you're writing programs that you're running in a Kubernetes cluster, in an OpenShift cluster. It's a different modality than running something locally on your computer, and so it makes some sense to have a way of developing things in that cluster, reusing the concepts you're using to deploy, you know, multiple services, multiple back ends, and putting your environment in the cluster to match what you're dealing with in reality, rather than having your local setup running a bunch of Docker containers and trying to network them together. So it's basically a way to simplify that setup, put everything together in a really easy way, and give you a familiar sort of environment, but on a different underlying system. Can you add anything, David? That's good. I mean, yeah, there are two key things to touch on. One of the principles is that anyone with a browser can easily contribute code. So instead of needing a high-spec machine, you can run everything in the cloud. It's really good for open source projects in particular, where we see people dipping in and out of a lot of different runtime stacks. Instead of having to have everything on your machine to deal with all of those different environments, you can just quickly click a link provided to you by that open source project, and you've got your workspace and everything you need in the browser. With some of Red Hat's more enterprise, security-conscious customers, this is also a way where you can, not enforce control exactly, but get a sense of what your development teams have access to, provide them everything they need in a very auditable way, and keep everything off very-hard-to-secure laptops. Yeah, definitely.
And, you know, what you're getting is the immutability of a Docker container for your environment. So your dependencies are all set for you. You define what you need to build your application, and everyone can immediately just grab it and start an IDE. So that's the pitch overall, I would say. Just to let the audience know, I'm dealing with a slightly sick dog today, so if you see me doing this, it's me trying to calm her down. But yeah. So it brings a unified experience to a group of developers, CodeReady Workspaces does, and it allows you to not have to have these various bespoke environments in which you can code. It basically enables you to get people started and onboarded more quickly, and also to run into different situations and just dive right into writing the code. Don't worry about setup, don't worry about configuration, don't worry about "here are the tools you need to install to build this project, here's the build process." We encapsulate everything. You can have multiple containers running, so that when your workspace starts up, it also starts up your database container, starts up a front-end container if you're working on the back end, or a back-end container if you're working on the front end. Basically it's a one-click way of defining what your deployment looks like and starting that up, so that you can focus in on the part that you need, while keeping in mind that there are related components you have to worry about. So, you know, my application interacts with a database; your onboarding doc is going to say, here's how you deploy this database, here's the configuration for the database, then you start up your program locally when you're debugging it, here's how you connect to the database. There are a lot of hoops to potentially jump through to get that working right.
And there's some maintenance to keeping it running right if you're updating your system, stuff like that. I pretty consistently run into "why is my build broken now that I've updated to Fedora 34" or something like that. So things like that are what we want to avoid. So we're encapsulating the components that run your workspace, the components that run your application, the networking that connects everything together, and also the build logic of, you know, what's the sequence you have to do things in? Do you have to start this and wait for it to be ready before starting another component, that sort of thing. And yeah, we'll talk about it more in a bit, but it's all encapsulated in a single file: you can just keep this YAML file that defines your entire development environment, pass it into the program, and get an identical environment. Practically, for me, I work on the Eclipse Che project, which is the upstream to CodeReady Workspaces, and we've got a large number of repos that all have different setups. Some of them I don't contribute to often, but when I do contribute, you need to install a bunch of linters, a bunch of tooling that is specific to that project. With CodeReady Workspaces, what I can do is just feed the devfile that defines the workspace you need for that repo, for the docs repo for example, into CodeReady Workspaces and instantly have everything set up that I need, click a few buttons to do the validations that are required, and just more easily write the code. Awesome. So, if I'm a new developer getting started in an organization that was using CRW, or I'm an admin that's installing CRW, where do I kind of begin? So, I mean, for CRW there are a number of ways to install it. I think the mainline way you'd want to do it is, it's just an operator.
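As a sketch of what that single YAML file looks like, here's a minimal devfile in the v1 format CRW uses; the image tag, plugin id, and project URL below are illustrative placeholders, not taken from the show:

```yaml
apiVersion: 1.0.0
metadata:
  name: my-go-workspace
projects:
  - name: my-project
    source:
      type: git
      location: 'https://github.com/example/my-project'
components:
  # IDE plugin providing Go language support
  - type: chePlugin
    id: ms-vscode/go/latest
  # "Dev container" with the build toolchain; it must not exit,
  # so it keeps a long-running entrypoint
  - type: dockerimage
    alias: go-dev
    image: 'quay.io/eclipse/che-golang-1.14:latest'
    memoryLimit: 512Mi
    mountSources: true
    command: ['tail']
    args: ['-f', '/dev/null']
```

Anyone who is handed this file (or a link to it) gets the same editor, toolchain, and checked-out project.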
You can go to OperatorHub and just grab it and deploy it, and it should be pretty straightforward. Once it's up and running, then on OpenShift at least, with CRW you log in with your OpenShift user; we're going through OpenShift OAuth, so there's nothing to set up there. And that's about it. It comes with some samples that you can play around with, and then you can start diving into documentation and reading about how to set things up for yourself. We try to keep it pretty simple. But yeah, the operator is the mainline way to install. There's other tooling: we have crwctl, which is a command-line utility that you can use to deploy automatically on your cluster. I think there are probably links somewhere; I don't have them on hand right now. I can find them. Don't worry, you keep talking, I'll go find links. Yeah. So those are the two main ways to install it yourself, if you have, you know, CRC running locally or if you have access to a cluster. If you just want to try the application out, you just want to try CodeReady Workspaces and not deal with any setup yourself, we also have a free service hosted at workspaces.openshift.com. You just sign in with, I think, your Red Hat developer ID, and you get a pretty decently provisioned environment where you can try out the samples, write your own thing, start up a workspace, and see how it works. If you're interested in the architecture of the thing, you can also go back to the OpenShift console and look at what's happening under the hood. And yeah, the only downside of workspaces.openshift.com is that it is time-limited, so after a couple of weeks we wipe the account. You can create a new one freely, but it's not intended to be your main workhorse. It's not your permanent development environment. It's not your permanent Bitcoin mining environment. That too. Which brings up the point.
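For the operator route, installing through OperatorHub ultimately boils down to an OLM Subscription resource; a hedged sketch might look like the following, where the channel and operator name are my assumptions, so check the current CRW install docs for the exact values:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: codeready-workspaces
  namespace: openshift-operators
spec:
  # Subscribe to the Red Hat catalog entry for the CRW operator
  channel: latest
  name: codeready-workspaces
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Applying this (or clicking Install in the OperatorHub UI, which does the same thing) gets the operator running; you then create a CheCluster custom resource to stand up the actual CRW instance.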
So, just to say, the advantage that we have there is, because we have a way of codifying the developer environment within the devfile, which we'll look at later, it's very easy to recreate it and get started again once you've been re-provisioned on the Dev Sandbox. That's a good thing to mention that I'm kind of glossing over: you define your workspace, and that's a portable YAML file. Whether you deploy it locally or you recreate your account, you can just take that same YAML file and get the identical thing. You don't actually lose data in that sense, so as long as you're pushing to, you know, GitHub, and saving the devfile, you have no worries. Yep. So, the first question: does CRW support plugins? "I tried CRW with OpenShift Sandbox but can't find from where to install the plugins." Okay. Is this something that I should maybe share my screen for? Yeah, I mean, let's dive in and go through the UI and everything. We don't have to answer that question right this second, but we need to touch it, right? Yeah, of course, of course. So let me see if I can get this running the way I expect it to. Switch over here. So this is, actually, oh no, the UI is the same, it's hard to distinguish them. This is the CodeReady Workspaces at workspaces.openshift.com that I was talking about, so it's the thing you would see. I have a few workspaces here. So, some background before we get into plugins: the editor we support is the Theia editor, which is based on Monaco, which is the same basis as VS Code. At the end of the day, what we're trying to do is support the VS Code extension API, so any extensions you might have from your local VS Code environment, you can pretty easily import into a workspace here. So if we go through the Get Started page. Actually, let's do it this way, this is a little easier to see. So we go from a template. I'm using Go a lot, so I'll go with that.
So this is the devfile that I was talking about. This one is a little complicated because it includes some, you know, here's how you build, here's how you run. Let me know if this text is too small; I can also zoom in a little bit more. Yeah, let me know, audience, if you can't read it, and I'll happily ask them to zoom. And, you know, here's your debug configuration; you can see it's a VS Code launch configuration. A lot of the time, since we're using Monaco, you'll see that it looks a lot like what you're used to if you're running VS Code, and a lot of the API is supported so that you can more easily port stuff over. The way plugins are added here is we have components. Kind of hard to see. Yeah, actually, the audience requested you zoom a little more. All right, there we go. The wrapping on this is making it a little bit hard to read, because the URLs can get a little long, but we have plugin elements in the devfile structure. So here's my Go plugin, and you can see it's the Go plugin at latest. It used to be clearer that this was exactly the VS Code Go plugin; exactly what you're getting on VS Code is what we're going to start up in your workspace. The way we do this is, you know, we're running in Docker containers, so you can't just grab the dependencies you need, but what we can do is start up a sidecar container that runs the Go stuff, put the extension in there, and remotely access it, which is like VS Code split into multiple containers, basically. The other thing we have here is the Docker image component, so that's basically defining here's my build environment, here's my running environment. This is a pretty basic example. Let me see if, well, okay, let's just go to one I already have defined. Here's a good one. This is one of our dogfooding devfiles. So you can see we have the Go plugin.
We're defining, you know, the dev container that has all my dependencies already there; I don't have to worry about getting the right Go version or anything like that. We have some endpoints, which are kind of useful there, and it also starts up another container which isn't used for development but is used as a back end by the actual application we're developing. To add plugins, to finally get to the question that was asked: we come with a built-in list of plugins that we maintain ourselves. So if you go into your actual workspace details, you can hit these toggles. I can, you know, do this, and next time I start it, it's going to have AsciiDoc support. If for whatever reason you want to have both Go and Java and Node, you can do that. You know, we can enable the OpenShift Connector, and it'll start up with the same OpenShift Connector that you would use on VS Code. If the list that we have here isn't sufficient, if there's something you're missing, the thing to keep in mind is that we do support, I think, most if not all of the VS Code extension API. There are a few details where you'll hit some roughness, but it's easy to define your own plugin, especially if it's something simple. I think down here is the, well, the zoom has broken the scroll to the bottom, so I'm zooming a little bit past the limits of what's expected in the UI design, maybe. There's the YAML plugin, for example: it doesn't have any dependencies, it just runs, you don't have to install anything to run it. So you can just take that VSIX extension file, put it into the plugin format, and put it into your workspace, and then you have that running. You can basically install VS Code extensions, is what I'm trying to say, in a very long-winded way. Nice.
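To sketch what that looks like in the devfile itself: a plugin can come from the built-in registry by id, or, if you've packaged your own VSIX, from a meta.yaml definition you host yourself. The ids and URL below are illustrative, not from the demo:

```yaml
components:
  # Plugin resolved from the built-in plugin registry
  - type: chePlugin
    id: redhat/vscode-yaml/latest
  # Custom plugin defined outside the registry, pointing at a
  # hosted meta.yaml that describes the extension and its sidecar
  - type: chePlugin
    reference: 'https://example.com/plugins/my-extension/meta.yaml'
```

Plugins that need a runtime (like Go tooling) get a sidecar container; dependency-free ones (like YAML support) just run in the editor.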
And there's another little detail I was going to mention. Oh yeah, the language support, like in VS Code, is provided through the Language Server Protocol, so anything that you've got a language server for, that's where we're getting it from. For Go, I can't remember if it's Microsoft or Google that builds the LSP for Go, but, you know, Java is the JDT Java language server, stuff like that. So basically, when you're thinking of what we support, it's VS Code: stuff that you find there is generally going to be compatible. Two questions have popped up as a result of going through this demo here. One is the "every PM wants to know" question: can we do it in dark mode? Oh, I believe we can. I prefer light mode personally, so, I mean, everybody has their preference, right? A lot of people are used to the traditional look. It should be able to go dark mode, but the thing I can offer as a consolation is, if I actually go ahead and start up one of these workspaces, we have dark mode in the actual editor. Yeah, and that leads to the next question: what if I hate the font I'm looking at in the UI by default? Can I change that default font somehow? Yes, you should be able to. Actually, so, I work on the back-end stuff. I could be wrong on some of these details; I'm going to be open about that. I do not work on that component of it; I work on the bits that start up pods in the background. Once this gets started, and it's taking some time, a bit of slowness on the cluster today, but once it gets started, you'll see that it's going to be very similar to VS Code, so you should be able to hit Settings in the same way. The issue I feel like you might run into is, you know, do you have these fonts installed? It's a browser, so how browsers handle fonts may be different, but I'm pretty sure you should be able to just set that. We can maybe give it a shot.
See what happens. I haven't actually done that, because I am personally fine with the font; I totally understand disagreement there. So you can see a running log of what's happening to actually get this running. Here it's loading, and in a moment. I know it probably doesn't like streaming and also running CRC. No, I will say that, yeah, if you have somewhat limited upload, you're going to feel it. Yeah, it's the joys of Canadian Internet; I have great download and then I think like 10 megabits up. Yeah, perfect, five of that's being used for this right now. So, as advertised, it's going to look like VS Code. You know, we have our usual UI. I don't know why that's happening. Yeah, it's going to be like VS Code, so let's see what happens if we go into the preferences. Look up the old font editor. I wonder. So there's font family. What's the preferred font, I suppose? Fira Code is what was asked for. I've used Hack, I've used JuliaMono, there are all kinds of them. I'm an Iosevka kind of guy. That's the really narrow one, but still a lot of space. See, the problem is that I don't remember the fonts I have installed on this machine. So don't worry, no worries. But yes, you can do whatever you want. And this does very much look exactly like VS Code. Yeah, the underlying stuff is the same. It's the same Monaco repo that's used at some point in VS Code; the build process is complicated, so which parts they replace I'm not exactly sure, but the soul of the thing is the same. And the contributions go both ways, right? We collaborate with Microsoft on their API. We contributed some of the first language servers for VS Code; they come back into, you know, CodeReady Workspaces, stuff like that. So, the audience is very curious: you're using the i3 window manager, on what distro? This is Fedora 33 right now. Nice. Awesome. Thank you. I considered cropping the shared screen window to just cut that off. That's fine, man.
Come on. It's good stuff. It's just more effective. I'm very happy that this pandemic has spurred all the video meeting platforms to let me actually select a single monitor at a time; it used to just show the whole ultra-wide screen. Yeah. So I appreciate it. One good thing came out of all this: the video conferencing tooling has advanced. Yes. Yeah, awesome. I don't know, is there anywhere we want to go from here? We have a thing up and running. I mean, there is one question that we could probably address now. I was saving it for a little bit later, but Rapscallion Reeves, a regular viewer, would love to know if it's possible to have like a test instance, right? Some kind of automation. Yeah, I mean, it's not a use case I've experimented with. What we try to go for in general is enough flexibility to do what you need. If I just stop this one and go back over here, let's see if this is configured in a way that makes sense for talking about this. Yeah, so this is maybe an example of what you're talking about. Not sure if I'm getting the exact concept, but this devfile for the plugin broker, which is one of the things that helps start up workspaces, defines this plugin registry. This is the registry that the plugins are getting installed from; this is what defines what container you get when you're installing, you know, the Go plugin, for example.
So this is just an image. When, fingers crossed that I didn't break this at some point in recent times, when we start up this workspace, what it's going to do is start up that image, and it's just going to run. It's not idling; it's basically serving plugins from within my workspace. In a similar way you could have a container that runs tests. We don't have the logic for easily defining what running a test looks like, but if you have a container, you can set the, you know, command for it to `make test`, however you define your testing platform, and it will just run. One of the minor details we have to worry about here is that any container you add is expected to just run. It's not going to stop. It's not going to exit, even successfully. If it does, what OpenShift will do is say, oh hey, this deployment's not healthy, let's restart the whole thing. Exactly. So in general, what we try to do for all of our samples is, you know, you're going to start up your editor, and it's going to come with commands that will run your tests, commands that will handle all of those details, so you can run tests. The automation part, I'm not sure if I'm catching what the question is asking, but it's not designed for you to use it like Jenkins, necessarily. Yeah, and my window just went wonky on me. Hang on, let me find the question again. Of course, for whatever reason my window just went to the other monitor and came back, because that makes sense. Zoom does that on i3 for sure. Yeah, this is actually my browser window, which makes it weirder. So if you can start CRW with a specified bash script, then I can at least make my own tests, right? Like, start the environment, plug the devfile in, see that it's running, shut it down, kind of thing. Right. That's kind of the interesting use case, I think, from Rapscallion.
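Those workspace commands live in the devfile too. A hedged sketch of a v1 exec command that runs tests inside the dev container; the component alias and workdir are placeholders that would have to match a `dockerimage` component defined elsewhere in the same devfile:

```yaml
commands:
  - name: run tests
    actions:
      - type: exec
        # Must match the 'alias' of a dockerimage component in this devfile
        component: go-dev
        command: make test
        workdir: /projects/my-project
```

From the editor, commands like this show up in the workspace UI so you can run tests, builds, and so on with a click, without the container itself ever exiting.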
Yeah, that would potentially work. Yeah. You know, ping me. There's a public channel on the Eclipse Mattermost; you can ask questions there, and, you know, we have quite a large team working on this, and I'm sure someone would be happy to help, depending on the details of what exactly you want to do. We try to cater towards a development environment. I'm trying to imagine what you're gaining by doing it through something like this, as opposed to just running a pod or a job on the cluster directly, unless our way of configuring these things matches up perfectly with what you need. You know, we have a simpler way of deploying something complex, which is the devfile, and what actually gets put on the cluster is pretty complex in that situation. But the trade-off that comes with something simple representing something complex is that it only covers a small slice of the complexity, and if you need something outside of what we represent, then it might not work for you. So that's kind of why I'm hesitating on answering this one. No, no worries. I mean, well, I got this running, I can just show. This should just open up. Yeah, so this is what the registry looks like. It's basically just showing me a readme file right now. And so if I run this code here and say, I think `start` will work. Okay, so I didn't actually build it first. I haven't used this one in a while, and I wrote it for myself; it's not necessarily the best-documented thing. But basically the idea is, if I run my application locally, it will use the other container in the workspace. This is a dev container and this is the plugin registry container, and they're both running, both started up at the same time and configured exactly how I need them to be. I'll hit that compile and see how long it takes. So, you were saying something before I kind of interrupted there? I'm trying to remember.
Chat is very active, which is good. I'm happy to hear it. Yeah, I don't see chat at all, so for me it's all empty desert. Yeah. Ryan, did you want to chime in here? Yeah, there are a ton of people pitching in in chat. One question I had, and there are a couple more here we can get to as well: I wondered if you wanted to talk, at a high level, about the best way for teams to collaborate around devfiles. Is this just another YAML that I put in with each repo? Or, I know you also mentioned a registry, a devfile registry. Should we have an internal registry that we share? Or do I just check a devfile into each repo and then have like a launch button on GitHub, right? That sounds like an easy way to go. But, you know, there are probably plenty of ways to do it. What's the value in the registry, and is that a good way for a team to collaborate? Yeah, so, I mean, again, flexibility is kind of the goal; we want something that'll work. The registry is an option, for sure. We have this built in, you know, when you go here and look at, this is a little broken by the zoom, and there we go. If we look at this list here, this is the built-in devfile registry. Inside this are just YAML files that define the, you know, Python stack, Go stack, Java Spring Boot, etc. You could build your own registry; you could say, here are the devfiles we use. You could configure CRW, if you're deploying it yourself, to use your registry, and then when you navigate to this page you'll get, here are my team's projects that we work on. You just click on one and instantly get a workspace. The other way, which is maybe easier or more straightforward, is to store the devfile in the repo that you want to use it for. So if I go here, let's say the same repo that we were just looking at, the plugin broker. It's a Go repo. And down here we have the devfile YAML. Let me zoom back in, because I zoomed out.
If you look here, it's the same thing that we had in the UI, back in the dashboard. So this defines the CodeReady Workspaces workspace for this repo. And we actually have to go to my fork, because I just fixed a little issue here that I caught. And if we go here and copy this, so you can just have a badge. I don't think I put any on this repo; this is one of the repos I worked on in the past, and it's kind of complete, more or less. I haven't had to make many updates, but you can put a badge. Oh, if you have to move that over there, that would be good, to make the screen work for you. Yeah, that was me moving the little stop-share bar, which was covering up my URL bar. Love that. It's always a fun feature. Yeah, it's perfect. Nobody uses that part of the screen ever, right? Yeah. So this is a little ugly, because the provisioned environment for the sandbox workspaces has a long URL, but basically, this is the URL to the CodeReady Workspaces instance. So what I can do is just pass in what we call a factory URL: pass in a link to a GitHub repo, a link to a Bitbucket repo. You can configure authorization and such so that you can pull from a private repo automatically, you know, use some credentials to do that. And what it'll do is tell me that I can't start more workspaces, because I didn't shut down the previous one. But let me just quickly, yes, this is the problem I ran into; I forgot to shut that one down. So if we do that again, and pass that URL in, it will go to GitHub, grab that devfile, and apply it. And what we'll get is the exact same thing we were just looking at. So short of building your own devfile registry, you can just put these files in your Git repos. You can also just post the files on a server. You can, you know, paste a link that, when loaded, will load a devfile, and CodeReady Workspaces will create a workspace from it.
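Roughly, a factory URL is just the CRW instance host with the repo link appended, and a README badge points at it. A sketch, with the host and repo as placeholders; the exact URL shape can differ between versions, so check your instance's docs:

```markdown
[![Developer Workspace](https://www.eclipse.org/che/contribute.svg)](https://<crw-host>/f?url=https://github.com/<org>/<repo>)
```

Clicking the badge from the GitHub UI opens CRW, which clones the repo, reads its devfile, and starts a matching workspace.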
So yeah, flexibility. But I would say the ideal way is to keep your devfiles in the repo, and then you can just, this isn't a great example, but I believe, oops, believe here somewhere, no, we have a badge that shows up on PRs for a lot of our components. You click it, and it'll start up a workspace with the PR checked out. You can click the things to run, to test, to review, that sort of thing. So the easiest way is definitely: here's a badge that you just click when you're in the GitHub UI, and it'll start up a workspace for this specific repo. Cool. And, trying to remember how GitHub looks this zoomed in is a little tricky, but yeah, this is the main upstream for CodeReady Workspaces I'm looking at. Awesome. Cool. Another question we had, and you kind of mentioned testing at the end of your answer on that last one: is there a good way to test a devfile, or lint it, or just make sure that the repo is going to successfully deploy? Do you have any tips for automation, or how teams would approach that? Yeah, so the design of these devfiles is maybe going to be the trickiest part, especially until you understand the rules around how it's done. If you want to just write one of these things, there are schemas. In VS Code, you can import the YAML schema if you use the Red Hat YAML plugin. One of the guys I work closely with actually developed that plugin initially for CodeReady Workspaces and pushed it to VS Code, and it's very popular there. But, you know, there are schemas; we can link them. This shows up better in the UI when it's at the normal zoom level. You can get some autocomplete here, so I can say I want to add an ID, and we get autocomplete for the plugins that the dashboard knows about. I want to add the VS Code YAML plugin.
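If you're editing a devfile in plain VS Code with the Red Hat YAML extension, one way to get that schema-driven autocomplete and linting is a modeline at the top of the file; the schema URL below is a placeholder for wherever the devfile JSON schema is published, not a real address:

```yaml
# yaml-language-server: $schema=https://example.com/devfile-1.0.0.schema.json
apiVersion: 1.0.0
metadata:
  name: my-workspace
```

With the schema associated, invalid fields get flagged as you type, which covers the basic "is this devfile valid" check before you ever deploy it.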
And other than that, well, I can add a CPU limit, that sort of thing. So, writing the thing, you'll write a valid devfile; we have linting for that. The detail you might run into that is a little tricky is that OpenShift has rules about how you run containers, and we have additional rules about how you run containers. Basically, the idea is that any container you start up has to be non-terminating. You can do that by setting the entry point to, you know, `tail -f /dev/null` or `sleep infinity` or something like that. Apart from that, it's kind of setting up what you want to actually run in here. Ideally, the samples we have are a good way of demoing these features, and you can pull from them. The idea behind the samples that you see by default is that you say, okay, I'm working on something that uses Node and it's using MongoDB. Click here, and you have a basic skeleton of what you might want eventually. You know, we have the plugin for TypeScript, we have the plugin for debugging Node, we have the dev container that's going to be building our Node project, and we have a MongoDB container that gets started up; it's configured here. I don't know if I'm answering the question, so stop me if I'm just kind of going off. Just keep going, I think you're okay. So yeah, basically, you know, sorry, go ahead. That was a question from Rapscallion Reeves, who actually had to leave, so hopefully they'll watch the recording and get back to us at some point. But yeah, that looks like it covers a lot of it for me. Is there also a way to do kind of config injection, or anything else? I guess you could use kind of standard Kubernetes secrets and other things, huh? Yeah, the underlying thing here is that we're abstracting over the Kubernetes API. So a lot of the fields that you're seeing here are mapping, in some way, to a container in a pod in a deployment.
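The Node-plus-MongoDB sample he's describing boils down to components along these lines; the plugin id, images, and credentials here are illustrative stand-ins, not copied from the actual sample:

```yaml
components:
  # TypeScript language support as a Che plugin
  - type: chePlugin
    id: che-incubator/typescript/latest
  # Dev container where the Node project is built and debugged
  - type: dockerimage
    alias: nodejs
    image: 'quay.io/eclipse/che-nodejs10-ubi:latest'
    memoryLimit: 512Mi
    mountSources: true
  # Sidecar database the application talks to during development
  - type: dockerimage
    alias: mongo
    image: 'centos/mongodb-36-centos7:latest'
    memoryLimit: 512Mi
    env:
      - name: MONGODB_USER
        value: user
      - name: MONGODB_PASSWORD
        value: password
      - name: MONGODB_DATABASE
        value: guestbook
```

Starting this workspace brings up the editor, the build container, and the database together, so the skeleton matches how the pieces run in the real deployment.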
You know, this is the environment; I can add commands, so I could define commands like I do and change the entry point to my container. Endpoints is the tricky one and the sort of thing that we're improving here. You just define: I want an endpoint named mongodb available, and we'll set up the service and the route and expose ports correctly, all of that stuff. But yeah, generally things are mapping over to Kubernetes or OpenShift. So the ultimate pitch is: you have a definition for how you're running your application on OpenShift, on Kubernetes. That's going to be a bunch of YAML. Take pieces of that YAML, put it into a devfile, and define your workspace that way, so that, you know, I know I need these environment variables, I know I need these entry points, I know I need these volumes mounted. Take the definition you use for your production deployment, deploy it as a workspace, and start editing code in the same environment that you're going to be running in eventually. Nice. Yeah, awesome. We've done a lot of nice plugin integration higher up in this file. We had another question in the chat about whether it's possible to add plugins using the UI — so, add plugins to this devfile, or in a more abstract sense. Let me double-check that. Let's see. If we want to add plugins via the UI... if I go back and say, let's look at the Go workspace. So if I want to add ESLint, I hit this, hit save. So it's from the editor, here in chat. Yeah. So once you've actually got a workspace running, and basically just showing... from a running workspace? Yes. I haven't tried this in a while. If I remember correctly, yes, there is a way; it will look at the same sort of source. This is actually something that's very much in flux right now. The thing I've been working on for the past year is rewriting the core provisioning logic of CodeReady Workspaces as a Kubernetes operator.
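The endpoint, environment, and volume mapping described above might look roughly like this in a v1 devfile; the image, names, and values are assumptions for illustration:

```yaml
components:
  - type: dockerimage
    alias: mongo
    image: quay.io/example/mongodb:latest   # hypothetical image
    env:
      - name: MONGODB_USER
        value: dev-user
    volumes:
      # Backed by persistent storage when the workspace is provisioned
      - name: mongo-data
        containerPath: /var/lib/mongodb/data
    endpoints:
      # CRW wires up the Service (and Route, where appropriate) for this port
      - name: mongodb
        port: 27017
```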
So how we're doing plugins and stuff is kind of in flux right now. The next step that we're looking towards is: your devfile that you used to define your workspace is also a custom resource that you can just directly apply to the cluster, and you'll get this. So that's what we're working towards, and that change requires us to change how we handle plugins slightly. But for now, I'm pretty sure that it will just go to the same registry that the dashboard — the UI I was just showing; we call it the dashboard — uses. It'll go to the same source, look at the list of plugins, and go that way. I can also talk a bit about what a plugin definition looks like. I don't know if there's a good way to do that unless I go... what was the plugin registry for this? I never remember. Synthetic moose in chat says Command-Shift-J will bring up a plugin menu. Perfect. You know more about the project than I do; this is me masquerading. Yeah, I was like, wait a minute, isn't synthetic moose on the stream right now? I'm not sure what the mapping will be for you; just in case, it gets a bit weird on my machine because, as someone mentioned, I use i3, so nothing works as I expect it to. I'm a clicker most of the time unless I'm using my UI. So... there's the View menu. View. All right. Plugins. There it is. Load all the different ones. There we go. Just install them into the workspace definition. So if you've loaded from a devfile, this wouldn't override the file you loaded from, but you can copy and paste into it. Yeah. Thank you for picking that one up. Yeah, I knew this existed, I just couldn't remember exactly where it was. It has some nice new icons in the coming version. Yeah, it's not this puzzle piece anymore. Lovely. So yeah, I could, you know, do this, and now the YAML plugin is installed. Sometimes you have to restart the workspace.
What will happen is, if I installed Java, then the next time the workspace starts it needs a Java sidecar to have the dependencies to run the Java LSP. That'll be a quick little restart. Or a long restart in Firefox today, for some reason. Apologies for that. No worries, this is how it goes sometimes. But yeah, so that's installing plugins from the UI. The plugins are defined similarly, as YAML files. So the other thing you can do, if you don't want to go to the internal plugin registry, is you can prepare one of these things yourself. And the same syntax in the devfile for referring to a plugin ID, you can use to just paste in a URL that points at the YAML definition for a plugin, and add plugins that way. So that's one of the ways we test plugins: someone contributes this YAML definition, and I can just paste it into a devfile, start up a workspace, and the plugin is running. We try to make that sort of stuff easier. Nice. And yeah, I guess the other thing I haven't mentioned is that we also support executing commands in these containers. That's how you would say run the app, build the app, debug the app; you can basically do any command-line stuff. But once this gets loaded, I'll show you can also open terminals directly into each container and kind of just have a remote shell on the cluster. Let's see if I can... endpoints... no, here we go. So, into the Go CLI. So yeah, I'm just in a standard terminal, except the terminal is basically SSH'd into the Go dev container. Let's look at what we have here. So we can go into the golang-health-check, and we see the same stuff that's right here. You can use this to do more detailed work. You know, I'm the sort of person that goes to the terminal very often.
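The two ways of referencing a plugin described above — by registry ID, or by pasting a URL that points at the plugin's YAML definition — might look like this in a devfile; the URL is a hypothetical placeholder:

```yaml
components:
  # Resolved by ID against the configured plugin registry
  - type: chePlugin
    id: redhat/vscode-yaml/latest
  # Resolved directly from a meta.yaml URL -- handy for testing unpublished plugins
  - type: chePlugin
    reference: https://raw.githubusercontent.com/example-org/example-plugin/main/meta.yaml
```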
I usually don't even use a Git UI for committing, pushing, that sort of stuff; I do most of my stuff from the command line, so this is the sort of thing that I look for. Maybe others are different in this approach, but I like to go to the command line for a lot of stuff. And so it's very easy to say, open up a command line into this. And you can even go further: I can open a terminal into the Theia container, so this is the container that's running the back end for the editor right now. And down here we also see the other stuff: here's the VS Code YAML plugin, here's the VS Code Go plugin. We can open a terminal into that, poke around all we want, fun stuff like that. It's very worth talking about the commands that we have at the top as well. Yeah, to make it easy for people getting started with the project, you can give that sense of the order in which you need to run commands to be successful. You can define the commands themselves. If there's a set way that you recommend — for example, for a Node project — you can encapsulate all of that within the YAML. Yeah, so whoever wrote this devfile did a much better job than I did on the one I was showing earlier: we have one to build the application, one to run the application. So you can define the logic for building. The idea is you're defining the logic for everything you need, from deploying to setting up to building to testing to pushing; all of that stuff is defined in the devfile. So these are basic commands. Like, you know, this one is go build, and this one just runs the golang-health-check binary, which is built by the application and put into your repo, basically. You can imagine more complex things here: you can set it up to start a Postgres database and then inject that information. So with one click you can sort of set things up for testing.
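A sketch of the build and run commands being described, in v1 devfile syntax; the component alias, binary name, and paths are assumptions:

```yaml
commands:
  - name: Build the application
    actions:
      - type: exec
        component: go-cli                          # alias of the dev container
        command: go build -o ./bin/app ./...
        workdir: ${CHE_PROJECTS_ROOT}/example-project
  - name: Run the application
    actions:
      - type: exec
        component: go-cli
        command: ./bin/app
        workdir: ${CHE_PROJECTS_ROOT}/example-project
```

Each command shows up in the workspace UI, so a one-click "build" or "run" executes inside the matching container.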
And there are other repos — I don't have them easily on hand, but we have some demo repos that get into this a little bit more: set up a front end and a back end and a database, and, you know, a few clicks to get everything going how you want it, and then you can start debugging your code, running with a front end and a back end and all that stuff. So, is there any way to customize the shell within CRW that you're using? That's one question that Narendra is asking: can you switch to, like, zsh or fish shell, for example? There is; it's not very easy currently, sadly. Basically, there's a component, one of the plugins that automatically gets added, called the machine-exec plugin. What that's basically doing is trying to resolve a shell in your workspace and open it. It's definitely something I've done, but it'll generally require building a container to do it. So when I was contributing a lot to one of our repos — it's just so much easier to contribute in this environment — I use zsh, so I sort of rebuilt the dev container to use zsh by default. And once you do that, it just works. But I can't go into the Go CLI dev container and install zsh and switch my shell, because that'll be lost as soon as the container stops. If you build your dev container so that zsh is the default, then you get zsh. Interesting. Awesome. Folks, if you've got questions, now's the time to ask them; we're running up here on the top of the hour. Ryan, anything that I missed in chat? I haven't seen anything else new in chat. It's been really, really cool to see, though, how this has evolved. And I can imagine, with those build and run actions, those are basically just running Docker images, right? So you could kind of run just about anything, go back in there and do really advanced integrations.
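The container rebuild being described — making zsh the container's default shell so the machine-exec plugin resolves it — might be sketched roughly like this; the base image and user handling are assumptions, not the supported procedure:

```dockerfile
# Hypothetical sketch: extend a dev image so zsh becomes the default shell
FROM quay.io/example/crw-go-dev:latest   # hypothetical base image
USER 0
RUN dnf install -y zsh && dnf clean all
# Point the workspace user's login shell at zsh so machine-exec resolves it
RUN sed -i 's#/bin/bash#/usr/bin/zsh#' /etc/passwd
USER 1001
```

Reference the rebuilt image from the devfile's dockerimage component and the change persists across workspace restarts, unlike installing zsh inside a running container.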
Do you have anything that does active interaction with the IDE, like linting feedback, or anything extra that folks have added? I mean, what we generally do for linting and stuff like that is, you know, adding plugins, probably stuff built in. But if you wanted something total... like, I don't know, I'm imagining someone writing an operator of some sort. I'm not sure why or how, but something to do, maybe, integration testing and show you the feedback, or something. I'm not sure; I think you would probably have to write a plugin to interact with Theia. I've not done it myself, so I don't know the details there. Yeah, for our repos, what we generally will do is, you know, we have the defined linter files, and then if we need a specific linter — this comes up a lot in the docs repo, because it's a sort of different build chain — we just add those plugins and then get that linting. For programmatically putting markers into your files, I don't think we support that currently, though you could write a plugin to support it; it's not built in. And there's a whole plugin architecture for that as well, so yeah, that's a path that's kind of already outlined, and that's really cool. Yeah. And if you have an internal VS Code plugin, or a private VS Code plugin that you wrote just for yourself, you can also run it in CodeReady Workspaces and get the same thing going. Very slick. Yeah, I dropped a link to all the VS Code extensions Red Hat offers in chat. So if you're looking for a particular one... you know, I want that OpenShift Connector — well, it's technically already there, I think, right? And then, you know, if you need your language server, all that stuff can be plugged in. So it's just really nice. It's potentially worth noting also that CodeReady Workspaces is the productized version from Red Hat.
The set of plugins supported in the upstream Che repo is actually larger. You know, we do a process to vet plugins, to check them and build them ourselves and support them, so there are more plugins in the upstream repo. The plugins that are available here are the ones we can vouch for. Cool. I hear that there's... I think the OpenShift Sandbox... I'm not sure if they have the OpenShift Connector installed in there today. Have you been using it in coordination with workspaces? So this is running on the OpenShift Sandbox currently. If you go and sign up... I can never remember the full URL, but the URL to the CRW instance is workspaces.openshift.com. So if you go here, this is basically what you will get; you can do everything I'm doing today from your browser without installing anything. I think, if I remember right, I saw the OpenShift Connector earlier in the plugins list. Yeah, I thought I saw it. So, magic — it's there since you last checked. Yeah, we do release every six weeks, so it's moving pretty quick. Oh, cool. Yeah, that connector is one of my favorite tools, because I often log into various clusters across the planet, and it's hard to keep them all sorted out just from the CLI sometimes, because they'll have a name like "admin." Oh, that's useful — which one is that, right? So yeah, when I hit kubectx, it's like, ooh, let me go look in VS Code, right? So there are a few things where I turn away from the CLI, and switching branches and switching clusters, those are the two. This is awesome. So yeah, anything else? We're kind of winding up to the top of the hour. If there's anything else, now is the time. I can stop sharing, or I can keep sharing; this probably makes more sense in case a question comes up. Talk about just maybe some things that are coming out, you know, that you're proud of. I'm proud of... so, as I mentioned earlier...
I've not been working directly on this code for the last little bit. The next step that we want to do is... if I could remember how to get back to the UI, I could show you, but you know, this is starting up a lot of services and routes and pods and volumes and all of the OpenShift goodness that you expect for a running application, so your workspace is fairly complicated under the hood. We're working on moving that logic to an operator, and that operator... well, it won't replace everything, but it'll replace the parts that say, this piece of the devfile maps to this element of a pod. We're approaching that transition point in the near future, and that'll be a big change, right, because currently your devfile is just a YAML file. After the change, your devfile will also be a DevWorkspace, and you'll be able to say, here's this DevWorkspace, and it is actually a Kubernetes resource as much as a Deployment is, as much as a Service is; you install the operator and that just works for you. So, while we have time, the one thing I want to shout out real quick — it's not available on the Sandbox currently — is the first thing that came out of us working on this: the web terminal operator. It uses a lot of the same logic; it uses the same component that does the terminal in your Theia workspace. But what it does is it lets you open a terminal, automatically log in, and run OpenShift commands without even logging in with oc on your local machine. I don't think we have time to get into the details of that, but if you look into the web terminal operator — I can share a link with Chris after this — that's sort of the first step in the process. Your web terminal is defined by a devfile, actually. If you're using OpenShift 4.7 or later, it should be available in the hub within your cluster, and you can just search for "web terminal" and install it. It's not on OperatorHub, though.
I was a little disappointed I didn't see it there, but they don't have a console to integrate with, so I don't know how it would be displayed. This is my CRC instance, and actually I've got the web terminal installed from OperatorHub. So I don't know; I'm sure it depends on some underlying configuration, but: web terminal. Yeah, I didn't see it posted on the public operatorhub.io, but yeah, this is the one I'm talking about; it'll be in the internal hub. Yeah. And it's been there since, I believe, 4.5 — I think 4.5.4 is when we first pushed it. And I'll hit this, and if we run out of time, we run out of time, but basically this is applying the custom resource that represents a devfile onto my cluster. And once this starts up... this is going through a couple of proxies to get to it, but yeah, so here's my command-line terminal. And look at the pods. And so, you know, it's kind of like a Google Cloud Shell: you're logged in, you can do your cluster administration stuff from the UI of the cluster. I can pop this out into its own tab and get just a full-screen terminal, once it goes through the proxy. But yeah, so, you know, I'm in here, I'm logged in as my OpenShift user, and I can do everything that I can as that user. We're hoping to push this to the Sandbox sometime soon; there are a few details to sort out, but you should also expect to see this in the near future. Yeah, I guess, as I was saying, if we look at my installed operators on CRC, here's CodeReady Workspaces, here's web terminal; you can go through the internal operator UI and grab these and test them yourself if you like. Wonderful. Super cool. Awesome. Those are some really, really great accomplishments. Yeah. Neat stuff to see, moving very fast this year. Yeah, sounds like it, and definitely feels like it; you know, just the momentum we hear about on the channel in general is just like CRW, CRW, CRW, CRW. It's great, right? Like, I love it.
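For the curious, the custom resource being applied here is roughly a DevWorkspace wrapping a devfile. This is a hedged sketch only; the API group/version, image, and field names are assumptions and may differ between releases:

```yaml
apiVersion: workspace.devfile.io/v1alpha1   # assumed API group/version
kind: DevWorkspace
metadata:
  name: web-terminal
spec:
  started: true
  template:
    components:
      - name: web-terminal-tooling
        container:
          image: quay.io/example/web-terminal-tooling:latest   # hypothetical image
```

Applying a resource like this is what spins up the logged-in terminal pod behind the console button.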
So yeah, the little pips on my GitHub have never looked better; that's how I know it's moving fast. Yeah. Awesome. Well, congratulations, seriously. And thanks for coming by the show to demo it all for us. Thanks so much for inviting me, and I'm sure David as well. Sorry — yeah, that came out a little weird, but I mean, like... no, I didn't want to speak over you; totally get it. Great to have you both here. Yeah. Thank you all. And thank you all for tuning in. There's actually nothing else scheduled for today on the channel, so tune in tomorrow first thing for The Level Up Hour at 9am Eastern, 1300 UTC, if I'm reading this right. Yes. And we'll be talking about... well, we have a special guest coming, and his name is Scott McCarty — I think that's this week, if I remember correctly, so definitely tune in. It is Scott McCarty; I wasn't thinking ahead. Ha, brain worked! In your face, brain. So thank you all for joining, thank you all for listening, and we will catch you tomorrow.