Hey everybody, welcome back to OpenShift TV. I'm Chris Short, Principal Technical Marketing Manager on the OpenShift team here at Red Hat. I'm joined today by a bunch of my friends, but first I really want to thank the DevNation team; that was a great show just now. Sebastien Blanc did a great job, excellent work. So let's talk about what we're covering this hour, because we've got another show following this one, OpenShift Commons, which is awesome. Let's get to it. We're talking about CodeReady Workspaces, Eclipse Che, and devfiles. Let's do quick around-the-horn intros and dive in. Natali, you're up. I'll go. My name is Natali, I'm part of the developer advocate team for OpenShift, and I'm here with my mate, Ryan. Hey, how's it going? Ryan Jay, developer advocate on the OpenShift team as well. Yeah, we're excited to be talking about running hosted development environments using Eclipse Che. Brian? Hey, yeah, Brian Tannous. I'm also a developer advocate focusing on OpenShift. We've got the whole team on here. No, we've got some of the team on the call. Yeah, but it should be an interesting topic. CodeReady Workspaces is pretty awesome. We've talked about it a little bit before, but the latest stuff coming out is really interesting and I'm kind of excited to learn some of this myself. Yeah, CodeReady Workspaces is a frequent ask on the channel, so it's good to have this out there. We'll have more as time goes on; as more features get added, we're going to have more content. So here we are again. Yeah, last week someone in our Twitch stream, dreams owner, I think, suggested this topic. This very topic. Yeah, this very topic. So this is a response to your feedback. If you have questions along the way, please drop them into chat and we'll do our best to respond to them.
And topics: make sure to let us know what you're interested in seeing beyond what we're covering today. Cool. So to start off, one of the main pieces of technology we're going to focus on today is something called a devfile. Currently we're using devfile 1.0, but later in the show we're also going to discuss the future of devfiles and how it relates to some of our other newly released tooling, specifically odo 2.0. I have a short demo I can show for odo later if we have time. But first, Natali has an environment already up and ready to go. This is OpenShift 4.6, is that right? 4.5. 4.5? Ah, 4.5, all right. Classic. We're not talking to Serena, we're talking to Natali; we're not in the future. This is stuff you can actually put your hands on now. Yeah, yeah. Cool. All right, we're back to the present. And yes, I would like to show you this part here. So today, as we were saying, let me grab your desktop here. Can you see my screen? Yes. Yeah. We're talking about CodeReady Workspaces, which is an in-browser IDE. CodeReady Workspaces is the product name; Eclipse Che is the upstream name. CodeReady Workspaces is the product that Red Hat put on top of OpenShift in order to have this multi-tenant in-browser IDE. So that's a little bit of context, and today our focus is on the devfile. Why the devfile? Because in this in-browser IDE, the devfile is the way to create a kind of Dockerfile for your development environment. In our workspace we need the runtime, we need the build tools, we need the environment variables, we need the commands, the tooling. All of that is encapsulated inside our workspace. We lost audio. We lost Natali completely. Whoa. Okay. Fun. Well, okay, we'll give him a minute to get back here. Yeah. What have you got, Ryan? Man, let's see. Did we lose Ryan as well? No, Brian's here. Brian's here.
I'm still here. All right. Yeah, man. Well, Natali, I'm sure, will be back momentarily. He's already rejoined. That was weird. It could have been worse. Last week we lost the whole thing. Yeah, the whole computer crashed. Well, that was my mistake. I was trying to upload. Occasionally I'll download all the videos from Twitch and then upload them to our G Drive for archival purposes, just in case, right? And I thought that was a six-hour operation, but the second I left my desk it turned into a 16-hour operation. It flooded the upload pipe, completely smashed the box, crashed the whole system. That was crazy. It was nuts. Anyways, I put it back together on YouTube and uploaded it. Natali, you're back. Yeah, I'm back. Sorry. No worries, man. Not a problem. I crashed on Fedora. That's sometimes worse. Yeah. But I managed to come back. Sorry for this new interruption, and let's see if we can continue from what we were talking about. So we were talking about those devfiles. The devfile itself is just a descriptor of our workspace, where we start declaring our CodeReady Workspaces environment. To understand how this devfile is made: it's very easy, it's a YAML file, so very familiar to anyone doing Kubernetes work, in which you describe which project you want to work with, like in any IDE, Visual Studio Code or IntelliJ. We start with the project and the source you are starting from: whether it's a Git repo, a zip file, or a local directory in terms of the workspace, you can decide the source type. The second part is the components. This manifest file is composed of a component list, including a plugin list, and those can also be Visual Studio Code extensions; you can use extensions with CodeReady Workspaces and refer to a plugin or extension. We'll see later how that works. And you can also tell which container image your workspace uses.
So you can start from a Maven container image or a Node container image; you can tell which container image you want to work with. So we've understood that this workspace is just a container image, with some plugins mounted into the container (in Kubernetes terms, a pod), and you can inject environment variables and enrich this workspace with commands, like the run configurations we're used to having in any modern IDE. So after this little introduction, I want to show you this on the cluster. Let's simulate a common use case, like we did last time, me, you and Madu. We were part of an organization on GitHub, the Red Wine Software organization. I love that. Yeah. And I invited you too, Brian and Ryan, if you want to be part of this marvelous team. Since we are part of this organization on GitHub, I set up the configuration in OpenShift to allow this team to log in to the cluster. So we have GitHub authentication here. I'm already authenticated on GitHub; this is my name and this is my user. So I'm accessing the cluster in a multi-tenant way, and I can work only in this cluster. That's my operational point of view. If we go back to the administrator point of view, I can show you that we have CodeReady Workspaces, which is our product around Eclipse Che, installed in the cluster. I can show you in the developer perspective of the topology on OpenShift here, in 4.5: you install CodeReady Workspaces from an operator. The operator will install into this openshift-workspaces project, it will create this for you automatically, and you can create your CodeReady Workspaces installation through a custom resource. So it's very easy, from an operational point of view, to install, configure and manage CodeReady Workspaces. Let's switch to the developer view.
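To make that concrete, here is a minimal sketch of a devfile 1.0 covering the pieces Natali describes: a project source, a plugin component, a container image component with environment variables and a memory limit, and a run command. The project name, repo URL, image, and plugin id are illustrative placeholders, not taken from the demo.

```yaml
apiVersion: 1.0.0
metadata:
  name: nodejs-mongo-sample          # illustrative workspace name
projects:
  - name: nodejs-mongodb-app
    source:
      type: git                      # could also be zip or a local directory
      location: "https://github.com/example-org/nodejs-mongodb-sample.git"
components:
  - type: chePlugin                  # a Che plugin / VS Code extension from the registry
    id: che-incubator/typescript/latest
  - type: dockerimage                # the container your code actually runs in
    alias: nodejs
    image: registry.access.redhat.com/ubi8/nodejs-12
    memoryLimit: 512Mi
    mountSources: true               # mount the project sources into this container
    env:
      - name: MONGO_URL
        value: "mongodb://localhost:27017"
commands:
  - name: install dependencies      # a run configuration teammates can click
    actions:
      - type: exec
        component: nodejs
        command: npm install
        workdir: ${CHE_PROJECTS_ROOT}/nodejs-mongodb-app
```

Everything the workspace needs, runtime, tools, environment variables, and commands, lives in this one file, which is why it works like a Dockerfile for the whole development environment.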
So that's the ops view: installing through an operator, with a custom resource driving the installation of CodeReady Workspaces; you can customize your IDE environment server from there. From a developer perspective, let's go to the topology view of the developer console in OpenShift. Here we have the topology of the IDE. There's a server serving content that is a devfile registry; this is a registry containing our devfiles. If you remember, devfiles are the manifests for our workspaces, so we need to store them somewhere. You also have a plugin registry, so you can store plugins there, plus a database and single sign-on with Keycloak. So the only thing we need to do is give our developer team, in our case Red Wine Software, the URL of CodeReady Workspaces, and they can log in with the OAuth authentication integrated with OpenShift. So now we are accessing CodeReady Workspaces. Just to recap this cluster: you installed OpenShift, so it's a simple new cluster. You set up GitHub authentication with your Red Wine Software organization, right? Then you installed the CodeReady Workspaces operator, and that's where we're at. Sounds like it. Exactly. Yeah, yeah, exactly. I did this and I integrated the OAuth authentication, because Keycloak is going to map the user into OpenShift, and once you do this you are authenticated in CodeReady Workspaces as well. So this is my view as a normal user; my view of CodeReady Workspaces will reflect my project view in OpenShift. We can start from here. This is the quick start; any modern IDE has a quick start for creating new projects. This is our quick start, and I want to show you this one. Most of these are typically one-tier, if you think about Fuse or Quarkus, but this Node.js MongoDB one is two-tier. That means in the same workspace you have a front-end container and a database container, so you can just test locally.
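The custom resource Natali mentions, the one the operator watches to stand up the whole installation, looks roughly like this. This is a sketch of a CheCluster resource; the metadata and field values here are illustrative, not copied from the demo cluster.

```yaml
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
  namespace: openshift-workspaces   # the project the operator installs into
spec:
  server:
    tlsSupport: true
  auth:
    openShiftoAuth: true            # map OpenShift users into Keycloak/SSO,
                                    # enabling the GitHub-backed login shown here
  storage:
    pvcStrategy: per-workspace      # one persistent volume per workspace
    pvcClaimSize: 2Gi               # the 2 GB workspace volume discussed later
```

Editing this one resource is how the administrator tunes the server, authentication, and storage behavior for every developer on the cluster.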
When I say locally, I mean local to my workspace pod inside the OpenShift cluster, not on my laptop, because the point of this in-browser IDE is to give you the capability to develop directly on the cluster, on the platform: coding, building, testing, pushing, and then trying it inside OpenShift. Once you do this, what happens under the hood is that it creates a new container, a new pod, containing your CodeReady Workspaces workspace. So let me go to the console here; I have a bunch of clusters in the history. Let me go here. I can access OpenShift with my OAuth authentication. If you remember, I can still log in with my GitHub integration; I'm allowed to because my team is allowed in the OAuth integration. Now here I have this project, created automatically for me by CodeReady Workspaces. This is my user, my handle, and "codeready" is just a name appended by the server. If you look here, this is very familiar to us: it's just the OpenShift topology view showing that there is a deployment with a pod that is starting, and this will be our workspace. So the workspace is a pod talking with the CodeReady server that we saw in the other project, which we cannot see here because we are not the administrator. It's going to talk with the CodeReady server, which will instantiate and initialize the workspace. You can also look from here: there is the list of what it's doing. This is just the first time; these workspaces are persistent, so you don't need to wait all this time when you start the workspace a second time, because all the initialization settings are saved in the persistent volume. So we have a container, and we have a persistent volume here, created automatically by the CodeReady server. This is a two-gigabyte workspace where we're going to save the code, the assets, the builds and all the settings we need to start coding our application. So we have everything ready to go.
And if you see here, we also have this route. We actually have several routes, because when you test the application in CodeReady Workspaces you can access the workspace directly through a route. The process will listen inside the container, inside the pod, and an OpenShift route will take you to the application you want to test. So now that we've started the application, everything is ready, so we can start coding and see our application working. I wanted to show that this view you see here (I'm going to increase the font, too) is Theia, the Eclipse Theia project, very familiar to Visual Studio Code users, right? So this is similar to Visual Studio Code, and in fact you can also run many Visual Studio Code extensions inside CodeReady Workspaces, which is pretty cool. That was literally going to be my question. The extensions are, you know, what makes Visual Studio Code powerful, and having that available in Theia is pretty sweet. Exactly, exactly. And you can also completely customize your extensions. Remember that pod we saw before, the plugin registry: you can create a custom plugin registry by instructing the CodeReady custom resource, and you can fill it with all your custom extensions, all the Visual Studio Code extensions that you want. By default you have these extensions here: if you go to plugins you have a set of plugins that are present inside the plugin registry shipped by Red Hat, so this comes from the Red Hat image and contains plugins for the most popular programming languages. For instance, it contains the OpenShift plugin that we are going to install in order to work with our application. But yes, this is our application, right?
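Wiring one of those registry plugins into a workspace is a one-stanza addition to the devfile's component list. A sketch, where the id follows the publisher/name/version convention used by the plugin registry (this particular id is illustrative of the format):

```yaml
components:
  - type: chePlugin
    # id is publisher/name/version as listed in the plugin registry;
    # "latest" resolves to the newest version the registry ships
    id: redhat/vscode-openshift-connector/latest
```

Because the plugin reference lives in the devfile rather than in per-user IDE settings, every teammate who opens the workspace gets the same extension set automatically.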
I want to show you this file here, in case this is the first time you see this dashboard. So this is our application, in Node.js. We have two containers, and we can verify this if we open a terminal in a specific container. We can go to the MongoDB container, and if we run the ps command here we can see there is a Mongo process running. We can also open a terminal into the Node.js container, where we expect to find the npm and node binaries. So we have two containers in the same pod, and we have some run configurations, so we can run some tasks here. The most common ones are, you know, resolving dependencies, with npm install in this case, and starting the application, which is also a run configuration. When this task finishes, we'll have the output here. So this is running npm install inside this container, inside the workspace pod, and when it finishes we are ready to test our application, and this is really cool. So that's our initialization, and we can start testing our application with npm start inside the same context as the application. This sample is just a guestbook in Node.js using MongoDB, so this is our guestbook. This is pretty nice because we are basically working inside CodeReady Workspaces, inside our workspace, but we have an OpenShift route, if you remember, and we can interact with this. I can copy it here in the chat if you want to say hello in the guestbook. This is our guestbook, writing its data into the MongoDB container in the workspace. Yeah, where's that link? Let's get the guestbook going. Yeah, I pasted the link; can you see the link in the chat? Let's see if we can do a SQL injection. Please send your Burp Suite outputs to Natale Vinto. Please create an issue on the GitHub repo. Yeah, so this is cool: we can work, build and do stuff. But imagine we want to start being collaborative. Let's look at the devfile. We talked a lot about the devfile, but what does this devfile look like?
So when you create a new workspace, CodeReady Workspaces will automatically create a devfile for you, so you don't have to write it from scratch, which is good, right? We can just look at how it's generated. If you can see the projects section, there's the name of the project; and if you look at this one, it doesn't start from Git, it starts from a zip file contained inside the OpenShift devfile registry. We have a list of components: for instance the plugins, like the Visual Studio Code plugin for TypeScript and the Node.js debug plugin, and we have our container image, the one that is actually running our commands. If you remember, we have two containers, so we have this container with some environment variables here, and the run configurations. We can also encode these run configurations: if we want to give our team the run configuration commands to just try, start, push, deploy, we can just add them and create the devfile with them. So as an exercise, what we can do now is add some plugins to the workspace. For instance, from this view, the OpenShift Connector plugin, which will contain odo and, for instance, the OpenShift CLI, all the plugins that we need. We can just click apply here. What happens is that the workspace is going to be stopped and restarted, because you need to mount the new plugin into the pod. It's like a Docker volume; in Kubernetes it's a persistent volume. You need to mount it, and it will give us the OpenShift Connector plugin when the workspace has started. This is nice because from the UI we can do everything we want, but we don't want our teammates to have to do this manually all the time, right? And maybe we want to start from some Git repository instead of, for instance, a zip file. What we can do is just copy this file into our Git repo and then create the new workspace from a CodeReady Workspaces button, which I can show you in a few moments. So the idea here
is just to create this for our developers. Let's say in this organization here we want to put this MongoDB example, and this MongoDB example is itself part of a list of examples in the Eclipse Che project, so you can look at all of them if you go here to the Che samples. Very easy. Who has the dog? I'm sorry, is that you, Natali? Okay, no worries, I'm just curious. This is my uncle's very big dog, who usually starts singing at 5 or 6, for I don't know what reason, maybe my voice, so I'm sorry about it. Oh, no worries, I was just curious. As long as it's not the person speaking, then yeah, nothing can really be done about it other than taking the dogs out. No, I don't bother with that, it's fine. I'm used to having dogs around, right? My dog is right there. Right, my dog goes off every day, 5 or 6 o'clock, starts barking, and I've got to go take him on a walk or something. He's predictable, at least. But yeah, no big deal. Since we've paused: we had a question in chat about setting memory limits on some of these environments. I think you mentioned you had a 2-gigabyte limit on the development environment, is that correct? So you can set the limits in the devfile itself. For instance, in this container here for Node.js there's a limit of 512 megabytes. Those are relevant to the workspace, to the container, right? That's for the workload. What about at the upper level? In OpenShift you can set quotas for the specific OpenShift project. This could be done from the custom resource: when you install CodeReady Workspaces you have this custom resource here. I'm talking about administrative limits, so I can show you the YAML that you showed for CodeReady Workspaces earlier. So you can set limits on the number of workspaces that each individual user can have open at one time, so that you're not consuming too many resources per user, and you could do other things like
memory limits for that environment. Yeah, usually with this type of question I'd want to ask the person: okay, why are you trying to limit it, and who are you trying to limit it for? Are you on the operations side, wanting to restrict excessive usage by developers, or to give developers a sensible ceiling they shouldn't go beyond so they don't have memory leaks? Or maybe you're the developer trying to prevent those memory leaks. I think both need entry fields for folks. The ones that are more limiting per project are probably what I'd use on the administrative side. But a lot of what we're showing today is related to the setup of CodeReady Workspaces, so a lot of it is more on the administrative side. I think a lot of this could be reduced down if you were an average front-end developer who wasn't really focused on containers or Kubernetes; this could be boiled down quite a bit more. JP Dade is right in my heart right now, as he just said: yes, developers, trust but verify. I'm the ops guy; those jokers will take it all if you let them. That's the motivation. Yeah, so now we know, right? There needs to be some limit on the amount of capacity that a regular user can consume as they're trying to break things, right? DBAs are a good example, as they potentially load terabytes of database into memory. That's fun to deal with as an ops person, I'm sure. So is there a way to limit that? Yes, you can limit it, as we were talking about, from the operator perspective, or you can just give quotas to the project. For instance, in our cluster here there are some quotas; not in this case, because I think I removed them, but you can also set up an admission control webhook in OpenShift, in Kubernetes, that sets up the LimitRange or the ResourceQuota in the new namespace automatically. So this can be
very helpful if you have a very large developer team and you want to assign quotas or limit the amount of resources they can use, because they could hack the devfile, but they can't really hack the LimitRange or the ResourceQuota. Yeah, that's going to be your answer, JP Dade, for probably all those problems, right? You're going to need that on every project, every namespace that you're handing over to somebody. You should have resource limits on every container, and you should probably have resource limits on things that you're handing to people as well. Exactly, exactly. The CodeReady Workspaces custom resource is very granular, in the sense that you can say, okay... the dog started again; I'm totally fine with that, I'm ignoring it at this point. So you can say, okay, you can use just two persistent volumes per user, and the size can be a maximum of 2GB per persistent volume. Those are settings you can put in the YAML file here, and you can drive the tuning of the CodeReady Workspaces instance on the storage side, for instance. But that's the administrator part. If we come back to the developer part: if you go to these Che samples you can pick one of them, and the one we selected is this Node.js MongoDB sample. I want to give my teammates the capability to create this automatically, maybe with the OpenShift Connector, without doing it manually. How? You can fork it into your organization, so I'm going to fork this repository here; it's going to be forked into our Red Wine Software organization. And what I will do is add a devfile here. CodeReady Workspaces, with a mechanism called a factory, will create the workspace automatically when you click a button, and I can show you that it's really nice, really easy. If you remember, we just have to go back to our devfile here; we just have to change the source
type to git, not using the zip file; everything else stays the same. So, working in our fork, we just add a new file, the devfile.yaml. It's a YAML file; we can copy the YAML file here and start modifying it. We say that this is git, and we want to start, for instance, from this repository itself, so we can keep the URL of this one here. Let me add this, and we can just copy this one, so the location is going to be this one. We start with a project name; it's going to be something like nodejs-mongo, for instance. So if somebody is doing this themselves: obviously, when you first load CodeReady Workspaces, log in and get the sample. You created a Node.js sample, so create one for whichever language, there were all those choices, right? Start off with that, and then you'll be able to easily switch out this part over here, right? Exactly, exactly. So once you have the devfile here, the nice thing is to just put a button in the README that will start this automatically for you. In order to do this, we can just copy another example, and I can show you, for instance, the one I made for the same sample last time. Let's go to my repo; I already did it. Let's go to the same example here. What I did is just add this button here: when you click it, it will create the workspace for you, using the devfile and using the factory. So I can just copy this one and go back to my example here; we can just put it here, for instance. What you have to change: there's this factory.svg, which is the badge image, and for this example here we can just leave the URL, it doesn't matter. But what you do have to do is give the CodeReady Workspaces endpoint, so we have to copy the CodeReady Workspaces endpoint; in our case it's this one. So I'm going to copy it here, and you also have to point to this REST API endpoint, which
is the factory, accepting the URL of the repo containing a devfile; in our case we just copy our repository here. So we just have to go here and copy the repository; let's go back and put the correct URL here, HTTPS. Okay, so this is our CodeReady Workspaces factory file. We just want to click here and start a new workspace as our user. So imagine one of these teammates, Chris or Ryan, can just click here, and when everything is fine with the syntax of the devfile and the configuration of the workspaces, this will load the factory endpoint. The factory endpoint will look at the devfile, initialize the workspace, and start the workspace. Here I'm not allowed to start more than one workspace, because the administrator didn't allow me to, so I can do two things: I can relax the settings in the custom resource, or I can just stop the other one, because I don't need it anymore, and let this one start for me. So we can just go back here and retry. We are logged in here as a normal user, so we see the developer view rather than the administrator view; that's fine. So we can start again; we stopped the other one, so we can restart. If there are any other settings, we can relax them in the custom resource so we can start more than one workspace. What happens under the hood, again, is that it creates a new persistent volume on the server, and when this pod starts you have the workspace starting in your environment, for your user. So this is going to create a new persistent volume for us, and when the persistent volume is okay, we can go back to CodeReady here. Now this will load our devfile, with the OpenShift Connector plugin and the same MongoDB and Node.js code, so my teammates can start working using the OpenShift integration in CodeReady Workspaces too. You probably can't see it, Natali, but everybody else could see that mine is also coming up now under my account. You clicked the button? I clicked the button, I clicked that button, so that I could create this workspace. Yeah, and right now we have very limiting settings for the workspace, so we can relax these settings. I did it, for instance, for a demo with Minikube, which is the same thing but using Minikube. You can relax these settings in the custom resource: you can set the custom Che property that limits the user workspace run count. If we do this (and if no one is using the instance in the meantime, because the Che server will be restarted), then we can start multiple workspaces per user. You can see we can drive, as administrators, the limit count of the workspaces, and give more freedom to the developer team if they need to open more workspaces; and usually they do, they want to test, they want to try. So now we've started the workspace. If you remember, this is our CodeReady workspace; the code is the same, but this time the code actually comes from the Git repository, so we have a Git view here. If we make any change, we'll be able to review all the changes we made, with a live diff, everything a normal modern IDE does. It's very cool that we have the OpenShift plugin integration and the Kubernetes one. The OpenShift plugin integration means that here we can create the application in OpenShift; with the Kubernetes one we can navigate all the kinds and work on them. So we can really deploy this application and make it live. The link from before was just living inside the workspace; if we want to push it into OpenShift, we can do it from here. It's going to create a Node.js application in our project in OpenShift, and then we can deploy the thing. The nice thing is that each developer, me, you, Ryan, Chris, can work in their own project. Here I can work in my project, Ryan works in his project, Chris works in his project. When everything is fine, they test in their own project,
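The "relax the settings" step Natali describes is, as I understand it, a custom Che property set on the same CheCluster resource the operator watches. A sketch of what that might look like; the property name follows Che's convention of mapping `che.limits.user.workspaces.run.count` to an environment-style key, and `-1` is the conventional "unlimited" value, so treat the exact key as an assumption to verify against your CRW version:

```yaml
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  server:
    customCheProperties:
      # -1 lifts the per-user cap on simultaneously running workspaces;
      # note the Che server restarts when this changes
      CHE_LIMITS_USER_WORKSPACES_RUN_COUNT: "-1"
```

This is why the demo hit the one-workspace wall: the default cap applies per user, and only an administrator editing this resource can raise it.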
and maybe they can use the Tekton plugin for CodeReady Workspaces, so we can have a Tekton plugin here, a Tekton pipeline pushing the application to the platform. So this is going to be the flow. Now, if you want, we can deploy the application inside OpenShift, or we can go on to talk about odo. I think the nice follow-up here is that the CodeReady Workspaces plugin for OpenShift, the OpenShift Connector, also gives you a terminal inside your CodeReady workspace. So here I have a terminal, which is different from my Node.js MongoDB terminal, and I have odo, the command line for developers, for pushing applications into OpenShift. I have the oc command line, so I have all the tooling I need. The only thing I might want to do is log in to the OpenShift cluster with the right context; I can verify that here I'm the right user, and maybe create a new project to put my application in. This is very simple. You can also mix strategies. For instance, this repo already has a MongoDB deployment file that you can deploy, so you can use that strategy to deploy to OpenShift, or, much better and easier, you can deploy your MongoDB instance from the developer catalog. It's straightforward: you just go here, you pick the database, in this case it's going to be Mongo, you select the persistent database option, and here you have to put in all the settings needed by the application. For this application, if we want to see how it's made, we can just inspect the code; for instance, we can see the Mongo URI here. It's user and password, very simple, but it's just an example. So if we want to create this, we create the database as it's expected there, and when you do this, you have your MongoDB database ready to host the data and serve the application that you're going to push. The plugin connector is a nice follow-up,
because now we're creating our application through an odo wrapper UI, which is the OpenShift Connector. So we can just create the application; we call it guestbook. We can let this application start from the workspace directly, so we start from this CodeReady workspace and say, okay, grab the code, for instance from the Node.js MongoDB example. This means odo will go there and try to deploy the application, but we have to instruct it with the name of the component and the programming language. So we just tell the OpenShift Connector: okay, this is a Node.js application, please push it to OpenShift. When you push the application to OpenShift, you create your application here, and it will connect to the Mongo application. You can do some grouping here for the application itself; you can play with the arrows and connect things. The application is ready, and we can also access it from this plugin here. It's nice that you can also create an OpenShift route, so you can access the application; we call it guestbook-route in this case. Remember that you always have to push your changes, because odo pushes your changes; this is a UI wrapper for odo. If everything is fine with the application itself, and we can verify this through the logs, whether it could connect to Mongo with the right settings... it connected to Mongo, yes. So if everything is fine, this is our application, which is no longer available only from my workspace; it's available to my team in the cluster. So this is our application that I can work on in my OpenShift environment. I tested it locally; when everything was fine, I pushed it with the OpenShift Connector to OpenShift. And thanks to that devfile, if you remember, I could automate this and make the OpenShift Connector available by default, so when teammates want to start coding, they start from my latest change to the code, and from the devfile, using the OpenShift Connector.
So, did anybody put anything in the guestbook? Yeah, let me check if anyone put something in the guestbook. I can put something; I can click here, let me say... and here we go. Nice, a great message from Chris. Yeah, our inner loop worked out very well, because we tested locally, in terms of the local workspace. The operator gave us the possibility to create workspaces, the persistent volumes, add some storage, and add the CodeReady server and the multi-tenancy. So we implemented this developer loop here, and this was for devfile version 1. We know there is devfile version 2, and I'm going to pass the ball now to Ryan, who can talk to us about version 2, which is introduced by the new version of odo.

You bet. Eleven minutes and thirty seconds, go. We also have a couple of questions in chat that I wanted to get to, so we might have to save some of the devfile 2.0 material for a later follow-up. I can definitely give you a recap of what I tried last night regarding odo 2.0 and devfile 2, but let's check in on chat real quick first so we don't lose these questions. There's a question around persistent volumes and the containers for pods in CRW: how many are needed, what does that look like? Natali, do you know how many PVs are used per CRW session? Is it generally one PV per session, or is a lot of that backed by the devfile registry? So there are a couple of PVs that we can show, actually, if I can start sharing my screen again. JP Dade was mostly interested in figuring out budgeting: how many PVs per session, and how many PVs per user. I think he has 100 persistent volumes available; I don't know how many developers he plans on onboarding, but if CRW uses two or three PVs just on its own, and then your database might also be using a PV... Yeah. I know I was successfully able to go to your GitHub example and click on the link to spawn my own hosted workspace, and it booted up. The first time it failed because I hadn't accepted the invite to
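For reference, a devfile 1.0 like the one driving Natali's workspace bundles the runtime container, tooling, and commands in a single YAML file. The sketch below is hypothetical: the project repo, plugin id, container image, and commands are illustrative placeholders, not the exact contents of the demo's devfile.

```yaml
apiVersion: 1.0.0
metadata:
  name: guestbook-workspace
projects:
  - name: guestbook
    source:
      type: git
      location: "https://github.com/example-org/guestbook.git"  # illustrative repo
components:
  # An in-browser IDE plugin for language support
  - type: chePlugin
    id: che-incubator/typescript/latest
  # The container that build/run commands execute in
  - type: dockerimage
    alias: nodejs
    image: registry.redhat.io/codeready-workspaces/stacks-node-rhel8
    memoryLimit: 512Mi
    endpoints:
      - name: app
        port: 3000
commands:
  - name: install dependencies
    actions:
      - type: exec
        component: nodejs
        command: npm install
        workdir: /projects/guestbook
  - name: run the app
    actions:
      - type: exec
        component: nodejs
        command: npm start
        workdir: /projects/guestbook
```

Because this file lives in the repo, every teammate who opens the workspace gets the same runtime, environment, and commands, which is the "inner loop" consistency the demo relies on.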
the Red Wine Software GitHub org. So I had to go back in, look at my group invites, and approve that yes, indeed, I'm a member of this GitHub org, and then I was allowed in to spawn my session. So security is working, which is good. And I was able to just directly, with one click, boot into a hosted IDE that also had a terminal session in there, with git and odo and most of my tooling. So thank you, Natali, for successfully onboarding me as a junior dev; that all worked. I know you had to do a decent amount of setup live on the stream today, and we skipped over a little bit. That's partially the other thing JP was asking about in chat: what other work is involved in successfully rolling out this experience to a team? Because I'm not sure our current documentation goes as far as connecting all the dots quite as well as what you've shown today, to the point where all they have to do is click on the GitHub README and they're in business. Right. I think a setup show would be a great show; that whole process could be a show all by itself, and then you could incorporate devfiles v2. We could do that next week if you want. Serena is down for v2... We need to check in with Serena. I'm not sure if she's around in chat; I think she had first rights to the topic for next week. Absolutely, she knows best. She says, "I am in next week." I don't want to stomp on, or overrule, whatever topic she already had in mind. I mean, there are like two or three brewing right now, so, you know, Serena's been watching. We can talk in chat for sure, but we're not stepping on any toes. Serena's down for the cause, right? Well, I don't know; she has things on the roadmap where it's like, hey, this is about to roll out and I need to talk about it at the start of September. You know, she's from the future. I know Serena also knows that she could get, like, future times from me if she wants. Yeah, yeah, yeah.
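The one-click onboarding Ryan describes is typically done by embedding an Eclipse Che / CodeReady Workspaces factory URL in the repository's README, so clicking a badge spawns a pre-configured workspace from the repo's devfile. A hypothetical markdown snippet, where the CRW host, org, and repo are placeholders:

```markdown
<!-- README.md: one-click hosted workspace for new team members -->
[![Open in CodeReady Workspaces](https://www.eclipse.org/che/contribute.svg)](https://codeready-workspaces.apps.example.com/f?url=https://github.com/example-org/guestbook)
```

The `/f?url=...` factory endpoint points the workspace server at the git repo; if that repo (or org) is private, the developer first has to authenticate and accept the org invite, which is exactly the security check Ryan ran into.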
All right, well, definitely stay tuned for follow-up episodes. One thing that I can potentially demo, or that I would challenge you to try out on your own if you're up for it and interested in learning more about devfiles: I posted a link to a GitHub issue at the very start of this, and I'm going to re-post it in chat. It has the devfile 2.0 roadmap; the URL is github.com/devfile/api, and then "milestone 1" was the rest of the URL. So that shows what's going into this devfile 2.0 release. I know odo 2.0 has been released, I think... let me double-check on that; it may be in an alpha state. I think the latest one I grabbed was odo 2.0 alpha. Yeah, it looks like we're still on alpha releases for 2.0, which would make sense, because the devfiles are still in progress for 2.0. But odo 2.0 is targeting these devfile-2.0-formatted devfiles, so that's what we're working on out into the future. Some of that, UI-wise, won't be available until OpenShift 4.7, I think, which is where it's currently slated feature-wise; it's all a future project and subject to change. But if you want to get a small taste of the future, this is what I did last night. I went into OpenShift 4.7 and picked the top-right option, "Developing on OpenShift", and then after that you get into a submenu where you can pick "Developing with odo" as the second top-right selection. That will automatically drop you into a Katacoda scenario that helps you; it's different than using CodeReady Workspaces. But what I did was go to the releases page, follow the instructions for the odo release, and copy in the latest odo binary so I had odo v2, and then I went ahead and followed along with the rest of the OpenShift Katacoda learning scenario. It all worked perfectly. There was one extra command-line flag I needed to add: a --s2i on the Java build. That was a new flag that's been added, but
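Ryan's late-night experiment can be sketched as a short terminal session. This is a hypothetical reconstruction: the download URL and version tag are illustrative (check the odo releases page for the current binary), and it assumes you're already logged in to a cluster.

```
# Grab an odo 2.0 alpha binary from the GitHub releases page (version illustrative)
curl -L https://github.com/openshift/odo/releases/download/v2.0.0-alpha-2/odo-linux-amd64 -o odo
chmod +x odo && sudo mv odo /usr/local/bin/

# Confirm you're on a 2.0 alpha build
odo version

# With odo 2.0, classic s2i-style components need the new --s2i flag;
# without it, odo expects a devfile-based component instead
odo create java my-java-app --s2i
odo push
```

That `--s2i` flag is the one extra change Ryan hit when re-running the existing learning scenario; otherwise the 1.0-era commands carried over unchanged.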
otherwise, the odo 2.0 alpha seemed to work fully backwards compatible with odo 1.0. A new feature I saw was that you can configure odo 2.0: you can run, I think, `odo registry`, and then you can add a devfile registry address, and odo will work against that devfile registry to load, store, save, and interact with those devfiles. I was almost expecting one to get output to disk or something; I didn't see that, but it almost makes sense to do that. Architecturally, it's a little nicer to keep the IDE configuration somewhat outside of the source code, especially if I'm dealing with microservices. I may want to decouple my IDE setup, especially since a lot of people tend to configure that per user; my IDE setup might not be your own. But it's definitely nice having a link right in the README that I can click on to load an IDE, and then hopefully merge some of my preferences as well. So thanks again, Natali, great demo today. We'll be back every week, same time, same place. Thanks for joining us this week, thank you to the folks in chat who had questions, and we'll be back next time as well. Anything else to close us off with, Chris? No, stay tuned for OpenShift Commons, starting in 25 seconds. All right, thank you all so much, see you. All right, thanks guys.
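For viewers who want to try the registry feature Ryan mentioned, the odo 2.0 registry workflow looks roughly like the session below. The registry name and URL are placeholders, and subcommand details may shift between alpha releases.

```
# List the devfile registries odo currently knows about
odo registry list

# Add a custom devfile registry (name and URL are placeholders)
odo registry add my-registry https://my-devfile-registry.example.com

# Browse the devfile-based components those registries provide
odo catalog list components

# Scaffold a new component from a devfile in a specific registry
odo create nodejs my-component --registry my-registry
```

Pointing odo at a shared registry is what makes the decoupling Ryan describes possible: the devfiles live centrally rather than being copied into every microservice repo.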