Good afternoon and good morning to everyone present here. I am Mohit Suman, I work at Red Hat as a senior software engineer, and I'm based out of India. I'll quickly start with the presentation I have, because we don't have a lot of time. I would like everyone to ask their questions in between the talk, at that point of time, because I want this to be a very interactive one. This will consist mostly of a few demos. Since these are live demos, things might not run, things might not be as expected, so keeping my fingers crossed, I'll just start with my demo here. The idea behind this talk is how you can deploy any application on top of an OpenShift cluster using your IDE, and the IDE I will be showcasing during the demo is VS Code. I'll go through the different workflows and how they ease the developer experience, focusing mostly on that. So let's start. I will just start sharing my screen. Let's go to the VS Code instance. In VS Code we have different extensions, and from Red Hat we have this OpenShift extension. If you go to the Extensions tab and search for "OpenShift", you can see the OpenShift Connector is there, which is already installed on my system. If you have not installed it, it will provide you a prompt to install this extension. Once the extension gets installed, on the left-hand view panel you will see an OpenShift icon, which basically says that your OpenShift extension is installed. So this is one way to install the extension, directly from the VS Code instance. The other way is to go to the VS Code Marketplace, install from there, and it will prompt you to install the extension.
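If you prefer the command line, the same extension can also be installed with the `code` CLI that ships with VS Code. A sketch, assuming the Marketplace ID of the OpenShift Connector is `redhat.vscode-openshift-connector`:

```shell
# Install the Red Hat OpenShift Connector extension from the CLI
code --install-extension redhat.vscode-openshift-connector

# Verify it shows up in the list of installed extensions
code --list-extensions | grep -i openshift
```

After a reload, the OpenShift icon appears in the left-hand activity bar just as with a Marketplace install.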
So this extension is hosted on the VS Code Marketplace and also on the Open VSX registry, where folks using non-Microsoft builds of VS Code, which have the same interface, can install it too. Once this extension is installed, we will go through the workflow we have. The idea behind using the OpenShift extension in VS Code is that you can deploy your applications, your projects, your components directly on top of a cluster with a few clicks, and you need not switch between different IDEs or different terminals to do that. The first step is to connect to an OpenShift instance. To connect to an instance, we have this login command over here. Once you run this login command, it will prompt you to provide your credentials. As you can see, I'm already logged into one of the OpenShift instances, so it asks whether I need to log into a different cluster. I say yes, and once I do, it provides me a list of API URLs which I'm already connected to. This list basically comes from the kubeconfig file we have. Let's say I'm already connected and won't go there. But suppose you do not have access to an OpenShift instance — then what do you do? We have a provision here where you can just add an OpenShift cluster, and this will provide you three different options to start your OpenShift instance. One is to run a local version of OpenShift using Red Hat CodeReady Containers, which allows you to install and run OpenShift directly from your VS Code IDE. You need not switch to your terminal, you need not go to any other instance just to run OpenShift — it can be done directly from your IDE extension itself. The second one is that you can always go and play with the Developer Sandbox. This is something new provided by Red Hat OpenShift, where you can create your own OpenShift environment directly and work with some pre-configured tools.
And the third one is that you can directly use any of the cloud providers like Azure, AWS, or Google Cloud to run the hybrid cloud infrastructure. So these are the three different options. I'll quickly go through the local one. Let's say I want to run a local OpenShift instance; for this version of the extension, the version of OpenShift will be 4.6.15. If I select this, you see some pre-configured wizard steps where you can just go ahead, and this will basically allow you to run OpenShift quickly. The first of them is to download the OpenShift bundle — it's approximately two gigs of files, so we'll skip that for the demo. Once we have the downloaded file ready, we need to provide the location of the executable. I'll do "select path" and go search for CRC. As you see, I have the CRC macOS bundle already there and I have this executable present, so I'll just select this executable and move on. Next, I have to provide a pull secret. The pull secret is basically a file which maps our configuration to the CRC cluster we currently have. We can download the pull secret from here; if I click it, it opens the pull secret page, where we can download or copy the pull secret. I've already downloaded this file, so I'll go back to my VS Code, select that pull secret file, and open it up. I can even change the different configuration values, like the number of CPU cores. These are the default values which are necessary to run this local instance, but if I change these values, that also works fine. Say I need to increase the CPU cores, or increase the memory, or provide a specific nameserver — I can configure those, but for the demo I'll just keep the defaults.
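The same values the wizard exposes can also be set with the `crc` binary itself; a sketch, assuming `crc` is on your PATH and the resource values and nameserver shown are just examples:

```shell
# Allocate more resources than the defaults (values are examples)
crc config set cpus 4
crc config set memory 10240        # in MiB

# Point CRC at the pull secret downloaded from the Red Hat console
crc config set pull-secret-file ~/Downloads/pull-secret.json

# Optionally force a specific nameserver inside the CRC VM
crc config set nameserver 8.8.8.8

# Review the stored configuration
crc config view
```

These settings persist, so later `crc start` runs pick them up without being asked again.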
So once these things are configured, the next step is to set up the CodeReady Containers instance, which basically configures the whole host environment, file permissions, and so on. I'll just run "setup CRC". This will open a terminal inside my VS Code — you see, we did not switch to our terminal or an external IDE; everything happens inside the IDE itself. Once the setup part is completed, it will prompt us that the setup is done, and then we go to the next part where we just start this cluster; the cluster which will be up and running will be 4.6.15. So this part is just starting. If you look at the exact workflow, the CRC setup part checks different configurations: that it is running as non-root, that the driver for the specific OS is there (it is different for Windows, different for Mac, different for Linux), the file permissions, and other configurations. All these environment checks are done, and it even checks whether the bundle exists or not. Once everything has passed, it says setup is complete and you can now run start, and I say, okay, great, I'll run "start CRC cluster". Once the cluster is up and running — you can see I had already started this cluster so that it becomes easier for the demo — you can see CodeReady Containers for OpenShift 4.6.15 is already up and running. So we have an OpenShift instance running right now from our VS Code itself. We have the credentials we can use to log in: the kubeadmin username and password, and the developer username and password. I'll go here, and in the view you can see the different pieces of information which are present, like what the disk usage or cache usage is, which version of CRC is running, and what the current version of OpenShift is. I can open my console dashboard directly from here.
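Under the hood, these wizard steps correspond to the usual CRC commands; roughly, assuming crc is installed and a pull secret has been downloaded:

```shell
# Validate the host: non-root user, hypervisor driver, file
# permissions, and whether the bundle exists
crc setup

# Start the local OpenShift cluster, passing the pull secret file
crc start -p ~/Downloads/pull-secret.json

# Check the cluster state, disk usage, and cache usage
crc status

# Print the kubeadmin and developer login credentials
crc console --credentials
```

The extension's status view is essentially surfacing what `crc status` and `crc console --credentials` report.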
I can stop my cluster, and I can perform all the operations directly from the IDE itself. Once we are done with this, the next thing is to connect to that cluster, right? Let me connect to a different cluster which I have started. Okay, this is the kubeconfig file which I have had since the start. I'll go here and go to the login part. It will prompt me that I'm already logged into a cluster, so what should I do next? I'll provide a new URL — I'm showcasing this just to show how a new user would do it. It will prompt me for what type of access I want: I can do a credentials login with username and password, or directly pass a token. For now I'll do credentials. The username is kubeadmin, and I need to provide a password; let me just see what the password was — this was it, I'll copy it. As you can see here, it performs the login to this cluster, and once it is done, I can save the username and password so that for the next iteration I need not enter them again. I'll say save username and password — yes. Once this is connected, the idea is to deploy an application on top of this OpenShift cluster. There are two different ways we can do it: one is using devfile support, the other is using source-to-image support. For the demo, I will be using devfile support. For that, I have two different applications in my workspace. One is a React application which performs some basic operations, like checking the weather of a particular city, and runs on top of Node.js. The other one is a COVID-19 application, which we can deploy on top of the OpenShift cluster and which also uses Node.js via devfile support. I'll quickly showcase one of them, because we do not have a lot of time.
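The login step the extension performs is equivalent to `oc login`; for a local CRC cluster it would look roughly like this (the password and token are placeholders):

```shell
# Credentials-based login against the local CRC API server
oc login https://api.crc.testing:6443 -u kubeadmin -p <password>

# Or token-based login, with a token copied from the web console
oc login --token=<token> --server=https://api.crc.testing:6443

# The list of known clusters the extension offers comes from
# the contexts stored in your kubeconfig
oc config get-contexts
```

Either form updates the kubeconfig, which is why the extension can later offer those API URLs without asking again.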
So here, if you go and look at "list catalog components", these are basically the different types of components which are supported here. Let me close this one. You can see Java, Node.js, .NET, Golang, and the other component types which are currently supported for this instance of the extension. By type, I mean, for example: right now I'm deploying a React application, so it uses Node.js as the component type; if I were using some Java application, it would use Java. So let's go ahead and start. The first step is to create a new project. I'll say the project name is devconf-cz. See, there is a notification that the project was created successfully. Next I'll go ahead and create a new component and a new service. By component, I mean the application or the project which you want to deploy on top of the cluster; by service, I mean a different kind of backing service, let's say a database like MongoDB or MySQL, which you connect to that project to do the communication or linking between them. That's the difference between a component and a service. So I'll go ahead and create a component. First of all, it asks me for an application name; I'll say application123. Now there are three different scenarios in which you can create a component. One is a Git repository, where you can directly provide your Git URL. The second one is a binary, where you can just pass a binary artifact as the component. And the third one is the local workspace. For this demo, I'm using the local workspace. I go ahead, and you can see I have three different projects in my workspace, so it prompts me which one I want to select. I'll go and pick, let's say, the React web app, and I provide Node.js. So how does it figure out that it's a Node.js component? It's figured out from the devfile which we have in that project.
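The extension drives these actions through odo under the hood; the equivalent odo v2 commands would be roughly as follows, with the project, application, and component names taken from the demo:

```shell
# List the component types the catalog supports (Java, Node.js, ...)
odo catalog list components

# Create the project used in the demo
odo project create devconf-cz

# From inside the local React app folder, create a Node.js
# devfile component under the demo application
odo create nodejs nodejs-weather --app application123

# Push the local sources to the cluster
odo push
```

Running these in a terminal does exactly what the extension's context-menu actions do, which is handy when debugging a failed push.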
I'll go back to that project and show how it does that, but for now, let's name it Node.js — or rather nodejs-weather, just to differentiate. Okay. Now, as it is created, it's in the "not pushed" state, which basically means it's not in the cluster yet; it's on your local system. To push it on top of the cluster, you just have to do push. Once you do push, a terminal will open and it will run the push commands against the OpenShift cluster you are connected to. You can see in the terminal that it's validating the devfile and waiting for the component to start. Meanwhile, as this is going on, you can also follow the logs. This will open up the log view in our IDE itself; whatever happens inside the cluster while the component is being pushed, these logs will automatically be updated in this view. While this is happening, let's go back to this repository. If you look at this React web app which we selected, we have a devfile. This devfile mentions that it uses a Node.js starter project, and the metadata name is "nodejs" — that's why the name of the component was automatically pre-filled as Node.js. If there had not been a devfile in the project, it would have prompted us to select which type of devfile to use — whether it's Java, or Node.js, or a source-to-image scenario — and then we could have proceeded. As the devfile is already there, the extension is smart enough to figure it out and deploy that type of component. So let's wait for the deployment to happen. You can see the different steps being done: it performs npm install, npm run debug, and pushes.
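To make the devfile lookup concrete, here is a minimal devfile of the kind the extension reads. The exact contents of the demo project's devfile are not shown in the talk, so treat the starter-project URL, image, and port here as illustrative:

```shell
# Write a minimal Node.js devfile; metadata.name is what pre-fills
# the component name in the extension
cat > devfile.yaml <<'EOF'
schemaVersion: 2.0.0
metadata:
  name: nodejs
starterProjects:
  - name: nodejs-starter
    git:
      remotes:
        origin: "https://github.com/odo-devfiles/nodejs-ex.git"
components:
  - name: runtime
    container:
      image: registry.access.redhat.com/ubi8/nodejs-12
      endpoints:
        - name: http-3000
          targetPort: 3000
EOF

# The endpoint entry is what gets turned into an OpenShift route
grep 'targetPort' devfile.yaml
```

The `endpoints` section is the part the next step relies on when the route is created automatically.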
So now these changes are pushed to the component, which means this Node.js application is present on my OpenShift cluster, and when we expand it, you can see the route was automatically added. This route gets created from the devfile: when we open the devfile, you can see the endpoints are already mentioned there. Based on that endpoint, the route is derived and created in the cluster. Let's open this route and see what's there — whether the application is running, or something has broken, or what the scenario is. Okay, so something has not been deployed. Okay, great — it happens in live demos, but let me just figure it out. The other way is to see what was broken. What we can do is open the console dashboard directly from here, and it will open our dashboard. Let me just copy the password and log in. The idea is that if something breaks or something is not working, you can at any time go to the console dashboard directly from your IDE itself. As for the other two views, the watch sessions and the debug sessions: if the component is already pushed and you go ahead and do watch, the watch view will be populated, and if you do debug, the debug view will be populated. And as I mentioned, this is one type of component which has been created. Say I have also created a service, based on, for example, MongoDB — then we can link two components, or link a component and a service, and those scenarios can be configured. Let's go back and see if it opens up. Great. So this was an example where I could showcase how to deploy a React application — this was a weather application. In the same way we can deploy the COVID-19 application on top of it. The idea is that you can deploy any type of application on top of OpenShift within a few clicks and just test out how it works. So if I go here into the developer view — I'll skip ahead; let this get opened.
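When you need to poke at what was actually created — the route, or the linking between components and services — the rough CLI equivalents are (the service name is a placeholder):

```shell
# List the URLs/routes odo created for the current component
odo url list

# Or inspect routes at the cluster level for the demo project
oc get routes -n devconf-cz

# Link the current component to a service, e.g. a MongoDB instance
odo link <service-name>

# Re-push automatically whenever local files change
odo watch
```

This is the same information the extension's tree view and watch/debug panels surface, just from a terminal.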
So let me search for my project, devconf-cz. It will take some time to open. Okay. So we see this application which we created, nodejs-weather — this is the application which got created. Let's see if something is broken. Okay, the pod is still starting, so that's why it's not up yet. Let me check. This is how you can verify and figure out if something is broken or not working, directly from your IDE itself. But there are no errors shown here, so it should have worked. Say we need to push it again: what we can do is just do undeploy here. It will ask if you want to undeploy this application; I'll say yes. Once it's undeployed, you see it changes to "not pushed" — it's not in the cluster any more, it's back on the local system, and from here I can do a push again. The difference is that in the not-pushed state there are only a few actions available to the user, but once you have pushed the component, you have other options like linking, watch, debug, and other scenarios you can do on top of this component. So let's try to push it again. Once we do, the terminal opens and starts printing all this information, and I can again follow the log just to see what's happening — you see this gets updated, the streaming is always on. If anyone has any questions — well, there are no questions yet, but if you have some, please ask, because we have five minutes left. Yeah, you can go on. I'll just copy-paste the URL the project is hosted on so that folks can try it out directly. This is the repository link where the code is hosted, and this is the link to the marketplace. So if you see, the component is again successfully deployed; these are the same steps as before, and you can see the log streams which are there. Okay — so you can see the error is because it's missing a debug script.
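The "missing a debug script" error comes from the devfile running `npm run debug` against a package.json that has no such script. A hedged sketch of the kind of fix involved — the `server.js` entry point and debug port are assumptions, not taken from the demo project:

```shell
# Add a "debug" script so that `npm run debug` succeeds
cat > package.json <<'EOF'
{
  "name": "nodejs-weather",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js",
    "debug": "node --inspect=0.0.0.0:5858 server.js"
  }
}
EOF

# Confirm the script the devfile expects is now present
grep '"debug"' package.json
```

After committing a change like this, another push should let the `npm run debug` step complete.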
So basically we need to change that in the project, and once that change is done, we can push it again. I think it will not work again right now, because the project itself still needs that change. Yeah — sorry for this; it happens during live demos, there are scenarios like this. But as you see, the idea is to improve the developer experience, where the user can perform all the operations related to deploying, verifying, testing, and seeing the logs, debug, and watch on every component on top of OpenShift, directly from the IDE and the extension itself. It also has support for switching contexts. By switching contexts, I mean whatever context is there in your kubeconfig file — instead of logging into that cluster again, you can just switch to that context and it will automatically be updated. And one important scenario: for users who do not have any specific access to an OpenShift instance, this extension lets you run an OpenShift instance directly. As I mentioned, if I go here again, since we already had a CodeReady Containers instance running, it automatically understands: okay, I already have a CRC instance running, what is its status — and the status is still running. I can stop it, I can refresh it, or I can open the console dashboard. So where do you find this configuration? In VS Code, it's Command+Shift+P on Mac, then "Open User Settings", and you just type "openshift". You will see that this stores the information: the location of your CodeReady Containers binary, the configuration for CPU cores, the memory, and the location of your pull secret file. All of this is stored in your VS Code settings.
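Context switching maps onto standard kubeconfig operations; the extension is doing the equivalent of:

```shell
# Show all contexts stored in your kubeconfig
oc config get-contexts

# Switch to another context without logging in again
oc config use-context <context-name>

# Verify which context is now active
oc config current-context
```

Because no new login happens, this only works for contexts whose credentials in the kubeconfig are still valid.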
Once you run this wizard for the first time, you need not run it again the next time; it will automatically figure things out and start your CodeReady Containers directly.