Welcome, please, the next speaker for their talk. Thank you. Good morning, everyone. You have already seen the workflow: how a developer can start, create a container image, and then deploy it to different platforms. In this talk we are going to look at OpenShift. How many of you have used OpenShift? Oh, nice, so a lot of people have. And how many of you have heard of something called OpenShift Local? How many have heard of CodeReady Containers? CodeReady Containers was the old name; it is now called OpenShift Local. As the name suggests, you can create a cluster locally on your system, and it is platform independent: you can create the cluster on Mac, Windows, or Linux and play around with OpenShift. Once you are ready with your application and see that everything is working fine, you can just change your OpenShift context and deploy the application to a production cluster. As for the agenda, I think everybody already knows what OpenShift is. At its heart it is still Kubernetes, with add-ons on top that make developers' and admins' lives easier. Here is a very high-level view of Kubernetes, the components that make it what it is: we have the API, the CLI, applications, and Ingress to create routes and so on. OpenShift has everything Kubernetes has, plus the add-ons I mentioned: the Dev console, templates, BuildConfigs, and so on, all on top of it. So what is OpenShift Local? This is the definition we provide in the README.
The basic idea is that you should be able to run OpenShift directly on your system, just as it runs somewhere in the cloud or on a remote machine, and play around with it. Whatever host operating system your laptop is running, we try to use its native hypervisor, and crc is the CLI that interacts with that virtualization layer to create the cluster, start it, and let you use it. At the top level there are three simple commands: crc setup, then crc start, and then you start using the cluster. It is as simple as possible. Looking a little deeper, there are different components in OpenShift Local: we have a VM driver, and we package everything into a bundle that contains the OpenShift client binary and the VM images. Each command has a clear job: setup prepares the prerequisites on the host, start boots the VM and wires everything together, and with crc oc-env you can start using the oc client binary against that particular cluster. Installation is very simple: you go to the download page, pick your operating system, download OpenShift Local along with your pull secret, run it, and that's it. The link is in the slides, which we will share. Now that the theory is done, we can start with the demo. The demo is a very basic application: a Postgres database and a to-do application that connects to it. We will expose that to-do list locally and check that everything works the way we expect.
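The three-command flow described above can be sketched as follows. This is a hypothetical session assuming OpenShift Local is installed and you have downloaded a pull secret; it needs a real hypervisor, so treat it as an illustration rather than something to paste blindly:

```shell
# One-time host setup: checks and configures prerequisites
# (hypervisor, networking, DNS entries for the cluster domain).
crc setup

# Start the cluster VM; on first run this prompts for your pull secret.
crc start

# Put the bundled oc client on PATH for this shell session.
eval "$(crc oc-env)"

# Verify we can talk to the local cluster.
oc get nodes
```

`crc oc-env` just prints the export statements, which is why it is wrapped in `eval`.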
So, as I said, I have CRC running, and the first thing I usually do is check the status of the machine. It gives me the status of the VM, the status of OpenShift inside it, which OpenShift version is running, how much RAM and CPU the VM is using, and how much disk is used for the cache. I am on 4.11.7 and everything is running, so that's great. The next command is crc oc-env. If you don't have oc, the OpenShift client binary, installed on your local system, this command puts the bundled oc on your PATH, so you can use oc directly without downloading it from the internet. I have that as well, so let's check which server I am connected to. As you can see, I am connected locally: there is a domain name for interacting with the CRC VM, this is the API endpoint, and oc is connected to that API. Checking the different contexts: I am currently using this one, but I also have a CRC developer context and an admin context that I can use, or I can create a new user, because everything is running locally on my system. Now, about the app we want to deploy on this cluster. Let me increase the font; is it visible? I have already put this app on GitHub, and we will put the link in the slides. The interesting part is that it is a Go application that runs this to-do list, and it reads its database settings from environment variables: if the database host is set in the environment it uses that, otherwise it falls back to localhost, and the same goes for the port, user, password, and database name.
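The inspection steps above can be sketched like this; the exact hostnames and context names will differ per machine, so take them as illustrative:

```shell
# VM and OpenShift status: version, CPU/RAM usage, cache/disk consumption.
crc status

# Which API server is oc talking to? For CRC this is a local domain,
# typically something like https://api.crc.testing:6443.
oc whoami --show-server

# List the available contexts; CRC usually provides both a developer
# and a kubeadmin context for the local cluster.
oc config get-contexts
```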
Then, what a developer usually does once the code is ready: they have the repository, and the first step before deploying to any cloud or Kubernetes environment is to create the image. With CRC we also expose the Podman socket, so if you don't have Podman installed on your system, you can run crc podman-env. It exposes the Podman socket to your local system and makes sure the Podman client binary is on your PATH, so you can run Podman on top of the CRC VM. As you saw in the last session, you can then interact with Podman to create the image and so on. I have a very basic Containerfile with a multi-stage build. First I take the go-toolset base image, because our application is in Go, and build it; then I take a very minimal UBI image from Red Hat, copy the artifact from the first stage, a binary called todo, into the second stage, and set that todo binary as the entrypoint. OK, my Containerfile is ready. The way to build it is podman build with no cache, because I don't want any cached layers in between, tagging the image with my namespace and todo-list, and pointing at the Containerfile. It says it cannot connect to Podman, so let me check whether the Podman socket is enabled... right. Let's see if I can list the images. Yes, now it is connected. Let's build it. It starts building, and since I already have the base images in the local cache and everything is running locally, it is very fast: the container image is created quickly.
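The multi-stage build described above could look roughly like this. The base image tags, paths, and registry namespace are assumptions for illustration, not the exact ones from the demo repository:

```shell
# Write a hypothetical multi-stage Containerfile for the Go to-do app.
cat > Containerfile <<'EOF'
# Stage 1: build the Go binary using a go-toolset base image.
FROM registry.access.redhat.com/ubi9/go-toolset AS builder
WORKDIR /opt/app-root/src
COPY . .
RUN go build -o todo .

# Stage 2: copy only the compiled binary into a minimal UBI image.
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY --from=builder /opt/app-root/src/todo /usr/local/bin/todo
ENTRYPOINT ["/usr/local/bin/todo"]
EOF

# Build without cached layers and tag for a (hypothetical) quay.io namespace:
# podman build --no-cache -t quay.io/<your-namespace>/todo-list:latest -f Containerfile .
```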
After creating the container image, we usually push it to the respective container registry. Here I use quay.io; that is the registry I have. Once I push, the image goes to my namespace there, as todo-list with the latest tag. I already have it pushed, so I'm not going to repeat that step. Now everything is in place: I have the image and it is in the registry. The next step is to create the Kubernetes resources: the pods, the deployments, the services, and all those kinds of things, and then apply them with oc create or kubectl create, specifying the files we wrote. I have all those resources in this directory, and everything, starting from the namespace down to the services, is in one file. Let's take a quick look at it. The first resource we create is a namespace called devnation. Once it is deployed, there is a secret that contains all the details: the DB name, DB password, and DB user. After the secret come the deployments. I have two deployments, one for the database and one for my application, so that they can connect to each other: first the Postgres deployment, then the application deployment. So first I create a deployment, then a service, then the second deployment, and so on. Yes, it is a lot of lines, but in the next session you will learn that you don't actually need to create all those resources by hand.
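A trimmed, hypothetical version of such a deploy.yaml might look like this; the names, credentials, and image are illustrative, and the app's own Deployment and Service would follow the same pattern after the Postgres one:

```shell
cat > deploy.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: devnation
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: devnation
stringData:
  POSTGRES_USER: todo
  POSTGRES_PASSWORD: changeme
  POSTGRES_DB: tododb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: devnation
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
      - name: postgres
        image: docker.io/library/postgres:14
        envFrom:
        - secretRef: {name: db-credentials}
        volumeMounts:
        - {name: data, mountPath: /var/lib/postgresql/data}
      volumes:
      - name: data
        emptyDir: {}   # demo only; a PVC would survive pod restarts
EOF
```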
Right now we are going through the typical developer journey, where initially the user has to do all of this themselves: writing the YAML, trying it out, changing the YAML when it doesn't work, and it is very difficult to find these kinds of small issues. In the next session we are going to use something called odo, and you will see that you don't need to write those YAMLs at all: you can take the image directly and create your application with two or three commands. But for this session I wanted to start from the very basics of Kubernetes resources and how to deploy them. So now we have the deployment resources in this YAML, and all we have to do is apply it. Let's see. We don't care about the warnings. But you see a lot of things already exist, because everything was there from before. Let's check which project we are in: OK, we are in the default project. So let's delete the project called devnation and try again from scratch. It says devnation is still there; it hasn't been cleared yet. It takes some time for Kubernetes to remove everything we created before. Now everything is created from the beginning. I am supposed to be in that namespace, so in OpenShift I switch the project, and now I can see all the resources that are part of it: everything is already running. Up to this point, nothing we did is OpenShift specific. As I said at the beginning, at its heart this is still Kubernetes, so you can use the same steps on any Kubernetes cluster, local or remote, and it will look much the same.
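The apply-and-reset cycle above can be sketched as follows; it assumes a logged-in cluster, so it is shown for illustration only:

```shell
# Apply every resource in the file (namespace, secret, deployments, services).
oc create -f deploy.yaml

# Start over from scratch. Deleting the project removes everything in it,
# but the deletion is asynchronous, so re-creating it immediately may fail
# with "already exists" until cleanup finishes.
oc delete project devnation

# Switch into the namespace and list what is running there.
oc project devnation
oc get all
```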
But now, everything is running; how do I actually see my application? How do I access it? In OpenShift we have something called a route for that, and that is what we are going to create. We are going to expose the service, the one belonging to the to-do application, and as soon as I expose that service, a route is created automatically. So now I have a route resource, and if I hit this endpoint, my application is there. Let's add something to it. It is a very basic application: we add items to the DB. For the DB we mount an emptyDir volume, so as soon as the pod crashes, everything is removed. That is fine for the demo, but usually you would add a PV and attach it to the DB so that your data is still there even when the pods restart. Everything is working: I can delete items when they are done, edit the list, and so on. So what we did is create a route and access our application through it. But on plain Kubernetes there is no resource called route, so you can't use that. There you have different ingress controllers: using an Ingress you can create the equivalent of a route and use that instead. And if you don't want a route at all, you can instead forward a particular port: that port is exposed directly from the container, and you can access it just as we did through the route. Let's try that too. Listing the pods, we have two: one is the Postgres pod, the other is the to-do app, and now let's try to forward it.
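Exposing the service as a route can be sketched like this; the service name is illustrative, since the one in the demo differs:

```shell
# Create a route for the to-do service; OpenShift generates the hostname.
oc expose service todo-list

# Inspect the generated route; the HOST/PORT column is the URL to open.
oc get route todo-list
```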
"Port export"... we want to forward this one. One second. Nothing works on the first try in a demo: it's not port export, it's called port-forward. It's good to have some shell history. So what we did is take the port our application is exposed on inside the pod and forward it directly to our local system. If I hit it here, I should see the same app. Still there, everything is working; let's add something like "hello". Right now I am using the forwarded port, but I can also access the app through the route I created, and if I add data here, the route shows the same data, because both use the same DB in the background; it's the same thing, nothing changed. So now everything is done. I have deployed the application on OpenShift locally and I'm very happy with it: everything works, the application does everything it is supposed to do. Now I want to push this application to some remote cluster or production cluster. In the previous session we used something called the Developer Sandbox, so you can create a cluster there, and then the only thing you have to do is run the same oc create with this deploy.yaml. Let me check my font size is set. I also have a Developer Sandbox cluster that I logged into just before the demo, so I switch to that context, and now my OpenShift client points to that cluster; if I run oc whoami, I can see it is the Developer Sandbox cluster. The only thing is that on the Developer Sandbox you get a single namespace, so I can't create a different namespace there; I have to change my deploy script a little bit.
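The port-forward step above can be sketched as follows; the resource name and ports are assumptions, since the demo's exact names differ:

```shell
# Find the application pod.
oc get pods

# Forward local port 8080 to port 8080 of the to-do deployment's pod.
# This blocks; run it in its own terminal.
oc port-forward deployment/todo-list 8080:8080

# In another terminal, the app is now reachable without a route:
# curl http://localhost:8080
```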
What I do is set the namespace to the one I have on the Developer Sandbox. So the only thing I changed is the context: I logged into a different cluster, and I am able to deploy everything there again. Because I had already deployed it, it says this configuration is unchanged, this configuration is unchanged, but the point is that the same application is now running on this cluster and I can use it. Back to the slides: that was the demo. There are free resources you can try out afterwards, and we will put all the demo links into the presentation and share it with you. So yeah, thank you.

Audience: Hey Praveen, a question: are OpenShift Local and Minishift related in any way?

Speaker: When we created Minishift it was mostly for OpenShift 3.x, and the way OpenShift 4.x starts up and is configured is completely different from 3.x, so we can't reuse the same logic that Minishift used. For OpenShift 4 we have OpenShift Local instead, and all of that has changed.
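The context switch and redeploy can be sketched like this; the context name is hypothetical (list yours with `oc config get-contexts`), and `oc apply` is used here because it reports unchanged resources instead of erroring on existing ones:

```shell
# Switch from the local CRC cluster to the already logged-in
# Developer Sandbox context.
oc config use-context my-sandbox-context

# Confirm which user and cluster we are on now.
oc whoami
oc whoami --show-server

# Re-apply the same resources; objects that already match are
# reported as "unchanged".
oc apply -f deploy.yaml
```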