Good morning, I'm Shobhik. I work for Red Hat as an engineering architect and lead for the Red Hat developer group. My job is to lead teams that ensure developers are empowered to build tooling and applications without having to worry about things that shouldn't be their concern and should be done for them. For example, OpenShift and Kubernetes: we know they're powerful, but they're complex, and there's a huge learning curve. If you knew how to use them well, if you had the right tools, you could build really powerful applications with them. My presentation is on how you can develop on OpenShift using your VS Code IDE. Don't we often have this problem: hey, this works fine when I ran it locally, but when I put it in a container and ran it on OpenShift or Kubernetes, it didn't work quite so well? So the problem we wanted to solve with containers, Kubernetes, and OpenShift came back, which means you still have the problem of some things working only locally, because you didn't have the right tooling to emulate what a deployed environment would look like. And then, yes, you did have the problem of ensuring your code changes can quickly be live on a cluster. The typical method for that would be: you write code locally, you do a buildah build or a Docker build to create an image, you push it somewhere, and then you go and pull it and run it on your cluster. So for a small code change, you would actually have to do all these steps to see it running live on OpenShift, or Kubernetes for that matter. That looks like a solution nobody would realistically use because it's too long, so what they would do instead is work on a project for three months and throw it over to the DevOps folks to try out and run on OpenShift or Kubernetes. So what we do is work on multiple ways to take your source code and create a container out of it.
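That slow loop looks roughly like this on the command line (an illustrative sketch; the image name, registry, and deployment name here are made up):

```shell
# Classic inner loop: build locally, push to a registry, roll out on the cluster.
# Every one of these steps is what the tooling below automates away.
docker build -t quay.io/example/billing:dev .
docker push quay.io/example/billing:dev
oc set image deployment/billing billing=quay.io/example/billing:dev
oc rollout status deployment/billing
```

For a one-line code change, that is four commands plus a registry round trip, which is exactly the friction the talk is about.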
One way is called source-to-image (S2I). Effectively, we have a nice framework where you say: here is my source code, and it's Node.js source, so we take the Node.js builder image, smash those two together, and say, here is the application container image we just created for you, with all the checks and balances in place, and it can now be deployed in your environment. This framework lets us do versioned images just like you would for anything else, like your code. Then what we did is connect that with VS Code, so you can stay within VS Code and not have to leave the comfort of your developer tooling to use this. So let's see how it goes. What we effectively do, which I'm going to show you in a minute, is that you can run OpenShift somewhere remote, or on Minishift on your laptop. You can use kubectl or the oc command line to talk to it, and then there is another tool on top which we built, called OpenShift Do (odo), which abstracts away all the Kubernetes and OpenShift trouble for you. To deploy an application and reach it in OpenShift, you would normally have to do a build of that application, push it somewhere, have a deployment that uses that image and creates pods for you, then create a service, and then create a route out of it. That's really a lot for a developer who just wants to get started and get something running. So we built a solution where we said: okay, you're in your VS Code, you just say "push to OpenShift" and we do all that for you; then you say "hey, get me a URL" and we create a service and a route and wire them up with your pod. Let's take a look at a demo of that. The OpenShift Connector sits nicely in your VS Code; it's an extension you can install for free from the marketplace.
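Under the hood the extension drives the odo CLI, so the same flow can be sketched from a terminal. This is a sketch assuming odo v1 syntax; the project, component, and Git URL are stand-ins:

```shell
odo project create boston-one                                      # namespace/project
odo create nodejs billing --git https://github.com/example/billing # S2I component from Git
odo push                                                           # build and deploy on the cluster
odo url create customer-facing --port 8080                         # service + route, wired to the pod
```

Each line replaces one of the manual steps listed above (build, deployment, service, route), which is the whole point of the tool.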
It detects if you have a cluster you're already connected to; if not, you can connect to one using the usual Kubernetes/OpenShift ways of connecting to a cluster. So what I'm going to do is start by creating a namespace, or project; let's call it boston-one. Then I'm going to say I need a new application. Now, we have the concepts of applications and components. An application is, let's say, your Power Circuit Management application, which can have tons of services inside it: a billing service, a customer-agreement service, a maintenance service. So an application is an aggregation of multiple microservices which together solve a business case, I would say. So let's say I'm creating an application inside my project, or namespace in Kubernetes terms. I'll call it Power Circuit Management, and I can see I have a new application here. But this is just an abstraction, an aggregation; it doesn't really have anything running yet. So I go and say, hey, I'd like to create a new component there, which means now I really want a workload associated with it. I'd say let's go for a Git repository (you could do this in multiple ways, but I'll start with a Git repository), and I say I have a Node.js application in my version control and I'd like to deploy that. It quickly asks me, okay, which branch do you want to deploy? I said master. Then I come up with a nice name; let's say this is the billing service of that bigger application. For the component type, it automatically gives me a list of (in technical terms) possible builder images it could use to build my application. So I say it's a Node.js application, awesome. Finally, we have the option to pick from multiple versions of Node; I said 10. It also asks me, on the bottom right, do you want to clone it locally, or do you want to just deploy?
And I said, I'm fine, I'll just go ahead and deploy the application. So what I'm doing, which is printed out here in the logs below, is effectively creating a component. What that does is check out your code, take the source, and create an image out of it. You didn't have to worry about a Docker build. You didn't have to worry about learning what containers do. You didn't have to worry about OCI or anything else. You just gave it your source code, said it's Node.js source of version 10, and it did the rest for you. So let's quickly take a look at that namespace and project in my OpenShift cluster. I can see something got deployed here, and if I click on it, I can see it's billing-power-circuit-management: the billing service inside this application, which is why it appended the application name. It has a pod. It is running via a deployment config, if you're familiar with OpenShift. And if you look at the resources, yes, it has created a Kubernetes service as well. That's it. It did a bunch of things for you without you having to explicitly tell Kubernetes to do any of them. Now I want to try out what my application looks like. So I say I want a new URL, and what that effectively does is go and create a route for me. I'll call it the customer-facing URL, so that I know I could possibly have other ports which should only be used internally and not by customers. So I name it customer-facing. And then right here, I can click and it takes me to the deployed application, which I just took from source. All I did was say: I want to deploy this source code, it's a Node.js project, and it's version 10. It created the deployment config (or deployment) that deployed the pods, and it created a service out of it.
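Behind that one "create component" action, the objects odo created look roughly like this. This is a trimmed, hypothetical sketch, not the exact generated manifests:

```yaml
# Service exposing the billing pod inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: billing-power-circuit-management
spec:
  selector:
    deploymentconfig: billing-power-circuit-management
  ports:
    - port: 8080
      targetPort: 8080
---
# Route created by the "new URL" step, exposing the service externally
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: customer-facing
spec:
  to:
    kind: Service
    name: billing-power-circuit-management
```

Plus a build config and deployment config that the tool manages for you; none of this YAML had to be written by hand.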
And then I said I want to be able to access it using a URL, so it went and created an OpenShift route for me. All good. Now I can repeat the same thing inside the same application; let's say I deploy another, different component here, and I could keep doing this on and on. Let's say this one is a grievance service, and again it's a Node.js application. And I said, no, I'm not interested in making changes to the code, I just want to deploy it. So that's what it goes and does: it creates everything for me, and then I can go in and see there's a second component which has shown up here. And if I'm a developer who also knows Kubernetes and OpenShift but doesn't want to deal with them all day, I can use VS Code for deploying things and then go in here and dig as deep as I want into the world of Kubernetes. So there you go. I just demonstrated that you can take any source code inside your VS Code IDE and deploy it on OpenShift without having to leave the comfort of your IDE. Now, this is more the inner-loop style of development, not the outer loop. We have two forms of development: inner loop and outer loop. Inner loop is effectively: you're making code changes and you want to see them live at some URL. Outer loop is effectively: you're making code commits that hit your CI solution, which tells you your code is good, and then you choose to deploy it; that becomes your CD. Today we're focusing on the inner loop, which means I have source code and I want to be able to quickly iterate on top of it. One of the first things I want to do is take a nice GitHub project and quickly deploy it to OpenShift from my VS Code to see if it works. We saw that it works, and now what we'll do is try to wire things up, make some mistakes, and then see how we can come back and get it running again.
So let's say we have the grievance service here which we just deployed, and I think it needed a database. Let's see how it goes. Let me figure out a way to reach the application; again, a customer-facing route. Yeah, it's a typical fruit-stock-management application. Let's add a fruit here called mango, and the moment I try to save it, it gives us a weird error. If I go and dig into the logs of this pod, it says it had trouble connecting to 127.0.0.1 on port 5432. So the obvious guess is that it's trying to connect to a PostgreSQL database, and it doesn't have one. Let's see how a developer would go about fixing this. Let's say I'm in my own namespace here, a fresh namespace where I'll deploy things from scratch. I go back here; I have my project, and I did a git clone, so I have the source code with me now. I enter the directory, my code looks good, and as a developer I open the code in VS Code. I have it here. I'll quickly look at my OpenShift cluster. Right, I can access the OpenShift cluster I was using in the other instance of VS Code, and let's say I'm in a developer namespace, dev-bos for example. Then I say, hey, I want to create a new application, which is nothing but a workspace for my stuff to go in. Then I say, hey, I want a new component, and I'd like to use the workspace directory for it, which means I'm not going for a GitHub URL; I have source code locally and I want to use that. That's it: it found the source code we used, and it's the same flow again; let's call it fruit-stock-manager.
It turned out to be a Node.js thing, and I said version 10. So what I'm doing is starting with the initial deployment of my application in a separate workspace, seeing it fail, and then trying to fix it. It's taking slightly longer because it's a slightly more complex application than the first one. In the meantime, let's visit the namespace I just created. Yeah, I can see things moving here, and in the topology view I can see something has worked out; it looks like the application has almost been deployed. Meanwhile, I can look at the different resources associated with it. There's a pod here, and you can see a bunch of things mounted on it. This is effectively your VS Code plugin injecting a few things there to make it quicker for you to write code locally and sync it to a volume on your cluster, instead of having to really build it, push it to an external registry, and then pull it. So that's what it does. Right, it says all changes have successfully been pushed to this component, so let me simulate the production issue we just saw. I create a dev URL, and that should create a way for me to access my application, which I know doesn't work for now. So I did this, and it doesn't work. What do I do now? I go in and realize, as I showed you, that it's probably looking for a database to connect to, but it doesn't have one. So from within the comfort of my IDE I'm going to quickly deploy my own PostgreSQL database. I go to my workspace, say new service, and there are a bunch of different service templates; I go for PostgreSQL. Then I name it FruDev, just so we recognize what it is. So what it's now doing for you is taking a PostgreSQL image, running it on your cluster, and making it available for you to bind to your application.
The next step is, of course, to bind it, but first we'll wait for the PostgreSQL service to come up. We can monitor that in our cluster. We can see a PostgreSQL showing up here; we could again use some more advanced views. And we can see there's a pod and a service, and it says successfully created, which means my database is up. I get a nice visualization here saying that this is a database called FruDev. But fair enough, having it doesn't solve my problem: I need to ensure my service can talk to my database. So I go here and say I want to link a service. It asks which one; FruDev, because that's the only one there. I said yes. What it's effectively doing is creating environment variables in my fruit-stock-manager pod so that it can connect to the database. So we check, and it says it's linking the service; on the bottom right you can see it's in progress. Behind the scenes, if you're a Kubernetes nerd, it is using an admission webhook to inject those environment variables into your pods and restart them, so that the next time your fruit-stock-manager pod comes up, it has all those environment variables injected. To do so, it creates a secret and basically mounts that secret into your container, and that's how you get your environment variables. Now I would expect stuff to work. It says the binding is done, and, whoops, let me just hit the same URL again. Still nothing shows up; still nothing works. Let's go and see what's going wrong. Here is the pod; I could check logs on my cluster, or I could also go in here and say "show log", and it shows me the log right in my VS Code as well. I don't really have to go to the cluster, but I keep going there anyway.
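The linking step described above boils down to a secret plus a reference to it in the pod template, something like this (a sketch; the secret name, key names, and values are assumptions and differ per service):

```yaml
# Secret created by the link (hypothetical names and values)
apiVersion: v1
kind: Secret
metadata:
  name: frudev-credentials
stringData:
  username: appuser
  password: s3cret
---
# Fragment of the fruit-stock-manager pod template: each key in the
# secret becomes an environment variable inside the container
spec:
  containers:
    - name: fruit-stock-manager
      envFrom:
        - secretRef:
            name: frudev-credentials
```

Restarting the pod after injecting this is why the tool shows the link as "in progress" for a moment.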
So it tells me similar things: that it couldn't connect to the database. Now I'm starting to wonder whether I need some code changes here. That's fine, that should not be a problem. I go to my code and see: hey, this is looking for an environment variable for the database service host. It's looking for a DB username. It's looking for a DB password. It assumes a particular database name. There's a ton of assumptions the code is making. As a developer, this is where I start iterating on my code. The first thing to check is: what environment variables actually got injected? Because they have to match. So I go to my cluster again and quickly do an `env`, or rather `env | grep postgres`, and see what's there. Yeah, I see that a bunch of environment variables have been injected. Awesome, so I should make my code changes based on those. Right, so it provides a PostgreSQL service host, so I go and modify this, and it should be able to read it. Let's try it out. How do I now push my changes there? In a typical world, you would do a Docker build or a buildah build, push it somewhere, then go in there, point at your new image, get it swapped into your pod, and run it. But since I'm using the OpenShift Connector, I just go and say push, and that's it. It takes my code changes, pushes them into my cluster, and gets my pod up and running, and I should be able to see those changes, which may or may not work; we will find out. You can always go back to your cluster and see things happening in real time. It says it has successfully pushed my changes to the component, so I would now expect things to be working. Let's see. Oops, still doesn't work. I do the same debugging dance again, where I go in here and say show me the logs.
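The fix amounts to reading the variable names the cluster actually injected instead of the ones the code assumed. A minimal Node.js sketch of that pattern; every variable name here is an assumption based on the demo's `env | grep` step, so check your own pod for the real ones:

```javascript
// Build the Postgres connection settings from injected environment
// variables, with local-dev fallbacks. The names POSTGRESQL_SERVICE_HOST,
// POSTGRESQL_SERVICE_PORT, and DATABASE_NAME are assumptions; run
// `env | grep -i postgresql` inside the pod to find the real ones.
const dbConfig = {
  host: process.env.POSTGRESQL_SERVICE_HOST || '127.0.0.1',
  port: parseInt(process.env.POSTGRESQL_SERVICE_PORT || '5432', 10),
  database: process.env.DATABASE_NAME || 'sampledb',
};

console.log(`connecting to ${dbConfig.host}:${dbConfig.port}/${dbConfig.database}`);
```

The username and password would be read the same way from whatever keys the linked secret exposes, rather than hard-coding them.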
And it tells me: it no longer complains about not being able to reach the database; now it complains that the authentication doesn't work. So I'm saying, hey, come on. What I do here is go back to my Node code and see: hey, what environment variables were you given for the username and password? And yes, there does seem to be something called username, which is why I'm guessing there's something like password as well. So one environment variable carries the username and the other the password. And in my connection string, I see another place where things can go really wrong, which is the name of the database. Let me try to find it. Oh yeah, it says the database name is sampledb, which means I need to read it from this environment variable. So I'll just do something messy here: I'll copy-paste this, sampledb, awesome. The stuff you do when you're trying to get code working. So I made some more changes, and then I'll again push them, instead of doing five or six different steps to get my application running on OpenShift. Because my focus is to get my code running, not to become an OpenShift expert as part of this. What this will again do is sync the changes into the same container, do a build, and get it running. And the reason it is quicker than usual is that it doesn't actually do a build on your local system. The main reason it doesn't do that is that it's time consuming: if you're using a Docker socket, it has to send the build context to the Docker socket, build the code, and then you need to push the image to a registry, and the cluster has to pull it from the registry. None of that happened, which is why this is quicker.
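From the CLI side, that fast iteration step is a single command (odo syntax; `odo watch` is an optional addition that fits this loop):

```shell
odo push    # sync the changed source into the running container and restart it
odo watch   # optional: re-push automatically whenever a file changes
```

Because the source is synced into the existing builder container on the cluster, there is no local image build and no registry round trip per change.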
So let's say we did a bunch of changes, and before getting the scare of whether it's working or not, I'll just quickly do a show log. Yeah, it says the database is ready, which means things should work this time. And... things still don't work. I can again check: it uses a username and a password, it has sampledb, it has the PostgreSQL service host. So things should work. Let's refresh the logs. It looks like there is still a problem with the authentication. So I go back here and say, hey, come on, what is the password? So I set this as the password, awesome. The username looks good, the database host looks good. So probably I was just hitting a wrong URL. I hit the right one, and I see my data has now been picked up from the database, and I can interact with this as a create-read-update-delete application. So let's say I don't see a mango here; I'll add one with a quantity of, let's say, 78, because I really love mangoes. And there you go, I can see the change reflected in my application. So what we just did is debug a Node.js application which was having a hard time connecting to a database. We pushed those changes as I worked inside the IDE, my application is live, and those problems are gone. I can see my Node.js application talking to a PostgreSQL database. So, in a few minutes, we showed that within your VS Code you can interact with your OpenShift clusters, deploy applications there, deploy services from the service catalog, and link them together. That's it for my demo; I'll move back to the slides. So, the project is open source. It's basically a combination of two projects. One is odo, which is short for "OpenShift Do" and is also a CLI tool, which means you can use it outside VS Code as well.
It's like Draft to an extent, I would say, but the whole point is that as a developer who doesn't want to learn OpenShift and Kubernetes, you can avoid doing that and still deploy your applications on OpenShift. And we have the OpenShift Connector, which effectively helps you connect to OpenShift clusters and underneath uses the same open source project. You can go submit issues if you find them, you can contribute, and you can see what our roadmap for the project is. Thank you. Any questions?

Hi, my name is Minisha. As you are updating the configuration of your OpenShift cluster, is it also gathering all those configuration files, so that when you need to take this to production you already have all of that?

Yes. Behind the scenes it used a builder image which is also production friendly. Red Hat provides a bunch of builder images which are validated and constantly updated, which means the builder image itself is secure, and we use that same builder image to deploy here. So if tomorrow you take this source code to production, you just say, hey, I used a Node.js builder image; let's take this and deploy it. What we do not have as part of the configuration in this demo is an export of the routes and services we created. The only thing you can take from here is how to build an image and run the same image in production.

So it is gathering all those configuration files and whatever that is?

Yeah, it's actually not even a YAML file, I would say. In a more production-friendly scenario, in OpenShift or Kubernetes, it's basically a build config where you say: here's my source code, here's a builder image, now go build and deploy this. And that's the same thing we did here. Thanks.

Thank you. Any more questions, or should we call it a day? Thank you.
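The "build config" mentioned in that last answer looks roughly like this (a trimmed sketch; the names and Git URL are hypothetical):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: billing
spec:
  source:
    git:
      uri: https://github.com/example/billing   # your source code
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:10                         # the S2I builder image
  output:
    to:
      kind: ImageStreamTag
      name: billing:latest                      # the image you'd promote to production
```

Point the same builder image at the same source in production, and you get the same image the demo ran.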