So hello all, this is Varsha Sharma and I have Zeeshan with me. We will both be presenting a topic, which is odo, a developer-friendly command-line tool. So let me share my screen first. Let me know if it is visible. Yeah, is it visible, guys? You can see it. Okay. Thank you so much. Zeeshan and I both work at Red Hat, and we are based out of India. I work as a technical engineer in the middleware domain, and Zeeshan is an odo developer; he likes creating cool stuff with code. So today we'll be presenting odo, a developer-friendly command-line tool. It makes life easier for developers who are not very familiar with Kubernetes and the OpenShift Container Platform: it abstracts away the core concepts of Kubernetes and OpenShift and helps developers focus on what is important to them, that is, the code. You'll figure out how it improves a developer's life in this upcoming session. We'll cover this talk in three parts: we'll discuss odo installation, its architecture, and we'll show you a short demo of how you can deploy a sample Node.js application using odo. So, how does odo improve the developer workflow? A typical workflow starts with writing code on your local system and ends with containers running on an OpenShift or Kubernetes cluster with your source code in them. You can visualize this workflow in terms of an inner and an outer loop. In the inner loop, developers are the primary audience; it can run on your local laptop. All the development activities that are under the developer's control happen here: editing, debugging, compiling, building the code, and so on. The outer loop comprises a larger team.
It consists of all the steps the code has to go through on its way to the cluster, like integration tests, compliance checks, building, and deploying to staging. Several of these steps are often automated using a CI/CD system. When, in the inner loop, a developer reaches a degree of satisfaction that yes, a feature is achieved, the code is transitioned from the inner to the outer loop. A number of steps are carried out for that transition: configuring the OS for the container, packaging the application for the container image, creating the container image, pushing the container image to a repository, and thereafter deploying the application and the services the application depends on. So these are a lot of steps, and some of them are automated by odo. Let's discuss it. odo is basically a command-line tool written in a very simple language, the Go programming language. It provides fast, automated source code deployments. It helps developers focus on the application source code rather than worrying about how the application gets deployed onto the OpenShift or Kubernetes cluster, and it gives you easy, iterative development cycles. It is entirely a local client, so no additional server is required, and it can run from any terminal on Mac, Windows, and Linux. You can simply download the binaries and run them on whichever platform is suitable for you. Then, who is odo for?
So odo is basically a tool for developers who want to focus on their source code, like I said: developers who are not familiar with OpenShift and Kubernetes concepts, who don't have a deep understanding of them, who don't want to create complex YAML resource files, and who just want to see how their code works on the cluster from start to finish. There are other command-line clients like oc and kubectl, but those are more operations-focused. With odo you don't have to worry about how oc or kubectl work; you can easily deploy your application using simple odo commands. It minimizes the use of oc and kubectl; it doesn't replace them. So what does odo do? Like I said, there are a lot of steps while the source code transitions from the inner to the outer loop. odo automates the build configuration, the deployment configuration, the services, routes, and the other OpenShift elements that are required during that transition. And it is designed for quick iteration: for example, it deploys the changes onto the cluster and gives developers feedback in real time so they can make further changes. odo is also provided in IDE integrations like the VS Code OpenShift Connector, the OpenShift Connector for IntelliJ, and Codewind for Eclipse, and it supports Node.js and Java components. And with the new release, which was introduced today, the 24th of September, odo v2.0, npm, Spring Boot, and Quarkus are also supported. Now I'll give you a pictorial representation of how you can deploy an application onto an OpenShift or Kubernetes cluster. In odo, applications are the basic unit; you can say the application name is my-app, the default name, and the application is a virtual concept for grouping components together. Components are the things the application is made of.
Next, you can simply create a component by using odo create nodejs. With odo create, you specify the language or runtime the application is written in, and with the --app flag you can pass the application name. If you're familiar with the Git workflow, odo follows a similar flow: just as git init initializes the local directory where your source code lives, editing the source code locally after odo create is similar to doing a git commit. So odo basically follows a flow like Git. After creating a sample component like the Node.js one, you simply need to create storage for that particular component by specifying a path and a size, and it gets attached to the component. Likewise, you can list the component types available in odo using the catalog list command. And in order to create a service for your component, you can simply use the service create command, for example to create a database, and link that database to the backend component you created. Then, after creating the database and linking it to the backend component, you can create a frontend component for this backend. This is just a pictorial representation of a simple configuration you can do with odo. After creating them, you can simply link the frontend and backend components using a simple odo link command. Then you can create a URL through which your application will be accessed. In odo, everything is in a context. A context is where your code is present; the default is the current working directory on your local system. So this was a simple, one-time configuration which you can do using odo, and after doing it, this configuration can simply be pushed to the repository.
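The walkthrough above can be sketched as a command sequence like the following. This is only a sketch: the component, service, storage, and URL names here are made up for illustration, and the exact flags can differ between odo releases.

```shell
# Create a backend Node.js component inside the application "my-app"
odo create nodejs backend --app my-app

# Attach persistent storage to the component (path and size are examples)
odo storage create backend-storage --path /opt/app-root/storage --size 1Gi

# List the component types available in the catalog
odo catalog list components

# Create a database service and link it to the backend component
odo service create etcd-cluster my-db
odo link my-db --component backend

# Create a frontend component and link it to the backend
odo create nodejs frontend --app my-app
odo link backend --component frontend

# Expose the frontend through a URL
odo url create my-url --port 8080
```

Nothing is deployed yet at this point; as described next, all of this configuration only takes effect on the cluster once you run odo push.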
And after doing all this configuration, you simply need to do odo push. What odo push does is automate the creation of the deployment config and the other OpenShift elements that are required, like the PVC, the services, and the route. It creates everything and pushes the source code into the container: it copies your local code into the container, under a temporary source directory, and after the code is copied, it restarts the application service without killing or rebuilding the container. So this gives us hot reload. Even if you want to alter your local source code after doing odo push, you can simply change the source code and do odo push again to reflect the changes. So this was a brief overview of what you can do with odo. In simpler terms, odo is like a magic wand for developers working on the OpenShift and Kubernetes platforms who do not have a deep understanding of Kubernetes and OpenShift terminology. So, enough of the overview. Next you are going to see a demo where we'll show you how you can deploy a sample Node.js application using odo. I'll hand it over to Zeeshan from here. Zeeshan, it's all yours now.

That was a good overview of what odo does, Varsha. So let's see if I can start sharing now. Let me stop from my end. Yeah. I hope everyone can see the screen share. Yes, it's visible. Okay. So like Varsha said, we are going to do a small demo. We're not going to do something complicated this time, because of the shortage of time. And we're going to use the latest version of odo, odo v2, which got released yesterday. This is the app we're going to deploy: a simple Node.js todo app, and this is what it looks like. It uses etcd at the back to store this information. So let's go ahead and try to deploy this app. I already have odo installed.
So the first command is obviously to create the component. But before that, I can do odo catalog list components. This basically shows what kinds of components are actually available on the cluster: for example, there are devfile components and there are normal S2I-based components available on my cluster. We are going to go with devfile components, because that's going to be the default going forward. The way this works is through registries; there's actually a command to add your own registries and so on. We're not going to get into that in this demo, but that's just for your information. So for now, we're going to deploy a little Node.js app right here. I'll say odo create nodejs and give it an example name: so it's odo create, the kind of component, and then the component name. Okay. What just happened is that it created a devfile in your repository. This is basically a local configuration; it describes what your local development environment should look like. Generally, you will not have to touch this. We're already working on adding commands that can manipulate everything, at least all the main stuff like URLs; the URL commands are there, and commands for the other stuff you can manipulate are coming in future versions. But for now, you can even edit this devfile by hand if you really want to, though in most cases that should not be required. It describes, for example, that this is a Node.js component and how you run the component, and things like that. And it is committable: you can commit it to your repository, anyone else can just pull it and push in the future, and they will get the same environment. There's also an env.yaml file, which is not to be pushed into the repository; it contains information specific to the environment you might have created.
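A generated devfile looks roughly like this. This is a minimal sketch based on the devfile 2.0 schema; the exact image, ports, and commands depend on the stack and registry version, so treat the values below as illustrative only.

```yaml
schemaVersion: 2.0.0
metadata:
  name: nodejs
components:
  - name: runtime
    container:
      # Example runtime image; the real one comes from the devfile registry
      image: registry.access.redhat.com/ubi8/nodejs-12
      memoryLimit: 1024Mi
      mountSources: true
      endpoints:
        - name: http-3000
          targetPort: 3000
commands:
  - id: install
    exec:
      component: runtime
      commandLine: npm install
      workingDir: ${PROJECTS_ROOT}
      group:
        kind: build
        isDefault: true
  - id: run
    exec:
      component: runtime
      commandLine: npm start
      workingDir: ${PROJECTS_ROOT}
      group:
        kind: run
        isDefault: true
```

The build and run groups are what odo executes inside the runtime container when you push, which is why you normally don't need to edit this file by hand.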
So let's go ahead and push this component. The first time, of course, the container and the network objects need to be created, so it can take a little more time. But once the flow starts and the first push is done, you will see a drop in the amount of time the application needs to deploy. You can see that there was a URL defined in that component, and so on; all of that is being created for you. And if I check, you can see there's a Node.js pod running there. So it's deployed. Let's go to this URL. Hmm. This is interesting. Just give me a sec. This sometimes happens with my browser, so sorry about that. This might actually be a problem with the cluster; just give me a second. I have another cluster, a backup cluster, so let me just switch to that instead, in case the demo gods are not with us today. Just give me a second. So let's just curl this URL. Yeah, I can see that the content is there; there's something wrong with the browser, maybe. So what I'm going to do is go with a different app instead. This is something that, again, I prepared for. So let's go ahead and try a different app, a simpler one; this one is not going to be that crazy. The problem here is something I've faced before, where the cluster itself was acting up. So instead of doing a more complicated application, we'll do a simple one instead. Here goes nothing. Okay. There's also a command called odo url list, with which you can see which URLs were created and where. Yeah, you can see that there is a reply from the application. Okay, so it works. Now, in a typical development environment, what would I do? I'd go to the app and change something. Let's say I go to my server.js, and instead of saying this, I'll say blah; I'll add something to it. And then I'll do an odo push.
You can see that the amount of time it took the first time versus the next time is reduced. So if I go back to my app and refresh, I should be able to see the change. Yeah, you can see that whatever I changed is published. Another thing you can do is create services. odo catalog list services is a command that gets the information about services the other way around: it gives the list of all the services that are present on the cluster, and that can include Operator-based services or template-based services. OpenShift has something called service templates, so you can use those if you have them, but for this we're going to use Operators. These are a couple of Operators that exist on my cluster. A caveat right now is that Operator-backed services were added recently, in beta, so it is not completely polished yet, but the basic functionality is there. So how would I create a service? I'll say odo service create, give it the name and the YAML for my requirement, and it should create an etcd cluster service for me. So a service has been created. If I do odo service list, you can see that there is an etcd-cluster-example service. I'll just copy its name and then do odo link. You can link components to each other, and you can also link services to components. Then I do an odo push; the reason it took longer again this time is that this is a config change, since I'm adding a service and linking it. And it's up. In the other example, I could have actually shown you etcd being used; since we skipped that, what I can definitely show you is this: if I get the pods, and this is my pod, for example, I'm going to do an oc exec into this pod, run printenv, and grep for the etcd entries.
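That inspection step looks roughly like the following. This is a sketch: the pod name and the exact environment variable names are made up for illustration, since the real names depend on the service name and on what the Operator author chose to expose.

```shell
# Find the running application pod
oc get pods

# Print the environment inside the pod and filter for the linked etcd service
oc exec example-app-1-abcde -- printenv | grep -i etcd
```

Kubernetes and the linking mechanism inject connection details for the linked service as environment variables, which is what the grep is meant to surface.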
You can see the environment variables pointing to the etcd cluster, which my app can now use to connect to it. These environment variables are basically set based on conventions: even the person who created the Operator can specify what they want the environment variables exposed for the service to look like. For this etcd service, it is just the predefined naming that most people use for an etcd cluster, so that's what it uses. So obviously, if your app can consume these, it can use them and do whatever you want with them. And then there is also a really cool command that was added recently, called odo watch. Let's say I run odo watch. What this does is: you just run it in a separate terminal, and then you can go ahead and start editing your files. Let's say I edit my server.js, add something else to it, and save it; it's already being pushed. You don't have to worry; it happens on its own in its terminal. And that's what a tool like VS Code would do: it would just run this in a terminal, fire and forget. So you edit, you save, it's done, it's pushed. There is actually a slightly known bug where it sometimes pushes twice, but it finally gets your changes out. So if I refresh this... okay, it's something to do with the demo gods, I guess. We don't have time to sit and debug this. So yeah, I guess that's it. Let's continue with our presentation then.

Varsha, you can continue the presentation.

I don't have it open right now; I'll share it. Is it visible, Zeeshan?

Yeah, I think it is.

So yeah, like I said, odo automates several of the processes when we transition the code from the inner to the outer loop. What basically happens in the architecture is: while the deployment config is built, containers are created, like the init containers and the application runtime containers, along with the component configuration.
All these configurations are automated: as you saw in the demo, when we do the odo push, it does all of those things in the background. By default, in this recent v2 release, odo uses the devfile as the deployment mechanism for deploying on OpenShift or the Kubernetes cluster; previously there was S2I. If you want to deploy using the S2I deployment method, you just need to specify --s2i and the code will be deployed using the S2I image. You can find the devfile repository contents at the link I have mentioned here. I guess my slide changed; yeah, over here. Let me show you: here you will find the complete architecture for the S2I image, which was previously available, compared to this version, and the odo devfile registry repository. And this is the official site, odo.dev, where you can find all the basics about odo: installing it, and how you can deploy a single-component application or a multi-component application, like the pictorial example I showed you, which was a multi-component setup you can deploy using odo. Next, there is a supervisord daemon process that gets started as soon as we do odo push; it is the first PID in the container. As soon as we do odo push, after creating the deployment config and all those OpenShift elements that are required, it starts the assemble and restart scripts, which are under the control of the supervisord process. So what happens is that the code gets synced, the supervisord assemble and restart scripts run, and the service restarts. Along with supervisord, there are several scripts that the CLI needs to execute in the application container, and these are provided by the init container.
So as soon as we do odo push, the init container starts, and it acts as a vector and does all of these processes in the background. We don't have to worry about how that particular application is going to get hosted on the OpenShift or Kubernetes cluster; we just need to do odo push and it takes care of all these processes in the background. This is how it makes life easier for developers who are not familiar with OpenShift and Kubernetes concepts. With the latest upgrade, the devfile-based v2.0 released yesterday, which I just showed you on the official site, odo.dev, the default mechanism to deploy is the devfile. What are these devfiles? They are basically resource YAML files with a defined schema, and you can find more details about them here; these are the registry devfiles. So you can contribute and help us make this project better; just go to the site and explore it to know more about odo. So yeah, that was it from our side. If you have any questions, please share them with us.

Okay, thank you, Varsha and Zeeshan, for the great talk. If anybody has any questions, just type in the chat panel so that we can feed those questions to them. To begin with, I do have several questions. The first one is: what's the difference between odo and kubectl?

Okay, so kubectl is a very Kubernetes-specific CLI, which basically means that it works directly with the Kubernetes resources. You need to give it a deployment config: kubectl create with a deployment config, and that deployment config is written by you. That is what kubectl is; it's more admin-focused, and you directly interact with the Kubernetes resources. Here, the thing is, you don't see those Kubernetes resources at all; they are hidden in the background.
You give that devfile, or whatever config file, to us, which either you downloaded by doing the create or is your own devfile in your application, and that is converted into the respective Kubernetes resources automatically. So at the end you will still get a deployment config: if you do oc get dc, you will see deployment configs there, you will see pods there, you will see routes or ingresses there, but you will never see them through the odo CLI.

Okay, thanks for the answer. I have another question: I noticed that you are using Node.js, right? We know that Node.js can automatically sync with the changes you have in the code. But is it possible for the web page to automatically sync with the changes, or do you have to rebuild and push again?

Okay, so at this point you would have to do an odo push for the sync, because the code is synced by odo push. It's a Node.js sitting inside the container, not the Node.js on your local laptop, so you definitely need to get the code there, and the only way to get the code there is to do a sync, which is an odo push. But after that, it's just down to what Node.js does. And this applies elsewhere too: Java, for example, also has hot reload, so if you use a Java container, it actually doesn't do a rebuild at all. It's a similar kind of thing: if the container supports it, we support it, because we just push the code to the container and run the respective commands. What happens in the container is the container's responsibility.

Thank you. On the first question, some in the audience mentioned that one main advantage of odo is that the user doesn't need to know too much about Kubernetes when developing their applications.
Although odo handles the Kubernetes resources for you at a high level, sitting upstream of kubectl so to speak, more advanced users can still edit the devfile to get fine-grained control. odo also handles the application-level view and logic for the user, which is captured in the devfile as well. So odo gives you both a broad, high-level view and fine-grained control over the files.

Okay, thanks for the explanations. Any other questions? Let's wait for a couple of minutes to see whether people still have any other questions. By the way, do you know the reason the application failed when you tried to deploy it? Was it the cluster or the application itself?

That's a cluster problem, because I've had this before. I tried this twice before doing it this time: the first time it worked fine, the second time it failed; I redeployed the cluster and it worked after that. So there's something going on with the cluster. It's not a problem with the application, because it works; I know for a fact that it works, because it has worked twice for me. It was something weird that happened to the cluster for some random reason, which can happen; it has happened when I ran through this two or three times before.

Okay, thank you. If there are no more questions, we're going to end this talk here. Thanks again, Varsha and Zeeshan, for the great talk and the OpenShift developer demonstrations; it's very fascinating, and everybody should give it a try after the talk. The next session, I think, is going to happen at 11:30; the speaker is [name unclear].
He's going to talk about Kogito, cloud-native business automation. I'll also type it in the chat channel so the audience can see it. So, if that's all the questions for the first talk, we're going to stop here; just wait for ten more minutes for the next talk to get started. Okay, thank you, Varsha, and thank you, Zeeshan. I have to remove you guys from the video sharing because I want to restart. Yeah. Thank you. Thank you. Bye.