Thank you so much. We move on to the last session, which is about accelerating and modernizing application development with the OpenShift Developer Console (ODC) and odo, with Jai and Parthvi.

Hello everyone, good afternoon. I'm very excited, and nervous, about this talk. In this talk we are going to cover how you can accelerate and modernize your application development with the OpenShift Developer Console and odo. It's a mouthful, I know, but we couldn't help it. My name is Parthvi and I work as a software engineer at Red Hat, specifically in Red Hat developer tools, so I'm one of the developers of odo. What odo is, we'll get to in a bit. I have a background in quality engineering; I've worked on products such as Red Hat CloudForms and Red Hat Insights. I have a huge fondness for dogs, and I have a beautiful six-year-old Labrador we can talk about if you want. I would like my co-speaker to introduce himself and tell you a bit about the agenda for this talk. Over to you, Jai.

Hello everyone. I'm really excited to be here in front of you all and talk about how we provide an enhanced experience for developers. Briefly about me: my name is Jai and I'm part of Red Hat, working as an engineering manager on the OpenShift Developer Console. I contribute to projects like OpenShift Console and OpenShift Dev Console, and my area of interest is anything related to web UIs or Kubernetes. In today's agenda we are going to talk about the developer workflow, the pain points developers go through, and the tools and products we have to enhance their experience: odo, the OpenShift Console, and the OpenShift Developer Console. Our mission is developer productivity: we look at the challenges developers face and provide solutions to overcome them.
In the subsequent slides we'll talk through that and show you demos of how we achieve it. With that, I'll pass it to Parthvi.

When we talk about the developer workflow, we mainly talk about two things: the inner loop and the outer loop. You'll hear these terms a lot today, so please bear with us. The inner loop is the initial phase of your application development life cycle, when you are continuously coding, building, and testing your application locally until you are satisfied with it and want to share it with your team for review. I hope that was clear. The outer loop happens at the larger team level: your code goes through review, integration testing, and security and compliance checks until you're ready to move to production and release your application. odo is targeted at the inner loop, while the OpenShift Developer Console, which Jai is going to talk about, is focused on the outer loop. Red Hat provides a wide variety of tools for both your inner and outer loop; you can take a look. The image is not fully up to date because we couldn't fit all the products on one slide.

So what is odo? The name is a bit odd, but odo is a very simple CLI tool for application developers who want to work closely with a cloud-native environment, maybe OpenShift or Kubernetes, but find it difficult to do so because of the complexities and steep learning curve involved in running an application on a cluster. Let's take the example of an application developer who wants to run their application on an OpenShift cluster. You might ask: why do they want to do that? Why can't they just do it locally?
Because once this application is ready for production it will be deployed on OpenShift, Kubernetes, or a similar cloud-native environment, our application developer would like to run their inner loop in a production-like environment. This image gives you a brief idea of the complexities involved: you need to know about deployments, services, persistent volumes, and more to run your application on the cluster. odo helps abstract these complexities. With the help of odo and a devfile, this developer can run their application on OpenShift or Kubernetes with two or three simple commands. Since this is a DevNation Day, I'm going to use OpenShift. So how am I doing so far? All good, I haven't gone blank, which is a good sign.

I am going to use a very simple Hello World application. I know people don't love those, but it's simple and it works. It is a basic Go project, and I would like to run it on an OpenShift cluster. If anyone wants to follow along with a live demo, you can check out the GitHub repository, clone it locally, and follow along with me. Since odo is a CLI tool, it is supported on all the standard operating systems, Linux, macOS, and Windows, with a very simple install command. In my case I've already installed odo; let's quickly check its version. It's v3, which we recently took GA, which is why I'm talking about it here. Let's take a look at our main.go, which is the only file we have. I hope people are comfortable with Go; if not, I'll walk you through it. Our application has two simple endpoints. The first is /ping: when you call it, it responds with "pong". The other is /connect, but we'll talk about that in a bit. Now, I mentioned before that odo uses a devfile.
So what is a devfile? Nowadays we define everything as code: you have infrastructure as code, you have CI/CD as code, so why not your development environment as code? A devfile lets you define your development environment as code, and odo uses the devfile to understand which resources it should create for you and which commands it should run on those resources to run your application on a cluster. Currently we do not have a devfile, so we will use odo to fetch one for us. The command is `odo init`. When you run `odo init`, it analyzes your source code and determines the most suitable devfile for you. In this case it detects Go, which is correct, and it fetches the devfile from the default devfile registry. A devfile registry is something like an image registry, if you've used Docker Hub or quay.io, except that it supports a lot of different languages and you can find a devfile for each of them there. It has a single component here, and it exposes port 8080, which is what we need, so we confirm the configuration is correct. I'm going to use the default name it detected, and we see that we now have a devfile.

Let's quickly take a look at it. There are two important things I would like you to look at. The first is components, which defines the resources that should be created. We're going to work inside a cloud-native environment, which means our application will essentially run inside a container, which is why we have a container component defined here. It uses an image provided by the devfile registry, it exposes the endpoint we saw earlier, and you can define memory limits and the like. And, as I said, odo also uses the devfile to know which commands to run to start your application. Here we have two simple commands.
We have the build command, which builds our main.go file, and the run command, which runs the binary for us. There is also information such as metadata, which tells you more about the devfile, the schema version, and the starter project, but we don't need to bother with that right now.

So we have a devfile, and we should be ready to run our application on the cluster, but there are two things we need to do first. The first is to log in to our OpenShift cluster; if you have minikube locally, you don't need to log in and can skip this step, but since I'm on OpenShift I need to log in. The second is to define a namespace where all of our resources will be created; all the resources will be confined to this particular namespace. For this we'll use `odo create namespace`, let's say dnd-odo. We see that odo created it and it is now ready for use. So next, since we have a devfile and a project, I'm simply going to run `odo dev`. This first creates all the necessary Kubernetes resources. Like I said, odo abstracts all the complexities: if you were listening to Praveen's talk, he mentioned that he deployed his application using a manifest file with Kubernetes resources, but odo takes care of creating those resources for you, so all you need to do is focus on your code.
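A minimal devfile along the lines described above might look like the following sketch. The name, image, and version numbers are illustrative, not the exact file fetched from the registry in the demo:

```yaml
schemaVersion: 2.2.0
metadata:
  name: my-go-app            # illustrative name; odo init detects one for you
  language: Go
components:
  - name: runtime
    container:
      image: registry.access.redhat.com/ubi9/go-toolset:latest  # illustrative image
      memoryLimit: 1024Mi
      endpoints:
        - name: http
          targetPort: 8080   # the port our app exposes
commands:
  - id: build
    exec:
      component: runtime
      commandLine: go build main.go
      workingDir: ${PROJECT_SOURCE}
  - id: run
    exec:
      component: runtime
      commandLine: ./main
      workingDir: ${PROJECT_SOURCE}
```

The two `exec` commands are the build and run steps that `odo dev` executes inside the container component.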
Once it creates the resources, it syncs the files from your local system to the container and builds the image; we took a look at the build command and the run command in our devfile, and it runs those commands in that order and port-forwards our application to localhost. So I don't need ingresses or routes or anything like that; I can simply curl this URL and it works.

Now I want to extend my application to print a custom hello message, so I'll modify my main.go file. You can see that odo detected changes in main.go and is now syncing the file changes to the cluster again; give me a second. `odo dev` continuously watches your project directory for any changes you make. In this case I changed main.go, so it detected those changes, made sure my resources were running, synced the file changes to the container, and rebuilt the application. Now if I hit the same curl command again with my name, it says "Hello Parthvi", which is perfect; it's working as expected.

Moving on to the second part of our demo: I mentioned the other API endpoint, /connect. What it essentially does is ping the MongoDB server and make sure it can connect to it. We have this whole function here, which we don't need to go through right now. I've defined a username, password, and host, and the application obtains this information from environment variables. If I curl this endpoint now, let's see what we get: it says it failed to connect to the server because it did not find a username, which is fine because we haven't connected a database yet. Since we're working in a cloud-native environment, my database is deployed as a microservice. I'm assuming your DevOps people would provide you with this environment, so you don't need to bother with creating these services; in my case I've already created the service. So now what I need to do is connect my
application to the service, for which I use the `odo add binding` command. My service is not in this namespace; it's in a different namespace that I can access, so we go to that namespace. The MongoDB instance is the service I want to connect to, so I select MongoDB, hit yes, and use the default binding name. I mentioned before that our application relies on environment variables for information such as the username, password, and host, so we'll bind this information as environment variables and use the default naming strategy. As soon as we do this, we see that odo detected changes in the devfile.yaml and created something called a service binding. odo uses the Service Binding Operator to bind your application to another application or a backing microservice; currently odo only supports connecting to a few microservices out of the box, but there are workarounds. So it created the ServiceBinding resource, and it will then recreate the pod, re-sync our changes, and re-run our application. Let's wait for it. Okay, our application is now ready, and if we hit the curl command again it says that we have successfully pinged the server.

Now I'm satisfied with this, and I don't want to take any more time, so I'm just going to hit Ctrl+C on my `odo dev` and let it delete the resources it created. I'm now ready to move on to my outer loop, which is where I hand over to Jai. That brings my demo to an end. If you want to know more about odo, we have the odo.dev website you can visit, with quite a few quick-start guides, so whatever framework you're using, if you set up your devfile correctly, odo will help you run your inner loop; we have quick-start guides for these frameworks for now. So we saw that odo is a CLI tool, but if you wanted to do this directly from your IDE, say VS Code or IntelliJ,
we have the OpenShift Toolkit, a plugin you can install which essentially uses odo under the hood. We have a talk on it coming up from Mohit Suman in a while. If you would like to get in touch with us, we are on the Kubernetes Slack in the #odo channel, and odo is an open source project, so feel free to try it out and contribute if you wish. With that, I'll hand it over to Jai. Thank you, everyone.

Thank you, Parthvi. I hope I'm audible. That was great. In the last few sessions we have seen how we went from YAML to the odo CLI, and how with a few terminal commands we can easily code, build, and test our application. Next we'll look at the OpenShift Console. I think pretty much all the sessions have used the console in some form, so can we get a quick show of hands: how many of you have actually used the OpenShift Console, or are at least aware of it? In OpenShift or Kubernetes you need to know a lot of things; those have become easier with odo, but what if I told you that with a few clicks you can create or deploy your application and have it cloud native, up and running, in a few minutes?

Before that, let me take a brief step back and talk about the OpenShift Console. The OpenShift Console is the primary interface through which any user interacts with the cluster, and it has two different perspectives. I will not take you through all the other slides, because I know it's pretty much lunchtime and you would be feeling hungry or sleepy, so we'll stick to the demos. The two perspectives are Administrator and Developer. Why do we have two? The admin perspective is focused more on folks from the infrastructure and operations side, who want information about the cluster. In this particular screen, which we call Overview, you can see the cluster health, the overall cluster status at a glance, and the activity happening around it, along
with cluster utilization and usage. There is a lot of information available to you, and a cluster admin can do a lot more than that. Under Administration you can go to Cluster Settings, take a look at the upgrade section, and perform upgrades as needed; as you can see on this screen, the cluster is on 4.10.36 and there are upgrade paths available. An admin can also take control of Compute, be it nodes or machine sets, for this particular cluster, and you also have the ability to add resource quotas, which is more like restricting namespaces based on utilization, if you want to do that.

Apart from that, one thing I would like to highlight right away is operators. With OpenShift 4.x, OperatorHub is the place from which you can install a lot of things; it provides a lot of operators that plug into the cluster and enhance the overall experience, or help you achieve whatever you'd like to do. In this particular cluster you can see there are a lot of operators, but a few I have already installed for this demo are listed here. I would like to highlight the ones I'll be talking through in this session: OpenShift Serverless, OpenShift Pipelines, Web Terminal, etc. That was the admin, or Administrator, perspective; now let me switch to the Developer perspective.

In the Developer perspective we have a dedicated section tailored just for developers, keeping in mind their day-to-day work. If you have your code pushed to GitHub, GitLab, or Bitbucket and want to quickly deploy it, or if you have an image somewhere you want to quickly try out, or if you are working on a Java-based system, say Quarkus or Spring Boot, and you have a JAR with you, you can upload the JAR and with a click make it cloud native. In this section you can see there are a lot of
options, like Import from Git, which I spoke about, Container Image, Upload JAR file, and even Import YAML if you like YAML a lot. And it's not just that: cards for eventing are shown here as well, because I have installed the OpenShift Serverless Operator, which I'll get into a bit later.

Let me quickly go to Import from Git and show you something. If you provide a GitHub URL, say for a Ruby project, it detects the runtime for you; we have pretty much all the common runtimes covered. This is the S2I flow, and we also have Dockerfile and devfile flows, depending on what your repo contains. In this particular example it detected Ruby, so let me change it and use something for Node.js instead. I'll use a Node.js app: the builder image Node has already been detected, and this is basically a Node.js game, as the name says. Let me walk you through the other options. Everything is pre-populated for you, so I don't think you need to click or change anything; I'll just remove the application name, since it's not needed. Then we have resources, where the user is shown three types: Deployment, DeploymentConfig, and Serverless Deployment. You can choose whatever you want; Deployment is the default. Next I would like to show you the advanced options: there are a lot of things you can configure if you'd like, such as health checks, build configuration, scaling, resource limits, etc., but I'm not changing anything. All I want to show you is that with basically two clicks you can deploy your application: the first was providing the repo, and the next, which I'm going to do now, is hitting Create. This takes me to a view which we call Topology. I would like to highlight the Topology view as
well. The Topology view is like a single spot for developers where they can take a look at all the applications they have in their namespace, see the relationships between them, and see the resources created, along with a lot of other options. As you can see here, the Node.js game is currently being deployed and a build is running; if I click on it you'll see the status, and you can check the logs. Behind the scenes it has already created a bunch of resources for you, like BuildConfigs, Services, and Routes, and in turn it will create the pods for you as well. Now you see things coming up as they get created. That's the power of it, or basically how easy it makes it for an end user to deploy their application.

The application is pretty much up, so let me go back to the +Add view again to show you one more thing: the Developer Catalog. It's not just the list shown here; we have a full-fledged catalog where you can go and select different options depending on what you want to achieve. It even has devfiles, which Parthvi spoke about, if you want to try out a devfile directly. That was the catalog; now let me go back and quickly try something with a container image. This is one of the images I have; you just need to provide your image. If it's a public one it will be validated automatically; if it's a private one you need to provide the credentials. Next, again everything is pre-populated and I will not make any changes, but this time I'll select Serverless Deployment.

So what is a serverless deployment? Let me talk about that for a bit. I've installed the OpenShift Serverless Operator into the cluster, which is the reason you are able to see the third option in the resource section. OpenShift Serverless is based on the upstream project called Knative, and it has two primary components: Serving and Eventing. What I'm showing you now is Serving. So a serverless
deployment, if I select it and press Create, creates a Knative Service for me, which we call a serverless deployment. The benefit here is that it can scale to zero depending on the configuration provided: if no traffic is being received, it scales down to zero. In this particular view you also see two similar rectangular boxes; these are serverless deployments which I created prior to this session. You can see one is autoscaled to zero because no traffic is being received by that service. Scramola is up, and the Node.js game is up as well. Let me click on the route and see if it's working fine: the game is here, ready to play, but I will not do it right away; I'll go back to my side again. And this is Scramola: Scramola is basically a simple story-pointing poker app where you can provide your name and join a game. You give your name, select a game name, say a sprint number, join as creator or as a player, and you can start the game, add stories, and do a lot more there.

So let me go back to the UI again. That was the container image flow and Import from Git. I wanted to quickly show you one more thing which I really like: Upload JAR file. You can click on this to go to the form, or, if you are in the Topology view, I'll just minimize this: I have one Quarkus app here, and if I select it and drop it over the view, as you can see the whole area gets highlighted, and the user is taken to the form with no need to fill anything in; it's one drag and drop. I will go with the defaults; you can keep the application name as well, and you can change, configure, or override things if you need to, but I'll just hit Create now. So the JAR is being deployed, and if you click on the status you can see the information, as I showed before. So now let me try to go back to Scramola again, that
is, the Knative Service, and I would like to talk about traffic splitting. It's not just what is deployed: on the right side you see revisions, we can have multiple revisions, and we can split the traffic among those revisions. How do we achieve that? This time I'll edit my application: with one click you're back to the form you created it from. I'll go to the scaling options, set min pods to one and max to three, and hit Save. In the Topology view you can see a new revision has come up and the old pods are being terminated. Then there's the Set Traffic Distribution option, which I would like to show you now. It opens a modal where you can set, depending on your interest, what traffic percentage is received by each revision. I'll say 50% for revision two and 40% for revision one, hit Save, and boom, we have our distribution shown in the UI as well. You can try it out later; I will share the link so you can quickly check it out.

That was Serving; now let me go to the next Serverless component, which is Eventing. Eventing is built around cloud events: there is a way by which you can subscribe to cloud events, and there are different pieces like channels and brokers depending on the usage. I'm not going into detail on those, as there is a session from Mohit post-lunch which you can join. For now I'll create something: let me create a channel. There are different kinds of channels; currently I have InMemoryChannel available here, so I'll just select it and hit Create. The channel is visualized like this, and as you can see there are no event sources or subscribers yet. Let me create an event source. If I go to Event Sources you can see a bunch of them: there are many, from Apache Kafka to sources that read from Telegram, AWS, Slack, etc. I will stick to PingSource for now, just to demonstrate something. I'll do a
Create PingSource. I can provide data, say "ping" as my message, and a schedule; I'll go with the defaults again, and as the sink target I'll select the channel which I just created. No further changes; you can also go to the YAML view if you would like to see it, but that's fine for now, so I'll go ahead and hit Create. As you can see, there is now a connection between my source and the channel, but the channel still doesn't have any subscriptions. Let me subscribe this channel to some of the services. There are two event-display services here; event-display is basically a simple service which logs the cloud events it receives. So I'll drag and drop from the channel onto the first service and confirm, then drag and drop onto the other service and confirm, and the connections are formed. So we have our setup ready: cloud events are emitted by the event source, go to the channel, and then to the services. As you can see, the services are still autoscaled to zero, so in a while you will see events coming in and the logs printing the event payload which I provided. It will take some time, so meanwhile: I know many of you must be missing the terminal, right? You like the terminal so much, you'd like to try a lot of things there. We have an inbuilt terminal, powered by the Web Terminal Operator, in the OpenShift web console; if you click on this particular icon you will see it. This is very useful if you want to do something serious, or make some changes, or you are on a device which doesn't have a terminal, like a tablet. It has a lot of tools already installed for you, as you can take a look, and you can also maximize it or make it full screen if you would like to. Great, I'll go back to the demo now and close the terminal. Yes, and as you can see the pods are up; if I click on the service, in the pod logs you can see the events being
emitted here. So that was Serverless. The next thing I would like to talk about is OpenShift Pipelines. We saw the S2I flow initially, which I showed you; OpenShift Pipelines is based on Tekton, which is again an upstream project. Let me go to Import from Git quickly, and this time I'll deploy a Golang app. I'll select Deployment, and there's an option to add pipelines. A pipeline is basically different tasks chained together; this is the pipeline definition you get, and once we hit Create it will create and invoke that pipeline for us and do the fetch, build, and deploy. So the build is literally in progress: some tasks are running and some are in a pending state. If you want to take a closer look, you can go to the Pipelines tab and click on the pipeline runs, and it shows you the current status along with the logs.

That was OpenShift Serverless and Pipelines, which I wanted to cover. Now, great, we have built so many workloads; how do we observe, monitor, or track them? In the console we also have a section dedicated to observability. If you click on that, it loads various dashboards; I will select Pods for now, and as you can see it is showing me CPU utilisation, CPU quota, etc. You can even go to the Metrics tab and customize or use any of the provided queries if you want to play with or tweak anything and get the information you need. That was the monitoring section, and these sections are available in the admin perspective as well, if an administrator wants a look at a glance across all namespaces.

That is pretty much what I wanted to talk about. The next thing I'd like to show, for those of you who are missing it: dark theme. We also have a User Preferences section where you can select the language of your choice, the perspective you want to land in by default, or select dark theme, and
so on; we have a lot of things to offer here. I'll go back to the slides; I know we are near lunchtime now. These slides I pretty much covered during my presentation, so I'll just do a quick recap. We talked about the admin and dev perspectives; you saw what ODC, the OpenShift Dev Console, is and the benefits it provides to end users. Then we discussed the Topology view, which provides a lot of information at a glance to the user; we discussed the Web Terminal and OpenShift Serverless with both of the components it provides, Serving and Eventing; then we discussed Pipelines and how easy it is to invoke pipelines with the OpenShift web console; and we discussed how to monitor and track your applications. And it's not just that: we have a lot of other things, like quick starts, sample applications, authentication, managed services, and many, many more. I would highly encourage you to try it out if you want: you can go to the Red Hat Developer Sandbox, it's free, just click on it and start using it to explore the OpenShift Console and Dev Console together. That was our time. Thanks again, everyone, for being here; it was really nice talking to you all. Thank you for listening to us. Perfect, thank you so much.