Hello, everyone. Nice crowd. Thanks for being here. So today we are going to talk about development on OpenShift. It's "Mastering Development on OpenShift", an introduction to the OpenShift Developer Console. For introductions, my name is Rohit Rai. I'm a senior software engineer, I work out of the Bangalore office in India, and I work on the OpenShift Developer Console. So what's the purpose of this talk? First of all, to introduce the Developer Console: show off the shiny new thing that we built, get some pats on the back. Then, why do we need the Developer Console? That's what we are going to find out: why do we need it, what was the reason we built it? And then try it out, get some hands-on experience, deploy some applications, and check out the experience. So what is the Developer Console? Have any of you used OpenShift before? It came with the Web Console, which had an admin perspective. But being a developer, personally, when I started with OpenShift, I was kind of lost in that admin console, because it had a lot of things which I didn't care about. I would like an experience that's focused on the developer: I want to get the app deployed, manage it, and test it out. That's where I was kind of lost, and that's the reason we built the Developer Console. It's a fresh new perspective on OpenShift. It sits beside the admin console and introduces developer-focused workflows to developers. That's the main goal of the Developer Console. So why do we need the Developer Console? I just answered that: we add this new perspective, and we give developers this whole new workflow experience, where they can easily focus on their workflows and not worry about cluster administration. You just need to deploy the app, test it, and get on with your life. So the developer perspective can make a developer productive without even knowing OpenShift and Kubernetes. The OpenShift and Kubernetes world is really big, huge.
You have to understand a lot of things and get the foundations down before you can start to be productive. The Developer Console can get you productive in no time, without deep knowledge of OpenShift and Kubernetes. So what else does it do? It enables a developer to focus on the full lifecycle of an app: deploy it, test it, manage it, and monitor it. It helps you do all that. It seamlessly integrates OpenShift and Kubernetes, and it also integrates a lot of other projects, like OpenShift Serverless, OpenShift Pipelines, and CodeReady Workspaces, and gives you a nice experience. I really like this line, copied from one of our PMs: use the Developer Console to feel at home in the OpenShift and Kubernetes world. So if you feel alienated in the Kubernetes world, use the Developer Console; you'll feel at home. So what else do we get? What are the features? The first one, I would say, is developer-focused workflows. We've designed specific workflows around the day-to-day tasks that a developer does, and we've created an experience around that: workflows that make your life easier. One example would be deploying and testing an application that you have in a GitHub repository; that's the most common one, I would say. This is just a screenshot from the Developer Console; these are the workflows that you get if you use the Dev Console. The first one is "From Git", wherein you import a GitHub repository, then deploy your application and test it. The next one: you already have a container image built and pushed somewhere like Quay or Docker Hub; you import that, deploy it on OpenShift, and test it. Another one: we have a catalog of resources, applications, and builder images that you can use as a sample, or that you can build on top of with your own code base. And another one is Dockerfile.
So you have a Dockerfile in your GitHub repo, which defines how to build your project. How do you deploy that on OpenShift? This workflow is specific to that. Then we have YAML, where you can import any Kubernetes-native YAML resources and create your own applications. Then we have a database workflow, for the very common case where you have an application and you want to add a backend service or database to it. How do you do that? You import a database and then connect the applications. So the next feature would be the topology view. What is the topology view? You might have seen topology in other projects; Kiali, for example, has a network topology. Topology is a simple way to visualize, monitor, and manage your applications on OpenShift. It's built around the application workloads, not around the network, so you get to see the application data and application health. This is a screenshot of our topology view. You can see a set of different workloads deployed, with different connections showing up. You click on a workload, a sidebar opens up, and you get the details of that workload: you can see the pods, the builds. You can start a build, check the logs of a build, and see the routes. And you also have these little decorator icons on the workloads. One of them takes you to the route of the application. One of them shows the build status: a green check sign if the build is successful, or a red exclamation mark if the build fails, telling you that, OK, your build failed. And then we have a CodeReady Workspaces icon; you click on it and you get taken directly to a CodeReady Workspace where you can edit your code. OK, what else? We also integrate OpenShift Serverless into the Dev Console. Just to give you a little introduction to serverless: in a serverless model, your application only uses the compute resources it needs.
And it can scale up or down based on the number of users using your application. It uses Knative as the foundation. Knative is a Kubernetes-based platform that helps you deploy and manage your serverless workloads, and that's integrated into the Developer Console. This is a screenshot of a workload that has been deployed as a serverless application. If you look at the first one, the Node.js one, it shows that 100% of the traffic is going to that revision. The other one has two revisions. The square icon denotes the service, and the round icons are the revisions. So in the other one, the traffic is set to 50% for each of these revisions. One of them is black and one of them is blue: one of them is receiving traffic, and the other one is not, so it's terminating the pod and scaling down. Another one shows just a white blank circle, which means the pod is already scaled down to zero and there's no traffic to the workload. So you can easily visualize and manage your Knative workloads here using topology. And we also have OpenShift Pipelines integration. OpenShift Pipelines is a Kubernetes-native CI/CD platform, wherein you create and define your pipeline and your tasks, and you use them to deploy your application. It's based on Tekton. Tekton is a community project that defines the building blocks for building your pipelines. So we have integration of the Developer Console with OpenShift Pipelines as well. Just to give you a brief introduction of how a pipeline works: there are multiple CRDs, custom resource definitions, defined by the OpenShift Pipelines operator. One of them is Task. A task basically defines one thing you want to do in a pipeline, one basic, simple unit of work, and you define multiple steps in it. It runs in one pod, with each step in its own container. And you build your pipeline out of these tasks.
So you create your tasks that do something, and then you say in what sequence your tasks should be executed; that defines a pipeline. When your pipeline runs, it creates a PipelineRun resource, and that has TaskRun resources. Every pipeline needs some input and some output. For example, it might need a GitHub repo as input and an image registry as output: you give it the URL of the image registry, it builds your repo, and then it pushes the result to the image registry. Those are handled using pipeline resources. You create these resources saying, OK, this is a Git resource, and the URL is this. You give your pipeline the input, and it outputs the built image to your image registry. Then the Tekton controller runs your pipeline and you get the results. OK, so this is a screenshot of how your pipeline is visualized on the pipeline details page. You can see that there are two parallel tasks, build-api and build-ui, and then there are sequential tasks, which need to be run one after another. You can check the YAML of the pipeline, see the parameters, and see the pipeline runs: which of the last pipeline runs were successful or failed, and what the logs are. You can check all of that. OK, so the other thing that we did was create a new operator called the Service Binding Operator. Usually what we do is create an application and deploy it on OpenShift, but it needs some kind of operator-backed service at the back end; the most common example is a database. But in order to use that database in your application, you would need to pull all the secrets and config maps from the database deployment into your own application, add them as environment variables, and configure everything manually. That becomes very cumbersome. So we created this operator, the Service Binding Operator, that manages that task for you.
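To make the Task and pipeline-resource concepts concrete, here is a minimal sketch in YAML. This is an illustrative example, not taken from the demo: the task name, image, and repository URL are hypothetical, and the exact `apiVersion` depends on your OpenShift Pipelines version.

```yaml
# Illustrative sketch: a minimal Tekton Task plus a git PipelineResource.
# Names, image, and URL are hypothetical; apiVersion varies by version.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-api
spec:
  inputs:
    resources:
      - name: source            # the git input, wired in by the pipeline
        type: git
  steps:                        # each step runs as a container in one pod
    - name: build
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["echo"]
      args: ["building the API"]
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: api-repo
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/api-repo   # hypothetical repo
```

A pipeline then references tasks like this one and feeds them the git resource as input and an image resource as output.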
So it automatically takes the secrets and config maps and injects them into your application. The Service Binding Operator is available as a community operator, and we have that integration in topology: you can easily use the Service Binding Operator through topology. This is a basic diagram of the Service Binding Operator. What you do is create a ServiceBindingRequest, which basically defines: OK, this is the application, that other thing is the database, and you want to bind them. It pulls all the secrets and config maps from whatever is available and injects them into the application, so that your application recognizes the operator-backed service. But all this seems very confusing again: you would need to create this ServiceBindingRequest by hand. How is that easy? So what we did was include this feature in topology. You create your Node application and your PostgreSQL database, and when you hover over the workload, you see a blue arrow. You pull that and drop it on the database. It automatically creates the ServiceBindingRequest, pulls all the secrets, and injects them into your application, and your application starts a new pod so the binding is available. So the next thing we did was integrate CodeReady Workspaces into the Developer Console. So far we've deployed the application, we have a database behind it, and we've seen how we can write pipelines and how we can create a Knative service. The next thing is: how do we edit the code? How do we integrate with an already available online IDE like Eclipse Che? CodeReady Workspaces is the productized version of Eclipse Che, and we have the integration in the Dev Console, where you directly get the link to your Che dashboard, the Workspaces dashboard. And on every workload that you create, every application that you import, you get the little icon there. You click on it.
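As a rough sketch, a hand-written binding looked something like the following. The field names here are illustrative and have changed significantly across Service Binding Operator releases (recent versions use a `ServiceBinding` kind with a different schema), so treat this as a shape, not a reference; the application and database names are hypothetical.

```yaml
# Illustrative only: early Service Binding Operator request shape.
# Group/version, kinds, and field names vary by operator release.
apiVersion: apps.openshift.io/v1alpha1
kind: ServiceBindingRequest
metadata:
  name: nodeapp-postgres-binding
spec:
  applicationSelector:            # the workload to inject env vars into
    resourceKind: Deployment
    resourceRef: nodeapp
  backingServiceSelector:         # the operator-backed service to bind
    group: postgresql.example.com # hypothetical database CRD group
    version: v1alpha1
    kind: Database
    resourceRef: db-demo
```

The operator watches for this resource, collects the secrets and config maps exposed by the backing service, and injects them into the selected workload as environment variables.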
It automatically creates a workspace for you in CodeReady Workspaces. You start to edit the code, push it to GitHub, and your application builds again. So it follows the whole lifecycle of your application. OK, so where's the demo? It's demo time. Let's see some of these use cases live. So this is a fresh new cluster, a 4.3 cluster. Let's log in. Too slow. What do we do? This is just safe. OK. I don't have the code. OK. If you don't have it, I don't have it. I have one in my bag, actually. So we are done. Finally. OK, so we are logged in. This is what we get when we log in to the OpenShift web console: the admin console. But there's an option to change the perspective to Developer, so I'll change that. I'll create a new project called devconf. And you get redirected to Topology, where you see the Add flows, since there's no workload in your namespace. It says there are no workloads; just go create some. So let's start with Import from Git. I have a GitHub repo here; it's a Node.js CRUD application. I'll put the URL here, and it starts to validate. It validated the URL, and then it automatically detects that it's a Node.js project, so it selects the builder image for you and says: OK, this is the recommended builder image. If you want to change it, you can, but I want to use the Node.js builder image here. If you want to configure your Git options here, you can define a context directory that you want to build in. You can define the Git reference, which branch of your repository you want to build from. And you can create a source secret, with which you'd be able to pull a private repository as well. So I'll just hide that. Then you can change the Node.js version here; I'll go ahead with 10. Then this is the application group name; I'll call it NodeAppGroup, and the application NodeApp.
Then you get to select the resource type to create. Deployment is the native Kubernetes Deployment, and DeploymentConfig is the OpenShift DeploymentConfig. There's also an option to create a Knative resource, but we'll save that for later. And we have some advanced options. A route will be created for you automatically, and you can configure that route: you can add the hostname, the path, and the target port for your route, and if you want to secure your route, you can add your certificates as well. You can add some build configuration: environment variables and triggers. You can configure your deployment, adding environment variables and setting triggers here. You can set up scaling, how many pods you want; let's just make it two. Then you can define resource limits that your application can use, and you can define your own custom labels. So I'll click Create. And we see the workload that has been created. I click on the workload and I see the resources it created. You can see that build 1 is pending; it has started the build. I can see the build logs here: it's cloning the repo. There are no pods currently, because the build is still in progress. There is a service created, and there is a route created for you. But if I access this route, I won't be able to see the application, because the build has not finished yet. So we'll wait for the build to finish. In the meanwhile, we can see these decorators here: this one says build running, this one is for the route, and this one is for CodeReady Workspaces. While the application is getting built, let's go and check the CodeReady Workspaces dashboard. You click on this link and you see the CodeReady dashboard. Let's open this up. Since we are using self-signed certificates, it's saying that it's not secure. It tells me to log in, grant permissions, create the user. So the dashboard is ready, but there is no workspace created for any of the repos here.
It's just the dashboard from which we can create workspaces, so let's just leave it there. Meanwhile, you can see the build has finished; it's green, and the pod is getting started. Once the pod is up, we'll be able to see the application. So you can see that the application is ready and usable. But for the database connection it says "not applicable", because it needs a database and we've not connected any database yet. So what do we need to do? We go to Add, then From Catalog. Here we see all the applications that we can create from the catalog, and we want an operator-backed service. This is an operator-backed database that can be easily bound to your application, so let's create it. So the database is created. You go to Topology again, and we see that the database is getting created here. What else can we do? So this gray thing is the application group that I talked about. We created this application-group concept to group all the applications that are related to each other. For example, a front-end application might have a service, a back end, and a backend database; you can group all those applications together, and you can do that easily. This one is not in any group, so what I'll do is just drag this and drop it into the group; it automatically adds the labels and it becomes part of the group. So the database is ready. Now I want to bind this to the database. You can see these blue arrows; I drag this and drop it, and it creates this service binding. And now you can see that a new container is getting built, because it did something in the background. Let's check what it did. We go to this deployment's environment variables, and you can see that there's a secret getting mapped and injected into the deployment. This secret contains all the database details.
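The application grouping used here is just labels under the hood (as the Q&A at the end confirms). Dropping a workload into a group sets the standard Kubernetes recommended labels on it, roughly like this sketch; the group and app names follow this demo, but the exact label set the console writes may vary:

```yaml
# Sketch: grouping in Topology is driven by recommended Kubernetes labels
# on the workload's metadata, not by a dedicated "group" resource.
metadata:
  labels:
    app.kubernetes.io/part-of: nodeappgroup   # the application group
    app.kubernetes.io/instance: nodeapp       # this particular workload
    app.kubernetes.io/name: nodejs            # runtime / builder image
```

Because it's only labels, you can achieve the same grouping from the CLI or from YAML without touching the console.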
Let's check what these secrets contain. Let's search for secrets, and you can see there's a secret that was created with the database config: DB host, DB name, DB password. All of these came from the secrets of the database. So who did this? Let's see the ServiceBindingRequest, then. You see this resource was created; this is its YAML. You see that, OK, we want to bind NodeApp to the database, and it says the binding status is success. So it's already successful. Now, if I click on this, it says DB: Postgres. So now you have the database connection, and you just had to drag and drop one arrow from one application to the database. So it makes you much more productive, and it's easier to use. Okay, so let's go back. Now the other part is: how do you edit this application, change some code, and redeploy it? How do we do that? What we can do is set up a GitHub webhook trigger. We go to the build config, and there's an option to copy the URL with the secret. I'll go to the repo settings, Webhooks, Add webhook. Okay, so we just paste the same URL that we copied from the build config. We say that we want the payload to be JSON, and since the cluster doesn't have a proper certificate, we disable SSL verification; otherwise GitHub won't recognize the HTTPS certificate and won't send the request. So we say Add webhook, and you see this green check mark: the webhook addition is successful, and it can successfully ping the server it needs to ping. So now we should be able to edit the code and push something to GitHub, and it should update the application in Topology. So now we click on this decorator. It says "Edit source code". What it does is take you to a URL in CodeReady Workspaces which creates a devfile for your repository. If your repository already has a devfile, it takes that and builds from it; if not, it uses a basic devfile with which you can edit your code.
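The webhook URL copied from the build config corresponds to a GitHub trigger on the OpenShift BuildConfig. Roughly, the relevant section looks like the sketch below; the secret value is a placeholder (the console generates a real one), and the ImageChange trigger is shown only as a common companion:

```yaml
# Sketch: trigger section of an OpenShift BuildConfig. A GitHub push to
# the webhook URL (which embeds the secret) starts a new build.
spec:
  triggers:
    - type: GitHub
      github:
        secret: <webhook-secret>   # placeholder; do not fill in by hand
    - type: ImageChange            # also rebuild when the builder image updates
      imageChange: {}
```

GitHub calls the webhook URL on each push, OpenShift validates the embedded secret, and a new build of the application starts automatically.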
So it's starting the workspace. In the meantime, let's go and create a Knative service while the workspace is being created. This time I choose the Deploy Image workflow, where you have a container image already present on Docker Hub or Quay and you pull that in to create your application. I already have this image on Docker Hub. I add that and press enter. It searches for that image and shows the details of that image. Now I want to create another application group called Serverless; the name is fine, and I want to select Knative service instead of Deployment or DeploymentConfig. This defines that you want to create a serverless application and not a normal one. So you see this square box; this is the Knative service. You click on it and go to the details of the Knative service. Since the pod is starting, it's not showing anything yet. Once the pod is up, it shows the revision in between. So you see the revision is there and it's still getting created; the pod is getting created. And it's still starting the workspace; that takes some time. Okay, so the pod is up. You can see that it shows a URL here, the route for this service; it's still coming up, and you see this arrow says 100%. So this square is the Knative service, and this circle is the Knative revision. Every time you change your service in Knative, it creates another revision, and you can manage your traffic across the revisions. So you can create a canary-style deployment, where you create a new revision, set its traffic to 10% or so, and then, if it all works, move everything to 100%. Okay, so the application is up. Let's create another revision and see how the traffic splitting works. We go and edit something; I'm just editing a random thing here. Save this and go back to Topology. It should be creating another revision here. Yes, so you see there are two revisions now available. Once this revision is up, we can set the traffic.
So you can see there are some labels, the namespace, and the details of your service as well. You have some actions here: edit application grouping, set traffic distribution, edit labels, edit annotations, edit the service, or delete the service. So the revision is created. We can either set it from here, Set traffic distribution, or you can right-click on the workload and you see the same actions menu. You click on Set traffic distribution. You can see that 100% of the traffic is directed to this particular revision. I want to make it 50-50. I set this old one to 50, and this is the new one; I select the second revision and click save. So you can see that the pod is getting terminated. The black circle means the pod is terminating, because we don't have any traffic to that revision; that's why it's scaling down to zero. You can see it's scaled down to zero; you have autoscale-to-zero and so on here. Now traffic comes in, and you see the pod is coming up. You can also go to a particular revision by clicking the route icon on that revision; it will open up that revision. So if there are some changes you want to check out or test, you can go directly to that particular revision, and you can see that the pod is up again. Okay, let's go to the workspace again. Again, the same problem with self-signed certificates; it's stopping this from opening. So let's just open it up and browse to it directly. Okay, so we have our workspace ready. You can see all the code files are there. Let's just edit something, push it to GitHub, and see if the deployment works correctly. Let's change the title here from "DB" to "Database Connection". Save it, add a commit, and push this to GitHub. It needs your username and password. So it's pushed the code to GitHub. Let's go to GitHub and check if the code has arrived.
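The 50-50 split set through the UI ends up in the `traffic` block of the Knative Service resource. A hedged sketch follows; the service name and image are hypothetical, and Knative generates the actual revision names:

```yaml
# Sketch: splitting traffic across two revisions of a Knative Service.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: serverless-app                       # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: docker.io/example/app:latest   # hypothetical image
  traffic:
    - revisionName: serverless-app-00001     # old revision
      percent: 50
    - revisionName: serverless-app-00002     # new revision
      percent: 50
```

Shifting the percentages (say, 10/90 and then 0/100) is how the canary-style rollout mentioned earlier works; a revision at 0% simply scales down to zero.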
You can see there's a new commit just now, and if we go back to Topology, you see that a new build is running. Since we configured the webhook, it triggered this new build, and once the build is complete, we can see the new change in the application directly. Okay, while the build is running, let's check out the pipelines feature and how we integrate pipelines. Right now we don't have the ability to create pipelines directly, or a good UI experience to create pipelines, but you can create them using YAML. So I have some pipeline YAMLs ready. This is a task that applies some manifests for your application. I'll click on this; this is the Import YAML flow, which you can also go through. I add this task, add one more task, and finally I add this pipeline. So this pipeline just defines the sequence of tasks. If you look at it closely, it says "tasks" and it defines what needs to be run in what sequence: build-api needs to run, then apply-manifests needs to run after build-api, then update-api-image, things like that. So it's a sequence of tasks. I create this pipeline, and you can see the details of this pipeline. You can see the steps in your tasks. These two tasks are going to run in parallel, and then you get these tasks that run sequentially. You can check these tasks by clicking here as well; the YAML of each task is also visible. Now you can click on the actions menu and click Start. We need pipeline resources in order to provide the input and the output, so let's create those resources; the Dev Console lets you create these resources on the fly. So let's add the API repo at the URL. It just created a pipeline resource for you. Similarly, we need to provide the output image URL, which is the internal OpenShift registry.
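The sequencing described here (apply-manifests after build-api, and so on) is expressed with `runAfter` in the Pipeline YAML. A rough sketch, with task names taken from the demo; the `apiVersion` may differ by Pipelines version, and the referenced Tasks are assumed to exist:

```yaml
# Sketch: ordering Tekton Pipeline tasks. Tasks with no runAfter start
# immediately (in parallel); runAfter makes a task wait for another.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: build-api              # build-api and build-ui run in parallel
      taskRef:
        name: build-api
    - name: build-ui
      taskRef:
        name: build-ui
    - name: apply-manifests
      taskRef:
        name: apply-manifests
      runAfter:                    # sequential: waits for build-api
        - build-api
    - name: update-api-image
      taskRef:
        name: update-api-image
      runAfter:
        - apply-manifests
```

This is exactly the graph the pipeline details page visualizes: parallel branches side by side, sequential tasks chained left to right.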
So all the resources the pipeline needs, whatever input and whatever output, we've created as pipeline resources. I click Start. It takes us to the pipeline run. Here we can see the visualization: the first tasks, build-api and build-ui, are running. You can check out the logs by clicking here; you can see the two different tasks running in parallel and check their logs directly. You go to Pipelines, and you see the details of the pipeline and the task that's running. If a task is successful, it shows green; if it fails, it shows red. You can start the run again with Start last run, which basically takes whatever resources you had in the last run and creates the run again. Okay. So let's go to Topology again and see if that application has been rebuilt. You can see that the change is already there: "Database Connection: Postgres". Whatever change we made in the editor was automatically sent to GitHub, and the application was automatically rebuilt for you on OpenShift. So I think that's it for the demo. So what's coming next? We have plans for Helm chart integration, where you'd be able to create your applications using Helm charts, see all the Helm releases, and upgrade and roll them back. Then we have an interactive pipeline builder. We just created this pipeline using YAML, but we'll be seeing a pipeline builder where you can drag and drop tasks and create pipelines easily. Then we'll have network and service mesh data, where you'll be able to see the traffic from one application to a service or database, and the application health in Topology: if an application is having errors, you'd be able to easily debug and check them. Then we'll have an improved experience around operator-backed services, and a lot more features. So thank you. If you have any questions, yes? Yeah, right? Okay.
So you can use the same thing. Usually you would have a cluster admin who would create these roles and give them to the developers. The developers create these applications and test them, so it would be a development or testing environment, and then there would be a production environment managed by your cluster admin, your SRE team, or whoever, and they would manage those applications. So basically, it lets you create these applications and test them easily before actually pushing them out to production. Yeah, yeah. Right now we don't have the capability where you create and deploy a set of applications, then directly import those configs and have them deployed to production. It's not there, but we might have plans later on. It's a very new project; we started with this in 4.2. The first release of the Developer Console came with OpenShift 4.2. We are still learning and still adapting to developers' needs and what needs to be added as features, so we might have something like that in the future. From, sorry? From a pull request. Yeah, so you can create custom pipelines, watch for your webhook triggers, and build your pipeline whenever a pull request comes in. While creating the webhook, you can say: just ping me when a new pull request comes in, and when it does, your custom pipeline pulls the code from that particular pull request, builds it, and deploys it so that anyone can test it. And we've actually done this previously in our testing environment, where we were using some other repo that we needed to test with the OpenShift console repos. What we did was create a similar CI pipeline where you create a pull request, and a bot would create a new deployment on OpenShift and add the URL to your pull request as a comment.
So any reviewer can come, click on that, and easily test the UI or your application. That's like a pipeline you guys built that's not native to this. Yeah. It'd be really cool if it was. We have plans for adding pipeline templates to the pipeline operator. The Pipelines team is working on that, and we already have integration in the Dev Console: while you're creating your application, while importing from Git, you might have seen a pipeline section, but with an exclamation mark saying that there's no pipeline template for your runtime. Later on, when the pipeline operator team adds those templates, you'll be able to use them easily and create pipelines from them. So that's the plan: we'll have those templates, which would do something extra beyond what a build config and deployment config do for you. So, a similar thing. Yeah. Okay. Yeah. So we have a team working on GitOps, which would make it easier for you to do GitOps using the Dev Console and OpenShift in general. Right now we don't have that capability, but yeah, it's coming. It's a very new project; we are still kind of a newborn child, so we'll grow. I think my time... Yes, so for serverless, the same triggers are available: whatever you saw, like the webhook trigger, image change trigger, or service change trigger. Whenever a service changes, it creates a new revision, or whenever your code changes, you can set up those webhook triggers and easily integrate that. Yeah? The app group concept. Right. Is it implemented via labels, or is it actually a resource? Yes, it's implemented via labels. You see the same thing in Odo as well: via the CLI, it creates this application-group concept and uses labels to manage it. Yeah? Okay. Thank you. Great audience.