Hi everyone. I'm going to talk about Gitkube, which is another developer-experience tool for Kubernetes. How many of you have used Kubernetes here? How many have used Docker? As the title says, this is about continuous deployment to Kubernetes using git push, so there is a lot of Docker and Kubernetes involved; I hope everyone will be able to follow both.

A little bit about myself: I'm Shahidh, you can find me on Twitter, and I'm from a company called Hasura. Anyone heard of Hasura before, from the crowd? Okay, nice to see some hands. We work on Kubernetes, Docker, Postgres, GraphQL and some of the other trending open source tech out there right now. We recently open sourced our GraphQL engine on Postgres, and that is the primary product we offer. We also have a Kubernetes platform, which is a more enterprise-oriented product. You can find all of it on GitHub; go there and star it, it makes us really happy. I work at Hasura especially on the Kubernetes and Docker CLI tooling.

Any Heroku users here, people who have used Heroku? Heroku introduced this concept of git push deployment. Earlier, what everyone did was write code and test it locally, and when they wanted to get it onto a server on the internet, they had to copy it there somehow; in the early days, I remember using FTP to copy files onto the machine. Later, people figured out they could just clone the git tree on the server, run some scripts over there and get it working. Then Heroku came in and introduced a new way of doing things, which is git push. You just write the code, do `heroku create`, and then `git push heroku master`. This git push detects the language your code is written in using something called buildpacks, builds it, and deploys it on Heroku.
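From the terminal, the Heroku flow I'm describing is just two commands (this sketch assumes you have the Heroku CLI installed and are logged in; `my-app` is a placeholder name):

```shell
# one-time: create the Heroku app, which also adds a "heroku" git remote
heroku create my-app

# every deploy afterwards is a single push;
# buildpacks detect the language and build the app on Heroku's side
git push heroku master
```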
So, this was very eye-opening for developers. Imagine the life of a front-end developer: the only thing they care about is that their application works in the browser. I know JavaScript, I know how to fix things; I don't want to care about SSHing into some machine and installing whatever. I just need to get this application delivered to my users, and Heroku made that possible with just one git push.

Then came Docker, and after it Kubernetes. Docker also introduced a new way of doing things: packaging your application into a container. A developer only needs to describe the container by writing a Dockerfile, and then they can run this container anywhere they want and get their code working. This was also a very good step for developers. Now they only need to learn about whatever stack they are working on, plus just a little bit of Docker. If the Docker container runs on their machine, they can hand it off to anybody, who can then run it on a server on the internet, whether using Kubernetes, Docker Swarm or Docker Compose. Then Kubernetes came, and now developers write a Dockerfile and Kubernetes YAML files; once they hand those off, anyone else can deploy them.

So, we thought: okay, this process can be made a lot easier. As a developer, you might need to know Docker, and you might need to know a little bit about the Kubernetes YAML you want. But if you want to get your changes immediately onto a Kubernetes cluster, how do we get from your laptop to Kubernetes in the fastest way possible? This is similar to a concept OpenShift introduced, but OpenShift sits on top of Kubernetes, and it's one more thing that you need to learn and run.
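The laptop-to-cluster loop I've just described usually boils down to three manual steps. As a rough sketch (registry, image tag and deployment names here are placeholders):

```shell
# 1. build the image locally
docker build -t registry.example.com/me/www:v2 .

# 2. push it to a registry the cluster can pull from
docker push registry.example.com/me/www:v2

# 3. point the running deployment at the new tag
kubectl set image deployment/www www=registry.example.com/me/www:v2
```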
On most cloud providers today you can click two buttons and get a Kubernetes cluster. So, let's look at the scenario before and after Gitkube. Before, there were roughly three commands: you write code, you do a `docker build`, then you do a `docker push` to some registry, and then you use a `kubectl` command to edit the deployment's image field to point at the image you just built. Gitkube is an attempt to simplify this process. And it's not just about three commands; you also need to think about how much time they take. If you're building a complex application and the Dockerfile contains several installation steps, then depending on your internet connection, all these build and push steps can take anywhere from a few minutes to a few tens of minutes. If you have a couple of GB to push, it will take a lot of time. So we thought, let's make this process easier. What we need is to build the container, so why don't we just build the container in the cluster itself, where high-speed internet is guaranteed by the cloud provider?

The way this is achieved is using git hooks. How many of you have used git hooks before? Okay, a great number. Git hooks let you attach actions to certain git commands: pre-commit hooks are there, pre-push hooks are there, pre-receive and post-receive hooks are there. Git hooks are amazing; the common thing that I do is add a pre-commit hook that runs linting and other checks before I commit any code anywhere. Gitkube makes use of a server-side receive hook: it runs on the remote when you push, after the objects are received but before the refs are moved forward to point at them, so it can validate and act on the push. I'll come back to this later. Git hooks combined with Kubernetes give you a lot of power.

So, the demo gods willing, let's see how that works out now.
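You can try the server-side hook mechanism entirely on your laptop, no Kubernetes needed. This is a minimal sketch: a local bare repository stands in for the in-cluster remote, and the hook just prints instead of building an image (all names here are invented for the example):

```shell
set -e
tmp=$(mktemp -d)

# A bare repo plays the role of the in-cluster git remote.
git init -q --bare "$tmp/app.git"

# pre-receive fires after objects are received but before refs advance;
# a Gitkube-style hook would kick off the image build at this point.
cat > "$tmp/app.git/hooks/pre-receive" <<'HOOK'
#!/bin/sh
while read old new ref; do
  echo "building image for $ref at commit $new"
done
HOOK
chmod +x "$tmp/app.git/hooks/pre-receive"

# Working copy: make one commit, push, and watch the hook run remotely.
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "initial commit"
OUT=$(git -C "$tmp/src" push "$tmp/app.git" HEAD:master 2>&1)
echo "$OUT"
```

The hook's stdout comes back to the pushing client prefixed with `remote:`, which is exactly how Gitkube streams build output to your terminal during a push.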
So, I have a short demo. I also have a recording available in case the internet doesn't cooperate; I don't know how it's going to work out. Let's look at the demo on my display. This is the GitHub repository. It's an open source tool, ready to go, so contributions are welcome; please go and star it, it just makes us feel better. You can install Gitkube on any Kubernetes cluster just by running a `kubectl create` command. We have also made a CLI to make this easier, so that you don't need to deal with the YAML every time. I'm going to use the CLI right now, but the CLI is not required. The whole value proposition here is that you don't need anything other than git on your system. You don't need Docker on your local machine, you don't need kubectl on your local machine; all you need is git.

So, I have the gitkube CLI installed, and I also have a Kubernetes cluster on GKE, which my kubectl context is pointing to. When I do `gitkube install`, it's going to install certain components on the Kubernetes cluster, and it's going to ask how you want to expose the service; these are the usual Kubernetes service types. I'm just going to say LoadBalancer, because this is GKE and it's the easiest way to move forward.

While that is getting created: there are many configurations available. Gitkube will build your source code and deploy it, it will also apply certain Kubernetes manifests for you, and it can also apply Helm charts. I'm going to show the monorepo example, in which a manifests directory contains my nginx deployment and service objects for Kubernetes, and a microservice directory contains a Dockerfile and an index.html file. If you look at the index.html, there is some HTML here.
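For reference, the install step looks roughly like this; I'm not quoting the exact manifest URL or prompt text, so treat this as a sketch and check the Gitkube README for the current commands:

```shell
# plain kubectl: apply the setup manifest shipped in the gitkube repo
kubectl create -f gitkube-setup.yaml

# or use the CLI, which installs the same components and then
# prompts for how to expose the service (LoadBalancer on GKE)
gitkube install
```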
Now, this is not a git repo yet, so I'm going to initialize a git repo here, add all of this, and commit it. This can be any git repo. Gitkube makes use of something called Kubernetes Custom Resource Definitions; many of you may have heard of CRDs. That is what actually drives this, and you still work with plain git remotes. So, to make Gitkube work, you need to write something called a Remote, a remote.yaml, and yes, one more YAML to deal with. So we thought, let's make developers' lives easier; we have a command for that: `gitkube remote generate`. You give it the file name, and it helps me create this remote.yaml file by asking a few questions. I'm going to skip the plain Kubernetes manifests, I'm not going to configure a registry right now, and I just give it the Dockerfile path and the context path for the docker build, for this one service. So, this creates remote.yaml in the monorepo directory.

Now I have this YAML file available, so I can `kubectl create -f` this file, or there is a shorthand in the CLI as well: `gitkube remote create -f monorepo/remote.yaml`. So, the Remote is created on the cluster, and it's telling me the remote URL that is now available. This can also be obtained from the Remote object: if I `kubectl get` it and look at the YAML, I can find the remote URL there too. So, I add this as a git remote, and now if I look at my git remotes, there's a remote called example. All this points at is the Kubernetes cluster: there is a gitkubed service available which has a public IP, and that is the IP in the remote URL.
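A generated remote.yaml has roughly this shape. I'm reconstructing the field names from memory of the Gitkube docs, and the SSH key, registry and paths below are placeholders, so consult the examples in the repo for the authoritative schema:

```yaml
apiVersion: gitkube.sh/v1alpha1
kind: Remote
metadata:
  name: example
  namespace: default
spec:
  # only pushes authenticated with these keys are accepted
  authorizedKeys:
    - "ssh-rsa AAAA... dev@laptop"
  # optional: registry to push built images to
  registry:
    url: "registry.example.com/me"
  deployments:
    - name: www              # deployment updated on every push
      containers:
        - name: www          # container whose image gets replaced
          path: microservice/www             # docker build context
          dockerfile: microservice/www/Dockerfile
```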
Pushes are only possible to this IP using SSH key authorization. If you look at this Remote here, my SSH public key is in it, and only I can push to this remote, so it's probably okay to leave the endpoint open to the public. Now all I need to do is `git push example master`. You can see the Kubernetes manifests being applied: deployment created, service created, and this is all output coming from the Kubernetes cluster. gitkubed is responding to the git client while the Docker image is being built; you can watch the image build, and it also reports whether the deployment and service went through. If I look at the deployments now, I can see that www is coming up, and if I look at the services, it's still waiting for an external IP, but the service is there. So, what I did here is: I created a Kubernetes deployment and service object, pushed my source code, got the Docker image built, and got a rollout, all with a single git push. If you are used to the Kubernetes and Docker space, you will understand how much this simplifies your life, because by now you would have executed ten different commands and it would have taken a lot of time.

So, let's look at the public IPs available. If I go to this IP, I can see the nginx container serving index.html. Now, we did have to go through a little bit of Kubernetes setup here: the remote.yaml had to be created, the git remote had to be set up. But once this is done, for every edit that you make, say I want to make some modifications to my index.html, all I do is remove this, add that, then git add, commit, and just push again. With that push, the Docker image is being rebuilt and redeployed on the cluster, and I did not execute a single command against the cluster myself.
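So after the one-time setup, the whole inner loop is plain git. The remote name and URL shape below are just what this demo used, not anything you need to memorize; the CLI prints the real URL for you:

```shell
# one-time: point git at the in-cluster remote that gitkube printed
git remote add example ssh://default-example@<gitkubed-ip>/~/git/default-example

# every iteration afterwards: edit, commit, push
git add index.html
git commit -m "tweak homepage"
git push example master   # builds the image in-cluster and rolls it out
```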
I'll go and refresh my page... still loading. Oops, didn't work, I guess. Okay, this was bound to happen; the network is a bit lazy today. Anyway, your code has changed, and you have the new files running in there. Wherever you want to make changes, you edit, commit and push, and your changes will be live on the cluster. We'll come back and see that once it loads.

Going back to my presentation, let's talk a little bit about how this works, the architecture. When we did `gitkube install`, all it did was create a set of Kubernetes objects, and one of them is a Kubernetes controller; I'll talk a little more about it later. Then what happened is: from your computer you created a Remote, which created a custom resource on the cluster. The controller sees that a new Remote definition has been created, and it creates a git remote inside the cluster. Now you can push to this git remote, and that will update or create the deployment, and also build the Docker image and all of that. The in-cluster build is triggered by a git pre-receive hook on that in-cluster remote. These are all implementation details: if you are really interested in how Gitkube works, you can dig into all of this; otherwise, you don't need to care about any of it, you can just keep pushing your code to any Kubernetes cluster. The authentication, as I mentioned, is SSH keys.

This also brings in a new pattern of doing things, using Kubernetes operators and controllers. This is the custom resource that we saw: there is the apiVersion, the kind and so on. It is a custom YAML that we defined, and Kubernetes supports this. Access control is done using SSH keys, and you can put the keys directly in the configuration.
You just need to tell the Gitkube controller where your Dockerfile is and what the build context should be.

So, why should you use this? It's easier: everybody knows git, you can just keep pushing things. Quick iteration time, again: you don't need to wait for the local build and the long push to happen. And there are no complicated rules to learn; you can restrict what Gitkube can do on the cluster using RBAC. There will be organizations where authorization is a concern: your DevOps people and administrators can set up Gitkube for you and then provide access using simple SSH keys, just how git works. And it's a very small tool; it can be replaced with whatever you want later, when you move to production or whenever you want a more elaborate pipeline to start with.

The idea of doing things with git push can also be extended to more DevOps tasks. You can modify this hook to do all sorts of things you want. Gitkube doesn't do this at the moment, but you can easily fork it, edit the hook, and do all those things. This is an upcoming idea in the DevOps slash GitOps world, where driving things through git is becoming very prominent. You can build and run unit tests, deploy your code, deploy configuration, apply stateful migrations; all of it can be done from git.

As I mentioned earlier, there's a new pattern of building applications on Kubernetes, which is this operator pattern. "Kubernetes is the new kernel," as someone put it; Kubernetes is a new application platform. I wasn't aware of this until earlier this year, but you may know the founding team of Kubernetes: Craig McLuckie, Joe Beda and Brendan Burns. I was at a talk by Craig McLuckie, and he talked about how Kubernetes is a platform platform: it's a platform to build platforms. That's what Kubernetes is trying to achieve.
It's giving you all these constructs with which you can build complex applications, or complex platforms for your own use cases. The way to achieve this is to make use of the Kubernetes primitive called Custom Resource Definitions. You already have built-in Kubernetes controllers managing things like deployments and services; now you can define your own controllers watching a custom resource definition, and these are called operators. And there are nice tools to help: the Operator SDK from the CoreOS team, which is now part of Red Hat, and another project called Kubebuilder; a lot of things that will help you implement this pattern. Alongside the standard Kubernetes YAML, you will have your own application-specific YAMLs, and your own application-specific controller working constantly in the background to realize the YAML definition you have written, by observing the cluster. This is the common way of doing things in the Kubernetes ecosystem: there is a controller looking at the current state of the cluster, it also knows what the desired state should be, and it takes actions to realize the desired state.

This brings us to a comparison that can be drawn between a typical CI/CD pipeline and its equivalents in the Kubernetes domain. Building and running tests can be done in a Dockerfile; a multi-stage Dockerfile does this very easily. Deployment steps become Kubernetes manifests, and if you have your own custom resource definitions, you can do a lot of other extra stuff with them. Integration tests can be run on Kubernetes, in containers and jobs.

This introduces a new paradigm called GitOps, and this is the definition of GitOps given by the folks on the Weaveworks team, where everything is declarative and version controlled; you can go through it in detail later. Whatever can be described can be observed.
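The controller behavior I just described is the classic reconcile loop that every operator implements. As pseudocode:

```
# reconcile loop run by an operator/controller (pseudocode)
forever:
    desired = read_custom_resources()     # what the user's YAML asks for
    current = observe_cluster_state()     # what actually exists right now
    for each difference between desired and current:
        take_action_to_converge(current, desired)
    wait for the next watch event or resync tick
```

Gitkube's controller is exactly this shape: the Remote object is the desired state, and the action taken is setting up (or updating) the in-cluster git remote and its hook.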
And technically, what GitOps says is: whatever you can define in git can be automated, and can be easily observed later. So, is GitOps different from CIOps, is it a brand new thing? It's not a replacement for CI. We at Hasura believe that CIOps, that is, CI/CD pipelines, are an imperative way of doing things, while GitOps is declarative. If you look at a pipeline where you define certain steps to achieve a final state, that's imperative: your steps are what gets defined. With GitOps, you only define the final state, and there are controllers working in the background to achieve that final state; you don't write the steps of the pipeline in a YAML file.

These are the advantages of GitOps: it's declarative, you get version control on it, you get a code review process on your configuration, and you get monitoring of the current state against the desired state. It also brings very clear developer and operations boundaries, where developers only worry about the code, operators worry about how to deploy it onto the cluster, and developers can deploy to the cluster using git. But there are challenges to be solved yet. You cannot put everything in git, secrets for example, and new tools are required to achieve certain things which have traditionally been done in CI. These problems are getting solved, and a lot of companies are working towards achieving this.

So, that's my talk. Thanks for listening. I have a couple of minutes for questions, so if you have any, feel free to ask. Otherwise, just go check out Gitkube; contributions are welcome, find us, star the repo. Any questions? Pardon? Right, the authentication is done using SSH public keys, so your administrator, or you yourself, can create a Remote.
Whoever is the cluster administrator can create a Remote, and in it you can say that only these SSH keys are allowed to push. This gives you very easy user management: to onboard someone, you add their SSH key to the Remote. That's how you do the management.

On pushing branches: if you want to push a development branch, on your local machine you do `git push <remote-name> <your-branch>:master`, which pushes your branch to the remote's master. Every push results in a deployment, so if you push ten commits together in a single push, those ten commits are taken together, a new Docker image is built from the latest commit, and it's deployed as a new Kubernetes rollout. You can achieve zero-downtime rollouts and so on using the same mechanism. And you can configure which deployment you want to update, and which container in that deployment, for any push to this remote; all of this is configurable in the Remote object.

Any other questions? Right, the question was about provisioning: does it also handle provisioning the cluster itself? No. The way it is designed, you should see it as a developer tool for development use cases. Wherever you want the application to run, you install Gitkube there itself, and it updates that same cluster. It will not contact a different cluster in its current state, but you can always script around that for your own case.

Okay, thank you, thanks a lot. You can find me here; I have some Hasura stickers, right here at the front. Thanks a lot.

In a few minutes we have our tea break; tea is served on the 8th floor, and in between you can visit the exhibition area on the central ground floor.