Cool. Okay, hi. So, I'm here today to present one of the hidden gems that I found recently. It's called ko. Before we begin, just a little bit about myself. I'm Stanley. I'm currently working for Xendit; we're building payment infrastructure for Southeast Asia. In my free time I help co-organize Go Singapore as well, and I deliver my applications, written in Go and Node, using Docker and Kubernetes.

So, logistics out of the way: what is ko, exactly? ko is a tool for building and deploying Go applications to Kubernetes. It's built around the Go import path convention, which I'll explain later. But before that, let's take a deep dive into the problem it was made for.

I'm assuming most of us in the audience are familiar with containerization and Docker. For those who aren't, containerization is basically a technique of bundling all your application's components, together with dependencies, configuration, runtime and so on, into a self-contained module that can run on a host machine. The basic process of dockerizing a Go application and deploying it to Kubernetes is: first `go build`, then `docker build`, then push that Docker image to a Docker repository, and finally `kubectl apply` into your Kubernetes cluster.

Fast-forward to the age of microservices, and we have to manage tons and tons of different builds, each with its own set of Dockerfiles. Things start to grow out of control, with lots and lots of configuration in the form of Dockerfiles and YAML files. Now, some of you might be thinking that tools like Skaffold, or a Makefile, can wrap this process across different languages and Dockerfiles to make it more manageable and faster. But as application developers, we would still need to write a lot of Dockerfiles, and arguably even more YAML files to describe the orchestration. There's a pattern here that I've observed that's worth optimizing for in development effort.
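The four-step flow described above can be sketched as a shell session; the image name, registry, and manifest directory here are placeholders, not from the talk:

```shell
# 1. Compile a static Linux binary for the container.
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o app ./cmd/hello-earth

# 2. Bake it into an image using a handwritten Dockerfile.
docker build -t docker.io/example/hello-earth:v1 .

# 3. Push the image to a registry the cluster can pull from.
docker push docker.io/example/hello-earth:v1

# 4. Deploy the manifests that reference that image tag.
kubectl apply -f config/
```

Multiply this by every microservice and every deploy, and the Dockerfile-plus-YAML overhead described here adds up quickly.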
It's a major pain point that dockerization is so invasive in my development environment, requiring a lot of configuration effort. I'm not saying this is never necessary, because at times we need to carefully handcraft Dockerfiles to meet business requirements. But most of the time, my apps are delivered with the same build configuration across the majority of my microservices.

So ko is targeting this group of users: it provides a way to eliminate the need for build and deploy configuration, to eventually make containers invisible infrastructure and one less thing that application developers like us worry about. This is the new model of the deployment process that ko constructs: instead of having multiple Dockerfiles, building an image from each one, pulling from the published repo and deploying to a Kubernetes cluster, we just need a one-liner that handles everything to get our microservices up and running in the cluster. This is particularly useful for Knative, because you have a very large number of containerized images, and as a matter of fact, ko was actually made for Knative. It used to live inside one of the Knative repositories before being extracted into a separate repo as a standalone tool.

Okay, cool. So that's the problem ko is trying to solve. How does ko achieve this zero-config deployment workflow? It leans on Go idioms to simplify the whole process and eliminate configuration: in the same way that the `go get` command installs binaries onto your local machine using an import path, ko uses the Go import path to refer to the command that will start your container. Our concerns are now simplified from docker build plus docker push plus kubectl apply into just ko. I'm going to go through some of the components that make ko possible, and also its features. `ko publish` simply builds and publishes an image for each import path given as an argument.
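A minimal sketch of that simplification, assuming a `hello-earth` command in a hypothetical repo (the import path and manifest directory are placeholders):

```shell
# Build and publish an image for one command, addressed by its Go import path.
ko publish github.com/example/planets/cmd/hello-earth

# Or build, publish, and deploy a whole set of manifests in one line.
ko apply -f config/
```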
The main function of the program must be in that path, and ko builds and publishes a Go Docker container that contains the binary. `ko publish` also supports relative import paths in the context of a GOPATH repo.

So where does ko publish images to? It publishes to a local Docker repo by default, but we can set the target using the environment variable `KO_DOCKER_REPO`. The reason this was extracted into an environment variable is that different developers, or different build environments, will want to publish to different Docker repositories, so it should be an environment variable.

The next component is `ko resolve`. It basically takes a Kubernetes YAML file, the same kind you'd pass to `kubectl apply`, and determines the Go import paths to build, dockerize and publish. Its output is not something applied to a Kubernetes cluster, but a string: the concatenated YAML, with each import path replaced by the published image's digest.

The final piece of the puzzle is `ko apply`. It's basically parallel to `kubectl apply`, if you use Kubernetes, acting on the same output that `ko resolve` produces. So as developers make changes to their Go application, they can just run `ko apply` to rapidly rebuild, re-push and redeploy into their development clusters. And last but not least, we have `ko delete`, which is a convenient wrapper around `kubectl delete`.

Okay, so while ko aims for zero config, there are times when we still want to override some of the default behavior, and that's where `.ko.yaml` comes in handy. In `.ko.yaml` we have two options, just two, yeah; we try to have as little configuration as possible. The first one is the default base image. By default, ko builds your image on top of a nice distroless image, but we can replace that with a base image of our choice, or even override it for each import path individually. So now some of you might be wondering how we include static assets in our applications, because that's important, isn't it?
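A `.ko.yaml` using both of the options just mentioned might look like this; the base images and import path are illustrative, not from the talk:

```yaml
# Base image used for every binary ko builds, unless overridden below.
defaultBaseImage: gcr.io/distroless/static

# Per-import-path overrides, for commands that need something different.
baseImageOverrides:
  github.com/example/planets/cmd/hello-earth: gcr.io/distroless/base
```

And if you prefer to keep `kubectl` in the loop, `ko resolve -f config/` prints the same manifests with each import path swapped for a pinned image digest, which you can pipe to `kubectl apply -f -` yourself.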
The `KO_DATA_PATH` environment variable is the answer to this. All content inside your kodata directory will be copied into your image and made available under the path in that environment variable. This is actually another Go idiom: it draws from the same idea as Go's `testdata`. Where `testdata` is included and made available for `go test`, `kodata` is included and made available for ko-deployed applications.

So, let's get into the cool stuff: the demo. I have five applications that I'm going to deploy using Docker first, and then I'll show you how to do the same thing with ko. Each of my services has a Deployment built from one of these Go applications, exposed through a Service. To deploy these applications, first... okay, hang on. I need to do a Docker build and tag it with the image name that I want to publish. Is the text clear enough? Hang on, let me just change it... should be good, right? Okay, is it better now? Nice. Alright.

Okay, so basically it's building the hello-earth image. Let me just go through each of the Dockerfiles. This is my handcrafted Dockerfile that I normally use to deploy Docker images for Go applications. While it's publishing, I'll quickly go through the application: this app basically serves an HTML string and an image. Even in my Docker build I'm trying to mirror the ko data path convention, just because I'm lazy and I don't want a different static-assets config for each tool.

Okay, so now we can docker publish... docker push, sorry. Because these steps are very tedious, I'm just going to run a Makefile that builds and pushes all of these images. So, yeah, I think I can start singing, as this is taking forever. Just to show you, let me open my Docker Hub and show you the published images. So all my images are being published... oh, I think they are published. Are they all published?
Oh, still waiting on one more, I think. Sorry, I should have reduced it to three services. Still publishing. Yeah, just showing you that there are no pre-pushed images; I'm actually doing everything from scratch. Luckily my images are light, so you don't have to wait another hour. Okay, cool. All the images have been published. Now I can do a kubectl apply. Oh, I'm using Minikube behind the scenes, by the way, but I started it before the talk because it takes forever to start.

So I have all my services up and running. Now I'm going to show you... okay. Oh, sorry. All my services are up and running. So let's travel to Earth. Oh... oh, yes, it works! But that was lots and lots of effort, right? Okay, now I'm going to bring everything down and show you how I could do the same with ko. All of that handcrafted Dockerfile work and publishing effort can be saved with ko. And I don't even need my Docker daemon to be running. Yeah, I'm going to do a `ko apply`. So I'm showing you my ko-powered Kubernetes manifests; I'm using my import paths here. Yeah, let's go.

Oh, sorry. Right now it's running the `ko publish` step: if you ran `ko publish` yourself, you would get these same messages, and these images would be published here. Sorry... Murphy's law. What? Am I being blocked by the WiFi or something? Yeah, but somehow I'm already logged in, right? Okay, sorry. Maybe I need Docker running now that I'm trying to publish to Docker Hub. But yeah, ko doesn't need the Docker daemon to be running to build its images, because it basically assembles the image by itself without using any Docker functionality. Hopefully it works. Okay, I'm going to publish to my local Docker repo instead, now that it's not working with the remote one. It's okay, I'll just demonstrate using my local Docker repo. Okay, cool.
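The local fallback used here can be sketched like this; `ko.local` is ko's sentinel value for publishing to the local Docker daemon instead of a remote registry (the manifest directory is a placeholder):

```shell
# Publish images to the local Docker daemon rather than a remote registry...
export KO_DOCKER_REPO=ko.local

# ...then build, publish, and deploy all manifests in one line.
ko apply -f config/
```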
So, everything... all the Deployments have been created, and if I get the Services, my applications are running, and I get the same result: travel to Venus. Yup. So with ko, it's just a one-liner. Okay, there was some trouble, but if you count from when I published to the local repo, then yeah, it's just a one-liner and you have your cluster up and running.

I think this is quite useful if I'm trying to mirror the behavior of my remote clusters locally: I can quickly rebuild and get everything running without much pushing and all that, so it makes my development faster. I would use this for development, but for actual deployment, I don't know your technical requirements well enough to make a recommendation. Yup, that's ko. So, any questions?

So, actually, for ko, you can just install it using `go get`; let me show you. It's written in Go, so it's pretty straightforward to just `go get` the command, and you have everything. It's a Go command-line application, yeah. You just need to do this and you're done. Yeah, they try to make it as simple as possible to get up and running.

So, for the benefit of the online audience who might not be watching the chat: we do have half a dozen watchers. Any questions for Stanley? You can put them directly into the chat. Alright. Okay, cool. Brilliant. And no questions in the room; it's done in silence. Thank you.
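For reference, the install step shown at the end was a plain `go get`; the module path below is ko's repository at the time, and newer Go toolchains would use `go install` instead:

```shell
# Install the ko CLI from source (pre-modules era style)...
go get github.com/google/ko/cmd/ko

# ...or, with a modern Go toolchain:
go install github.com/google/ko@latest
```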