Hello everyone. Thank you for attending our session virtually today. We are going to talk about how to multiply the power of the Argo projects by using them together. It's a really exciting topic for us, since we have done a lot over the past year. My name is Hong Wang, and my co-speaker is Alexander Matyushentsev. We are both Principal Engineers at Intuit.

Here is the overview: we will talk about who we are, then move to the project updates, and last but not least, the most exciting part, the demo.

We are from Intuit. We have 9,000 employees in 20 countries, serving 16 million customers all over the world. We create financial products for our customers, and we are the creator of TurboTax, QuickBooks, and Mint.

The Argo project is a set of Kubernetes-native tools for deploying and running jobs and applications. It uses GitOps paradigms, such as continuous delivery and progressive delivery, and enables MLOps on Kubernetes. It is made up of four independent projects, and I will go into each one soon.

We have a very strong community. The project has been recognized and used by a lot of companies. We were accepted as a CNCF incubating project, and we have 10,000 stars, 500 contributors, and 170 end-user companies. Recently, we created the bootstrap committee, with members from Intuit, BlackRock, Red Hat, and Alibaba, to help with governance. The idea is to keep the project stable and provide direction for each product. We are very proud of the current progress and enjoy being part of the open source community.

Now let's jump into the individual projects. Alright, let's start with Workflows and Events. Argo Workflows is a container-native workflow engine, and Argo Events is an event-based dependency manager. We recently released Argo Events 1.0, which was a huge milestone for us; please check it out. It is more flexible, stable, and scalable. Regarding Argo Workflows, the core features are relatively stable, so we went deeper to help with more specialized use cases.
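For anyone new to Argo Workflows, a workflow is declared as a Kubernetes custom resource. A minimal sketch, with an image and command chosen purely for illustration, might look like this:

```yaml
# Minimal Argo Workflow: runs a single container step to completion.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-        # Argo appends a random suffix to the name
spec:
  entrypoint: main            # which template to run first
  templates:
    - name: main
      container:
        image: alpine:3.12
        command: [echo, "hello from Argo Workflows"]
```

Submitting it with `kubectl create -f` (or the `argo submit` CLI) schedules a pod that runs the container and records its result.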
For example, memoization, which helps reuse remembered outputs on reruns to speed things up and reduce cost. Mutexes help guard non-shareable resources, like a database. Cron workflows are another handy enhancement that replaces Kubernetes CronJobs, allowing you to run more than just a single pod on a cron schedule. There have also been a lot of user experience enhancements. In the future, we will address more scaling, reliability, and performance issues, and we plan to add a user interface for Argo Events.

Argo CD is a very popular project, and a lot of companies have used it as the backbone of their deployment solution for Kubernetes clusters. So we have worked very hard on performance- and scalability-related issues. Currently, it can support 2,000 applications, 100 clusters, and also monolithic repositories. Within the Argo ecosystem, several sub-projects have reached a good level of maturity and strengthen the Argo CD story: ApplicationSet, which supports the app-of-apps pattern; GitOps Engine, a reusable library that implements the core GitOps features; Image Updater, which monitors Docker registries and automatically updates images; and Notifications, which notifies users about important state changes. For the future, we plan to build a GitOps agent, which will be a lightweight Argo CD based on GitOps Engine. Serviceability is another area we would like to put more effort into; as more and more companies run Argo CD at large scale, we would like to provide more tools to get insight into the system.

Argo Rollouts is the youngest Argo project: a declarative progressive delivery and experimentation tool. However, it has good momentum. More and more companies see a gap and are trying to find a solution in this area. ADP, Spotify, Intuit, and many other companies have adopted Argo Rollouts. From a basic standpoint, Argo Rollouts supports two update strategies, blue-green and canary. However, it shines in the details.
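The memoization and mutex features mentioned earlier are both declared on the workflow spec. A hedged sketch, where the parameter name, ConfigMap name, and mutex name are all made up for illustration, might look like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: memoized-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: input
        value: "42"
  synchronization:
    mutex:
      name: shared-database      # only one workflow holding this mutex runs at a time
  templates:
    - name: main
      memoize:
        key: "{{workflow.parameters.input}}"   # identical keys reuse the cached output
        cache:
          configMap:
            name: step-cache                   # cached results are stored in this ConfigMap
      container:
        image: alpine:3.12
        command: [sh, -c, "echo computing for {{workflow.parameters.input}}"]
```

On a rerun with the same memoization key, Argo can skip the step and return the cached output instead of re-executing the container.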
It supports fine-grained weighted traffic shifting through integrations with ingress controllers and service meshes. The rich set of metric provider integrations also makes it possible to run more sophisticated experiments and finely control the rollout progress. Okay, let me hand over to Alexander. We can check out his amazing demo together. Thank you.

Thank you for the introduction, Hong, and thank you for the great update about every Argo project. I'm going to talk about how you can get the maximum out of Argo by using the projects together. The Argo projects are not really tightly coupled with each other; each is focused on just one use case and tries to do it as well as possible. But you can still use them all together, they complement each other really nicely, and together all four projects form a very powerful application delivery platform.

We can talk a lot about why that is true, so instead we're going to build a scalable and resilient system which implements a real use case. And we're not going to code; instead, we're going to use existing open source projects. Again, each project doesn't know much about the others and just does its own work, and we're going to use Argo to glue everything together and make them work together. As I mentioned, we're not going to write a single line of code, except that we will have to write some YAML. And thank you for laughing at a joke about YAML, if you are. But to be serious, during the demo I'm going to prove to you that that YAML provides us so much benefit that it's totally worth spending the time writing it.

Okay, let's move on. A little bit more detail about what exactly we are going to build. We're going to create a web service, and the web service is going to have a user interface, which is here on the diagram.
Filestash is going to be our web user interface. Filestash is an open source project that implements a user interface on top of different storages, including S3. The next component is our S3 storage, which is Minio. I'm sure a lot of you have heard about Minio; it's a great S3-compatible storage which you can run inside of Kubernetes, and that's going to be an infrastructure-layer component.

Argo Events is another infrastructure-layer component, and it is going to listen to every event in Minio. Every time a new file is uploaded, Argo Events will trigger an Argo workflow in the background. The Argo workflow is going to download the file from Minio. It will use facedetect, an open source project that implements all the machine learning magic. Facedetect will get access to that image, detect the faces on it, produce a new image with all the faces highlighted, and upload it back to S3. And then, finally, Filestash will show us the uploaded image with the faces.

Plus, we are going to have two control-plane components. One is Argo CD and the other is Argo Rollouts. Rollouts is going to manage Filestash, and Argo CD manages the whole thing.

Okay, I think we're ready to go. By the way, you can do the same demo yourself. If you open that URL right here, let me open it, you will be redirected to the readme page of the Git repository. The readme file is pretty much an introduction to this presentation: it explains what we're going to build and what functionality we're going to get, and you can really start from the "let's do it" part. And it's time to do it ourselves during the demo.

So far we have talked about the system from the user's perspective, top to bottom: first the application and the user interface, then the infrastructure, then the control plane. To execute the demo, we have to do it in reverse order.
So first of all, we need to create the control plane, and before we create the control plane, we need a cluster. It takes some time to create a cluster, so I did it ahead of time, and I already have minikube running. You're free to use whatever cluster you want.

Let me go ahead and create Argo CD first. Before I create anything, I just want to show you that I have a brand new minikube cluster; it was created 26 minutes ago. What I need to do is execute two kubectl commands to create Argo CD. The first just creates the Argo CD namespace; let me make it a little bigger. Second, I need to execute kubectl apply, and that kubectl apply just pushes the typical Argo CD installation into my minikube cluster. So let's go ahead and do it. Argo CD is now being created in the background while minikube downloads the container images.

The next thing we should do is configure our control plane to deploy the rest of the infrastructure. As you might know, Argo CD provides a user interface and a CLI, so you can use the CLI or the UI to configure Argo CD imperatively, or you can use declarative configuration, which is really nice for a demo. To apply the declarative configuration, I just need to execute one more kubectl apply command, which is very handy. I'll go ahead and do it right away.

Next, I want to jump back to the repository and show you what we just did. First of all, let's take a look at the file which we just sent to our minikube cluster. Here is that file. It's just a YAML file which has several objects, so let's talk about those objects. First of all, we are seeing here an instance of a custom resource definition called Application. Let's spend a couple of minutes just talking about what an Application is. An Application is an abstraction which Argo CD introduces, and it's pretty much Argo CD's configuration.
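An Application ties a Git source to a cluster destination. A minimal sketch, where the repository URL and path are hypothetical and the kubecon-demo namespace comes from the demo, might be:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: webservice
  namespace: argocd            # Applications live next to Argo CD itself
spec:
  project: default
  source:
    repoURL: https://github.com/example/kubecon-demo.git  # hypothetical repository
    path: manifests/webservice                            # directory with the desired state
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc   # the same cluster Argo CD runs in
    namespace: kubecon-demo
```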
To better understand what it is, we should just take a look at the Application spec, and we really just need to focus on two fields: source and destination. Let's talk about source first. The source explains to Argo CD where your desired cluster state is located. We just point to a directory in a Git repository, and in this case it's the same Git repository. Then there is destination. The destination explains to Argo CD where the manifests are supposed to be installed. In this case, we are going to install into the same cluster where Argo CD is running, and we are going to use the kubecon-demo namespace. So once everything is created, we should get a kubecon-demo namespace. And that's pretty much it.

We have several applications here. Instead of looking at these applications in this YAML file, we can move back to the repository and take a look at the files in that repository. We have the infrastructure components here and our web application that leverages the infrastructure. Let's take a look at the infrastructure first. There is not much there; this is the least interesting part, I guess. Here you can just see some minor tweaks related to installing Argo Events, Rollouts, Workflows, and Minio, such as a pre-configured bucket and some RBAC rules.

Next, the more interesting part: the web application which actually leverages that infrastructure. First of all, here we have a Rollout object; let me find it. Here it is. A Rollout, as you might know, is like a big brother of the native Kubernetes Deployment. It does the same job as a Deployment: it creates a ReplicaSet, which creates pods and eventually runs some containers. Plus, it supports advanced deployment strategies, and in this case I'm using blue-green. As you can see, this particular Rollout eventually creates the Filestash container. That's it. In addition to the Rollout, we have two interesting objects. One is an EventSource.
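A Rollout like the one just described, using the blue-green strategy, might be sketched like this. The image tag, port, and service names here are assumptions, not the demo's actual manifest:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: filestash
spec:
  replicas: 2
  selector:
    matchLabels:
      app: filestash
  template:                      # the same pod template a Deployment would use
    metadata:
      labels:
        app: filestash
    spec:
      containers:
        - name: filestash
          image: machines/filestash:latest   # image tag is an assumption
          ports:
            - containerPort: 8334
  strategy:
    blueGreen:
      activeService: filestash            # Service pointing at the live version
      previewService: filestash-preview   # Service pointing at the new version
      autoPromotionEnabled: true
```

Everything above `strategy` is ordinary Deployment-style configuration; the blue-green behavior itself is the handful of lines under `strategy`.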
The EventSource configures Argo Events to watch for events in the Minio bucket. And then the Sensor is the most interesting one: it eventually produces the workflow. This workflow literally consists of two steps. The first step, find-faces, is based on an image which I created for the demo; it basically just runs facedetect with a couple of parameters. And finally, the second step uploads the result back to S3. Yeah, I think this is it. We have covered everything in that repository, and we're ready to go check what Argo CD did for us in the background.

First of all, I need to start minikube tunnel so we can access all the services. Okay, while it's working, let me just talk a little bit about how we can do that. We could use kubectl to inspect our cluster, but that is kind of boring and not easy, especially given that we have the Argo CD UI, which provides us a lot of information about our cluster. It seems like the tunnel has started. Oh, I know why. Next, we need to actually patch the Argo CD service and change it to the LoadBalancer type so it will be available outside of the cluster. Next, the tunnel asks me for a password so it can open access to port 80. And finally, Argo CD should be available on local port 80. It seems like it's up. This is great.

Next, we need to log in. The built-in user is admin, and the password is the Argo CD server pod name. Let me go ahead and use it. Great. By the way, this is the Argo CD user interface, and what it does is show us all the applications configured in Argo CD. As expected, we have five applications here: the infrastructure components, such as Argo Events, Rollouts, Workflows, and so on, plus the web service. As you can see, all the icons are green, which is great because it shows us that all the components are up and running.
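For reference, the EventSource and Sensor pair described a moment ago could be sketched roughly like this. The bucket name, endpoint, secret names, and trigger body are assumptions based on the Argo Events 1.0 API, not the demo's exact manifests:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: minio
spec:
  minio:
    uploads:                       # named event: fires when a new object lands in the bucket
      bucket:
        name: uploads              # bucket name is an assumption
      endpoint: minio:9000
      events:
        - s3:ObjectCreated:Put
      accessKey:
        name: minio-creds          # Secret holding the Minio credentials
        key: accesskey
      secretKey:
        name: minio-creds
        key: secretkey
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: find-faces
spec:
  dependencies:
    - name: upload
      eventSourceName: minio
      eventName: uploads
  triggers:
    - template:
        name: find-faces-workflow
        k8s:                        # create a Workflow resource for each event
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: find-faces-
              spec:
                entrypoint: main
                templates:
                  - name: main
                    steps:
                      - - name: find-faces      # run facedetect on the uploaded image
                          template: detect
                      - - name: upload-result   # push the highlighted image back to S3
                          template: upload
                  # the detect and upload container templates are omitted for brevity
```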
And running means not just that the Deployment was created, but also that all the pods started, the readiness probes are passing, all the services got IP addresses, and pretty much everything is ready to use. This demonstrates that Argo CD is not just a GitOps operator; it is also a very powerful Kubernetes dashboard, and you can use it to learn a lot about your Kubernetes cluster. If you want to learn more details about your application, you can locate the application on the screen and click on its element; let's use Argo Workflows as an example. I just clicked on that element and I was directed to the application details page, which has a lot of details. I guess the most interesting one is the tree view, which shows us all the resources that are part of the Argo Workflows application. But in this demo I'm really interested just in the services, so let me filter to services here. Argo Workflows has a couple of services, and I'm interested in the argo-server service. It's available on a port on localhost because I use minikube, and I really just need the port from here. Let's go ahead and try to open the URL.

Yeah, here we go: we have the Argo Workflows user interface. I opened it on purpose, because we are going to need it for the demo. The Argo Workflows user interface is pretty much an operator console that an operator can use to see background jobs and how they are progressing in real time.

And then, finally, we need one more URL: we want to access our application, which is Filestash. Again, I just need the port, which is 9001. Okay, here's the Filestash user interface. As expected, it has no images, because we have not uploaded anything to Minio yet, so it's just showing us that nothing is there. And this is it. I want to take a quick pause and recap what we've done.
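The whole setup, recapped next, boils down to roughly these commands. The install manifest URL and the apps file name are illustrative, not the demo repository's exact paths:

```shell
# 1. Create the namespace Argo CD lives in
kubectl create namespace argocd

# 2. Install Argo CD from the standard install manifests
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# 3. Apply the declarative Application definitions for the demo
kubectl apply -n argocd -f apps.yaml
```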
So we started from an empty minikube cluster, we literally ran just three commands, one to create the namespace and then two kubectl apply commands, and we got the whole system up and running. Let's go ahead and prove it's actually working by uploading an image. Here's the upload button. Of course, I have an image prepared ahead of time; I'm going to use that Friends poster. Sorry, Stranger Things; I like Friends more.

What should happen now is this: we uploaded an image, Argo Events should detect it, and it should trigger a workflow. If we move to the Argo Workflows user interface, we can see that one workflow is running right now. Let's click on it and see the details. Basically, this UI is going to visualize the steps as they execute, and you can use the UI to get even more information. For example, you can click on a step, go to its events, and see what exactly is happening. So it was downloading the image, then it started the container, and now it waits for the container. Let's give it a couple more seconds to complete. Please. Come on. Yeah, still waiting. Oh, finally, it's running. Awesome. And it immediately succeeded. So that was find-faces: it hopefully found the faces and then passed the result on to the upload-result step, which is doing the same thing. Okay, so it's pulling the image, it created the container, it started the container. I suspect my minikube is quite busy, but let's give it a couple more seconds to complete. So it's running, hopefully uploading data to S3. And it's done. Awesome.

If everything worked fine, we should go back to Filestash and refresh the page. And we've got a second image, which has the faces highlighted on it. Awesome. I find this really exciting. And I did promise to talk about YAML a little bit. That YAML didn't just give us a fully working system that we could start with a couple of commands.
In addition to that, the YAML actually encapsulated the whole application lifecycle, and we don't have to reinvent it again. I'm referring to the Argo Rollouts object in this case. The Rollout object supports the blue-green strategy, and the way blue-green is supposed to be done is encapsulated in just four lines of YAML. I think that's really exciting. And yeah, that ends my demo. Thank you. Please go ahead and try it yourself: try to upgrade Filestash using the instructions in the readme and see the blue-green upgrade strategy live.

But this is not the end. I want to mention one more component, which is kind of hidden but also very important. That component is GitOps Engine. So what is GitOps Engine? It's a library that was carved out of Argo CD. What we did was take the implementation of the core GitOps features out of Argo CD and put it into a reusable Golang library. That library includes not just the basics; it also has advanced GitOps features such as sync hooks, sync waves, and much more. The library has powered Argo CD for quite a while already, and it is getting more and more mature. And we are really happy to welcome a new consumer: GitLab. I'm super excited to tell you that GitLab recently released an alpha version of the GitLab agent, an active in-cluster component that pretty much integrates GitLab and Kubernetes, and it is powered by GitOps Engine. Why is this so exciting? Because now we actually share the same code, so both GitLab and Argo CD do GitOps in pretty much the same way, and the GitLab and Argo teams are actively collaborating on new features and bug fixes. So you will see a lot of great stuff soon.

And this is it. Thanks a lot for listening to our demo and the update about the projects. Please go ahead and ask your questions if you have any.