Hello, and welcome everyone. Today we will walk you through how we empowered Argo Workflows as a distributed job scheduler platform at Trendyol, and the journey our teams followed to get there. We try to provide unified solutions to common problems across different technologies. Let's start by introducing ourselves. I am a software engineer on the delivery team, where our main focus is developer productivity. Some of my personal interests are distributed systems, application platforms, and sometimes I enjoy mobile development. From this point, Doğukan will continue, and he will talk about who we are, our current tech status, and the journey of our job scheduler with Argo Workflows. Thanks, İsmail. Hello everyone, I'm Doğukan. I'm a software engineer at Trendyol, working on production delivery tools such as the Argo products. I also have a special research interest in computational engineering, GPU platforms, and their applications in metaverse and AI research, and I constantly develop with C++. I hope you find the session interesting and insightful. So, let's head into our agenda. We will talk a bit about Trendyol, and then we will go over the story of our job scheduler needs and solutions. Finally, we will cover what's coming next and then conclude. As a reminder, any questions are welcome; please don't hesitate to ask. Let's start with what Trendyol is. Trendyol is an e-commerce and technology company, the first decacorn in Turkey, and one of the top e-commerce platforms in the world. Trendyol's mission is to make a positive impact by offering a seamless e-commerce experience to customers and sellers. Here on the screen, you can see some of the important facts about Trendyol. As you can see, we serve multiple platforms to millions of people, and we actively operate in Turkey, Germany, and a few other European countries too. That's why our platforms are always in some kind of pursuit of innovation.
If you want to learn more about Trendyol, please visit our LinkedIn page. Now we can look at our current tech status. I will talk a little bit about our application infrastructure and platform status. Our current stack relies heavily on the Kubernetes ecosystem. In this ecosystem there are more than 250 clusters, and we are one of the early adopters of Argo, maintaining the whole Argo product family internally: Argo CD, Argo Workflows, Argo Rollouts, and Argo Events. Now, let's look at our metrics. In this part, infrastructure metrics are shown. The service count is more than 8,200, with 7,500 registered on Argo CD. All of our infrastructure metrics are publicly reachable for those who are curious, so please feel free to inspect them. There is lots of real-time info about regions, memory and CPU usage, virtual machines, clusters, and many more system metrics. The cron info here shows the jobs registered on Argo Workflows, broken down by team. We assemble and process data from all these clusters via our service catalog data collector for CronWorkflow kinds, and calculate adoption team by team. We definitely love the Argo product family, and this is our current state in general. But how did we get here, and how did we reach Argo Workflows? This is the second section of our agenda, where we try to explain why we needed a distributed job scheduler and why we chose Argo Workflows. We all know that maintaining and scheduling service jobs in big technology companies can be pretty challenging, and this can make developers uncomfortable. Under such conditions, innovation surely suffers. In our situation, at first, teams were keeping their jobs on different platforms and environments. Failures and downtime were constantly rising, runners were blocking, and CPU utilization was climbing. There was also no unified visibility. Jobs suffered from a lack of stability, manageability, and effectiveness.
We had more than 750 scheduled jobs distributed across different tools and solutions, so handling cron jobs this way was overhead for us. It was increasing our error rates and eventually causing outages. Over time, developer requirements piled up in our backlog because the developer experience had to be improved, and eventually our search for a new platform started. The first criterion was the ability to handle and track jobs without issues or configuration complexity. We also wanted a unified solution for our distributed job scheduling needs, one that deals with scalability and disaster recovery scenarios. We compared and scored various platforms for days, then analyzed how the different platforms could manage workloads for more than 100 teams and more than 750 jobs. Our first choice was to find a Kubernetes-native platform, as our entire infrastructure is Kubernetes based. Finally, we decided to go with Argo, because we were already part of the Argo ecosystem with Argo CD, so the mindset is the same. With this platform we got all of this, and the rest is history; thanks to the Argo community. Thank you, Doğukan. Doğukan told us the starting point of our journey: why we needed it and what we were looking for. Now I will try to explain why we chose Argo Workflows. Here on this slide we marked some of the advantages that Argo Workflows gave us; I will briefly go over them. It natively supports Kubernetes: this was our first criterion while searching for the right tool for our needs, so it is a big plus for us. It's a product of the Argo family: we have been using Argo CD for more than a year, so it was close to our ecosystem and we had know-how of it. Ease of deployment and maintenance: at Trendyol, as the tech stack keeps growing, we want to keep our footprint smaller every day and minimize the effort to maintain these tools. Argo Workflows is based on custom resource definitions on Kubernetes, so we can keep all the configs as code, stored on Kubernetes.
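To illustrate the CRD-based approach described above, here is a minimal sketch of a scheduled job as an Argo Workflows `CronWorkflow` manifest. The job name, namespace, schedule, and container are hypothetical placeholders, not our actual configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: nightly-report        # hypothetical job name
  namespace: team-a           # each team gets its own namespace
spec:
  schedule: "0 3 * * *"       # run every night at 03:00
  concurrencyPolicy: Replace  # don't pile up overlapping runs
  startingDeadlineSeconds: 300  # still start if a schedule was missed recently
  workflowSpec:
    entrypoint: main
    templates:
      - name: main
        container:
          image: alpine:3.19
          command: [sh, -c, "echo running nightly job"]
```

Because this is just a Kubernetes resource, it can be stored in Git, applied with `kubectl apply -f`, and backed up or restored like any other manifest.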
It's containerized: we want to keep the crons in containers to use our resources more effectively. It's fault tolerant: Argo Workflows can catch up on missed schedules, which is really important to us. So far we have talked about our adventure, and now we can jump right into the architectural overview. This overview shows how we make this kind of platform delivery possible. I will try to explain it in two parts: one for the installation and backup processes, the other for RBAC and the workflow manifests. Our team maintains our installation and backup repositories through pipelines. We back up the configs every night; the configs are all the workflow kinds, namespaces, and RBAC configurations. All of them can be up and running again in just minutes. Before diving into details, I would like to mention that the Kubernetes-native nature of Argo Workflows takes most of the scalability and disaster recovery concerns away from us: we can just export and apply YAMLs instantly. Besides that, we separate team resources by Kubernetes namespaces. Argo Workflows supports this with SSO RBAC namespace delegation, which comes with version 3.3 and is currently in beta. This feature helps us define RBAC for each team's namespace, and with it, Kubernetes namespaces help us prevent cross-team modifications on workflows. Here you can see we use Argo Workflows closely with Argo CD, which helps us manage our manifests as infrastructure as code. Let's move to the next slide, where we will explore the RBAC mechanism and the integration of teams more deeply. Let's dive into the installation and backup processes. We maintain the infra through two repositories. One is for installation, where we can also simply keep up with the latest versions; there, we provide the Dex config in the installation repository to reuse the Argo CD Dex server for SSO. The other one is the backup repository.
It is just another cron that backs up the whole workflow cluster every day. In case of emergency, we can move Argo Workflows to any data center and cluster in just minutes. We have a convention of using Kustomize with Argo CD: we keep the RBAC configuration and workflows as Kustomize manifests and apply them through Argo CD to the Argo Workflows cluster. As we mentioned in the earlier steps, we separate teams by Kubernetes namespaces, and access is granted to all team members over LDAP groups. Since the Argo CD Dex server is already integrated with our LDAP server, we can reach LDAP groups from Argo Workflows; for that we just needed to integrate the two Argo products. We actually have three service accounts per namespace: service accounts with RBAC annotations acting as roles. In terms of precedence, the highest one is admin and the lowest one is default; the default one is read-only, which we use just for previews. So how do we integrate teams? We just add a namespace and RBAC to our manifests, and then Argo CD takes the job from there. On this slide is an example of how we add the RBAC rule annotation for a group: we just modify this via Kustomize and patch our base service accounts. Teams onboard themselves via our integration guidelines, so they can move their jobs by following the documentation. That's how we empower Argo Workflows with smooth authorization and platform delivery. And here you can see our roadmap for the Argo Workflows platform inside Trendyol. We want to distribute this platform to most of our development teams. Besides that, as we mentioned, we have five different data centers, so another focus will be multi-DC stability, distribution, and job networking. By iterating on all this, we will continue to follow the new features of Argo Workflows, and we appreciate all contributions from the community. That's all we have for today. Thanks for having us here. I hope you enjoyed the session. If you have further questions about our approach, please drop your questions.
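The per-namespace service accounts with RBAC rule annotations described above can be sketched roughly like this, using the SSO RBAC annotations from Argo Workflows. The namespace, account name, and group name are hypothetical placeholders, not our real configuration:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a-admin            # hypothetical admin role for one team
  namespace: team-a             # the team's own namespace
  annotations:
    # grant this service account to users whose SSO groups claim
    # contains the team's LDAP group
    workflows.argoproj.io/rbac-rule: "'team-a-developers' in groups"
    # when several rules match, the highest precedence wins;
    # the read-only default account would carry a lower value
    workflows.argoproj.io/rbac-rule-precedence: "2"
```

In a setup like ours, a Kustomize overlay per team would patch the group name into a shared base, so onboarding a new team is just adding a namespace and one small patch.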