Hi everyone. Today we're going to take a deep dive into the architecture of Argo Workflows and how it all runs on Kubernetes. I'm Becky Pauley and I work for Jetstack, and my co-speaker works for Pipekit; we'll tell you a little more about our companies at the end.

So, what is Argo Workflows? Argo Workflows is a workflow engine for orchestrating jobs on Kubernetes. It is container native, so each step in a workflow runs inside its own container. We may have a complex pipeline where we need to manage multiple dependencies between different tasks, and Argo does this using a DAG, a directed acyclic graph. This flexibility means that Argo Workflows is great for running compute-intensive jobs, things like machine learning and data processing, in an efficient way.

So how do we get Argo Workflows up and running in a cluster? If you want to see everything that goes into an installation, we recommend you look at the charts in the Argo Helm repository. Here you have our simplified version, and while some of these are typical resources we would expect to see in any installation, so RBAC permissions, service accounts, there are some resources that are worth looking at a little bit more closely here. These include our two deployments, Argo Server and Workflow Controller, as well as eight custom resource definitions, or CRDs for short. Just so we are on the same page, custom resource definitions are what allow us to extend the Kubernetes API to create custom resource types. For example, I can create a custom resource definition for a workflow, and this allows me to create and interact with my new Workflow custom resource just like I would a pod or a deployment.
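To make that concrete, here's a minimal Go sketch, not Argo's own code, that uses client-go's dynamic client to list Workflow custom resources exactly the way we might list pods; the `argo` namespace is just an assumed install namespace.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the same way kubectl does.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Once the Workflow CRD is installed, workflows are just another
	// group/version/resource on the Kubernetes API, like pods or deployments.
	workflows := schema.GroupVersionResource{
		Group:    "argoproj.io",
		Version:  "v1alpha1",
		Resource: "workflows",
	}

	list, err := client.Resource(workflows).Namespace("argo").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, wf := range list.Items {
		fmt.Println(wf.GetName())
	}
}
```

The only Argo-specific detail here is the group/version/resource triple; everything else is stock Kubernetes client machinery, which is exactly the point of CRDs.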
Now, you'll be pleased to hear we are not going to spend the next 20 minutes just talking about CRDs, but a brief overview is worth it so that we've got a little bit of a sense of the custom resources that exist within Argo Workflows.

Some of these resources handle the creation and management of workflows, and there are four CRDs that we could broadly place into this category. These include the Workflow CRD, which we've already mentioned, and the CronWorkflow CRD, which, unsurprisingly, is for running our workflows on a schedule. We also have the WorkflowTemplate and ClusterWorkflowTemplate CRDs, which we can think of as allowing us to define a library of reusable workflows for use within either a namespace or a cluster. The remaining CRDs handle the exchange of data between the main workflow controller and the Argo exec agent, storing the output of workflow tasks, automated garbage collection of workflow artifacts, and workflow event bindings, which allow us to specify events that can trigger our workflows.

So we have some context on the custom resources defined by Argo Workflows, but now that these resources exist in our cluster, what we want to know is how these workflows actually happen. How are they orchestrated? This is where Workflow Controller comes in.

Workflow Controller is the main component of Argo Workflows, and it does all the heavy lifting. We previously mentioned CRDs, and Workflow Controller's job is to react to those custom resources. It manages workflows and everything workflow-related, so things like cron workflows, workflow templates and others. Workflow Controller implements the operator pattern, so basically, Workflow Controller is a Kubernetes operator. The Kubernetes operator pattern defines a way of configuring, deploying and orchestrating instances of a stateful application on behalf of a human actor. When you need to manage a complex application, you need domain knowledge about that application, so an operator can help by automating all the moving parts. The user can then focus on other things. Since Workflow Controller is an operator, it mostly talks to Kubernetes. It can be very chatty, so be careful.

You can run Workflow Controller in a high-availability mode, or you can have namespaced deployments. Here's an example of how a high-availability deployment works: one instance is the leader and the others are in standby mode. When the leader goes down, an election happens and a new leader is selected; this new leader continues the work. You can also deploy a Workflow Controller per namespace. In this case, each Workflow Controller is responsible for the workflows in its own namespace, so the pressure on any single Workflow Controller is decreased.

If you're familiar with the operator pattern, then the inner workings of Workflow Controller will be familiar to you. If not, here's a brief overview of how Workflow Controller, and the operator pattern in general, works. Workflow Controller uses the Kubernetes API to watch for our Workflow custom resources and act on them. When you create a workflow, you actually create a Workflow custom resource. This custom resource would exist even without Workflow Controller; it would exist in Kubernetes, in etcd, but there would be nothing to make the workflow happen. So Workflow Controller detects that there is a Workflow custom resource and acts upon that information. This happens using informers. An informer is an abstraction that allows the controller to avoid constantly polling the Kubernetes API and to only act on change, which lowers the pressure on the Kubernetes API. In the end, the Workflow custom resource is added to the workflow queue, and workers process your workflow. Workers are the components doing the actual job: they inspect the Workflow custom resource and take the appropriate action, things like creating pods, pod reconciliation and so on.
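If you want to see the shape of what we've just described, here's a heavily simplified Go sketch of the informer-plus-workqueue pattern. To be clear, this is not Workflow Controller's actual code, just the general operator shape: watch Argo's Workflow custom resources, queue them on change, and let a worker loop pick them up.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

	// Watch Argo's Workflow custom resources via a shared informer: the
	// informer keeps a local cache in sync and only fires on change, so we
	// never have to poll the Kubernetes API.
	gvr := schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "workflows"}
	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 10*time.Minute)
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key) // hand the workflow over to the workers
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// A worker pops workflows off the queue and reconciles them; the real
	// controller would create the pods each step needs at this point.
	for {
		key, shutdown := queue.Get()
		if shutdown {
			return
		}
		fmt.Println("reconciling workflow", key)
		queue.Done(key)
	}
}
```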
So you can interact with Workflow Controller using kubectl, but Argo Workflows has another solution to make your life a little bit easier, and that component is Argo Server.

Argo Server is mostly used for communication with the outside world, and it's typically used by users who are not that familiar with Kubernetes. It provides some other functionality too, which we are going to mention later. Argo Server provides an API that people can use. For example, say you do not like kubectl but you still want to manage workflows from the terminal; you can use the Argo CLI. But if you remember how Workflow Controller works, you know that it does not expose anything an end user can interact with. This is where Argo Server comes in: without Argo Server and its API layer, you cannot use the Argo CLI. Or maybe you're not that comfortable with the terminal but you still like workflows; Argo Server also provides a web UI that you can use. From all this, you can see that Argo Server is not a mission-critical component, but it has some important functionality and it is highly useful to have.

Argo Server is also responsible for authentication. You can use standard Kubernetes authentication, or you can use single sign-on using Dex. Dex is a tool responsible for authentication, and it supports a wide range of identity providers and protocols. Argo Server also has a big role in features like the workflow archive, offloading large workflows, and events. For example, you can use kubectl to list your workflows, but you're only going to get the workflows that exist inside Kubernetes. You cannot list the workflows inside your archive, because they exist in an external database, not in Kubernetes, so you need to use the CLI, the UI or the Argo Server API to get them. You can also trigger a workflow using a workflow event: you just need to create a WorkflowEventBinding custom resource with appropriate selectors, and after that call the Argo Server events endpoint.

So here's a simplified version of how a workflow is created. In the beginning, you submit the workflow using the CLI, UI or API. Argo Server gets your request and creates a Workflow custom resource using the Kubernetes API. Workflow Controller detects that there is a custom resource that requires action, starts examining that custom resource, creates all the pods that are needed, and monitors the workflow status. So in the previous couple of slides we saw two Argo Workflows components: Workflow Controller, its role and how it works, and Argo Server, its functionality and how end users can use it. And at the end, you saw the entire flow from the moment a user creates a workflow to the moment the first pod is created in the cluster.
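Before we move on, here's a rough Go sketch of that kind of API interaction: calling the events endpoint we just mentioned to trigger a workflow via its event binding. The port, namespace, token and payload are all placeholders; the path follows the `/api/v1/events/{namespace}/{discriminator}` shape from the Argo docs, with an empty discriminator.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Placeholder bearer token; in practice this comes from a service account
// or SSO, since Argo Server handles authentication.
var apiToken = "REPLACE_ME"

func main() {
	// The event payload; a WorkflowEventBinding's selector decides whether
	// this event should trigger a workflow.
	payload := []byte(`{"message": "hello events"}`)

	// Argo Server listens on 2746 by default; "argo" is a placeholder
	// namespace and the trailing slash is an empty discriminator.
	req, err := http.NewRequest(http.MethodPost,
		"https://localhost:2746/api/v1/events/argo/",
		bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+apiToken)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```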
So let's take a closer look at the Kubernetes pod, and in particular at how an understanding of the humble Kubernetes pod can be so important, so useful, in understanding our workflows. This is because, regardless of how simple or complex our workflow is, whether it is a single step or, more likely, many, a step in our workflow is almost always equal to one running pod. Each step or task in our workflow runs as its own pod. Of course, there are a few exceptions: if we're using a suspend template or a workflow-of-workflows pattern, this might look slightly different, but almost all the time, a step or a task in our workflow is equal to one running pod. And you will hear me repeat this a few more times.

Why is it so important? Well, when we understand that a step is simply a pod running in Kubernetes, it helps us approach our questions differently. Even if I have very little knowledge of Argo Workflows, perhaps I have a good understanding of Kubernetes. So we can start with the assumption that if I can do a thing in a normal Kubernetes pod, I can do it in a workflow step. And sometimes, as soon as we shift our thinking like this, we already have our answer. Even when we're still not quite sure, this concept of a pod still helps us. For example, instead of asking how to use Argo Workflows to build a Docker image, and hoping that someone else asked that exact same question on Stack Overflow before us, we can ask how to build a Docker image in Kubernetes, because that is pretty well documented.

But there is more. If we want to understand resource utilisation, we reason about it like we would for any other Kubernetes pod: setting resource requests and limits, thinking about quality of service classes and node sizing. For our pods to perform actions, we need to give them the correct RBAC permissions. We can mount volumes into the containers of each step, and gather metrics about each step, because it is a pod.

Now, there is another side to this, and that is that each step has its own workspace. So we do need to think about saving off and fetching artifacts and parameters between our steps, which the wait and the init containers can help us with. And this might be a shift in thinking if we're used to a model where we've used one big mega-pod to run lots of steps in our workflow. I've had some people ask, well, how do I actually do this? Do you have some examples? The Argo Workflows GitHub repo has some great examples, and Pipekit have some blogs about how to do this, either using MinIO or your cloud object storage provider of choice. And if you have very short-lived steps, perhaps these don't make sense to run by themselves. Could you combine them? Could you use a container set template instead?

So, this is nearly the last time you will hear me say this: we have established that a step is a pod and a pod is a step. But of course, that leaves us with one final question: what is actually going on inside the pod for each step in my workflow? For the sake of time I'll simplify slightly, but in general, if we zoom into the pod running our workflow step, we will see three containers. First, we have an init container, and this does what we would expect it to: it runs first, when our pod starts, before the main container, fetching artifacts and parameters and making them available. Once we have those dependencies, our main container can run, and this is what actually executes the desired actions for our step. Depending on the step template that we use, this works slightly differently, so let's look at some examples. If we look at the step templates available to us, perhaps how this main container works is more obvious in some cases than in others. When using the container template, the script template and the container set template, we define the main container, or containers, where our step should run in a way that looks a lot like a pod spec.
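Here's a small illustration of that using Argo's Go API types: the container template is literally a `corev1.Container`, so image, command, requests and limits read exactly like a pod spec. The image, command and resource values below are just examples.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/argoproj/argo-workflows/v3/pkg/apis/workflow/v1alpha1"
)

func main() {
	wf := v1alpha1.Workflow{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-spec-demo-"},
		Spec: v1alpha1.WorkflowSpec{
			Entrypoint: "main",
			Templates: []v1alpha1.Template{{
				Name: "main",
				// This step runs as its own pod, so requests and limits
				// shape its QoS class and scheduling like any other pod.
				Container: &corev1.Container{
					Image:   "python:3.12",
					Command: []string{"python", "-c", "print('hello')"},
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{
							corev1.ResourceCPU:    resource.MustParse("500m"),
							corev1.ResourceMemory: resource.MustParse("256Mi"),
						},
						Limits: corev1.ResourceList{
							corev1.ResourceMemory: resource.MustParse("256Mi"),
						},
					},
				},
			}},
		},
	}
	fmt.Println(wf.GenerateName)
}
```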
What's interesting here, though, and you may never need to know this, is that there is still some level of abstraction. The argoexec binary is mounted as a volume into our main container, and it's this argoexec utility that actually serves as the main command for our container, calling the command we configured as a subprocess. But what about templates where we don't explicitly define a container in our workflow, an HTTP template or a resource template, for example? These still run inside a pod, but how it works might be slightly less obvious. In these cases, although we don't explicitly define a container for our step to run in, Argo handles this for us: it creates its own main container inside a new pod for our step, using the argoexec image. So for these step templates, a step is still a pod, but the container used is abstracted away from us in the template definition. Finally, the wait container. So we've had an init container and our main container, and the wait container does exactly what it says on the tin: it waits, and performs the tasks that are needed for cleanup, for example saving off parameters and artifacts to object storage.

And there you have it: a deep dive into the architecture of Argo Workflows in less than 20 minutes. The CRDs that allow us to create Argo Workflows resources, the role of Workflow Controller in orchestrating our workflows, Argo Server in interacting with end users, and the importance of the Kubernetes pod. I'm sure that some of us will want to dig around a little bit deeper and try some of these things out for ourselves, so we've got some recommendations about places you could get started. Killercoda have a really fantastic introductory course. It is very hands-on, so we'd really recommend having a look there if you haven't used it before. You can also have a look at the docs and the Argo Workflows GitHub repo, particularly the examples directory; there are things you can take from there and try in your own cluster, really, really good. If you want to know the sorts of things that we're interested in trying out, then have a look at the Pipekit and Venafi Jetstack Consult blogs. Pipekit also hold a weekly office hours where you can come along and ask questions, so definitely pop along there too.

Okay, now we'll tell you a little bit about our companies. So, I work for Pipekit, and Pipekit helps teams scale their workflows, from creating a proof of concept to fully scalable workflows. We also provide a control plane for Argo Workflows. We can advise you and help you to better understand what is happening inside your cluster and your workflows. We are part of the Argo community and active maintainers of various projects like Argo Workflows. You can visit us at booth E34.

And I work for Venafi Jetstack Consult. We offer cloud native consultancy, training and strategic advisory services, bringing deep technical expertise, breadth of knowledge and a unique professional ethos to support your cloud native platform operations. If you want to talk more about what we do, or you're just curious about what we've been building, or you want to tell us what you've been building, then you can find us either in our Jetstack t-shirts or at the Venafi booth, L8. You can also visit our awesome cert-manager friends at the cert-manager booth in the Project Pavilion. We hope that you found these ideas and this information useful. If you have questions, comments or feedback, we would love to hear from you, and thank you for listening.