Hello everyone. My name is Katie Gamanji, and currently I am a senior field engineer at Apple. I joined this role last year, and in my role I'm trying to bring cloud native and Kubernetes expertise to different teams and products within Apple. As well, I am one of the TOC, or Technical Oversight Committee, members for the CNCF, or Cloud Native Computing Foundation. In this role I join ten other champions within the industry, and we try to provide a perspective and a clear view of how to navigate the landscape. I have many other roles in the community, one of them being an advisory board member for Keptn, which currently is an incubating CNCF project, and I am the creator of the Cloud Native Fundamentals course, which you can find on Udacity. Now, this is a free course, so I'm completely selfless here. However, if you know anyone who would be interested in pursuing a cloud native career, I would definitely recommend this course for them to navigate and understand the fundamentals, but to apply them in production as well. Today, however, I would like to talk about the Bare Metal Chronicles, and more importantly the intertwinement between Cluster API, Tinkerbell, and GitOps. To do so, I would like to firstly introduce Cluster API and how it provides one set of standards to deploy your infrastructure to any cloud provider. Next, I'm going to focus on bare metal provisioning, and here is where I'm going to introduce Tinkerbell; but more importantly, I'm going to focus on the coalition between Tinkerbell and Cluster API, which results in CAPT, the Cluster API Provider for Tinkerbell. And lastly, just to sprinkle something extra on top, I'm going to introduce some GitOps into the architecture that I'm going to show. Pretty much, when we deal with cluster provisioning, we're never going to have just one cluster; we have to manage multiple clusters, and we need to introduce automation to ensure a sustainable deployment of our infrastructure. Now, by a show of hands, how many of you are familiar with Cluster API? Just to make sure that I introduce the fundamentals right. Cool, that's actually a very good show of hands. How many of you are familiar with Tinkerbell — and not the character? Okay, some of you, good. And how many of you are using GitOps, or have heard about GitOps at the moment? Okay, that's good too. And another question, which is going to be very relevant: how many of you have a need for bare metal provisioning, or are deploying your infrastructure to bare metal? Okay, that's actually a very good show of hands. For everyone else, I would hope that you'll be inspired to use Tinkerbell, or to actually explore cloud native bare metal provisioning. And there is a reason why I'm giving this talk at this moment. If you look at the ecosystem, we have multiple tools that are crossing the chasm at the moment, and with crossing the chasm we get more and more late adopters. The important characteristic of these late adopters is that many of them are in regulated industries, and they have a need to deploy their infrastructure on bare metal. Hence, they look into provisioning Kubernetes on bare metal. Now, the thing is, at the beginning the picture was very different. If you look nine years back, we had a vast array of container orchestrators, such as Docker Swarm, Apache Mesos, CoreOS Fleet, and Kubernetes. All of them provided a viable solution to run containers at scale. However, Kubernetes would lead in defining the principles of how to run these containerized workloads.
Nowadays, Kubernetes is known for its portability and adaptability, but more importantly for its approach towards declarative configuration and automation. And we can see this in numbers as well. Based on the VMware Tanzu State of Kubernetes report, which was released this year, 99% of organizations see a clear benefit in using Kubernetes: the first reason being better usage of their CPU and memory, and the second being easier application management, especially throughout the upgrade process. A metric I would like to highlight, which is very relevant for this talk, is that 52% of organizations still have a need for bare metal. Now, very importantly, this number is actually declining: last year it was 55%, this year it is 52%. So we see a very slow declining trend overall. However, this does not dismiss the fact that more than half of these organizations still need to deploy their infrastructure on-prem. Another metric I would like to highlight is that more than 80% of organizations manage more than six clusters. You can think about QA, staging, and production — that's going to be just three of them. However, this multiplies massively, especially if you have a multi-region or multi-cluster strategy. So it's very important for us to introduce automation when it comes to the deployment of our infrastructure. Now, the global community around Kubernetes has been extremely beneficial for it, because over time multiple tools were built around it to extend its functionalities. We're talking about integrating different runtimes, storage, observability, and metrics to satisfy your software needs. And this created what today we know as the cloud native landscape, which resides under the CNCF umbrella. This is the landscape that we as the TOC try to provide a technical perspective and vision for. Now, at this stage we know that Kubernetes is a very pluggable and extensible system. However, at the same time, with all of this tooling being built around it, we had multiple bootstrap tools being developed as well. We're talking about kubeadm, the Tectonic Installer if you go back to the CoreOS days, Kubespray, kops, and many more. However, if you look at all of these tools, it's very difficult to find a common denominator. What that actually means is that if I'm using one tool to deploy my infrastructure to Azure, it's going to be pretty much impossible to use the same tool to deploy my infrastructure to GCP, for example. Usually you'll have to introduce a new tool, and this is not sustainable, especially if you pursue a multi-cloud strategy. This is why we had Cluster API entering the space and solving this problem. Cluster API is pretty much a set of declarative APIs for cluster creation, management, and deletion across multiple cloud providers. So it provides this one unique interface, or one set of standards, that you can use to deploy your infrastructure anywhere. Now, Cluster API is developed under SIG Cluster Lifecycle and had its initial release in April 2019. Since then it has had multiple releases, and currently it is at the v1beta1 API version, which was a very big milestone for the team this year. I mentioned that it integrates with multiple cloud providers, and currently there are 16 of them actively collaborating with Cluster API. Of course, we're going to have support for major cloud providers such as GCP, AWS, and Azure. However, more importantly, we're going to have support for Chinese providers such as Alibaba Cloud, Tencent Cloud, and Baidu Cloud.
And if you have had to deploy your infrastructure to China, you would know that it's a very challenging task, because usually you have to use the set of tooling available in that region to ensure the availability of your clusters. Now, when it comes to provisioning with Cluster API, at least you have the same manner of bootstrapping your clusters across the Great Firewall. And lately, we have new initiatives to use Cluster API to deploy on bare metal — an initiative led by Metal³, Packet, and of course Tinkerbell as well. Now, I've seen that some of you are new to Cluster API, so I'm just going to give a very quick introduction to make sure that everyone can follow the story. Suppose that you would like to deploy a couple of clusters in different regions or different cloud providers. The first thing we need is a Kubernetes cluster — a management cluster — and this is something that I call kubeception: you need a Kubernetes cluster to deploy more of them. For testing purposes, you can use kind to deploy a management cluster; kind is just Kubernetes running inside Docker. If you want to use Cluster API in production, it's recommended to use a fully fledged cluster, and this is because it comes with a more sophisticated failover mechanism. Now, once you have a management cluster, you'll require the dependencies installed on top of it, and these are usually the controllers. There are three sets of controllers we need to take care of: the Cluster API CRDs, or custom resource definitions, the infrastructure provider, and the bootstrap provider. The first controller is going to be for the Cluster API CRDs: pretty much, we need a controller to make sure that we can create and reconcile the new CRDs introduced by Cluster API, and those are five new custom resource definitions that I'm going to introduce later on. The second set of controllers is going to be the bootstrap provider, and this is the component that translates the YAML configuration into a cloud-init script and makes sure to attach an instance from a cloud provider to the cluster as a node. Currently this capability is provided by kubeadm, Talos, and quite recently AWS EKS as well. And the third component we need to take care of is the infrastructure provider, and this is the component that interacts with the provider APIs and actually creates resources such as instances, VPCs, subnets, security groups, and many more. Now, if you'd like to provision clusters in multiple cloud providers, you will need a controller for each of them. So if you want to deploy to GCP, you'll need the GCP infrastructure provider controller, and if you want to use Tinkerbell, you'll need that provider as well. The relationship is pretty much one-to-many here. Once we have our dependencies, or all of our controllers, up and running, we'll be able to deploy our target clusters. The target clusters are the ones we deliver to our application teams to install their services on top of, and these are going to be the clusters that your customers will interact with while consuming your services as well. Now, a very important concept that Cluster API brings into the picture is cluster as a resource. You'll be able to define your infrastructure's scope using YAML manifests, and this is done using the five custom resource definitions that I'm going to mention now. The first resource you need to take care of is going to be the Cluster resource, and this pretty much takes care of the major networking components for your cluster.
You can specify the subnets for your pods and services, or you can specify the DNS suffix. By default, every single cluster has a control plane resource associated with it, and the control plane resource pretty much programmatically manages a set of machines that have the control plane label, or the control plane components, installed on top of them. A Machine here is very similar to an instance: you can specify the version of Kubernetes, the instance type, and the networking and security controls that you'd like to attach to your control plane. Now, this is the vanilla setup for Cluster API; by default, you just have a couple of machines that have the control plane label. If you want to deploy any workloads, you'll require a data plane, and this in Cluster API is managed through a MachineDeployment. A MachineDeployment — for anyone who has worked with Kubernetes, and I hope you have — is very similar to a Deployment: it will pretty much make sure to roll out different strategies between MachineSet resources. A MachineSet, very similar to a ReplicaSet, will ensure that we have a given number of Machine resources up and running at all times. And a Machine here, again, is an instance: we can specify the version of Kubernetes, the instance type, and so forth. However, these particular nodes are going to carry the worker label. Now, I mentioned that Cluster API introduces this cluster-as-a-resource concept. So we can use all of these custom resource definitions to say that we want a cluster with ten nodes, three of them being in the control plane and seven of them being in the data plane. Here is where we say where we want to deploy this infrastructure — for example, to Azure — and where we specify all of the networking and security controls that we would like to have for our cluster. So this is going to be your infrastructure as code. Instead of using, for example, Terraform or Ansible, it's going to be YAML — we are in the Kubernetes world, so it's going to be YAML. Now, just to make this a bit more digestible and to see how you can use this infrastructure as code, I have showcased here an example of a Cluster resource for AWS. We have a Cluster resource with the name demo-cluster, and in the spec section we specify a /16 subnet for our pods. Towards the end, you can see that we have a control plane reference — this is going to be there by default. However, I'd like to draw your attention to the infrastructure reference. Here is where we say that we want this cluster to be deployed to AWS, and by default this is going to pick up all of the parameters that we have configured for AWS. In this case, we say that our cluster should be deployed in eu-central-1, and we want to attach an SSH key with the name default. Now, very importantly, if we want to deploy the same cluster to GCP, these are the only changes required. The Cluster resource pretty much stays the same; however, you just need to change your infrastructure reference. Now it's going to be a GCPCluster, and this is going to pull all of the configuration we defined for GCP. The region naming convention is a bit different, so we deploy to europe-west3. We have the concept of a project within GCP, so we attach our cluster to a project called capi. And if you'd like to choose a network, you can do so by specifying its full name — in this case, default-capi. I've reconstructed a rough sketch of these two variants below.
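To ground this, here is a minimal sketch of those two variants, reconstructed from the slide. The resource kinds and the region, key, project, and network values match what I just described; treat the exact API versions and any omitted fields (such as the control plane reference) as indicative, since they vary between provider releases.

```yaml
# Cluster with a /16 pod subnet, pointing at an AWS infrastructure reference
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: demo-cluster
---
# The AWS-specific parameters: region and SSH key
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: demo-cluster
spec:
  region: eu-central-1
  sshKeyName: default
---
# The same cluster on GCP: only the infrastructure reference and its
# provider-specific parameters change
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPCluster
metadata:
  name: demo-cluster
spec:
  region: europe-west3
  project: capi
  network:
    name: default-capi
```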
Now, if you want to deploy a cluster using Tinkerbell, these are going to be the changes required. Again, the point is that we already have the set of standards and interfaces that we can use to deploy our infrastructure anywhere in a similar manner. Here for Tinkerbell, for example, I have chosen to parameterize the image repository, which can be a public or private registry from which we pull the images for the operating system installation. Now, this is the preamble for Cluster API. So far, we can deploy our infrastructure to any cloud provider, but more importantly, we can do so in a unified manner: we have standards that we can reuse pretty much across all of these vendors. However, what happens if, as an organization, you do not want to deploy infrastructure to a cloud provider? What happens if you want to provision your infrastructure fully on bare metal? Well, in this case, Tinkerbell enters the picture. Tinkerbell is an engine for provisioning bare metal anywhere — not just Kubernetes; Kubernetes is just a subset of that, and we're going to see how it actually works. Tinkerbell was developed by the Equinix Metal team in 2019, and it was donated to the CNCF as a sandbox project in 2020. Now, there's a thing I would like to highlight about sandbox projects: it means the project is still greenfield, and it still requires a lot of diversification when it comes to maintainership and contributions. As such, if you have any use for bare metal, or if you would like to improve the open-source ecosystem around bare metal, I definitely recommend you go to Tinkerbell and adopt it, contribute, and provide feedback. Again, contribution is not just code; feedback is essential as well. And the mission that Tinkerbell has had all the way through is to provision bare metal across data centers and edge devices — but more importantly, to do so automatically and to simplify the steps required. Now, let's look at how Tinkerbell works. If you'd like to provision bare metal using Tinkerbell, there are three sets of configuration that you need to take care of: hardware, template, and workflow. The hardware pretty much specifies your inventory. You need to make the Tink server aware of what machines or servers you have available. So, for example, if you have ten Raspberry Pi machines, you need to enumerate all of them, and each can be uniquely identified by its MAC and IP address. The next thing we're going to need is a template, and a template is just a set of actions that you would like to perform on top of your machine. Think about installing an operating system, any dependencies, middleware, or any applications, so that by the end you have a server in the state that you wanted. And a workflow pretty much attaches hardware to a template. This is where you actually orchestrate things. So if you have ten Raspberry Pi machines, you can say that on five of them I want to install Windows, for example, and its respective dependencies, and on the other five I want to install macOS and its respective dependencies. Here is where you actually orchestrate how you want to deploy and manage your bare metal. Once we have all three sets of configuration, we'll be able to use the Tink CLI to send them to the Tink server. The Tink server should be up and running somewhere within your environment, or locally if you're doing a demo. And by the end of it, Tinkerbell is going to take the hardware and the available actions, and one by one, it's going to perform all of the actions specified.
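To make the template idea more concrete, here is a hedged sketch of what one can look like. The overall shape — a named list of tasks and actions with timeouts and environment variables — follows Tinkerbell's template format, but the specific action image, URLs, and device names here are placeholders rather than an exact recipe.

```yaml
version: "0.1"
name: ubuntu-provisioning
global_timeout: 1800
tasks:
  - name: os-installation
    worker: "{{.device_1}}"  # bound to a Hardware entry (identified by MAC/IP)
    actions:
      - name: stream-image-to-disk
        # illustrative action image; Tinkerbell provides reusable actions
        # such as image2disk for writing an OS image to a block device
        image: quay.io/tinkerbell/actions/image2disk:latest
        timeout: 600
        environment:
          DEST_DISK: /dev/sda                     # target disk on the machine
          IMG_URL: http://10.0.0.2/ubuntu.raw.gz  # where the OS image is served from
          COMPRESSED: "true"
```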
So by the end, you should have a server or a VM in the state that you wanted. Now, as I mentioned, with Tinkerbell you have bare metal provisioning anywhere, not just Kubernetes. So the question is the following: what happens if I want to deploy a cluster on bare metal? Well, here is where you bring Cluster API and Tinkerbell together, and this is going to be crowned by CAPT, the Cluster API Provider for Tinkerbell. So let's see how we can deploy our Kubernetes cluster using Tinkerbell and Cluster API. Here I'm going to look at three sets of configuration: one from the Tinkerbell side, one from the management cluster, and the result is going to be our target cluster. From the Tinkerbell side, what we need is our set of hardware — we still need our inventory, for example those ten Raspberry Pi machines. The next thing we're going to need is the Tinkerbell server as well; we need this to be up and running, aware of our hardware, and able to connect to our servers. Now, going back to the management cluster — and this is going to be a recap of Cluster API — on the management cluster we need those three dependencies, the three controller managers, on top of it. So we're going to have the Cluster API CRDs and our bootstrap provider, but our infrastructure provider is going to be Tinkerbell: CAPT, the Cluster API Provider for Tinkerbell. As well, on the management cluster, we need our infrastructure as code — the definition of our infrastructure. For example, we want a cluster with seven nodes, let's say three of them in the control plane and four of them in the data plane. In addition to all of that, however, we need to reference the hardware as well, because if we have ten Raspberry Pi machines that the Tink server is aware of, you may want only seven of them to be part of the Kubernetes cluster. So you need to specify which of these machines are available for Cluster API to bootstrap your cluster on. Now, this is the preamble we need on the management cluster. The best thing about Tinkerbell, or the CAPT provider, is that it comes with predefined sets of templates and workflows, so you don't need to define the actions to bootstrap Kubernetes yourself — these are already available. For example, if you want a new machine, all of the actions are already defined: install a Linux operating system, install the kubelet, and install the SSH keys and all of the certificates on top of it, so that by the end the machine is attached to the cluster as a node. And this can be done repetitively, up to the point where it satisfies the YAML, the infrastructure as code, that you defined. Cool, let's take a breather here. So far, we can see that we can deploy bare metal Kubernetes anywhere. However, I would like to draw your attention back to the beginning of the presentation, where I mentioned that more than 80% of companies are managing more than six clusters. There is a very steep learning curve to deploying Kubernetes once, but once you do it, you can repeat it throughout multiple environments. More importantly, ideally you will not have separate, hand-crafted configurations for your clusters; you would introduce some templating and automation into all of this process. And here is where we can actually use GitOps as well: since Cluster API introduces YAML manifests to define your infrastructure as code, we can use GitOps.
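Before we get into GitOps, here is a rough sketch of what the Tinkerbell flavour of that infrastructure reference can look like with CAPT. I'm hedging here: the resource kinds exist in CAPT, but the apiVersion and field names (imageLookupBaseRegistry, hardwareAffinity, the role label) are indicative and may differ between releases.

```yaml
# Tinkerbell-flavoured infrastructure reference for the Cluster resource
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: TinkerbellCluster
metadata:
  name: demo-cluster
spec:
  # illustrative: the registry the node OS images are pulled from,
  # which can be a public or a private one
  imageLookupBaseRegistry: ghcr.io/tinkerbell/cluster-api-provider-tinkerbell
---
# Template for machines, restricting which Hardware entries may back them
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: TinkerbellMachineTemplate
metadata:
  name: demo-cluster-control-plane
spec:
  template:
    spec:
      hardwareAffinity:
        required:
          - labelSelector:
              matchLabels:
                tinkerbell.org/role: control-plane  # illustrative label
```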
Now, there was a very good show of hands around GitOps, but I would still like to introduce it very quickly as a principle. GitOps pretty much refers to using Git repositories to define the desired state of your applications — and, in our case, our infrastructure. What it actually means is that, by default, you're going to have a PR-based rollout: the delta between a local environment and production is just one PR away. Another important thing about GitOps is that we have automatic reconciliation. That means we have a tool watching our repository, and if new commits are identified, they are going to be picked up and applied to the cluster straight away. But more importantly, we're going to have a versioned state of our cluster: we have all of these historical data points that we can refer to very easily. So, for example, if you are in an incident, you can very easily roll back to a green, known state using just a couple of Git commands. Now, looking at the cloud native ecosystem, the GitOps principle is very well covered by tools such as Flux and Argo CD, both of which are currently incubating CNCF projects. However, both of them are applying for graduation as well, so hopefully very soon we're going to see them with a new status, showcasing their maturity in adoption and feature development. However, let's see where exactly we can introduce GitOps in our infrastructure provisioning. For now, I would like to abstract away Tinkerbell, because Cluster API standardizes the deployment in such a way that the provider doesn't change the picture. So I'm just going to focus on how we can use Cluster API and GitOps, and where exactly automation can be introduced in this cycle. Going back to our management cluster, what we need are our controllers — those three controllers. In addition to that, we're going to have our infrastructure as code: YAML manifests that we can store in a Git repository. Now, if we have a Git repository, we can use Argo CD or another GitOps tool — in this case, I've chosen to install Argo CD on the management cluster. So now, any changes that we introduce to our Git repository, any new commits, are going to be identified by Argo CD and applied straight away, and all of the results are going to materialize in our target cluster. Now, the next component is very much optional. It is part of the solution, but again, you're not going to manage just one cluster — you're going to manage multiple clusters — so it's very important to introduce some templating into your manifests. In this case, I've chosen to use Helm as an example. Here I'm using Helm to parameterize three variables, just for demo purposes: the version of Kubernetes, which is going to be 1.24, the number of control plane nodes, and the number of worker nodes as well. Now, once I introduce a Helm chart, Argo CD is going to watch the Helm chart for any changes. Once these are committed to the Git repository, Argo CD is going to pick them up and apply them to the management cluster, where the provider is going to pick up all of the changes and apply them to our target clusters. We are going to see this in action — and I think this is the time for a live demo. But before we jump in, I'll sketch the Argo CD side of this below.
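For completeness, this is roughly what an Argo CD Application pointing at such a Helm chart can look like. The Application schema is Argo CD's real one; the repository URL, chart path, and values file name are hypothetical stand-ins for my setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-cluster
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-api-manifests  # hypothetical repo
    targetRevision: main
    path: charts/demo-cluster        # directory containing the Helm chart
    helm:
      valueFiles:
        - values-demo.yaml           # the parameterized inputs shown in the demo
  destination:
    server: https://kubernetes.default.svc  # the management cluster itself
    namespace: default
  # no automated sync policy: the demo uses a manual sync to review changes
```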
Now, going back: what I have in the setup is a management cluster and a target cluster already provisioned. The thing is, I am provisioning this using AWS, just for the ease of it. I'd love to do this on a couple of bare metal servers; however, traveling with a bunch of Raspberry Pis across borders is a bit more challenging. So what I'm going to do is use AWS just to simulate the compute, but more importantly, we'd be able to use the same process with Tinkerbell as well. So what I have here is a management cluster on my local machine, using kind. I have Argo CD installed, and I have the Cluster API controllers already installed and up and running. As well, I have a target cluster already provisioned — since it's AWS, I provisioned it beforehand, mainly because it takes around five minutes to provision the VPC. So I haven't done this fully live, but we are going to scale our cluster, and that is going to be a live demo. Another disclaimer I would like to introduce here is that my computer hard-reset just before the talk, so I hope that everything is going to be up and running. If it's not, I have a recording of it, but hopefully I'm not going to need it. So, can everyone see what's going on? I've made it a bit bigger; however, I'm going to walk through all of the lines again. Can everyone see? Can I get some thumbs up? Okay, amazing. So what I have here are pretty much all of the pods within my management cluster — this is where I installed Cluster API. Now, CAPA here stands for Cluster API Provider AWS. Since I'm using AWS as an example, it's going to be the CAPA provider instead of the CAPT one we mentioned before. The bootstrap provider is going to be kubeadm, and here is where it's installed. And then we have the Cluster API controller to manage all of our CRDs. So we have up and running pretty much the three sets of controllers that we require. As well, I have Argo CD up and running. So that's going to be the management cluster. Can everyone see this one? I'm going to walk through this as well. Awesome. So here, what I've provisioned is a cluster with four nodes: three of them being in the control plane, one of them being in the data plane. On the top side, you can see the management cluster, so you can see our Machine resources — this is a CRD introduced by Cluster API. On the bottom side, you can see the target cluster, the cluster that we have provisioned using Cluster API. As you can see here, we have three control plane machines — they have control-plane in the name — and MD here stands for MachineDeployment, so the MachineDeployment is going to be our data plane. And the thing is, we can see the same number of nodes, with the same roles, within our target cluster: three of them — let me highlight properly — three of them being in the control plane and one of them being in the data plane. Now, all of this infrastructure is managed using a Helm chart and Argo CD, so I would like to showcase my Helm chart. These are pretty much the input files that we're going to use to parameterize our manifests. Very similar to the slides, I've chosen to parameterize the version of Kubernetes and the number of replicas for our control plane and worker nodes. Now, for anyone new to Helm and how it actually reconstructs the manifests, I would like to showcase an example. Here is, for example, the MachineDeployment resource that manages our data plane, and this is where we actually pull the values from the Helm chart for the number of replicas. So, based on the values file that we saw before, there is going to be one replica, and the version of Kubernetes here is going to be 1.24.0.
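To give a flavour of that templating, here is a minimal sketch of the values file and the templated MachineDeployment excerpt. The value names are illustrative rather than the exact ones in my chart, and the MachineDeployment is trimmed — a real one also carries a selector, a bootstrap reference, and an infrastructure template reference:

```yaml
# values-demo.yaml (illustrative value names)
kubernetesVersion: v1.24.0
controlPlaneReplicas: 3
workerReplicas: 1
```

```yaml
# templates/machine-deployment.yaml (excerpt)
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: demo-cluster-md-0
spec:
  clusterName: demo-cluster
  replicas: {{ .Values.workerReplicas }}       # 1 with the values above
  template:
    spec:
      clusterName: demo-cluster
      version: {{ .Values.kubernetesVersion }} # v1.24.0 with the values above
```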
One more thing I'd like to showcase, just to crown the moment and make sure that everything is consistent: this is the Cluster resource that I've shown on the slides for AWS. Here is where I specified the /16 subnet for our pods, and this is our AWS infrastructure reference as well. So all of this is managed using the Helm chart. Now, I have our ecosystem already installed, and we can see — this is going to be very overwhelming — how all of these components, all of these custom resource definitions, provide a visual representation of our cluster. Now, an important thing I would like to do is to introduce changes to our infrastructure, and for that I would actually like to increase the number of data plane nodes. To do so, the only change I need to make is to the Helm chart. So I'm just going to go to the values-demo file, and instead of one replica, I would like to have four instead. Now, since this is Git, I just need to use a Git command to commit our changes and push them to the main branch. So I'm going to use a git... let's see... ooh, oh, okay... it's still... let's see... okay, commit, here it is. And I'm going to put a meaningful demo message for our commit. Hopefully my internet is good enough to make this happen. Yes, and then I'm going to use a git push — I'm just going to push straight to the main branch. Did this happen? Awesome. Now, if you did not use, for example, Argo CD, you'd have to apply these changes manually — you'd have to use a kubectl apply or kubectl patch command. However, with Argo CD, we don't need to do that; it's going to take care of it. What we need to do is go to the Argo CD view. Let me make this a bit bigger. Usually, Argo CD is going to watch the repository and refresh every couple of minutes; because we don't have that much time, we just hit a refresh, which means it's going to look at the repository and see if there are any new commits. We can already see that we are out of sync, and we can even verify that by clicking on the commit tag. We can see our changes: we modified our infrastructure from one worker node to four. Now, with Argo CD we can have automatic reconciliation, so all of these changes could be automatically applied to our infrastructure once they are identified. However, I've chosen a manual strategy, just to review these changes and to have some impact for the demo as well. So all I need to do is hit the sync button and synchronize. And you can see the Machine resources within Argo CD — which is admittedly a very crowded visual representation. But we can go back to our terminal, and we can see that on our management cluster we already have three new Machine resources spinning up. By the end of it, we should see three new machines that are part of our target cluster as well. Now, another thing we can do is go to our AWS console — let me just make this bigger and get this out of the way. We can already see that we have three instances initializing. Usually this process takes literally less than one minute. So if we are a bit more patient, we'll see the instances there — here it is, this is one node that we have. I've already installed the CNI — I installed Cilium in our target cluster — so we should see all of the nodes in a ready state as well. If we wait for maybe two more seconds, ideally we should see it all up and running.
Shall we wait for it, or shall we go back to the slides? It's happening... yes, here it is. Yay. Nice. And the thing is, all of this can be done completely hands-off: the only thing you need to do is push your Git commit, and this will pretty much apply your changes to your infrastructure. Now, going back — this is the recorded demo, just in case. The same process can be applied with Tinkerbell, because Cluster API deploys our infrastructure in a similar manner irrespective of the cloud provider, so we can use the same strategy with Tinkerbell. If you'd like to do that, including the GitOps strategy in this architecture, we're still going to look at three sets of configuration: one from the Tinkerbell side, one from the management cluster, and the result is going to be applied to the target cluster. On the Tinkerbell side, again, we need our hardware — we still need an enumeration, an inventory, of all of the hardware that we have available — and we still need the Tink server to be up and running and able to communicate with our hardware. On the management cluster, we still need all of our controllers; the infrastructure provider is going to be the Tinkerbell provider. And all of our YAML manifests, all of our infrastructure as code, are in this case managed using a Helm chart — we don't have to manage the full manifests ourselves, we have the Helm chart to parameterize them. But more importantly, the Helm chart is going to be watched by an Argo CD instance. So what happens, for example, as in the demo that I showcased, if you want to provision a new machine? You apply the changes to your Helm chart and commit them to the Git repository. Argo CD is going to identify these commits and apply them to the management cluster. In the management cluster, we have the Tinkerbell controller watching for any change that we need to apply to our infrastructure. If we need a new machine, this is going to be identified, and we already have all of the templates and workflows provided by Tinkerbell: we're going to install a Linux operating system, we're going to install the kubelet and all of the certificates, so that by the end the machine is attached to the cluster as a node. And we can do this in a repetitive manner to satisfy the desired state that we want for our infrastructure. Just to provide a bird's-eye view of all of this intertwinement between the different tools: if you want to provision bare metal, you're going to look at Tinkerbell. If you want to deploy your Kubernetes cluster on bare metal, you're going to introduce Cluster API into the picture. If you want to automate the deployment of your infrastructure, this is where GitOps comes into play, with a tool such as Argo CD — because you have all of your infrastructure as YAML, you can use GitOps. And in addition to that, you're able to manage any applications you have on top of your cluster, so this can go one layer further if you choose. All of this has been possible because of the power of building blocks within the cloud native ecosystem; all of this intertwinement between these tools is the driving force that lets us approach bare metal provisioning in an automated, declarative, and interoperable way. Now, it goes without saying: we are hiring.
So if you'd like to work with my team at Apple, or if you'd like to work within the Apple cloud services teams, please go to jobs.apple.com. You'll be able to find roles across Europe and the US, and if you have any questions in regards to the roles, I'm more than happy to help you throughout the process and make it as easy as possible for you to join our team. If you have any curiosity about Cluster API and GitOps, and the combination of the two, I've written quite a few blog posts about this, and you can find them on my Medium account. And if you have any questions, I'm more than happy to answer them after this talk. This is a QR code for the Cloud Native Fundamentals course — again, it's free, you don't have to pay for it — but if you know someone who's interested in pursuing a cloud native career, I definitely recommend looking at this course. As well, if you have any questions, I'm going to be available on social media, such as Twitter and LinkedIn. This has been Katie Gamanji, and I look forward to seeing how you can shape the cloud native ecosystem. Thank you, and enjoy the rest of the conference. Thank you.