Hi everyone, welcome to this GitOpsCon presentation. My name is Adarsh Vincent Chitlapalli, I work as a cloud software engineer at Intel. I've been working with Intel for the past one year and I've been engaged mostly with GitOps-related work. We're doing cool edge stuff and I'm glad to be part of it.

Thanks Adarsh, I definitely agree with you when it comes to cool edge stuff. Anyway, I'm Igor DC, software engineer at Intel. I've been doing on-and-off work around edge and cloud, and I'm currently a developer and maintainer of EMCO. Me and others will be here talking a little bit about GitOps and a little bit about EMCO, fundamentally EMCO with GitOps, and at the end we'll have a demo as well. I hope you enjoy.

For the last couple of years we at Intel have been working on EMCO as a solution for edge deployments, and for the last couple of months we have been looking at GitOps and integrating it into EMCO. We have realized how powerful EMCO and GitOps are for solving this problem of edge deployment, and that's what we are going to talk about today.

Just as a recap before we jump into our problem statement, we would like to recap what GitOps is. This is a basic GitOps flow. We have the developer on the left-hand side. The developer puts in the source code, which is resource YAMLs etc., and through some CI it goes to the Kubernetes resource Git repo. On the cluster we have a config agent like Flux or Argo which continuously monitors this Git repo for resources or any changes. As soon as it finds any resources, or sees any changes in them, it will pull them and deploy them to the cluster. This is the basic gist of what GitOps is. Now let us see how it is technically defined: a set of practices to manage infrastructure and application configurations using Git. Git is the important point here: Git is the single source of truth.
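The loop just described can be sketched as a minimal Flux v2 configuration. Note that the repository URL, resource names and path below are placeholder assumptions for illustration, not values from the talk:

```yaml
# A GitRepository source that Flux polls for changes.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-source            # placeholder name
  namespace: flux-system
spec:
  interval: 1m                # how often to check the repo
  url: https://example.com/org/k8s-resources.git   # placeholder repo URL
  ref:
    branch: main
---
# A Kustomization that applies the synced manifests to the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-deploy            # placeholder name
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-source
  path: ./manifests           # placeholder path inside the repo
  prune: true                 # remove resources deleted from Git (drift remediation)
```

With `prune: true`, resources removed from the repo are also removed from the cluster, which is the remediation behavior discussed later in the talk.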
So the entire state of the system is monitored and tracked in Git itself. And as I mentioned earlier, we have an agent like Flux CD or Argo CD deployed in the cluster to continuously monitor this repo, and it does the job of deploying the applications. So we can say: what you see in Git is what you get on the cluster.

Let us understand the problem statement. In a real-world scenario, an edge deployment would have any number of clusters. For example, let us consider cluster one, and let us assume this cluster is deployed in Azure. This cluster can have app one, app two and so on, say five apps. In the same way we can have another cluster, cluster two, and let us assume that this one is deployed in AWS and has three apps: app two, app four and app three. Similarly we have cluster three; this can be an on-premise cluster and it can have four apps on it. And so on: we can have any number of clusters with any number of different apps. We can see from this illustration itself how complex it can become; multi-cluster, multi-app deployment can get out of hand pretty quickly.

Such a scenario brings some complexities with it as well. One such complexity is management. As we just saw, such a huge deployment with a lot of apps requires an orchestrator in place which can manage the deployment of these applications in a smooth manner. We also require something which could monitor the status of these applications: did the deployment go fine? Are the applications working properly? Is the health of those applications good? And in certain scenarios, certain applications require specific resources, like particular hardware, so we also have to ensure that those apps get deployed to those types of clusters. The other complexity that we could encounter is security. We need a secure way for this deployment to go through; the API calls that we make should be secure.
And any credentials or sensitive data that we might require for this deployment to go through should also be maintained in a secure way. The third complexity that we may encounter is consistency. As I mentioned, consistency is key: for any deployment we must have the assurance that what we desire is what we get. And even if drift happens in the system, for example the versions are out of date, the system should be able to remediate this issue and ensure that consistency is always maintained.

To address the edge deployment complexities that were just outlined, I would like to introduce you to EMCO, the Edge Multi-Cluster Orchestrator. This is a Linux Foundation Networking project, currently in its sandbox stage. Some of the things we can do with it are, for example, intent-based deployment of cloud-native applications. EMCO is designed to have an intent-based API and to be pluggable in order to fulfill those intents. Its main focus is cloud-native applications on the edge. You can deploy to sets of Kubernetes clusters, hence its multi-cluster nature, but those Kubernetes clusters can be reached in different ways through different implementations of backends: namely the typical kubeconfig-based approach with direct API authentication, as well as GitOps-based approaches. Flexibility, modularity and scalability are major selling points of EMCO, and specifically when it comes to modularity we can attach different backends, like I just said, regarding how to reach those Kubernetes clusters: namely GitOps, for particular implementations out there, both well-known names from the public cloud industry as well as lesser-known implementations. Through EMCO we get access to a single pane of glass for all the clusters we have, as well as the applications deployed on them. Such applications are what we call composite applications.
They really are collections of applications that get deployed as a single package, with logic to represent how to distribute the components of that composite application across the multiple clusters. EMCO provides the engine and the language to achieve that distribution of such composite applications.

Now let us understand the basic EMCO flow with GitOps. This flow is basically to deploy resources to three clusters. As shown in the diagram we have three clusters: cluster one, cluster two and cluster three. Cluster one is a direct cluster; it has no GitOps component in it. Cluster two and cluster three are GitOps-managed: cluster two uses Flux CD as its config agent and cluster three is an Azure Arc-managed cluster. Basically, we would onboard that cluster to Azure Arc and thus it's an Azure-managed cluster. We have a Git server here. This Git server can be any Git server: a local Git server, GitHub, GitLab, etc. Then we have EMCO. We can deploy EMCO co-located with any of these clusters, or we can deploy EMCO to a different cluster as well. We have shown a few important components of EMCO (there are more components than this): the cluster registrar, the orchestrator, the resource synchronizer and MongoDB.

Now let us look at a scenario. We have a user, and the user wants to deploy an app to these clusters. The first step the user would take is registering these clusters with EMCO. For cluster one, since it's a direct cluster, the kubeconfig will be the registration key, so the user will provide that. For cluster two and cluster three, since they are GitOps-based clusters, the interaction goes through the Git server, so we require some Git credentials. And in the case of cluster three, we also require a few Azure credentials.
All these credentials and the kubeconfig are provided by the user to the cluster registrar, and this then gets stored in MongoDB. In step two, the user calls instantiate: basically, they want to instantiate app one on these clusters, so they call that API command on the orchestrator. The orchestrator then gives this command to the resource synchronizer. The resource synchronizer does the job of applying the resources and, in the case of GitOps, writing the resource files to the Git server. That is what happens in the next step: the resource synchronizer gets the credentials from MongoDB. In the case of the direct cluster, it directly applies the resources. In the case of the GitOps clusters, cluster two and cluster three, it writes the resources to the Git server. For cluster three, which is an Azure cluster, an Azure config should also be created, and that's what we see here: it calls the Azure API to create that config. Next is step five: for cluster three, which is an Azure-managed cluster, Azure will configure Flux CD on that cluster. And after Flux CD is configured, it will start pulling resources from the Git server and syncing the resources. Thus, after all these five steps, all three clusters have app one deployed and installed on them.

In terms of GitOps backends supported by EMCO, we currently offer three upstream. We have basic Flux v2, which allows us to use GitOps with any kind of Kubernetes cluster, as long as we run the Flux v2 controller and operator inside. We have Azure Arc, which happens to internally also use Flux v2, but this Azure Arc GitOps extension for EMCO is Flux v2 plus the additional extensions to extract the maximum functionality from Azure. And then we also have Anthos, so we can run GitOps for Google Cloud.
This is Anthos Config Management, including RepoSync. RepoSync is a construct that in Anthos we can use to scope deployments to particular namespaces, and also to apply role-based access control there using the RepoSync's respective service account in Kubernetes. So these are the three backends we have: Flux v2, just generic Flux; Azure Arc, which is also based on Flux but for Azure, with additional extensions; and Anthos Config Management.

The Git server is one of the important components of the GitOps flow. Usually we depend on hosted Git servers like GitHub, GitLab, etc., but it's also desirable to have a local Git server, and EMCO comes bundled with one. This Git server is powered by Gitea and the UI is very similar to what we see on GitHub; we can see here it's very similar to what GitHub provides. One of the best USPs of having your own Git server is that the user has full control over the data, so essentially there are no privacy leaks or anything like that. Another issue that can happen with hosted Git servers like GitHub is API rate limiting; you also won't face this issue if you have your own Git server.

So, EMCO with GitOps: we can think of Git as the source of truth for the clusters that are synchronizing with the Git repos, and we can think of EMCO as the entity defining that truth to be put into the Git repos. GitOps is a great addition to EMCO because it aligns very well with making the most out of EMCO's existing functionality, such as on-demand instantiation of applications and intelligent placement of workloads on clusters, including clusters of different natures with different interfaces, including GitOps. The customization of resources post-instantiation of the application is something EMCO can do: we deploy the application and it can be customized later, and GitOps also fits very well at that point.
And then, finally, automation of service mesh and other connectivity, networking and security infrastructure: the automated distribution of security certificates and things of that sort. EMCO and GitOps go along very well together.

Just to recap the complexities we discussed earlier: there are three complexities we would face in an edge deployment, namely management, security and consistency. Now let us see how EMCO plus GitOps can solve them. The first complexity we mentioned was management. Using GitOps helps to deploy these resources very easily, so deployment is taken care of by GitOps. What EMCO adds on top of this is intelligent placement of resources. Consider the scenario where a certain application requires a certain amount of CPU or RAM: it should be placed in a cluster which has enough resources to handle that app's needs. That is taken care of by EMCO. EMCO also provides a one-stop solution for monitoring: to find out if the apps in the cluster are healthy, if they are deployed properly, if they are updated properly, all of this is taken care of by EMCO. EMCO also does application dependency management between clusters. Consider an example where you have app one deployed on cluster one, but there's app two which depends on app one, so it can only be deployed after app one is deployed. All of this is taken care of by EMCO, so we can mark this problem as solved.

The second complexity that we discussed was security. EMCO seamlessly integrates with the Istio service mesh. Due to this, authentication of users becomes very easy and we can do it in a very secure manner. Also, by using Istio, all the API calls between the EMCO services are secured. The calls from EMCO to the Git server use the HTTPS protocol, which also means they are secured. This ensures that no one can sniff the API calls and get data from them.
Now, the Flux config agents that we discussed are all deployed in the clusters. This helps us avoid the risk of storing credentials anywhere other than the clusters: we don't need to store the credentials in the Git repo or anywhere else, everything is confined to the clusters. So we can say we have solved this complexity of security as well.

The third complexity that we were faced with was consistency. What EMCO guarantees is consistent intents, and what GitOps guarantees is consistent rendering. The combination of both ensures that what we desire is what we get, and that it happens in a consistent manner. EMCO also has a very unique directory structure which ensures that the apps are properly deployed to the right cluster every time. So we can also consider this complexity solved.

Just recapping our initial problem statement: we have n clusters with n different types of apps. With all the discussion that we have made so far, we recognize that EMCO plus GitOps is a perfect solution, and by having them at the center of this deployment we can easily simplify the deployment of these applications on any complex set of clusters.

Hello everyone. We'll now bring a small demo to showcase how EMCO with GitOps can be used to deploy applications to the clusters. We have three clusters in total: on one side we have the cluster which has EMCO installed in it, and then we have the two target clusters. We'll first start by installing Flux in the target cluster, making use of the bootstrap mechanism to install Flux. The important thing to note here is the path. We are considering this as cluster 2, so this is the actual path where the resources will be synced. We will just apply this command, which connects to the Git server; it will write the Flux components to the Git repo and then use that to sync the Flux components to the same cluster. Okay, so now it's done. The second step is to install the monitor.
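The path highlighted in the bootstrap step determines which subtree of the repo Flux syncs to this cluster. A sketch of the kind of Kustomization such a bootstrap ends up maintaining (the names and path here are illustrative assumptions, not the exact ones from the demo):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./clusters/cluster2   # illustrative: only this cluster's subtree is synced
  prune: true
```

Scoping each cluster to its own path is what lets one repo safely serve many clusters, which is the multi-cluster layout EMCO writes into.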
What the monitor does is basically monitor the resources in this cluster and report them to the Git repository, and the service running in EMCO will monitor the status on the Git repository and note it down. We'll make use of the helm install command to install the monitor. There are a few parameters to it, but the important one is the cluster name, which, again, is the same one as discussed above: it should be the one with cluster 2 in it. We can see the monitor is now deployed. We can just quickly check that they're all deployed properly: you can see the EMCO monitor is deployed in the EMCO namespace, and the Flux system components are deployed in the flux-system namespace. Similarly, for the other target cluster we'll do the same steps, install Flux and install the monitor; I've already done it to save time.

So now, moving on. EMCO comes bundled with test examples, and one of them is the test Flux example. This has a few files in it, YAML files and a setup script. The setup script is where we specify which apps to deploy to which clusters. We can see here we have two clusters, and we are saying that collectd is deployed to cluster 1 and cluster 2, and the operator to cluster 1. It also comes with a handy all-in-one test script, so we'll be using that to apply the resources. We'll call apply on it, and this will instantiate the logical cloud as well as the deployment. The deployment is in progress... Deployment succeeded. We can just quickly see what steps were taken: the logical cloud is the first step that gets instantiated, and it succeeded; the second step is the deployment itself, which also succeeded.

Now let us check the Git repository. In the Git repository we have the two cluster folders, fluxr1 and fluxr2, and in them we have the files for Flux. This is fluxr1: under this context ID path, we have the apps folder, which has the two applications that we want to deploy.
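To make the walkthrough easier to follow, the repository layout being browsed is roughly of this shape. The folder names below are an illustrative reconstruction from what is shown in the demo, not exact paths:

```
fluxr1/                      # folder for cluster 1
└── <context-id>/            # per-deployment context ID generated by EMCO
    └── apps/
        ├── collectd/        # all collectd resource files
        └── operator/        # all operator resource files
fluxr2/                      # folder for cluster 2
└── <context-id>/
    └── apps/
        └── collectd/        # cluster 2 only gets collectd
```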
So collectd has all the collectd files, and similarly the operator folder has all the operator files. And similarly we have cluster 2 as well, but cluster 2 only has collectd. So, just like the cluster 1 case, we have all the files in Git here. Now let me check the clusters: did they get deployed? This is cluster 1, which has the operator and collectd; the operator also has a few etcd pods with it. Everything got deployed here. And similarly for the other cluster: it only has collectd in it. We only wanted collectd in cluster 2, and that's the only thing that got deployed here.

Let's quickly see how status gets tracked. There is a different branch for each cluster. For cluster 1, we go to the cluster folder and we can see the status, for both collectd and the operator. And similarly, in the branch called cluster2, the status for cluster 2 is maintained.

Now let's call delete on these resources, again using the script and calling delete. The delete went through as expected: we can see the logical cloud termination succeeded and the deployment termination succeeded. Let's again go to the cluster folders: we can see the operator just got deleted in front of our eyes. Let's check again: everything got deleted from the Git repository, which means the resources should also get deleted from the clusters. It's now in terminating state, which means it will get terminated soon. Let's check cluster 1: everything got deleted in this cluster. Let's again check cluster 2: all the resources got deleted.

So this was our short demo on EMCO with GitOps. Thanks for watching. Thank you.