Thank you. So today we'll be talking about a very interesting topic: Cluster as a Service. We are from Tata Communications, Sachin and I. As we were already introduced, we are mainly working on two kinds of offerings from Tata Communications: one is the cloud Kubernetes part, and then a specific version for edge kinds of deployments. Sachin and I are both working on a very similar solution, and we are presenting the part of that solution that we work on.

Our agenda for this presentation is basically to give a broad overview of TCL Edge Management, where Cluster as a Service fits in, and why it is so important. Now, we already have a lot of tools and deployment methods upstream, like Kubespray or kops, for installing and managing Kubernetes. But there are a lot of pitfalls with those when managing large numbers of clusters and handling day-one and day-two operations on them. That's where the upstream Cluster API project, under the Kubernetes SIG Cluster Lifecycle, is gaining a lot of attention of late. This is something that Tata Communications has leveraged, and we would like to show that in this presentation, along with the application manager: how the Cluster API that is available upstream is extended, how it is integrated with the app manager, and how that can be put to use.

To start with, from our TCL perspective, we are managing hundreds of clusters that are spread across the globe. Essentially, what we envision to provide as a wider solution, or a wider product, is a single pane of glass where a user can come in and manage any number of Kubernetes clusters. Once a user has provisioned a Kubernetes cluster from that single pane of glass, he may want to orchestrate containers on those clusters independent of the cloud provider. The cluster could be deployed on AWS, on Azure, on any other cloud provider, in a data center, or on a small edge box. This edge box could be a simple Linux box sitting in a very remote location, like a steel plant, for some kind of IoT application. So how do you manage clusters that are not orchestrated by any cloud provider? One part is cluster orchestration, and then application orchestration across those clusters. When we say application, it could be a containerized application, but a lot of applications are still in the process of being containerized, so we also envision providing VM orchestration on top of Kubernetes, which is also quite heavily used. In addition to that, there will be generic monitoring and logging capabilities, along with native DevOps and GitOps integrations.

The overall session will be split into two parts. First, I'll run through the concepts of the Cluster API, how we have extended it, and how we have integrated it with the app manager. In the second part of the session, Sachin will take over for the demo: we will create two Kubernetes clusters during this demo and see how they come up and how we orchestrate applications on them. As some of you might already be aware, the Cluster API is basically a single set of APIs that has been open-sourced by the Kubernetes SIG Cluster Lifecycle community and is used to spin up a cluster on any cloud provider.
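For reference, a minimal Cluster object from the upstream Cluster API looks roughly like the sketch below. The exact apiVersion and the referenced control plane and infrastructure kinds vary with the Cluster API release and provider; the names here are illustrative only.

```yaml
# Minimal upstream Cluster API "Cluster" object (illustrative sketch).
# It mainly wires the cluster to a control plane object and an
# infrastructure-provider object via references.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: demo-cluster
```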
What you see on the left-hand side, the first block, is the extended Kubernetes APIs, which are CRDs. Using the upstream ones, we can spin up kubeadm clusters on any cloud provider as of today. That has now been extended with multiple bootstrap providers: with the same Cluster API, extended under the same principles, we can spin up K3s clusters on any of the cloud providers that you see on the right-hand side, or OpenShift clusters for OpenShift use cases, or TataIKS, which is Tata's version of a Kubernetes service and is what is actually used in our widespread deployments. On the infrastructure part, we have validated a lot with MAAS. We have points of presence across India in multiple data centers, spread across more than 28 locations, which amounts to around 150 to 200 servers managed by MAAS, and we use MAAS as a cloud provider to spin up clusters on demand.

Now, once the Kubernetes cluster is installed, that's not the whole story; that's where the story actually begins, right? That's where the application manager comes into the picture. Let's say there is a specific use case for 5G, and we want to deploy 5G as a service. That means a cluster is spun up at one of the locations, and as soon as the cluster is up, the applications are rolled out. That's where the integration with the application manager comes in. From the API perspective, it is the simple API that you see; from the user's perspective it is very simple, and all of the automation happens under the hood. What you can see in this diagram is the extended Cluster API running in one management cluster. That management cluster is a Kubernetes cluster that could be running anywhere in the cloud, and it acts as a single control plane for cluster management, used to spin up Kubernetes clusters on any of the cloud providers that you see at the bottom.

To achieve this, all we have done is extend the existing Cluster API with what you see on the right-hand side. It is a very straightforward spec: it's a kind: Cluster where the user specifies which version of Kubernetes to install, whether 1.20, 1.21, or 1.22, and which networking driver, which translates to the CNI that needs to be installed when the cluster is spun up. The way we install the CNI is, again, through the integration with the application manager. Then we specify how many control plane nodes we want, one, three, or five, and similarly how many worker nodes we want and the operating system for them. Under managed services, we specify which applications need to be running (a sketch of this spec follows below). As I was saying earlier, if it is a 5G edge, then all we have to do is create the template required for the app manager, and as soon as the cluster is up, the application will also come up. In fact, we are using this extensively with CDN. As we know, in CDN networks, traffic spikes can happen quite frequently and without notice. In such situations, ideally we autoscale the cluster onto the available nodes.
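To make the shape of that umbrella spec concrete, here is a hypothetical sketch. Only the concepts, the Kubernetes version, networking driver, control plane and worker counts, operating system, and managed services, come from the talk; the API group, version, and exact field names are assumptions for illustration.

```yaml
# Hypothetical sketch of the extended umbrella cluster spec described in
# the talk. The API group/version and field names are assumed.
apiVersion: tcl.example.com/v1alpha1   # assumed group/version
kind: Cluster
metadata:
  name: edge-cluster-01
spec:
  kubernetesVersion: "1.21.1"      # 1.20 / 1.21 / 1.22, as discussed
  networkingDriver: calico         # translates to the CNI, installed via the app manager
  controlPlane:
    replicas: 3                    # one, three, or five
    operatingSystem: ubuntu-20.04
  workers:
    replicas: 2
    operatingSystem: ubuntu-20.04
  managedServices:                 # apps rolled out as soon as the cluster is up
    - prometheus
    - fluentd
    - opa-security
```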
If there are no nodes available on that cluster, or no nodes available in the data center in the case of MAAS, then the Cluster API is integrated with an umbrella API, managed by the OSS/BSS, that will trigger the Cluster API to create a Kubernetes cluster on demand. So sometimes we see a traffic spike and there are no resources in that data center; we then spin up a cluster on demand on a public cloud, and the CDN application comes up on demand. The CDN is just an example; it could be any other application for that matter.

Expanding a bit on the API itself: what you see on the left-hand side is what is available upstream. When we create a kind: Cluster with the Cluster API, under the hood it creates multiple CRDs. In fact, the Cluster API has a certain drawback in that it is not a single API. Essentially, we have to create MachineDeployments, where a MachineDeployment translates to the worker nodes created by the Cluster API, and a KubeadmControlPlane, which translates to the control plane nodes. Then there are other APIs, like MachineTemplates and KubeadmConfigTemplates, and all of these have to be linked to one another. So what is available from the Cluster API is not a single API; it is a set of APIs. What we have done is abstract that away: we have one simple API acting as an umbrella on top, which takes care of all the automation required for the Cluster API. Along with that, there is the integration with the application manager: as soon as the cluster comes up, it creates a kind: MultiClusterApp, which is again an extended API.

We have also provided a native integration with the autoscaler. The cluster autoscaler is available upstream, and Cluster API acts as one of its providers: just as AWS, GCP, or Azure can act as a provider, Cluster API itself can act as a cloud provider (see the sketch below). The cluster autoscaler automatically kicks in when pods cannot be scheduled because resources are not available, and it will automatically scale the cluster. And with the OSS/BSS integration that I mentioned earlier, if there are no nodes available, it will scale across to a different cloud itself. Yeah, that's the Cluster API part, where we do the cluster orchestration.

Now, coming to the application manager: it is again a custom API that we have extended, and it acts as a global control plane. It's a kind: MultiClusterApp, using which we can deploy any kind of Kubernetes resource that you see here on the right-hand side: a kind: Deployment, a kind: VirtualMachine, a kind: Service, a ConfigMap, anything. So there is a single point of contact on the management cluster, acting as a global control plane, that can deploy applications onto any number of clusters and manage their lifecycle. This might seem a bit analogous to KubeFed or to Argo CD, but there are substantial differences there as well. KubeFed, for example, only handles the CD part. The app manager we have covers both the CI and CD flows: the application build can happen, the tests can happen with natively integrated APIs, and then the CD part rolls the application out to the clusters. Compared to Argo CD, which again is CD only, that is going to be the main differentiator.
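For that autoscaler integration, the upstream cluster autoscaler ships a clusterapi cloud provider: roughly, a MachineDeployment is marked as an autoscalable node group with min/max-size annotations, and the autoscaler binary is started with --cloud-provider=clusterapi. A minimal sketch, with illustrative names:

```yaml
# Sketch: opting a Cluster API MachineDeployment into the upstream
# cluster autoscaler (clusterapi provider). The annotations mark it as
# an autoscalable node group; the autoscaler itself runs with the flag
# --cloud-provider=clusterapi.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: demo-workers
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  clusterName: demo-cluster
  replicas: 2
  # selector and template (bootstrap/infrastructure references) omitted for brevity
```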
A bit more elaborate information on the same app manager. As I was explaining, it is the kind: MultiClusterApp CRD that we have developed, and with it we have native integrations with GitLab and Jenkins. If we want to use both the CI and CD parts, all we have to do is specify the Git repo and the specific file that needs to be built; the integration with Jenkins then constructs the pipelines, followed by native integration with tests like SonarQube or JUnit, which we already have. That is the CI part. On the CD side, we ideally treat deployments as stages: the applications we want to deploy can be deployed to different stages, and each stage can itself consist of multiple clusters, as you see on the right-hand side in the staged deployment (a sketch of this staged layout follows below). The developer stage could have any number of clusters, and the application will be deployed on each of them; similarly, staging could have a number of clusters, with the application deployed on each, and the same applies to production. The integration with the Cluster API that we have done is mainly for the CD part: as soon as the cluster is up, the applications come up. That's where the CD comes in. And with this, ideally we can offer application-plus-cluster as a service; not just the cluster, it's going to be the application and the cluster as a service, and that cluster could be OpenShift, K3s, or kubeadm, which is what we are using. Yeah, that was the conceptual part. I would like to hand it over to my colleague Sachin, who can showcase the same thing in the demo.

Yeah, thanks Vishal. Hi everyone. Let me share my screen, and let me know when my screen is visible. Yes, please go ahead, Sachin. As Vishal mentioned, we'll be showcasing the demo twice: once on AWS and once on MAAS. We have prepared two YAML files for showing this demo, one with the infrastructure provider as AWS and the bootstrap provider as kubeadm. Here we mention what type of provider we want to use, the basic cluster specification, and what type of networking driver we want to deploy once the cluster is ready. Similarly for the control plane: how many replicas we want and what operating system we want on the control plane nodes, and the same on the worker side. For the managed applications, we can specify which managed services should be deployed once the cluster is installed. For this particular case, we specify that Prometheus and Fluentd should be installed, and on the cluster security side, that OPA security should be installed once the cluster is ready.

Let me take you to the AWS UI. Right now we can see there are no instances running, and similarly no VPCs, internet gateways, and so on. Once we start creating the cluster, all these things will be created automatically by the Cluster API. Once we apply the YAML, we will be able to see the status on the API side: it will say it has started creating resources, the internet gateway is already created, the subnet is created. The same thing we can verify from the UI as well. We can see that for each cluster, the internet gateway is automatically created; similarly, on the VPC side, we can see the VPC created for the particular cluster, and the load balancer for the API server. While this cluster gets created, we will look through one of the clusters already deployed: when Vishal was showing the presentation, I started the cluster creation on MAAS.
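To make the staged model above concrete, here is a hypothetical sketch of a MultiClusterApp. The staged CD model and the GitLab/Jenkins/SonarQube hooks come from the talk; the API group, version, field names, and repo URL are assumptions for illustration.

```yaml
# Hypothetical sketch of the kind: MultiClusterApp CRD described in the
# talk. Group/version, field names, and the repo URL are assumed.
apiVersion: tcl.example.com/v1alpha1   # assumed group/version
kind: MultiClusterApp
metadata:
  name: cdn-app
spec:
  source:
    gitRepo: https://gitlab.example.com/apps/cdn.git   # assumed repo URL
    path: deploy/
  ci:
    jenkinsPipeline: cdn-build     # build, then SonarQube/JUnit tests
  stages:                          # each stage can target many clusters
    - name: developer
      clusters: [dev-cluster-1, dev-cluster-2]
    - name: staging
      clusters: [staging-cluster-1]
    - name: production
      clusters: [prod-cluster-1, prod-cluster-2]
```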
So basically, if we see, it was a two-node cluster that I created, and both of those nodes are in the ready state. The same thing we can verify from the UI as well. We used the KCD Gen A zone, and the cluster is deployed with version 1.20.9. And if we look at the application side, we can see that as the cluster came up, all the applications that we specified, the networking driver, the OPA security policies that we mentioned, Fluentd, and Prometheus, are all in deployed status. If we look at any one of the multi-cluster apps, in the detailed status we get to know all the resources deployed for that particular application. This one is for OPA security, so we can see the cluster roles and role bindings, all the resources required for the operator and its policies, which are created on the respective cluster. The same thing we can verify from the cluster itself: we can see Prometheus deployed along with the OPA gatekeeper.

Going back to the AWS one, we can see six out of seven prerequisites for the cluster are already created, which includes the cluster security groups, internet gateway, load balancer, the NAT gateways that are required, the VPC, and subnets. Now we can see the machines have started provisioning for the control plane, and we can verify it from the UI as well: the control plane machine is created. We will just get a kubeconfig for this particular cluster and wait for the machine to come into the running state. Once the machine is running, we will be able to reach the API server. As we can see, the first node is ready, and we can verify from the machine API as well that it has started deploying the worker node on AWS. The same thing we can see from the AWS UI: it has started creating this particular machine.

So basically, the main point is that from a single API we can manage the whole cluster lifecycle, be it creation, deletion, or upgrades. Suppose you want to upgrade a cluster: we just have to change the cluster version here and apply the YAML again (sketched below, after the demo wrap-up). It will automatically do it in a pipeline fashion: it will first upgrade the control plane and then, subsequently, the worker nodes. Similarly, if we want to change the worker replicas or the control plane replicas, we can just update that one API, and it will automatically trigger the required Cluster APIs for scaling up or scaling down. Yeah, and just to add, even the cluster installation takes only about three to five minutes on AWS, compared to, for example, other standard installation tools like kops or Ansible, which take quite a long time. So now we can see the second worker node has also been added to the cluster. Yeah, and with this, we come to the conclusion of the demo. We can take any questions if you have them.
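As a footnote to that upgrade and scaling discussion, here is a sketch of the day-two edits, using the same assumed field names as the earlier umbrella-spec sketch; re-applying the changed YAML is what triggers the rolling upgrade or the scale operation.

```yaml
# Sketch of day-two operations on the assumed umbrella spec: bump the
# version and re-apply to trigger a rolling upgrade (control plane
# first, then workers); change replicas and re-apply to scale.
spec:
  kubernetesVersion: "1.21.1"   # was "1.20.9" -> rolling upgrade
  workers:
    replicas: 4                 # was 2 -> scale out
```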
Thank you so much for the valuable session. I noticed that you already answered a couple of questions on the chat. One thing I'd like to clarify: when you refer to the Tata Communications Kubernetes as a Service, it is going to act, approximately, as a multi-cloud platform. So when you say multi-cloud platform, where are we going to monitor all of it? Is Tata Communications providing the ability to monitor that? Correct, that's right. Yeah, so we do have a single pane of glass, but each of these clusters can be independently monitored, along with the logging for each of the applications; and if you want to do advanced analytics on top of that data, there is a specific analytics framework that we have integrated. So it's going to be one big orchestrator, under which Cluster as a Service is one module that we have presented to you.

Recently I noticed a similar thing, I believe, something like Anthos on GKE, in Google Cloud. There is a service called Anthos in Google Cloud which is also doing a similar type of thing. Is my understanding right? That's right, there is substantial synergy there. However, there are some differentiators. For instance, we can orchestrate virtual machines on the Kubernetes cluster proper, which Anthos does in a somewhat different way. And there are more differentiators when it comes to connectivity: we spoke only about cluster creation and application onboarding, but if applications running on different clusters want to interconnect, we do have something like a network mesh that we can establish for those applications to interconnect. So there are some differentiators there, although there is some synergy. Yes.

Thank you so much for your valuable session. I'm not seeing any more questions. If there are any other questions, we can post them on Slack or somewhere else. Sure. Thank you. Thanks a lot for the opportunity. Thank you.