Yeah, we have just 10 minutes. Unfortunately, we have material for hours of talk, but we will give just a high-level overview. So first of all, my name is Sergey Sergeev from AppDynamics, Cisco. Hi, I'm Ayush, I'm a principal engineer at Cisco Systems. Yeah, and we will talk about cellular architecture and how we manage it with GitOps.

So first, about AppDynamics: it's a full-stack observability platform. It provides insights for businesses, users, applications, infrastructure, network, and security. We have SaaS deployments around the world, handling hundreds of billions of data points a day.

So let's talk about cellular architecture and what it is. A cell is a collection of components, grouped from design and implementation into deployment, that is independently deployable, manageable, and observable. It provides a way to split an infinitely scaling problem into smaller chunks: you basically shard your customers across cells, which lets you put an upper bound on each cell's scalability. It also limits the blast radius, so if something happens to one cell deployment, it affects only the customers in that cell. It provides a way to reason about capacity and scalability, since you can define upper bounds for each individual cell. It also helps with disaster recovery. It's very hard to do disaster recovery for a huge, infinitely growing deployment; how do you recover it from disaster? With a cell-based architecture, disaster recovery can be tested and validated, so you can apply a CI pipeline to your disaster recovery processes. It also helps to move heavy-traffic customers to an individual cell, so they do not act as noisy neighbors for other customers. And it minimizes resource over-provisioning: you may over-provision resources in one cell, but once you have enough cell deployments, the over-provisioning stays bounded.

So what is a cell? It's basically over 100 applications. We use a microservices pattern at AppD, so it's over 100 different microservices with their data stores and all the infrastructure and foundational components needed to run that system. We also have a global control plane which hosts the global services the cells need to operate, like a global event bus to communicate and orchestrate tenant provisioning and tenant management. And we have the management cell itself, which does fleet management: it provisions new cells, and it manages and monitors the rest of the cells. Just to highlight the scale again, it's hundreds of applications contributed by dozens of engineering teams, so there is quite a lot going on in each individual cell. And with this, Ayush will provide more details about the GitOps side of things.

Yes, so we have a single management cell which is responsible for provisioning all of our tenant cells and the control cells. To add a tenant cell, you basically write a manifest like this, where you specify which control cell it belongs to, which cloud and region to deploy into, the number of tenant cells, and things like that. Once you commit this to our main GitOps repository, Flux in the management cell reconciles it, which kicks in Crossplane in that management cell. Crossplane then creates all the base cloud resources, like the VPC, the subnets, the security groups, and the Kubernetes cluster itself. It also creates the node groups based on the profile provided here.
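To make this concrete, a cell-creation manifest along these lines might look like the following. This is a hypothetical sketch for illustration only; every field name here (controlCell, cloud, region, tenantCells, nodeProfile, clusterPath) is an assumption, not the actual AppDynamics schema.

```yaml
# Hypothetical cell-creation manifest (illustrative API group and field names, not the real schema)
apiVersion: fleet.example.com/v1alpha1
kind: CellRequest
metadata:
  name: tenant-cells-us-west
spec:
  controlCell: control-cell-us-west      # which control cell these tenant cells attach to
  cloud: aws                             # target cloud provider
  region: us-west-2                      # target region
  tenantCells: 2                         # how many tenant cells to provision
  nodeProfile: general-purpose-large     # drives the node groups Crossplane creates
  clusterPath: ./clusters/aws/us-west-2  # path Flux is bootstrapped with once the cluster is up
```

Committing a manifest like this to the GitOps repository is what the management cell's Flux would reconcile, handing the cloud-resource creation (VPC, subnets, security groups, cluster, node groups) to Crossplane as described above.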
And once the cluster is up and running, it bootstraps Flux in the cluster with the cluster path that is provided here. The cluster path provided in the cell-creation manifest is a path to the cluster configuration. The cluster configuration is a way to encapsulate all the components that go into a cluster, or that need to be installed in a cluster. These don't necessarily have to be Kubernetes resources; they can be any of the cloud resources that need to be provisioned in that cell.

In our case, it's like a layer cake: the reconciliation order goes from bottom to top, and each layer depends on the services provided by the layer below. It starts with our foundational components, like the Helm repositories, the foundational definitions, the gateways, and the service mesh. Once the foundational components are created, we start adding the security and compliance components, like Kyverno and OPA. This ensures that everything added after this point is compliant with our security and compliance policies. After the security components are good and ready, we onboard the teams. Onboarding a team basically creates its namespaces, service accounts, and RBAC, which gives us very tight control over what the teams can or cannot do in the cell. After the teams are added, we add the infrastructure components, like Crossplane and the monitoring stack. These infrastructure components are used by the shared data stores and the application components. In shared data stores we have Kafka and Druid. After the shared data stores, which are shared by multiple teams, are created, the application components are reconciled. There we also encapsulate all the components that the application needs, like its private data stores, its ingress and egress rules, and any security components specific to that application.

At AppDynamics, we use a monorepository, so Flux basically reconciles from a directory in the cluster overlay, which is derived from the base cluster overlay. We have two flavors of our deployment: the CI (ephemeral) flavor and the cloud flavor. In the ephemeral flavor, we provision container workloads, and if you spin up the same application in a cloud environment, it spins up actual cloud resources. To give an example: let's say you have an application which needs Redis. If you deploy it in an ephemeral cluster, it will use a Redis container, but if you deploy the same application in an AWS environment, it will spin up ElastiCache. This gives us a uniform pattern to deploy the whole SaaS AppDynamics platform, or part of it, in CI, CD, local, or production environments. And AppDynamics is hiring, come join us, and see you in the break.

Yeah, we made it. Each slide deserves like a half-hour talk; sorry, we just gave a high-level overview. Meet us at the break in the hall, we will be happy to chat with you in more detail and answer your questions. And if you're excited about what we do, come join us, AppD is hiring. Thank you so much for coming. It's exciting to see so many people, things getting back to normal. Thank you. Thank you.
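To make the layer-cake ordering described above concrete, here is a minimal sketch of how such a bottom-to-top chain could be expressed with Flux Kustomization dependsOn relationships. The names, paths, and exact layer split are assumptions for illustration, not the actual AppDynamics configuration.

```yaml
# Illustrative only: layer-cake ordering expressed as Flux Kustomizations with dependsOn.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: foundation              # Helm repositories, gateways, service mesh
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  sourceRef:
    kind: GitRepository
    name: gitops-monorepo
  path: ./clusters/tenant-cell-042/foundation
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: security                # Kyverno, OPA policies
  namespace: flux-system
spec:
  dependsOn:
    - name: foundation          # only reconciles after the foundation layer is ready
  interval: 10m
  prune: true
  sourceRef:
    kind: GitRepository
    name: gitops-monorepo
  path: ./clusters/tenant-cell-042/security
# The remaining layers chain the same way: teams dependsOn security,
# infrastructure dependsOn teams, shared-datastores dependsOn infrastructure,
# applications dependsOn shared-datastores.
```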
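One way the two deployment flavors described above could be wired up is with Kustomize overlays over a shared base, where the ephemeral overlay pulls in a Redis container and the cloud overlay pulls in a Crossplane claim that provisions ElastiCache. The directory layout and names below are a sketch under that assumption, not the actual monorepo structure.

```yaml
# clusters/base/my-app/kustomization.yaml -- shared application manifests (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# clusters/overlays/ephemeral/my-app/kustomization.yaml -- CI/ephemeral flavor
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../../base/my-app
  - redis-container        # plain Redis Deployment + Service for throwaway clusters
---
# clusters/overlays/aws/my-app/kustomization.yaml -- cloud flavor
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../../base/my-app
  - elasticache-claim      # Crossplane claim that provisions ElastiCache instead
```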