All right, good morning. Almost the end of the morning, just five more minutes before it becomes the afternoon. So welcome to KubeCon. My name is Arun Gupta, and I work for Amazon. And I'm Raffaele Di Fazio from Zalando. Fantastic. Today we are here to talk about mastering Kubernetes on AWS. We know it's a short session; we'll try to do as much justice to it as we can. The slides, our recommendations, our guidelines, all of those are publicly available in a GitHub repo, and we'll be happy to share that with you towards the end of the session.

So let's get started. I work for Amazon as a principal open source technologist. I'm the CNCF board member for AWS, and I've been in the containers land for a while. I like to run, and I like to do other fun things as well. As I said, I'm Raffaele. I work for Zalando. I love the topic of container orchestration, Go, and a bunch of other things. And as an Italian, you love wine. Yeah, that's a fact.

All right. So we're not expecting this to be a Kubernetes 101, although we have a lot of content around that topic as well. Instead, we're going to pick three topics, deep dive into them, and share our recommendations. The way we have structured the session is: I'll talk about what we are seeing at Amazon and how our customers are using Kubernetes in different scenarios. I was pretty thrilled; I have actually built a personal affection for Game of Thrones now, because it runs on Kubernetes on AWS. That is pretty cool, and I learned quite a few things from that session. But we'll also share a broader perspective on what our customers are doing. And then Raffaele will talk about, exactly: practices are good, but here is how we run it, in an opinionated way. Hopefully you'll get a good balance between the two. The three topics we chose are: how do you set up your cluster, how do you do identity and access management with Kubernetes, and finally visibility, or telemetry; there are lots of names for this.

Let's get started. So what are your choices? Pardon my voice; I've been talking too much here. What are your choices when you are trying to set up a Kubernetes cluster? What does that mean, really? How do I install a cluster? How do I operate it? How do I upgrade it? How do I deploy applications to it? Well, deploying is simple, because that's kubectl; that's your CLI, essentially, or maybe some IDE or Maven plugin. So we'll focus primarily on the cluster setup itself.

Now, if you are a developer and you want to set up a single-node Kubernetes cluster, Minikube is a credible option today. You download Minikube, it starts up using a hypervisor such as VirtualBox, and it spins up a single-node cluster on your machine, a single worker and single master combined, and you interact with it using kubectl. All of that works pretty well. Docker also announced at DockerCon that Docker Desktop will have Kubernetes integration built in. I'm a Docker Captain, so I'm still waiting for those bits to come into my hands so that I can start trying them out as part of Docker for Mac. That brings a more seamless experience, because from your Java files, or whatever your language is, you build a Docker image using Docker for Mac and then you use kubectl. You already have Docker for Mac, so why download a new tool? That's kind of cool.
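To make that concrete, here is a minimal sketch of the single-node flow just described, assuming Minikube and VirtualBox are installed; the image and app names are placeholders:

```bash
# Spin up a single-node cluster locally (hypervisor-backed, VirtualBox here).
minikube start --vm-driver=virtualbox

# kubectl is now pointed at the minikube context; verify the single node.
kubectl get nodes

# Build an image directly against minikube's Docker daemon, then run it
# without pushing to a registry (hence the Never pull policy).
eval $(minikube docker-env)
docker build -t myapp:local .
kubectl run myapp --image=myapp:local --image-pull-policy=Never
```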
On the community side, there are 18 different ways you can deploy Kubernetes on AWS. I'm not going to list all of them, but one of the favorites that we have seen a lot of our customers use is kops. This tool is actually built in the community by the AWS SIG, essentially. So huge props to Justin Santa Barbara and Chris Love, who are the main maintainers, and a bunch of other people as well who have kept this tool really slick and easy. I'll show an example of how kops really works. If you are looking for the complete list of the 18 different ways and other flavors, look at kubernetes-aws.io.

At re:Invent last week, we introduced EKS, which is Elastic Container Service for Kubernetes. That is a managed service from Amazon. You go to the AWS Console, say give me a cluster, and we give you an API URL, and that's it; we take care of managing the cluster. We'll talk about that. There are other options like CoreOS Tectonic and Red Hat OpenShift, which runs an opinionated view of Kubernetes because it provides a lot of tooling on top of Kubernetes as well. Last but not least, there is of course CloudFormation. We see a lot of customers using CloudFormation because they really want to handcraft how their cluster is set up and what their networking policy looks like, and then Terraform, of course, is another option. So these are some of the options we see; our customers primarily use one of these. And then, of course, we have partners: Docker, Heptio, Mesosphere. They all provide recommendations on how you build and run these clusters.

Now, let's take a look at kops a little bit. So what is kops? Well, kops is community supported. It is primarily built, as I said, as part of SIG AWS. It's a top-level Kubernetes project, so github.com/kubernetes/kops is where you get all the details. There are kops office hours and a Slack channel. There is no production support, if you're interested in that; you are on your own, really relying upon the Slack channel, but that channel is very active. It can also optionally generate CloudFormation or Terraform templates in case you need them, so you can take those as a starting point and then deploy your cluster using that.

Let's take a look at the CLI. How does it work? Well, say you are deploying your kops cluster in a region with three availability zones, and you want a highly available master. What that means is you are deploying three masters, and each master also has a collocated etcd with it. So essentially what I'm saying is my availability zones in this case are us-east-1b, us-east-1c, and us-east-1d. kops also stores the cluster state in an S3 bucket, so I'm exporting that as an environment variable. And then I say, OK, I'm going to create a cluster. In this case, I'm giving the cluster name as cluster.k8s.local. Just by ending the name in .k8s.local, I don't need to do any DNS setup, et cetera; it uses a gossip protocol, Weave Mesh essentially, to discover the nodes among themselves, and it creates the cluster. Then I'm specifying my master count, my master instance type, my node count, and my node instance type. I can pick different kinds of networking here; in this case, I'm using Calico. But there is work going on already in the next version of kops by which you can actually use the CNI plugin that we released last week.
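Putting those flags together, a kops invocation along the lines of what's described might look roughly like this; the bucket name and instance types are illustrative:

```bash
# kops keeps cluster state in S3; the bucket must already exist.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# A name ending in .k8s.local enables gossip-based discovery, so no DNS setup.
kops create cluster \
  --name cluster.k8s.local \
  --zones us-east-1b,us-east-1c,us-east-1d \
  --master-count 3 --master-size m4.large \
  --node-count 5 --node-size m4.large \
  --networking calico

# The create step only previews; this actually provisions the AWS resources.
kops update cluster --name cluster.k8s.local --yes
```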
And then essentially you say: now go create the cluster.

Let's talk a little bit about EKS, the Elastic Container Service for Kubernetes. What we give you there is essentially a managed control plane. Our customers have come to us and said: run Kubernetes for me so that I can focus on my applications. And that's what EKS is giving you. You use the AWS console or CLI and say, create a cluster for me. We create the cluster, and we give you the API URL. Worker nodes are in your account; the masters are in our account. You bring your own worker nodes: we give you CloudFormation templates, we give you pre-built AMIs, you take those AMIs and connect them to the control plane, and then you're good to go. We automatically scale the masters for you, depending upon the number of pods and the number of requests coming in. And we give you the option to upgrade the cluster manually or automatically.

So let's look at the core tenets of EKS. Well, the first thing is, in a very classical Amazon way: availability, reliability, security, all the -ilities that you are aware of, that Amazon gives you from AWS, that is exactly what we give you. You can run your full enterprise-grade production workloads on EKS, because we are doing all the heavy lifting for you.

One of the most important tenets, and they're all equally important, but one that I'm super excited about, is that we're going to provide a 100% upstream experience. What that means is: say you have a Kubernetes cluster running on premises and you build an application over there. You can just switch your kubectl config, say now I'm going to talk to this cluster, and deploy your applications over there. Whatever the upstream experience is, that's what you get. There is no forking, there are no private branches; all the work is going to be done upstream. So you can literally take an application from your own upstream cluster, deploy it to EKS, and we'll take care of it.

We will provide deep integration with the rest of the AWS stack: IAM integration with kubectl, for example, and IAM integration at the pod runtime. The same with CloudWatch, CloudTrail, and X-Ray. All of those integrations will be there, but they are completely optional for you to use. That's what our customers like: we provide those integrations, and if you want to use them, they're ready for you.

Now, this is the tenet that I am super excited about, because we are changing the company culture here: we are going to be actively contributing to the Kubernetes project. What that means is that the managed control plane in EKS is going to be ours, but the work will be fully compliant and done 100% out in the open. As a matter of fact, we built a CNI plugin which allows you to give a secondary IP address from your VPC network to a pod. That CNI plugin is fully open sourced, so you can take it, plug it into your own cluster, and build your cluster that way.

The API is simple: you say aws eks create-cluster, you give the cluster name, and we're going to provide support for different versions. And then you give an IAM role, and that is the IAM role that will be used to authenticate with the cluster and can then propagate all the way to the pod.
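EKS had been announced only the week before this talk, so the final CLI shape was still subject to change; based on the description above, the call might look roughly like this, with all ARNs and IDs as placeholders:

```bash
# Hedged sketch of the simple API: name, version, and the IAM role used
# to authenticate with the cluster. The VPC subnets and security groups
# are yours, since worker nodes live in your account.
aws eks create-cluster \
  --name my-cluster \
  --kubernetes-version 1.10 \
  --role-arn arn:aws:iam::123456789012:role/my-eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaa,subnet-bbb,securityGroupIds=sg-ccc

# The control plane is managed; poll until it reports ACTIVE.
aws eks describe-cluster --name my-cluster --query cluster.status
```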
So how does it work? Well, essentially, you come to us and say: give me a cluster. Today, if you are building a cluster yourself, you're running the masters and etcd, and then your worker nodes, across different availability zones. When you come to EKS, we take care of all that heavy lifting for you: we give you a control plane, and you bring your worker nodes. So the responsibility split is the very standard shared responsibility model of AWS. Those worker nodes are connected to the control plane, we take care of managing the control plane for you, and then you use your standard kubectl to deploy applications to the cluster.

All right, so as I said, I'm Raffaele from Zalando. You've now heard some ways you can provision or update your clusters. What I'm going to tell you now is how we do this at Zalando, so really the opinionated part of this presentation. Before doing that, I have to tell you what Zalando is, because, as we are in Texas, it might be that you don't know it. Zalando is actually the largest European fashion company. This means we sell clothing and all this stuff online, so it's an e-commerce company, essentially. To let you understand the scale of the company, I'll just show you some numbers. We operate only in Europe, as I said, in 15 markets. We have six fulfillment centers, 22 million active customers, and some other numbers. But what is really interesting is that we have 1,900 employees in tech, most of whom are actually engineers, and they want to deploy their applications.

Historically, we come from an on-premise infrastructure based on a bare-metal setup, in which, of course, we were limited in scaling, for example when starting new projects, new teams, and new applications. Moreover, our tooling was entirely custom, based on software we developed ourselves. We then decided to migrate to the cloud, specifically to AWS, to make sure we had the desired velocity for developers and could scale the teams. We did that by going to a model with EC2 instances running an AMI we baked and one single Docker container per instance, which meant that developers needed to understand what the right instance type was for them, what they needed in terms of resources, and a lot of other things. And we used CloudFormation to deploy that. After that, we decided to migrate to Kubernetes, which allows us, first of all, to have a higher density, packing all those workloads together, and to offer an API that actually makes more sense to our developers, removing the need for them to understand what a t2.micro is versus a 4xlarge.

To give a little bit more context: as I said, we had lots of teams. Already when we went to the cloud, we had something around 100 engineering teams, and each team was also responsible for the CI/CD part and all these kinds of things. You heard this morning in the keynote from Kelsey: you don't want to deploy with kubectl. So part of this whole migration to Kubernetes is to enforce compliance and best practices, by making sure that people can only deploy to clusters via a proper CI/CD setup, and by enforcing this on the operations side too, by not having people directly access containers and do random stuff with them.

OK, so let's jump to the Kubernetes cluster setup. Currently we have multiple AWS accounts, and we provision exactly one Kubernetes cluster per account. We are managing something around 50 Kubernetes clusters.
As you may imagine, this is not a terribly huge number if you are Google or Amazon, but it's a pretty big number of clusters for a fashion e-commerce company. Those clusters are actually designed to be small, so that we can, for example, limit the impact of possible outages or problems with any one of them. Also, Kubernetes code was not optimized to deal with AWS rate limiting; think about, for example, mounting EBS volumes. It used not to be perfect. It's improving, but there are still a bunch of things to do.

Our setup is strongly based on CloudFormation; we use CloudFormation entirely to provision those clusters. We chose that because, first of all, it's AWS native, and we wanted something tailored exactly to the cloud we were deploying to. And because we had pretty good existing experience with it inside the company, we decided to take the tool we were familiar with. Our setup is based on Container Linux. We don't do any AMI customization right now, so we use exactly what you can use as well. An additional decision we took is to use Flannel as the networking layer, to support more than 50 nodes. Amazon is now jumping on the ship and helping with the CNI plugin, but it used to be quite tricky to have a cluster with more than 50 nodes due to a limitation in AWS routing tables.

So, as I said, we have all these clusters, and we follow the approach of immutable infrastructure: we don't do in-place updates and reboots of nodes; instead, we replace them in a rolling fashion. Our cluster setup looks mostly like this: we have two auto-scaling groups, with the masters spanning three availability zones, and likewise for the worker auto-scaling group. We run everything ourselves, including etcd, which runs in its own CloudFormation stack, outside of the master nodes. Additionally, we use an ELB, a classic load balancer, in front of the master nodes to achieve the HA setup, and we use this both for interaction from kubectl, the users, and the deploy tooling, and for the worker nodes.

As I said, we have 50 clusters, which is quite a big number, and we had to find a way to operate them that doesn't require all this manual operation. It's nice to say your developers cannot deploy to production manually, and then you operate the cluster manually? This doesn't really make sense, right? So what we did is essentially develop two tools. One is the cluster registry, which contains metadata for the clusters, really descriptive information about what each cluster should look like, and a reference to the configuration of that cluster, which, by the way, is open source; you'll find it on github.com. The other is a tool that just watches this configuration and applies it by means of CloudFormation stacks and some additional AWS resources.

All right, let's talk about IAM. Now, IAM is how AWS customers understand security and access management. It really enables access to AWS resources. When you create a kops-based cluster, essentially what happens is that two IAM roles are created for you, one for the master nodes and one for the worker nodes. The policies attached to them are a bit different, which is by design, because the capabilities the master needs to have are a bit broader, essentially.
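As a rough illustration of that master/worker split, here is the kind of thing kops does for the worker role, sketched with the plain AWS CLI; the role name and attached policy are only examples of a deliberately narrower grant:

```bash
# Trust policy: EC2 instances (via an instance profile) may assume this role.
cat > node-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
  --role-name nodes.cluster.k8s.local \
  --assume-role-policy-document file://node-trust.json

# Workers get narrower permissions than masters, e.g. read-only ECR pulls.
aws iam attach-role-policy \
  --role-name nodes.cluster.k8s.local \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
```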
Now, what we need is a bit more granular control for kubectl and pods. Because if you think about it, your kubectl relies upon your kubeconfig to talk to the master, but there is no IAM integration there. I want to be able to say: use this IAM role to authenticate with the cluster, because that's the language AWS customers are used to. And then, when you are running a pod, you want to specify which IAM role the pod can assume at runtime, so that the policies can be applied accordingly. So let's take a look.

For the IAM role with kubectl, a lot of our customers are looking at the Heptio authenticator; that's something we are also looking at as part of the EKS service, essentially. It's a project by Heptio Labs, which is one of our good partners. Essentially, there is kubectl, there is a Kubernetes cluster running, and then in the AWS environment we have the AWS auth service running. kubectl passes along the AWS identity, which is basically an IAM role; remember, we saw the role ARN earlier. So we pass that IAM role to the Kubernetes cluster. The Kubernetes cluster then talks to the AWS auth service: OK, this is the IAM role, what policies are available? It gets the answer back. Authentication is done using the AWS authenticator, and authorization is done using the RBAC that is baked into the Kubernetes cluster itself, and accordingly the action is allowed or not allowed. So that's how the authenticator model works.

Let's take a look at IAM identity for the pod itself. Today, if a pod wants to do something within the EC2 instance, it talks to the EC2 metadata service: I want to do this, can I do this or not? kube2iam is a project that has been extremely popular, which basically gives you the ability to assign an IAM role to a pod; it says, OK, take this role. So let's see how that works, in the same setup as before. First, we set up kube2iam as a DaemonSet. There is also the Secure Token Service (STS) running back in AWS. Now, when the pod tries to talk to the EC2 metadata service, the call is intercepted by the DaemonSet. The DaemonSet goes to the Secure Token Service, figures out which IAM role is assigned to the pod and which policies are applicable to it, and then makes the call to the EC2 metadata service on the pod's behalf. So the point is: you can assign an IAM role to the pod at runtime, and you can use an IAM role at the kubectl level as well; both are possible. And the way an IAM role is assigned to the pod is literally just an annotation: in my deployment, all I'm saying is, this is my IAM role, I give my IAM role ARN here, and there it goes.

Now, kube2iam is a great project and has done a great job, but there are certain issues with it. It requires the node to have the assume-role capability, which is a much wider capability. So that's one of the projects we are looking to actively contribute to, because that's a capability we expect our customers will continue to use. That's a project where the EKS team will contribute back in open source and see how we can make it better. There are other ways our customers are looking at creating those temporary IAM credentials as well.
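The annotation itself looks like this; a minimal sketch assuming kube2iam is already running as a DaemonSet, with the role ARN and app names as placeholders:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
      annotations:
        # kube2iam intercepts metadata-service calls from this pod and
        # serves temporary credentials for the role named here.
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/myapp-role
    spec:
      containers:
      - name: myapp
        image: myapp:latest
EOF
```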
HashiCorp Vault is one of the offerings our customers like to use, because you can generate temporary IAM credentials from a Vault server running either on premises or in the cloud, and that's certainly a possibility. In the long run, we are also looking at SPIFFE, which allows you to do identity propagation across different clouds and different container orchestrators, independent of the container runtime. We are particularly excited because it's not tied to Kubernetes or containerd or any other runtime, so it can bleed into other CNCF projects as well. The possibilities of SPIFFE, even though it's still a futuristic thing, are pretty huge.

So, talking about IAM at Zalando, we first have to distinguish between two different systems. There is AWS IAM, and there is an IAM system that we use internally for service-to-service communication in our microservice infrastructure, and for employee-to-service communication. Regarding AWS IAM, we use kube2iam as a DaemonSet, in a very similar fashion to what Arun already showed. An interesting part is why this works for us, why this is actually reasonable for us. We put this annotation, as you've seen, into those deployments. So why do we trust people to do that? It works because of the four-eyes principle: as I said, no deployment happens manually to production. It always happens through versioned YAML, and that configuration needs to be approved. This means two engineers have actually said: OK, this is right, this is the role we need, this is fine for us.

Talking about the other IAM system, which here I'm calling the platform one: as I said, we run a microservice infrastructure. We have hundreds of microservices, and they talk to each other using OAuth tokens; employees also use OAuth tokens to communicate with some of those services. We wanted, first of all, to make sure that employees could use the same tokens they were used to in this infrastructure, for example to talk to the Kubernetes API server. We did that via a natural extension point of the Kubernetes API server that you may be familiar with, which is the webhook. Essentially, whenever you do kubectl get pods, or whatever, and you pass a token, it arrives at the API server, and when you have webhook mode enabled for authentication and authorization, the token is passed on to a webhook, which in the end is custom software that you can write. This webhook, in our case, is responsible for validating the token and verifying that the user is actually authorized to do whatever needs to be done.

All right, so this is for users talking with systems, with APIs. But what about applications that need to talk to each other? For that, we extended Kubernetes by means of a custom resource definition and a small system which, in this case, is called the credentials provider. What does the system do in the end? The user just creates this custom resource and says: I want these credentials, these tokens, for my application. The system reads this desire for tokens from the Kubernetes API, talks to the internal IAM infrastructure that we had before Kubernetes, and writes Kubernetes secrets back to the API.
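The employee-token part rides on Kubernetes' standard webhook token authentication; here is a minimal sketch of the kubeconfig-format file the API server accepts, with the endpoint URL purely illustrative:

```bash
cat > webhook-auth.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: token-review-service
  cluster:
    # Custom service that validates bearer tokens via TokenReview objects.
    server: https://auth.example.org/tokenreview
users:
- name: kube-apiserver
contexts:
- name: webhook
  context:
    cluster: token-review-service
    user: kube-apiserver
current-context: webhook
EOF

# The API server POSTs a TokenReview for each bearer token it sees; a similar
# flag (--authorization-webhook-config-file) exists for the authorization side.
kube-apiserver --authentication-token-webhook-config-file=webhook-auth.yaml
```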
This means, in this case, that the application can use a native Kubernetes concept, secrets, which are pretty handy to mount into your pods; with that, they are integrated and happy to just use the tokens.

All right, so let's move to the last topic: visibility. Let's talk about visibility in a cluster. If you look at it, there are five different aspects that we see as important for a cluster. You want to see end-to-end logs, logs coming from different sources. You want to see metrics: how is my node doing, how is my pod doing? You want to look at events: if my node is going down or my cluster is coming up, what's really happening over there? You want alerts, and maybe some alert-based auto-scaling triggering. And eventually you want tracing, end-to-end tracing of an application. So those are the five visibility aspects we think about. Now, those five aspects really come from four different sources. There is, of course, the cluster, which is sort of the uber concept; then there's the node; within the node is where your container is running; and in between, there is the concept of an application. So the point is: if you're looking for visibility in a Kubernetes cluster, in an AWS context, this is the matrix you're really looking at. Make sure you check off that matrix in all rows and all columns.

Now, let's take a look at logs. The easiest way people look at logs is kubectl logs. That's a rather easy way, but it's only the first cut; it doesn't give you all the details and all the information. You can get events and things like that as well, but kubectl logs is definitely the more popular entry point. The stack that we see a lot of our customers using is the EFK stack: Elasticsearch, Fluentd, and Kibana. So let's take a look at how that combination works.

Essentially what you have is a cluster, and this cluster is running in a region in two separate availability zones, shown as horizontal rows, and I have auto-scaling groups as well. So essentially two masters, which is not a good design, it should really be one or three or an odd number of masters, and then I have a number of workers over here. If you deploy the EFK stack, either as a Helm chart or by yourself, what you will have is Fluentd deployed as a DaemonSet on all of your nodes. Once you have that, the logs from your containers are dumped onto the file system, picked up by Fluentd, and forwarded to Amazon CloudWatch Logs. Once the logs are in CloudWatch Logs, you have the ability to subscribe an Elasticsearch cluster, which again runs on Amazon itself as the Elasticsearch Service; you say, any log that comes to CloudWatch, automatically send it to Elasticsearch. And on top of that, you can set up Kibana. So that's the EFK stack that we see a lot of our customers running. There are a lot of different variations; it's not the only way, but it is one of the most prominent ones we see.
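A minimal sketch of the Fluentd piece of that pipeline, deployed as a DaemonSet so every node ships its container logs; the image tag and the CloudWatch environment variables follow the fluentd-kubernetes-daemonset convention but should be treated as illustrative:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels: { app: fluentd }
  template:
    metadata:
      labels: { app: fluentd }
    spec:
      containers:
      - name: fluentd
        # CloudWatch-output variant; tag and env var names are illustrative.
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-cloudwatch
        env:
        - { name: AWS_REGION, value: us-east-1 }
        - { name: LOG_GROUP_NAME, value: /k8s/container-logs }
        volumeMounts:
        - { name: varlog, mountPath: /var/log }
      volumes:
      # Container logs land on the node filesystem; Fluentd tails them there.
      - name: varlog
        hostPath: { path: /var/log }
EOF
```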
Now take a look at metrics, for example. What kinds of metrics do we see? There's of course the node, which gives you a lot of metrics; node exporter is a common tool we see a lot of our customers using to expose metrics about the node itself. At the pod/container level, you can look at kube-state-metrics, which can be installed depending upon your configuration, and cAdvisor is a pretty popular tool that developers and customers look at. Then, from the application perspective, say you're a Java developer building a Java application: you can start exposing the metrics you used to expose over JMX through a /metrics endpoint instead. And why a /metrics endpoint? Because essentially what you need is a cluster-wide aggregator. You can use either Prometheus or Heapster as that cluster-wide aggregator, pulling the data by scraping all the /metrics endpoints. Now, you do need a data store as well. Prometheus is a time-series database itself, so you can store the data there, or you can have a dedicated InfluxDB or Graphite database where your data is stored, and then run analysis on top of that. On top of that, you'd like to build an alerting mechanism: OK, now that I have all the data, if my memory drops below a certain threshold, trigger an alert for me. You can build the alerting on your own, or there are plenty of tools available. And last but not least, what you really need is a visualizer on top of all that. There is a dashboard that comes as part of Kubernetes, and you can certainly use that, but there are Grafana and Kibana dashboards too; if that is your corporate standard, or what you're using in your team, feel free to pick those. There are plenty of ready-made dashboards you can use.

Good. So, talking about logging at Zalando: we currently use a centralized logging solution. We don't run a self-managed stack like the one just described; we use Scalyr as our centralized logging solution, and we deploy a DaemonSet, so a pod on each of the nodes, that just streams the logs to this software-as-a-service. This makes it extremely easy for developers: they don't have to care about it. We have this piece of infrastructure, the logs get streamed, and they just need to log to standard output.

Regarding monitoring, we have had an existing monitoring solution in place for many years now. It's called ZMON; it's actually open source, you'll find it on GitHub. What we did when we started migrating to Kubernetes was essentially integrate this system with Kubernetes, making some of the native Kubernetes resources available from the system. We use Prometheus node exporter to get system metrics and Heapster to collect pod metrics. What is really important, though, is the user point of view. Most of the discussions around Kubernetes are "kubectl deploy something," right? That's easy. But how do you do it for real, in production, with a compliance system, in a way that people can actually monitor everything? So we integrate with our existing systems, and we provide some default checks and alerts for all the applications that are deployed, which the teams can just use.
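Going back to the cluster-wide aggregator idea from earlier, here is a minimal sketch of a Prometheus configuration that scrapes the /metrics endpoint of every pod opting in via the conventional prometheus.io/scrape annotation:

```bash
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 30s
scrape_configs:
- job_name: kubernetes-pods
  # Discover pods through the Kubernetes API.
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pods annotated with prometheus.io/scrape: "true".
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
EOF
```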
Additionally, we have an ingress controller, and we have some standard metrics around latency and error rate that people can just reuse to build proper reporting. All of this data would not be interesting if you didn't have a way to actually look at it. We have some custom dashboards. The picture is quite small, but this is actually the dashboard we use to monitor all the clusters we run. We have metrics on API server latency, the number of pods, and the number of deployments, which is a great indicator, if you are running Kubernetes clusters, of what is actually happening. And then we have some default dashboards that users can import or clone to monitor memory, CPU usage, latency, and all the other things.

All right, so this was just a quick look at some of the important things we learned, discovered, and recommend regarding Kubernetes on AWS. There are definitely more factors to consider. For example, you want to configure some sane defaults. Kubernetes will not limit the number of pods you can run in a namespace by default, so you want to put a quota on that, and you want a LimitRange to set default resource limits for your applications (there's a small sketch of both at the end of this transcript). And you want to know your cluster limits: it's easy to say "I can run 10 pods," but as you saw, Kelsey didn't even want to run the 10 million pods that someone asked about this morning in the keynote, and there is a reason for that. You also want to work on simplifying the user experience by, for example, starting to use things like Ingress, or External DNS, which is an open source project in the Kubernetes incubator that we help develop, to get nice DNS names for your apps.

All right, that was it. Time is over, so I'm not sure how many questions we can take. One of the last things we would like to share with you: this is a workshop we have been building for the past several months. It's a workshop we share with our customers, our partners, and our developers. If you are new to Kubernetes and you want to get started with Kubernetes on AWS, you can literally start with this workshop; you can spend a few days on it if you want to go through each chapter end to end. And what we are encouraging is: if you don't like something about the workshop, it is done completely in the open, so submit an issue, send a pull request; that's what we are looking for from you. We are really trying to show all the ways our customers are using Kubernetes in this workshop. We want to engage with you and work with you on how we can make it more successful. So star it, and let us know how we can be helpful to you. Thank you.

If you want to talk about disasters, things that went wrong, failures, etcd dying, all these kinds of stuff, I'll be outside, and there are other members of my team around as well; that would be fun. Thank you.
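As promised above, a small sketch of those sane defaults: a ResourceQuota capping pods per namespace and a LimitRange supplying default resource requests and limits. The namespace and the numbers are placeholders to adjust for your own workloads:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
  namespace: team-a
spec:
  hard:
    pods: "50"   # Kubernetes imposes no per-namespace pod cap by default.
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a
spec:
  limits:
  - type: Container
    default: { cpu: 500m, memory: 512Mi }        # applied when no limit is set
    defaultRequest: { cpu: 100m, memory: 128Mi } # applied when no request is set
EOF
```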