Hello everybody. My name is Jay Pipes. I'm an engineer on the EKS team at AWS, and I focus on open-source contribution, in particular in the Kubernetes ecosystem. Today I'm going to be talking a little bit about the AWS Controllers for Kubernetes project, or ACK. To introduce you to ACK, I'm going to tell you a little story, and it should be pretty familiar to quite a few of you. We've got Alice. Alice is a web developer. She develops a Node.js application, and she follows most modern application build techniques: she builds her application into immutable Docker images, and she chooses SQLite for the simple storage needs of her web application. Oh, I forgot to mention that Alice is a huge Kubernetes fan, of course. So Alice goes to deploy her application to a target Kubernetes cluster, and to do so she does the typical kubectl apply for her Deployment, a Service, and possibly an Ingress resource for top-level inbound routing. Everything is fine. But then, quite predictably, SQLite falls over. It's not really designed for heavily concurrent access, so when 10 users try to use her website all at once, things don't go very well. Alice realizes that she needs a real database, and she knows that Postgres is a good example of a quote-unquote real database that handles heavy concurrency. So she googles for tutorials on how to set up Postgres to run on Kubernetes, and most of those tutorials boil down to what's on the screen here: creating a Secret in Kubernetes, creating a PersistentVolumeClaim for some storage, creating a Deployment for the database server heads, and then a Service record for the Postgres database service. All of that goes to plan, but unfortunately Alice realizes that she's now in the database administrator game, and that's definitely not what Alice had in mind when she deployed her initial Node.js application to Kubernetes.
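The four resources those tutorials walk through can be sketched roughly as follows. This is only an illustration of the pattern described above; all names, sizes, image tags, and credentials here are made up.

```yaml
# Illustrative sketch of the typical "Postgres on Kubernetes" tutorial:
# a Secret, a PersistentVolumeClaim, a Deployment, and a Service.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  POSTGRES_PASSWORD: change-me        # placeholder value only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
      - name: postgres
        image: postgres:13
        envFrom:
        - secretRef: {name: postgres-credentials}
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim: {claimName: postgres-data}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector: {app: postgres}
  ports:
  - port: 5432
```

And this is exactly the point of the story: every one of these objects is now Alice's to patch, back up, and babysit.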
So Alice really wants to focus on her application. She doesn't really want to be a DBA; that's not what she had in mind. So what can she do? Alice hears about Amazon RDS, the Relational Database Service, and it sounds like a great solution: it takes away all the pain points she has around installing and maintaining Postgres by doing all of that for her. But there is a little bit of a problem, and I did mention that Alice was a huge Kubernetes fan. Alice goes to use Amazon RDS, heads to the AWS web console, clicks through some GUI wizards to create her database instance, and she just doesn't really like it. Where's the cozy Kubernetes experience that she loves? Well, she doesn't have to use the web console, of course. She could use the AWS CLI, or a tool like CloudFormation or Terraform. But at the end of the day, none of those things are Kubernetes, and, like I mentioned, Alice really does love Kubernetes. So what is she to do? Well, what if she could just do this? What if she could kubectl apply a Kubernetes manifest with some YAML that describes her RDS database instance, send that off to the Kubernetes API server, and have a Kubernetes controller manage the lifecycle of her RDS database instance for her? That is pretty much what ACK is. It allows Kubernetes users to keep using the Kubernetes API and the configuration language of the Kubernetes API machinery to manage AWS infrastructure resources: things like RDS databases, S3 buckets, SNS topics, et cetera. So that, in a nutshell, is what AWS Controllers for Kubernetes is all about. It bridges the world of AWS service APIs with the Kubernetes API, it solves Alice's problems, and let's see if it'll solve yours too. ACK stands for AWS Controllers for Kubernetes, and in particular, that's controllers plural. It is a collection of Kubernetes custom controllers, one for each AWS service API.
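That "what if she could just do this" moment might look something like the manifest below. This is a hedged sketch: the API group, version, and field names are my approximation of an ACK RDS DBInstance resource, and the released CRD may differ.

```yaml
# Hypothetical ACK-style manifest for an RDS database instance.
# Group/version/field names are assumptions, not the final CRD schema.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: alice-db
spec:
  dbInstanceIdentifier: alice-db     # identifier RDS will use
  engine: postgres
  dbInstanceClass: db.t3.micro       # example instance size
  allocatedStorage: 20               # GiB
  masterUsername: alice
```

Alice would apply this with a plain `kubectl apply -f db-instance.yaml`, and the ACK controller, not Alice, would own the work of creating and reconciling the actual RDS instance.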
And the ACK service controller for a particular service, say S3 or RDS or SNS, manages the backend AWS resources for that service API on behalf of a Kubernetes user. The Kubernetes user submits their RDS instance or S3 bucket as a Kubernetes manifest, the Kubernetes API server receives it and writes it to etcd, and then the ACK service controller gets notified of a new custom resource of a kind that it is interested in. The ACK service controller for that service then goes ahead and manages the lifecycle of that resource by calling the AWS APIs itself. A little bit about the design of ACK. As many of you might know, AWS has a lot of services; I think we're at 165-plus at this point. When we started designing ACK, we realized early on that it wasn't feasible to hand-build 160-plus different custom Kubernetes controllers. And so we set out early on to do everything in ACK using code generation. We generate the API types from a set of JSON model documents, and in addition to the API types, we also generate the entire service controller implementation itself. Now, this makes ACK a little bit different from something like Kubebuilder, which is an awesome project. The difference is that when you generate your custom controller using Kubebuilder, it provides you with a skeleton, a stub for the controller, and then you're responsible for going and implementing that controller. And like I said, we realized we couldn't hand-implement 160-plus service controllers, so we actually generate the entire implementation of the service controllers in ACK directly from the API models themselves. That's a fairly big difference between ACK and something like Kubebuilder. Behind the scenes, Kubebuilder and ACK both rely on the controller-tools project and the controller-gen binary to do various low-level code generation for CRDs, the deepcopy generation, things like that.
One way that we are different is that we don't use CloudFormation. The ACK project had its genesis in another project called the AWS Service Operator, or ASO. An old colleague of mine, Chris Hein, created the ASO project maybe two years ago, and when he built ASO, he used CloudFormation behind the scenes. So when, for instance, you created an S3 bucket through ASO, it would actually create a CloudFormation stack, which in turn created the S3 bucket. And we thought that that user experience was a little surprising for users. When we started investigating ACK and diving into some implementation proposals, we also realized that, at the end of the day, a Kubernetes custom controller relies on the Kubernetes API server and etcd to be the single source of truth for the desired state of a resource. And CloudFormation, because it is managing resources for the user, has its own idea about who holds the desired state of a resource. By using CloudFormation, you get into race conditions and conflicts over who owns the state of a particular resource. We didn't want that conflict, and so in the design of ACK we do not use CloudFormation. Instead, we call the AWS service APIs directly. It's important to point out that ACK service controllers can be installed on any Kubernetes cluster. There's nothing about ACK that is specific to EKS. You can run an ACK service controller on a GKE cluster, on-prem, or on a kops cluster running on EC2. There's absolutely nothing about ACK that is specific to EKS.
And then finally, the way that we are building the ACK service controllers is that we are working hand-in-hand with the individual AWS service teams, like ElastiCache or API Gateway, working with their engineers to develop the custom code for their particular ACK service controller, along with a set of end-to-end tests that verify that the service controller in ACK is calling their API in a behaviorally and semantically correct fashion. So we are actively collaborating with the AWS service teams. One big feature that we rolled out in ACK about three or four weeks ago is something called cross-account resource management. A contributor to ACK named Amin Hilali is the mastermind behind this particular feature, and let me explain a little bit about why it's important. As I explained earlier, ACK is a set of Kubernetes custom controllers, one for each AWS service. And we realize that that experience may be a little cumbersome for users, having to install multiple pods, each containing an ACK service controller for an individual AWS service. We didn't want to compound that encumbrance by also making the user install an ACK service controller in lots of different Kubernetes clusters in order to manage resources across multiple AWS accounts. And look, we talked to many customers, and it's almost universal: they all use multiple AWS accounts to segregate and isolate resources within their organization. Some application development teams get their own AWS account; it might be within an AWS Organization, or it might be separate. But we find it very common that users need to manage the lifecycle of resources across lots of different AWS accounts.
And so what cross-account resource management allows is for the Kubernetes cluster admin to annotate a Kubernetes namespace with a specific annotation, services.k8s.aws/owner-account-id, set to the AWS account ID that should own all of the resources created within that Kubernetes namespace. So if I do a kubectl apply and pass in a manifest for an S3 bucket, and that custom resource has namespace X, it is created in namespace X. And the cluster admin has annotated namespace X with a particular owner AWS account ID. What the ACK service controller will do is call sts:AssumeRole to pivot the AWS client that lives within the service controller, so that it can start managing resources in a target AWS account different from the one associated with the IAM role under which it was running by default. In this way, a single ACK service controller for S3 or RDS can manage resources within that particular service across lots of different target accounts. You don't need to install lots of different ACK service controllers, one for each AWS account ID that you need to manage. Related to cross-account resource management is the topic of authorization and access control. The reason I bring it up is that it's a fairly complex topic, and especially with ACK, you need to remember that there are two RBAC systems in play at any given time. One is the Kubernetes RBAC system, the role-based access control system, and that dictates which Kubernetes users, for instance Alice, can create, list, patch, or delete different custom resource kinds in the Kubernetes API. Once the Kubernetes API server performs its authorization checks, that's the end of the Kubernetes RBAC system, at least for the purposes of ACK. After that point, when an ACK service controller receives an event notification for a new custom resource of a particular kind, it needs to talk to the AWS APIs.
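The namespace annotation described above can be sketched like this. The annotation key is the one named in the talk; the namespace name and account ID are made-up examples.

```yaml
# Cluster admin marks namespace "team-x" so every ACK resource created
# in it is owned by AWS account 123456789012 (example ID).
apiVersion: v1
kind: Namespace
metadata:
  name: team-x
  annotations:
    services.k8s.aws/owner-account-id: "123456789012"
```

With this in place, an S3 Bucket custom resource applied into `team-x` would cause the single installed S3 controller to call sts:AssumeRole into a role in account 123456789012 before making any S3 API calls, rather than using its default credentials.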
And in order to do that, that's where the AWS IAM RBAC system comes into play. There is an IAM role associated with the service account that the pod with the service controller runs as. That IAM role has a set of permissions, or policies, that allow it to manage the lifecycle of resources in a particular AWS API. The two RBAC systems, Kubernetes and IAM, don't overlap, right? If you go to the link on this page, I show a diagram explaining just how those two RBAC systems come into play, but they don't actually overlap with each other. And it's important, as you start using ACK and trying it out, that you understand where these two RBAC systems come into play. So what about secret things? Those of you who are familiar with the RDS API might know that the CreateDBInstance API call unfortunately passes, in plain text, a field called MasterUserPassword. Clearly, that is not a Kubernetes best practice with regard to secret-like fields. Instead, the Kubernetes best practice, of course, is to create a Kubernetes Secret object with a key inside it, and then refer to that key within the Secret from another resource. And so our secret replacement feature, which should be coming out in the next month or so, does just that. It replaces certain fields in various resources, like the RDS DBInstance, swapping those plain-text string types for a reference to a key within a Secret. So Alice can set the master user password to be a reference with a Secret name of db-secrets and a key of master-user-password. And the Kubernetes cluster admin can create a Kubernetes Secret called db-secrets and put the actual master user password into that Secret, as opposed to passing it in plain text both to and from the AWS APIs. So which AWS services do we have currently in developer preview?
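The secret replacement described above might look like the sketch below. The exact shape of the reference field is an assumption on my part, since the feature hadn't shipped at the time of the talk; the Secret name and key follow the example given above.

```yaml
# Cluster admin creates the Secret holding the real password.
apiVersion: v1
kind: Secret
metadata:
  name: db-secrets
type: Opaque
stringData:
  master-user-password: s3kr3t-example   # placeholder value only
---
# Hypothetical DBInstance spec: masterUserPassword becomes a reference
# to a Secret key instead of a plain-text string.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: alice-db
spec:
  dbInstanceIdentifier: alice-db
  engine: postgres
  dbInstanceClass: db.t3.micro
  masterUsername: alice
  masterUserPassword:
    name: db-secrets
    key: master-user-password
```

The password itself never appears in the custom resource, so it never lands in etcd alongside the rest of the spec or in anyone's `kubectl get -o yaml` output.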
Well, as of today, which is the 27th of October, we have seven services in developer preview: S3, SNS, SQS, ECR, DynamoDB, API Gateway v2, and ElastiCache. We have a roadmap that is publicly available at the link you see on your screen, and it lists our rough timelines for when we are bringing new services into developer preview and when we plan on getting the services already in developer preview to a beta state and a GA state. You'll also find links from the GitHub repository here to our documentation describing our release criteria for developer preview, beta, and GA. Basically, it boils down to this: for beta, you'll have the ability to easily install the service controller using Helm, and there'll be quite a bit more testing and documentation for all of the resources exposed in the API. For GA, it's really about stabilization of the API types and a low level of reported bugs for that particular service controller. Our roadmap also includes a couple more important items. One is the normalization of the representation of AWS tags. Those of you familiar with AWS APIs probably know that various service APIs in the AWS API universe use different representations for tags: some use a map of string to string, some use a list of structs with a key and a value, et cetera. It's all different. Anyway, we'll be standardizing that so tags are always spec.tags, a map of string to string. There's also an adopt-a-resource feature: if you create something out of band from ACK, let's say you have a preexisting S3 bucket and you just want to bring it under the ownership of the ACK service controller, you'll be able to do that by annotating the custom resource with a particular ARN, or Amazon Resource Name.
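The tag normalization on the roadmap could look something like this. This is a speculative sketch of a plan described above, not the shipped schema; the bucket name and tag values are invented.

```yaml
# Roadmap sketch: every ACK resource kind exposes tags the same way,
# as spec.tags, a plain map of string to string.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  name: my-bucket
  tags:
    team: web
    env: dev
```

Whatever shape the underlying service API wants, a map, or a list of key/value structs, the controller would translate from this one canonical representation.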
We'll have a common rate-limiting and throttling library that's going to be separate from ACK but used by ACK, along with other projects like Crossplane, or maybe even the Cluster API provider for AWS or cloud-provider-aws in the core Kubernetes universe. So I very much encourage you to check out the repository here, github.com/aws/aws-controllers-k8s. We hang out in the #provider-aws channel on the Kubernetes Slack community, and you can get in touch with me any time on Twitter, GitHub, or Slack; I'm @jpipes. Thank you very much.