Okay, let's get started. Thank you everyone for joining us today. Welcome to today's CNCF webinar, AWS Controllers for Kubernetes. I'm Jerry Fallon, and I'll be moderating today's webinar. We would like to welcome our presenter today, Jay Pipes, Principal Open Source Engineer at Amazon Web Services. Just a few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I'll hand it over to Jay for today's presentation. Thank you, Jerry. I really appreciate the opportunity. So yeah, today we're going to talk a little bit about the ACK project, which is a brand new set of open source service controllers for Kubernetes that bridges the AWS universe of managed services with Kubernetes. So we're going to get started here with what should be a pretty familiar story for lots of folks. And it's a story that sort of highlights what the benefits of ACK are and where it fits into things. So we've got Alice. She is a web developer and she's a huge Kubernetes fan, of course. She's developed a Node.js application for her internal department at her company. And she's using modern development practices and building her application into an immutable Docker image. And, at least initially, she chose to use SQLite as a simple storage database for her application. And that was all fine and dandy.
So Alice, being a huge Kubernetes fan, goes and deploys her application into a Kubernetes cluster. And she does this using the normal kubectl apply for a Deployment and a Service for some top-level networking, and maybe she also creates some Ingress and load balancer resources for her application. And that's all fine and dandy. Everything is running great. And then like 10 users try using her site at once and, kind of predictably, SQLite falls over, because it's just not built for that. And Alice realizes she's got to set up some sort of real database. And she knows that Postgres is a real RDBMS, a real relational database management system, that supports concurrent access and all that kind of stuff. And Alice is, like I said, a huge Kubernetes user. So she Googles, hey, how do I set up Postgres in Kubernetes? And of course, there are lots of tutorials out there, and they all kind of boil down to what you see here on your screen. She creates a Secret using kubectl, then persistent volume claims so she's got some persistent storage for the database, and the deployment file and the service manifest. And she goes and deploys Postgres and changes her application so that it connects to her Postgres cluster instead of SQLite. And this all works great. The only problem is that Alice is now in the DBA game. And that's not really what she had in mind. She wanted to focus on writing her application, not administering databases. So what is she to do? She hears about AWS's RDS database service, which provides a managed relational database experience. And she thinks, oh, that's great. Now I don't need to be the DBA. I'll just set up an RDS instance and Amazon will do all the heavy lifting around managing the database instances. But she notices that there's a problem. She goes to create this RDS database instance and she logs into the AWS console. And everything is just kind of incongruent for Alice.
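(For reference, the self-managed Postgres route that Alice just walked through boils down to manifests along these lines. This is an illustrative sketch, not the tutorial from the slides; the names, sizes, and values are placeholders.)

```yaml
# Illustrative sketch of the self-managed Postgres setup: a Secret for
# credentials and a PersistentVolumeClaim for storage. A Deployment that
# mounts both, plus a Service, would round out the tutorial.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  POSTGRES_PASSWORD: change-me   # placeholder value
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```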
She really loves her cozy Kubernetes experience. And having to go into the web console and click through a wizard to create database instances is just not really what she wanted. I mean, she didn't have to use the AWS console, right? She could have also used the AWS CLI tool. She could have used something like CloudFormation or Terraform; all of those things are perfectly good tools. But at the end of the day, those aren't Kubernetes. And Alice really likes Kubernetes. She wants to stay in her cozy Kubernetes universe and use the Kubernetes API to manage all the resources in her application, including the dependencies of those applications, like a database service. She loves Kubernetes, but not quite enough to be a DBA, right? She wants to use RDS so that it takes all that database management away from her. She doesn't have to deal with the pain of all that administration. What could she do? Well, that's pretty much what ACK is, right? It allows Alice to simply kubectl apply a Kubernetes manifest that describes, in this case, an RDS database instance. So instead of logging into the AWS web console or using CloudFormation or the AWS CLI or any of those non-Kubernetes tools, she simply writes a Kubernetes manifest to the Kubernetes API and boom, an ACK service controller for RDS takes over the management of the lifecycle of that particular resource. And that's pretty much what ACK is, right? It kind of boils down to: let's allow Kubernetes users to stay in the Kubernetes API, use the familiar Kubernetes manifest and configuration language, but have custom service controllers for Kubernetes manage those resources in the AWS APIs. So hopefully ACK solves Alice's problems. Let's take a look under the covers and see if it can help solve some of yours too. So like I mentioned, it's a Kubernetes experience for AWS services; it's kind of providing a bridge, right?
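(The manifest Alice applies in that flow might look roughly like the following. This is a hedged sketch: the apiVersion and field names approximate the ACK-generated RDS DBInstance CRD and should be checked against the actual generated schema.)

```yaml
# Hedged sketch of an ACK DBInstance custom resource; field names
# approximate the generated RDS CRD rather than quoting it exactly.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: alice-db
spec:
  dbInstanceIdentifier: alice-db
  dbInstanceClass: db.t3.micro
  engine: postgres
  allocatedStorage: 20
```

Alice would kubectl apply this and the RDS service controller would reconcile it against the RDS API on her behalf.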
This sort of integration bridge between the AWS services and Kubernetes. And I say AWS managed services here, but it's really any AWS service, regardless of whether it's a managed service like RDS or something like that, right? So there are custom controllers within the ACK project, one for each AWS service. So there's an S3 service controller for ACK, an SNS service controller for ACK, et cetera. And like all custom controllers in the Kubernetes universe, Kubernetes stores the desired resource state, right? So when Alice writes a Kubernetes manifest for an RDS DB instance through to the Kubernetes API, she does so using kubectl apply. The Kubernetes API server stores what Alice requested as the desired resource state for her DB instance. And then the ACK service controller, which is the Kubernetes custom controller for that particular service, handles the lifecycle of that managed service resource. So in the case of the RDS ACK service controller, it will call CreateDBInstance in the RDS API and manage the lifecycle of the DB instance for the Kubernetes user. One important thing that I like to bring up early on is that there is no use of CloudFormation in ACK. The reason I bring this up is that ACK, the AWS Controllers for Kubernetes project, is a sort of redesign or rethink of a project called the AWS Service Operator, or ASO, which an ex-colleague of mine, Chris Hein, created back in 2018. And ASO, the AWS Service Operator, was a fairly thin shim over CloudFormation. So when you, for instance, created an S3 bucket via the AWS Service Operator, what actually happened behind the scenes was that a CloudFormation stack was created, and within that CloudFormation stack, an S3 bucket was created. And when we were thinking about how to redesign the AWS Service Operator and bring it onto some of the more modern Kubernetes client libraries and controller-runtime and that kind of thing, we were thinking, well, is that user experience kind of surprising?
I mean, if someone creates an S3 bucket via a Kubernetes manifest, and the service controller actually creates a CloudFormation stack behind the scenes that creates that S3 bucket, and then someone logs into the AWS console or looks at CloudWatch or something and sees that a CloudFormation stack was created, we thought that was a surprising user experience. And so we decided not to use CloudFormation within the design of ACK. And that's why I put it here, just to warn people: it's not just a thin layer on top of CloudFormation. As I mentioned, each AWS service has its own separate ACK service controller. Way back in the early design of ACK, we thought about making a single binary, which frankly is the way that the AWS Service Operator was structured, right? A single binary that could communicate with lots of different AWS services and manage the lifecycle of the resources in all of those APIs. After discussing with a number of our more security-conscious folks, we decided that it was a better idea to have separate service controller binaries, each managing the resources in one particular AWS service. And the reason for that was so that we could promote and encourage a best practice of having a very finely scoped set of IAM role policies that only allow the IAM role that the service controller is executing under to manage the resources in one particular API. If we had a single binary, the IAM role, and the policy associated with the IAM role running that single binary, would essentially need to have this sort of superuser, God-level scope. And that's something that we didn't really want to promote. And that's the reason why we chose to create separate service controller binaries, one for each service: so we could finely scope that IAM role.
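(As a sketch of what that fine-grained scoping buys you, an IAM policy for just the RDS controller might look like the following. It's rendered in YAML here only for readability, the real IAM policy document is JSON, and the action list is illustrative rather than an official recommended policy.)

```yaml
# Hypothetical, finely scoped policy for an RDS-only service controller.
# The action list is illustrative; a single-binary design would instead
# need one role with permissions across every supported service API.
Version: "2012-10-17"
Statement:
  - Effect: Allow
    Action:
      - rds:CreateDBInstance
      - rds:DescribeDBInstances
      - rds:ModifyDBInstance
      - rds:DeleteDBInstance
    Resource: "*"
```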
What we would like to do, and this is a little bit aspirational, as I'll explain here in a second when I talk about our release process, is have you install ACK service controllers using Helm or static manifests that we will distribute as artifacts for each of the releases. We're also actually putting together helper scripts. Since we do have lots of these separate ACK service controllers, one for each AWS service, and we do have lots and lots of AWS services, I mean, I think there are like 170 AWS service APIs at this point or more, we knew that it's not a great user experience to ask people to manually install, either with helm install or manually with kubectl or Kustomize or something, over a hundred different service controllers. And so we're writing some helper scripts that essentially automate this process of installing service controllers for a list of services, so that you don't have to repeat the installation process. Another important aspect of the ACK design is that everything, including the controller implementation itself, is generated. Many of you might be familiar with a project called Kubebuilder, right? Kubebuilder is frankly an awesome project: it generates the scaffolding for custom Kubernetes controllers and the API types. And it uses a set of libraries called controller-tools, which has this controller-gen binary in it, that can generate the different CRDs and deep-copy files and roles and all the sort of foundational stuff that you need in a Kubernetes custom controller. What Kubebuilder does not do, however, is generate the controller implementation for you. So basically what it does is output a stub of a controller, and then it's up to you to go ahead and write the Go code implementing that particular controller. And that's all fine and dandy, only we realized that with 170-plus AWS services, hand-building an implementation of each service controller was just not feasible.
And so we set about creating a code generator that generates the full service controller implementation. And that includes the linkage with aws-sdk-go, which is the library that we use to communicate with the backend AWS services. We have a small ACK runtime that provides this linkage between a reconciling controller and the various aws-sdk-go calls that we make. But at the end of the day, each service controller is fully code generated. And that's kind of what makes ACK different from some other things, right? Two other important things to point out. First, we consult with the AWS service teams in question to make sure that what we are generating for their service controller is actually calling their API in a semantically and behaviorally correct way. So for instance, we're working hand in hand with the ElastiCache team and the Step Functions and Lambda teams to make sure that the ACK service controllers for ElastiCache and Step Functions and Lambda and SageMaker and these other services actually behave the way those teams expect them to behave, making calls in the way that they should be made. And then finally, there is absolutely nothing that is specific to EKS. ACK service controllers can be installed in any target Kubernetes cluster whatsoever, regardless of whether you choose to use the managed control plane flavor of EKS. Let's talk a little bit more about the code generation. I mentioned that we generate the entire controller implementation, and that is true. We actually have a multi-phase approach to code generation. And we use as the source of truth the aws-sdk-go API models that are included in the aws-sdk-go source repository. These models are JSON files that describe each of the API operations and these things called shapes, which essentially describe the payloads and resources of that top-level API. Anyway, we consume these model files.
And when we generate the Kubernetes API files, right, the Go files that represent the custom resource definitions, we generate one for each of the top-level resources that we identify in the API. And then once we do that, we move on to a second phase of generating the deep-copy code, the object code for the Kubernetes API machinery. And we generate some CRD configuration files, the YAML manifests that describe each particular custom resource definition. And then we generate the entire controller for the service. We generate the RBAC configuration stuff as well. So it's this sort of multi-phase waterfall of code generation that happens for each of the services. Let's talk a little bit about access control and authorization, authZ. I put a link here, which you can follow if you go and download the slides. It has a diagram on that page. And that diagram is primarily there to focus your attention on the fact that there are two different RBAC systems, role-based access control systems, in place with ACK at any given time, and that they don't overlap with each other. It's very important to understand how these different RBAC systems are used, right? So Alice, the Kubernetes user, calls kubectl apply and passes in, like, an rds-dbinstance.yaml file, right? Alice is a Kubernetes user who is associated with a Kubernetes role. And that Kubernetes role has a role binding, which allows Alice to read or write custom resources of a particular kind. In Alice's case, it would be rds.services.k8s.aws/DBInstance; that would be the kind that she has permission to create. That is the Kubernetes role-based access control system in play, right? Once the Kubernetes API receives a request from Alice and determines the role that she is operating under, it then performs its authorization and access control to determine whether or not Alice, the Kubernetes user, has the ability to write a custom resource of that kind to the server.
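(The Kubernetes side of that might look like the following Role and RoleBinding. The names and namespace are illustrative, and the API group mirrors the ACK RDS CRDs.)

```yaml
# Illustrative Kubernetes RBAC granting Alice access to DBInstance
# custom resources; all names here are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dbinstance-editor
  namespace: default
rules:
  - apiGroups: ["rds.services.k8s.aws"]
    resources: ["dbinstances"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-dbinstance-editor
  namespace: default
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dbinstance-editor
  apiGroup: rbac.authorization.k8s.io
```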
However, once that's done and the Kubernetes API server writes the custom resource representing the RDS database instance to etcd behind the scenes, it returns success to Alice. That is the end of the Kubernetes RBAC scope. At that point, the ACK service controller for RDS has picked up in its reconciliation loop that there is a new custom resource of kind DBInstance. And at that point, it's going to need to call the AWS RDS API, right, to manage the lifecycle of DB instances in a particular AWS account. And that access control system, the IAM role-based system, is in place for the IAM role associated with the service account that the ACK service controller runs as. And there is no overlap whatsoever between the Kubernetes RBAC that Alice is controlled by and the IAM role that the ACK service controller is using to determine whether it has the rights to manage the lifecycle in the RDS API. It's very important to understand the scope of where those two different RBAC systems come into play. For those of you who are not familiar, we have something called IRSA, or IAM Roles for Service Accounts. It is our recommended way of providing fine-grained IAM permissions to a specific pod. And this is in contrast to the default setup, where the permissions of the IAM role associated with the worker node that the kubelet is running on are used by default for the pod. So with IRSA, you are able to associate an IAM role with the service account that a specific pod runs as. And that IAM role is used by the ACK service controller to determine the IAM policy that it has in order to make the calls to the AWS RDS API. All right, so one last thing around authorization and access control. Something I'm super excited about.
So one of the contributors to the ACK project, Amine Hilaly, has been working on this project called cross-account resource management, or CARM. When we realized that, okay, we're going to have lots of these different ACK service controllers, we didn't want a user experience where, in order to control resources across multiple AWS accounts, the user would have to install an ACK service controller in lots of different Kubernetes clusters, each associated with a separate AWS account. We just did not want that user experience of having to install hundreds of these service controllers. So instead, what the cross-account resource management project allows is for a cluster admin to associate an AWS account ID with a Kubernetes namespace via an annotation. And when a user creates a custom resource in that Kubernetes namespace, the ACK service controller can say, oh, hey, look, there is an annotation, I think it's services.k8s.aws/owner-account-id, and it sees that that annotation exists for the namespace. And as soon as it sees that, it says, okay, I'm going to need to call STS AssumeRole to pivot the AWS client that's inside the ACK service controller, so that it can make API calls against the AWS API as a separate AWS account. And in this way, a single ACK service controller can manage the lifecycle of resources across lots of different AWS accounts. So if you're in an organization that manages lots of different AWS accounts, this frankly should be top of your mind as far as the features that are coming soon to ACK, because it'll make your life a whole lot easier. Another thing: what about secret stuff? Any of you who are familiar with the RDS CreateDBInstance API call know that it has a little bit of an issue. You send the master user password in plain text in the CreateDBInstance API call. Clearly that's not a Kubernetes best practice.
And obviously the Kubernetes best practice is to store secret stuff in Secrets and then reference that Secret where you need it in your custom resources. So what the secret reference project does is implement basically that. It replaces the master user password's data type in the custom resource definition from a string to a secret reference; actually, it's a key reference within a Secret. And this allows a cluster admin to set up a Secret called db-secrets with a key within that Secret called master-user-password, and they can control the access and RBAC and all that kind of stuff on the Secret itself. And then all Alice needs to do is reference that by name. She doesn't need to do anything beyond that. Some other things that I'm excited about are coming soon, and when I say soon, I mean within the next few months. Okay, so: standardized AWS tag representation for all ACK resources. And then the second bullet point, tags that all custom resources within a namespace should have, is kind of related. So the first one refers to the fact that, across the universe of AWS service APIs, the way that tags are represented, meaning the data type that a tag takes, is very inconsistent. Some of the APIs allow tagging a resource on the create call, basically setting a set of tags up front; some don't allow that. Some of the service APIs represent tags as a map of string to string. Other APIs represent them as a list of structs with a key and a value. And then there are other representations as well. This first bullet point is about having ACK standardize that representation, so that for any custom resource that ACK manages, you specify the tags in spec.tags and it is a map of string to string, that's it. No inconsistent representation of the tag data structure. The second bullet point is allowing a specific set of AWS tags that all custom resources within a namespace should always have.
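(Circling back to the secret reference for a moment, here is a hedged sketch of that pattern. The field names and the shape of the reference approximate the ACK DBInstance spec; check the generated CRD for the exact schema.)

```yaml
# The cluster admin creates and controls the Secret...
apiVersion: v1
kind: Secret
metadata:
  name: db-secrets
type: Opaque
stringData:
  master-user-password: change-me   # placeholder value
---
# ...and Alice references it by name instead of embedding plain text.
# The reference shape (namespace/name/key) approximates the ACK spec.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: alice-db
spec:
  masterUsername: alice
  masterUserPassword:
    namespace: default
    name: db-secrets
    key: master-user-password
```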
So if the cluster admin wants to make sure that any RDS instance created within namespace foo should have an AWS tag of bar, then they would annotate the namespace with the set of tags that should always be placed on DB instance custom resources. Finally, common rate limiting and throttling support. So I was actually talking with Jason DeTiberus and the Cluster API folks about how we can have a common rate limiting and throttling support library for AWS APIs in ACK that can then be referenced from Cluster API and projects like Crossplane, so that we don't constantly repeat ourselves and all work on variations of the same theme. So this common rate limiting and throttling support for AWS API calls is something that I'm really excited to get done in the next few months. And then finally, there is this idea that, look, you've created an S3 bucket or an RDS database instance or an SNS topic or an SQS queue or whatever in the AWS console, completely outside of ACK's knowledge. And you want to essentially have ACK start managing that resource. Well, in this resource adoption GitHub issue and project, we are allowing that. So you will annotate the custom resource with an ARN, an Amazon Resource Name, and that is an indication to the ACK service controller that it should expect that the resource with this particular ARN already exists, and it should just place that resource under its own management, as opposed to attempting to recreate a resource with that name. Okay, all right, this final set of things. I just want to discuss how we're handling the release cycle, or the release cadence, for ACK. As I've mentioned a few times now, there are well over 150 AWS service APIs. We want to get to all of them, right? We want to support all of them in ACK, but it's just not feasible to do that all in one go.
So the way that we are thinking about it is we have these phases where a group of services will get their controllers generated and included in the ACK source repository, and get binary Docker images created and Helm charts created and pushed up to a Docker registry and Helm repository. These phases of services are documented on the AWS Controllers for Kubernetes GitHub page. We have a project board that shows the release map for these phases of controllers. We're going initially into what we're calling developer preview, and that essentially just means the Helm chart is not yet available for easy installation, and the way that you work with these service controllers is frankly not particularly user friendly. It's very developer-y: you use test credentials, and, long story short, it's not particularly user friendly in developer preview. When we get a set of bugs reported for a phase of service controllers, and we get those bugs fixed and we're happy with the stability of the service controllers, then we'll move them into a beta phase. And then we're aiming to get these phases of controllers into GA within three months of placing them into developer preview. The services that we initially placed into developer preview are listed here: S3, SNS, SQS, ECR, DynamoDB and API Gateway V2. Of those, unfortunately, SQS had a bit of an issue and is not yet in the ACK source repository. DynamoDB should be by the end of the week, as should API Gateway V2; we're just waiting on a couple of end-to-end tests. The next phase of ACK service controllers is RDS and ElastiCache. We've also got some parts of CloudFront, some parts of EC2, and EKS, and those should be coming out in the next couple of weeks. And after that, we're looking at the Kafka service, we're looking at Lambda, Step Functions, and more.
So on the project board that you see here linked, you can go and see the release roadmap of what we have planned, what is currently targeted for developer preview and currently work in progress, and then beta and GA after that. And I'll just wrap up by saying everything about ACK is open source, and we are absolutely jazzed to get feedback from everybody, and contributions if you feel like it. And these two links should get you started going in the right direction. So with that, I will wrap up the presentation, and I'm looking forward to answering some questions that folks have. Thank you very much for a wonderful presentation. We have a few questions here. How different is this from EKS? All right, so Najib, I hope I'm pronouncing your name right. ACK is entirely different from EKS. EKS is an AWS service that installs a managed control plane for Kubernetes, and recently a more managed data plane with managed node groups. ACK, on the other hand, is a set of Kubernetes-native applications, Kubernetes custom controllers, that allow a Kubernetes-native way of managing resources that live outside the cluster, in the AWS APIs. Okay, can you elaborate on the level of effort needed to run ACK on any Kubernetes distribution? That's a very good question, Najib. Right now, in developer preview, unless you are pretty comfortable using things like Kustomize and manually deploying pods and deployments using kubectl, I would maybe wait a little while, maybe a month or two, until we get the initial phase of controllers into beta, at which point we will have Helm charts for those controllers, which will frankly make the installation and management of ACK service controllers much easier. How are cross-resource references implemented? So, I think it depends on the resource.
So if it's within a particular API, for instance within RDS, and you look at an API call that references another resource object within that same API, we may be replacing the custom resource definition field from, let's say, an ARN to instead be an object reference that refers to a different custom resource within the RDS set of custom resource definitions. Now, if the cross-resource reference is across APIs, for instance API Gateway to an EC2 VPC, or ElastiCache to EC2 security groups, things like that, we will likely continue to refer to those things via ARN and not have an object reference type. I hope that answers your question. Please let me know if it didn't. I think that's what you were asking. Okay, so Ryan's asking, what kinds of tags does ACK apply to a created AWS resource? And is there a way to guard against accidental kubectl delete, even if it is just an "I really don't want to delete this" flag? Very nice. We haven't decided this yet. There is an issue; if you go to the GitHub site that's on your screen now and go to the issues list, there are two issues. You should search for something called, gosh, I'm trying to remember, I think it's either delete operations or destructive operations or destructive behavior. There is an issue that talks about basically how we prevent deletion of important resources. I think what will end up happening is that we will have some annotations on a Kubernetes namespace that will allow the ACK service controller to be configured in a certain way for CRs in that particular Kubernetes namespace, to essentially allow some sort of deletion propagation or deletion policy or protection, that kind of thing. It's likely going to be fairly dependent on the AWS API behind it. And it's likely going to be up to a cluster admin to configure a specific CRD, or a specific custom resource type or kind, to behave in certain ways.
Because we've frankly run the gamut as far as feedback that we've gotten from people. Some folks want ACK to just take over management of the resource and do the necessary. And others are a lot more skeptical about it and would like to have deletion protection on objects. And on the tags question, there are two related issues. There's an issue around the standardization of AWS tags and the representation of those tags for custom resources that ACK manages. And there's also an issue about which AWS tags should be auto-created on any custom resource that the ACK service controller is managing. So there are two issues there, Ryan; I definitely encourage you to check them out, comment, and plus-one each of those issues. All right, let's see. Najib is asking, will I still be charged for invoking APIs through ACK, like when paying for invoking native AWS APIs? Yes, absolutely. So look, ACK doesn't remove the charges for resources that it creates. The charges are exactly the same. Regardless of whether ACK is the thing that ends up calling CreateDBInstance on the RDS API, the charges that you will accumulate are exactly the same, right? Very similar to CloudFormation in that respect. Anonymous is asking, there was an announcement for something similar in 2018, was it admission controllers? Not entirely sure about that, sorry, anonymous. You may be thinking about the AWS Service Operator, which is sort of the thing that originated the idea for AWS Controllers for Kubernetes. Yeah, you got it: the Service Operator for Kubernetes, right? This is the next generation of that, the reincarnation of that. Okay, if anyone would like to ask any more questions, please go right ahead and do so. We only have about 15 minutes left. Oh, will ACK provide deeper visibility into the AWS services?
I don't think it will provide deeper visibility. I think that it will provide a different type of visibility, Najeeb. For those AWS customers or users who prefer the Kubernetes environment, the Kubernetes API and tooling and the kubectl experience, the way that they have visibility into AWS resources will be different, right? They'll be able to call kubectl get dbinstances and see a list of their RDS database instances, as opposed to calling the AWS CLI tool or logging into the AWS web console. If you're referring to things like CloudTrail or CloudWatch Logs or that kind of thing, there's nothing about ACK that's going to change the auditability or traceability of a particular AWS service. What I would like to do, and this is one of the things I was actually talking to a new contributor about this morning, is get our Prometheus metrics story started. One of the things that we would like to do is have Prometheus metrics that are dimensioned by the AWS API call that ACK service controllers are making, so that you can see specifically how many calls, and of what kind, the AWS client is making to a specific AWS API. So you'll be able to say, okay, how many times is, I don't know, CodeDeploy GetDeployments being called per hour, or something like that, right? We want to provide those types of metrics via a standardized set of Prometheus metrics that are dimensioned by what is called the operation identifier within the AWS API. Okay. So, is there any way of enabling cross-account resource management? Yes, there will be. Oh, hi, Harris. So yes, there will be. We're probably a couple of weeks out from cross-account resource management being fully enabled. I merged the largest part of the code, which incorporates some caching mechanisms for namespaces and config maps, earlier last week. We still need a little bit of work there.
You will be able to, quote, enable cross-account resource management by setting an annotation on a namespace. So look for that in the next two to three weeks. All right, let's see. Najib, I'm sorry, I'm not entirely following what you mean by native visibility of Kubernetes; perhaps you can elaborate a bit there. Okay, Fahad is asking, how do I import existing resources in AWS into a Kubernetes manifest under ACK management? For example, if I don't want to delete my existing RDS instance or S3 bucket. Yeah, so this is the adopt-a-resource functionality that I referred to a couple of slides ago. The way you will signal to the ACK service controller that you want it to start managing the lifecycle of a particular resource is that you will create the Kubernetes manifest, and within the annotations for that custom resource you will have the owner account, I'm sorry, the ARN, under the services.k8s.aws/arn annotation key. That will indicate that the ACK service controller should expect the resource to already exist and not try to create it again. Hi, Harish, how are you? Harish is a member of the ACK team. So, what do you think about leveraging ACK to do the heavy lifting for the AWS cloud provider behind the scenes, for managing and provisioning AWS resources, instead of the current implementation of the AWS cloud provider? I've actually thought about that, Harish, and I've had some conversations with some of the Cluster API folks, and with the Crossplane folks from Upbound, about how we could adapt the ack-generate command-line tool, which is the primary code generator inside of ACK, so that instead of spitting out Kubernetes API types and a custom controller implementation for ACK service controllers, it instead spits out basically all of the generated code for the AWS cloud provider, or in the case of Crossplane, I think it's the provider-aws package, right?
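To make the adoption flow concrete, a manifest for an adopted resource might look something like the sketch below. The API group/version, kind, spec fields, and account number here are illustrative assumptions; only the services.k8s.aws/arn annotation key comes from the talk, and the final ACK schema may differ:

```yaml
# Hypothetical manifest adopting an existing RDS instance rather than creating it.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: my-existing-db
  annotations:
    # The ARN annotation tells the ACK service controller that this
    # resource already exists in AWS and should be adopted, not re-created.
    services.k8s.aws/arn: arn:aws:rds:us-west-2:111122223333:db:my-existing-db
spec:
  dbInstanceIdentifier: my-existing-db
  engine: postgres
```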
So I've actually got some prototype code going locally where I've been playing around with this idea of making the ack-generate CLI tool a lot more extensible, so that it can spit out Go code that fulfills non-ACK-core use cases. So yeah, I think in the future it definitely will be possible to at least have ACK service controllers provide a sort of lower layer of functionality that could then be built upon in things like Cluster API and Crossplane. Okay, so how is ACK different from Crossplane? I'll just knock this one out real quick. They're actually very complementary technologies. ACK's entire mission is to provide a Kubernetes-native API for managing AWS resources. That's it; it's not trying to do anything more than that. Crossplane has a much broader mission, right? Crossplane has a mission to support cross-cloud use cases, meaning, you know, GKE and EKS and AKS and all the different cloud providers, and to have some sort of standardization for Kubernetes cluster creation, as well as some of the managed service creation, for each of those different cloud providers. So it's got a much broader mission. I think, well, I hope, that at least the code generator inside of ACK can in the future be a library, or a sort of input, to the Crossplane AWS provider. And let's see, Najib: will there be a performance penalty for using ACK, because there are two hops now, one to ACK and then one to the native API? No, there is no performance penalty; there actually aren't two hops. The Kubernetes user is communicating with the Kubernetes API, right? And the ACK service controller is communicating with the AWS API. So it's not like the Kubernetes user is communicating with the AWS API; they're only talking to the Kubernetes API, and then the ACK service controller is the thing that's communicating with the AWS API. Okay, fine. Oh, I think I already answered that.
So when I said Prometheus earlier, I was really just referring to the format, right, of the expected metrics endpoint. Does anyone have anything else they would like to ask? We have about five minutes left. I'd also like to point out something that I didn't include here, unfortunately: I'm on the provider-aws channel in the Kubernetes Slack community, so please feel free to hit me up with any questions that you might think of after this webinar. I'm on there and happy to answer questions. How is HA handled for ACK controllers, if a controller crashes in the middle of an RDS instance creation API call? Good question, Fahad. So the way that we've built the service controllers, they should not depend on leader election within Kubernetes. I still need to work on some test cases to ensure that multiple ACK service controllers, multiple pods running the same ACK service controller, can have concurrently executing reconciliation loops and not trample on each other. But there's nothing we're doing inside the ACK service controller like setting a latest-observed-version or latest-observed-state field; we're not setting that from the ACK service controllers. And the reason we're not doing that is because by having that latest-observed-version field within the status of a custom resource, you essentially force the architecture of the controller to be a single writer. And we did not want that, right? We wanted to be able to have multiple concurrent service controllers for the same service executing in multiple pods without having them trample over each other. And one of the ways to do that is to ensure that you're not writing bits of information into the status struct, the status field of a custom resource, that represent the view of only a single writer. And that's what latest observed version actually is: it's not the latest observed version for the resource.
It's the latest observed version for that particular controller that is observing the resource. By getting rid of that, we hope to have a more concurrent approach. Hope that answers your question. Will ACK provide Kubernetes Secret integration? Yes, absolutely it will. There is a slide, I'll kind of go up here. Oops, whoops, I stopped screen sharing by accident. Yeah, there is a set of slides that explains that. There are some fields within AWS back-end API calls, for instance CreateDBInstance, where you're passing in a plain-text string. We will be replacing those types of fields with secret reference fields, fields with a secret-reference data type. That means you'll be able to set up a Kubernetes Secret ahead of time and then reference a key within that Secret from your custom resource. Any planned integration with Parameter Store and Secrets Manager, as an alternative to secrets management in Kubernetes? Not within ACK, but I actually had a meeting with the AWS config team recently about a similar topic. Find me on the provider-aws channel on Slack and we can chat about it there. Anyone else have any last-minute questions at all? We have about another minute or so before we wrap up. Fahad was asking earlier in the chat, is there support for Lambda in ACK, and are there any plans for serverless services support? Not currently; I'm aiming for mid-to-end November for both Lambda and Step Functions. Luckily, both of those APIs are actually fairly reasonable and sensible and concrete, with very few exceptions and very few inconsistencies. So yeah, we're aiming for mid-to-late November for both Step Functions and Lambda. Once again, thank you very much, Jerry, and to the CNCF for inviting me out here. It's a pleasure. It's our pleasure; thank you so much for joining us today. That should just about wrap up our webinar for today. As I said before, today's recording and slides will be posted on the CNCF webinar page.
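As an illustration of that secret-reference shape, a field like the master password might look something like the fragment below in a custom resource. The exact field names and the namespace/name/key reference structure are assumptions based on how Kubernetes secret references commonly look, not the finalized ACK schema:

```yaml
# Hypothetical DBInstance spec fragment: the master password is read
# from a pre-created Kubernetes Secret instead of an inline plain-text string.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: my-db
spec:
  engine: postgres
  masterUsername: admin
  masterUserPassword:        # secret-reference type replacing the plain string
    namespace: default
    name: my-db-credentials  # Secret created ahead of time
    key: password
```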
We'd like to thank everybody once again for joining us today and to you as well, Jay. Everyone take care, stay safe and we will see you next time.