So originally I said I'd be talking about cloud native at AWS, and we've got a few new things we announced last week that we actually want to talk about. So a little bit about the cloud-native principles, then we'll talk about open source at AWS, and then dive into a few of the things we've been doing: CNI, what this new Fargate thing is that we use for container provisioning, what we're doing to help Kubernetes run on AWS, and then we'll end up with EKS, Kubernetes as a service.

Okay, so cloud-native principles. I've got a whole long presentation on this, and if you want the hour-long version of it, there's a video up from re:Invent last week; you can go Google for ARC219 if you're trying to find it. But these are the principles that I think make something cloud-native. It's pay-as-you-go, and you're paying after the event; you're not having to invest in a data center and guess how much capacity you're going to need next year. It's self-service, no waiting; everything is API-provisioned and deployed. It's globally distributed by default: the cloud is out there, and you can go use things around the world without being constrained to wherever you happen to have built a data center. The availability models take into account zones and regions. The other really important thing is to get very high utilization; you can save a lot of money by turning things off extremely aggressively. And then immutable code deployments. Some of this we came up with back in the day at Netflix, where we were using Amazon Machine Images as containers: we baked those images and then we weren't changing them, we were replacing them. That's the deployment model Docker picked up on when it containerized things. So underneath there's a Linux container, but the idea of a container is basically an immutable deployment entity that you bake as you go through. All of these things tie in, but I haven't got the time to go through all of it, so I'm going to talk a bit about what else we're doing here. Here we go.

So the open source team at AWS really has three things we're working on. We're trying to grow communities, and doing things like joining CNCF is part of that. We're improving the code. And we're increasing the contributions that AWS and Amazon make to open source projects, as well as taking input from the community. As for what AWS is doing with CNCF, you can look this up in the blog post I wrote when we joined last August. Jointly, we're promoting cloud native to enterprise customers; that's the goal of the whole CNCF. What we're doing specifically, though, is integrating CNCF components into AWS products, including ECS, and I'll talk more about that in a minute. We're integrating Kubernetes with AWS, working on a few things there. And we're working with the CNCF serverless working group. One other thing we've done is provide some credits to the CNCF to support Kubernetes scalability testing, and we're in the process of getting that up to speed.

Okay, so here are some code contributions. We're a founding member of containerd, the Docker runtime. It's now at 1.0, which I think is a great move, and the goal is that we end up using it as our container runtime across all of the cloud services at AWS. We're doing some work on Kubernetes: install, security, networking, and overall integration with AWS. And we've put quite a bit into CNI.
We've extended CNI and integrated it with the ENI, the native network interface on AWS, and we've already upstreamed that into CNI to support ECS and EKS. So I'm going to explain a little about how this works. The key thing here is that on AWS we have a native network adapter that is incredibly highly tuned. It has hardware support. We have 25-gigabit networks, and it's pretty hard to drive a 25-gigabit network flat out, so we have all kinds of support to make that work. Then we have VPCs and we have security, and all of these things are baked into the ENI. What we've done is build a CNI plugin that ties all of that together. That means your containers get native networking with the full capacity, efficiency, and features that you'd get at the instance level, now at the container level. We've open sourced it and put all of this support up on GitHub, so whatever container scheduler you happen to be building, you just use CNI and you get all of that support automatically.

So here's what this looks like within a VPC: the CNI plugin is there allocating on behalf of the containers. There's the network interface, which your instance has already got because you fired up an instance. For each pod, the plugin talks to the VPC and says, give me some extra addresses on this network interface, as secondary IPs within that VPC. Those secondary IPs are then allocated directly to the pods. So what you have now is completely native networking at the pod level, going all the way out within the VPC construct; there's a rough sketch of the underlying call below. We've subsequently worked with Pinterest, Weave, and Tigera to help upstream this into Kubernetes. We've got the changes in there, but we haven't finished making and upstreaming all of them yet; we're in the process of working that through. But this is already in production at AWS in ECS.

If we look at ECS, the native container service at AWS, it runs at very large scale: we're currently managing over 100,000 clusters with ECS, not containers, over 100,000 clusters, and some of those clusters are tens of thousands of containers. It's a multi-tenant service, so it's a different beast than Kubernetes, and it serves a different need for us. But we've already got CNI baked into that, so the CNI work we've upstreamed is already in production at really high volume.
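To make that mechanism concrete, here's a minimal sketch in Python with boto3 of the kind of EC2 call a plugin like this makes under the hood. The interface ID and counts are made up for illustration, and the real plugin manages warm pools of ENIs and addresses rather than making a call per pod; treat this as the shape of the idea.

```python
# Minimal sketch (not the real plugin): ask the VPC for extra secondary
# private IPs on an instance's ENI, so each one can be handed to a pod.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

ENI_ID = "eni-0123456789abcdef0"  # hypothetical interface on this instance

# Request two more secondary IPs on this network interface from the VPC.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=ENI_ID,
    SecondaryPrivateIpAddressCount=2,
)

# Read the addresses back; each non-primary IP is VPC-routable and can be
# wired into a pod's network namespace.
desc = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])
for addr in desc["NetworkInterfaces"][0]["PrivateIpAddresses"]:
    if not addr["Primary"]:
        print("pod-usable IP:", addr["PrivateIpAddress"])
```

The plugin then attaches each secondary IP to a pod's network namespace, which is how pods end up as first-class citizens on the VPC network rather than hiding behind an overlay.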
But I wanted to talk about a new feature that we announced last week called Fargate. It's now integrated into ECS, and we're looking at how we bring it to other container schedulers. So what is Fargate? Fargate is really the result of us looking at all the different things you want to deploy in the cloud. We started off with machine instances, which are virtual machines, and AWS natively knows how to manage those. Then we came out with Lambda, so now we have functions which are natively managed by AWS. But containers were a thing that ran inside an instance. What Fargate does is make the container a native thing that AWS knows how to manage: you say, give me a container, here's the code, go run it for me. And there's actually a fourth one that we also announced last week, which is bare metal. We have bare metal instances on AWS now, a couple of instance types, and you can bring your own hypervisor, or if you wanted to, you could deploy everything on bare metal on AWS.

So we now have four first-class deployment entities: metal, virtual machines, containers, and functions. You can deploy to any of those, and they all support different aspects of what you might want to do. But the key thing here is we want it to be easy to deploy containers. Here's a typical ECS deployment: we have a bunch of containers running on instances; those instances run an agent, they run Docker, and they're built into an AMI. On top of that we have scheduling and orchestration, a cluster manager, a placement engine. What Fargate does is take all of that away: it takes away the AMI, it takes away the need to run an agent, it takes away the need to figure out what the instances are doing. You just say, I have my scheduling and orchestration, here are my containers, let's move on.

The interesting thing within ECS is that it's the same task definition schema and the same API to launch them, and you can mix them. This isn't all-or-nothing. You don't say, I'm running everything on instances, or I'm running everything on Fargate; you blend the two. You can have some of your containers scheduled on one and some on the other, and as we look at merging this into Kubernetes, we want to keep all of that flexibility. So you should see it as an option. You might want to pack a whole bunch of containers onto instances that you're managing for a very specific purpose, but you may also say, I just have some containers and I don't actually want to deal with managing a set of instances underneath them. You want to blend the two together, as in the sketch below.
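Here's a minimal sketch of that blending from the API side, in Python with boto3. The cluster and task definition names are hypothetical, and the task definition is assumed to be registered as compatible with both launch types; the point is that the same definition launches either way, and only the launch type changes.

```python
# Minimal sketch: the same ECS task definition launched two ways, onto EC2
# instances you manage and onto Fargate, where AWS manages the capacity.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is illustrative

COMMON = {
    "cluster": "demo-cluster",      # hypothetical cluster
    "taskDefinition": "web-app:3",  # one task definition for both launches
}

# Some tasks packed onto instances you manage yourself...
ecs.run_task(launchType="EC2", count=2, **COMMON)

# ...and some on Fargate, with no instances to manage. Fargate tasks use
# awsvpc networking, so each task gets its own ENI in your subnets.
ecs.run_task(
    launchType="FARGATE",
    count=2,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # hypothetical subnet
        }
    },
    **COMMON,
)
```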
So let's talk about Kubernetes. One of the reasons we joined CNCF was that we were hearing from our customers that there was a lot going on here; a CNCF survey found 63% of Kubernetes workloads running on AWS. We listened to our customers and figured out what they need. One of the things they wanted was to keep Kubernetes a completely open source experience. What that means for us is that we're not forking Kubernetes; everything we do is upstreamed, and if we can't persuade the community to take something, we'll back it out and try a different approach. There have been a couple of cases already where we proposed something, the community said no, that's not quite right, and we backed out and worked around it differently. I think this is something where you want to be able to run exactly the same bits on every cloud and in your data center, wherever you want them to be. It doesn't help anyone if we have a custom version on AWS, so we really want this to be a pure experience. The other thing, though, is native AWS integrations. Where you want to connect to an AWS feature, we want to make that easier; we want to do the heavy lifting for you and make it efficient and easy to do. So here are a couple of those.

First, IAM authentication. We have Identity and Access Management, and we need to be able to authenticate into AWS. So we've been working with Heptio on their IAM authenticator, an open source project; we've been working together on it and productizing it. Here's the flow: kubectl calls the API and passes in an identity; that goes to the authenticator; the authenticated identity is then looked up in the RBAC rules within Kubernetes, and that basically controls what you can and can't do in your cluster. So that way we can set up an AWS identity, a role, which maps back in. There's a rough sketch of that token flow at the end of this section.

The other one is IAM roles in pods. Within a pod, you want that pod to have a role on AWS, because that's how we give instances permission to manage certain services. If you want to write to S3 or you want to deploy something, you have to have permission and you have to be in the right role. Those roles are stored out in IAM, and we have to get them into the pod. That uses kube2iam, which is again a community project, and it runs as a little DaemonSet. There's a security token service you can get credentials from, the pod can look them up in the metadata service, and you basically put a role annotation on the pod; that exchange is also sketched below. These are projects that were already out there, and we want to work with their contributors to make them really production-ready and integrate them in. But there are a few other ways people want to use roles in pods, and we're working on those too: getting HashiCorp Vault integrated, which is another great way of storing tokens and passwords, and also SPIFFE, a proposed project being considered by CNCF, which we think is a good way forward for how we're going to manage all of these identities. So we're gathering these together, working with the communities, and pulling in what we need to make all of this work in a production way and integrate properly with the AWS features and security models.
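Here's a minimal sketch of the authenticator's token flow in Python with boto3. The names and the mapping are my own illustrative assumptions; the real aws-iam-authenticator also binds the token to a specific cluster and runs as a webhook in the control plane, so this is the shape of the idea, not the implementation.

```python
# Sketch of the IAM-authenticator idea: the client presigns an STS
# GetCallerIdentity request and sends the URL as a bearer token; the server
# executes it to learn which IAM identity signed it, then maps that ARN onto
# Kubernetes RBAC groups. All names and mappings here are hypothetical.
import boto3
import requests

# Client side (roughly what kubectl's credential plugin does): no secrets are
# sent, just a short-lived signed URL proving who we are.
sts = boto3.client("sts")
token = sts.generate_presigned_url(
    "get_caller_identity", Params={}, ExpiresIn=60, HttpMethod="GET"
)

# Server side: call the URL; STS only answers with an identity if the
# signature is valid. (Response shape per STS's JSON serialization.)
resp = requests.get(token, headers={"Accept": "application/json"})
result = resp.json()["GetCallerIdentityResponse"]["GetCallerIdentityResult"]
arn = result["Arn"]

# Map the IAM ARN to an RBAC group; in reality this is configuration.
ARN_TO_GROUP = {"arn:aws:iam::123456789012:role/ClusterAdmin": "system:masters"}
print(arn, "->", ARN_TO_GROUP.get(arn, "system:unauthenticated"))
```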
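And here's a minimal sketch of the roles-in-pods credential exchange. A kube2iam-style daemon intercepts the pod's metadata-service calls and does roughly this STS call on its behalf, using the role named in the pod's annotation; the role ARN and session name here are illustrative.

```python
# Sketch of what a kube2iam-style daemon does on a pod's behalf: read the role
# from the pod's annotation, call STS AssumeRole, and serve the temporary
# credentials back on the metadata-service address.
import boto3

# What the daemon reads from the pod spec (kube2iam's annotation key).
pod_annotations = {
    "iam.amazonaws.com/role": "arn:aws:iam::123456789012:role/app-s3-writer"
}

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=pod_annotations["iam.amazonaws.com/role"],
    RoleSessionName="my-pod",  # hypothetical session name
    DurationSeconds=900,       # short-lived; the daemon refreshes them
)["Credentials"]

# These get served to the pod at the metadata address (169.254.169.254), so
# the AWS SDK inside the container picks them up as it would on an instance.
print(creds["AccessKeyId"], creds["Expiration"])
```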
But then we had one last request: okay, please run Kubernetes for me, make it easier to run Kubernetes. So last week we announced Amazon EKS, the Elastic Container Service for Kubernetes. We have a few tenets; what are we trying to do here? The first is that this is a platform for enterprises to run production-grade workloads. Our focus here is mostly on large-scale production deployments of Kubernetes, and we have a number of customers that are running their entire businesses on this already. The second tenet is a native and upstream Kubernetes experience, as I mentioned: everything we do to build this as a service gets contributed back into Kubernetes itself. Third, we want all of the integrations to be seamless and to eliminate extra work. And fourth, we're actively contributing to the Kubernetes project from the EKS team, and as we build out the team we're making more contributions. I'll put in one of the standard plugs here: we're obviously trying to find more people to work on this with us, and if you work with us, you're not going to be sitting off in a back room working on a private version of things. We're building teams that contribute actively to the project.

So if we look at what Kubernetes looks like today, you have masters, you have etcd, and you have all of your instances. The default way we expect this to run is across three zones, so you get high availability with replicated masters and etcd. What happens when you run EKS is that the control plane disappears and is replaced by an endpoint, but kubectl still works. The endpoint you would normally use to talk to your control plane behaves in exactly the same way; it's just invisibly being run on your behalf. Then you start up your containers in a zone, and they all connect into that central control plane.

Okay, so there's just one final thing before we get to the break, which is Fargate. Fargate just came out, and we're still figuring out exactly what APIs we need to expose and what the integration looks like. But we're concentrating right now on getting EKS out; we're not going to block EKS on rolling Fargate into it. We're just going to ship the thing we're currently working on, and in parallel we'll be looking at how we fold Fargate in, so that in a subsequent release of EKS you'll be able to blend together instances that you're managing to run containers, plus Fargate.

So that's it. Thanks for all the contributions, looking forward to working with everybody, and thanks very much.