All right. Great talk. Really excited to see what SUSE has done and what IBM's done, personally. I'm here today to talk very briefly about Cloud Foundry on Google, and specifically some best practices that we've seen customers use to design infrastructure on GCP in a security-first world. So I'm going to give you five facts in five minutes. My name is Evan Brown. I lead a team of dedicated engineers at Google that builds the CPI for GCE, the Service Broker, a nozzle for the Cloud Foundry Firehose, and a few other things. It's what we do 100% of the time. I've been involved in this project for about three years now.

The first best practice is to isolate network control with shared VPC. Is anyone using shared VPC on GCE today? Cool, no one is. But you will be soon. That's wonderful. The idea behind shared VPC is fairly simple: isolate control of networking components by separating them into different projects. Very simply, this means that in Project B, all of your network constructs, like subnetworks, firewall rules, and routes, exist in that context, and the VMs deployed for the foundation exist in a separate project. This allows network operators to control the network infrastructure, and the Cloud Foundry or platform operators to do their deployment in a separate project.

Fact the second: isolate services with private load balancers. GCE supports a Layer 4 TCP private load balancer that we recommend using in front of services like UAA, CredHub, the Cloud Controller, and GoRouter.

The third fact: manage databases with private IP only. Cloud SQL, which supports the MySQL and Postgres engines, lets you reserve IP addresses in your RFC 1918 space inside your subnetwork in a shared VPC. So you get private-only connectivity to a fully managed database with failover, automated backups, and customizable maintenance windows.

Fourth fact: access Google APIs without NAT. When you deploy a foundation and use the Service Broker to provision access to Cloud ML or Cloud Vision, your application deployed in a Diego cell needs to reach those APIs, and they're APIs on the public internet. Typically you would need NAT to do this. But there's a checkbox called Private Google Access: you enable it on a subnetwork inside your VPC, and you magically have access to Google APIs, all Google APIs including Maps, without needing public IP addresses on your Diego cells and without needing NAT. It's a nice, quick way to access those services with little or zero overhead.

But in the event that you do need traditional NAT for outbound access, this morning we announced Cloud NAT, in beta today. Cloud NAT is a little bit different from what you've seen from other providers: it's not a VM-based solution. It's done at the network level via the software-defined network that powers GCE. It works for GKE, it works for GCE and Cloud Foundry, and it's pretty flexible. Rough sketches of what each of these practices can look like in API terms follow below.
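(A minimal sketch of the first and fourth tips, shared VPC plus Private Google Access, using the google-api-python-client. Every project, network, and subnet name here is invented for illustration; this is not the speaker's actual setup.)

    # Sketch: make the network project a shared VPC host, attach the
    # foundation project, and create a subnet with Private Google Access.
    # All names are illustrative.
    from googleapiclient import discovery

    compute = discovery.build('compute', 'v1')  # application-default credentials

    # Project B from the talk: the project that owns the network constructs.
    compute.projects().enableXpnHost(project='cf-network-project').execute()

    # Attach the project where the foundation's VMs will live.
    compute.projects().enableXpnResource(
        project='cf-network-project',
        body={'xpnResource': {'id': 'cf-foundation-project',
                              'type': 'PROJECT'}}).execute()

    # A subnetwork in RFC 1918 space with the Private Google Access
    # "checkbox" enabled, so Diego cells reach Google APIs without
    # public IPs and without NAT.
    compute.subnetworks().insert(
        project='cf-network-project',
        region='us-central1',
        body={'name': 'cf-subnet',
              'network': 'projects/cf-network-project/global/networks/cf-net',
              'ipCidrRange': '10.0.0.0/20',
              'privateIpGoogleAccess': True}).execute()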
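(A sketch of the second tip: a Layer 4 internal TCP forwarding rule in front of GoRouter. It assumes a regional backend service already exists; all names are illustrative.)

    # Sketch: an internal (private) TCP load balancer for GoRouter.
    # Assumes a backend service 'gorouter-backend' already exists.
    from googleapiclient import discovery

    compute = discovery.build('compute', 'v1')
    region = 'projects/cf-network-project/regions/us-central1'
    compute.forwardingRules().insert(
        project='cf-network-project',
        region='us-central1',
        body={'name': 'gorouter-ilb',
              'loadBalancingScheme': 'INTERNAL',  # private, no external IP
              'IPProtocol': 'TCP',
              'ports': ['80', '443'],
              'network': 'projects/cf-network-project/global/networks/cf-net',
              'subnetwork': region + '/subnetworks/cf-subnet',
              'backendService': region + '/backendServices/gorouter-backend'}
    ).execute()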
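(A sketch of the third tip: a Cloud SQL Postgres instance with private IP only, via the sqladmin API. It assumes private services access is already configured on the shared VPC network; the instance name and tier are illustrative.)

    # Sketch: a fully managed Postgres instance reachable only over
    # private IP inside the shared VPC network. Names are illustrative.
    from googleapiclient import discovery

    sqladmin = discovery.build('sqladmin', 'v1beta4')
    sqladmin.instances().insert(
        project='cf-foundation-project',
        body={'name': 'cf-ccdb',
              'region': 'us-central1',
              'databaseVersion': 'POSTGRES_9_6',
              'settings': {
                  'tier': 'db-custom-2-7680',
                  'ipConfiguration': {
                      'ipv4Enabled': False,  # no public IP at all
                      'privateNetwork':
                          'projects/cf-network-project/global/networks/cf-net'},
                  'backupConfiguration': {'enabled': True}}}).execute()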
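(A sketch of the fifth tip: a Cloud Router with a Cloud NAT configuration attached, for the cases where you do need traditional outbound NAT. Cloud NAT was in beta when this talk was given; the resource names are illustrative.)

    # Sketch: network-level NAT via a Cloud Router, no NAT VMs involved.
    from googleapiclient import discovery

    compute = discovery.build('compute', 'v1')
    compute.routers().insert(
        project='cf-network-project',
        region='us-central1',
        body={'name': 'cf-router',
              'network': 'projects/cf-network-project/global/networks/cf-net',
              'nats': [{'name': 'cf-nat',
                        'natIpAllocateOption': 'AUTO_ONLY',
                        'sourceSubnetworkIpRangesToNat':
                            'ALL_SUBNETWORKS_ALL_IP_RANGES'}]}).execute()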
I've got one minute left, so here's the sixth tip. I kind of messed this slide up: it should be number six, I just added it right at the last second. The CPI for GCE supports the multi-CPI model in BOSH. This is maybe a bit more esoteric, but if you're especially security-focused, you can use the multi-CPI model to provision multiple GCP CPIs on a BOSH director and use each of those CPIs to deploy a different component to a different project. What this allows you to do is restrict the service account credentials for, say, UAA to one particular project, isolate the credentials for CredHub in a different project, and then get a third project for your other foundational services like the GoRouter and the Cloud Controller or Diego. We've got several customers that use this model today. Multi-CPI support is relatively recent, but it blends well with a tight security model. There's a rough sketch of a cpi-config for this pattern below.

So thanks, everyone. We've had a great time here. At our booth we have a lot of socks left, so if you're in need of socks, please come collect some, because I do not want to drag them back home to Seattle. Thanks, everyone.
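(A rough sketch of the multi-CPI pattern from the sixth tip: a BOSH cpi-config with one Google CPI per project, generated from Python for consistency with the sketches above. The CPI names, project IDs, and credential variable names are all invented; check the bosh-google-cpi-release docs for the exact CPI type and property names.)

    # Sketch: emit a BOSH cpi-config defining one CPI per GCP project,
    # so each component's service account key stays scoped to its own
    # project. All names are illustrative.
    import yaml

    cpi_config = {'cpis': []}
    for name, project in [('uaa-cpi', 'cf-uaa-project'),
                          ('credhub-cpi', 'cf-credhub-project'),
                          ('foundation-cpi', 'cf-foundation-project')]:
        cpi_config['cpis'].append({
            'name': name,
            'type': 'google',  # assumed type for bosh-google-cpi-release
            'properties': {
                'project': project,
                # per-project service account key, e.g. a CredHub variable
                'json_key': '(({}_json_key))'.format(name.replace('-', '_'))}})

    print(yaml.safe_dump(cpi_config, default_flow_style=False))
    # Apply with `bosh update-cpi-config`, then point each AZ in the
    # cloud config at the right CPI via its `cpi` field.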