Hi, I'm Megan Chelland. I'm a software engineer at Google, and I work on a team that builds integrations between Cloud Foundry and Google Cloud Platform. This presentation is about how Cloud Foundry and Google Cloud work together, and how you can use Google Cloud services in your Cloud Foundry applications. I'll also talk a little bit about my team's roadmap going forward. First, I'll give an overview of what Google Cloud is, in case you're not familiar with it. Then I'll talk about how Cloud Foundry can run on top of Google Cloud and how that works. My team works on two projects that help with this: one is the BOSH Cloud Provider Interface, or CPI, and the other is the Cloud Foundry Service Broker. And then, like I said, I'll talk about our roadmap and things you can look forward to in the next couple of months.

Google has been building products for about 18 years, and we have billions of users. You probably use our products every day, whether it's Gmail, Search, or Android; there are quite a few of them. We built all of these products on top of custom infrastructure that was also built at Google, designed to support high load and scale, because our products have quite a few users, as I mentioned. Google Cloud is really meant to surface those services to other software engineers and developers like you.

So I'm going to go over a couple of these fundamental services really quickly. One is Compute Engine, our virtual machine service; these are VMs that run in our data centers worldwide. Some differentiating factors: we have really quick VM boot times, so it only takes about 30 seconds to start. On pricing, we have a couple of options. One is sustained use discounts, so if you have workloads that need to run for a long time, you can get discounts on that.
And if you have workloads that can handle machine failures, or that are really short-lived, you can use preemptible VMs to get some discounts there. We also have custom machine types alongside the predefined machine types, so you can get exactly what you need and only pay for the resources you consume. On top of that, we make rightsizing recommendations: if you're underutilizing or overutilizing a resource, we'll let you know and tell you how much money you can save by resizing. And we do per-minute billing, which helps save money on short-lived workloads.

The workloads that run on those VMs can take advantage of Cloud Networking, Google's global fiber network. One of the things that's most exciting for me about networking is the global HTTP load balancer. This is the same infrastructure that Google's own products use for load balancing, and it can support over a million queries per second.

So with that background, here's how Cloud Foundry utilizes these services. If a user is trying to reach an application that you're running on a Cloud Foundry VM, the user first hits our global load balancer, which passes them through to the Cloud Foundry Gorouter, which runs on Cloud Networking. The Gorouter then sends the user to a Cloud Foundry VM, which runs on Compute Engine, and your application runs on those VMs.

I'm just going to show a quick overview. This is the Google Cloud console. These VMs here are the Cloud Foundry machines. If I want to find which VM the Diego brain is running on, for example, I can just search for its label here; you can see the labels that we applied to the VMs, which we apply when we set up Cloud Foundry. You can also see that the network we're running in is the bosh network, and this is the subnet that's running all of my Cloud Foundry VMs. And then I can also look at...
We can search for services in this little search bar, which I really like. These are the load balancers I have for Cloud Foundry: this is the Gorouter load balancer, and then I have another one for the TCP router. I can just click here to see the Gorouter as well, and you can deploy multiple Gorouters if you want to. All of these resources are wrapped up within projects; you can see I have three different projects for different things I'm working on, and all of the resources are pooled within a project.

So how does Cloud Foundry do this? Like I mentioned earlier, we have the BOSH Cloud Provider Interface. The CPI is a piece of software that runs on the BOSH director. If the director wants to do something like create a VM, it uses the CPI to call through to the cloud API, which creates the VM. The CPI passes back some information about the VM to the director so it can access it, and the director uses that information to install the BOSH agent onto the VM. From that point forward, it uses the agent to monitor and manage the VM.

One of the things we added recently is support for cross-project networking. This feature was already available on GCP, but we recently added it to the BOSH CPI, so you can use it for your Cloud Foundry installations now. What we found is that some larger companies want different departments to wrap up their resources in different projects for budgeting and access control reasons, but they still want to be able to share the network and network resources within one project.

So that's how Cloud Foundry uses GCP, but you're probably more interested in how you can use GCP in your Cloud Foundry applications. For that, we have the service broker. The service broker is a way of provisioning resources to be used within your Cloud Foundry applications.
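Going back to the CPI for a moment: the exchange between the director and the CPI is essentially a JSON method call and response. Here's a rough Python sketch of that dispatch step; the general shape (a "method" name plus "arguments", answered by a "result" or an "error") follows the BOSH CPI convention, but the field details and the fake cloud call are illustrative, not the GCP CPI's actual code.

```python
import json

def handle_cpi_request(request_json, create_vm_fn):
    """Dispatch a director request to a CPI method and wrap the response.

    Sketch of the BOSH CPI request/response convention: the director sends
    a JSON document with a "method" and "arguments", and the CPI replies
    with a "result" (here, the cloud ID of the new VM) or an "error".
    """
    request = json.loads(request_json)
    if request["method"] == "create_vm":
        # Real create_vm requests carry more arguments (networks, disks, env);
        # this sketch only uses the first three.
        agent_id, stemcell_cid, cloud_properties = request["arguments"][:3]
        vm_cid = create_vm_fn(stemcell_cid, cloud_properties)
        return json.dumps({"result": vm_cid, "error": None, "log": ""})
    # Anything else is reported back as unimplemented.
    return json.dumps({
        "result": None,
        "error": {"type": "Bosh::Clouds::NotImplemented",
                  "message": request["method"]},
        "log": "",
    })

# Hypothetical stand-in for the cloud call: the real CPI would hit the
# Compute Engine API here and return the created instance's ID.
def fake_create_vm(stemcell_cid, cloud_properties):
    return "vm-12345"
```

The director never talks to Compute Engine directly; everything funnels through calls shaped like this, which is what makes the CPI a clean plug-in point for different IaaS providers.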
So if you're a CF developer and you type something like cf create-service google-storage, which is our large object store, Cloud Foundry calls through the service broker to the cloud API, which provisions a Cloud Storage bucket. And if you bind that service to your application, you can store and retrieve objects in that bucket from your Cloud Foundry application.

Of the services I showed you earlier, these are the ones available through the service broker, and we're adding more based on customer demand: as people ask for features, we add them. For storage, we have Bigtable, which is our NoSQL database. Cloud Storage, like I said, is our object blob store, and Cloud SQL is a relational database. Spanner is kind of special, so I have a slide for it later and I'll talk about it in more detail. For data, we have BigQuery, which is our managed big data analytics platform, and Pub/Sub, our publish-and-subscribe messaging service, which allows communication between independent applications.

And then, of course, we have our machine learning APIs. These are pre-trained models that Google created and surfaced for people to use. The Vision API takes in pictures and applies labels to them: if you ask it to classify a dog, for example, it can tell you it's a dog, sometimes even the breed of dog, and it can also tell you websites where that image appears. The Video Intelligence API is pretty similar, but for videos, so it could tell you there's an elephant in this video at one minute and three seconds, and things like that. The Speech API does speech-to-text, the Translate API translates from one language to another, and the Natural Language API does things like sentiment analysis of text.

Spanner. So Spanner is one of the more recent additions to the service broker.
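On the binding I mentioned: when you bind a service instance to an app, Cloud Foundry injects the credentials into the app through the VCAP_SERVICES environment variable. Here's a small Python sketch of how an app might pull out the bucket name for a bound storage instance; the "google-storage" label and the "bucket_name" credential key are assumptions for illustration, since the exact binding format depends on the broker.

```python
import json
import os

def bound_bucket_name(service_label="google-storage"):
    """Return the bucket name from the first bound instance of the given
    service label in VCAP_SERVICES, or None if nothing is bound.

    VCAP_SERVICES maps each service label to a list of bound instances,
    each carrying a "credentials" dict. The "bucket_name" key here is
    illustrative; check your broker's actual binding format.
    """
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instance in vcap.get(service_label, []):
        return instance.get("credentials", {}).get("bucket_name")
    return None
```

From there the app would feed that bucket name (and the bound service account credentials) into a Cloud Storage client library to actually read and write objects.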
It is the first horizontally scalable, strongly consistent relational database service, so it offers both strong consistency and horizontal scalability. And even though it's horizontally scalable, it has the traditional benefits of a relational database: schemas, ACID transactions, and SQL queries. Google Play and AdWords have been using it internally at Google, so it's been tested out for over five years.

A couple of other things we added recently to the service broker are Stackdriver Trace and Debugger. Trace is our tracing system that lets you find performance insights and bottlenecks and do issue detection. It collects latency information and displays it in the Google Cloud console, and you can use that to get near real-time performance insights. And the Debugger is really cool; I recommend trying it out, because it seems kind of magical. You can pull in source code from GitHub or even local files, take a snapshot of your application state, and inspect that state through the Debugger. You can even share it with your teammates just by sending the URL. It's integrated into the CF buildpack for Java, and now for Go, so it's really easy to set up with your Cloud Foundry applications.

Another thing we worked on recently on my team is the Stackdriver nozzle. This connects the Cloud Foundry Loggregator Firehose with Google's Stackdriver logging and monitoring tools, which lets you see Cloud Foundry application logs in the GCP console and search through those logs for real-time insights into what's going on in your apps. You can see here a snapshot of a page with the logs. You can also dig a little deeper and search for them here, and this is an example of a monitoring dashboard you could create for your application. The nozzle is now in GA, and there's a tile for it on the GCP console.
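Conceptually, what the nozzle does is consume event envelopes off the Firehose and rewrite them as Stackdriver log entries. A simplified Python sketch of that translation step follows; the field names only approximate the real Loggregator envelope and Stackdriver LogEntry formats, and the actual nozzle handles many more envelope types, so treat this as a sketch of the idea rather than the nozzle's code.

```python
def firehose_to_log_entry(envelope):
    """Translate a (simplified) Loggregator LogMessage envelope into a
    Stackdriver-style log entry dict.

    Firehose log messages distinguish stdout ("OUT") from stderr ("ERR");
    here that maps onto Stackdriver's severity levels, and app metadata
    becomes searchable labels.
    """
    severity = "ERROR" if envelope.get("message_type") == "ERR" else "INFO"
    return {
        "severity": severity,
        "textPayload": envelope.get("message", ""),
        "labels": {
            "app_id": envelope.get("app_id", ""),
            "origin": envelope.get("origin", ""),
        },
    }
```

Because the app ID and origin end up as labels, you can filter the logs in the console by application, which is what makes the per-app search I showed possible.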
So that's what we've been working on more recently on our team. As for what's coming up next: we're working on adding native blob store support for Google Cloud Storage. Right now we offer an S3 compatibility mode, but there are a couple of limitations with that, so we wanted to add native support as well. We're also tracking the Open Service Broker API project, seeing what comes out of it, and integrating anything relevant into our service broker. And like I mentioned earlier, we'll keep adding services to the service broker based on customer demand: anything that comes up that people want, we'll add. We also have a project to integrate UAA and G Suite, which gives you a single source of truth for users and permissions within your applications, and it would sync automatically. And then we're doing various refactoring and UX improvements for the service broker as a whole. I've added some links here to our service broker and BOSH CPI releases if you're interested in learning more. And that's it. Thanks.