So, I'm Dieu Cao. I'll be giving an overview of the Cloud Foundry Runtime PMC. I'm the Cloud Foundry Runtime PMC lead, and I'm also the Director of Product Management at Pivotal. So what is the Application Runtime PMC, as it's newly named today? The Application Runtime PMC directs strategy, development, and quality control of the core components of Cloud Foundry. All of the required components for Cloud Foundry certified PaaS offerings are produced by projects in this PMC, so the seven certified offerings that were mentioned in this morning's keynote are all using components produced by this Runtime PMC.

For more information about any of the projects, you can go to cloudfoundry.org/projects. There are links to the Slack channels, to the Pivotal Tracker projects that show the work in flight, and to GitHub. Some of the projects also include links to their documentation. You can also subscribe to cf-dev at lists.cloudfoundry.org, where you'll also see the bi-weekly PMC notes to keep up to date with what the individual project teams are working on.

So here's a terrible marketecture diagram of the Runtime PMC. There's a loose logic to the placement, but really it's quite difficult because there are 19 projects, and they work very closely together. The ones highlighted in blue here, Permissions, Bits-Service, and HAProxy, are incubating projects. New projects this year were the Services API, CF Permissions, and the HAProxy BOSH release.

Runtime themes of investment, broadly, and I'm still shopping this around, so I would love feedback on it: I think security and stability are table stakes as themes of investment. You need a stable platform that can scale out and guarantee that application workloads continue running, and you need to do that in a secure way, so that people can run their workloads confidently in complicated environments.
And developer happiness is my personal favorite theme of investment: encouraging developer productivity and allowing developers to focus on business value. In the following slides we'll be talking about a lot of things that I hope fit mostly into these themes of investment.

Application life cycle, I think, broadly fits into developer happiness. Things around improving first push: that includes adding support for more commands in the CF CLI's app manifest, making the manifest a first-class citizen and making that first push better. Improving second push: that includes investments in rolling app updates and zero-downtime updates. People are often confused when they go from first push to second push and there's a significant amount of downtime in between, because of the amount of time it takes to stage an app. Perhaps that staging didn't go so well and you have to stage again with some new bits of code, or you might have to maintain your own blue-green scripts. If the platform can own some of that complexity, you can again focus on providing business value.

We've heard a lot about people caring about native A/B testing, so they can roll out a little bit of their next version and see how that goes while still running the majority on the previous version. Canary deploys: being able to roll out one instance of your new version and make sure that's healthy before going on with the rest. Smoke tests: being able to run smoke tests in a coordinated fashion while you're rolling out your updates. These are all areas where I think we can improve push.

App promotion is also an area that I think we're interested in. Say you're iterating very quickly in your dev space, but then you want to promote to another space where your product manager wants to do acceptance; how can you promote that same artifact to the next space in a nice way?
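Until the platform owns rolling updates, teams often script this themselves. A toy sketch of the canary logic just described, with a hypothetical `smoke_test` callback and an in-memory instance list rather than the platform's actual machinery:

```python
def canary_rollout(instances, new_version, smoke_test):
    """Replace running instances one at a time with new_version.

    After each replacement, run a smoke test against the canary; if it
    fails, restore every instance to its previous version and report
    failure, so the old version keeps serving the majority of traffic
    throughout the rollout. (Illustrative only, not CF's implementation.)
    """
    original = list(instances)              # remember the old versions
    for i in range(len(instances)):
        instances[i] = new_version          # stage one canary instance
        if not smoke_test(instances[i]):    # coordinated smoke test
            instances[:] = original         # roll back to the old version
            return False
    return True
```

With a healthy smoke test every instance ends up on the new version; with a failing one, the fleet is restored untouched, which is exactly the blue-green behavior people script by hand today.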
CF Local is being proposed in the Extensions PMC, but it's also part of this broad theme of how you can get faster feedback locally.

All right, improved operator experience. We have BOSH Bootloader, a CLI tool for configuring and paving an IaaS quickly, standing up a BOSH director and getting all of that configuration very simply. We currently have support for AWS and GCP; Azure is in progress, and vSphere, I think, is on the horizon as Terraform support for vSphere matures. cf-deployment is very close to replacing cf-release, and it allows for much simpler manifest generation, taking advantage of the BOSH 2.0 features, and also gives you a much more composable experience with what you deploy. BOSH Backup and Restore support: BOSH Backup and Restore itself lives in the Extensions PMC, but each of the components that have state within Cloud Foundry now has support to hook into it, and can go into read-only mode or gracefully stop itself so it has a consistent state within that single backup for when you restore. We're also investing in improving route consistency and availability, so that in the face of instability in the management plane, in NATS, for example, we may not have to completely prune the routes after two minutes, and can still guarantee that your traffic will not be routed to an incorrect container.

Connecting services: we have container-to-container networking, which GA'd earlier this year. You can now securely connect an app in one space to an app in another space without having to go out and around through the Gorouter. We have application instance identity credentials, which I think is a new tool for us, and we're still kind of seeing how we can make use of that.
In Eric's demo earlier today, he showed how applications can use those credentials to authenticate with each other and be very sure of who's communicating with what, without applications actually having to deal with provisioning those credentials. Mutual TLS support through the Gorouter: now, if you want mutual TLS, you can have the client certificate forwarded through a header. If your applications are able to consume it, which is now supported in the Java buildpack, you don't have to spin up a TCP router and figure out how to manage that port and whatnot; you can just trust the header that's forwarded through the Gorouter. Operator-managed multiple-certificate support is also now in the Gorouter and the HAProxy BOSH release, so you can have operator-configured support for custom domains using SNI.

Things that are still in progress include securing service credentials with CredHub: if you wanted to, you could opt into service brokers that store credentials in CredHub, and applications would then retrieve those credentials from there rather than having them stored in Cloud Controller, where a Cloud Controller admin who may only care about operating Cloud Foundry doesn't actually need access to all of those credentials. That reduces the blast radius. Also, the Services API team is working on service instance sharing across orgs and spaces. The ability to share a particular service instance from one space to another has been long requested, and I think it's common with microservices patterns. Platform-provided service discovery is another area of investment, and I think it fits really well with container-to-container networking now that there's policy: how do you provide internal routes, internally, to give discovery for those routes and communication paths? Envoy and Istio are also a fairly hot topic nowadays.
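To come back to the instance-identity credentials for a moment: as I understand it, those certificates encode the workload's identity in the certificate subject, with org, space, and app GUIDs carried as `OU` values (treat the exact convention here as an assumption). A receiving app that already trusts the forwarded client certificate could then pull the caller's app GUID out of the subject along these lines; this is deliberately simplified, since real code would parse the certificate itself rather than a pre-extracted subject string:

```python
def app_guid_from_subject(subject):
    """Extract the caller's app GUID from an X.509 subject string whose
    OU entries follow the assumed instance-identity convention,
    e.g. 'OU=app:<guid>'. Returns None if no such OU is present."""
    for component in subject.split(","):
        component = component.strip()
        if component.startswith("OU=app:"):
            return component[len("OU=app:"):]
    return None
```

The point of the mechanism is that the platform provisions and rotates these certificates, so the app only has to read an identity it already trusts, never to manage the credential itself.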
And how can we take advantage of Envoy and Istio in the Cloud Foundry community? Envoy provides a lot of capabilities, weighted routing among them, and Istio, we think, will help control and configure the Envoys, and perhaps be the bridging mechanism between apps running on Cloud Foundry and apps running somewhere else, perhaps on a container service.

Investments to support legacy or non-12-factor apps: there are a number of legacy apps that can't run on Cloud Foundry currently but could with support for multiple ports. A lot of Java EE apps, for example, need multiple ports, and with just that simple extension they could run on Cloud Foundry. We also have support for shared volumes, and additional drivers and brokers are being developed. We started with an NFSv3 driver and broker; an EFS driver and broker is being developed, and I think there's interest in a Samba/CIFS driver and broker as well.

Buildpacks: there's investment in multi-buildpack support, which allows for polyglot apps, or potentially API gateways composed with your app, and less forking of buildpacks in general. If you need to provide certificates or a particular agent or whatnot, you can include that in your own little specialty buildpack, compose it with the Java buildpack or Python or whatever that may be, and have these coordinated so that you don't have to fork the buildpacks and then merge back upstream all the time. OCI buildpacks are an area of experimentation where we're looking at: could droplets and the rootfs actually be image layers? Would those be more portable? What benefits could we get from that? One idea is that, if we're able to do that, perhaps the rather large Windows rootfs could be an image layer, or refer to Azure's layer out there, because they provide the canonical layer for Windows. cflinuxfs3 will come at some point in time.
I think there's a current question of: should we wait until 18.04 comes out? Should we build one for 18.04? Are there compelling reasons to do this sooner? That's something you can talk with Stephen Levine about. openSUSE, they're also developing a rootfs for SUSE, and of course buildpacks would then need to support compilation on top of SUSE.

All right, Windows. We now have the HWC buildpack and the .NET Core buildpack. And I'm really excited about the Windows Server 2016 containerization support, which brings to the Windows world support for CF SSH and volume services. That CIFS/Samba driver is actually very popular with users who are interested in Windows .NET workloads, being able to natively connect to the CIFS/Samba file share they already have within their environment.

User management. This past year or so, we introduced two new Cloud Controller scopes to, again, help reduce blast radius: cloud_controller.admin_read_only and cloud_controller.global_auditor. They act very much like cloud_controller.admin, but neither of them has write ability. The global auditor one acts just like the auditor role, but without your having to add yourself to every single space, which was the experience before if you wanted to give someone visibility across the system without letting them see all the credentials that are in it.

The CF Permissions team is working on user role-to-group mappings to simplify the user management process: you map a particular group to particular permissions, perhaps space developer in a particular space in a particular org, and by joining that group someone natively gets that permission; when they leave the group, the permission is removed. I think that's their first phase of work.
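The group-to-role mapping idea can be modeled very roughly like this; the group names, role names, and grant shape are all illustrative, not the CF Permissions team's actual API:

```python
# Map external groups to (role, org, space) grants. A user's effective
# permissions are simply the union of the grants of the groups they
# belong to; no administrator has to touch individual spaces.
GROUP_GRANTS = {
    "payments-devs": [("space_developer", "acme-org", "payments-space")],
    "auditors":      [("org_auditor", "acme-org", None)],  # org-wide role
}

def effective_roles(user_groups, grants=GROUP_GRANTS):
    """Derive a user's roles from group membership alone: joining a group
    confers its grants, and leaving it removes them."""
    roles = set()
    for group in user_groups:
        roles.update(grants.get(group, []))
    return roles
```

The appeal is that role assignment becomes a side effect of existing group membership in the identity provider, rather than separate per-space bookkeeping.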
The next thing they're hoping to tackle after that includes finer-grained authorization: the ability for Cloud Controller, for example, or potentially other components in the system, to define finer-grained permissions while still using the role-to-group mappings. We often hear about separation of duties, where someone should only have the ability to stop, start, and scale an app, but for whatever reason isn't allowed to modify or delete the app. To satisfy compliance, fine-grained authorization would support that, and custom roles then let you build up meaningful names for those permission sets.

Vulnerability fixing: I think Cloud Foundry is really best in class here, with how quickly we are able to patch the rootfs and stemcells, and really our components as well, as things are reported. I'm sure you see those updates all the time on the security mailing list. We have an amazing number of penetration tests from the wider ecosystem and users. If you can imagine, all the enterprise companies and providers are throwing their resources at UAA and the projects in general, and providing results back almost on a weekly basis; we're triaging and addressing those as quickly as we can. Individual teams are keeping up to date with third-party dependencies, and we're working on additional scanning tools to identify issues better in languages that maybe don't have great support for dependency tracking.

Another area of investment was securing communication paths, and one of the bigger ones was the communication path between Cloud Controller and Diego. While Diego was being developed, there were a number of bridge components that were developed; the path between the bridge and Diego was secure, but between Cloud Controller and the bridge it was completely insecure.
So we did quite a bit of refactoring to get to the state shown on the bottom there, where we actually eliminated some of those components, absorbed their functionality into Cloud Controller, and otherwise introduced mutual TLS between Cloud Controller and Diego. You can now find much better documentation about all of the communication paths, the ports, the protocols, and we've been slowly stamping out the insecure ones. There are still a couple, but we've been making good progress.

Rootless Garden: we currently have experimental support, and we're hoping to roll it out to the PWS environment soon to see how it goes. Assuming you have all unprivileged containers, you'll be able to easily opt into this mode, so that your attack surface, should there be a container breakout, is much lower. The Garden team contributed greatly to this work, providing PRs to make it happen, and we're the first in the wider OCI community to adopt it.

Isolation segments: we've made great progress with this, and I'm seeing a lot of adoption in the community. The use cases are wide and varied. We're seeing people put an isolation segment into their public DMZ, so that those routers and cells are publicly accessible while everything else stays in their internal networks. Or otherwise we're just seeing consolidation: we have one customer who is going from 16 separate foundations down to four because of isolation segments.

In the spirit of refactoring, we're removing Consul dependencies. Distributed service locks we're looking to move or eliminate: the ones we're moving go to the database, through the Locket service here, and otherwise we're eliminating the ones where, for whatever reason, we reached for Consul even though the component didn't actually need distributed locking.
The other use case for Consul was service discovery and health checks. This is very much still in progress as BOSH DNS develops, but the idea is to leverage BOSH links for discovery and zone affinity, and BOSH DNS for health, and that, hopefully, will remove our dependence on Consul in general.

We've also, in the application runtime anyway, removed our etcd dependency entirely and replaced it with Postgres and MySQL support. I think that reduces a great deal of operator burden as well, because many operators are very familiar with operating Postgres or MySQL, and far fewer are familiar with troubleshooting etcd. We're seeing the relational databases support us in the fashion that we need, and Diego's scaling work was one of the efforts that proved out that etcd, at the time, wasn't able to scale to the workloads we needed. And, this was over a year ago, but I still want to highlight that Diego, and Cloud Foundry in general as a whole integrated platform, was able to scale. We didn't just start up 100,000 containers in isolation; we actually ran as an integrated platform at this large scale. From this graph, you can see things crashing intentionally and things recovering, and I think the degree of scale we were able to achieve with the components we have is of great benefit to the community, and I still think it's unmatched by any of the other things that are out there currently.

Routing performance: the routing team invested in improving performance, and they were able to achieve 3x the throughput of the Gorouter by doing a few things here; my favorite was updating the dependencies. Here's the headroom plot from before, the red line there, with its throughput, and the much nicer blue one from after, with a much better throughput in requests per second.
Loggregator performance: the Loggregator team invested greatly in becoming less lossy. You can see here, these are graphs from before and after certain deploys. I think this was PCF 1.10; I forget what the corresponding open source versions were. It goes from around 50%, really spiky, which is bad; it gets a little bit better after that release, after a number of investments and improvements were made in the Loggregator system; and after CF v260 was deployed, it's much better, much closer to 100%, with a few spikes there. Even after this, we've continued to make improvements in Loggregator's availability, and there are now Loggregator SLOs and scaling guidelines, so that you'll have a better idea of when and how to scale the different components of Loggregator.

I think that's it for me. Questions? And join us. Yes. Do you want to repeat? Yeah. The question was around isolation segments: preventing one compromised isolation segment from compromising the shared control plane, so that we can have true multi-tenancy even though one isolation segment might be compromised. I think this is something we've thought about. It's not currently prioritized, but it's something we're thinking about, so we'd appreciate additional feedback. I believe we've thought about whether the things in the isolation segment, the Gorouter and the Diego cells, for example, could carry clearer identifiers in their certificates when they communicate back to the control plane, about what sorts of workloads they're allowed to ask about or get, so that if, say, a cell were compromised and went to talk to the BBS, it would only be able to get the workloads in the blue segment, for example. So I think there are some ideas around here, but it's not currently prioritized. If you're interested, perhaps reach out to Eric over there.
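That idea, scoping what a cell can read from the control plane by the identity in its client certificate, might look conceptually like this; it's a toy model of the filtering, not Diego's actual BBS API:

```python
def visible_workloads(caller_segment, workloads):
    """Return only the workloads placed in the caller's isolation segment.

    caller_segment would come from an identifier embedded in the caller's
    client certificate, so a compromised cell in the 'blue' segment could
    only ever enumerate blue workloads, never the whole foundation's.
    """
    return [w for (w, segment) in workloads if segment == caller_segment]
```

The security property comes from the control plane trusting the certificate, which the platform issued, rather than any claim the (possibly compromised) cell makes about itself.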
I think we can all agree that the security work that has been done is pretty good, absolutely. There is only one thing that is a little bit left out, maybe, in this picture, and that is vulnerability management for the applications themselves, right? We already have a better story with buildpacks and what that means for applications, but I think there could be something done there that would improve the story for our users. Is there anything planned for that?

I think there's nothing currently planned, at least not in the near term. There are things being investigated. There was an idea, for example, that you could consider with that multi-buildpack capability: perhaps you have a Black Duck buildpack that inserts itself at a certain point during the staging process to scan the app. That's scanning at the source, more or less. And is that the right place to insert it? Should it have been done in an earlier pipeline, at the code? There are a few different considerations there, but that was one idea.

Sure, that's obviously possible, as it is possible to do it during the deployment pipeline. But the thing is that, often, application teams deploy something and it's free of vulnerabilities at deploy time. That might not be the situation one week later. How to deal with that was the main focus of my question, actually: how to deal with incoming vulnerabilities, meaning vulnerabilities discovered after the application has been deployed. Something that is part of the buildpack can only warn you during staging, right?

Yes, I see. So, and this is something you might want to follow up with Stephen Levine on, at least on the buildpack side of the house, we have thought about whether we could attach additional metadata to the app, so that we could record which dependencies it actually pulled down and exactly which buildpack it staged with.
With that information about which specific dependencies it pulled down and which buildpack it staged with, you could act later. Some of that information you do get with the new V3 APIs when you go through the new staging process, at least which buildpack was used. But I think there could be more metadata, and that's something he's been looking at. Once you have more metadata, you can query it when you learn that a particular dependency has a vulnerability: let me scan through my system's metadata. All right, I think that's it. Thank you very much.