Everybody awake? Yes? This side sounds a lot more awake than this side. Everybody over here awake? There you go. Front row is awake. Awesome. Well, welcome. Today is the first full day of breakout tracks. Thank you all for getting up this early and showing up. We're going to talk a lot about the evolution of the Cloud Foundry platform, our various technologies, our projects. You're going to see a lot of great demos today. Hopefully that inspires you to get out there and dig a lot deeper. There are a lot of projects, and there are a lot of contributors here at the event who want to share what they've been up to, with other contributors and with users. So take the opportunity later in the day to use the hallway track and the breakout sessions. And in particular, please make sure you go talk to the sponsors. Like Abby said yesterday, we don't put these events on without sponsors. So at least go and thank some of them. We would appreciate it.

OK. The other thing I wanted to thank you for is coming to my city. I've lived here my whole life. Go Philly. It's actually a really great city, and if you have an opportunity to get out and explore it a bit, I think you'll really enjoy yourself. It's a city with a long history. OK, the Europeans can stop laughing. We have a long history and a lot of amazing old buildings. We've got, when the clicker works, a great food culture, everything from the cheesesteak all the way up to Michelin-star restaurants, so there are a lot of good opportunities for you to get out there and enjoy yourself. World-class art museums. Our parks are amazing. The wildlife is just unbelievably great. I have absolutely no idea why they made this the mascot, because he's awesome. OK, well, there we go. So welcome to Philadelphia, and again, thanks for being here.

All right, now let's get started. Yesterday, Abby talked a lot about kayaking as the metaphor for how we as technologists help our companies navigate this ever-increasing world of change, and I think that metaphor works remarkably well. But today I want to talk a bit about the Cloud Foundry platform, speaking a bit more to our contributors, as well as to the users, about how our community thinks about evolution.

So let's talk about science for a little bit. Evolution, Darwin's theory, is all based on natural selection, right? The theory of evolution is that external stimuli drive a natural selection process, which allows species to emerge through minor changes over a long period of time. And as we've gotten more advanced in our understanding of science, we've realized this comes down to DNA and small changes in DNA. I think the same thing applies to a long-running software project, and in particular to a long-running software project in a space where a lot of external change is occurring. We need to adapt to those changes; we need to adopt them. And for Cloud Foundry, what's really good about our DNA as a project is that we're prepared to make those changes: we have a long history of embracing other open source projects and embracing new ideas.

So over the course of this morning, I want everybody to think about this evolution in really three contexts. One is the evolution of the developer experience, right?
That's the promise we make to enterprise developers who just want to get on with the job of writing good software and solving the business problems, and how we're evolving it. The second is around the operator experience. For those of you that are users in the room: if you've been using the Cloud Foundry platform for a significant period of time, you've likely seen that it continues to grow in use. Whether that growth comes from a top-down mandate, as we heard some companies approach it, or from a very bottom-up, organic approach, it brings with it some challenges around scale. So I want to push the community a bit on scale and point out some areas where we can improve. And third, the architecture, of course, is going to continue to evolve, and we're going to dig into that quite a bit more.

OK, so some charts, because statistics. We talk a lot about the release velocity of the Cloud Foundry community, and we used to talk about it in terms of the full, coordinated release. That was happening on average about twice a month, which is a pretty high-velocity project. Many of the commercial distributions were similarly accepting those releases and rolling out dot releases or patch releases at a very high frequency. But if we take a step further and look deeper into the way our project community actually operates, we see that there are an enormous number of individual repository releases occurring, and each one of these is a non-trivial event. Going back all the way to 2015: there's a great startup called source{d} that's applying machine learning to source code analysis, but they did a simple thing for us and counted the releases across all the repositories that we have. Last year alone, we were averaging 137 releases across our community, and that's over 5,000 releases we've kicked out since 2015. That's a lot of change. That's a lot of new capabilities, fixes, and security patches that all the users get to take advantage of.

And the same thing in terms of commits. Honestly, commit tracking is a bit of a vanity metric; I don't necessarily think that more commits is better or worse. But it does tell us thematically that there's a lot of work going into the software that you're using, and that there's an enormous open source community wrapped around it. Again, over 1,000 people contributed to the project in the last 12 months.

So let's talk about that evolution. We know we're doing it. Statistically, I showed you graphs going up and to the right; VCs would love it. Let's actually talk about some of the specifics of that developer experience and its evolution. Now, how many of you have heard of the V3 APIs? A couple of you. It's been a long time coming. The first V3 ideas were back in 2015, maybe even earlier. Jules, who told me it's OK to tell this story, will take full responsibility for sending the community down this path. He said it would take at most a month; they'd be able to knock it out. But it's taken a while. The community, and users in particular, have had a pent-up desire for a different way for this API to work, with a bit more granularity around how applications can be deployed. We saw Yui actually demonstrate some of the V3 APIs yesterday. So they're coming online, and we have a lot more progress.
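To give a flavor of that granularity, here's a hedged sketch against the public V3 endpoints. The app name is hypothetical, and this assumes a Cloud Controller recent enough to expose the /v3 deployments resource:

```sh
# Look up the app's GUID, then talk to the V3 API directly via cf curl
APP_GUID=$(cf app my-app --guid)

# In V3, an app's processes (web, worker, ...) are first-class resources
cf curl "/v3/apps/$APP_GUID/processes"

# Kick off a rolling deployment instead of a stop-the-world restart
cf curl /v3/deployments -X POST -d '{
  "relationships": { "app": { "data": { "guid": "'"$APP_GUID"'" } } }
}'
```

That deployments resource is the kind of machinery behind rolling deployments like the one demonstrated on stage.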
But recently, the project teams have purposefully reconfigured themselves to go from a slow drift of new GA capabilities to really speeding things up. We've been able to track that since early December of last year, and the change is pretty stark. You don't have to read all the various lines in here; think about it this way. We went from a point where only 20% of the total scope was even being looked at, thought about, with initial design ideas put together, to just a couple of weeks ago, where only 20% hasn't been touched. So we're really speeding up the process of switching over to the V3 framework. I think that's an amazing story, and in particular I want to thank the contributors on those project teams. Working more closely together across the CLI project, the Cloud Controller project, and the services API team has shown demonstrable improvement in the pace of the V3 development.

So we're going to have a demo now. But before he comes up, I just want to take a minute. In Boston last year, Zach came up and, speaking of V3 APIs, showed off some of the V3 API capabilities. And he mocked me: he had me dancing as a stick figure to demonstrate that A/B deployment, rolling deployment, would work fine. And that's OK, because I'm here for your entertainment. But I figured this time I would just use really advanced technology and embarrass myself ahead of time, so that no matter what Ben shows you, there's nothing anybody can do that will make me feel worse than this. So welcome to Philadelphia. Ben, come on up. All right, good luck. We're doing all the demos live, so wish him luck.

All right. OK, let's get that horrible scene off the screen there. So today I'm here to talk to you about a project called Cloud Native Buildpacks. For those of you who haven't heard about it, Cloud Native Buildpacks was started a little over a year ago as a collaborative engineering effort between Pivotal and Heroku, of all people. Over a beer, someone came over to me and said, hey, we've got some buildpacks, you've got some buildpacks, why don't we keep talking about what the next buildpacks should look like? I don't need to sell you on how important buildpacks are to the Cloud Foundry experience. What we want to do is take a generational leap in those buildpacks. We want something that's vastly improved and specifically based on top of OCI images, and we'll talk about why that's important in a little bit. But the key thing to take away is that between the Cloud Foundry community and Heroku, we have a combined decade-plus of experience with buildpacks. We've seen how all of you that are users, all of you that are operators, have used them to date, and we know what you want to see. So we want to make really, really significant improvements in how buildpacks work and what it means to write a buildpack, but we don't want to lose any of the existing critical features, the things that make buildpacks so good and so much more powerful than something like a Dockerfile. So we're going to talk about two specific features of Cloud Native Buildpacks, but don't think that this is the only thing.
One of the very first things we see, as the developers and maintainers of the buildpacks inside of Cloud Foundry, is that multi-buildpack, the idea that multiple buildpacks can cooperate with one another, is a really, really important feature that a lot of customers and a lot of partners want. Now, we've built some multi-buildpack functionality into Cloud Foundry already today, but realistically it's not as good as it could be. It's tacked on top of something that was never really designed to do it, and it requires a lot of close cooperation between individual buildpacks: everybody's defining their own contract for cross-buildpack communication. And somewhat infamously, for me at least as the maintainer of the Java buildpack, the Java buildpack doesn't actually participate in all phases of the multi-buildpack lifecycle. I view this as a problem as much as anybody.

So in the Cloud Native Buildpacks space, we took the idea that multi-buildpack is really, really important and boiled it down: it's so important that it should be a native part of the API, baked into the lifecycle, rather than expecting buildpacks to build it on their own. And as soon as we got to that point, we had this epiphany that buildpacks could now become much more modular than they ever were before. So you might see the Java buildpack, for example, break down into a buildpack that contributes OpenJDK, one that contributes Maven or Gradle as a build system, one that knows how to run Java applications, and Azure, Google, and other APM integrations, all broken out into their own separate buildpacks. At detection time, some subset of these buildpacks selects itself: OK, I want to work with this, I want to contribute a JRE, I want to contribute Maven. And when execution actually happens, each one of them is responsible for laying something down. The OpenJDK buildpack is responsible for putting down a JDK for build time and a JRE for run time. Gradle or Maven maybe get contributed. The JVM application buildpack knows how to run the Java command it needs to run. And of course, JMX would contribute its own configuration as well.

And maybe this sounds like: OK, great, these are modular now. I know one of the greatest requests for the Java buildpack was to simply contribute a JRE and nothing else; don't write me a command line. So now you have the ability to do that. But where the power really comes in is when you imagine you're on something like AWS. Amazon has recently come out with a great new OpenJDK variant called Corretto, and one of the really powerful things about it is not so much the code that's in it. It's that if you're running on AWS, no matter what environment you're running in, Amazon will provide you support. So what we really want is to simply be able to say: instead of using the Cloud Foundry OpenJDK, I want to use Corretto. That means at detection time, Corretto participates instead, and rather than laying down OpenJDK, it lays down Corretto. That's what this modularity gives us: without forking or anything like that, you can simply replace pieces and parts, and compose or decompose, as you see fit, what you think a Java application (or name your language) is.
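As a concrete sketch of that composition, here's what the swap might look like with the pack CLI. The buildpack IDs are hypothetical, purely to illustrate the idea:

```sh
# Default composition: Cloud Foundry's OpenJDK, a build system, a launcher
pack build myorg/my-app \
  --buildpack example/openjdk \
  --buildpack example/maven \
  --buildpack example/jvm-application

# Same app on AWS: swap only the JRE provider for Corretto
pack build myorg/my-app \
  --buildpack example/corretto \
  --buildpack example/maven \
  --buildpack example/jvm-application
```

Nothing else in the group changes; the build system and launcher buildpacks are reused as-is.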
Another feature that we have in Cloud Native Buildpacks is what we call image layer rebasing. We're all familiar, as Cloud Foundry developers, with this idea of replacing the rootfs of all of your applications at runtime with zero downtime, right? It's one of the most powerful features of the platform: you know you can bring everybody to a safe operating system with zero downtime. We want to make sure that as we move into this Cloud Native Buildpacks world and this OCI image world, we keep that same functionality. And what's really important to think about here is that, to date, this hasn't actually been possible. If you use a Dockerfile and you want to change the FROM line, change to a safer version of the operating system, you have to rebuild the entire image.

But with Cloud Native Buildpacks, all of those buildpacks we saw earlier lay down individual layers inside an image, and we push all of those layers through the registry out to our edge node. And as we say, one day that operating system is going to become vulnerable, right? Say it's an OpenSSL issue. When that happens, we have the ability, using the Cloud Native Buildpacks spec, to simply reuse all of the previous layers but point to a different operating system underneath them. We're talking about pointers inside of metadata, not regenerating layers. And this is really, really important, because it means that your edge node, where you've already got a bunch of layers cached, doesn't actually need all of the data again. All it needs to transfer is that one operating system layer. And even better than that, you only need to do it once.

So that you don't all think this is just a theoretical exercise, I have a demo here. One more slide, even. I should also say that the things I've shown you are obviously not everything we're doing. There are all sorts of other improvements, such as auditing via a bill of materials, caching optimizations using the OCI image layer spec, and security integration: many of you have tools that already integrate with OCI image registries, and we're expecting a larger ecosystem to grow around this strict standardization.

So now to the demo. The first step in building an image with Cloud Native Buildpacks is to define something called a run image. The name of this particular image isn't particularly important; the key thing I want you to take away is that there's a version of a particular operating system, in this case version build.7, and we're saying: use this operating system as the run image for all of the applications inside my platform. So we tag that up in the registry. The next thing we do is use a tool called pack. pack is a CLI that wraps around the lifecycle, the thing that orchestrates all the buildpacks and builds all of our applications, and that is also used to run them. It's a thin little CLI that lets us execute this locally on our machines the same way you'll be able to execute it inside of Cloud Foundry. So we go ahead and build an application, in our case a compiled Spring Boot application, since I'm a Java developer. And off we go.
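Condensing the demo flow into a hedged, hypothetical script (image names and registry are made up, and the rebase step is the part we'll get to in a moment):

```sh
# 1. Tag the blessed run image the platform should use for every app
docker tag example/run:build.7 registry.example.com/example/run
docker push registry.example.com/example/run

# 2. Build the app into an OCI image; pack drives the buildpack lifecycle
pack build registry.example.com/demo/spring-app --path ./build/libs/app.jar

# 3. Run it anywhere containers run (here, the laptop acting as edge node)
docker pull registry.example.com/demo/spring-app
docker run -p 8080:8080 registry.example.com/demo/spring-app

# 4. Later, when build.7's OS goes vulnerable: bless a patched run image
docker tag example/run:build.8 registry.example.com/example/run
docker push registry.example.com/example/run

# 5. Rebase: re-point the app's layers at the new OS; nothing is rebuilt
time pack rebase registry.example.com/demo/spring-app
```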
The key thing to take away here is detection. There are a number of what we call buildpack groups, and within each one of those groups there are a number of buildpacks that can participate. Here we see just the OpenJDK and JVM application buildpacks. We see a JRE gets contributed, and here's the command line that we're actually going to run. Now, if pack is responsible for orchestrating our build, we are effectively acting as our own platform. So we'll go ahead and do a docker pull to get it onto our edge node, which is my laptop, and then we'll go ahead and run this particular application up. So now we have a running application.

But as we said, eventually an operating system is going to have a vulnerability within it. In that case, what we do is tag up a new version, this time version 8 instead of version 7. The command we use from here is the demonstration of this idea of rebasing. pack rebase says: take all of those original layers and move them on top of a new operating system. And I put time in front of it because it's amazingly fast. As we see here, in about a tenth of a second, our registry now has a completely safe version of our application that can be deployed as each platform sees fit. In our case, the way we see fit is to say docker pull. You can see most of the image is already there; we just grabbed the new layers for the new operating system. And we'll go ahead and run that same application. And all of a sudden, we're safe, right? Great.

So let's head back to the slides. I'm really proud to be able to announce here today, because it was looking kind of iffy yesterday while I was on the plane, the first public beta of the Cloud Native Buildpacks project. It's the pack CLI I just demonstrated, which you can go ahead and download. It provides both the Cloud Foundry buildpacks you saw me demonstrate and the Heroku buildpacks from our partners on the Heroku team, and it's of course based on the Ubuntu operating system that you're familiar with. Thank you. If you want to know more about Cloud Native Buildpacks and how we envision it working inside of Cloud Foundry in the future, please come take a look at the talk we have tomorrow at 11:45. Thank you.

Outstanding. Thank you so much, Ben. Appreciate you coming. I'll just reiterate: my personal opinion is that the process of the Heroku buildpack ecosystem and the Cloud Foundry buildpack ecosystem basically coming back together again is perhaps one of the most important things that's happened in this part of the market in many years. There's a lot of learning that's happened between those two ecosystems, and seeing that come together and benefit the whole industry is really positive, really productive. And everybody here is going to get to take a lot of advantage of it.

So let's transition. I want to talk about the operator experience. There are a lot of components to how we think about operating platforms like this, and just as I can't talk about every project team and every part of our community, I want to zero in a little bit on the concept of scale. Because as enterprises start to use this more and more, the scale demands become very different. And there are a lot of service providers that have achieved very large scale with Cloud Foundry. Now, service providers and enterprises have a lot of similarities in how they operate technology, but there are some pretty significant differences.
But I think it's worth thinking about how we can continue to improve the platform. This is a bit of a point that I want the contributors in the community to hear. And I also want those of you that are users to think about how, as you run into scaling challenges, you can provide what is in fact the most important thing you can provide an open source ecosystem: your opinions, your experience, and your feedback.

So let's go back to a couple of case studies. I remember yesterday some of the folks from Comcast chuckled a little bit. I fixed it: Sky has been acquired by Comcast. But let's zoom in specifically on this Sky business and their scale. That's 3,000 requests per second just backing the mobile services part of their environment, and they've got 300 developers working on the platform. That's impressive. Home Depot is another good example. You didn't hear about them yesterday, but we talk about Home Depot on a fairly regular basis because of the work they've put into scaling their technology organization overall. To give you a sense of Home Depot's use: 2 billion requests a month, the environment connected to tens of thousands of devices, and over 2,500 developers currently building cloud-native applications. That's a lot of developers that need to be serviced.

Now I actually want to switch and talk about a provider, because there's a continuum of scale here, and we have a number of providers at very large scale. I picked SAP in this particular case because they have an interesting story: they use Cloud Foundry for a significant amount of product development, in addition to it being a core part of SAP Cloud Platform, the product they offer to customers. Their internal environment supports over 20,000 developers building new business applications that they then go and sell to their customer base. In their customer environment, they have 10 different environments overall; they call them estates. Their largest is actually pretty large: at this point, 44,000 running containers, and it continues to grow.

So we have this continuum of scale and different needs. Different organizations have different abilities to operate and different amounts of investment they're going to put into operations teams. But if I summarize some of the opportunities that we have, when we think about the scaling demands as we continue to help underpin a lot of the businesses around the world, I'm going to pick three areas. First, I think in BOSH, the release automation toolchain, that project community could really spend some time with the larger providers thinking about parallelism. How do you handle these incredibly large deployments for rolling updates? Are there things we can do to improve that, so that the updates, while working very well, don't have to take as long? Let's look at things like parallelism.
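To ground that: BOSH already exposes some parallelism knobs in a deployment manifest's update block, and the conversation is about how much further this can go at provider scale. A hypothetical excerpt, with values that are illustrative rather than tuned recommendations:

```sh
# Sketch of an update block an operator might tune for faster rollouts
cat > update-block.yml <<'EOF'
update:
  canaries: 1
  canary_watch_time: 30000-60000
  update_watch_time: 5000-60000
  max_in_flight: 10   # roll up to 10 instances of an instance group at once
  serial: false       # update instance groups in parallel, not one by one
EOF
```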
Second, the application runtime; that's really where all these containers are running. The nature of scaling software is that you bump into different edge cases as it gets stressed in a particular direction. So just a reminder for that part of the community: think about scale for every new feature that gets added, and think about being able to apply quotas to every new feature that gets added, because that's how these large-scale deployments are able to keep evolving and supporting the workloads they have. And last, our CLI team. That developer experience faces a similar challenge because of the 80-20 rule: 80% of the time you're dealing with a single developer. But as these environments scale, operators trying to do things like fetch a large result set, listing all the orgs or all the spaces for example, find that these large result sets have become problematic at a particularly large scale.
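To make that concrete: operators can already page through the raw API rather than pulling everything in one shot. A sketch with cf curl, using the V3 pagination parameters:

```sh
# Page through orgs and spaces instead of listing them all at once
cf curl "/v3/organizations?per_page=50&page=1"
cf curl "/v3/spaces?per_page=50&page=2"
```

The opportunity for the CLI team is to make that kind of paging the default experience at scale.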
So I don't say this as anything other than presenting opportunities: one, for all of us to inform the contributors and committers who are helping to build the software; and two, if you're a contributor or a committer, to remember that at this point the Cloud Foundry software is being used in enormous environments, and we're seeing more enterprises continue to grow the scale at which they consume the platform. We're going to continue to see stresses, and we need to evolve the architecture to account for these new stresses as we hit new boundaries.

So now let's actually think about the architecture itself. There are two major changes that we're going to highlight today. Again, I cannot possibly fit in all of the amazing work that the different project teams have done over the course of the last six months, much less the last year. But let's focus on two specific areas: what's changing inside Cloud Foundry, and how that manifests in new capabilities for the end users.

The first area I want to focus on, and Yui demonstrated this with weighted routing, is the networking part of the application runtime. There's been some change in the way the project teams have configured themselves. We've combined our container-to-container networking team, which was dealing with application-to-application connectivity, with the routing team, which was dealing with ingress and egress. They've merged because the integration of Istio Pilot and the integration of Envoy proxy have proven valuable for both sets of use cases: into the platform, out of the platform, and between applications. And that's unlocking new capabilities, like weighted routing as an example, with a number of others on the roadmap. The other thing that I'll point to about this effort is that when we build bridges into other communities as contributors for Cloud Foundry, we can improve that community and the industry overall. Istio is a great example. We just talked about scale, and as the Cloud Foundry community got involved in Istio and started to integrate it, scale became part of the use cases they stepped into the Istio project with. They said: look, we have these scale demands; how can we work with you to continue to scale Istio so that it can support very large environments? So that's actually an amazing success, right? We're building bridges with other communities, we're bringing software into the overall platform. That's a great story of evolution.

So the next one, and Abby mentioned some news about this: we're very happy to say that the Eirini project has evolved quite quickly. It represents an option: can we use Kubernetes as the underlying container scheduler, and which use cases will that work for? Can we make sure that it can do all the same things that Diego can do? There's a lot of work to be done to really reach feature parity and then reach scale parity, but the work is accelerating and moving very quickly. So, as Abby mentioned, it's passing the initial CF Acceptance Tests (CATs) suite. Google and Pivotal have joined IBM, SAP, and SUSE on the project, committing to it full-time. We are seeing early adopters start to adopt the open source version of Eirini. IBM just announced that they have a technical preview that uses Eirini, and I believe that SUSE, if they haven't announced already, is probably pretty close to doing that. So there's an amazing amount of progress that's happened there.

Now, I've been rambling for a little bit. It'd be cool to see some more command line stuff. What do you think? All right. So let's get Julian up to demonstrate the Eirini project. Julian Friedman, Dr. Jules, from IBM is the project lead for Eirini. We talked before the event and he promised me that this was going to be amazing. But last week, he realized he had to practice the demo. So let's see if he did. Jules, come on up.

All right. This is the hardest part of the demo: USB-C. Maybe it worked. We'll see. So click it. Yes. All right. So we're going to talk about Eirini. Eirini is a little project we have been working on to do Cloud Foundry on Kubernetes. Chip asked me to show you an architecture diagram. This is what I call a squintergram: you're supposed to squint at it and see that it looks complicated. Yeah. Right? So the blue bit on the right is the Diego container scheduler, and on the left is all the other stuff around that. How do we convert this to let you use Kubernetes instead? Done? I make this look easy. You probably want to dive in a little bit, and we will do that.

So what's really happening inside that Eirini box? We're taking that cf push developer experience and, instead of mapping the applications to Diego LRPs, we are mapping them into Kubernetes native objects. So as you push, you get apps within Kubernetes. As you scale, you get more apps in Kubernetes. And under the covers, that's all just Kubernetes YAML, Kubernetes objects, that we're updating.
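The shape of that mapping, sketched with hypothetical app and namespace names; the demo that follows walks through exactly this:

```sh
# A cf push surfaces as native Kubernetes objects managed by Eirini
cf push cf-fresh-prince
kubectl get statefulsets --namespace eirini   # the app appears as a StatefulSet

# Scaling in CF just edits the underlying Kubernetes state
cf scale cf-fresh-prince -i 2
kubectl get pods --namespace eirini           # a second pod spins up
```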
So I'd love to show you a demo. Let's swap to terminal. Here we go. Eirini is Cloud Foundry on Kubernetes, so to get started, let's just look at Cloud Foundry and Kubernetes side by side and ask: how do they differ? How do you use each to actually create an application?

If we start on the right, I have a Cloud Foundry and a little application. This is pronounced "CF-resh Prince". It's pretty simple. It says "in central Philadelphia, born and raised", and when you curl it, it says "ouch", because I've always thought it must be painful to be curled all the time. This is Cloud Foundry Summit. I imagine you know how we push that to the cloud with Cloud Foundry: we say cf push. Then we say a little prayer that the Wi-Fi works. No! Oh, I'm in the wrong space. Live demos are tough. Oh, that's not even right. cf orgs. No! Did you know that CF is really easy to use? Particularly orgs and spaces. cf target, then you do a cf push. Good. I wanted to delete an app in another org earlier, so I was in the wrong one. All right, so we know how that works. Cloud Foundry takes that, uses a buildpack, uploads it to the cloud, builds it into a droplet.

We're probably less familiar here with Kubernetes, so let's just talk about how Kubernetes does a similar thing. I have a similar app. This one says "in Kubernetes is where I spent most of my days". How do we push that to the cloud in Kubernetes? We start with this thing called a Dockerfile. A Dockerfile is kind of like a buildpack that you maintain yourself. I got this from a little web app called Google, which stores Dockerfiles for you on the internet. And in order to use that, I say docker build. This is already in cache, so it's nice and fast, and I've built the app and its operating system into an image: run those commands, get an image. Then I say docker push, and that puts the image onto a registry somewhere on the internet. It now has a URL that I can use to refer to that image and run it wherever I want to run it.

So the next step is to edit this thing called a deployment YAML, written in an entire abomination called YAML. In this case, I'm using a Deployment. I could use a StatefulSet or a ReplicaSet or a few other things. Two things to notice. It's got the name, and it has this thing called replicas. That's how many of it you want, where "it" is the thing in this template. So: spit out this many pods that look like this. And the template has the image that I pushed before; that's how those are linked together. So once I've done that, I say kubectl apply, and that causes that to exist in the cluster. That gives Kubernetes the desired state, and it's going to go away and make it happen. I'd also do something like kubectl expose to give myself a service, and a kubectl get services will let me get the IP. There are a few other things I would do: I'd set up log streaming, set up network policies, probably set up some sort of CI, make the Dockerfile a little bit better. But basically: that's cf push, Kubernetes-style.
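For reference, here's that Kubernetes-side flow condensed into a hypothetical sketch; the app name and registry are made up:

```sh
# Dockerfile -> image -> registry
docker build -t registry.example.com/demo/k8s-fresh-prince .
docker push registry.example.com/demo/k8s-fresh-prince

# Desired state: this many pods that look like this, running that image
cat > deployment.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-fresh-prince
spec:
  replicas: 1                      # how many of it you want
  selector:
    matchLabels:
      app: k8s-fresh-prince
  template:                        # the pod template to stamp out
    metadata:
      labels:
        app: k8s-fresh-prince
    spec:
      containers:
      - name: app
        image: registry.example.com/demo/k8s-fresh-prince
EOF

kubectl apply -f deployment.yml    # hand Kubernetes the desired state
kubectl expose deployment k8s-fresh-prince --type=LoadBalancer --port=8080
kubectl get services               # fish out the external IP
```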
Now, you may be thinking to yourself: weren't you going to demonstrate Eirini? Why are you showing me Cloud Foundry and Kubernetes? The answer is: I have been demoing Eirini this whole time. The CF on the right and the Kubernetes on the left are running on exactly the same Kubernetes cluster, and what we cf pushed is running in that cluster. So if I have a look, kubectl get pods, inside the Eirini namespace, we see cf-fresh-prince's funny generated name. That is the thing we pushed, and I can prove it. So let's watch it. We can see the pods that are running, and if I say something like cf scale, give me two instances, we'll just see a new container pop up. So we've created another instance of that pod.

So we can do some nice things. Because we have CF, we don't have to go setting up log streaming and Fluentd and all that stuff; it's set up for us. So if I just tail the logs and curl the app, ouch, ouch, it's actually quite satisfying. I recommend you do this if you're ever feeling stressed. If you're on a keynote stage, just do that a few times.

So how does that work? Let's have a look at the YAML that Eirini is operating. We can have a look at the stateful sets; I'm using a StatefulSet rather than a Deployment. The name it has decided on is 2zjnc. It's also my password. Not really. So let's have a look at that. This is YAML like the deployment YAML that I created earlier, but this is the one that CF has created for me. It's a bit more complicated and it does a few other things, but it's basically the same. So you'll see the replicas element. That's 2, because we cf scaled to 2, and everything I change in CF will result in changes to this YAML. You'll see we've set a couple of annotations: the application GUID and the process GUID. Those are the CF GUIDs for this app. That's how we sync the Kubernetes state and the Cloud Foundry state; we check we've got those things matched.

But one thing I thought would be really interesting to show you is, if I scroll down here, we'll see this image element. People who know Cloud Foundry know that Cloud Foundry thinks in terms of droplets, so that we can patch the images over time. When we did that cf push, we got a droplet in a blobstore, not a container image. Kubernetes thinks in terms of images. So how do we square those two? Well, I actually just taught Cloud Foundry how to serve Docker images, OCI images, for all of the droplets. And that means we can do some quite cool things. So if I copy this, I can actually just run it. It's just an image. All this YAML is just Kubernetes YAML; the images are just container images. So I can run that image and say cat /home/vcap/app/main.go, and Docker will just pull that image, the same image in the cluster, pull it down and run it. So if you want to reproduce your cf push locally, you can. And there's our application: there's the container running the stuff that we pushed up there.
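The droplet-as-image trick from the demo, sketched; the registry path here is hypothetical, whereas in the demo it's copied straight out of the StatefulSet's image element:

```sh
# Run the droplet image Cloud Foundry now serves, and peek inside it;
# /home/vcap/app is where the droplet lays out the application
IMAGE=registry.cf.example.com/cf/droplets/cf-fresh-prince
docker run --rm -it "$IMAGE" cat /home/vcap/app/main.go
```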
So let me show you one more thing. This CF application running on Kubernetes is running over here. If I flip over to a web browser, we can have a look at the demo directory. And this is CF running on Kubernetes, and my suggestion for a new CF-on-Kubernetes not-quite-haiku. It says: in central Philadelphia, born and raised; in Kubernetes is where I spent most of my days; but now I'm chillin' out, maxin', relaxin' all cool, and cf pushing some apps with a simplified tool. And that is CF on Kubernetes with Eirini. We have tons of talks around the conference with much deeper-dive demos. Thank you very much.

That was amazing. Thank you, Jules. I heard what you said about V3. We'll talk later. Yeah. Yeah. It was your fault. OK. That was cool, right? Yeah? Interesting. OK. So before we move on to the rest of the show, I'll just reiterate: today we showed you some demos. I talked a bit about some of the evolution in terms of the operator experience. There's the Cloud Native Buildpacks project, which represents some of the shift in the developer experience. And you can truly see that our architecture is evolving, and this is something that's going to continue to accelerate. This community has that long history of evolving the architecture, evolving the platform, evolving the experience around it. We're continuing to see significant growth in the contributions to the project. More companies are getting involved. More people are getting involved. And I think we're going to see this evolution continue to happen faster and faster, at even greater scale, with greater impact. So I want to thank you.

That was the initial part. But before I step off the stage and introduce our next speaker: Pivotal had an announcement, and so I want to bring up Richard Seroter, Vice President of Product Marketing at Pivotal. Come on up. All right. Shake, hand over. Good walk-up music. Thanks. Yeah, definitely. All right, I'll take a couple of minutes on PCF 2.5, which we just shipped yesterday. Just a few categories of things. We've already seen demos of things like Istio and Envoy, and that's a big piece of all of this: how do you integrate the best of open source? So things like Istio and Envoy, Kubernetes 1.13 coming in PKS in a couple of days, Spring Cloud, Steeltoe, all these great things. And the monitoring indicator protocol is pretty cool: you can declare your observability metrics as part of your product and then have things like Healthwatch pull that in and actually show those metrics. So that's really neat stuff on the open source front as well.

And then there are more apps on the platform. Being able to support things like multi-port is really cool; I'm excited about that. We've been doing Windows support in PCF for years now, Windows Server 2012, 2016, those sorts of things, and we're now supporting Windows Server 2019, which is great. PKS Windows, meaning Windows workers as part of PKS, is coming right around the corner. So again, excited about that: supporting more types of workloads. And Steeltoe support, if you're a .NET developer, being able to pull that into Windows and Linux, is great. I have a breakout later today at 2:15 on all the crazy things you can run in Cloud Foundry. So of course, skip every other session and attend that one.

And then finally, automation: how are we making it easier to build and run PCF? Platform automation is really cool. These are the components to actually build the pipelines to continuously deliver PCF itself, which is super great and handy as you have stemcell updates and product updates and things like that. I also like that you can have a single Operations Manager experience in PCF for both PKS and PAS in the same place. And the last one that's really cool is the multi-foundation Apps Manager, meaning if you have foundations sitting in Azure, AWS, on-premises, OpenStack, whatever, you can have a single application interface that sees all of them and manages them all in the same place, which is really neat. So between all of those, it's some pretty exciting stuff. If you are a Pivotal customer, you can go to PivNet and download it today. If you're not, it's fine; nobody's perfect. You can find me around the conference and I'd be happy to tell you about it. Thanks.

Awesome. Thank you, Richard. Really appreciate it. Thanks for coming up.