Unfortunately, some of the most frequent feature requests we get are hard or impossible to address by incrementally improving the API. Because of that, we needed a more drastic change to let us respond to the needs of app developers.

So what were our goals? Backwards compatible. We want to maintain V2 behavior. We will be deprecating V2, and new features will be added exclusively to V3, but we know there are a lot of tools out there, so we want to make sure we don't break that experience. Modular. We have a few great ideas about how to improve the app deployment life cycle, but we're just ten people as a team. We want the API to be modular so that your ideas are possible, and so you can customize your app development life cycle to your needs. Flexible. A flexible code base means that we can deliver more features to you, faster. And we want users to spend less time repeating the same process over and over again and more time delivering. Easier to manage. We want to get rid of hidden complexity and the need to have deep knowledge of the internals of Cloud Foundry. Developers shouldn't have to know that an app needs to be restaged if a new buildpack doesn't work out, and your app deployment process should be easy to understand and customize. We want to be consistent. Lastly, we want the API to be discoverable and unsurprising.

So how do we accomplish these goals? We initially wanted to extend the existing API, but we were blocked by the overall domain modeling and design. There's a metaprogramming layer that's closely coupled with and shared between the database models. This can make adding a simple feature for one resource a pretty complex task. For instance, if you hit V2 apps to create a new app, you're exercising the same code base as if you wanted to create a new space. So if we try to change how apps behave, we could make an undesired change in how spaces behave.
So this made feature work pretty slow, and we needed to redefine some of the constructs to provide flexibility and future-proofing.

In addition to that, a V2 app is a monolith. The user defines configuration like a start command, CPU, number of instances, and memory, and they have a package, which is the source code, and compiled bits, which we call a droplet. When the user wants to start the app, the package bits and the app metadata are downloaded into a container and run. This is what we call a V2 app. It doesn't really allow for things we value, like flexibility of implementation or client-side manipulation.

Our solution was to extract things like the droplet and the package from a V2 app into top-level domain objects with their own API endpoints that all live under an umbrella V3 app. An umbrella V3 app can have many source code packages, and these packages can be staged into droplets: tarballs of compiled source code, or Docker images. Then, when the user is ready to execute their code, the desired combination of metadata and compiled bits can be downloaded and run. This is what we now call a V3 process. By splitting this monolith into more manageable and independent concepts, we can provide an API that supports more flexible and nuanced management of your apps. Apps that were previously pushed with V2 are now available as V3 processes as well. And if you'd like to learn more about this, check out our talk from last year.

So that's what V3 is conceptually. But we also want to make sure that all the tooling people have built around V2 of the Cloud Controller API still works. Fortunately, we've ensured that all of the endpoints on V2 still work; V3 doesn't deprecate V2. Also, as Utako mentioned, all existing applications that were pushed with V2 are being migrated to the V3 API.
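The split described above can be sketched as a toy data model. This is purely illustrative: the names loosely mirror the V3 concepts from the talk (package, droplet, process under an umbrella app) and are not the actual Cloud Controller schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative toy model of the V3 split; not the real Cloud Controller schema.

@dataclass
class Package:
    """Uploaded source code (or a Docker image reference)."""
    guid: str
    type: str  # e.g. "bits" or "docker"

@dataclass
class Droplet:
    """The staged, executable result of a package."""
    guid: str
    package_guid: str

@dataclass
class Process:
    """A runnable unit with its own type, command, and scaling."""
    type: str  # "web", "worker", "scheduler", ...
    command: str
    instances: int = 1

@dataclass
class App:
    """The umbrella V3 app that owns packages, droplets, and processes."""
    name: str
    packages: List[Package] = field(default_factory=list)
    droplets: List[Droplet] = field(default_factory=list)
    processes: List[Process] = field(default_factory=list)
    current_droplet: Optional[str] = None  # guid of the droplet being run
```

The point of the shape is that one app can hold many packages and droplets at once, which is what makes the later workflows (copying droplets, rolling back) expressible at all.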
So you can actually use V3 for existing apps; they're just considered processes. Now, once you've got V3 on your Cloud Foundry, it's interesting to think about what using V3 looks like. And I think the best way to understand that is to look at what it looks like to push an app in V2 right now.

If you're pushing an app on Cloud Foundry, what you're probably going to do is create a couple of spaces, one for development and one for production. And you will most likely create a service for this; it could be a database, it could be anything. So in this example, we're going to make a couple of spaces for our Boots app and create a user-provided service. The password is Dora. Don't tell anyone, please. It's my password.

All right, so when you're pushing that application, let's say you've got a standard web application. It probably has three different processes that need to run. What we often see is a web application, a worker application to process jobs, and a scheduler application to tell that worker what to do. So you'll push your app three times, and it's going to be the same code base. You have to do three pushes, interact with the API three separate times, and then bind your database service to each of those applications three separate times. That seems like a lot of duplication.

And during this push process, there's some interesting stuff happening. Even though it looks like one command in the CLI, in reality it first creates an empty application, then creates a default route for that application, binds the route to the application, uploads all of the files, and stages those files into an executable droplet. If you're not familiar with the staging process: when you push, that's the part that takes a long time and produces a lot of output. Then, finally, it runs the droplet. In V3, this looks a little bit different. All you do is one single push.
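To make the duplication concrete, here is a small illustrative sketch of the work behind those pushes. The step names paraphrase the talk, not literal Cloud Controller endpoints.

```python
# Steps hidden behind a single V2 `cf push` (paraphrased from the talk).
V2_PUSH_STEPS = [
    "create empty app",
    "create default route",
    "bind route to app",
    "upload files",
    "stage files into droplet",  # the slow, chatty part of a push
    "run droplet",
]

def v2_three_process_push():
    # In V2, web, worker, and scheduler are three separate apps, so the
    # whole sequence, including the expensive staging, happens three times.
    return V2_PUSH_STEPS * 3

def v3_three_process_push():
    # In V3, one umbrella app: upload and stage once, then run three
    # processes from the same droplet.
    return V2_PUSH_STEPS[:-1] + [
        "run web process",
        "run worker process",
        "run scheduler process",
    ]
```

Counting the "stage" steps in each list shows the saving: three stagings in V2 versus one in V3 for the same three-process app.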
It's got the exact same steps, except because we can be smart about one application having three different processes, we only need to do one staging. So as you can see here, we just create the empty app and the route, upload the files, and do the staging just once. I think that feels a lot better. It's a lot faster. It just makes more sense.

The way that we're able to do this is we specify a Procfile in the application. Just at the root, where you'd have your index.html, your Gemfile, whatever it is, create a simple Procfile specifying: I want my web, my worker, and my scheduler. And the V3 API can look at this and say: I'm going to run three processes from the same bits. Again, only one staging process.

Additionally, because we coordinate processes as one single app, we're able to do other things. Those three separate service bindings that you did before? In V3, it's just one single bind-service. This is not only faster and easier for you to do, it's also safer. The service binding has credentials, so let's say you've leaked your credentials, maybe you've announced your password at a conference, and now you only have to change your password once, and all of the processes for your application get the new password.

So that's how development's a little bit easier. It makes pushing to production better, too. In production, you would normally target your production space, push those same three apps again, bind those same three services again, and map your production route to the new application. Which, again, seems like extra work, because we've already pushed our bits, we've already staged our bits, we have our droplet. So in V3, we can do better.
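For reference, a Procfile for the three-process app described above might look like the following. Only the process names (web, worker, scheduler) come from the talk; the commands are made-up placeholders for a Ruby app.

```
web: bundle exec rackup config.ru -p $PORT
worker: bundle exec rake jobs:work
scheduler: bundle exec clockwork clock.rb
```

Each line maps a process type to the command that process runs, and all three run from the same staged droplet.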
In V3, we can just copy a droplet over to the production space. The exact same droplet, the exact same bits that you've tested and run through CI. You don't even need the code on your development machine; you just need access to the Cloud Foundry, and you can move the application from your development environment to your staging environment to your production environment. No staging. Much, much nicer.

So what happens when you copy? It creates a new application, copies the configuration for the app, and copies that same droplet for the application, allowing you to later change it separately if you'd like to. It creates the same service bindings, but to your production services, which will have the same names as in your development space. Then it just runs the droplet. Nice and simple, nice and fast.

So what if you want to update your app? In V2, this is what we do: stop the app, push the updated code, create a package, upload the files, stage and create the droplet, and start the new instances with that droplet. In V3, we push the updated code, create a package, upload the files, stage and create the droplet, stop instances of the old droplet, assign the new droplet to the app, and start instances with the new droplet. In V2, your app is down for almost all of this process. In V3, your app is only down while we're swapping the old droplet with the new.

So what if the update didn't go well? Well, normally, you would freak out, look through the logs to see what was previously deployed, check out that SHA, and then maybe deploy from your development machine outside of CI, where anything could go wrong. And then you hope that you deployed the right thing and wait for your app to come back up. In V3, it could be as simple as one simple rollback command. And because V3 keeps track of your app's last five droplets, you can roll back to any one of them.
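The rollback idea can be sketched with a toy droplet history. The five-droplet limit comes from the talk; the class and method names here are invented for illustration and are not the real API.

```python
from collections import deque

class DropletHistory:
    """Toy model: the app remembers its last five droplets, so rolling
    back is just re-assigning an earlier droplet as the current one."""

    def __init__(self, limit=5):
        self._history = deque(maxlen=limit)  # oldest droplets fall off
        self.current = None

    def assign(self, droplet_guid):
        """A successful deploy records a newly staged droplet."""
        self._history.append(droplet_guid)
        self.current = droplet_guid

    def rollback(self, steps=1):
        """Make an earlier droplet current again, with no restaging."""
        droplets = list(self._history)
        target = droplets[-(steps + 1)]
        self.current = target
        return target
```

After deploys of droplets `d1`, `d2`, `d3`, a bad `d3` could be undone with `rollback()`, pointing the app straight back at `d2` without touching the source code.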
Now, everything we've said so far is a hypothetical world: the CLI commands are not implemented yet, they could change, and they aren't currently available. It's just what the world could be.

So we've talked about how existing workflows can be improved with V3. Now we'd like to talk about something new. We're really excited about this feature; users have been requesting it for a really long time. Diego has had it available for a while, but we just didn't have the structure in place to expose it. Implementing V3 has made it possible. You can now run arbitrary commands on your app as tasks. Your tasks and apps share the same code, the same environment variables, and the same service bindings. So let's say you push an app called Dora with V2 or V3. What could you do with it? You can run a migration. You can send an email blast to your users. You can make a query against your database. Pretty much anything you want. And you'll have access to the output of every task you run and a history of the tasks you've run against your app. And if you'd like to know when this will be made generally available: in the next few weeks.

All right. So what we've talked about is what's available in the Cloud Controller API. Like we said, some of the new CLI features are coming soon, and some more work on V3 is coming soon. So what are those things? Obviously the CLI does need to support this, and task support is coming quite soon. Everything is going to be in the cf CLI. So if you want to take a look now, you'll have to hit endpoints and create POST bodies; the CLI is going to make this very easy. It's going to be great.

We talked a little bit about how downtime is shorter in V3, but because we can do more complex application life cycles, we think we'll be able to get zero-downtime deployments. There are a lot of blue-green deployment processes and other sorts of zero-downtime deployments that currently live outside of Cloud Foundry.
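As a sketch of what "hitting endpoints and creating POST bodies" for a task might look like: the endpoint path and field names below are my best-guess assumptions based on the talk, not a confirmed specification.

```python
import json

# Hypothetical sketch of creating a V3 task over HTTP. The path and
# field names are assumptions, not the documented Cloud Controller API.

def task_request(app_guid, command, name=None):
    """Build the (path, JSON body) pair for running a one-off task."""
    body = {"command": command}
    if name is not None:
        body["name"] = name  # helps you find the task in the history later
    return f"/v3/apps/{app_guid}/tasks", json.dumps(body)

# e.g. a migration against the "dora" app's database
path, body = task_request("dora-guid", "bundle exec rake db:migrate",
                          name="migrate")
```

The POST body would then be sent with your usual HTTP client and bearer token; the task runs with the app's droplet, environment variables, and service bindings, as described above.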
We think we can make that a native part of the platform. We also think it will be possible to move droplets from one Cloud Foundry to another. So maybe you've got a development Cloud Foundry on your laptop, or you've got one in a different part of the world and you need to balance between foundation installations; you should be able to move droplets between those Cloud Foundries. And finally, if you've got a situation where you don't want to rely on a Procfile and you want to create processes dynamically, you'll be able to create processes for existing apps using that same droplet. Again, it's about enabling workflows for the different types of application deployment processes you'll want to do.

Fortunately, all this stuff is free, and you can try it right now. We'd love to have everyone try V3 of the API. We think it enables some pretty cool things. It's fun to work with, a lot faster, a lot easier. So, the ways you can get involved with it right now: we've got documentation for V3. These slides will be published, so if you don't want to copy down this link, you'll be able to get it later. The documentation will show you how to use the API. You can also write a custom CLI plugin to play with these different workflows and see what a workflow feels like. There are a couple of alpha plugins using V3 right now, and maybe you want to do something else; that's what V3 is good for. So please take a look. We'd love to get feedback. The alpha release is coming up pretty soon, and general availability is, I guess, soon after that. Please provide us any feedback you can; we'd love to hear it. We also have a Slack channel, cloudfoundry.slack.com. Hopefully you're in that already. We're in the #capi channel. So if you've got questions, if you need advice, if you've got suggestions, that's where you can get in contact with people from the CAPI team and talk about V3 experimental.
Would you like to explain experimental? So V3 is currently experimental, which means that your data may be truncated on migration. Just keep that in mind. It's not a feature, though; we don't want that forever.

All right, so at this point, does anyone have any questions about V3?

Yes. Yes, that's correct. So that's using the Procfile that you put in your code directory. The binding of the services will be separate, but you'll bind to the application, and all the processes get the binding. So you won't have to bind per process anymore. I don't think the manifest part of that has been figured out. I think there's still some design work about what exactly it feels like to use the CLI and what it feels like to use the manifest. It seems like something that is pretty reasonable. If you would like that feature, absolutely, I'd encourage you to go into the channel and talk about it. The more we hear about what people want, the more we're probably going to go in that direction.

Yes. Can you speak to that? The question is: between different installations of Cloud Foundry, when will we be able to copy droplets between the two? That's also still in development. I think we have something in our backlog right now about uploading a droplet that's already been staged. So it's already in the works? Sure. But I make no promises. I'm not allowed to.

This has also not been implemented yet. You could technically assign your current droplet to an older version. So you could look through your app's droplets and find the good one. And then, when you assign the droplet (actually, this part has already been done), you can assign the app to that droplet's GUID and start it up.

Any other questions? I don't believe so, but that's a good idea. As far as scheduling of tasks, I think for now that would probably best still live in your application logic. It might be, I would guess, a little complex for what tasks are meant for.
But from your own app, you could actually trigger tasks on itself. I think there is one more question to the left. If I understood correctly, the environment is also copied over if you copy your app's droplet from your development space to your production space. But most likely, at least for the applications that I know, the environment will be different: for example, the log detail level, my app profile, and so on, all these kinds of things. So how would I deal with that? Most likely you'd pass in a different set of environment variables. Maybe we would have a different flag. This is a really good point; I think we'll take it into consideration when we implement it, very soon. Any final questions? All right. Well, thank you all very much. We appreciate you coming.