memory, different CPU, things like that. So that's what a Procfile is. The way we picture this looking is you have an app with your droplet, your compiled running bits, and it has a list of multiple commands that can run. That's what today's app looks like. So when we look at bringing that into v2, we see that we have a limitation: our existing structure doesn't really match the desired representation we want for a Procfile. Part of that is the tight coupling we have in today's app, where the one app object is our droplet, it is our package, it is our command. So when we talk about wanting separate start commands, like the ones we would get from a Procfile, we don't really have a place to put them. Really the only option we have is to add more apps. And to do that, you're getting a copy of all this other stuff. You're getting a copy of your droplet and your package, which runs counter to the goal of a Procfile: we want to push a thing once. So we have this mismatch. But maybe we could hide that under the hood. Under the hood, we'll just move some data around, and everything will be great. But we have another limitation there, and that's the rigid implementation we have in our Cloud Controller code base. What I mean by that is we take advantage of some Ruby metaprogramming, where essentially there's very tight coupling between an API request and a database table. So when you're making API requests, for example to a v2 app endpoint or a v2 spaces endpoint, they're actually going into shared code. There's some meta-magic happening that directs them over to a table. And because of that, there are a lot of assumptions made about what that table structure looks like and what an API request looks like. So we don't have a lot of room to maneuver.
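For readers who haven't seen one, a Procfile (the Heroku convention being referenced here) is just a list of named start commands for one pushed app; a minimal example for a Ruby app might look like:

```text
web: bundle exec rackup config.ru -p $PORT
worker: bundle exec rake jobs:work
```

Each line names a process type and the command that runs it; the point is that one push of one codebase yields several independently runnable, independently scalable processes.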
It's great if we want to add something new that fits that model, because most of the code is already done. But it doesn't leave us a lot of flexibility. If we want, say, spaces to do something different than apps, and we change that shared code base, it changes everything. Our next limitation is app itself. What we're talking about here is that app is everything in the system. Everything talks app. The API is based around app. The back-end systems know how to run app; they know how to keep app alive. So when we want to start talking about other things, we just don't have a definition for them. That makes it hard to change what app is, but that's exactly what we have to do here. So what are we going to do next? Well, we have this mismatched domain model, where our existing structure and our desired structure don't match, and we have a somewhat inflexible code base. So we're going to move on to v3. And specifically for v3, we're just going to talk about app. We're not going to touch the other things, like spaces or users, just app. So how are we going to get there? Well, today we have our API for v2 apps. We have our app object, and it sort of has the things we want in it. We just need to break it apart. So that's what we're going to do. We're going to pull out packages, we're going to pull out droplets, and we're going to pull out processes, and we're going to give them their own API endpoints so that they can actually be interacted with, they can be treated differently, and we have a more robust vocabulary to deal with these things. Specifically, we're going to pull v3 app up as a new object, and this thing we call a process, like in our Procfile, is really the old v2 app. That's kind of important, and the reason it's important is that it means we can isolate our changes inside the Cloud Controller code base.
We don't have to change the DEA, Diego, HM9000, all that health management stuff. They're still going to know how to run a process, which was the v2 app. So that's kind of nice for us: running, scaling, and stopping are all basically done. Also, we showed this thing where we have this rigid implementation. We're going to go ahead and change that. Inside the code base, we're going to make some changes, and we're actually going to have separate code for each endpoint, novel idea. And also, you'll notice here, we take away the database tables from the drawing, because it doesn't matter anymore. It's just an API. We can hide that stuff under the hood. So Luan's going to talk a little more about how that code change happened for us. All right. Hi. I want to talk about code. So Zach mentioned that v2 is rigid in its implementation. It's tied to the database, and therefore it's hard to change because of all the shared code we have. I want to show you an example of that and what we mean when we say this. So this is an example of a controller. There's not a lot; it fits on the screen there. This is the domains controller. It does what you expect a controller to do: it does the CRUD operations for domains. But the code is not actually here. The code is implemented elsewhere. So the first thing we see on this controller is not related to CRUD at all. This translate_validation_exception here just translates an error code for the user. It's not related to operating on the domains themselves. So let's take that out of the way and start from the top. So the domains controller inherits from ModelController. ModelController is really where everything gets implemented. It's the secret sauce of v2, and it's what we mean when we say we have shared code between all of the v2 controllers. Next, we define the attributes that the API is going to accept and return. In this case, they have to be columns in the database.
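To make that concrete, here is a toy, self-contained sketch of the pattern being described: a shared base class uses class macros so each controller only declares its column-to-API mapping, and the shared code does the rest. All names here are illustrative; this is not the real Cloud Controller code.

```ruby
# Toy version of the v2 "model controller" pattern: metaprogramming in a
# shared base class, declarative mappings in each concrete controller.
class ModelController
  class << self
    attr_reader :attributes, :queryable

    # Class macro: record which columns the API exposes.
    def define_attributes(*names)
      @attributes = names
    end

    # Class macro: record which fields can be used as list filters.
    def query_parameters(*names)
      @queryable = names
    end
  end

  # One shared "read" implementation for every subclass: serialize only
  # the declared attributes of a record (a Hash stands in for a DB row).
  def read(record)
    self.class.attributes.each_with_object({}) do |name, out|
      out[name] = record[name]
    end
  end
end

# A concrete controller is almost entirely declarative.
class DomainsController < ModelController
  define_attributes :name, :wildcard
  query_parameters :name
end

row = { name: "example.com", wildcard: true, internal_id: 42 }
DomainsController.new.read(row)
# => { name: "example.com", wildcard: true }  (internal_id never reaches the API)
```

The win is brevity when the database and the API line up; the cost, as described next, is that anything off that happy path has to fight the shared base class.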
They're just mapping the database to the API. And we do the same thing for relations. So we have to-one and to-many relations there; those are relations that the domains model has in the database. Then we have query parameters. As you'd expect, when you're listing domains, you can filter your domains by those fields. All of that is just declaration. Then we define the delete method. The delete method is actually the only one you have to define explicitly; all the other ones are implemented by ModelController, and even this one is mostly implemented by ModelController's do_delete. All we do here is really the authorization part of it. But this is an HTTP API, so which endpoints do we actually define here? As we said, ModelController will actually implement everything for you, and it will define these five general CRUD endpoints: create, read, list, update, and delete. And it will also define a few endpoints for your relations, which may or may not be obvious when you look at this. So this is all great when you just need a direct mapping from the database to the API. You can implement things very quickly when you have only one model and a few columns. But what if you want to present something different to the user? What if you want to join more models in one response? Well, then you have to hack your way out of it and end up with something like this. This is code from another controller, still in v2, that needed to run a different query and return different models in the same response. And it gets a little complicated. It's locally optimized. It's not natural for a new developer looking at this to know that they have to override this method in a controller to get this behavior. So how did we solve this in v3? What did we do in v3 that makes this better? So this is a v3 controller now. There's a lot more code, and this is only one action in a specific controller. This is the update of apps. So this is the apps controller and the update request on it.
So the first thing we define is actually the path for that action, and it's really obvious on the first line there that that's what we're doing. After that, everything just flows through. First we have authentication. That's just a basic check that the user has a token with the correct OAuth scopes. Then we do validation. The user is sending us some JSON; it has to be valid, and it has to have the correct parameters that this controller expects. With that data, we're going to go and fetch from the database what we need to have in memory to process this action. This is really just doing select queries, and this fetcher object that we created is a first-class object that helps us isolate this data fetching behavior from all the meta-magic we used to have. Then we do membership checking. In Cloud Foundry, you can be a member of a space and an organization, and you can have different roles in those spaces and organizations. In this case, you have to have the update permission to change this app, so that's all we're doing there. If you have all those permissions and everything is OK, we're going to perform the action. The action object that we call AppUpdate there is another first-class citizen in v3. We decided to pull that out because it's a really important piece of the behavior, and it's really important that we have it be easy to test, in a dedicated place. And then finally, when everything goes OK, we return a response to the user: 200 OK, we use a presenter to serialize the app into JSON, and off the user goes. So I mentioned these two new paradigms that we introduced in v3. The fetcher is the first one of those, and what it is, really, is just a query object. This object in particular is quite big; it's actually just listing all the apps, either in a space or in the whole system, and it has all the logic necessary to create that query.
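That flow — validate, fetch, check membership, act, present — can be sketched as a small, self-contained example. Every class name here (AppUpdateMessage, AppFetcher, AppUpdate, AppsController) is invented for illustration and differs from the real Cloud Controller classes.

```ruby
require "json"

# Validation: the message object knows which parameters the endpoint accepts.
class AppUpdateMessage
  attr_reader :name

  def initialize(params)
    @name = params["name"]
  end

  def valid?
    @name.is_a?(String) && !@name.empty?
  end
end

# Fetching: all select queries for the request happen here, up front.
class AppFetcher
  def initialize(db)
    @db = db
  end

  def fetch(guid)
    @db[guid]
  end
end

# The action: the one isolated place where the mutation itself happens.
class AppUpdate
  def update(app, message)
    app.merge(name: message.name)
  end
end

class AppsController
  def initialize(db, permitted_guids)
    @db = db
    @permitted = permitted_guids # stands in for real membership/role checks
  end

  # Roughly: the update action for /v3/apps/:guid.
  def update(guid, params)
    message = AppUpdateMessage.new(params)
    return [422, { "error" => "invalid request" }] unless message.valid?

    app = AppFetcher.new(@db).fetch(guid)
    return [404, { "error" => "not found" }] unless app
    return [403, { "error" => "not permitted" }] unless @permitted.include?(guid)

    updated = AppUpdate.new.update(app, message)
    [200, JSON.generate(updated)] # a presenter would do this serialization
  end
end
```

With `db = { "app-1" => { guid: "app-1", name: "old-name" } }`, calling `AppsController.new(db, ["app-1"]).update("app-1", { "name" => "new-name" })` returns a 200 with the renamed app, while a bad payload short-circuits at the validation step and an unknown GUID at the fetch step.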
And no other queries are going to happen outside of this, because we actually return just the objects themselves to the user. So you get no surprise queries when you're running the system. The other first-class object we mentioned is the action object. This AppUpdate is the example we picked, and again, it's isolated. There is no shared code between this action and other actions in the system. Everything that happens for this controller action happens here. And because of this, again, it's easy to test, it's isolated, and there are no unintended side effects. So this is a lot more code. We had to write a lot more code in v3 than we had to write in v2. Why is this better? Well, the things I said: by doing data access up front, we get no unintended queries, and we have one place where we can test and optimize that single query, or those many queries you might need for one request. And by having the separate action, we get a similar benefit. We can ensure database consistency, transaction locking, and all of that in this one object, and we can test it in isolation. And my favorite one is actually the clear and easy flow. Because Cloud Foundry is open source, we get a lot of people looking at our code base, and our teams themselves get a lot of rotation, so we get new developers on the team very often. It's very important that those developers, both in the community and on the team, can get to the code and understand what's happening, rather than having to dig through the code base to understand what code is actually being generated. And the last one is that by extracting all of these microservices, let's say, inside the code base, we move toward a potential architectural change. If we ever need to extract those components into actual separate services, we have a place to do that, too. Yeah, that's all I have. Jim's going to talk to you about the API changes we've made. Yeah, so let's talk about API design in v3.
First, let's take a step back and look at what a v2 app response looks like today. What you're going to see is, at the very top there, we have a metadata section. This contains basic information, such as the identifier, the path from which you can fetch the current object, and timestamps like created_at and updated_at. You'll also see that there's a really big section, entity. Entity has truly become a dumping ground for any information that might be associated with the object that you're fetching. What you're going to notice here is that this is mostly a homegrown standard. It doesn't really adhere to any patterns, and we find that it's drifting from the way that public APIs are actually moving today. The entity section is also populated with a lot of irrelevant, excessive data, and the reason for this is that the v2 API heavily reflects the database representation of the model that's being displayed. This problem gets exacerbated by the fact that, as we talked about earlier, app was a monolith. It's huge. It has too much information. And when I'm fetching an app, I don't really care about staging information or package information. It's just complex. Lastly, you're going to notice that there's some duplicate information here. There's space_guid; there's also space_url. We're providing two ways to fetch an associated object, and it's just becoming a little bit unwieldy. So let's take a look at what v3 API responses look like. What you'll notice is that we've actually taken inspiration from the Hypertext Application Language, otherwise known as HAL. What we've done is we've simplified the response. We removed the metadata section, we removed the entity section, and we moved most of the data top level. Now, when I'm getting a response, I'm getting only what's relevant to the thing I'm fetching. In this case, it's an app. I'm getting name, GUID, state, and some other basic information, environment variables, pretty basic.
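As an illustrative sketch (field names and paths approximate, not an exact capture of a real response), a v3 app in this HAL-inspired style looks something like:

```json
{
  "guid": "app-guid",
  "name": "my-app",
  "state": "STOPPED",
  "created_at": "2015-07-27T00:00:00Z",
  "environment_variables": {},
  "links": {
    "self":      { "href": "/v3/apps/app-guid" },
    "space":     { "href": "/v2/spaces/space-guid" },
    "processes": { "href": "/v3/apps/app-guid/processes" },
    "packages":  { "href": "/v3/apps/app-guid/packages" }
  }
}
```

Compare that with v2's metadata/entity split, where the same app would carry staging details, space_guid and space_url, and everything else in one entity blob.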
Another key point of this HAL pattern is the links section right here. The way I like to think about the links section is that it's like a map for the user, to help them navigate through our API. It contains paths to fetch the current object and to fetch associated objects that you're going to care about, but also something that we're working on to clean up user interactions: API actions themselves. So let's take a step back and think about how you might start an application in v2. Currently, what you do is you just update the state field to your desired state: STARTED, STOPPED. It's pretty simple. But let's consider a different case. Now I'm scaling my memory. What do I have to do in this instance? I have to change the memory field to the value that I want, but then I also have to restart or restage my application. It's this hidden complexity in the v2 API that makes it complicated for users. So our goal here is to actually design a different way for users to interact with our API to achieve the goals that they want. Let's take a look at how you might start an application in v3. You just call the start and stop endpoints off of the app. What about scaling? Same thing. You just call the scale endpoint. You provide instances, memory, disk. The endpoint then takes care of the hidden complexity so that you don't need to know the internals of Cloud Foundry in order to do what you want. You just call these actions through our API. Another area that we tried to tackle is query parameters. In v2, if I wanted to filter a collection by, say, name, I had to know that this q query parameter existed, and then I had to know this weird, non-standard key:value syntax. This was pretty confusing for users. It's not only hard to read from a path perspective, but it was also increasingly difficult for us to deal with in code. So what we've done, as you might expect, is standardize our query parameter format.
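Sketched as request lines (verbs and paths approximate, for illustration only), the two differences just described look like this:

```text
# v2: scaling means editing fields, then restarting or restaging yourself
PUT  /v2/apps/:guid              {"memory": 1024}
POST /v2/apps/:guid/restage

# v3: one explicit action endpoint hides that complexity
PUT  /v3/apps/:guid/scale        {"instances": 3, "memory_in_mb": 1024}

# v2 filtering: the non-standard q=key:value syntax
GET  /v2/apps?q=name:my-app

# v3 filtering: a plain top-level query parameter
GET  /v3/apps?names=my-app
```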
We took the query parameters and we moved them top level. So now, when I'm filtering on collections, all I need to say is names equals. It's pretty simple, and it's what I think users are most likely to know from experience with previous APIs. Another thing that we've tackled in query parameters is inline-relations-depth. What inline-relations-depth did in v2 was allow a user to expand nested associations through the API. When I'm fetching an app, I can also expand and get the space and other associated objects. This was dangerous, because it allowed users to dump a large portion of our database in one API call. It often resulted in expensive queries, and we were losing control of our data access. So by removing this, we're really trying to make users fetch explicitly the data that they want when they're making API calls. Another thing we've done is we've made our pagination parameters a little more explicit, a little more clear, and easier to use. So with this overhaul to code, API design, and domain representation, we have some pretty neat side effects for the future. I think the most important one is that staging and running are now independent things. They're no longer tied to each other. I can now stage a package in v3 while I still have my app running in v3. The only thing that bridges these two concepts is user interaction, user actions through the API. Another huge thing is that we've moved concepts such as processes, droplets, and packages to the top level in our domain. What that's going to give us is the building blocks for features, because the word on the street is product wants some cool stuff like zero-downtime deploys, rollbacks, and even different package types. So let's take a look at what a zero-downtime deploy might look like in the new v3 world. What you can see here is you create a package, you upload some bits.
After that, you stage it and you make a droplet, and then you're going to associate it with your application. This can now all happen in the staging realm. It does not actually affect the running application. In v2, you would need to stop your application, do these actions, and then start your application. This can now all happen separately, and then we have a final API call that ties it all together and actually takes effect in the running world. Another big thing is rollbacks. Here we have an application that's associated with a current droplet. If I want to assign it a new droplet, all I have to do is call the assignment method, just a PUT to the v3 apps endpoint. I think it's actually called assign_current_droplet now. But let's say this droplet has bugs in it. The code is bad. You want to go back. Now that we've moved droplets to a top-level domain representation, this concept is really easy to envision. All you need to do is just reassign it back to the old droplet. This is huge. Lastly, package types. We want Docker, we want GitHub, we want bits, we want them all. Now, all you have to do is, once again, implement the package concept in Cloud Controller, because it's a top-level object, and then plug it into the existing infrastructure, and it should work seamlessly with your applications. Lastly, I know you're all probably concerned about how your v2 apps are going to make their way to v3. What we're really hoping for is a seamless transfer from v2 to v3. What I mean by this is we're going to try to implement v2 in terms of v3 data under the hood. This means that when I create a v2 app, I'm actually going to create a v3 app, but with one single web process. And we'll have data migrations that take care of all existing data, moving it into the v3 data representation, and then v2 and v3 should work seamlessly side by side. Hopefully, once v3 is finished, you'll have all your v2 apps running in the v3 world.
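That deploy-and-rollback sequence can be sketched as a series of calls (endpoint names approximate; the speaker notes the final call is assign_current_droplet):

```text
# Zero-downtime deploy: everything below happens while the old code keeps running
POST /v3/packages                     # create a package for the app
POST /v3/packages/:guid/upload        # upload the new bits
POST /v3/packages/:guid/droplets      # stage the package into a droplet
PUT  /v3/apps/:guid/current_droplet   # the one call that touches the running world

# Rollback: droplets are top-level objects now, so going back is just
# pointing the app at the previous droplet again
PUT  /v3/apps/:guid/current_droplet   {"droplet_guid": "<old-droplet-guid>"}
```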
Lastly, feel free to check out our API docs at apidocs.cloudfoundry.org. Thanks for listening. Any questions? Sure. So the first question was: are we going to provide a guide for people who have implemented clients against the v2 API and want to move to v3? I think a lot of that's going to shake out when our CLI starts moving over. We're going to learn the little edge cases and the bugs, and hopefully we'll be able to put some good documentation together as we go down that track. The second question was: since we're removing inline-relations-depth, how do I get the things that I actually do want in one API call? So we don't have a way of doing that right now. The issue with inline-relations-depth was that, since we had that metaprogramming, it made assumptions about the association. It didn't know whether you wanted the space or the org; it just said, anything that app is touching, give me all that stuff. So we want to make that more explicit. The way I imagine that happening is us adding a query parameter where you can say: in this response, embed the space, embed the org. Then you specifically ask for what you want, and you only get that, and we get control over it, so our queries don't run away from us. Good question. So right now, like I said, we're actually marking it as experimental, so if you check out our API docs, you'll see the experimental sections. Basically, it'll be done, I think, essentially when the CLI can consume it properly, along with some other clients. When you say that you're combining the scale action into one call, is it going to restart all the application instances in order to complete the scale, to apply the new memory setting? Yeah, when we're done with it, it'll actually just apply everything for you. It'll just happen under the hood.
You'll say, I want this to happen, and the runtime will go through and make that happen. So will it be done in a rolling-update kind of style, to make sure there's no downtime during the scale? Yep, that's the idea. Well, if you're just changing the number of instances, you shouldn't get a restart. You'll get a restart if you're changing memory and that kind of thing. What was your biggest challenge in adopting a HAL-based approach? Sorry, could you repeat the first part of that? What was your biggest challenge in adopting a HAL-based approach? I don't know, it wasn't actually too challenging. It kind of just flows. If anything, it gives us a lot of flexibility, since we're not tackling all the endpoints. So when we say an association to a space, for example, if you look at those links, it points to v2. In the future, if we ever implement a v3 version of space, we can just swap that out, which is pretty nice. But yeah, it's actually pretty easy. It's mostly just presentation logic. Hi. What kind of changes do you anticipate with services and the Service Broker API standards? Do you have any more context on the Service Broker changes? Oh, OK. So, surprise, there's actually another team that works on APIs in Cloud Foundry. There's a services team, and they work pretty independently of this, so I'm not exactly sure what their schedule looks like. They're talking at 3:50 in this room. Cool. Any others? So within v3, as things move over, it will become consistent in that world. Cool. Looks like that's it. Thanks a lot. Thank you.