Hi, everyone. Thank you for coming, you know, almost the last session on the last day, so I appreciate it. Thank you. My name is Aaron Levy, and today I'm here to talk about kubectl apply and the dark art of declarative object management. If any of you saw Brian Grant's talk earlier this morning, there's apparently a correct way of pronouncing kubectl. I have a habit of saying it my own way, so I will do my best to pronounce it correctly through this presentation. But let's get started. So why the dark art? Part of it is because if you do Harry Potter references, your talks get accepted. But also because kubectl apply may not actually behave how you expect it to. My original understanding of kubectl apply behavior was essentially, OK, it's taking configuration that I have locally, and it's applying it to the cluster. And that was about it. And if you look at the help text of the apply command, it says something similar: apply configuration to a resource; this resource will be created if it doesn't exist yet. Perfect, talk's over, there isn't a whole lot else to it. But in actuality, there's a ton of behavior underneath kubectl apply, and I personally didn't really understand most of it. Putting the abstract together for this talk, there was a ton of stuff that I wanted to get into. And then what I realized in putting this presentation together is that there's just actually too much to cover in 30 minutes. So I'll do my best, but there's going to be some homework at the end to dig further in. So when I started using apply more heavily, some of the things that I was seeing were inconsistent behavior across various object types, inconsistent behavior across various fields within objects, and then kind of unexpected and somewhat vague errors. Now to be fair, most of these were my fault, because I didn't actually understand the underlying behavior of apply and how to use it correctly. 
And so I actually began digging into it deeper, because I wanted to be able to predict the outcome of using this command, and I clearly wasn't able to. So what I wanted to know was: how are fields actually calculated? How are the patches themselves generated? How do I predict what the final object will actually look like? And is this functionality client side? Is it server side? Is it a little bit of both? So what does kubectl apply actually do? This I kind of pulled and paraphrased from some other documentation; this isn't actually the help text for the command. But when invoked, it does a three-way diff between the previous configuration that you had, the input that you're providing, and the current configuration of the object itself. It uses those three states to determine how to modify the resource. And then it applies those changes and doesn't overwrite other changes to properties that you haven't specified. So we'll dig into this a bit more. Of those three states that I just mentioned, which we use to calculate what we're going to do, the first is the object configuration file. You could think of this as a git checkout of your actual object manifests, which you operate on locally and then run kubectl apply on those actual files. Then you have your live object configuration. This is the state of the object as it exists in your cluster. So if you were to kubectl get a deployment, whatever you get back, that's the live object we're talking about. And then the last applied configuration. This is what the object looked like the last time you invoked apply. And between these three states, that's what you use to compute what's actually going to happen when you're running this. So let's start off with creating an object. Pretty straightforward: kubectl apply, filename, my simple application object. 
And this does what we expect, which is that it creates the object in the cluster, but it also sets a special annotation on the object called the last applied configuration. The contents of this are actually set to match what you're applying to the cluster. This is used later to compute whether an action is going to add a field or update a field or delete a field, because you're tracking what did I do before and what am I doing now, and from that you can determine the new state. So if we look at an example of this, our base object is a pretty simple deployment. If we were to run kubectl apply on this, what would actually end up in the live object in the cluster is that the state you're pushing into the cluster gets injected into that special annotation, the last applied configuration. And then the rest of your object exists normally. So let's go over an example of adding a field to an object. We have those three states that we talked about. We have the local object; this is like your git checkout of your manifest. You have your last applied state; this is an annotation that we've kind of pulled out just to make it easier to describe, and it's part of the live object itself. And then the live object is just the rest of that manifest. Between these three states, we're going to determine the actions that we're going to take. And in this case, we decided, well, we already have this object in the cluster in the live object, but we want to add a new field to it. We want to add min ready seconds to it. So on our local copy of that manifest, we're going to add that field. We're going to run kubectl apply. And then it's going to determine what action to take based on these other states. And so you see, well, it exists in my local configuration, but it didn't exist the last time I applied and it doesn't exist in the live configuration. 
So the action that I'm actually going to take here is I want to add that field. And we see, after we execute this, that first of all, the live object reflects what we wanted it to. We've said we want to add this field. And then last applied is now set to what we just pushed into the cluster, so that the next time we do an action, we're able to compute the next state that we want. We can see that in an example where now we want to actually remove a field. In this case, we see that in our live object, we still have that field that we added previously, but now we decide actually we don't want that field in it. So what we do is we modify our local object, our checkout of our manifest, and we say, all right, actually, let's delete that field from it. And now when we run kubectl apply, what it's going to do is it's going to look and say, well, it doesn't exist in my local state, but it does exist in the live state, and the last time I ran apply, it existed there as well. So this is actually a deletion event. It used to exist, now it doesn't, let's get rid of it. And we can see after running it, all that happens is that field is removed from the live object. The last applied annotation is updated to just match what we just pushed into the cluster. So we can create objects with apply, we can add and remove fields, what else? Why do we want to use apply? Another really powerful aspect of using apply is the ability to preserve and enforce different fields. And when I say enforce, that means that in your local object configuration, the fields that you're putting in there, the things that you're populating, you're essentially saying, these are things that I want to enforce when I run apply. I want to set them to these values. But you can also omit certain fields from that and not have them controlled. So you could apply the object and not update what's on your live object. 
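All of the add and remove walkthroughs above boil down to a per-field decision across those three states. Here's a rough sketch in Python of that rule of thumb; this is not kubectl's actual code, just the logic described so far (for reference, kubectl records the last-applied state in the `kubectl.kubernetes.io/last-applied-configuration` annotation):

```python
# Sentinel marking a field that is absent from a given state.
MISSING = object()

def field_action(local, last_applied, live):
    """Classify what apply does to one field, given its presence in the
    local config, the last-applied annotation, and the live object."""
    if local is not MISSING:
        # The field is in our local config, so we enforce its value.
        return "set"
    if last_applied is not MISSING:
        # We managed it before but dropped it locally: a deletion event.
        return "delete"
    # Never managed by apply; the live value (if any) is left alone.
    return "ignore"
```

Fields you never specify fall through to the ignore case, which is exactly what lets other parts of the system own them.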
And so that allows those other fields to potentially be controlled by other aspects of the system. A really good example of this is autoscaling components. So an example would be, I want to be able to update things about my object. I want to change labels on it. I want to change the containers that are inside of it. But I want an autoscaler, based on some metrics, to be scaling this up and down. So when I run apply, I don't want to know how many copies of it are running. I just want something else to be managing that. And so that's an example of how we would say, I want to enforce certain fields, but I actually want to ignore and preserve others. So let's go over an example of this. What we see is that in our live object, replicas actually exists within the object, and we say we have three replicas of this. But that same field doesn't exist in our local object and it doesn't exist in the last applied annotation. So whenever we've pushed updates to this object, we've said nothing about replicas, because we don't want to manage it as part of this piece of object management. We want something else to deal with it. So then let's say that the autoscaler is just running kubectl scale commands separately from us. And so we say, kubectl scale the my-app deployment, set the replica count to five. So we just went from three to five. Still, our local object and the last applied annotation don't actually change, because this is just something that we're essentially preserving and ignoring on the live object when we're calculating any kind of patches. So then let's say later, I want to deploy a new version of my application. I check out my copy of these objects and I say, well, I want to deploy version two of this. So I update my local copy and I'm gonna run kubectl apply. And what ends up happening is that in my live object, that field is updated and I don't touch the replicas. I don't need to know about them. It doesn't matter to me in this workflow. 
I want the autoscaler to be dealing with that. So we've seen how kubectl apply can add, update and remove fields, and also preserve and enforce fields. So how is this stuff being calculated? How are we deciding that some fields should be left alone, others should be updated, and some should be cleared? There are several ways the merge actually takes place. For primitive values, things like strings, integers, booleans (example fields would be the image field as a string, or replicas as an integer), the actions that we can take are essentially either clear or replace. For things like maps and objects (examples would be labels or metadata or maybe the deployment spec), the action is actually to merge the elements, or to merge the subfields of the objects themselves. And then lastly, we have lists. In this case, you might have in a pod a list of containers, or a list of ports, or a list of arguments provided to a container. And the action here actually depends quite a bit. The other two are decently well understood, but I want to jump into lists because this can get a bit more complex. So there are several strategies. One is just that the entire list gets replaced. And another is that you can actually merge the elements of a list of objects, and we'll go into both of these. So another example: this is a list of primitive values, just the args that we're passing to a container. And in this case, all three of our states are slightly different. Our local object, our desired state that we're going to be pushing into the cluster, says the arguments should be a and c. But the live object actually has a, b and d, and the last time we applied this, it was a and b. When we run kubectl apply, what's going to happen is we're going to take our desired state, that local config, and it's actually just going to replace args wholesale. 
There is no merging; it's not that we're going to take the live state and mix them together, it just replaces it completely. So that's an example of that one option. Now what about if we have a list of more complex objects, and how can we merge them together? An example of this would be a list of containers, containers being a more complex object. And our goal in this case, as we can see in this local object, is to add a sidecar container to our existing application. So initially we had just the app itself as one container, and we can see that the last time we applied this configuration, in this last applied, the only container listed is app. In the live object, we see that same thing: app exists there. We also see another field has been added kind of out of band. This is one of those fields where we're not managing it in our object configuration, but it exists there in live. And now we're updating to a new desired state. In our local object, we said, all right, we want this new sidecar container to exist. So when we apply that, what we actually see is that the sidecar container is added to our live object as we intended, but also args is preserved, which didn't exist in our local config. And so what you see there is we're not simply taking the entire list of containers and replacing it on the live object. We're actually walking into each of those objects, updating the individual elements and the individual fields, and merging them together. So let's take a little bit more complex example. We just saw that happen with containers, but let's look at that for a list of tolerations. Again, a list of complex objects; we should expect that we would be able to merge these as an action. We're gonna step through this piece by piece into what are the expected actions that we would take by evaluating these three different states. The first state being the local object. 
This is, we are going to try and push some desired state into the cluster, and what's going to happen there? So we see there's a toleration with a key of baz. And if we look at the last time we applied this configuration to the object, it didn't exist there. And if we look at the live object, it also doesn't exist there. So my expected outcome from this is we should add that toleration. That's the action that should come out of this. Then next, if we look at the last applied, the last time we ran this, we actually told it that we wanted a toleration with a key of foo. And we also see that that still exists in the live configuration, but it no longer exists in our desired local object configuration. So that translates to me as, okay, actually we should be deleting this. It used to exist, we used to apply it, we no longer do, let's get rid of it. And then lastly, we have this key of bar that exists in the live object. This isn't specified in our local config, so again, it's not something that we want to manage or enforce. Essentially we're just going to ignore that value and it should still exist. So my expected outcome of this would be that we have two tolerations: a toleration with a key of bar and a toleration with a key of baz. We saw this with containers; this should be my expected outcome of merging more complex objects. But instead what happens is when we run apply, it actually just sets the live state to exactly what we had in our local config. So it actually does a replacement; it's not merging the objects. And then our last applied is updated as well. So we expected to see the tolerations merged, and they weren't. Why did this happen? We saw completely different behavior on something similar. So I wanna briefly go over the ways that patches are calculated here. The first of the two that apply actually makes use of is JSON merge patch, which is an actual spec, and you can go read about its exact behavior. 
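As an aside, JSON Merge Patch (RFC 7386) is simple enough to sketch in a few lines. This is a rough Python rendering of the spec's semantics, where objects merge key by key, an explicit null deletes a key, and everything else, lists included, is replaced wholesale:

```python
def json_merge_patch(target, patch):
    """Apply a JSON Merge Patch (RFC 7386 semantics) to target."""
    # A non-object patch replaces the target entirely. This is why a
    # plain merge patch can never merge two lists element by element.
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "delete this key"
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result
```

Note how the args example from a moment ago falls out of the first branch: the patched list simply replaces the live one.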
But in this case, it's something called strategic merge patch, which is custom to Kubernetes, and it deals with how to merge lists of complex objects when you're running apply. With a strategic merge patch, you can treat a list much like a map, and you can merge specific elements of that list based on a predefined patch merge key. Based on that key, individual elements can either be added, updated, or removed. This key itself is defined on a per-field basis, and this exists in the Kubernetes source code. So generally you have to look this up directly. You can look in the source code itself to find it, or it's also bubbled up in the API reference, where you'll see these patch merge keys, and I'll show an example of this. If we were to dig into the code and look at a stub of what the pod spec looks like, we would see that as part of the pod spec, we have a list of containers, and there's metadata associated with it that says the patch merge key is name. And so what we saw before was that we had these two containers; one of the containers had a name of app and one had a name of sidecar. And so we were able to enter the list and merge, and look at the states and say, well, the live object has a container with name app and my local configuration has a container with name app; let's merge those objects, because they're the same one, even though they're in a list. But if we look at tolerations, it doesn't have that metadata. So it doesn't know how to match up the complex objects, and it's just going to replace the entire list wholesale. But if we look at another one like volumes, we again see that metadata added. So we would know, well, if we wanted to update volumes, they would be merged together, not just replaced. And this is important to know because it changes what your local configuration should be. 
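Those two list behaviors can be sketched roughly like this. It's a simplified, hypothetical model of strategic merge patch, ignoring deletions via last-applied for brevity (the container names and images in the usage are made up for illustration):

```python
def merge_list(local, live, merge_key=None):
    """Sketch of the two list strategies described above."""
    # Without a patch merge key (e.g. tolerations, args), the local
    # list replaces the live list wholesale.
    if merge_key is None:
        return list(local)
    # With a merge key (e.g. containers keyed by "name"), elements are
    # matched up by that key and their fields merged, local fields
    # winning, unmanaged live fields preserved.
    live_by_key = {item[merge_key]: item for item in live}
    merged = []
    for item in local:
        base = dict(live_by_key.get(item[merge_key], {}))
        base.update(item)
        merged.append(base)
    return merged
```

With a merge key of name, adding a sidecar keeps the out-of-band args on the app container; without one, tolerations get replaced outright, exactly the surprise from the previous slide.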
In one case, you're always going to need to specify every toleration that you want, and the list has to be managed wholesale. But in other cases, you could have some containers managed by yourself and some containers managed maybe by a different component in the system. And same with volumes. And it takes a little bit of digging, because you have to either go into the code or dig through the API reference docs. So there's a little bit of complexity here, in that the actual behavior is not super consistent in some cases. And there's more complexity underneath this. Another one that I want to mention is the defaulting of fields on objects. An example of this is a deployment object: even though you might have a very simple manifest, when you apply that object to the cluster, the API server is actually going to default certain fields for you. So if you don't specify replicas, then it's going to default to a count of one. If you don't specify an update strategy, it's going to default to a rolling update strategy. And in some cases, these defaulted values can actually cause some problems when you're interacting with that object with apply. So an example of this: the local object is pretty simple, we don't define replicas, we don't define an update strategy, but when we actually apply this into the cluster, those fields are created, so they're defaulted. So in the live object, if we were to just run kubectl get, we'll actually see replicas of one, and then the strategy is going to be rolling update. And as part of rolling update, there are more subfields that are defaulted, like max surge and max unavailable. This becomes problematic because let's say that at some point in time, you say, well, I want to change what the update strategy is. I want to change it to recreate instead of rolling update. 
So the process would be, well, I can just add that field to my local configuration, say that's my desired state, apply it to the cluster, and we should be good. The problem here is actually that there are other fields that are incompatible that still exist within the manifest. So even though we're changing the type, there are other subfields, like max surge and max unavailable and the rolling update key itself, that haven't changed. And so this would actually fail if you were to try and apply this configuration. So it can cause some problems, and it takes a little bit of foresight and thinking to be like, well, if I might want to manage these fields, I should probably explicitly define them rather than relying on defaulted fields. So at this point, we should have a pretty good understanding of the basic apply behavior, but there are some other considerations when mixing apply with other object management styles. So coming back to this example of the MyApp deployment: in this case, we're gonna have two users that are interacting with this object. And one thing I just want to note is that we're not gonna specify the replicas in this. Again, we're gonna say an autoscaler is gonna manage that. We don't want it as part of our local configuration. So the first user, they're going to initially create this application: kubectl apply creates it. Over time, it might be scaled up manually, or it might be scaled up by an autoscaler: kubectl scale MyApp, set the replicas to three. Later, we want to bump it to a new version, and so we get our checkout, we change the image version, and then we push that into the cluster. Workflow works fine, everything's great. Later, a second user needs to add a volume, but for whatever reason they don't have the same local checkout. And they're just like, well, it's a quick change. I just need to make this right now. I'm just gonna get a live copy of the object itself. 
It's what the current state is, I may as well just grab it. So you say kubectl get deployment, we output it to a local file, and then we edit that file directly. We add the volumes, and then user two thinks, Aaron said use apply, so I'm gonna apply that manifest to the cluster. Everything works fine. It has a new volume, nothing changed. Later, let's say a week from then, user one needs to bump the image version again. They check out their manifest, they change the image version, they commit it, they apply it to the cluster. What just happened is that user one inadvertently reset the replicas back down to one, and they removed the volume, and they removed other fields as well. So if we look at how that actually happened, what we're looking at is this last applied, this middle column, which is essentially the outcome of what user two did. What they did is they pulled a live copy of the object, which contained all fields, whether they were originally specified or not. So it contained replicas, it contained the volumes, it contained the update strategy, everything. They pulled that local, they made their edits, and then they ran apply, which means that all of those fields now exist in the last applied annotation. So now they're managed, now they're enforced. So now when user one comes back, and this is their view essentially, their local object doesn't have any of those fields. It doesn't specify replicas, it doesn't specify the volume, and they just say apply. And so these all turn into deletion events, because it says, well, the last time you applied this, a volume was present, a rolling update was present, a replica count was present, and now they're not, so we should delete all those fields. When you delete the replica field, it defaults back to one; we delete the volumes that were added by user two; and none of this was intended. So the important thing to note here is user one's workflow didn't change at all. They did the exact same thing every single time. 
They had a source config, they modified it, and they used apply. And what happened, though, is even though their workflow didn't change, they didn't get the expected outcome. And you could think of user one as actually being a CI/CD system. There is no behavior change. It is told: this is the tag that you should check out, and then you should apply those manifests to the cluster. But inadvertently, another user has actually changed that workflow. So it's pretty important to consider that mixing these styles of object management can lead to these unintended outcomes. So, a brief list of recommendations based on some of these things. One, in general, you probably don't wanna define replicas as part of your objects, because it's probably better managed separately. When you're making updates to your object, you probably don't want it to need to know how many replicas are actually running in the cluster. There are times where maybe it's gonna be three copies, it's never going to change, and that's fine. But think about this from the perspective of a CI/CD system. You want it to be pushing out image updates or label updates or something like that, the manifest as a whole. You don't want it having to ask the cluster, well, how many replicas do you have right now, so that I can update this local manifest so that when I push it back to the cluster, that's preserved. And this also goes for other fields that you might want externally managed. Another one is the load balancer IP, for example, on a load balancer type service. This isn't something that you are managing; this is something that cloud provider code is managing. Another thing would be explicitly defining the defaulted fields, or some of them. So if you think that at a later time you might want to change the update strategy, then put that in the manifest that you're managing, so that when you do want to change it, you can change all of the fields at once. 
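To make the earlier update-strategy failure concrete, here's a small sketch of the state apply ends up writing. The rollingUpdate subfields were defaulted by the server and never appeared in our last-applied state, so nothing tells apply to delete them:

```python
# What the API server defaulted into the live object.
live_strategy = {
    "type": "RollingUpdate",
    "rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"},
}
# What we now specify locally; last-applied never mentioned strategy,
# so there are no deletion events for the defaulted subfields.
local_strategy = {"type": "Recreate"}

# Apply only sets what we specify and deletes what we used to specify,
# so the defaulted subfields survive the merge.
patched = {**live_strategy, **local_strategy}
# patched now claims type Recreate while still carrying rollingUpdate
# parameters, which is the invalid combination the server rejects.
```

Explicitly defining the strategy in your manifest from the start avoids this, because the subfields would then be under apply's management.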
And then lastly, using apply consistently, and using it from the same source configs. This is that user one, user two model, where you had two different users applying two different base configs with two different views of the world, which changed each other's workflows. And in general, mixing in imperative commands like create and edit and set can lead to unintended consequences, unless you're really sure of what you're doing: looking into the actual behavior and knowing, it's okay that I set this particular field, because I know it's not going to be managed declaratively in the configuration that I have locally. And then, time-wise (I actually talked way faster this time), there are things we didn't get to cover, but one I do want to briefly go over is kubectl apply --prune. This gets into declarative object deletion itself. So far we've been talking about managing objects themselves, but you may also want to manage sets of objects. What this allows you to do is, you could have, let's say, a directory that contained object A and object B and object C, and you could run kubectl apply --prune on all of those. As part of the process, it's going to create and also try to delete; first it'll create all of those objects, A, B and C. Later you decide, well, object C is deprecated, I want it removed from the cluster. Rather than you having to remember, oh, I've got to go run kubectl delete on object C, you just remove it from that directory and you run the exact same command again. You just say kubectl apply --prune, and it looks at the live state of the cluster and says, well, I see this object C and I don't see it in the local configuration, so that's a deletion event. But this is something that you should be careful about using. Definitely look deeply into its behavior, because you could inadvertently delete a whole bunch of objects that you actually don't want to. 
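Conceptually, the prune decision is just a set difference over the objects apply knows about. This toy sketch leaves out the label selectors and scoping that the real flag uses, which is exactly where the danger lives:

```python
def prune_candidates(local_names, live_names):
    """Objects present in the cluster but no longer present in the
    local directory become deletion events under --prune."""
    return sorted(set(live_names) - set(local_names))
```

So with A and B left in the directory and A, B, C live, C is the one deletion event; if your selector accidentally matches objects you never managed, they land in this set too.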
Another really interesting one is that you can set the overwrite flag to false when you're running kubectl apply. The typical behavior is for this to be true, which means that your generally desired outcome is that your local configuration state, the local file, is what you want to end up in your live object. But you can actually have it calculate whether there might have been a conflict, that the live object changed underneath you from the state of the last time you applied and from the state that you have locally, and it can report that back. In some cases that's nice, because it's kind of a signal that, hey, wait, someone should go look at this, because this changed in an unknown way. And then another one is actually interacting with this last applied configuration annotation. Because it's an annotation on an object and it's just a serialized blob, interacting with it can kind of be a pain. So there are some helper commands, subcommands of apply, so that you can view, set, or edit that annotation. Again, be aware that when you're changing this annotation, you're changing the behavior of the next apply. And then similarly, there's the patch command itself. The underlying functionality of apply is that it's calculating patches and applying them against objects in the cluster. You can also do that yourself: you can formulate a patch and apply it against the object outside of apply as well. So I did have a pop quiz to see if we actually understood the three different states and how they'd play through, but instead I'll just give homework. The first link is the most important. This goes into everything that I just talked about, plus a lot more depth, plus different styles of object management, such as the imperative style of object management. Big shout out to Phillip Wittrock, who led the documentation effort on this, and everyone who worked on it. It's incredibly helpful. Before this existed, it was kind of a guessing game, at least for me. I didn't really know what was going on. 
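Going back to that overwrite flag for a second, the conflict check it enables can be sketched as: a field we manage is in conflict if the live value has drifted away from what we last applied. This is hypothetical code capturing the idea, not kubectl's implementation:

```python
def conflicting_fields(local, last_applied, live):
    """Report fields we are about to enforce whose live value no
    longer matches what we last applied, i.e. out-of-band drift."""
    return [
        key for key in local
        if key in last_applied and live.get(key) != last_applied[key]
    ]
```

An empty result means it's safe to enforce the local values; a non-empty one is the "someone should go look at this" signal.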
Also, the kubectl patch documentation goes more into the different patch styles, and then you can understand what the patches are that are being calculated locally and sent to the API. There's a discussion about an apply v2 refactor. This gets into some of the implementation details of how apply functions and how it communicates with the API server, and how you could extend the kinds of patch strategies that are implemented; really interesting information, and worth reading to see where the future of this stuff is going. And then another really interesting document is Brian Grant's declarative application management. So where we're talking here about objects, it takes these kinds of concepts and expands them into how we could generically and declaratively manage applications. That would be multiple objects, services and config maps and deployments, taking these same kinds of concepts of patches and overlays on objects and then just using all native Kubernetes to manage applications themselves. Really long, really, really interesting. And then as part of that, there's another document about the issues that are related to that effort. A lot of these issues have to do with behaviors of kubectl apply, so they're applicable if you're digging into the behavior and you wanna know some of the nuances or some of the edge cases of the problems. It discusses those states so that they can be put on the roadmap and addressed. Anyway, that's all I have, thank you so much. So can you please move to the slide with the pod spec, with the notations about merge key and merge strategy? Yeah, and I do have to say I modified this slightly for brevity, but yeah. So the question is basically about the mechanics of how kubectl works. The question is whether it supports custom types, like CRD types, or a custom API server, like service catalog types, for example. 
Or if I build my own, whether an API server or a CRD controller defining my own types, will I be able to annotate my types this way, or are they built into kubectl? I mean, basically, whether there is a metadata API in the Kubernetes API which passes this metadata back to kubectl. Right, so is anyone from API machinery and/or SIG CLI in the room? So I'm not sure if those mechanics are exposed, whether you would be able to import that into your own objects. You could, I mean, essentially all that's happening underneath is a patch is being calculated and then being communicated to the server. So as long as it matched those same semantics, I believe so, but I'm not positive on that. Yeah, so the question is, just because I have seen some changes: there is a stream of changes to make kubectl less core-specific, to move that logic back to the API server, to pass the metadata from the server. So that with the API aggregator and other things, kubectl would support such things generically. But I've seen cases where the aggregator, for example, or some API machinery work added the support for that, but because kubectl hasn't been changed to use that meta information, we still can't really use this feature, I guess. 
Yeah, so from some discussions that I've had, it sounds like one near-term option is to start by moving all of the calculation logic into the client, so that you would calculate the final state of the object and, instead of using a patch request, just use a PUT request. So it would be: do all the calculation locally, get the final state of the object, and push that to the cluster, versus calculating the patch object locally and communicating it with the patch style to the API server. That would be a first step, and if that were the case, then for your own custom types you would do the same kind of thing: you would just PUT the end state that you desired. Another thing I've heard discussed is implementing this logic on the API server side, except all you as a user are doing is providing those three states explicitly to the API. You say, this is the local object, this is the last-applied, and the server obviously has the live object, and then it calculates everything server-side; there is no patch communication at all. But that's part of that v2 discussion. There are a couple of people I can try to connect you with who would probably be able to answer your question a bit better, yeah. Thank you. Yeah. All right. So, if I understand this correctly, on the slide where you have the replicas that were not previously defined and you're getting an error, I believe what's happening is that you're changing the update strategy. Could you work around this error by somehow updating last-applied, whether by patch or apply or the sub-command, to include the update strategy in the last-applied, and then change it?
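The three-way calculation described above can be sketched as a toy function over flat dicts. This is only an illustration of the idea, not kubectl's actual logic, which also handles nested objects, lists, and per-field patch strategies; the field names below are hypothetical.

```python
# Toy sketch of apply's three-way merge: given the last-applied config,
# the new local config, and the live object, decide which fields to set
# and which to delete.

def three_way_merge(last_applied, local, live):
    merged = dict(live)
    # Fields present in last-applied but removed from the local config:
    # we owned them before and dropped them, so delete them from live.
    for field in last_applied:
        if field not in local:
            merged.pop(field, None)
    # Fields in the local config: set them, overriding live values.
    merged.update(local)
    return merged

last_applied = {"replicas": 3, "paused": True}
local        = {"replicas": 5}                       # "paused" was removed
live         = {"replicas": 3, "paused": True,
                "revisionHistoryLimit": 10}          # set by someone else

merged = three_way_merge(last_applied, local, live)
# "paused" is deleted (we owned it and removed it), "replicas" is
# updated, and "revisionHistoryLimit" survives because apply never
# touches fields it doesn't own.
```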
Absolutely, that would be the exact way to get yourself out of this situation. With the kubectl apply edit-last-applied sub-command, it would pull the last-applied configuration up in an editor, and you could add those fields into it, so that the next time you applied, it would know it was actually deleting all of those sub-fields. That would be exactly how you get yourself out of that situation. And then a minor related question: is the error message that you get back in any way helpful in figuring this out? It's changed over time; I don't know if it's more helpful now. Once upon a time it was kind of confusing, but I think now it actually says that the field is incompatible, something along the lines of strategy type Recreate isn't compatible with rollingUpdate, or something of that sort. But it's still not immediately clear. You're like, okay, cool, I'm trying to change it, so how do I get out of that state? It's not immediately apparent that you need to go add fields to an annotation on the object to get yourself out of the situation; that's the kind of behavior that isn't obvious. Thank you. Yeah. So, two questions. One, is there a mechanism to do a dry run or a preview, to see what it would change before applying the patch? And second, given what Kelsey said about how maybe we shouldn't be giving developers kubectl to run in the first place, is it a good idea to use this, given all these semantics, in your opinion?
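The fix described above works because kubectl records the last-applied state as JSON in the `kubectl.kubernetes.io/last-applied-configuration` annotation on the live object. A rough sketch of what editing last-applied amounts to, with a plain dict standing in for the live object (the deployment name and sub-field values here are made up for illustration):

```python
import json

LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

live_object = {
    "metadata": {
        "name": "my-deployment",
        "annotations": {
            # kubectl apply wrote this the last time it ran; it doesn't
            # mention the rollingUpdate sub-fields.
            LAST_APPLIED: json.dumps(
                {"spec": {"strategy": {"type": "RollingUpdate"}}}
            ),
        },
    },
}

# What `kubectl apply edit-last-applied` lets you do interactively:
# add the sub-fields to last-applied so the next apply knows it owns
# them and will delete them when you switch the strategy type.
last = json.loads(live_object["metadata"]["annotations"][LAST_APPLIED])
last["spec"]["strategy"]["rollingUpdate"] = {"maxSurge": 1, "maxUnavailable": 1}
live_object["metadata"]["annotations"][LAST_APPLIED] = json.dumps(last)
```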
Well, to answer your second question first: if you think back to Kelsey's presentation, there was actually a CI/CD system underneath it, and I'm pretty sure it's probably running apply. So you have your manifest, something tweaks the image field, and then the CI/CD system applies or patches it. The concern is: what if that second user comes along and makes a change directly? Your CI/CD system is completely correct, it's doing the right thing repeatedly, and the user probably shouldn't be doing anything else. It's just that you need to understand that there are real reasons why you shouldn't be touching things directly, or you need to have a very clear understanding of the behavior. So I actually agree with that approach; it's just also worrying that someone with the best intentions could still mess up that system. And can you repeat your first question, sorry? Is there a way to do a dry run or preview and see what changes an apply would make? Not truly, to my knowledge, because of things like defaulted values, and because as the object passes through the API there could be initializers or other things that modify it in transit. To my knowledge it would take a couple of round trips for it to actually say, "this is what we would have done." But I think that if the code moved all the way to the client, and the only thing it was doing was a PUT, that would be much easier to do dry runs on; or conversely, if it were implemented server-side, the code that takes those three states and computes a final state would at least get us closer, I think. Thanks. I think this is somewhat related to the previous question: the default fields that get applied, are they also annotated in source code, so you would know beforehand what would happen and you wouldn't have to do this round-tripping?
I don't know explicitly. I know there's optional metadata on some of the fields, but I don't know where the actual defaulting logic lives; again, if there's anyone from API Machinery in here. Okay, apparently there are registered functions that apply all of the defaulted fields, so probably not the clearest thing to understand just from reading. Yeah, it would be nice if it were annotated, just like the patch behavior, which is well documented in the source code; it lives right there. It would be easier to understand. Yeah. Anyway, thank you so much.