Hi, everybody. I'm Andrew from Snowflake Computing, working on the FoundationDB development team. I'll be talking about the way that internal FDB types get serialized and sent across the wire, stored as values in the database itself, or stored in files.

For motivation: availability of FoundationDB is very important for Snowflake. If FoundationDB is down, then Snowflake is down. The reason I'm talking about this is that the current serialization format does not support schema evolution. The way to think about it is: if you have a message with two fields, A and B, the current format serializes A and B and simply concatenates the bytes. So if you want to add a third field, that won't really work; if you read the bytes with something that expects only two fields, it will misinterpret them. What that means currently is that two FoundationDB processes need the same protocol version to talk to each other. For upgrades, this isn't such a big problem, because the new binary can understand both protocol versions. But for downgrading back to the old binary: if something was written to a file with the new protocol version, the old binary won't be able to read it. That's one of the reasons downgrading a cluster is not supported.

So, we have an implementation of FlatBuffers that works with FoundationDB types. FoundationDB has an idiom used for serialization that's a lot like a visitor pattern, and our FlatBuffers implementation works with that idiom. We're looking at FlatBuffers because, for one, it's already documented, and we didn't want to design our own custom thing to get schema evolution; we wanted to go with something that's already widely used.

Okay, so where would FlatBuffers be a good fit?
So even if you don't want to do anything like an online upgrade, and just want a normal upgrade where you read something from disk that was serialized by an older version, this helps, because then you could also downgrade, like I was describing before. If you're writing something into the database itself as a value, the same thing applies. And this could also be useful for network messages, if you want to support something like an online upgrade.

As an example: say you have a new binary and you want to see what happens if you enable it on just one storage node. If you were using FlatBuffers, this won't cause a recovery (taking down a single storage node doesn't cause a recovery), and the new node can interoperate just fine. Say you do that and you see a CPU regression: you can just try upgrading one storage node at a time and see what happens. That's not really something the current protocol supports.

Okay, so where is this not a good idea? If you're writing database keys, FlatBuffers is not going to preserve the ordering of what you're writing, and you often want the serialized bytes to maintain the ordering of the original type. And for streaming messages, the current protocol has the property that if you know what the type is and you have the byte sequence, deserializing it will consume exactly the right number of bytes and stop right at the beginning of the next message. FlatBuffers does not have this property, so you need some other protocol to encapsulate each FlatBuffers message.

Okay, so how does this improve availability? Like I described earlier: instead of doing something drastic like stopping the whole cluster, bringing it back up, and seeing what happens, you can do something with just one node at a time.
So if there is a CPU regression, you won't cause unavailability.

Okay, so, testing. We've talked about testing in FoundationDB a lot today. What we have for testing this so far is that we can test it on a live FoundationDB cluster: have one binary with one version of a message and another binary with an earlier version, upgrade one of them, and see what happens. What we really want, though, is to be able to do this in the simulator. What we've been able to do, and we've had some success with this, is building fdbserver as a shared library and then dlopen-ing it, so that the simulator can basically start a new fdbserver process in the simulation at either version. This gets a little tricky, though: the simulator wasn't really set up for this, so if there are any changes to the simulator itself, or any globals in use, you need to be aware of that if you want to support it.

I think that was everything. Okay, cool. If you have any questions, grab me after. That's all.