Hello everyone, and welcome to the last talk of the morning session. I want to present to you Allan Jude, who will be talking about the future of OpenZFS on FreeBSD. Please, Allan.

Thank you. So, as was said, my name is Allan Jude. I'm a FreeBSD developer, I'm on the core team, and I'm also an OpenZFS developer. I work for Klara, which is a FreeBSD professional services and consulting company, so if you use FreeBSD and need help with it, reach out to us.

A brief overview of what we'll talk about today: we're going to talk about how OpenZFS got started, some of the roadblocks it hit along the way and how we overcame those, how that has changed in the present, and how it's going to change even more in the future, including some of the challenges OpenZFS is facing today and how we will overcome those. As part of that, FreeBSD will be changing the upstream we use to get ZFS from, so that we can more closely track the active development and get all of the latest features. And then we'll also talk about what's coming up in ZFS over the next couple of years.

To start at the beginning, as they say: ZFS was originally developed at Sun Microsystems, as most people know. Work started back in 2001, meaning that ZFS is almost 20 years old at this point, and the first version that came out for everybody under the open-source CDDL license arrived in 2005 with the release of OpenSolaris. But sadly, in 2010, Oracle ruined everything. They bought Sun, ended up closing off development of Solaris, and took all of their future changes to ZFS internal-only. So out of that, the last version of OpenSolaris that was under the CDDL was forked and became illumos, and that basically became our upstream for ZFS, because all the files were still in the same place and everything was good.

So illumos became the upstream for FreeBSD, and FreeBSD was able to follow it very quickly. In a previous presentation I did some analysis on this and found that when a new feature went into the illumos version of ZFS, on average it was in FreeBSD within a week. I think the slowest one was about 60 days for a feature to get forwarded to FreeBSD, and that was because it required some amount of extra work to be supported in the OS, not because of something in ZFS itself.

Over this time, ZFS was ported to a number of different platforms, starting with its introduction to FreeBSD in 2007. There were also a number of different efforts to make something for Linux, including one using FUSE that didn't really go anywhere, but Lawrence Livermore National Laboratory in the States had what became the ZFS on Linux project, which started in 2008.

Once we had all of these, and especially once they started to diverge a bit, features would get added to FreeBSD and not put back into illumos, or the ZFS on Linux project would add some extra command-line flags that didn't exist anywhere else. It was starting to look like ZFS wouldn't be the same everywhere, and that would be bad. So the OpenZFS project was started to coordinate development across platforms, so that the ZFS knowledge you built up on FreeBSD would translate to any of the other OSes and ZFS would be ZFS everywhere, or at least OpenZFS would be OpenZFS everywhere. Again: Oracle bad.

The original plan for OpenZFS was actually one common repo: basically a GitHub repo that contained only ZFS and none of the OS-specific code. Each of the operating systems, like illumos, FreeBSD, Linux, and later macOS and so on, would pull down that common code and then add in their OS-specific goop to make it work. However, someone would have had to put in a lot of effort to keep that one true clean copy of ZFS with none of the OS goop in it, and it turns out there were no volunteers to do all that extra work for nothing. It wasn't going to help the effort of any one operating system, and it wasn't going to help any company, so there was basically no way to make that sustainable.

So instead, the repo of record for OpenZFS became a fork of the illumos code, because the illumos request-to-integrate process was very complicated for an outsider, and very slow at the time. In illumos they have these merge advocates, and there are only about ten of them; they're the people that can commit, and you give your code to them and they commit it. So if you made a pull request against the OpenZFS repo, Matt Ahrens and his team would take care of the illumos process for you: your change would get reviewed, and then they would take care of getting it integrated, because that process was too hostile for an outsider.

As I mentioned, FreeBSD tracked this repo very closely, commit by commit. We could pull in each commit that affected ZFS and bring it into FreeBSD, and we were able to keep up very nicely.

But then the number of platforms started to explode. Illumos was basically the start of it all, with its many distributions; then we had FreeBSD and derivatives like FreeNAS and pfSense that were using ZFS. Then NetBSD started their version of OpenZFS, but they started by porting the one from FreeBSD rather than the one from illumos, because it would more closely match their VFS. So you get these follow-on effects where, especially if you're very far downstream, it can take a while for a commit that happens at the top to trickle down to your upstream and then trickle down into your version of the OS. Which was especially funny when the port to macOS started from the ZFS on Linux repo rather than from illumos, so it ended up even further away. And then Jörgen got bored one day and ported it to Windows. The Windows port has come quite a long way in the last couple of years and is basically usable in a development sense at this point: you can actually import a ZFS pool on Windows, do send and receive, and read and write files, and it does work. There are just lots of debug messages and it's not fast, but it's coming along. Then OSv, which is a virtualization-specific OS, has also integrated ZFS. And lots of Linux distros too: Ubuntu's next long-term-support release is going to support ZFS on root, and Proxmox, which is a hypervisor appliance, uses ZFS on root and ZFS to back all the VMs.

But even though the goal of the OpenZFS project was to prevent divergence, there has still been quite a bit of it. New features generally start in one OS and maybe get sent to the others, but that generally relies on the other OS coming and getting them. Before now, that process usually involved somebody from Linux upstreaming their changes into illumos, and once they were there, they'd be pulled into FreeBSD. But then you're further away from where the change is happening, you're many steps away, and that's slower.

One of the things OpenZFS did early on was replace the concept of pool version numbers. If you remember using ZFS on FreeBSD back in the 7 and 8 days, there was ZFS v12, then v17 and v20, and it eventually got to v28, and that's when Oracle ruined everything. From that point, because development was happening across many different platforms, features were going to show up in different orders on different platforms, so a monotonically increasing version number is not very helpful: if v30 is going to mean one thing on FreeBSD and another thing on Linux, that's not going to work. So instead we added feature flags. Basically, your pool has a list of features that are either on or off, and when you go to import that pool on a different platform, or a different version of FreeBSD, you can only read that pool if you have all of the features that are enabled, or if the missing features are marked read-only compatible, which means you can read the data but you can't change it, because changing it would require a feature you don't have.

So that solved the problem of features: it would be obvious if your pool had a feature that some other OS didn't. But bug fixes are different. Sometimes, as part of writing a new feature, somebody found and fixed a bug somewhere else, and because the fix was buried in that feature, FreeBSD wouldn't know that there was a bug fix; it wouldn't get copied into illumos and then back down into FreeBSD. So we wouldn't know that this bug existed and had been fixed elsewhere, and the versions kept diverging. And again, FreeBSD developers doing a bunch of work don't necessarily go talk to a bunch of Linux developers about it, so each camp didn't necessarily know what the other was doing, or would only hear about it at the end, when it was actually available, not while it was in progress. And that caused quite a bit of divergence in the ZFS code base.

To help with that, back in 2013 Matt Ahrens started the OpenZFS Developer Summit, a yearly summit to bring together developers from all the platforms and actually talk about what they were working on, so everybody would have a better idea of what was going on. It also made a good place to discuss future directions and features. That first one had a platform panel with representatives from each of the different OSes that were active in ZFS at the time, plus vendor lightning talks so that companies could present the cool things they were working on. When it started back in 2013, 30 developers attended. Now the conference is actually limited to 100 slots because of the venue we use, so the first so many tickets are available to anybody, and after that it goes into a waitlist where we sadly have to pick and choose who can come, because we only have so much room. We also added a second day to the conference, a hackathon, where you can get in a room with a bunch of other developers and work on prototypes of new features. Being able to have the experts on every different subsystem of ZFS in the room makes it much quicker to ask questions like "where do I look to find the code that does this in the ARC?" It also facilitates design discussions: being able to get a bunch of ZFS developers around a whiteboard and draw out how a feature will work, with all their experience of "there's going to be a gotcha if we try to do it this way, we might want to do it that way instead", is super helpful.

At the summit last year, especially as the ZFS on Linux project was becoming more and more the place where the work was happening, because some illumos-based companies had switched to Linux and Linux in general had been attracting more developers, we decided that in order to keep that reined in, we would need to have meetings more than once a year.
So we started the ZFS leadership meeting, which is a monthly call. The time changes: the first two of every three calls are held at a time convenient for North America, and every third meeting is shifted to be more convenient for Europe and Asia, so everybody gets a chance to participate. It's really helpful to have the developer that's working on macOS and Windows support on the call, but he lives in Japan, which is 12 hours offset from the east coast of North America, and any time that's convenient for him is inconvenient for all the Russian developers, so we shuffle back and forth. The goal is to keep the platforms better in sync, and to keep everybody better informed of what's going on, especially as we're designing features. Laying out a feature a specific way might make perfect sense on your platform, but on some other platform, a limitation might mean it would have been better to do it slightly differently, in a way that makes no difference to, say, the Linux implementation, but makes the FreeBSD one work better or be possible at all. The nice thing about this meeting is that it's open to anyone. It's a giant Zoom call.
There's usually 50-ish people on; not everybody talks, so that helps. The calls are also live-streamed, recorded, and available on YouTube, so you can go back and watch all of the meetings to get caught up if you want; it's just a good way to stay informed about what's happening. For example, on the last one, which was Tuesday of this week or last week, it was announced that one of the companies is going to open-source a better version of dedup, one that is smarter about deciding when to dedup a block rather than just hashing everything, so that the performance isn't nearly as bad as the current dedup.

So the outcome is that these leadership meetings have been very successful. We've gotten better direction and started working on some of the interesting problems we've run into, like the fact that ZFS has never had any deprecation policy for removing a feature, and we need to work those kinds of things out, as well as working out the cross-platform compatibility story.

Another one is working out how to name some of the tunables: what the original developer thinks a thing should be called, versus what people more familiar with the administrative side think it should be called. For example, I'm sure many people that use ZFS are familiar with the ashift variable. That's an internal implementation detail that never should have been exposed; if it was going to be exposed to the user, it definitely should have been called something like "minimum sector size", and it should probably be in bytes, not powers of two. Or if you remember back in older versions of ZFS, especially on FreeBSD, the flag was "prefetch_disable", so one actually meant disabled and zero meant enabled, and it's very confusing when you have a double negative like that. So we're trying to avoid more of that type of thing.

Some of the issues currently being worked on: for example, with NFS you have the sharenfs property, and you can stick a bunch of NFS settings in it. But it turns out the NFS settings on illumos, Linux, and FreeBSD don't match up at all; they're completely different settings and formats. This means that if you export the pool on FreeBSD and import it on illumos, your NFS shares aren't going to work correctly, or it might even confuse or break the NFS daemon on the other OS: it sees what looks like an invalid config and decides not to share anything. You definitely don't want that. So we're trying to decide whether ZFS should have its own least-common-denominator implementation, or whether there should be a separate property for each OS, so you'd actually set "sharenfs:freebsd" and put the FreeBSD settings there, and if you use the pool on another OS that property would be ignored, and you could also set the Linux-specific ones if you want. But that leads to "oh, I updated the FreeBSD ones, I forgot to update the Linux ones, and now I have a problem". That discussion is still ongoing; if you have ideas or gotchas about it, you can participate on the OpenZFS developer mailing list or join the call next month.

Other problems include extended attributes. It turns out xattrs are implemented differently in each OS; each has its own namespace. FreeBSD sticks the word "freebsd" at the beginning of the attribute, or "user" if it's a user attribute; Linux does its own thing that's different; and Solaris uses some random jumble of letters that I don't understand. But you want a pool created on FreeBSD, say one using Samba and storing extended attributes for Windows clients, to just work when it goes to Linux, and not have all those extended attributes disappear when you go to Windows. Especially in this case, where FreeBSD and Linux are both using the same Samba software to share SMB; of course illumos has its own different SMB implementation. And we don't want to make the xattr feature of ZFS specific to Samba, because we're also getting macOS and Windows support here. So how do we work that out?
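As a toy illustration of that namespace mismatch: Linux exposes user xattrs as a single dotted name with a "user." prefix, while FreeBSD keeps a separate namespace plus a bare name. The mapping rule sketched below is my own simplification for illustration, not what ZFS actually does, and "user.DOSATTRIB" is just an example of the kind of attribute Samba stores.

```shell
#!/bin/sh
# Toy sketch of the xattr naming mismatch between Linux and FreeBSD.
# Linux-style names carry the namespace as a "user." prefix; FreeBSD
# addresses the namespace separately from the bare attribute name.
# This mapping is an illustration only, not ZFS's real behavior.
linux_to_freebsd() {
    case "$1" in
        user.*) printf 'namespace=user name=%s\n' "${1#user.}" ;;
        *)      printf 'namespace=system name=%s\n' "$1" ;;
    esac
}

linux_to_freebsd user.DOSATTRIB   # e.g. the attribute Samba uses for DOS flags
```

The hard part for ZFS is that there is no single on-disk convention that all the platforms already agree on, so any such translation has to be decided cross-platform first.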
And then we have the additional consideration that on FreeBSD and illumos, we can actually change the VFS code, for example to always use some ZFS-specific prefix for xattrs. But for ZFS on Linux, and on macOS and Windows, the people working on ZFS have no control over the OS. Because ZFS can't be upstreamed into Linux, the upstream Linux developers like to be hostile to ZFS and even try to do things to break it, so we're not going to convince them to make a VFS change specifically for ZFS. So some of the immediate solutions that the FreeBSD or illumos people came up with turn out not to work on Linux, because they don't get to control their VFS layer.

Or take the deprecation policy. ZFS has been around for 18 years now, and this is the first time we've ever thought about removing a feature. There's a feature, separate from dedup, called deduplicated send. When you're doing a ZFS send, it doesn't matter whether you're using deduplication on disk or not, because the send protocol serializes everything and is unrelated to what's on your disk. But send has a dedup feature where, as it's doing the send, it keeps track of the blocks it has sent, and if it sees a duplicate block, it just emits a reference to the previous copy instead. It turns out this doesn't work very well and doesn't perform well, so they'd like to remove it. But we've never removed a feature before; how do we actually do this? How long do we need to give people?
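To make that concrete, here is a toy model of what a deduplicated send stream looks like and what "rehydrating" one back into a plain stream involves. The record format here is invented purely for illustration (real ZFS send streams are binary): "BLOCK id payload" records carry data, and "REF id" records point back at an earlier block instead of repeating its payload.

```shell
#!/bin/sh
# Toy model of a deduplicated send stream. A "REF n" record stands in for
# a repeat of block n's payload. Rehydrating replaces each REF with a full
# copy of the payload it references, so a receiver that doesn't understand
# dedup records can still consume the stream.
stream='BLOCK 1 aaaa
BLOCK 2 bbbb
REF 1
BLOCK 3 cccc
REF 2'

printf '%s\n' "$stream" | awk '
    $1 == "BLOCK" { payload[$2] = $3; print "BLOCK", $2, $3 }
    $1 == "REF"   { print "BLOCK", $2, payload[$2] }
'
```

The rehydrated output repeats the payloads of blocks 1 and 2 in place of the two REF records, which is exactly the transformation a conversion utility for old deduplicated streams would have to do, just on the real binary format.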
And it turns out we're also going to need to build a utility that takes a stream that was made previously using dedup and un-dedups it, "rehydrates" it as we say, so that you'll still be able to receive it. One of the guarantees we make with ZFS send is that if you use ZFS send and store the stream, you'll still be able to restore it, say, ten years later on a newer version of ZFS; ZFS send is backwards compatible from old versions of ZFS to all future versions of ZFS, and we don't want to break that. So we have to figure out how to do that, and how much warning we need to give people. And you have problems like the current long-term-support version of Ubuntu, which includes ZFS on Linux 0.6.4 or something like that; we're currently on 0.8, and that one's two or three years old at least, but it's still going to be supported for a couple more years. So even if we deprecate this now and remove it, the next long-term-support release of Ubuntu, which comes out next April or whenever, is going to be using the 0.8 branch and will still have this feature, so there will still be people with this feature six years from now. We really have to think about the deprecation policy, because we're going to have such a long lead time before things actually get deprecated, and we kind of want to deprecate them as soon as possible, because we're already going to have to put up with them for six years or more.

Another one we want to get rid of, but maybe can't now, is dedup ditto. The idea was that when you're deduping, if a block ends up with more than 100 references, you write a second copy of it to the disk instead of deduping it all down to one copy, because if you have a hundred logical copies of this block and the single physical copy goes bad, you'll feel really, really not good. But it turns out that while that works, during a scrub or a resilver the second copy never gets fixed. Whoops. So rather than fixing that, the thinking was that we could just remove it, since we find almost nobody uses dedup. But then last week somebody said, well, we have this new version of dedup that works much better and we're going to open-source it for everybody. So now maybe we can't get rid of dedup ditto, and somebody has to fix it. Either way, we need to figure out what to do about it.

Then we have the problem that ZFS on Linux would like to remove support for CentOS 6 and Red Hat Enterprise Linux 6, since those are near end of life; CentOS and Red Hat provide 10 years of support for a version of the OS. CentOS 6 will still be supported in the 0.8 branch of ZFS on Linux, but we want to remove it from 0.9. Again, how much warning do we need to give people? If you're using CentOS 6, you can't have new features; I think people mostly understand that, but we have to figure out how to message it: how do we get this out to enough people that we're not going to surprise anyone?

We also have the same problem on FreeBSD. Currently you can use a newer version of ZFS, which we'll talk about in a minute, on FreeBSD by installing it as a port. If we continue offering that, so that you can run a development branch of ZFS from a port instead of the version in the base system, how are we going to manage the support lifecycle for it? At some point, somebody is going to be running the oldest supported version of FreeBSD and trying to run the newest version of ZFS. Do we allow that, and how long do we have to put up with people that won't upgrade?

And another problem is that some things in ZFS are getting a bit old. For example, the LZ4 compressor is really good and really fast and we like it, but we bundled a version of it that, back then, didn't even have a version number into ZFS, and we've just continued to use that same version. There's a newer version.
It's about 30% faster and has a number of optimizations for newer CPUs, but if we import it, it might break some things. In particular, we currently expect that if you take the same data and compress it again, you'll get the same hash; if it turns out we compress it better nowadays, it won't have the same hash, and that will break some things. So we're trying to figure out how to do that. It mostly came up when I was looking at importing Zstandard, which is a new compressor under active development that puts out new versions very frequently; just over the course of my work on it, they've had two new versions. We don't want to be stuck on an old version when the new version has better compression, more speed, and more of the goodness we're trying to get in the first place. So we have to work out how to deal with that.

The other big thing coming out of the OpenZFS leadership meeting is trying to come up with compatibility support. In particular, when you create a new pool on any OS, it defaults to having all the newest features turned on, but that list of features might not be compatible with the other OS you're trying to use. You even hit this within FreeBSD: you're running FreeBSD 12.1 when it comes out in November, you create a pool, then try to import it on your 11.3 machine, and it won't work because it has one newer feature. Maybe that shouldn't be the default. So we're looking at creating a special flag that would be "openzfs" dash the year, meaning whatever features were supported by the lowest common denominator of ZFS platforms as of January of that year. You'd be able to easily say "just give me something that will work with every ZFS in 2019", or 2020, or whatever, or "give me a version that will work with FreeBSD 12". But how do we keep that list from getting too long?
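One way to think about a proposed openzfs-year set is as the intersection of each platform's supported feature list. Here's a runnable sketch of that idea; the feature names are real OpenZFS feature flags, but the per-platform lists are made up for illustration, and the zpool invocation in the comment shows today's real manual workaround with example pool and device names.

```shell
#!/bin/sh
# Sketch: compute an "openzfs-2019"-style feature set as the intersection
# of what each platform supports. The per-platform lists are invented for
# this example; real lists would come from each release's documentation.
freebsd="async_destroy lz4_compress large_blocks device_removal"
linux="async_destroy lz4_compress large_blocks encryption"

common=""
for f in $freebsd; do
    case " $linux " in
        *" $f "*) common="$common $f" ;;   # supported on both platforms
    esac
done
echo "openzfs-2019:$common"

# Today you can approximate this by hand: create the pool with all feature
# flags disabled (-d) and re-enable only the ones every target supports:
#   zpool create -d -o feature@lz4_compress=enabled \
#       -o feature@large_blocks=enabled tank /dev/da0
```

The open question from the talk is exactly who maintains those per-year lists and for how long, so the computation above is the easy part.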
We have to decide what we're going to support, and maybe that's FreeBSD 12 rather than 12.1 versus 12.2. And how long before we can prune that list? We don't want to break people's scripts, and we need to figure out how to handle that. If you have ideas, please speak up on the OpenZFS developer mailing list.

So, the big thing this talk is about is the change of upstream. It turns out that something like 70-plus percent of the new development in ZFS happens in the ZFS on Linux repo, and in general it is either taking a very long time to be ported to illumos or not being ported at all. Since illumos is FreeBSD's upstream, it's either taking a long time or we're not getting the new features. So we'd like to change our upstream to ZFS on Linux, so that we get the features faster.

But as we tried to do that, it turned out that because ZFS on Linux was developed over a long period of time, as they added features they worked on them, but they imported stuff from illumos in between. When I tried to pull over one relatively small feature, multi-mount protection (MMP), which is designed for dual-head JBOD setups, they had the first couple of commits to it, then they merged a feature from illumos that rewrote how zpool import works, which is where most of the MMP code goes, and then they had more commits on top. Trying to port that to FreeBSD, where all the illumos commits were already merged but in a different order, made it really, really difficult.

So it was decided the easier way would be to just port all of ZFS on Linux to FreeBSD. The way Linux did their port was basically to take the Solaris code and write, on the side, the Solaris Porting Layer, which converts Solaris internals into Linux internals. We already had something like that on FreeBSD, called opensolaris.ko, so it was mostly a matter of lining that up and cleaning up a little. But we took it one step further and actually separated out the OS-specific bits. So now in the ZFS on FreeBSD tree, there's an "os" directory with "freebsd" and "linux" subdirectories, and all the OS-specific code lives in one of those two, while all the generic ZFS code is in the common ZFS module. And luckily, the ZFS on Linux people have agreed to let us upstream that. That means going forward there will be one repo containing both the Linux and FreeBSD code, with any OS-specific bits in a subdirectory specific to that OS.

The advantage of this is that we will also connect to one common CI system, so that any time a Linux developer makes a change to the ZFS on Linux repo, it also gets tested on FreeBSD, and if it breaks, it can't be merged. And we're actually going to go even further: the ZFS on macOS people are going to join the effort and add their OS as well, and we will actually end up with that common OpenZFS repo we talked about at the beginning.

And to help fight some of the FUD around the idea of FreeBSD being dependent on Linux for OpenZFS: that's not right. The ZFS on Linux people are ZFS people that happen to be working on Linux. They're not Linux developers; the Linux developers don't like them, because they're using the CDDL license, right?
So they've agreed that once we finish the upstreaming process of adding FreeBSD support to their repo, they'll actually change the name of their organization on GitHub from ZFS on Linux to just OpenZFS. And so we'll have the one true OpenZFS repo, with support for three or more OSes built into it, and everybody can just work in that repo and do releases based on it.

Specifically, that means there will be no GPL code leaking into FreeBSD, and there will be no Linux KPI shims or anything like that. All the Linux code will sit off to the side and we won't import it, and there will be an os/freebsd directory where we put all the FreeBSD-specific code, just like we would have if we were importing from illumos. In fact, it will actually be slightly cleaner, because we won't have all these "#ifdef Linux, #ifdef illumos, #else FreeBSD" blocks; we'll just have a freebsd subdirectory with all the FreeBSD-specific code in it. So we end up with something like the machine-dependent and machine-independent split in the kernel: OS-dependent and OS-independent code in OpenZFS. And like I said, we'll leverage the CI work that the ZFS on Linux people have done a very good job with, and any change will have to work on both Linux and FreeBSD before it gets merged.

There are a couple of things that are specific to FreeBSD, or that will be changing as part of this. First, FreeBSD has had TRIM in ZFS for a long time, and for a long time we were the only platform that had it. But an implementation was later built for illumos, got worked on a lot, and was imported to Linux, and it turns out it's actually slightly better: it has better queuing and batching, and it supports both online and on-demand TRIM. So you can do normal TRIM like we do now, where you trim everything as you delete it, or you can say "don't do any trims right now, but every once in a while I want to trim all my free space". So we'll be switching to their TRIM code, because it's better.

The jail support in ZFS is based on the illumos zones support, and that will be retained as FreeBSD-specific code, because there's no analog on Linux. Our NFSv4 ACLs will also be retained as OS-dependent code, because Linux doesn't have them.

So before I get into the list of features, does anybody have any questions?

The features that are relatively new to ZFS and available on all platforms start with the new sequential scrub and resilver, which you'll probably have noticed, I think, in FreeBSD 12. Instead of scrubbing everything in file system order, it scans and finds blocks to resilver or scrub and puts them in a range tree, which I think is limited to something like 600 megabytes of RAM. As that fills up, it takes the largest contiguous range and scrubs or resilvers it, then goes back to scanning. This means most of your scrub ends up being sequential reads instead of random reads, and it makes your scrub or resilver somewhere between 2 and 16 times faster than the old code. You can also pause and resume scrubs now, so you can say "don't do any scrubbing during business hours": the scrub might take three days, but you can pause it during business hours and resume it outside of them.

Device removal is finally available in FreeBSD. If you have mirror vdevs or striped vdevs, you can actually remove a disk from ZFS and shrink the pool. You can't do it with RAID-Z, but with single disks or mirrors, you can actually remove disks from ZFS and make your pool smaller, if you decide you don't need such a big pool anymore.

zpool checkpoint allows you to do a whole-pool snapshot, which lets you undo anything you can do to ZFS. You can only have one, but basically, once you have a zpool checkpoint, nothing actually gets deleted when you delete stuff.
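A typical checkpoint workflow uses the real zpool subcommands shown below; the pool name "tank" is an example, and since these commands rewrite pool state, this is only something to try as root on a scratch pool.

```shell
#!/bin/sh
# Whole-pool checkpoint workflow ("tank" is an example pool name).
zpool checkpoint tank                       # take the one allowed checkpoint

# ...now perform the risky operations: renames, destroys, zpool add, etc...

# If it went sideways, rewind the whole pool to the checkpoint:
zpool export tank
zpool import --rewind-to-checkpoint tank

# Or, if everything worked, discard the checkpoint and reclaim the space
# that was being held for the rewind:
zpool checkpoint -d tank
```

The rewind has to happen at import time because the checkpoint is a property of the whole pool, not of any one dataset.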
So even if you destroy a dataset, it gets marked as free, but we don't actually delete or overwrite the data. So no space ever gets freed, and eventually you fill up your pool. But it means that if you're doing something like an upgrade, and you're going to delete a database and make a new copy or whatever, especially on an appliance, it means you can always undo it all. So you create the checkpoint, rename stuff, delete stuff, add disks, whatever you're going to do, especially the operations in ZFS that normally are really risky. You create the checkpoint first, do it, and if you're happy with it, destroy the checkpoint and get your free space back. But if it goes sideways, you just export and import from that checkpoint, and, you know, things are unrenamed and undeleted and everything's back the way it was.

zpool initialize goes through and writes to every sector. So if you're using thin-provisioned storage, especially something like Amazon EBS, it turns out the first time you write to a block is a lot slower than later writes, because Amazon is in the background having to go find space and allocate that block. So zpool initialize allows you to write all that space at the beginning so that your disk will be fast. Even if, you know, you have a terabyte of storage from Amazon, you can claim it all now so it'll be fast as you're using it.

The new space map encoding makes it more efficient to have very large drives: the space maps take less space and load faster.

Channel programs: if you want to know more about those, Matt gave a whole presentation about them at BSDCan, but they basically allow you to have little Lua scripts that run inside the ZFS transaction lock, so that you can do many administrative operations as one atomic unit. So if you need to, say, roll back 10 datasets, or create and manage snapshots and renames and a bunch of stuff, it allows you to do them all as one atomic operation by writing a short Lua script.

And then large dnode support is a Linux-specific feature to support very large directories, because Linux is bad at that.

Some of the features we'll get when we import the newer version of ZFS include encryption: native encryption per dataset, so each dataset can have its own separate encryption key, and you can unmount a dataset when you're not using it and unload the key, so the data is actually at rest and protected. It also allows scrub and resilver to still happen without the encryption keys loaded, because the checksum is stored half as the plaintext checksum and half as the encrypted-text checksum.

The multi-import protection I mentioned is so that if you have two heads connected to a common disk or a common JBOD, you don't import the pool on both at the same time.

And then metadata allocation classes, or special allocation classes, allow you to put all your metadata on a dedicated device, instead of mixing it with your data on the pool.

Parallel ZFS mount: if you have a thousand datasets, you can mount more than one at a time. TRIM, done the new way, as already mentioned. zpool sync is just a command to make sure that everything has been flushed before you, you know, reset a VM or something. And being able to restart a resilver if you need it. Okay, any other questions?

[Audience] Okay, thank you very much. I've been looking forward to the channel commands. Can I finally do my multiple snapshots across pools? No, the channel programs are transactions, which are per pool. It's per pool, yeah. [inaudible]

There is a new update to the RAID-Z expansion work in the ZFS on Linux repo, which you can easily pull with a git rebase into the ZFS on FreeBSD tree. It's still very beta, but it's coming along.
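The channel programs mentioned above are run with `zfs program`, which executes a Lua script inside one ZFS transaction. This is a hedged sketch, not from the talk: the pool name `tank`, the script name, and the snapshot names are assumptions, and it needs a real pool to run.

```shell
# Hypothetical channel program: destroy a set of snapshots as one
# atomic unit -- either they all go, or none do.
cat > destroy-snaps.lua <<'EOF'
args = ...
argv = args["argv"]
-- First verify every snapshot can be destroyed; if any check fails,
-- the assert aborts the whole program and nothing is destroyed.
for i = 1, #argv do
    assert(zfs.check.destroy(argv[i]) == 0, "cannot destroy " .. argv[i])
end
-- All checks passed: perform the destroys inside the same transaction.
for i = 1, #argv do
    zfs.sync.destroy(argv[i])
end
EOF
zfs program tank destroy-snaps.lua tank/a@old tank/b@old
```

The check-then-sync pattern is the usual way to make a channel program all-or-nothing: `zfs.check.*` calls are dry runs, and the `zfs.sync.*` calls only execute once every precondition has passed.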
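The per-dataset native encryption described above looks roughly like this in practice. A minimal sketch, assuming a pool named `tank`; the dataset name is made up, and the commands require an OpenZFS version with the encryption feature.

```shell
# Create a dataset with its own encryption key, derived from a passphrase.
zfs create -o encryption=on -o keyformat=passphrase tank/secret

# When the data isn't needed, unmount it and unload the key,
# so the data is at rest and protected.
zfs unmount tank/secret
zfs unload-key tank/secret

# Later, reload the key and mount again.
zfs load-key tank/secret
zfs mount tank/secret

# A scrub still works even while keys are unloaded, because the
# on-disk checksums can be verified without decrypting the data.
zpool scrub tank
```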
[Audience] There's been progress, finally, after a long time of no progress, on unifying the ZFS implementation in one repository, maybe just keeping the OS-specific bits in separate folders and such. But it kind of reminds me of the Apache Foundation or something like that, because then it seems the project would probably slow down quite a bit. Because as a developer, if I want to change something, I would have to make sure all platforms still work. So it might discourage me from making changes, or I couldn't make such big changes.

It mostly depends on the change. Most changes are only specific to ZFS and won't require any OS-specific bits. When they do, that's why we have the ZFS leadership call. If a platform were to fall behind and become a roadblock, it would probably get disconnected from the CI, and that would be its problem. So we have the monthly leadership call specifically to prevent this: if, say, a Linux developer is working on something and needs a FreeBSD developer's help to get the FreeBSD side of the port done, that's what the monthly meeting is for. No, basically this is mostly the result of the project speeding up more than we could handle the old way, so we're trying to adapt to move faster. Although at the same time, we are trying to make sure that we don't break anything, because as Kirk says, you know, if you corrupt somebody's file system once, they will never trust you again.

[Audience] You mentioned the initialize feature on thin-provisioned underlying block storage. Yep. What does Amazon think about it? That's their problem. Well, I mean, what I'm saying is, they bill me for the whole terabyte the whole time anyway. So yeah, what I see as a hazard is, you know, they may start pushing back on that, or banning that kind of usage. Well, how could they even do that?
[Audience] I mean, from the political standpoint it's a little bit dangerous, like when we had this talk, I don't know if you were there, regarding DoH, DNS over HTTPS. It seems like a good idea at the beginning, but when you think it through... It's definitely an optional feature, and most people won't need to use it. But for Delphix's database virtualization appliance, it was very important that they don't get random, you know, 100-millisecond write latencies on blocks just because a block hadn't been written to before. Thank you. Yep.

[Audience] Also, about zpool initialize: if I'm reusing some old disks to create a new pool, does it zero them out? You can, although there's also the trim-on-init feature, so when you first create the pool, you can trim the whole drive before you start, and that would be more efficient. Okay, thanks. Going once, twice... Okay, thank you.