Hi, this is Steve French. I'll be giving a talk today on network file system security — an overview of securing SMB3, the most popular protocol for accessing files over the network. To give a little introduction about who I am: I'm the author and maintainer of the Linux CIFS client, I've been working on the Samba team for many years, I'm the co-author of the SNIA CIFS Technical Reference, and I'm a principal software engineer in Azure Storage at Microsoft, focusing on Linux and accessing network storage safely and efficiently. I'll be giving you an overview of SMB3, the protocol, and network file systems in general, then talk about some of the security challenges and the different features — authentication, identity, encryption, access control, and data integrity — and then talk about current efforts and where we need to go from here as we improve network file system security, including the kinds of things we need help with from many of you in improving security for this important workload. First of all, a lot of people ask why network file systems matter. There's been a lot of focus, for example, on NVMe over Fabrics and RDMA, and people question why network file systems. Well, the answers are fairly obvious. Network file systems are a superset of what block devices can do, and a superset of what objects can do, and all the applications we're used to save files. The presentation I'm giving today is saved as a file; I typically back these up into Azure, into the cloud, sometimes to a local server. Now, historically, one of the problems had been performance, but with a well-configured NAS, especially with support for RDMA, faster transports, and faster encryption, you can now get 80 to 90% of the performance you might get with a block device. But what you get in exchange is much better security: you get the ability to identify the owner of a file more easily, and more metadata. 
It's easier to understand security in a file protocol. Backup, encryption, archive, compression, quotas, quality of service — all these are much easier when you know the owner of a file and the access patterns of a file. When you access a block device, there's very little the block device knows about who's accessing it and why. A network file system is much more complex, but you get an incredible variety of features. One of the challenges with network file systems is keeping the performance good and the security good while taking advantage of these useful features. A network file system knows who's making the operation, it knows the application's access patterns, and it knows a lot more about the type of access the application is requesting. So it's a complex problem, but it's a much bigger superset of what block devices or blobs and objects can do. So it's important to understand why network file systems matter. We're never getting away from the idea of storing files on disks, whether local or remote; storing files on specialized block devices still goes through a file system, and it's the same thing over a network. We want to be storing files over a network file system onto network storage where possible, because it lets us take advantage of these features and this enhanced metadata that helps us secure workloads better and make them higher performing. But there are real disadvantages: network file systems are much more complex than block devices. I used to give a presentation at the annual Linux conference in Ottawa on how to write a file system in 45 minutes. Well, you can do it, but file systems are incredibly hard to optimize, they can be slower than block devices, and we have a lot more security features and challenges. So this isn't easy. 
The NFS and SMB clients in Linux are each about 50,000 lines of code, and they're quite a challenge, but they're incredibly well optimized for certain workloads, and we can continue to improve that. One of the things we're going to try to do here is talk about how to improve the security features and where we need to go. Now, why do we talk about NFS and SMB? Well, over this 36-year period, they're the two survivors. In 1984, Barry Feigenbaum of IBM, a recent PhD graduate, invented the SMB protocol, and Sun, that same year, invented the NFS protocol. Many other protocols have come and gone; these are the two that live on. They are by far the most widely deployed, and it's hopefully just coincidental that this coincides with George Orwell's famous 1984. The current dialects, NFS 4.2 and SMB 3.1.1, are almost unrecognizable compared with the original 1984 dialects; they've evolved quite a bit. Looking at these two network file systems, there's a reason they survived: they've gotten a lot of attention, they're deployed incredibly widely — major devices have them — and they're well understood. But SMB has some unique advantages. One, for example, is that it's insanely well documented, for various reasons, and it's incredibly helpful to have these thousands of pages of very detailed documentation and many hundreds of thousands of lines of test cases. In general, how do they differ? One way to think about it is that the SMB protocol is much broader in scope. You can see some examples here: BranchCache, the witness protocol, file replication, the global namespace (DFS), claims-based ACLs — it's a much broader protocol than NFS. But there are exceptions. NFS has support for pNFS, where the metadata and the file data can be separate: pNFS allows you to query the layout of a file and then access the file's contents from different data servers. That's something SMB doesn't have. 
The backend typically provides that. In addition, NFS 4.2 added support for labeled NFS to better support SELinux, allowing the client to enforce SELinux policy. Now there's also one other major difference — probably a disadvantage for NFS: NFS is layered on top of another protocol, SunRPC, whereas SMB talks directly to TCP, directly to RDMA, or in the future directly to QUIC. This means NFS is a little harder to optimize for network operations, although there is NFS over RDMA. Both these protocols have borrowed features from each other for many years — NFSv4 ACLs, for example, are based on SMB ACLs — and you can see that they've learned from each other. We'll need to dive in on some of the security features as well to explore this; both support Kerberos, for example, but in different ways. So let's give a quick overview of SMB. SMB 3.1.1 is the current dialect, introduced in late 2015. It's by far the most broadly deployed network/cluster file system protocol: it's the default on Windows, Macs, and various embedded devices, it's built into media players and various operating systems including Linux, and it's got clients and servers on every major operating system — Samba is common on Linux, for example, and now there's a kernel server on Linux as well. It's part of a family of protocols that offers the broadest set of functions of any network or cluster file system protocol; you can go into documents like MS-SMB2 and MS-FSA and see examples of this. Vendors have also done vendor-specific extensions — Apple, for example — to offer client-specific and server-specific features. It's also the best documented: just the core document for the file protocol, MS-SMB2, is over 470 pages long, Samba's test suite — smbtorture, as they call it — is over 200,000 lines of code, and the Microsoft open-source test cases are also huge. 
And there are multiple annual test events, some coordinated by SNIA, that go into a lot of depth testing this zoo of clients and servers against one another. On Linux, although Samba's smbd is the most popular server, there's also now an open-source kernel server, ksmbd. There are also various userspace libraries; Ronnie Sahlberg's libsmb2 is a great example, since it's embedded in various applications. And there are servers from many, many vendors — NetApp, EMC, and others — ranging from very small devices to the largest file server in the world: Azure Files. There's a great presentation on this from the Storage Developer Conference a few weeks ago. So what about the key security features of SMB? For authentication, Kerberos — you can use other things, but typically Kerberos — and Kerberos is encapsulated in SPNEGO, so it's opaque to the SMB protocol, but it allows SMB to integrate well with directory services like Active Directory and AAD. After you authenticate, there's a preauth integrity step that verifies that no man in the middle has changed the authentication. From there on, you have the option — and it's the default in things like Azure — to encrypt, and all the traffic after that preauth integrity check is encrypted. Typically that's AES-128-GCM, although AES-256-GCM is now possible and there are other choices as well. It can be configured on a per-share basis, so you can decide that some shares need tighter security, or on a per-server basis. The client can also require that encryption be configured on a share. So the client can force encryption or the server can force encryption, and it can be required either per share or per server. It's very flexible — much more flexible than a VPN, for example. Now for access control, various mechanisms are possible, but SMB3 ACLs — rich ACLs — are most common. 
And in the Windows example, there are optional Dynamic Access Control (DAC) claims-based ACLs that give you much more enhanced access control features. In addition, there's something called a SACL that allows you to configure auditing. Now, there's a fairly serious problem when you're dealing with network file systems, and that's understanding which users are which, because users coming in from client one and client two could have the same UID on their systems but be very different people. smfrench may be UID 1000 on one client and UID 2000 on another. It's extremely important to be able to map identity: who is a particular user? Different protocols choose different mechanisms for this — some use OIDs, some use SIDs. NFS has user@domain, which it's now possible to use when you configure ACLs, for example, so you can configure a realm name to scope the user's name. But in the SMB world, they use a globally unique identifier called a SID, and there's an example here. The SID can be mapped in various ways to the more primitive UIDs that POSIX operating systems like Linux use, and that can be done via services like winbind or SSSD. SIDs can even be hashed. But the most common way the large enterprises do this is RFC 2307, an LDAP schema that allows you to associate a Linux (POSIX) UID with a user in a directory service — a globally unique user. So what's the goal here? The goal is to be able to access files — my presentation, for example, whether I'm storing it in the cloud or on a small local device — securely and efficiently in a very, very hostile world. 
When I'm backing up this presentation, or if I were working on a resume or on patch sets, it's very useful to be able to save those files in the most natural way possible, and to be able to save them into the cloud and to local servers securely, efficiently, and quickly. And we have a lot of common use cases: containers spinning up and needing to store container images, backing up systems. Macs, as you probably know, have the Time Machine feature; if you're sitting in Starbucks with your Mac, it can be backing up over SMB into Azure, into the cloud. So we need to make sure that users don't move away from a secure file protocol to some less secure, less reliable, or less functional mechanism just because of some missing feature in SMB3. It's important, I think, that these commodity protocols like SMB and NFS have enough key features that users don't make bad decisions and move to less secure or less functional protocols in order to save their important data. It's not just about saving presentations or video files; it's about being able to run workloads in this new world where data is spread broadly across the enterprise, and sometimes into the cloud, and to do that efficiently and securely in a very hostile world. Let's dive into some of the details. Authentication is very different from authorization: who are you, versus can you do that? There are various ways to authenticate. In SMB, you typically authenticate with Kerberos, though you can use NTLMv2 as well — that's been around for more than 20 years. What we need to think about is other mechanisms. Maybe we need to reserve, through the IETF, additional SPNEGO OIDs to identify other mechanisms. 
Today we have the option of PKU2U — Macs have a peer-to-peer Kerberos variant — but we may have to go to things like OAuth in the future and be able to opaquely plug in additional security mechanisms for file protocols to use. The Kerberos authentication flow is fairly simple: the client gets a ticket-granting ticket from the KDC, usually Active Directory or Samba as an Active Directory domain controller or some equivalent. The ticket-granting ticket then allows us to get a service ticket for the server we're trying to contact. The client sends that service ticket to the server, and both ends can validate that the user is who they expect. This is an extremely common, well-tested mechanism, and it's good for a lot of environments. But what we need to know going forward — and some of you may have expertise here — is whether we need to offer additional options, maybe OAuth, to allow other authentication models for saving files securely on the internet. Now, these mechanisms are largely opaque to SMB3, because SMB3 leverages the SPNEGO standard to embed security tokens inside the file protocol during session establishment. But that does require that Linux, and the servers and clients, agree on the OID that identifies whatever security protocol we may use in the future — take a look at RFC 2478, for example. Now, one of the problems we have in Linux, which is fairly obvious, is that there's no easy way to add libraries that would make authentication opaque. There's no SPNEGO plugin directory where we can drop in a config file to add a new mechanism. What that means is that we embed too much knowledge of the security protocols into the client and server libraries for the file protocol. Now, what if we have a security disaster? Let's say at next year's security summit they find some problem that forces us to evolve the authentication protocols. 
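[Editor's illustration, not from the talk.] To make the OID plumbing concrete, here is a minimal sketch of how a mechanism OID like SPNEGO's 1.3.6.1.5.5.2 (RFC 2478/4178) is DER-encoded on the wire; any new authentication mechanism plugged in under SPNEGO would need an OID like this that both ends recognize:

```python
# Minimal DER encoder for an ASN.1 OBJECT IDENTIFIER, showing how the
# mechanism OIDs that SPNEGO negotiates are represented in the token.
def encode_oid(arcs):
    # First two arcs are packed into one byte: 40*arc1 + arc2
    body = bytes([40 * arcs[0] + arcs[1]])
    for arc in arcs[2:]:
        # Remaining arcs are base-128, high bit set on all but the last byte
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body += bytes(reversed(chunk))
    return bytes([0x06, len(body)]) + body  # tag 0x06 = OBJECT IDENTIFIER

SPNEGO = encode_oid([1, 3, 6, 1, 5, 5, 2])        # SPNEGO itself
KRB5 = encode_oid([1, 2, 840, 113554, 1, 2, 2])   # Kerberos v5 mechanism
```

These two encodings are the well-known byte strings you see in a Wireshark decode of an SMB session setup.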
How do we plug those in? We need some way in Linux of exposing these libraries so that things like Samba and the kernel clients can access them without having to embed them — as you see with Samba today, which has the Heimdal Kerberos client and server and OpenLDAP embedded within it. It can call out to other libraries as well, but it's somewhat limited, and we need to make it easier for applications to call out to new security libraries. So how do you protect against tampering? SMB 3.1.1 has a really nice mechanism for validating that no man in the middle has modified the authentication flow: when the tree connect (the mount) comes, the initial session setup requests and the negotiate protocol exchange are validated to make sure they haven't been modified, and from the tree connect on, traffic is typically encrypted, or at least signed. So you know the authentication hasn't been tampered with — a very powerful and useful feature. Mandatory negotiate contexts are exchanged early on that allow additional security features to be negotiated; this allows support for many different encryption algorithms and some really nice security features. Now, in older protocols — SMB1, CIFS — there were no man-in-the-middle attack mitigations, and Ned Pyle and others at Microsoft have given many talks on why you should never, never, never use CIFS. We don't want another WannaCry; we don't want another major security incident. Don't use SMB1, don't use CIFS. What you're using, and what cifs.ko negotiates by default, is SMB 3.1.1, and this is much more secure. So what about identity — figuring out who you are? In the dark ages, in the 90s, we used something called NIS, the Network Information Service. This was replaced in 1998 by a much more secure alternative that used LDAP to store the POSIX UID: RFC 2307. Luke Howard did a nice job defining this. 
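[Editor's illustration.] The preauth integrity mechanism described above is essentially a SHA-512 hash chain over the negotiate and session setup messages (per MS-SMB2). A simplified sketch of the chaining, with placeholder bytes standing in for real SMB2 frames:

```python
import hashlib

def update_preauth_hash(prev_hash: bytes, message: bytes) -> bytes:
    # MS-SMB2: new hash = SHA-512(previous hash || raw message bytes)
    return hashlib.sha512(prev_hash + message).digest()

# The hash starts as 64 zero bytes; these placeholder frames stand in for
# the real negotiate request/response and session setup messages.
h = bytes(64)
for frame in [b"negotiate-request", b"negotiate-response", b"session-setup"]:
    h = update_preauth_hash(h, frame)

# Client and server each compute this chain independently over the frames
# they sent/received; if a man in the middle altered any of them, the final
# hashes differ and session setup signature verification fails.
```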
One of the obvious questions is: are there any alternatives we should be considering? Right now, RFC 2307 is really the only broadly adopted option for storing identity in a directory service. It's not just supported in Active Directory; it's supported in most LDAP servers. Now, there were two variants: the original RFC 2307 didn't have a way of figuring out what groups a user was in — it could tell you what users were in a group, but not the reverse — so it was extended with something called RFC 2307bis. There was also a proposal from Mark Bannister in 2015 for DBIS, a directory-based information service, but it appears to be abandoned. In many cases, users just give up and hash the UIDs; winbind and SSSD have options to do that, where if you have an unmapped user, you can hash the global identity to come up with a local UID in a consistent fashion. But it's very important for us to understand whether there's some way forward, other than RFC 2307, for defining who a user is. And there's an important, pretty serious problem here. Although the winbind and SSSD security services have private APIs — D-Bus in the SSSD case, a private RPC in the winbind case — for mapping names to UIDs and UIDs to SIDs, those are specific to each application. There's no Linux API that lets you map among the four ways you might represent a user, and this is fairly important. If you have multiple protocols — NFS, SMB, HTTP, perhaps HDFS — accessing the same storage, some of those protocols are going to represent the owner of a file as an OID, some, like SMB, as a SID, and NFS could use a UID or a user@domain. 
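[Editor's illustration.] As a sketch of the "just hash it" fallback — this shows the idea only, not winbind's or SSSD's actual algorithm — you can derive a deterministic local UID from a SID string like this:

```python
import hashlib
import re

def sid_to_uid(sid: str) -> int:
    # Sanity-check the textual SID form, e.g. S-1-5-21-<dom>-<dom>-<dom>-<rid>
    if not re.fullmatch(r"S-\d+-\d+(-\d+)+", sid):
        raise ValueError(f"not a SID: {sid}")
    # Hash the SID and fold it into a 31-bit UID. Deterministic, so every
    # client computes the same local UID for a given SID — but collisions
    # are possible, since a 32-bit UID is far smaller than the SID namespace.
    digest = hashlib.sha256(sid.encode("ascii")).digest()
    return int.from_bytes(digest[:4], "big") & 0x7FFFFFFF
```

The collision risk is exactly why hashing is a last resort compared with an authoritative mapping like RFC 2307.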
So it's extremely important that there's a mapping not just between name and UID, which getpwnam and getpwuid do today, but a standard library interface to map among the four ways you can represent an identity: OID, SID, UID, and name. You're going to have to have this; otherwise you're going to have people calling private APIs all the time. winbind is great — it's a wonderful service — but do we want a way to plug in mechanisms other than winbind for mapping identity? Samba includes winbind, as I mentioned, but having an API in Linux, extending PAM and NSS, that lets you map the global identity for a user — user@realm — to a globally unique identifier like a SID, and then map that, perhaps somewhat imperfectly, to a local UID, would be valuable. A local UID could be 32 bits, so it's not necessarily going to be globally unique. And this mapping may be done differently over time — we may not use RFC 2307 forever — but it really shouldn't be the file protocol doing this. It should be something that identity and security libraries do, something that plugs into PAM and NSS. Today, NFS and SMB each have to have their own ID-mapping callouts because there is no common API that lets you do this effectively, at least on Linux. What's the alternative? Well, you end up mapping local users to guest, or you have a default mapping of all local users to some default Kerberos user, and that's really not sufficient for most use cases. What we want to get to is not having the SMB client and the NFS client calling into their own idmap callouts; we would like them to use generic PAM and NSS functions to map back and forth among the four ways you can represent a user: name, UID, globally unique SID, or OID. So, access control. Here's an example from Windows of what you might see in the Windows security panel when you right-click on a file: notice that you have users, groups, and permissions. 
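[Editor's illustration.] What such a generic mapping layer might look like — purely a hypothetical API sketch, since no such common Linux interface exists today — is a single record carrying all four representations of a user, with lookup by any one of them:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Identity:
    name: str  # globally scoped name, e.g. user@realm
    uid: int   # local 32-bit POSIX UID (not globally unique)
    sid: str   # SMB-style globally unique identifier
    oid: str   # object ID, as some cloud/HTTP protocols use

class IdMapper:
    """Hypothetical unified idmap service (the role winbind/SSSD fill
    today with their private D-Bus/RPC interfaces)."""
    def __init__(self):
        self._ids: List[Identity] = []

    def add(self, ident: Identity) -> None:
        self._ids.append(ident)

    def lookup(self, **key) -> Optional[Identity]:
        # Look up by any one representation: name=, uid=, sid=, or oid=
        (field, value), = key.items()
        for ident in self._ids:
            if getattr(ident, field) == value:
                return ident
        return None
```

With this shape, `mapper.lookup(sid="S-1-5-21-...")` and `mapper.lookup(uid=1000)` return the same record, which is exactly the property the file-protocol clients would want from PAM/NSS instead of rolling their own callouts.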
This is the simple version; you can obviously bring up a more advanced panel that shows the individual permissions. Now, this is much richer than what you see in Linux — the mode bits are quite primitive, and POSIX ACLs are also very primitive. But it's not just Windows that has GUIs for this; here's one of the cifs-utils tools that allows you to modify rich ACLs over SMB as well — you can see the SID and you can see the permissions here. There are lots of command-line tools in Linux too: smbcacls, getcifsacl. But I think it's important to contrast this with what users are used to on Linux, where you only have 12 mode bits: the 0777-style rwx permissions plus the setuid, setgid, and sticky bits. These are very primitive compared with ACLs. In Linux, if you want ACLs, the only broadly supported kind is POSIX ACLs, and they are more useful, but they don't have deny ACEs and they have far fewer features than even the standard SMB ACL, much less claims-based ACLs. Now, there's been a push to get rich ACLs into the kernel since 2007, and they are implemented by various file systems and operating systems, not just NTFS with SMB and NFS: ZFS, for example, has support for rich ACLs, and Macs have support for rich ACLs. It's important to realize that they're much more functional than the POSIX ACLs you see by default in Linux. In addition, Windows introduced the concept of claims-based ACLs, which let you control permission based on other things — the location of the user, or whether they're running on a managed client. This is supported via Kerberos and is much richer in function, allowing a much more detailed access control model. Here's an example from Windows basing one of the access control entries on the location of the user. And also notice in this example that you can do logical ANDs and ORs. 
You can imagine a case, for example, where somebody in group managers would have permission to something except if they're also in group janitors. You could imagine different cases like this where ANDs and ORs are used together to make richer access control decisions. Now, we don't have this in Linux, but it's an example of something that might help as we go forward, and it is supported over SMB. Once again, this doesn't necessarily mean we have to implement it in all the Linux servers, but it would be useful to have a more visible way of exposing it, because in many cases apps want to check their permissions on files. You see this a lot on Linux and Macs, where on open, or just before open, applications query what permissions they have for the file, and sometimes change those permissions on files they own. And of course servers always check permissions on open of a file or directory, because they have to understand whether to allow access for the reads and writes that come in. And there can be real conflicts between the permissions that mode bits express, POSIX ACLs, and rich ACLs. It's been a really interesting problem over my many years of development: how to map permissions, or emulate them, among these three models. Examples: how would you map 0707, or how would you map cases where you want to deny permission to a group that the user is a member of? Not an easy thing, right? In that example where somebody's a member of group janitors and janitors has zero permission, the deny for that group would prevent the user from accessing the file, even though the mode bits could say 0707 — that's very hard to represent. So there are cases where mode bits are really tough to map into ACLs, and of course there are millions of cases where ACLs can't be represented as mode bits. 
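[Editor's illustration.] The managers-but-not-janitors case works because of deny ACEs and ACE ordering. A simplified sketch of Windows-style ACL evaluation (greatly simplified from the real MS-DTYP rules, but showing why order and denies matter):

```python
READ, WRITE = 0x1, 0x2

def check_access(aces, principals, requested):
    # aces: ordered list of (kind, principal, mask). In the canonical
    # Windows ordering, explicit deny ACEs come before allow ACEs.
    granted = 0
    for kind, principal, mask in aces:
        if principal in principals:
            if kind == "deny" and mask & requested:
                return False  # any requested bit hit by a deny => denied
            if kind == "allow":
                granted |= mask
            if granted & requested == requested:
                return True   # everything requested has been granted
    return False

acl = [("deny", "janitors", READ | WRITE),
       ("allow", "managers", READ)]

# A manager who is also a janitor is denied; a plain manager can read.
assert check_access(acl, {"managers", "janitors"}, READ) is False
assert check_access(acl, {"managers"}, READ) is True
```

POSIX ACLs have no deny entries at all, which is why cases like 0707 (group denied, everyone else allowed) are awkward to translate.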
And then there's a related question with POSIX ACLs: since they don't have deny ACEs, should they just be emulated when somebody queries a POSIX ACL over NFS or over SMB? Should you show the best approximation, leaving out the deny ACEs? Should we allow users to change them remotely — set POSIX ACLs? Another interesting question: what does chown, changing the owner, do to the ACL? In the SMB case, there's an access control entry for the owner's SID, but when you change the owner, that access control entry is now stale and doesn't represent the current owner. So when we do a chown, do we change the ACL as well — not just the owner, but also the access control entries? And there's an access control problem that's quite common in the Linux world. In POSIX, permission to delete a file is most commonly enforced by looking at the parent directory's mode bits, but in a rich ACL world, you're most often looking at whether the object itself has a delete permission. There isn't an equivalent of that in mode bits — the object's delete permission doesn't exist; the permission checked is on the parent directory. This can be a real problem: if you have a temp directory and people are creating files in it, you only want the owner of a file — the person who created it — to be able to delete it. You don't want the temp directory's permissions to allow George to delete Bill's files. That's easy in a rich ACL world, but hard when you're just using mode bits. So this semantic conflict between what remove means on Linux with ACLs and without ACLs is an interesting problem, and it comes up a lot with NFS and SMB when you're trying to expose files to a Linux client over a network file system protocol. So what about SELinux? SELinux implements mandatory access control in Linux, and individual objects have security labels stored as xattrs. 
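[Editor's illustration.] The two delete-permission models can be sketched side by side (a simplification of the real POSIX and MS-FSA checks; the sticky bit shown here, as used on /tmp, is the classic partial POSIX workaround for the shared-temp-directory problem the talk describes):

```python
def may_delete_posix(user_uid, parent_writable_by_user, parent_sticky,
                     parent_owner_uid, file_owner_uid):
    # POSIX: deletion is governed by the PARENT DIRECTORY, not the file.
    if not parent_writable_by_user:
        return False
    # With the sticky bit set (e.g. /tmp), only the file's owner or the
    # directory's owner may delete, even in a world-writable directory.
    if parent_sticky:
        return user_uid in (file_owner_uid, parent_owner_uid)
    return True

DELETE = 0x10000  # illustrative stand-in for the rich-ACL DELETE right

def may_delete_rich_acl(user_rights_on_file):
    # Rich ACLs: the OBJECT ITSELF carries a DELETE permission bit,
    # so per-file delete policy needs no directory-level tricks.
    return bool(user_rights_on_file & DELETE)
```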
And RFC 7204 describes potential protocol requirements to support this over NFS, so it's a useful read — Tom Haynes's document there, giving the requirements for labeled NFS, is quite interesting. NFS 4.2 does support optional extensions for security labels, but this is client-enforced in NFS. One question is whether something similar should be done in the SMB3 protocol, which already supports xattrs and alternate data streams, by defining something like the seclabel you see in NFS. So what about encryption? In SMB, four algorithms are supported: AES-128-CCM and AES-128-GCM, and now AES-256-CCM and AES-256-GCM have been added. Typically AES-128-GCM is negotiated — it's fast and offloaded to the hardware. In my testing, large I/O is about five times faster to process on the client; the overall gain is a bit less, but on a decent, typical VM, processing those frames on the client is five times faster when you use AES-128-GCM rather than CCM. And now we have support for the so-called "military grade" encryption, AES-256-GCM, which is much stronger. These provide very good performance and very strong security, and many servers, like Azure, negotiate encryption by default, so every frame is encrypted from the tree connect on. Why do we use AES-128-GCM? Because it has hardware support and it's very fast. From the 5.3 kernel on, you can see it in the SMB client on Linux, and Samba added support for AES-128-GCM a little over a year ago as well. There's also some recent work in progress on the stronger AES-256-GCM support, which will be enabled via a global module load parm on the Linux client. So what about RDMA security? SMB Direct is very, very fast and quite common in the SMB world. RDMA encryption shipped in Windows in the first half of 2020 and includes support for the 256-bit AES we talked about earlier. 
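[Editor's illustration.] When encryption is negotiated, every message after tree connect is wrapped in an SMB2 transform header. A sketch of its 52-byte layout from MS-SMB2 (the field values here are placeholders, not a real captured frame):

```python
import struct

def build_transform_header(nonce: bytes, orig_size: int, session_id: int,
                           signature: bytes = bytes(16)) -> bytes:
    # MS-SMB2 SMB2 TRANSFORM_HEADER (52 bytes, little-endian):
    #   ProtocolId(4) Signature(16) Nonce(16) OriginalMessageSize(4)
    #   Reserved(2) Flags(2, 0x0001 = encrypted) SessionId(8)
    # GCM uses a 12-byte nonce, so the 16-byte field is zero-padded.
    return struct.pack("<4s16s16sIHHQ",
                       b"\xfdSMB", signature, nonce.ljust(16, b"\x00"),
                       orig_size, 0, 0x0001, session_id)

hdr = build_transform_header(nonce=b"\x01" * 12, orig_size=1024,
                             session_id=0x1234)
```

The encrypted payload of `OriginalMessageSize` bytes follows this header on the wire, and the 16-byte signature is the AEAD authentication tag.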
RDMA signing is also feature-complete, and support for AES-GMAC will make it faster. There's a nice presentation on this from the Storage Developer Conference a few weeks ago that would be worth looking at if you're intrigued by SMB Direct security. But the message is that it's extremely fast and can leverage many of the same security features when running SMB over RDMA. Like I said, it's quite common in the Windows world, but SMB Direct is becoming more common on Linux as well, and I'm looking forward to getting those kinds of features — the faster encryption for RDMA — into the Linux client too. Now, what about QUIC? Encryption is very important to discuss, but there's a related problem: in many networks, whether the traffic is encrypted or not, if it goes to port 445 it's going to be blocked. Working on this presentation, for example, I wasn't able to save it to Azure because my local ISP, Spectrum, blocks port 445 here. On a recent trip — a different hotel chain each of three nights — two of the three did not block 445, and one did. My son's ISP doesn't block 445; the one I use here does. It's an interesting problem. So how do we get around the port 445 problem? One way — and you can see great presentations on this from the last few Storage Developer Conferences — is to use QUIC. Much of the network traffic for HTTP already uses QUIC. It has faster connection setup and the nice performance features listed here: good congestion control, no head-of-line blocking. Here's an example from the Windows presentation of how QUIC fits into the network stack. As you can see, the SMB client and SMB server, unlike many other protocols, talk directly to the transport: whether it's TCP, RDMA (SMB Direct), or QUIC, SMB communicates directly with the stack. 
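[Editor's illustration.] You can check whether your own network path blocks port 445 with a few lines of socket code (the hostname in the comment is hypothetical; substitute your own server):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    # Attempt a plain TCP connect. ISPs and hotel networks that filter
    # SMB typically drop traffic to 445 silently, which shows up here
    # as a timeout; an unfiltered closed port fails fast with a refusal.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_reachable("myaccount.file.core.windows.net", 445)
# (hypothetical hostname; SMB over QUIC would instead use UDP port 443)
```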
So it's kind of cool and relatively easy to implement in SMB. Like I said, there's a good presentation on this from a few weeks ago at the Storage Developer Conference, and a similar one from a year ago, with a nice demo — and Wireshark can even decode this now. So obviously there's been quite a bit of progress implementing SMB 3.1.1 over QUIC, especially on Windows. What about the Linux client? Well, we've got an obstacle: there's no kernel driver for QUIC. So one of the things we need to figure out is what to do. QUIC itself is about 30,000 lines of code in user space — at least the msquic module — and the other userspace QUIC implementations are of similar size. msquic has some nice features, so I was looking at that, but there are two others as well. It depends on TLS, but TLS 1.3 support was merged into the kernel in February of 2019, so there's already TLS support in the kernel. You can take a look at this GitHub project; it doesn't use kernel coding style, but it might be mergeable with some changes. What was interesting for me at the annual Storage Developer Conference a few weeks ago was that three other use cases came up — three other network/cluster file systems are also interested in QUIC. So we really need to figure out a way to drive this secure, encrypted transport, which has these nice performance advantages but is currently only available in user space, into the kernel. I'm sure there'll be politics involved and lots of pushback, but we have to figure out a way to do this, because we need the security features — and the performance features — of QUIC, if not for SMB, then for these other three use cases we were talking about two weeks ago. 
So when SMB 3.1.1 is layered on top of QUIC, there's really no difference in SMB multichannel, signing, or encryption. It uses server certificates, and you don't have to double-encrypt. There's a good talk by Sudhir Dantaluri from this year's Storage Developer Conference if you want to look at it. What about data integrity? ext4 supports something called fs-verity to allow enhanced integrity checking. This is invisible over SMB, but SMB does allow setting FILE_ATTRIBUTE_INTEGRITY_STREAM to enable enhanced integrity checking on some file systems, and also FILE_ATTRIBUTE_NO_SCRUB_DATA, which is kind of the reverse: it disables data integrity checks by the background scanners. We could probably make these available in statx. David Howells added statx a few years ago, so these extended stat attributes could expose them and let us mark files we want particularly strong data integrity on. Now, one of the cool things about SMB is that we have many cases where the client wants to do client-enforced security, and in many cases NFS does this as well. In some cases you want the reverse: you want the client to be able to emulate the mode bits from the ACL but not enforce them on the client, because you have these multiuser identities with Kerberos tickets that allow the server to accurately enforce security. But in other models, you want the server mount wide open and the client enforcing security; NFS sometimes has this case. When you do that, the idsfromsid and modefromsid mount options let you store the mode bits, as well as the POSIX identity (the POSIX UID), opaquely in the ACL in a way that allows the client to enforce them. In many other cases you'll be mounting with something like the cifsacl option, where the mode bits are emulated, and you might mount with the noperm mount option and use multiuser, so you're allowing a different Kerberos identity for each local UID.
So here's an example chart that shows the different ways you can configure it. Today the vast majority of people configure a default UID and a default mode on the client to avoid these configuration choices, but you have three models in the client today, a client-enforced model, a default model, and a server-enforced multiuser model, that clients choose from depending on their workload requirements. So what are the to-dos? We have to broaden the supported security scenarios and have better SELinux integration with SMB 3.1.1. We need stronger peer-to-peer auth support, maybe adding support for PKU2U for environments that lack local KDCs. We need to continue the work on stronger encryption to finish off AES-256-GCM, and then there's QUIC support, solving the port 445 problem. And there's a similar feature that's recently been made available for improving packet signing performance by using AES-GMAC. Now, what do we need from the security community? Is Kerberos good enough? Do we need to add support for additional protocols, like OAuth, that can be carried opaquely over SPNEGO for SMB? Do we need to extend PAM and NSS to allow mapping global SIDs and UIDs in a different way than RFC 2307? And what about rich ACLs? Is there a way to get rich ACLs, or at least standardize the API across the six or seven file systems that can support them today, not just NFS and cifs.ko for SMB3? Can we expose file attributes through statx to mark files for enhanced integrity checking? And how do we get a fast, efficient QUIC driver into the kernel to allow encryption an alternate way? Like I said, QUIC has some interesting performance advantages, but it also looks like the way forward for many protocols to do secure, encrypted traffic, and we need to find a way to get that into the Linux kernel. So these are some interesting actions we can talk about and bring up more broadly.
Many of these are far outside the file system scope, but we'd love help on this. If you have questions, feel free to follow up on the linux-cifs mailing list or the samba-technical mailing list, or email me directly at smfrench@gmail.com. Thank you again for your time.