I'm Chuck Lever. I'm not a security person, just wanted to put that out there right up front. If something I say looks like complete BS, you don't have to say you're calling me out; you can say "maybe you don't understand this," and I won't take that personally. I've been in the NFS community for 20 years. I'm a co-maintainer of the Linux NFS server as well as the author of several NFS-related RFCs. So that's my area of expertise, and it gives you a sense of where I'm coming from with this talk. And I want to thank James and the program committee for inviting me here to talk.

I want to start with a simplified, abstract use case that my employer is mostly interested in when it comes to IMA: the attestation of executables. The basic model is that a software vendor generates a signed MAC or checksum of an executable it wants to distribute, and it publishes the application, its public key, and of course the checksum. End users are then protected on systems that have these features. I'm going over this quickly; I'm sure most of you understand how this works. But it has three parts: a way to store the public keys of the software vendors, a security module or service that can actually measure executables before they're used, and a policy for handling appraisal failures. The customer installs the application, which stores the signed checksum close to the application. Then whenever it's opened or used, the security service measures the application and verifies the checksum and the signature on it.

That's how it works in the local case, where the file system is local to the user running the executable. But what happens when the file storage is remote, as in a network file system? And I'm not talking about just NFS here. It could be Ceph, it could be an SMB storage service.
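The vendor-side publish step and the client-side appraisal step described above can be sketched in a few lines. This is a toy illustration only: a real deployment signs the file hash with the vendor's private key (RSA or Ed25519) and verifies it with the published public key; here an HMAC with a shared key stands in for that signature, purely to keep the sketch dependency-free, and `VENDOR_KEY` is a made-up name.

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # hypothetical; stands in for a real keypair

def publish(content: bytes) -> bytes:
    """Vendor side: produce the signed checksum shipped alongside the app."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(VENDOR_KEY, digest, hashlib.sha256).digest()

def appraise(content: bytes, signed_checksum: bytes) -> bool:
    """Client side: re-measure the file and verify the signed checksum."""
    digest = hashlib.sha256(content).digest()
    expected = hmac.new(VENDOR_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signed_checksum)

app = b"\x7fELF...pretend executable bytes"
sig = publish(app)
assert appraise(app, sig)             # untampered: appraisal passes
assert not appraise(app + b"!", sig)  # any modification is detected
```

The point the sketch makes is that appraisal needs only the content, the signed checksum, and the trusted key; it does not matter where the content came from, which is what makes the remote-storage case plausible.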
Anything that allows access to files over a network is a possibility here. The first thing to note is that we're talking about two separate systems: the server and the client, the server being the storage server and the client being the place where the applications run. They're not the same system, as they are in the local case. The second thing to note is that the greatest attack surface appears when the file content is in transit. That's an exposure the local storage use case doesn't have. And as you may know, with some types of NFS storage the storage server doesn't have an execution environment, which means programs don't run there; they run on the clients.

We're also talking about situations where the clients run a different operating system than the storage server. You might have Solaris running on the NFS server, and Linux running on the clients where the applications run. So we have to be sensitive to the possibility that the server can't interpret IMA metadata, or, in fact, might not even understand Linux-style extended attributes. So we can't just say "look in this extended attribute." Take, for example, ZFS on Solaris: where do you put the security.ima extended attribute on a file system that doesn't support extended attributes? We also have the quandary of how the storage server and the NFS clients represent users. It's also the case that the server might have a different policy than the client for handling IMA attestation failures. The server might, for example, just audit the failure. Or it might say: sorry, clients, you can't access this file, because it fails my policy.
Today, if the server decides the client is not allowed access, what that looks like to the client is that it suddenly gets EACCES. And the user might say: well, I didn't change the access control on this file, why am I getting this? The clients really don't have any visibility into that.

So that gives you the context for the NFS extension I would like to propose. It has to deal with all of these situations. Especially troublesome is the fact that your storage server might not be running Linux, so it might not have any idea what IMA metadata is. The idea is that you can take any NFS server (Solaris, NetApp, Dell EMC, whatever) and enable it to store IMA metadata, so that executables you install there can be measured and attested on the NFS clients that access them. The whole idea is to extend the protection of IMA from the NFS server all the way to end users on NFS clients; in other words, to bring the protection as close to end use as possible.

In Linux, IMA metadata is stored in the security.ima extended attribute. All I've done is add a new attribute to NFSv4, called FATTR4_IMA, that's stored with SETATTR and retrieved with GETATTR. NFS servers can store the content of this attribute any way they like. If they're Linux, they can store it in a security.ima extended attribute. If they're NetApp, they can put it in a database. If they're Solaris, they can put it in a named attribute. They can do whatever they need to do. But the most important part is that the NFS protocol and NFS implementations don't interpret or parse this metadata. It's just a blob of data that gets moved; basically, it's treated the same way as file content. This gives us some interesting features. NFS has integrity protection in transit using Kerberos; that's of course optional.
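The "opaque blob" property described above is the heart of the proposal, and it can be sketched as a toy server. The class and method names below are illustrative, not real NFS code; the one property the sketch demonstrates is that the server stores and returns the metadata byte-for-byte, exactly like file content, without ever parsing it.

```python
class ToyNfsServer:
    """Sketch of a server implementing the proposed FATTR4_IMA attribute."""

    def __init__(self):
        # Whether this dict is backed by security.ima, a database, or a
        # named attribute is a backend detail the protocol doesn't see.
        self._ima = {}  # filehandle -> opaque IMA metadata blob

    def setattr_ima(self, filehandle: str, blob: bytes) -> None:
        # Stored verbatim; the server never interprets the contents.
        self._ima[filehandle] = bytes(blob)

    def getattr_ima(self, filehandle: str) -> bytes:
        return self._ima[filehandle]

srv = ToyNfsServer()
blob = b"\x03\x02 pretend opaque IMA signature blob"
srv.setattr_ima("fh-0001", blob)
assert srv.getattr_ima("fh-0001") == blob  # returned bit-for-bit
```

Because the server treats the attribute as opaque, a Solaris or NetApp server needs no knowledge of Linux IMA formats to participate.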
But in fact, in-transit integrity protection isn't necessary when you have IMA metadata, because the IMA metadata carries a signature; it's cryptographically protected. When the client gets both the file content and the IMA metadata, it can do its own appraisal of the file content, and corruption of either the metadata or the file content can be detected by the IMA security service on the client. So this detects corruption of the data in transit, and it also protects it at rest on the server.

There are some things I'm leaving out of the proposal to make it a little more likely that it will be implemented and merged into Linux. I'm not doing secure boot via NFS root. And more importantly, I'm not currently proposing to support attribute protection via EVM. There are some strong reasons not to do it. One reason is that NFSv4 ACLs don't look like POSIX ACLs. The EVM checksum might be computed from POSIX ACLs and other metadata on the content-generation end, but the NFS clients are not going to see those POSIX ACLs. They'll see NFSv4 ACLs, which won't look the same, so the checksum won't verify.

I do have a prototype of this. The slide details exactly how it's implemented. When user space on the client tries to get the security.ima extended attribute, the request is translated into a GETATTR of FATTR4_IMA (or a SETATTR, for writes) and sent to the server. The Linux NFS server does the opposite translation. I have also written an NFSv4 extension specification. It's been in front of the working group for about a year, and it is a working group document now, so it is something the working group is interested in seeing published as an RFC. But there are certain issues that are still a challenge. One is that Linux IMA doesn't have an official published specification.
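The client-side translation the prototype performs, as described above, can be sketched as a thin shim: a getxattr/setxattr of "security.ima" on the client becomes a GETATTR/SETATTR of FATTR4_IMA on the wire. All names below are illustrative stand-ins, not the actual kernel or protocol interfaces.

```python
class WireAttrStore:
    """Stands in for the server end of GETATTR/SETATTR of FATTR4_IMA."""
    def __init__(self):
        self.attrs = {}

    def setattr(self, fh, attr, blob):
        self.attrs[(fh, attr)] = blob

    def getattr(self, fh, attr):
        return self.attrs[(fh, attr)]

class ToyNfsClient:
    FATTR4_IMA = "fattr4_ima"  # illustrative attribute name

    def __init__(self, server):
        self.server = server

    def setxattr(self, fh, name, value):
        if name != "security.ima":
            raise NotImplementedError(name)
        # The xattr call is translated into a SETATTR on the wire.
        self.server.setattr(fh, self.FATTR4_IMA, value)

    def getxattr(self, fh, name):
        if name != "security.ima":
            raise NotImplementedError(name)
        # ...and the read side into a GETATTR.
        return self.server.getattr(fh, self.FATTR4_IMA)

client = ToyNfsClient(WireAttrStore())
client.setxattr("fh-7", "security.ima", b"opaque-ima-blob")
assert client.getxattr("fh-7", "security.ima") == b"opaque-ima-blob"
```

A Linux NFS server would apply the inverse mapping, landing the blob back in security.ima on its local file system.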
So there are some questions about interoperability: if we store IMA metadata on the server, will all clients be guaranteed to recognize its contents? We don't have a separate little field that says "this is type-seven IMA metadata" or whatever. It's expected that the IMA security service or module on the clients is able to recognize all the relevant types of IMA metadata there are, interpret them, and do something with them. NFS doesn't play any part in that. I think I've gotten the document and the working group to a point where they've accepted that.

The other issue that remains a challenge is understanding how to authorize modification of IMA metadata on the server from the client. On Linux, CAP_SYS_ADMIN is required to do that, but we don't have capabilities at all in the NFS protocol, so we can't really rely on that. The IETF is probably going to require us to make some kind of statement about how secure the ability to modify IMA metadata on the server needs to be, so that's still a question mark for me, and if anybody has good ideas, I'm very open to hearing them. It seems to me that we really don't need to protect it that much, because when a client gets both of these things, the worst that can happen if they're corrupted by a malicious actor is a denial of service. But I'm interested in other people's opinions about how that will work.

So we have a prototype and a protocol specification; how do we know when it's done? I'm not quite sure how to answer that question; maybe you all have some interesting ideas about that. I've got the prototype implementation, and it looks pretty clean. I haven't submitted it for review yet, but now that we have a specification document, that's usually all that's required to get it merged, so I think we're at a point where it's probably ready for review.
Through this whole thing I've been worried about the performance of this, but as I understand it, the actual file content measurement is done once and then cached, so maybe that's not going to be an issue. The IMA offload case is kind of interesting: that's where the NFS server handles IMA and the client just trusts the server to get it right. I'm not sure whether that's interesting to folks or whether it's something that should definitely be forbidden, so I'm interested in any opinions there.

You can find the protocol specification at the top URL, and you can find the Linux server and client prototype at the bottom URL if you're curious about trying this out. And before I get to questions, I just wanted to say that there is a lot more work being done in the security area in NFS, and I'm interested to know: if I put a secure-NFS BoF on the flip chart, how many people would be interested in coming? Two. That's good enough. Three? All right, I'll do that then. Okay, any questions?

If I understand correctly, this does require server-side modification, so if I have a NetApp, I'm waiting on NetApp to implement this protocol spec? That's correct.

I guess I don't know much about the problem space, but how would this compare to something like fs-verity, where you're storing the actual metadata as normal files, so you can use a normal POSIX file system, whatever that might look like, to store your integrity metadata? I understand that the fs-verity folks are looking at integrating with IMA, but I don't know what the status of that is. You can ask the author. Actually, probably better to call me the original instigator; Eric Biggers has done most of the coding work for fs-verity. So let me answer in reverse order.
We are planning on integrating with IMA insofar as, when you open an fs-verity-protected file, the Merkle tree root hash will be handed to IMA as if it were a new checksum of the file. The protection properties are slightly different from a traditional IMA verification, where you drag the entire file into memory, cache it, do the checksum, and then never check the checksum again. With fs-verity you check the checksum every single time a page is pulled in, and it's as-you-go rather than whole-file. So there are differences in performance and security guarantees, and it will be up to the system builder or administrator to decide what's appropriate for their use case.

With respect to NFS, it's actually not in the file per se, because the way fs-verity works is that, as an implementation detail for local disk file systems, we place the Merkle tree at the end of the file after the metadata. However, if user space calls stat on the file and gets i_size, it gets the real file size; the Merkle tree is effectively hidden from user space. So an obvious way of implementing this for NFSv4 might be as a stream-type object, which I guess is called a named attribute; essentially it's an alternate file stream, because the Merkle tree is generally way bigger than what you can put in an extended attribute. So it would be possible to support fs-verity for NFSv4 with minimal changes to the NFS protocol, assuming the server side supported named attributes. Someone is going to have to do the work. If anyone is interested in implementing that because they have a use case where it would be useful, you should probably talk to me or Chuck, because I'd love to see it, but I don't have a use case or the time to implement it myself. As an aside, I believe Solaris implements named attributes in the client and server, but I don't think anybody else ever did; I know Linux does not. Any other questions? Okay, thank you.
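The whole-file versus per-page distinction drawn above can be illustrated with a toy two-level Merkle scheme: page hashes plus a root hash, with every read re-verifying the page it touches. Real fs-verity uses a multi-level tree and a fixed on-disk layout; this sketch only demonstrates the verify-as-you-go property, and all names in it are made up for illustration.

```python
import hashlib

PAGE = 4096  # bytes per verified block, mirroring a page-at-a-time read

def build_tree(data: bytes):
    """Hash each page, then hash the concatenated page hashes as the root."""
    pages = [data[i:i + PAGE] for i in range(0, len(data), PAGE)] or [b""]
    page_hashes = [hashlib.sha256(p).digest() for p in pages]
    root = hashlib.sha256(b"".join(page_hashes)).digest()
    return page_hashes, root

def read_page(data: bytes, index: int, page_hashes, root) -> bytes:
    """Verify on every read, not just once at open time."""
    if hashlib.sha256(b"".join(page_hashes)).digest() != root:
        raise IOError("Merkle tree does not match trusted root")
    page = data[index * PAGE:(index + 1) * PAGE]
    if hashlib.sha256(page).digest() != page_hashes[index]:
        raise IOError("page %d failed verification" % index)
    return page

data = bytes(range(256)) * 64          # 16 KiB, i.e. four pages
hashes, root = build_tree(data)
assert read_page(data, 2, hashes, root) == data[2 * PAGE:3 * PAGE]
```

Note how only the 32-byte root needs to be trusted up front (for example, signed and fed to IMA), while the bulk of the tree travels with the data, which is why an alternate stream such as a named attribute is a plausible transport for it.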