Welcome, everybody. I don't think our next speaker needs much of an introduction. Jeremy is a core Samba contributor and of course a big defender of open source software. So please put your hands together in welcoming him, and: what's new in Samba.

Thanks very much. I'm Jeremy Allison. I work for Google in the open source programs office, but I just want to say that no Google lawyers have reviewed this talk. What terrifies me is reading in the papers the next day, you know, "Google engineer says...". So this is not a Google talk; this is a talk on behalf of Samba. Please don't mix the two up. Google pays me, but most of my work I actually do for Samba.

What I'm going to talk about is the piece of Samba that I work on the most, which is the file server, the piece that is most visible in Samba. There's also an Active Directory server, there's the authentication piece, and there are the increasingly legacy printing services. But the file server is mostly what people want to know about. So before I can talk about what's new and what we're doing, a lot of which is internal kinds of work, I really need to talk about how the file server is actually built internally. Oh, by the way, if you would like to ask a question, please feel free to put your hand up at any time and I can take questions. This is kind of an overview talk, not necessarily diving into the details of all the protocols, et cetera. At least on the file server side I can dive in as deeply as you would like to go, but hopefully this is a talk that's accessible to a broader audience than just SMB3 engineering geeks.

Anyway, the file server has basically three conceptual layers. There's the protocol parsing layer: the packets come off the network, and we have to unpack them into a set of SMB1, 2, or 3 commands. Then we have the most complex layer inside Samba, which is what we think of as the NTFS emulation layer. For people here who don't run Windows, and at a conference like this there are probably quite a few who have never run Windows, NTFS is the Windows file system, and it has a set of semantics that are very different from the standard POSIX file systems that we know and love on Linux and the other Unixes. So one of the main jobs of Samba is to translate the SMB requests, which are essentially NTFS semantics on the wire, into a set of things that allow POSIX to emulate NTFS.

Then below that, the lowest layer, we have the VFS. The VFS is the virtual file system interface. If you're familiar with the Linux kernel, it's very similar to that: it has a set of pluggable function calls. It's unfortunately much more complex than the Linux VFS, because we have to do a lot more to emulate NTFS. But the nice thing about the VFS, because it's pluggable, is that you can write plugins that talk to external file systems. So we have plugins for Ceph, we have plugins for Gluster, we have plugins for GPFS, and there is a whole host of proprietary file systems out there that also use Samba as their SMB front-end and then translate through our VFS into their proprietary storage back-ends. We have plugins that adapt our semantics to ZFS and other advanced file systems; we have a Btrfs plugin that exposes snapshots, et cetera, et cetera. That's basically why Samba is so flexible, and that's the interface that people work on the most.
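To make the pluggable VFS concrete, here is a minimal sketch of a pass-through module, loosely modeled on the skeleton example that ships in the Samba source tree. The signatures change between releases (the init function in particular has varied), so treat this as illustrative rather than authoritative.

```c
/* Hedged sketch of a pass-through Samba VFS module, loosely based
 * on Samba's examples/VFS skeleton. Exact signatures vary between
 * releases; check the headers for the version you build against. */
#include "includes.h"
#include "smbd/smbd.h"

/* Intercept the connect call, then pass through to the next module
 * in the stack (or the default POSIX backend). */
static int example_connect(vfs_handle_struct *handle,
                           const char *service, const char *user)
{
        return SMB_VFS_NEXT_CONNECT(handle, service, user);
}

static struct vfs_fn_pointers example_fns = {
        .connect_fn = example_connect,
        /* A real backend (Ceph, Gluster, GPFS, ...) fills in the
         * rest of the table: open, read, write, ACLs, and so on. */
};

/* Registration entry point; "example" then becomes usable as
 * "vfs objects = example" in smb.conf. */
NTSTATUS vfs_example_init(TALLOC_CTX *ctx)
{
        return smb_register_vfs(SMB_VFS_INTERFACE_VERSION,
                                "example", &example_fns);
}
```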
In practical terms, though, it is nowhere near as clean as it looks, at least for the SMB1 code, which is the oldest layer of code inside Samba. In the SMB1 code, the NTFS semantics are kind of mixed in with the protocol parsing, and it's kind of ugly, and I am busily trying to kill it. The SMB2 layers, which came later, are much, much better designed.

So, working on Samba: is anyone familiar with the classic Milton Bradley game, Mouse Trap? Yes, some of the older people. The young people are like, what is this? Does it run on a screen? If you're familiar with Mouse Trap, it's a terrible game. I used to play it with my family and friends, and what happens is you slowly build this mousetrap out of different parts, which is pretty much like building Samba, and then at some point somebody pulls the mousetrap card and a ball goes rolling through this increasingly Heath Robinson sequence of contraptions. Almost invariably, somewhere along the line, the ball falls off and doesn't go where it's supposed to go, and the user says, oh, there's something wrong — to which the Samba developers reply, there's always something wrong. That's kind of how we see Samba: there are always things that we need to improve, there are always things that are broken, and we're just trying to build a better mousetrap for the user. About as successfully as the Milton Bradley game, I'm afraid.

So what are we doing in SMB? Some of these things are already done, some are in progress, and some of the things on this list are speculative. First of all: die, SMB1, die. We really would like to remove SMB1, which is the oldest Microsoft protocol; I'll talk a little bit about that later. The VFS layer is old. It was designed in the late 90s and early 2000s around the POSIX of the day, and it's greatly in need of modernization, so that is in progress right now. It's probably a one-to-two-year job, but we'll get there in the end. We're making Samba more asynchronous internally, and hiding the threading, which we need and use quite heavily now, under the covers, for reasons that I will cover later. Obviously, we need to improve our performance, and as part of that, our single-server file system performance is pretty damn good: you can saturate 10GigE with SMB3 going to Samba on a local file system with SSDs, no problem. So that's great. What we would like to do is get a similar level of performance with a clustered SMB solution. That's much harder, and that's where a lot of the performance work is going right now. On the Active Directory side, we have to interface with Active Directory and with services that are accessible through SMB, so there are some service framework improvements that we're going to talk about. And then there is the exciting, completely speculative, no-code-yet thing, but we think it's where we're going to go, which is SMB over the Google QUIC protocol.

The relationship with Microsoft has completely changed from the utterly adversarial relationship we used to have when we were suing them, strangely enough. We actually get an incredible deal from Microsoft now. They invite us up to work with them, we get direct access to their SMB engineers, we work with their test code, and we test against their implementations. And Ned Pyle, one of their evangelists, basically says: friends don't let friends use SMB1. So, does anyone here still have SMB1 on their network? Yeah, yeah, pretty much. I mean, I have it at home. I have Internet of — let's use the polite term — Internet of Garbage devices that I can't update, that still use and insist on SMB1.
One of the things that really drags down the implementation in Samba is keeping SMB1 going. It causes a lot of issues. There are things in there that are never actually used, but that you have to keep as part of the protocol spec. We want to get rid of it, but the code is still there.

So what did we do in 4.11? The great announcement was that we flipped the default. We changed one line of code and said: SMB1 off by default. And then we said, we've removed SMB1, and all the press picked it up and cheered. It was a one-line code change; all the code is still there. But what we're planning to do is get rid of it for real. The way we will do that is, first of all, we have an entire regression test suite that depends partly on SMB2 and above, and partly on SMB1. All of the SMB1 tests are right now in the process of being migrated to SMB2; SUSE is helping a great deal with this. We'll probably keep the tests alive. But eventually we are going to throw that code out and become a completely SMB2-only server. The last version of Samba that supports SMB1 will be 4.x.y, because whenever we've deleted the last of the SMB1 code, I'm going to call it Samba 5. That's my definition of Samba 5. Having that code in there makes it increasingly hard to modernize the internals and to do the clustering work. Mostly, SMB1 is pathname-based, which is what we're trying to get rid of, and essentially, before I can really finish the SMB2 extensions, I need to update the VFS. So there are a lot of dependencies in the chain here, which we're working on. But die, SMB1, die.

So eventually, what we're planning is that people who really have to have SMB1 will keep the last version of Samba which supports SMB1. People will probably run it on a Raspberry Pi, do an SMB2 mount from the Raspberry Pi to the real server, and just gateway to SMB1. There are people out there who are going to need that. A few years ago we had a guy turn up who said, my industrial controller in Germany that's running a blast furnace runs on DOS, and your latest version of Samba stopped working for me. What did you do? And so we had to go back and fix it. And we have OS/2 people. Microsoft has abandoned those customers, so it's nice to show the power of free software by keeping those things running if we can.

As I said, the VFS was originally designed around the POSIX of the 1990s. It's open, close, read, write: very simple. Nowadays most people are probably using some web framework wrapped in Rust or Go or C++ or whatever, but down and dirty under the covers, the modern Unix system calls are openat, fstatat, unlinkat, mkdirat. They all work relative to a file descriptor, and they're symlink-safe if used correctly. So we have a boatload of code in Samba making us symlink-safe that can all go away when we move our VFS interfaces to the *at() versions.
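For anyone who hasn't used the *at() calls, here is a small, self-contained illustration of the pattern. This is plain POSIX, nothing Samba-specific; the /srv/share path and the file names are just made up for the example.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        /* Open the share root once. Every later operation is
         * relative to this descriptor, not to an absolute path. */
        int dirfd = open("/srv/share", O_RDONLY | O_DIRECTORY);
        if (dirfd == -1) {
                perror("open");
                return 1;
        }

        /* O_NOFOLLOW refuses the open if "file.txt" is a symlink,
         * so a hostile client can't race us out of the share. */
        int fd = openat(dirfd, "file.txt", O_RDONLY | O_NOFOLLOW);
        if (fd != -1) {
                struct stat st;
                if (fstat(fd, &st) == 0) {      /* stat by handle */
                        printf("size: %lld\n", (long long)st.st_size);
                }
                close(fd);
        }

        unlinkat(dirfd, "scratch.tmp", 0);   /* remove, relative to dirfd */
        close(dirfd);
        return 0;
}
```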
The other issue is multi-threaded operations. We would like to thread as much of Samba as possible for parallelization improvements on multi-core systems, and threads are not terribly well served by the current VFS design. The current VFS design assumes that the credentials stay the same throughout the entirety of the call chain. When you're running in a multi-threaded environment, that's not necessarily true: especially on Linux, you can attach different credentials to different threads, and Samba works by changing the credentials to those of the user we're impersonating on behalf of the client. So we want to make sure that if we're halfway through processing a chain of requests, we get blocked, and we have to switch users to process another set of requests, the user isolation stays correct. That's actually quite hard to do in the existing VFS.

So in our new VFS, what was the old SMB_VFS_MKDIR is now SMB_VFS_MKDIRAT, and instead of just passing in a standard integer file descriptor, we pass a pointer to one of our internal file-open structures, which has a lot more stuff attached to it and makes the coding inside Samba a lot easier. All names passed in then become relative to the directory file pointer that was passed in. That actually moves us a lot closer to the NTFS and Windows-style requirements. I don't really like Win32, and I don't really like the Windows interface, but one of the things they really did get right is that everything is handle-based. Everything is handle-based. There are very few pathname operations; what happens under the covers is that a pathname is translated internally into a handle, and then the handle is what is operated on. We need to be the same. When we're finished, this should make it a lot easier for VFS OEMs to plug in advanced clustered file systems, to make Samba an easy front-end for Ceph, Gluster and all the others, and the proprietary ones too. The problem is we stamp out a new release every six months and we're halfway through, so the VFS in 4.12 is kind of half handle- and at-based and half not. I'm hoping that for 4.13 it will be finished. Unfortunately, because a lot of the VFS vendors keep their code out of tree, there's some unavoidable churn, and it's going to be a little painful. But it's something we have to do in order to modernize the file server.

So, I don't know if anyone's seen this slide before — you probably can't read it — but it's actually from the Mozilla offices in San Francisco. I love this because it's just as true now as when it was posted in the early 2000s. That little poster up there — the guy next to it is about six foot or so — says: must be this tall to write multi-threaded code. And that is just true. There was a boatload of Samba competitors that were rewritten, and their big thing was, oh, we're threaded, rather than that old, crufty, crappy Samba. Many of them have failed, or at least had horrible, horrible bugs, because they were multi-threaded. It's really, really hard to get that right. I mean, it can be done, obviously. And one company that produced one of these things was bought simply to get access to the engineers who could fix the damn bugs that were causing problems for a major OEM who was using it.

So most of our code is still single-threaded, but what we try to do is use send/receive calls to make things asynchronous. Now, we're single-threaded going out the socket, because you've got one stream of requests coming in and one stream of replies going out; you can't really parallelize that, you're just going down one single synchronous wire. But we are trying to move the VFS so that you can have multiple outstanding calls going on inside Samba simultaneously. The synchronization point is still reading and writing to the client, but our new calls look like SMB_VFS_PREAD_SEND and SMB_VFS_PREAD_RECV.
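Going back to the handle-based conversion for a second, here is roughly the old versus new shape of the mkdir call in the VFS function table. This is a sketch from my reading of the headers around the 4.12 era; the exact types vary between releases, so treat it as illustrative.

```c
/* Sketch of the old vs. new VFS shape; exact signatures vary by
 * Samba release, so check vfs.h for the version you build against. */

/* Old style: a bare pathname, resolved from the share root. */
int (*mkdir_fn)(struct vfs_handle_struct *handle,
                const struct smb_filename *smb_fname,
                mode_t mode);

/* New style: dirfsp is a pointer to one of Samba's internal
 * file-open structures (an open directory handle), and the name
 * is interpreted relative to it, mirroring POSIX mkdirat() and
 * NTFS's handle-relative opens. */
int (*mkdirat_fn)(struct vfs_handle_struct *handle,
                  struct files_struct *dirfsp,
                  const struct smb_filename *smb_fname,
                  mode_t mode);
```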
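And here is roughly what that new send/receive pair looks like in use: a hedged sketch written against Samba internals (the tevent event library), so it only compiles inside the source tree, and the my_read_* names are invented for illustration.

```c
/* Hedged sketch of the async send/receive pattern. The SMB_VFS_*
 * macros and tevent calls are real Samba/tevent interfaces; the
 * my_read_* names are invented for this example, and "state" is
 * assumed to be talloc-allocated. */
struct my_read_state {
        ssize_t nread;
};

static void my_read_done(struct tevent_req *subreq);

/* Kick off the read, then fall back to the main event loop so
 * other requests keep flowing while the pthread pool works. */
static void my_read_begin(struct tevent_context *ev,
                          struct files_struct *fsp,
                          struct my_read_state *state,
                          uint8_t *buf, size_t n, off_t offset)
{
        struct tevent_req *subreq =
                SMB_VFS_PREAD_SEND(state, ev, fsp, buf, n, offset);
        tevent_req_set_callback(subreq, my_read_done, state);
}

/* Called from the event loop once the pread(2) has finished. */
static void my_read_done(struct tevent_req *subreq)
{
        struct my_read_state *state = tevent_req_callback_data(
                subreq, struct my_read_state);
        struct vfs_aio_state aio_state;

        state->nread = SMB_VFS_PREAD_RECV(subreq, &aio_state);
        TALLOC_FREE(subreq);
        /* ...marshal state->nread bytes back out to the client... */
}
```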
So what will happen is the control flow will come in, we'll parse a read call, we'll issue the pread send, and then we'll go back to our main event loop and do other stuff. When the read has finished and has filled the buffer to go back out on the wire, the pread receive is called, and that picks up where the read started and finishes processing, going back out on the wire. And what actually allows us to do that is a really, really nice pthread pool implementation.

Has anyone ever written a pthread pool? Oh, okay. Yes. Yes, they're really, really hard, aren't they, Geoff? Sorry, we had one person there who's also a Samba team member, but whose pthread work was separate from Samba. It's a nightmare to get right. We have a pthread pool implementation that's been written and worked on by some of the cleverest engineers I know, at SerNet in Germany, and they are still finding synchronization bugs five, seven years later. It's really, really, really hard. You must be this tall to write multi-threaded code — even if they stand on top of each other, they're probably not that tall. So what we do is hide our threading infrastructure inside the VFS calls, and we keep the threading infrastructure as simple as possible. We say, okay: spin up a thread, make this system call, do the thing, return. No complex synchronization of big data structures, no locking all over the place. We just try to hide the simplest things inside threads, which is manageable, I think.

The impersonation infrastructure is in progress, not in the code yet. What we're planning is that every single VFS call will have a user-credentials structure attached, so that it's basically saying: asynchronously do this thing as this user. At that point, all of the underlying code has the information it needs if it needs to change credentials. Our two main platforms now are Linux and FreeBSD; all of the other old Unixes have essentially died. Solaris is dead, Illumos is dead, all of them. That's why we're able to move to the modern Linux syscall interface, the *at() calls; a lot of the older Unixes just don't have it. Anyway, FreeBSD, which is our only other major platform, doesn't have per-thread credentials, but we've been bugging them to put that in for about seven years, and hopefully they will get there. Linux already has it: we have the ability to attach specific user credentials to a single pthread.

Okay. We've standardized on the GnuTLS encryption code. That was marvelous work done by Andreas Schneider at Red Hat, and it gives us encrypted SMB connections, which are increasingly important. Everyone should run everything encrypted on the wire. Is Ceph encrypted on the wire? So the question was, is Ceph encrypted on the wire — and yes, I believe that's done now. No one should be putting anything on the wire unencrypted, ever, for any reason whatsoever. Everything on the network should be encrypted; you cannot trust anything between the client and server endpoints. I don't care who makes it, whether it's from the US, from China, whatever: everything should be end-to-end encrypted. SMB can do that, and GnuTLS actually speeds us up quite significantly, mostly by just moving to a newer AES (Advanced Encryption Standard) implementation. Is that the default? Sorry. Is that the default setting? The question is, is that the default setting?
Well, for 4.12, I think the clients will negotiate encryption by default, but you have to turn it on yourself, either per share or globally. As for the default: if you ran with no smb.conf, or a minimal smb.conf, it would not be encrypted. There's a one-line setting to make encryption mandatory, and then everything going in and out of the server must be encrypted, and it won't talk to you if it isn't. That's only SMB3, by the way. Vista clients and Windows 7 clients, I don't think they will do that; it's only modern Windows 10 clients, and of course the Linux kernel client and the Samba client.

Sorry, okay, I'd better hurry up. SerNet has been working on the share-mode databases — essentially a lot of refactoring work that users haven't been seeing. But it helps in a very common case, where you bring up a boatload of Windows clients, they all connect to one share, and they all take a change-notify handle on the root of the share. That basically means: oh yeah, if it's not too much trouble, server, if anyone changes anything on this share, please send out asynchronous notifications and let me know. That's not putting a burden on the server at all, is it? Really? Oh, no. Oh, give me ten more. So you'll have like a thousand clients making that kind of call, which means you've got a thousand opens on one particular handle at the root of the share, and they all have to be notified when anything changes. That's a horrible performance bottleneck. Volker at SerNet managed to speed that up by a factor of 20, basically by refactoring the way our internal structures are handled. It's complex — I had to review the code, and I actually can't remember what it was now — but it's very clever and it speeds us up a lot.

The other thing they've been doing is separating out a lot of our data models to make them more cluster-friendly. I'll talk a little bit about that later, but when you're doing an SMB cluster, the difference between that and, at least, NFSv3 is that SMB has an enormous amount of state that needs to be kept coherent between the members of the cluster. The more you conflate the data structures that you need to swap around, the worse the traffic between cluster members gets. So there's a lot of optimization that can be done to make the cluster communication more efficient by looking at those data structures. And then SerNet has also done a lot of work on caching performance improvements. There's just been a lot of small scalability work, people identifying and fixing bottlenecks.

And then the other cool thing: has anyone heard of the new Linux io_uring asynchronous I/O implementation? So we have a module under development, and, because it's German, everything's done except the testing. It exists already; it needs integrating. No, I'm sorry, that sounded really bad. I didn't mean it to sound really bad. What I was trying to say was that I'm incredibly impressed by the German engineering; they get this stuff done really quickly. The problem with the testing is that it needs a specific version of Linux with the io_uring library; it's not their fault. What I meant is that they're excellent engineers, and I'm sorry if I... I have to work with them every day, so I'm really not trying to criticize them. What I'm trying to explain is why it isn't already in the code base. Why isn't it in the code base? Well, it needs the tests.
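By the way, to make that earlier encryption answer concrete, a minimal smb.conf would look something like this. It's a hedged sketch: the parameter spellings vary a little between releases, so check smb.conf(5) for the version you're actually running.

```ini
# Hedged sketch -- check smb.conf(5) on your release for the exact
# parameter values. Require SMB3 and refuse unencrypted connections
# globally:
[global]
    server min protocol = SMB3
    smb encrypt = required

# Or enforce encryption only on one sensitive share:
[secure]
    path = /srv/secure
    read only = no
    smb encrypt = required
```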
So, the io_uring module needs to be integrated into the tests, and I'll talk a little bit later about the build infrastructure. Yes, I really... sorry, I offended the Germans. That's so hard to do. Sorry.

Okay, clustering improvements. As I mentioned before: persistent handles. Persistent handles will always be slow, I don't care. A persistent handle is where a client opens a handle and says: I have a handle on this file. If your entire cluster dies and goes away for a hundred years, and then it comes back on an IP address I know about, and I reconnect and hand you that handle, my data had better be there in exactly the same state, with all the locks that I had, with all the pending operations still pending. Everything has to be there. What that means is that every open has to check all the other opens in the share-mode database, and you have to do essentially ACID transactions on every state change. Think about that. Every read, every write, every lock — everything has to be persisted to stable storage before you can return that the operation completed. So the question was: does every read? I don't think a read has to be persisted, but you have to be able to return the same data. I think you would have to cache the data such that, if the cluster went down before you'd returned, you were still returning the same data that you were reading. I'd have to look very carefully at the guarantees on that. Yeah, it's difficult, and it will be slow.

The only program that really, really needs it is SQL Server, to be honest. SQL Server running on an SMB share really needs this; everyone else... They claim they need it, they claim their data is important. Yeah, make backups. Screw you guys, your data's not that important, I don't care how important you think it is. If you're a database, you're important; everyone else, use a database and it'll take care of it for you. As a result, persistent handles, at least in a Windows cluster, are usually turned on on a per-share basis. So it's like: okay, here's my slow share, it has persistent handles turned on. On all the other shares, the clients can ask for persistent handles and we'll say no. We have a plan for doing this; it's all scoped out, it just needs the engineering time and effort putting into it. I'm expecting it to come in 4.x, where x is greater than 12, but I don't know what that number is. But there are a lot of OEMs who've asked for this, there are a lot of requests for it, because people really want to run... oh, the other people who need it — I'd better hurry — are people running VMs who want to persist VM state. They need persistent handles too.

There have been many improvements in Samba CTDB clustering, not the least of which is the ability to separate out the clustering calls. This is another OEM ask. The OEMs have said: wow, Samba's great, we love the SMB stuff, but we think our cluster manager is better than yours. Why can't we use Samba with our cluster manager? Why do we have to layer yours on top of ours? So the idea is to separate that out so that CTDB is separable from Samba — still tested and integrated, and that's what we would benchmark against — but it allows OEMs to make Samba a little more modular and flexible. An OEM who has their own cluster manager that they're really happy with can keep using it. The other thing that we need for that is to have CTDB merged into our continuous integration testing.
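To go back to persistent handles for a moment: the reason they will always be slow is that stable-storage requirement. Here is a purely illustrative sketch of the cost. None of these names are real Samba APIs; it just shows the write-then-fsync round trip that has to complete before every reply.

```c
/* Purely illustrative: persist_state_change() is NOT a real Samba
 * function. It sketches why persistent handles are slow: every
 * protocol-visible state change must reach stable storage before
 * the server is allowed to answer the client. */
#include <stdbool.h>
#include <unistd.h>

static bool persist_state_change(int state_fd,
                                 const void *rec, size_t len)
{
        /* Write-ahead the new handle state... */
        if (write(state_fd, rec, len) != (ssize_t)len) {
                return false;
        }
        /* ...and force it to disk. Paying an fsync on every open,
         * lock, and state-changing write is where the latency goes. */
        if (fsync(state_fd) != 0) {
                return false;
        }
        return true;    /* only now may we send the SMB reply */
}
```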
So yes, as I said: pluggable clustering, decoupled to allow third-party cluster managers.

Okay. Who here has written their own crypto code? Yes, it sucks, doesn't it? And it was full of holes, like mine. So don't write your own crypto. Just as you wouldn't smelt the metal to make your own car, don't write your own crypto code; leave it to the experts. In this case, we decided to standardize on GnuTLS. We needed to feed back to the GnuTLS maintainers to get some of the things we needed fixed, but really: just don't do your own crypto. It sucks, you'll get it wrong. So we've basically outsourced that source of CVEs — vulnerability announcements — to GnuTLS. Now we can blame them for those. That's great.

Because of the history of Samba's growth, we have two RPC (remote procedure call) server implementations and two RPC client implementations, from back when the project was kind of fragmenting. We've now stitched it back together, and we need to drop one server and one client — hopefully not both servers and both clients — to merge the framework back into something manageable. That's ongoing work; there's a massive patch set running in GitLab that will eventually get put in. And fully asynchronous RPC calls — the person who was working on that I believe is in this room, and I'm going to put him on the spot. Günther, how close is that work to being merged? Ah, okay. So Günther tells me he's going to rewrite all the old code to match the new RPC server framework. Thank you, that's great. I will await the check-in; I'm happy to review, Günther. We need that for the SMB witness service. There's also some ongoing work to allow... the other thing that some vendors have said is: well, we have a great SMB server, we like ours, we don't want yours, but you do the RPC services — which really suck, by the way; Windows RPC is awful, terrible, and a massive source of CVEs. They would like to use our RPC framework but not our file server, so there's some work going on to make things more modular there.

I'm going to hurry up a little. SMB over QUIC. Has anyone ever heard of QUIC? Okay, cool. Oh, wow. All right. So it's going to take over the world. Microsoft actually has servers and clients running SMB over QUIC. The great thing about that is that all of the ISPs who block the SMB port, 445, are screwed: they can't block the QUIC protocol, because it's essentially running over what looks like HTTPS. So you can actually share files. A lot of people are going to be very unhappy about this, but anyway, Microsoft are very happy to be open about it and document everything that you need — Microsoft is a joy to work with these days. So we've been looking at this, and we've been looking at possible QUIC libraries on Linux. It's not going to be too hard, I think, to adopt one and work with it. A QUIC connection comes in, and it's essentially talking to a web server. How do we... As far as I know, there is no standard way on Linux web servers of having a port mapper, where a request comes in saying: I'm not really a web server request, I'm an SMB request coming in over QUIC, please route me to the SMB server. I don't know how that's done yet; I don't think anyone's worked that out on Linux yet. Mostly QUIC is being used just for web traffic. So there's lots of interest in this. We think it's going to break things open and make SMB accessible everywhere, but we don't quite know how it's going to work yet.
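One plausible shape for that routing, as a purely speculative sketch: the TLS handshake inside QUIC carries an ALPN token, and as far as I know Microsoft's SMB-over-QUIC advertises the token "smb" there, so a front-end could demultiplex on that. The quic_* names below are invented placeholders, not any real Linux API.

```c
/* Purely speculative sketch. The quic_* calls and hand_off_*
 * helpers are invented placeholders; the one concrete piece is the
 * ALPN token "smb", which (to my knowledge) is what Microsoft's
 * SMB-over-QUIC advertises in the QUIC/TLS handshake. */
#include <string.h>

struct quic_conn;                               /* hypothetical type */
const char *quic_get_alpn(struct quic_conn *);  /* hypothetical call */
void hand_off_to_smbd(struct quic_conn *);      /* hypothetical */
void hand_off_to_http3(struct quic_conn *);     /* hypothetical */

void route_connection(struct quic_conn *conn)
{
        const char *alpn = quic_get_alpn(conn);

        if (alpn != NULL && strcmp(alpn, "smb") == 0) {
                hand_off_to_smbd(conn);    /* SMB3 over QUIC */
        } else {
                hand_off_to_http3(conn);   /* ordinary web traffic */
        }
}
```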
And I truly believe that SMB3 over QUIC is the way most on-premises clients are going to talk to cloud storage in the future, because it's just such a nice protocol, fully encrypted. It has all of the semantics that you would want to make seamless cloud computing a reality. You basically plug your ethernet cable into the wall and you've got multiple petabytes or exabytes of storage available to you, all over SMB.

There's been a lot of work in Active Directory, done by Catalyst, the company in New Zealand; they're mostly the ones who work on it. I'm really going to hurry up now. They've now got installations scaling up to 300,000 user objects. There are a lot of performance improvements going into that, and a prefork process model. They can do smart card authentication. They have a JSON security logging module that produces security logs in JSON-parsable format. There are some large European governments that are using this; I'm not going to mention any names — they can do a separate talk of their own, and they actually do. The missing feature is Active Directory web services. The issue with that, for us, is that we really don't want to be in the web server business. We tried that once and we suck at it; we just don't want to be a web server. Again, we need to find some way to integrate with web servers so that Active Directory web requests work. Remember, on Windows all this stuff is integrated: you're basically running IIS, and it knows how to route the SMB requests or the Active Directory requests. We need to figure that out in the open source components on Linux. I don't know how we're going to do that yet; that's less my area.

Project infrastructure: we moved to GitLab. Who's on GitLab? Everyone else is on GitHub. GitLab is where the cool kids are; GNOME is there. Our project workflow is mostly on GitLab now. You can still work in some of the old ways, sending patches to the mailing list, but mostly what you do now is make merge requests on GitLab, and any user can run the continuous integration. So they can actually submit a patch and say: hey, I ran your test suite and my patch doesn't break it. That's the kind of thing they would otherwise have to spend four to six hours on, running the regression test suite on a powerful local machine; now they can just outsource it to the GitLab cloud, and that really works well. We still maintain code quality by requiring review from two core Samba team engineers before we will actually merge anything. Plus, we're paranoid, and our real source code doesn't live in GitLab: GitLab is a mirror of our real source code, which still lives on an independently maintained server.

A lot of fuzzing work has been done. We originally fuzzed with Codenomicon, a proprietary tool, which was utterly superb, and they will still run it on us for free because we're kind of a famous project. But we would really like open source tools, so Catalyst has been doing a lot of work to integrate with OSS-Fuzz running in the cloud, and a boatload of bugs have turned up in code that we thought was safe and stable. None of them have caused CVEs yet, I think — maybe one or two. But this is a new frontier. We're hoping that Samba will become really battle-hardened. We already got a five-star review for the security quality of the code, and we still suck; we still have a boatload of bugs. If we're this bad, God help your Joe Random GitHub project.

So, general SMB updates, not really Samba-specific. Ronnie Sahlberg, who works for Red Hat — I don't know whether you know him.
He was a Wireshark contributor and the author of libnfs, and he decided to do the same thing with SMB. He said: you guys are bloated, fat and ugly; I'm going to write a minimal one. So, on his own, he wrote a 140-kilobyte SMB2 user-space library, libsmb2. Zero copy, no external dependencies; it literally just uses libc. Now, having said that, the first time I looked at the code I said, you've got an integer wrap overflow here — okay, that got fixed. So it's now been reviewed, it's a lot more robust, and it's incredibly tiny. Don't let anyone say, oh, SMB2 is too bloated as a client, I can't use it. It's LGPL; you can put that tiny user-space library in anything. Someone had it running on some kind of game console, using it to make a file server appear to be a game cartridge source. Anyway.

Plus, the Linux kernel, courtesy of Samsung, is getting an experimental SMB2-only in-kernel server, ksmbd. Right now it's kind of a toy; there are a lot of missing features. But I went to the how-the-Linux-kernel-works talk at the beginning of this conference, and so we're doomed: eventually those guys will develop it so much that it'll probably overtake us. But hey, not yet, and so we're still having fun with Samba. It's very limited functionality, but it's something to look at. It's not in anyone's tree yet, I think, but if you're interested, the Linux kernel file systems mailing list has the details, and they're working on it in public, so that will be interesting to watch.

And that's it. Woohoo! I finished on time. How about that? Any questions?