This is a short talk from Manisha about the advantages of NFS version 4.

So I'll introduce myself. I'm Manisha Saini, and I've been working at Red Hat for the last one and a half years as a QE engineer on the NFS-Ganesha team, testing NFS-Ganesha.

So Manisha, you have to speak up a little bit more so that people can also hear you. The microphone is only for the video.

Since this is only a lightning talk, I'll quickly go through the main topics: what v3 lacks and what's improved in v4, then a quick walkthrough of all the NFS v4 enhancements, then some of the differences between v3 and v4 in tabular form, and finally a quick overview of what NFS-Ganesha is and how we are moving forward with GlusterFS and NFS-Ganesha.

So this is v3. What's the problem with v3? The biggest problem is that it is a stateless protocol, whereas v4 is stateful; statelessness creates performance degradation and locking issues. Then in NFS v3 we have firewall problems: different services float on different ports. We don't have dedicated ports for services like mountd, statd, and NLM; a few services do have fixed ports, like portmapper on 111 and NFS on 2049, but it's still a drawback for firewalls. Third, there is no integrated locking support, so v3 uses the separate NLM layer for locking. Then we only have POSIX ACL support with NFS v3 in Ganesha, and at the end, v3 performs only a single operation per RPC (remote procedure call).

So what's improved? A quick walkthrough of what NFS v4 improves: first, it's stateful. The firewall situation is improved with one dedicated port, 2049. We have support for a pseudo file system. There are features like delegation and leasing over HA, and locking is also improved as part of the protocol itself.
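Before going slide by slide, the stateless-versus-stateful locking point can be sketched with a toy model in plain Python. This is not real NFS code: the class names, the tiny lease length, and the `now` parameter are all invented for illustration.

```python
import time

class V3Server:
    """Stateless: the server keeps no client state, so a lock taken by a
    client that later reboots is simply held forever."""
    def __init__(self):
        self.locks = {}                      # path -> client id

    def lock(self, path, client):
        if path in self.locks:
            return False                     # still "held" by a dead client
        self.locks[path] = client
        return True

class V4Server:
    """Stateful: every lock is a lease the client must keep renewing; an
    expired lease can be taken over by another client."""
    LEASE = 0.1                              # lease length (tiny, for the demo)

    def __init__(self):
        self.locks = {}                      # path -> (client, lease expiry)

    def lock(self, path, client, now=None):
        now = time.monotonic() if now is None else now
        holder = self.locks.get(path)
        if holder is not None and holder[1] > now:
            return False                     # lease still live
        self.locks[path] = (client, now + self.LEASE)
        return True

v3, v4 = V3Server(), V4Server()
assert v3.lock("/f", "A")
assert not v3.lock("/f", "B")            # A rebooted, but B is blocked forever

assert v4.lock("/f", "A", now=0.0)
assert not v4.lock("/f", "B", now=0.05)  # A's lease is still live
assert v4.lock("/f", "B", now=0.20)      # A never renewed: B takes the lock
```

Real NFSv4 leases are much longer (tens of seconds) and are renewed implicitly while the client keeps talking to the server; the toy only shows why a lease bounds how long a vanished client can block others.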
We don't have any separate layer for locking like the NLM layer in NFS v3, and there is integrated support for ACLs, which v3 lacks. Then there come the performance improvements: parallel NFS in 4.1 and multiple operations per compound RPC. There's one slide for each of these.

So, what does stateful mean? With statefulness we have two new operations, the OPEN and CLOSE calls, which let the client and server know each other's state so they can communicate easily. Then guaranteed consistency: in NFS v3 the client keeps writing to the server without knowing whether the writes have actually reached the server's back end or not. Operations are flushed roughly every 5 seconds, so all pending writes go out every 5 seconds, which creates network congestion and repetitive writing. In NFS v4 this is improved: the client will not send further operations before knowing that the previous ones have been committed on the back end. Then there are the callback and recall functions; these terms are basically used with delegation, and in the coming slides I'll explain what delegation is. NFS v4 also keeps track of all the files that have been accessed by different clients at different times, so it doesn't have to send information about a file again and again, and it eliminates the useless write-through I mentioned as part of guaranteed consistency. It also improves file locking via leasing over HA.

So NFS v4 is firewall friendly. You can see that in NFS v3 we have different ports: portmapper, mountd, NFS, locking (NLM), statd, and ACLs.
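The "single operation per RPC" versus compound-RPC difference can be made concrete with a toy round-trip counter. The `Wire` class and method names are made up; nothing here speaks the real NFS wire format.

```python
class Wire:
    """Counts round trips between a pretend client and NFS server."""
    def __init__(self):
        self.round_trips = 0

    def call_v3(self, op):
        # NFSv3 style: one operation per RPC, one round trip each
        self.round_trips += 1
        return f"{op} done"

    def call_v4_compound(self, ops):
        # NFSv4 style: many operations bundled into one COMPOUND RPC
        self.round_trips += 1
        return [f"{op} done" for op in ops]

ops = ["LOOKUP", "OPEN", "READ"]

v3 = Wire()
for op in ops:
    v3.call_v3(op)

v4 = Wire()
v4.call_v4_compound(ops)

print(v3.round_trips, v4.round_trips)  # 3 1
```

The saving grows with path depth: opening a file several directories down is a chain of LOOKUPs plus OPEN plus READ, which v4 can put on the wire as one compound call.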
We have different ports for different services. Portmapper is fixed at port 111, and the NFS port is also effectively fixed at 2049, but the rest of the ports keep changing. The major drawback shows up when, say, the client is inside a firewalled network but the server is outside it: we don't know which ports the NLM and mount services will be running on. So that's a drawback of NFS v3, whereas in NFS v4 all of these are part of a single protocol, and it runs on one dedicated port, 2049.

This is also one of the major benefits of NFS v4: the pseudo file system. So what is a pseudo file system? It is created automatically by NFS v4, and I'll go straight to an example. Suppose the server has exported exportfs/local and exportfs/project/nfsv4. If you mount the root handle over NFS v3, the v3 client will have access to the other directories, such as projects, as well, because these are not hidden. But in NFS v4 the root file handle is intelligent enough: it will show only the local and project/nfsv4 directories as exported, and it will not show the other paths that are not exported. If you want those shown, you need to export those directories explicitly. So in the pseudo file system, all the exports act as directories under the root file handle.
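A rough sketch of what that root file handle does, again as a toy Python model. The directory names are made up, and a real server hands out file handles, not string sets.

```python
def pseudo_fs(exports):
    """Build the set of paths visible under the NFSv4 pseudo root:
    only the directories leading down to each export, plus the
    exports themselves."""
    visible = set()
    for path in exports:
        parts = path.strip("/").split("/")
        for i in range(1, len(parts) + 1):
            visible.add("/" + "/".join(parts[:i]))
    return visible

# Everything that exists on the server's disk (made-up names) ...
on_disk = {"/exportfs/local", "/exportfs/private",
           "/exportfs/project/nfsv4", "/exportfs/scratch"}
# ... versus what is actually exported.
exports = {"/exportfs/local", "/exportfs/project/nfsv4"}

visible = pseudo_fs(exports)
hidden = sorted(d for d in on_disk if d not in visible)
print(hidden)  # ['/exportfs/private', '/exportfs/scratch']
```

A v3 client that mounts the root can reach everything in `on_disk`; a v4 client walking the pseudo tree only ever reaches the two exports.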
So this is delegation in NFS v4. NFS v4 supports both client-side and server-side delegation. Delegation is a technique where the server grants a client permission to access or write a file without the client sending each and every operation to the server's back end, and this is one of its major benefits. There are two kinds of delegation: the server can grant a read delegation as well as a write delegation. Multiple clients cannot be granted a write delegation, because if two clients held write delegations on the same file at the same time it would create inconsistency; with read delegations, multiple clients can each hold one.

So who decides whether a delegation is granted to a client or not? Only the server decides, and it depends on the previous access pattern, how the file has been accessed by different clients over time. If a file is being frequently accessed by multiple clients, the server won't grant a delegation on it to those clients. And here consistency issues come in as well. Suppose client one has taken a read delegation, and at the same time client two comes and wants a write delegation. In that case the server will send a callback request to client one saying: return the delegation, because I need to grant it to client two. If client one has written something, it will commit everything to the server and give the delegation back to the server itself. The server then decides whether it needs to grant the delegation to the other client or not; it's not guaranteed every time. With delegation, once a delegation has been granted to a client, all the operations done by that client act on a local copy at the client; it will not send each and every operation to the server, so it improves performance as well.

So, a quick walkthrough: delegation cuts down the scope of the re-validation required each time, because the client has full permission from the server and can access anything once granted the delegation. It reduces network traffic and therefore improves performance on both the client and the server. The server considers the access pattern of the file before granting a delegation; only then does it grant one. And the server may recall the delegation at any moment when someone else opens the file; these are the recall and callback operations.

So what are the challenges? Delegation mostly runs on port 2049 only, but sometimes it may use some other port depending on firewall issues, and that can create a problem with delegation: if the server is unable to contact the client for the delegation callback because it's not running on port 2049, the server won't be able to grant the delegation to that client. The other challenge is a mix of client versions: suppose there are v3 clients and v4 clients; the delegation mechanism will only work with the v4 clients, and the others accessing the same file won't go through delegation.

Then there is leasing over HA. The problem with NFS v3 is that it's a stateless protocol, so the client and server are not aware of each other's state. Suppose a client is rebooted: the server will still keep holding its lock, and it will not allow another client to take the lock. But in the same case in NFS v4, if the client is rebooted, the server is very well aware that the client has been rebooted, so it will keep the lock only for some grace period and then allow another client to take the lock. And if the server is rebooted in NFS v4, the server will go into a grace period for some time, and it will allow the client which was holding the lock
earlier a chance, during the grace period, to take the lock again, so that no other client can take it and the original holder can reclaim its lock.

So this is what I explained, and some more on locking. The first point again: locking is integrated; there is no extra protocol like NLM running in NFS v4. And the client must maintain contact with the NFS v4 server to continue extending its open and lock leases: once a lease is about to expire and the client needs to do more operations, the client is responsible for extending the lease period. And here is a drawback of NLM: in NFS v3, the same node cannot act as both server and client, whereas in NFS v4 a single node can be used as both server and client.

ACLs are one more benefit of NFS v4, whereas NFS v3 only supports POSIX ACLs. NFS v4 ACLs are much richer than POSIX ACLs: all POSIX ACLs can be mapped to NFS v4 ACLs, but the reverse is not the same, because NFS v4 ACLs have many features that cannot be mapped to POSIX ACLs. Upstream, we are currently working on fixing this. Also, in NFS v4 ACLs the user and group information is stored in the form of a string, not numerically, whereas in POSIX ACLs it is stored in numeric form.

So, compound RPC. For each operation in NFS v3, the client has to send a separate small request to the server, whereas in NFS v4 we can couple many requests in one call and send it, which improves performance. You can see the LOOKUP, OPEN-file, and READ-data requests all being sent in one RPC call.

And this is the biggest performance benefit in NFS v4: parallel NFS. In parallel NFS we have the metadata server and the data servers. The metadata server keeps track of which node a file is present on, and the data server
has the actual data. So suppose a client needs to access a file: it goes directly to the metadata server, the metadata server points it to the location of the data server, the node having the file, and after that the client can communicate directly, and in parallel, with those data nodes. So it's parallel access from client to servers. pNFS supports three types of storage access protocol: file, block, and object. These protocols are about the communication between the client and the data servers; based on them, it communicates as block, object, or file. We have this pNFS feature in NFS v4, currently as a tech-preview feature.

This slide is just a summary of what I explained in the earlier slides. NFS v3 is stateless, v4 is stateful. In NFS v3 all exports are mounted separately, but in NFS v4 we have the pseudo file system. In NFS v3 we have NLM as a separate layer, whereas in NFS v4 everything is in one single layer. In NFS v3 we have permanent locking, there we have lease-based locking. In v3 RPC is one operation per RPC, there we can have multiple operations. Then the firewall issue, and at the end, in NFS v3 we have different protocols on different ports for different services. I am running short of time; that is why I am going through this fast.

So, as you must have seen in Supreeti's earlier presentation as well, what is NFS-Ganesha? It is a user-space, protocol-compliant NFS server. We have the NFS v3, 4.0, and pNFS protocols running, and it uses libgfapi, via its FSAL support, to run against the clustered back end. We have HA integrated with PCS and Pacemaker currently, but going forward we will be moving towards CTDB with NFS-Ganesha. It has a D-Bus mechanism for exporting volumes. And suppose one node is down: the other nodes will get to know, and they will go into a grace
period. And it supports a lot of file systems, like Gluster, Ceph, GPFS, and Lustre.

So this is the NFS-Ganesha architecture, and this is what I explained. And this is the last slide: how we are heading with GlusterFS and NFS-Ganesha. We have some improvements in coming releases: readdir chunking in NFS-Ganesha 2.5, which is mostly about improving things like find and rm; readdirplus support in gfapi, which is in Gluster 3.13; and then we will have delegation support, which we are planning for NFS-Ganesha 2.7 with Gluster 4.0. We currently have Pacemaker and Corosync for HA, and in a coming release we might be moving to CTDB. And we are also planning asynchronous I/O for the NFS-Ganesha 2.4 release with Gluster 4.x.

These are the references; you can contact us anytime at this mail and IRC contact. Thanks! Any questions?
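As a footnote to the NFS-Ganesha part: a minimal sketch of what a Gluster-backed export block looks like in ganesha.conf. The volume name, hostname, and paths are placeholders, not taken from the talk.

```
EXPORT {
    Export_Id = 1;              # unique id for this export
    Path = "/testvol";          # backend path (placeholder)
    Pseudo = "/testvol";        # where it appears in the v4 pseudo fs
    Access_Type = RW;
    Protocols = 3, 4;           # serve both NFSv3 and NFSv4 clients
    FSAL {
        Name = GLUSTER;         # Gluster backend, via libgfapi
        Hostname = "localhost"; # placeholder
        Volume = "testvol";     # placeholder Gluster volume name
    }
}
```

Each such EXPORT block becomes one directory under the v4 pseudo root, which is the pseudo-file-system behavior described earlier in the talk.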