I am part of the NFS team at Red Hat, but today I am talking about a different topic: SELinux support for GlusterFS. Before moving to my talk, I would like to take this opportunity to thank the following people for their work towards this project: Brian Foster, Niels de Vos and Manikandan. Today I will go through the following sections: I will start with what GlusterFS is, then I will talk about SELinux in GlusterFS, then the challenges and issues we faced while implementing the SELinux support, how clients can use it, and where we are now.

So, what is GlusterFS? You might have seen this slide before, so sorry if it is boring. Basically, GlusterFS is an open-source, scale-out distributed file system which is POSIX compliant. It is software-defined storage, it runs in user space, and it aggregates different storage units into a single namespace, which in Gluster terms we call a volume. One of the key highlights of GlusterFS is that it does not have a dedicated metadata server; the overall design provides a stackable module architecture, and it can run on any commodity hardware, even a laptop.

These are the components in GlusterFS. The only requirement for a disk to be part of GlusterFS storage is that its file system supports extended attributes, like ext4 or XFS. On the server side we have the storage bricks, each with a glusterfsd daemon running on it, exporting a local file system.
At the client we aggregate the different storage units into a single mount point, which we call a volume, and inside the client we have different stackable modules which we call translators. There is also a management daemon, glusterd, with which we can manage volumes: add nodes or peers to the cluster, increase the volume size, set different options on a volume, and so on. This is done with the help of the CLI tool.

And this is how the GlusterFS architecture looks. The storage admin creates a GlusterFS volume out of bricks; each physical drive is exported via a brick, the admin creates a volume from them, and users can mount or use that volume over different protocols like FUSE, NFS or SMB. If you look at the internal architecture, you can see two diagrams: the left side is the server and the right side is the client. Each box is a translator; each is a pluggable module, so you can plug a feature in and remove it if you want. A client request goes through the translators loaded on the client side (the blue boxes on the right), is sent to the server, goes through the server-side translators as well, and at the end is written out through the POSIX layer.

Before moving to the next topic, do you have any doubts or questions regarding GlusterFS or its architecture? Okay, cool.

Now I will talk about SELinux in GlusterFS. I will just give a brief intro about SELinux; I am not an SELinux expert. In short, SELinux provides mandatory access control for Linux systems.
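Going back to the translator stack for a moment: the stacking idea described above can be sketched very loosely in a few lines of Python. This is a toy model, not real Gluster code; the class and method names are invented for illustration. Each translator wraps the next one and can inspect or rewrite a request before forwarding it down toward the final POSIX layer.

```python
class PosixLayer:
    """Bottom of the stack: stands in for writing to the local file system."""
    def write(self, path, data):
        return f"wrote {len(data)} bytes to {path}"

class LoggingTranslator:
    """Example translator: records the call, then forwards it unchanged."""
    def __init__(self, next_xl):
        self.next_xl = next_xl
        self.log = []
    def write(self, path, data):
        self.log.append(("write", path))
        return self.next_xl.write(path, data)

class PrefixTranslator:
    """Example translator: rewrites the path before forwarding."""
    def __init__(self, next_xl, prefix):
        self.next_xl = next_xl
        self.prefix = prefix
    def write(self, path, data):
        return self.next_xl.write(self.prefix + path, data)

# Build a small graph: client request -> logging -> prefix -> posix
stack = LoggingTranslator(PrefixTranslator(PosixLayer(), "/brick1"))
print(stack.write("/file.txt", b"hello"))  # wrote 5 bytes to /brick1/file.txt
print(stack.log)                           # [('write', '/file.txt')]
```

The point is only that each module sees the same call interface, so a feature like the SELinux translator discussed later can be inserted into, or removed from, the graph without touching its neighbours.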
SELinux works on policies: it enforces rules based on policies, and every process and file is labelled with a context. If you run `ls -Z` on a file, the first part is the DAC, the discretionary access control (the read/write/execute permissions for user, group and others), and after that you can see the SELinux context. The context has four parts: the first is the user, the second is the role, the third is the type, and the last is the security level. The SELinux user is similar to a Linux user; you can map Linux users to SELinux users through SELinux policies. Then there is the role, which is a kind of domain for users, defining what a user in that role can access. Next is the type of the file or process; each domain can access different types of files or processes based on the SELinux policies. The last one is the security level. What matters for the back end is what SELinux looks like on disk: the context is just an extended attribute named security.selinux, and the whole label is stored in that attribute.

Now, GlusterFS and SELinux: GlusterFS is an application which already works well in SELinux terms. All running Gluster processes have the following SELinux context. They are started by systemd, which is why they get system_u and system_r as user and role, and glusterd_t is the type for all the processes, whether it is a brick process, a client process, or the management daemon I talked about. So by default you get the following context, and below I have listed the files which a GlusterFS process can access; you can see the type for each file, and a process in the glusterd_t domain can access those files. One thing to note is that a brick can be any path on a Linux system, as long as it has the glusterd_brick_t context.
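The four-part context described above has a fixed `user:role:type:level` shape, so it can be pulled apart mechanically. Here is a minimal sketch (illustrative only, not part of any SELinux library) of splitting a context string the way it is stored in the security.selinux extended attribute; note that an MLS level can itself contain colons, which is why the split is capped at three.

```python
def parse_context(ctx: str) -> dict:
    """Split an SELinux context string into its four fields.

    The level may contain ':' in MLS ranges (e.g. 's0-s0:c0.c1023'),
    so we split at most three times and keep the remainder as the level.
    """
    user, role, type_, level = ctx.split(":", 3)
    return {"user": user, "role": role, "type": type_, "level": level}

ctx = parse_context("system_u:system_r:glusterd_t:s0")
print(ctx["user"])  # system_u
print(ctx["type"])  # glusterd_t
```

This is essentially what you see when you read the security.selinux attribute of a file with `getfattr -n security.selinux`.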
Now, not all applications which use GlusterFS support SELinux, but the following two clients carry an SELinux context on the client side as well: one is the FUSE client, which has the fuse_t type, and the other is the NFS client, which has the nfs_t type. But today I am not talking about that; that is what we already have in GlusterFS with respect to SELinux. I am talking more about the client side: how applications can set SELinux contexts on the client. Currently, with the GlusterFS architecture, clients cannot set their own contexts on files which sit on a GlusterFS volume, which breaks SELinux's mandatory access control; and since GlusterFS is basically a POSIX-compliant file system, we need to support SELinux.

So why does it not work for us? As I mentioned before, every file accessed by a Gluster process has a context, and bricks have their own context. If an application writes a context of its own, it overwrites the existing brick context, so GlusterFS processes would no longer work in an SELinux-enabled environment. We need some kind of fix. As I mentioned, at the back end the SELinux context is stored in the security.selinux extended attribute, so we just need to map that security.selinux attribute to something else. This is done with the help of an SELinux translator, which does the following: it stores the SELinux context coming from the client as trusted.glusterfs.selinux, and it does the mapping between client and server. If a client sets a context, it is stored as trusted.glusterfs.selinux at the back end; if a client requests the context, the translator reads it from that attribute and gives it back to the client. We need to intercept the relevant calls, like setxattr, getxattr, mkdir and mknod; in Gluster (network file system) terms we call these fops. So we handle those fops, and this translator is loaded at the
server graph by default. So by default every directory in the volume gets the following context, which is similar to the brick's, and applications can then change the context accordingly. One thing to note: the internal operations of Gluster, like self-heal or rebalance, should not be denied by SELinux. We may need to migrate a file from one brick to another, or heal a file; those operations are essential for Gluster, so we must not deny them, and we bypass them in the translator. On the server side we do not do anything for enforcement; that should be handled by the client side, depending on the client's Linux system.

This is a small picture which I tried to draw of what happens on the server side. A client sends a request, for example via chcon, restorecon or semanage, to the server, and it arrives as a getxattr or setxattr call. protocol/server is the first translator to see the call; it travels down to the SELinux translator, which rewrites the attribute name to trusted.glusterfs.selinux and hands it to POSIX, and POSIX sets that extended attribute on the file.

After doing this, the following are the two clients which currently have SELinux support: one is the FUSE client and the other is NFS. There is a bug in the FUSE client which is not yet fixed, so users cannot set an SELinux context until that issue is resolved; a patch has been proposed. In the case of NFS clients, we have labeled NFS, a feature that is part of NFSv4.2; using it, NFS clients can set the SELinux context directly without any issue, but some operations also need to be implemented in the NFS server which Gluster uses, NFS-Ganesha, and that is still missing. So there is a lot of work left to do in this project.

Where are we now? It was planned for 3.10 but it did not make it, so we have two patches posted upstream: one is the
implementation of the translator, and the other sets the brick context when you create or add bricks. Two more patches still need to be started: the context inherited from the parent directory should also be rewritten with the trusted.glusterfs.selinux mapping, which is missing today; and for other applications we may need to provide gfapi calls for managing the SELinux context. If you have questions you can ask on the mailing lists or on IRC; these are some useful links. Do we have any questions?
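As a closing illustration, the xattr mapping that the SELinux translator performs, as described in the talk, can be sketched as a toy model. This is not real Gluster code: the class, the `internal` flag, and the dict standing in for the POSIX xattr store are all invented for illustration. It shows the two behaviours discussed above: client-visible security.selinux is remapped to trusted.glusterfs.selinux at the back end, and Gluster-internal operations (self-heal, rebalance) bypass the mapping.

```python
CLIENT_KEY = "security.selinux"
BACKEND_KEY = "trusted.glusterfs.selinux"

class SELinuxTranslator:
    """Toy model of the server-side SELinux translator's xattr mapping."""

    def __init__(self, backend: dict):
        self.backend = backend  # stands in for the POSIX layer's xattr store

    def setxattr(self, key, value, internal=False):
        if internal:
            # Self-heal/rebalance traffic is passed through untouched,
            # so SELinux handling never blocks Gluster's own housekeeping.
            self.backend[key] = value
            return
        # Client-set security.selinux is stored under the trusted.* name.
        self.backend[BACKEND_KEY if key == CLIENT_KEY else key] = value

    def getxattr(self, key):
        if key == CLIENT_KEY:
            # Answer client requests from the mapped back-end attribute.
            return self.backend[BACKEND_KEY]
        return self.backend[key]

store = {}
xl = SELinuxTranslator(store)
xl.setxattr(CLIENT_KEY, "system_u:object_r:httpd_sys_content_t:s0")
print(BACKEND_KEY in store)     # True: stored under the mapped name
print(xl.getxattr(CLIENT_KEY))  # the client reads its own context back
```

A real translator intercepts these as fops (setxattr, getxattr, mkdir, mknod) inside the server graph, but the renaming logic it applies is essentially this.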