Okay, welcome everybody to the first session. Please welcome Jürgen and Marco.

Thank you all for being here. We are here to tell you how we use Gluster within ING Netherlands. As I said, my name is Jürgen Rij. I'm the product owner of the highly available shared storage squad. This is Marco; he is one of our DevOps engineers.

We have a current solution which we offer to our internal clients, and it is a pretty basic setup: a replica-3 Gluster where the client connects to a VIP, which fails over if one of the nodes goes down. The downside of this solution is that we deliver these three Gluster nodes to our clients and they have to do their own support and maintenance. Very often they lack the skills, or they simply don't want to do their own support and maintenance.

So we thought of another solution, and that is Gluster as a service, where we deliver NFS shares via a web portal. We do the maintenance and support of the underlying Gluster clusters, and the clients can do day-two operations, like extending a volume, via the same web portal.

How does it look? We still have a three-node Gluster. The difference is that we have two data bricks which are in the same data center but in different availability zones, and a third node, the arbiter node, which is in another data center. The arbiter node only contains metadata, so it makes the synchronization to the other data center a bit quicker and it saves disk space.

Another component we've introduced is a Ganesha proxy. This is placed in front of the underlying Gluster and helps us to be scalable, which I will show in the next slide. This is just a simple example: the clients used to land on the bricks marked with an X, but we can change the config file in the Ganesha proxy and route them to another volume.

In front of the Ganesha proxies we've placed HAProxy, again for scalability, and it also takes care of load balancing: it can decide which Ganesha proxy to route to based on the current load. Obviously we want to be highly available, so everything is doubled. The HAProxies are active-passive and the Ganesha proxies are active-active. In this picture there are two Ganesha proxies, but obviously there can be more.

And for disaster recovery purposes we have enabled geo-replication in Gluster, so we have a completely mirrored setup in another data center, and via the internal workings of geo-replication all data is replicated to the other data center.

Then we have one final component, which is Heketi, and Marco will tell you more about it. Basically, we decided to use Heketi because it offers a RESTful API and there is already an existing client base. Heketi takes care of all the capacity management: the disk space, the bricks, and where each volume is created across the Gluster cluster.
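As a command-line sketch of the layout just described: creating a replica-3 volume with two data bricks and one arbiter brick could look like this. The hostnames, volume name, and brick paths are illustrative, not from the talk.

```sh
# Two data bricks in DC1 (different availability zones), arbiter in DC2.
# All names and paths below are illustrative.
gluster volume create demo_vol replica 3 arbiter 1 \
  dc1-az1-node:/bricks/demo_vol/brick \
  dc1-az2-node:/bricks/demo_vol/brick \
  dc2-arbiter-node:/bricks/demo_vol/arbiter
gluster volume start demo_vol

# The arbiter brick holds only metadata, so cross-datacenter sync
# stays light and the third copy costs almost no disk space.
gluster volume info demo_vol
```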
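The rerouting trick with the Ganesha proxy comes down to editing an export file. Here is a minimal sketch of such an export, assuming the Gluster FSAL is used (the talk does not name the FSAL); the export id, hostname, and paths are illustrative:

```sh
# Write the export file on the Ganesha proxy (illustrative names).
cat > /etc/ganesha/exports/demo_vol.conf <<'EOF'
EXPORT {
    Export_Id = 10;
    Path = "/demo_vol";
    Pseudo = "/demo_vol";
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = GLUSTER;
        Hostname = "dc1-az1-node";   # any node of the trusted pool
        Volume = "demo_vol";         # point this at another volume to reroute
    }
}
EOF
```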
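For the HAProxy layer, a TCP passthrough for NFS that balances on current load could look roughly like this; the addresses and backend names are invented for the example:

```sh
# Minimal haproxy.cfg fragment (illustrative addresses): TCP passthrough
# for NFS, balancing on current connection count across the Ganesha proxies.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend nfs_in
    bind 10.0.0.100:2049
    mode tcp
    default_backend ganesha_proxies

backend ganesha_proxies
    mode tcp
    balance leastconn          # route based on current load
    server ganesha1 10.0.0.11:2049 check
    server ganesha2 10.0.0.12:2049 check
EOF
systemctl reload haproxy
```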
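Geo-replication to the mirrored data center is standard Gluster functionality. Setting it up by hand looks roughly like this, assuming the slave volume already exists and SSH keys are in place; hostnames are illustrative:

```sh
# Mirror demo_vol to a same-named volume in the DR data center.
gluster volume geo-replication demo_vol dr-node::demo_vol create push-pem
gluster volume geo-replication demo_vol dr-node::demo_vol start

# Check that the replication session is active.
gluster volume geo-replication demo_vol dr-node::demo_vol status
```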
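Heketi's role can be illustrated with its CLI and its REST API: you ask for a volume of a given size and Heketi decides which bricks and nodes to use. The server URL is illustrative and authentication is omitted:

```sh
# Ask Heketi for a 10 GB volume; Heketi handles brick placement.
export HEKETI_CLI_SERVER=http://heketi.example.internal:8080
heketi-cli volume create --size=10 --name=demo_vol

# The same request over the raw REST API:
curl -s -X POST "$HEKETI_CLI_SERVER/volumes" \
  -H 'Content-Type: application/json' \
  -d '{"size": 10, "name": "demo_vol"}'
```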
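The demo that follows drives all of this from a single Ansible playbook with three inputs: the volume name, the size, and a geo-replication flag. An invocation could look roughly like this; the playbook and variable names are assumptions, not from the talk:

```sh
# Illustrative invocation: only the three inputs (name, size,
# geo-replication) come from the talk, the names do not.
ansible-playbook create_gluster_volume.yml \
  -e "volume_name=demo_vol volume_size=10 geo_replication=true"
```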
Now we are going to show a small demo of how the solution works.

Within ING we use Ansible, and the internal customer can use the web portal to trigger the Ansible playbook. In this demo we trigger the playbook directly from one of the machines. We define the variables for the volume: the volume name, the size, and whether we want geo-replication of this volume to the other cluster. Then we run the Ansible playbook with these as external parameters.

The playbook starts by connecting to the Heketi server and creates the volume on the master cluster and then on the slave one. To create the volume it first checks whether the volume already exists. (There is a small issue here.) After this part it connects to Gluster directly to set up the geo-replication, because at the moment Heketi doesn't support setting up geo-replication in Gluster. The last step is to configure the Ganesha proxy to export the new volume as an NFS share.

This one is the Heketi server, and on it we can see that the volume has been created. We see the volume name twice, because one is on the master cluster and the other one is on the slave. Here we are on the master cluster, and we can see that the geo-replication session for the volume is active.

Now we go to one of the clients and try to mount the shared folder. First we create the directory. The directory has been mounted through the HAProxy, and we can see that the mount point has 10 gigabytes of disk. For testing, we create one file with dd in this directory (see the mount and dd sketch after the Q&A). These are the two Ganesha proxy machines, and we can see that the file transfer is happening on the right one. Now I'm going to stop the Ganesha service on that machine while the file is being written. You can see that the writing stops for just a few seconds, and then the HAProxy starts sending the traffic to the other Ganesha proxy. So the failover of the NFS mount takes only a few seconds, and you can keep writing the file on the share.

Here you can find our email addresses for any questions in the future. And if there are any questions now, we have five minutes.

Yes. Can you repeat the question? The question is whether Heketi is used just to create the volumes on the cluster, and the answer is yes. The Ganesha configuration is created directly through the Ansible playbook, because the configuration of Ganesha is basically just an export file: we write the file on the Ganesha server and reload the Ganesha service (a reload sketch follows after the Q&A).

The question was whether the volume is fixed in size. Is that the question? No, it's not. As I told you, we give the client a web portal, and there the size of the volume is a variable. In the future we will also offer day-two operations, so they can extend it if needed.

The question is how large the setup is in total. Right now we're in the test phase, meaning we are delivering a test setup to our clients, and so far it is only what you've seen. We're starting out small, and we can always scale out when we get more customers. Basically the setup is a three-node cluster in one data center, two Ganesha machines in each data center, and two HAProxy machines in each data center. And of course a Heketi server.

Any other questions? Okay. Thank you.
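For reference, the client-side steps from the demo, mounting through the HAProxy VIP and writing a file with dd during the failover test, could look roughly like this; the VIP address, paths, and sizes are illustrative:

```sh
# Mount the share through the HAProxy virtual IP (illustrative address).
mkdir -p /mnt/demo_vol
mount -t nfs -o vers=4 10.0.0.100:/demo_vol /mnt/demo_vol
df -h /mnt/demo_vol   # should show the 10 GB volume

# Write a large file while the Ganesha service is stopped on one proxy;
# the write stalls for a few seconds, then continues via the other proxy.
dd if=/dev/zero of=/mnt/demo_vol/testfile bs=1M count=2048 status=progress
```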
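The Ganesha answer from the Q&A, writing an export file and reloading the service, comes down to something like this; whether a systemd reload or a SIGHUP applies depends on the Ganesha version and packaging:

```sh
# After writing the new export file (see the export sketch earlier),
# make Ganesha pick it up without a full restart.
systemctl reload nfs-ganesha
# or, depending on packaging:
# kill -HUP "$(pidof ganesha.nfsd)"
```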