So hello everyone. We work with ARM hardware, and especially HPC servers, so we work with bare metal servers. We need a set of tools and an infrastructure that allows us to provision and install the whole OpenHPC stack, run benchmarks and, in general, analyze the performance of ARM servers: run a CI, see the performance gains, and get a clear picture of what is happening. We do this with multiple tools: the provisioner is Mr. Provisioner, we use Jenkins as a job dispatcher, the HPC stack we use is OpenHPC, and the whole thing is installed and coordinated with Ansible.

So how did we set up the lab? We have to house the services and the tools, so we have multiple KVM virtual machines, managed with libvirt. The provisioner, the Jenkins and the other services, grouped around the file server, are all installed on those virtual machines by an Ansible recipe that is more or less all contained in the HPC lab setup repository; you can see the link just there.

For the network setup of the administration part of the lab, we use static IPs and two different networks. Because we use bare metal servers, we have the BMC network, which is used to administer all those servers: reboot them and, in general, access the IPMI serial-over-LAN console. And we have the other, main network, the provisioning one. On both of those networks we run DHCP, and this DHCP is hooked up to Mr. Provisioner so that they talk to each other. As I said, all of this is installed via Ansible.

We also use SFTP to store all the results of the tests we do. We need to store them securely and provide a private storage space, because our members, Huawei, Qualcomm, Cavium and so on, do not want the public to be able to easily compare their servers, a Huawei server to a Qualcomm server, say. So we need to provide them with a secure and private storage space. We also use a package cacher, because we need stability in the lab and a fixed, well-defined environment for all our tests. And there is a private git repository; the whole thing is not public, because some of the configuration in the Jenkins is hooked up to the LDAP, and that, of course, we can't share.

One of the big services, big tools we use is Mr. Provisioner, for all the provisioning of the bare metal servers. This includes talking to the BMC to reboot the server and PXE boot it, and I'll show some screenshots of that later. We also have console access, so we can monitor serial output from the machines as they are being provisioned. Mr. Provisioner is built in Python, with a Node.js front end and a PostgreSQL back end. It talks to ISC's Kea DHCP server, which gives us events and easy dynamic subnet configuration, and all of this is done per MAC address. We provision the machines first with a bootloader, and the bootloader chainloads the PXE installer from the distribution, whether that is CentOS, openSUSE or Debian. Kea also feeds Mr. Provisioner's machine discovery, so new machines on the network are easily visible. All of this is automated.

On the Jenkins we have a couple of jobs. The management of the Jenkins is all done via Ansible, and everything is in a GitHub repository, meaning everything is versioned and very easily manageable. And finally there is the file server which, as I've already mentioned, provides the secure and private storage space.
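To make the shape of that setup a bit more concrete, here is a minimal sketch of what such an Ansible layout can look like. The role names, group names and file paths are illustrative assumptions, not the actual contents of the HPC lab setup repository.

    ---
    # Illustrative lab-services playbook; role, group and path names are
    # assumptions, not the actual contents of the lab's Ansible repository.
    - name: Install the lab services on the libvirt VMs
      hosts: lab_services
      become: true
      roles:
        - mr_provisioner    # provisioning UI/API (Python, Node.js, PostgreSQL)
        - jenkins           # job dispatcher
        - file_server       # SFTP storage with private per-member directories
        - package_cacher    # pinned package mirror for a reproducible environment

    - name: Configure Kea DHCP on the provisioning network
      hosts: dhcp_servers
      become: true
      tasks:
        - name: Render the Kea DHCPv4 config (static subnets, per-MAC boot entries)
          ansible.builtin.template:
            src: kea-dhcp4.conf.j2
            dest: /etc/kea/kea-dhcp4.conf
          notify: Restart kea
      handlers:
        - name: Restart kea
          ansible.builtin.service:
            name: kea-dhcp4-server   # service name assumed (Debian packaging)
            state: restarted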
We've also got a package cacher and some storage for the toolchains that we use for the benchmarks, again because we need a fixed, well-defined environment that doesn't change between runs: a reproducible environment.

So here is Mr. Provisioner. This is what you see if you click on one of the machines in the provisioner. You have a quick overview of some of the parameters: the architecture, its status, the BMC associated with it, and the multiple network interfaces. We put the BMC interface there as a quick reference; that is how we do it, but it is not compulsory. You've got a couple of actions on the side: you can access the console, reset the IPMI console, and of course PXE-reboot, reboot and power-cycle the machine. As you see above, there are the other services provided by the provisioner: the images you have, the preseeds you use to automate all the installs, the list of BMCs, the Discovery tab, which is the provisioner talking to Kea to see which machines have come up on the network, and the networks and architectures, which are more about how the provisioner itself works and the different settings associated with the different machines.

So here are the provisioning settings. As you can see, there is a sub-architecture, which corresponds to which bootloader you give the machine on PXE boot, and then the kernel, kernel command line, initrd and preseed files that we use to provision the machine. There are the users that have access to this machine and can do things on it. The users have SSH keys associated with them, and those SSH keys are then used in the preseed to provision the machine directly, so that you don't need a root password. There are no passwords and no password logins; there are only SSH keys and SSH authentication. And of course there is the event log, where you can see the DHCP requests and, in general, the DHCP traffic, so you know whether the machine has actually installed the OS, correctly rebooted and asked the DHCP server for an IP address, meaning it has at least initialized its IP stack. This is the serial console; you can download the log, which is very useful for Bugzilla reports.

And then we come to the Jenkins part. Here are the main jobs that we use in the lab: the benchmark job, the OpenHPC install and test suite jobs, and the whole provisioning job ecosystem. The provisioning job ecosystem consists of two main entry points, the cluster provisioning job and the machine provisioning job, so you can either provision the whole cluster or provision one single machine, which is especially useful for benchmarking; there is a sketch of that step below. We have multiple configuration options; note that this is still in progress. In the benchmarking job we support LULESH, Himeno and OpenBLAS, with all the different options and configurations available directly from the job interface. And there are the OpenHPC install and test suite jobs, still work in progress, but we have a couple of options.

And here is the OpenHPC matrix that we have to test: all the different dimensions that we need to be able to cover in different configurations. What is especially important here is the different provisioning options inside the cluster, that is, how the master provisions the slaves. This is either done flat, with Ansible, or done with Warewulf. At the moment only Warewulf supports AArch64; we will work on xCAT support, hopefully.
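As a rough illustration of what the machine provisioning entry point has to do, here is a hedged sketch in Ansible: it assigns a kernel, initrd and preseed to one machine over Mr. Provisioner's HTTP API and enables netboot. The URL, JSON field names and auth header here are assumptions made for illustration, not Mr. Provisioner's documented API.

    # Hypothetical single-machine provisioning step; the URL, JSON fields and
    # auth header are assumptions, not Mr. Provisioner's documented API.
    - name: Provision one machine for a benchmark run
      hosts: localhost
      gather_facts: false
      vars:
        provisioner_url: "http://provisioner.example.lab"   # placeholder
      tasks:
        - name: Assign kernel, initrd and preseed, then enable netboot
          ansible.builtin.uri:
            url: "{{ provisioner_url }}/api/v1/machines/{{ machine_id }}"
            method: PUT
            headers:
              Authorization: "{{ provisioner_token }}"
            body_format: json
            body:
              kernel_id: "{{ kernel_id }}"
              initrd_id: "{{ initrd_id }}"
              preseed_id: "{{ preseed_id }}"
              netboot_enabled: true
            status_code: 200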
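And to pin the test matrix down in one place, here is one way its dimensions could be written as Ansible variables. The variable names are made up for illustration, and only the dimensions actually mentioned in this talk are listed.

    # Illustrative encoding of the test matrix as Ansible variables; the names
    # are made up, and only dimensions mentioned in the talk are listed.
    openhpc_test_matrix:
      distro: [centos, opensuse, debian]
      cluster_provisioning:
        - ansible_flat   # the master provisions the slaves flat, via Ansible
        - warewulf       # currently the only option with AArch64 support
        # - xcat         # planned, not yet supported
    benchmarks: [lulesh, himeno, openblas]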
And in conclusion, we have tried to make our lab very modular, developing iteratively and trying to get the best design. We have multiple configurations, because we need to test along multiple dimensions. It is low maintenance, because everything is automated and everything is versioned, so we can easily pinpoint problems. And finally, we have multiple options with OpenHPC, because we need to test all the different configurations possible in that environment. We are still working on some of the Warewulf options, but that is something that will come. And that's pretty much it. Thank you very much.