So, good afternoon everybody, we are now at one of the last talks of the day. First of all, my colleagues and I want to thank the organisers for giving us the possibility to have this slot. I am Jussi Muscianisi; my colleagues and I work at Cineca, a non-profit consortium in Italy. During this talk I am going to present our cloud high performance computing infrastructure and the way our scientific user community uses it. At the end of the presentation I will also show you some nice use cases that we have hosted.

Cineca, as I said before, is an Italian non-profit consortium. It is composed of many universities, ministries, and research centres; it was founded in 1969 and its headquarters are in Bologna. The mission of Cineca is to support research and deliver IT services for all the associated members; in particular, the Supercomputing Applications and Innovation department, where we belong, delivers all the HPC services. Here you can find a list of the main pillars of Cineca, spanning from support for research and innovation to technology transfer through prototyping, problem solving, machine learning, and so on. Just to give you some numbers: over the last three years we have provided more than 10 billion computing hours to our users, we have more than 3,000 active HPC users and more than 4,000 HPC projects running on our infrastructure, and many courses and training activities are also a core business.

So, what about the infrastructure? We have two main HPC systems, a Tier-0 flagship system and a Tier-1 system, where the classical HPC simulations run. Together with this infrastructure we also have additional servers used for data repositories, data services, interactive computing, data processing, and a cloud infrastructure. We have had a cloud infrastructure since 2015, because many users expressed the need for a more flexible environment than the multi-user environment of a classical HPC cluster. So in 2015 we started by installing the first cloud infrastructure, based on OpenStack. This experience was a success both for us and for the users, and so we replaced Pego with the Meuchi infrastructure. With Meuchi we increased the compute capability of the nodes and we also started to have a dedicated Ceph storage. Last year we replaced Meuchi with a new Tier-1 system named Galileo 100; this system belongs to the Fenix infrastructure, funded through the ICEI Human Brain Project.

Galileo 100 is a very complex and, in particular, a very hybrid system: it consists of a scalable part and an interactive part, where users can run classical HPC simulations, and next to these there is also a dedicated part for cloud computing. All the nodes belonging to the cloud computing part are interconnected by a high-bandwidth, low-latency network at 100 Gbit/s, and the nodes have more than 700 GB of DDR4 RAM. A dedicated Ceph storage of one petabyte, configured in RAID 6, is also available for the cloud part, so effectively users can use more than 700 TB of storage. Moreover, there is an additional Lustre file system of 20 PB that can be shared between the scalable nodes and the cloud part; this is also useful, as I will show you in the final part of the presentation, for some use cases.

To be a little more precise, the cloud installation is based on OpenStack Wallaby, but we are going to upgrade it soon. We have three physical nodes as cloud controllers in high availability and 68 physical nodes that play the role of compute nodes. The installation was done using Kolla-Ansible. Regarding the storage in particular, it is completely full-flash, so it is ideal for hosting high-IOPS workloads. All the hardware and software architecture is implemented fully redundant, to avoid any single point of failure; moreover, we do not apply any oversubscription of the resources, in order to address high performance computing workloads. The name of the cloud infrastructure is AdaCloud, in memory of the mathematician Ada Lovelace.
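Since there is no oversubscription, the ratio between used and available vCPUs on any hypervisor should never exceed one. Below is a minimal sketch, not part of the talk, of how an operator could verify this with openstacksdk; the clouds.yaml entry name "adacloud" is a hypothetical assumption, and listing hypervisors normally requires an admin role.

```python
# Hedged sketch: verify that vCPUs are not overcommitted on the hypervisors.
# Assumes an admin-capable account and a hypothetical clouds.yaml entry
# named "adacloud".
import openstack

conn = openstack.connect(cloud="adacloud")

for hv in conn.compute.hypervisors(details=True):
    if hv.vcpus:  # usage fields can be absent on newer API microversions
        ratio = hv.vcpus_used / hv.vcpus
        print(f"{hv.name}: {hv.vcpus_used}/{hv.vcpus} vCPUs used ({ratio:.0%})")
```

With no oversubscription the printed ratio stays at or below 100%, whereas a typical general-purpose cloud overcommits CPU by a factor of four or more.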
Users can access the infrastructure both through the Horizon dashboard and from the command line. Authentication is done using the Cineca identity provider, based on Keycloak OpenID, and we are also going to add the Fenix identity provider, since the infrastructure is part of the Fenix group of infrastructures. We provide images for the users, of course, and also different flavors, built in such a way that even very big virtual machines, with an amount of resources covering an entire hypervisor, can be hosted. We provide all the core services of OpenStack, of course, plus Barbican, Manila, and Magnum, in order to address particular use cases; and, as I said before, we also offer the possibility to share the Lustre file system between the compute nodes and the virtual machines. We provide the infrastructure as a service (IaaS); the system is fully in production, and we support users during all working days.

On the basis of the hardware of AdaCloud, many nice use cases are currently hosted on the system, spanning from use cases that need real-time data processing, to workloads that need to process sensitive data, to use cases that require a huge amount of data to be exposed on the web and, in general, any framework and any workload that has flexibility as the key point. Now I will show you three use cases in particular.

The first one is related to weather forecasting. Cineca has had a long-standing collaboration with Arpae Emilia-Romagna, the Italian agency that performs weather forecasting for Italy. At present, on the scalable compute part of Galileo 100, we run the COSMO model, the parallel simulation that creates the output and the forecast maps; then some post-processing is done on the accelerated part of Galileo 100, and all these data are collected on the Lustre file system. These data are then made available to all users through a web portal, meteohub.mistralportal.it. In general, anyone can access this portal and see the predictions, that is, the results of all these models; if you register on the portal, you are also able to download some of the data used for the simulation.

In this screenshot, the background represents the values of the temperature computed by the COSMO model, while along the Italian peninsula you can see the values of the humidity; the values shown are a combination of the data produced by the simulation and the data observed by the weather stations.
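The weather workflow relies on the Lustre file system shared between the scalable partition and the cloud virtual machines: the simulation writes its output there, and a service on the cloud side picks it up for publication on the portal. The following is a minimal sketch of such a VM-side step under stated assumptions; the watch path, the file pattern, and the publish step are all hypothetical, and no claim is made about the actual portal ingestion code.

```python
# Hedged sketch: a VM-side loop that notices new COSMO output files on the
# shared Lustre file system and hands them to a (hypothetical) publication
# step. The path, the *.grib pattern, and publish() are assumptions.
import time
from pathlib import Path

WATCH_DIR = Path("/lustre/cosmo/output")  # hypothetical shared Lustre path
seen: set = set()

def publish(path: Path) -> None:
    # Placeholder for the real ingestion into the portal's backend.
    print(f"publishing {path.name}")

while True:
    for grib in sorted(WATCH_DIR.glob("*.grib")):
        if grib not in seen:
            publish(grib)
            seen.add(grib)
    time.sleep(60)  # simple polling; inotify would be the production choice
```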
Another nice use case that we host is related to sensitive data environments. We collaborate with the NIG group; NIG is a network of different Italian hospitals for Italian genomics. The idea is to create a centralized repository for human genomic variant data produced by various centres in Italy, and Cineca has the role of creating the infrastructure for hosting and analysing these data. Of course, the platform that we have built for this research group is GDPR compliant, by the nature of the data that is stored.

To give you an overview of the platform: we have a virtual machine that plays the role of front end, where the users upload the data through a web portal. Once the data has been uploaded onto an encrypted file system, it is processed and analysed by pipelines that run on a second virtual machine; after the data has been analysed, a third virtual machine starts another computation that performs an aggregation of these data. At the end, the aggregated data is provided back to the users, again through a web interface. Here are listed all the security measures that we have adopted. This platform is also scalable, in the sense that, on the basis of the amount of data that must be processed, the workers can be increased on demand (see the sketch at the end of this section).

Finally, another use case, again related to sensitive data. This use case started two years ago as a response to the SARS-CoV-2 pandemic; in this case too, we collect a large number of data sets, and the idea is to collect, harmonize, and standardize these data. Also in this case, obviously, a main challenge is to create a secure infrastructure and a secure platform to analyse the data. Having a look at the services provided: we have a virtual machine in which the data is stored and a second virtual machine where federated learning analyses are performed; all the data transfers between these two machines, the REDCap server and the FL server, stay inside the cloud infrastructure. Regarding the final user: they log in to a web portal and ask to perform some computation by using a token. In this way the final user can perform computations, but without access to the individual data values.

Okay, that's all; the time is finished. So, that is our infrastructure and some of the use cases that we have hosted.
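As referenced above, here is a minimal sketch of the on-demand worker scale-out described for the genomics platform, using openstacksdk. Every name below (clouds.yaml entry, image, flavor, network) is a hypothetical placeholder, not the platform's actual configuration.

```python
# Hedged sketch: boot an extra analysis worker when the processing backlog
# grows. All resource names are invented for illustration.
import openstack

conn = openstack.connect(cloud="adacloud")  # hypothetical clouds.yaml entry

def add_worker(index: int):
    image = conn.compute.find_image("genomics-worker")  # hypothetical image
    flavor = conn.compute.find_flavor("big-mem")        # hypothetical flavor
    network = conn.network.find_network("project-net")  # hypothetical network
    server = conn.compute.create_server(
        name=f"worker-{index:02d}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    # Block until Nova reports the instance ACTIVE.
    return conn.compute.wait_for_server(server)

worker = add_worker(3)
print(worker.name, worker.status)
```

The inverse operation, deleting idle workers with conn.compute.delete_server, would let the platform hold cloud capacity only while uploaded data is actually being processed.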