Hi, good afternoon. No, good morning. My name is Leong; I work for Intel as a senior software cloud architect. And I'm Sunil, a senior HPC architect, also with Intel. In this lightning talk we're going to show you some of the work we have done at Intel on integrating OpenStack and OpenHPC, primarily using OpenStack Ironic. The goal is to bring the two environments together, to provide seamless integration with the HPC world and a consistent interface for HPC application users. Because this is just a ten-minute lightning talk, we're not going to go too deep technically. I'll pass the mic to Sunil, who will give an overview of what OpenHPC is and then the three different approaches we tried for provisioning an OpenHPC environment on top of an OpenStack cloud, or for cloud-burst scenarios in two different cases, on-premises and off-premises.

Thank you, Leong. I'm going to give a little introduction to OpenHPC, and then I'll go over how OpenHPC can be used to enable HPC within an OpenStack-based cloud. OpenHPC is very similar in spirit to OpenStack: it is an entire middleware stack, built by and for the community, to enable HPC. It provides the common ingredients required to deploy and manage an HPC cluster: a provisioning tool, resource management, job launch, I/O libraries, development tools for HPC, and a variety of scientific libraries. Most of the libraries maintained there are highly optimized for HPC. There are around 60-plus integrated and tested modular components and libraries, and the project operates as a Linux Foundation collaborative project. There is a link on the slide in case you are interested. Today it is available at version 1.3; we launched sometime last year.

This picture shows a typical OpenHPC-based cluster. It has an SMS node, which we also call the head node, connected to the compute nodes via Ethernet as well as a high-speed network connection. The head node is very similar to an OpenStack controller node; I'll come back to that in a moment. With all the optimized libraries, you can run your HPC workloads here.

So OpenHPC is highly optimized from an HPC point of view, whereas OpenStack is built for the cloud. We put in an effort to bring them together: can we utilize the goodness of OpenHPC, all those optimized libraries, inside OpenStack? Our effort is to make OpenHPC available within OpenStack.

In the first use case, we enabled HPC as a service within OpenStack. What that means is: assuming there is an OpenStack-based cloud connected with Ethernet, and, as in our experiment, a high-speed fabric, we instantiated a complete OpenHPC-based system, including an OpenHPC-based head node and OpenHPC-based compute nodes, everything within OpenStack, using all the OpenHPC optimized libraries and making the connections between them. That is HPC as a service. What we gained with this is the HPC performance and optimization from OpenHPC, plus all the goodness of OpenStack. Our second use case is the cloud burst.
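To make the HPC-as-a-service flow concrete, here is a minimal sketch of what instantiating the OpenHPC nodes as Ironic bare-metal instances can look like from the OpenStack CLI. The image, flavor, network, and key names are illustrative placeholders, not the names used in the experiment.

```bash
# Hypothetical sketch: boot an OpenHPC head node and two compute nodes
# as bare-metal instances. All names here are illustrative placeholders.
openstack server create \
  --image openhpc-head-node \
  --flavor baremetal \
  --network hpc-mgmt-net \
  --key-name hpc-admin \
  ohpc-head

for i in 1 2; do
  openstack server create \
    --image openhpc-compute-node \
    --flavor baremetal \
    --network hpc-mgmt-net \
    --key-name hpc-admin \
    ohpc-compute-$i
done
```

With Ironic enabled, Nova schedules instances whose flavor matches a bare-metal node onto physical machines rather than virtual machines, so the OpenHPC stack runs directly on the hardware.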
So in this scenario, we assume the user has an HPC system on-premises, as well as a private cloud that is OpenStack-based. Both are on-premises, connected to the same Ethernet switch as well as the same high-speed fabric. With that connectivity in mind, all we are doing is instantiating more compute nodes from OpenStack, making the HPC cluster bigger with more compute capability. The HPC cluster can now utilize unused compute from the cloud, and as most of you are aware, HPC is CPU-hungry and memory-hungry, so with this extension we can put that capacity to work.

That experiment was done assuming both systems are on-premises. So what happens if the systems are apart, with the HPC cluster sitting here and the cloud sitting somewhere very far away? That is our next use case, which is very similar to the previous one. The only difference is that the systems are not connected together: each has its own high-speed fabric, and they are connected via public Ethernet. So we created a VPN tunnel between them. Once the VPN tunnel was established, we used the same methodology to instantiate more OpenHPC-based compute nodes from the OpenStack cloud and extended the HPC cluster with more capability from the private cloud.

One caveat: even though this works, the user is not advised to run a single HPC job across nodes at different sites, because the sites are far apart and traffic goes over a VPN tunnel, so performance will be very bad. Instead, it is advised to run multiple HPC jobs on different subclusters. We used Slurm in our experiment and created subclusters, one on the private cloud side and another on the HPC side, and with that we were able to launch different small HPC jobs and utilize both sides.

In all three use cases we used a similar design and approach. First, we build OpenHPC-based images using diskimage-builder. We created various OpenHPC-based elements, and those elements help us create highly optimized OpenHPC head-node as well as compute-node images. Second, we created various cloud-init recipes for post-boot configuration; with those we set up the SSH connections, the Slurm configuration, the Munge configuration, in case you are familiar with it, and the NFS setup, all via cloud-init scripts. Finally, we integrated the recipes, which enabled a push-button kind of functionality: you select which use case you want, and it instantiates either HPC as a service or one of the cloud-burst scenarios. In our experiment we used the OpenStack Mitaka release, with Ironic for bare-metal instances.

So did it work? We ran the HPCG workload and looked at the performance to see whether we were really gaining something. With HPCG we got the same performance as with native OpenHPC, hardly any difference, which validated our system: we were able to get HPC performance with the help of OpenHPC.
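To make the subcluster setup concrete, here is a minimal sketch of a Slurm partition layout for the off-premises case, with one partition per site so that no single job spans the VPN tunnel. The node names and counts are illustrative placeholders, not the configuration used in the experiment.

```bash
# Minimal sketch: one Slurm partition per site, so a job never spans
# the VPN tunnel. Node names and counts are illustrative placeholders.
cat >> /etc/slurm/slurm.conf <<'EOF'
NodeName=hpc[1-8]   State=UNKNOWN
NodeName=cloud[1-8] State=UNKNOWN
PartitionName=onprem Nodes=hpc[1-8]   Default=YES MaxTime=INFINITE State=UP
PartitionName=cloud  Nodes=cloud[1-8] Default=NO  MaxTime=INFINITE State=UP
EOF

# Jobs are then submitted to one site at a time, e.g.:
#   sbatch --partition=onprem job.sh
#   sbatch --partition=cloud  job.sh
```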
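For the image-build step, a hedged sketch of a diskimage-builder invocation follows. The `openhpc-compute` element name and the `ELEMENTS_PATH` location stand in for the custom OpenHPC elements mentioned in the talk, which are not named here; `centos7`, `baremetal`, and `dhcp-all-interfaces` are standard diskimage-builder elements of that era.

```bash
# Sketch of building an OpenHPC compute-node image with diskimage-builder.
# "openhpc-compute" stands in for the custom OpenHPC elements; the paths
# and output name are illustrative.
export ELEMENTS_PATH=./openhpc-elements   # assumed location of custom elements
disk-image-create -o openhpc-compute-node \
  centos7 baremetal dhcp-all-interfaces openhpc-compute

# Register the result with Glance so Nova/Ironic can deploy it:
openstack image create --disk-format qcow2 --container-format bare \
  --file openhpc-compute-node.qcow2 openhpc-compute-node
```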
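And for the post-boot configuration, here is a minimal cloud-init sketch along the lines described above: distribute the Munge key, mount the NFS exports from the head node, and start the Slurm daemon. All hostnames, paths, and the key value are illustrative placeholders, not the actual recipes from the experiment.

```bash
# Hypothetical post-boot recipe for a compute instance.
cat > compute-post-boot.yaml <<'EOF'
#cloud-config
write_files:
  - path: /etc/munge/munge.key        # must match the head node's key
    owner: munge:munge
    permissions: '0400'
    encoding: b64
    content: <base64-munge-key>       # placeholder, not a real key
mounts:
  - [ "ohpc-head:/home", "/home", "nfs", "defaults", "0", "0" ]
  - [ "ohpc-head:/opt/ohpc/pub", "/opt/ohpc/pub", "nfs", "ro", "0", "0" ]
runcmd:
  - systemctl enable --now munge
  - systemctl enable --now slurmd
EOF

# Pass the recipe when booting a compute instance:
openstack server create --image openhpc-compute-node --flavor baremetal \
  --network hpc-mgmt-net --user-data compute-post-boot.yaml ohpc-compute-1
```

Chaining these three steps, image build, instance boot, and cloud-init recipe, is what gives the push-button behavior described in the talk.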
So, as you can see on the screen, these are the key findings from our experiment. By using the OpenHPC images and cloud-init recipes I described, we can enable HPC in OpenStack in a fairly seamless way while preserving the bandwidth and performance that HPC use cases need. The key thing I want to highlight is that all of this will be open-sourced in January, next month; it's still in process. And we welcome collaboration from all of you: if you think this is something interesting for the HPC community, and if you are working on anything related to this integration, the OpenStack bare-metal (Ironic) project, the image-builder work related to OpenHPC, or any of the HPC use cases, we would like to invite you to collaborate with us and put it into the open-source community.

Question from the audience: how do we tie Slurm into Nova? We did not. In our experiment, Nova instantiates the different bare-metal images, and we use Slurm for launching jobs for now. That is the next addition we are trying to get to, where Slurm can dynamically instantiate nodes. As I said, because this is our first initial work, there is a lot we can do in the future, so we definitely welcome further collaboration on having Slurm-to-Nova support.

The diskimage-builder work is mainly for creating the images so that we can reproduce and distribute OpenHPC-based images. These are other areas of collaboration; let me go back to the previous slide. Here. One of them is the I/O share, that is, how we do the NFS share between the HPC side and the cloud when they are far apart. We would like support from the community there, so we can grow this further.

Sorry, for time's sake I would like to respect the next speakers, so I'm going to stop here. For questions, we'll stay around and can take them outside. Okay.