Hello everyone, it's a pleasure to see all the people who came to this talk. Today we'll be discussing an OpenShift cluster deployed on one bare metal server. My name is Tatiana and I work at Red Hat France, in the Telco partners team. Mainly, I actively contribute to the Distributed CI (DCI) tool, which is a very powerful CI written in Ansible that we use to install OpenShift on bare metal. I also collaborate a lot with Red Hat telco partners on their real-life 5G workloads: basically helping them install OpenShift in their private labs, install the workload, pass the tests, get certified, and automate all of this using DCI.

Here is the agenda of today's presentation. In two words, we will be addressing a pretty niche use case which is nevertheless very common in practice. This use case lies right in between the local OpenShift distributions and the enterprise solution with clusters on many physical nodes. It is something you really often see in practice: the application is too large to be fully tested on a local OpenShift distribution with one node, but it is not ready enough to go to a full cluster, and we have a hardware shortage. We'll be discussing this problem in more detail in five minutes. I will propose a solution to this problem: deploying OpenShift on one large bare metal server. We will go through the hardware requirements and what will be created, then we will move on to how exactly to do it. I will show you how easily and seamlessly it can be done with DCI, and I will show you a demo, a bit accelerated. Then we will move on to the conclusion. So that's the plan for today.

Let's really start from the motivation. As I told you, this is a problem we often meet in practice; working with telco partners, we see it a lot. Usually a 5G application is not something small, it is usually something really demanding. You see a lot of situations where you need at least two workers running, because of some particularity of the traffic or because of the application design itself. So you are not really in a situation where you could test the application on a local distribution: you really need something pretty decent.

On the other side, again, this is a telco use case, but you could imagine here any application which requires particular hardware. For telco it is often, for example, some security or data protection requirements that are not yet satisfied at the test stage, which forces you to test this large application in-house. And, as usual nowadays, you have a hardware shortage, so you have just one bare metal server and basically a choice: you can wait until more servers arrive before you start testing, or you can do something with the one bare metal server you have right now. That is the solution I'm going to discuss in this talk.

So what are we going to do, in general? We are going to deploy our OpenShift cluster as a fully virtual cluster. What does fully virtual mean? It means that we will create the OpenShift nodes as virtual machines, and we will connect these virtual machines with virtual networks. All of this we will do using the libvirt API, and all of this will run on one single bare metal server. So all the virtual machines and all the virtual networks will live inside that server. I just want to emphasize it from the beginning: all of this is in-server.
So the VMs are in-server, and the networks as well. You don't have to do any complex job of setting up switches and cabling: all of this is in-server, all of this is created by a single Ansible playbook, and it will be destroyed the same way. So it is a really convenient and simple solution.

Let's start right from the beginning, with the hardware requirements. Okay, so we will build everything on one bare metal server, but how large should the server be? What do we need from the hardware point of view, and which cluster, in terms of nodes, can we get out of it? Here I'd like to give you two examples hitting the two extreme cases. Both are real-life situations, okay? All of this is tested, it's not theory, all of this is running.

Here you have a real-life example for a large application on an enterprise lab. It's not a cheap solution: you can see that it's almost 150 gigabytes of RAM, so it's really expensive, a really enterprise-grade setup, and this is what is running in the lab of one of our partners. The main constraint here is the workers: every worker has to get 32 gigabytes of RAM and 32 vCPUs, and that is really a requirement of the application. The application is pretty heavy, it works with 5G traffic in a very particular way, so that is really the bare minimum requirement for the tests we want to run in this lab. This example basically shows you that even with one bare metal server you can still get a pretty decent level of testing and a pretty high level of confidence in what you're going to do.

So that's one example, but okay, it's really expensive; you see how much RAM, it's really enterprise hardware. Maybe you have a home lab, or a simpler setup, or you want to try it without really spending all that money. So I'm providing a second example; again, it's tested in practice, and I'm giving you the exact hardware you can use if you'd like to try it on your own, in your home lab or similar. Here again we have two workers, but this time the workers are much more modest, let's say slimmer: you have eight gigabytes of RAM and eight vCPUs per worker, but you still have two workers. It's an example of an application that can run on minimal resources but just needs to be a bit more distributed. So this is what you can do in your home lab: 56 gigabytes of RAM in total is pretty decent. In real life, to be honest, we are running it on 70 gigabytes, but still, it's tested. We see that we can really go with this solution not only on really expensive and huge servers, but on a small home server as well.

So far we have basically discussed the nodes, the machines that will be running in our cluster, but we also need to discuss one last thing: how we will connect these machines together. We really need to create some networks connecting the virtual machines we created on the server. Again, I want to emphasize that these are in-server networks, created with the libvirt APIs. It's not something where you have to set up switches and so on; it's something that will be done for you by the libvirt API, and it will create two networks. One is the provisioning network; we are using the OCP IPI installer here.
It will be used to bootstrap and provision the nodes, and the second network is the baremetal network, which will be used to connect all the nodes. The baremetal network here is in red, and you see that we will automatically create a router for external NATted connectivity. So your server will have some sort of outside connectivity: you get the possibility to go from your cluster to the outside, but with this setup you will not have the possibility to go from the outside to the cluster. And this is fine: this is a testing setup, and it's even a sort of additional protection for your situation. The baremetal network will also provide a basic DHCP and DNS configuration, and every VM will get an IP address from DHCP.

So we have discussed that we are going to create the nodes as VMs, and that we are going to create two virtual networks connecting these nodes. The next question is how to do it, and here again it's time for some advertisement of the Distributed CI (DCI) tool. Here you have the DCI logo; as before, it's a really powerful tool combining the goodies of CI with the goodies of Ansible. It's a CI that can do everything Ansible can: you can install OpenShift on bare metal, you can install your workloads, you can do whatever you want because you are working with Ansible, and you can do whatever you want because it is a CI. It's really the good things from both worlds united together. So what do we do with it? We deploy OpenShift, automate the deployment of workloads, run various tests, and even help with certification out of the box. It's nice, I like this tool for sure, I'm using it every day, and it offers a lot of flexibility.

Here we will be using DCI in two steps. The first step will be to create the fully virtual cluster that I discussed right before: the virtual machines and the networks. All of this will be done in one step, basically just running one Ansible playbook that creates everything: the nodes, the networks, the router, everything. We'll get a fully created, ready virtual cluster. And then, as the second step, we choose the solution to deploy OpenShift on this cluster, and we will be running the IPI installer, which is available out of DCI. I probably did not mention that out of DCI you have basically all the installers available: UPI, IPI, the Assisted Installer, and so on. Here, for the sake of the presentation, I'm just using IPI, but it's not a limitation; we have a lot of options available.

Let's go to the requirements, what you will need to install from the software point of view. You'll need to get Red Hat Enterprise Linux, just to keep all of your dependencies coherent. You simply install DCI; it's a dnf install, easy to do. And then the important thing here is to enable nested KVM on the jump host, your own bare metal server. Why would you need nested KVM? This is a particular point and an important one, so let me spend a minute on it. We will be using OCP IPI, so you see here we have a provisioner machine, and you probably know that OCP IPI starts by creating the bootstrap VM. Since the provisioner is already a VM, the bootstrap VM will be created within a VM: we will be creating a VM inside a VM. So what we need here is to enable nested KVM to make this stack work. This is a really important point: not all hardware allows it, but nowadays most machines have this capability. So this is one of the requirements; we will need nested KVM to make this solution work.
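Just to show what this looks like on a RHEL jump host, here is a minimal sketch of enabling nested KVM, written as a small Ansible playbook. It assumes an Intel CPU (on AMD the module is kvm_amd) and that no VMs are running yet, and the `jumphost` group name is purely illustrative; this is not part of DCI itself, just the usual RHEL procedure.

```yaml
# Minimal sketch: enable nested virtualization on the jump host.
# Assumes an Intel CPU (kvm_intel) and that no VMs are running yet,
# since the kernel module gets reloaded.
- name: Enable nested KVM on the jump host
  hosts: jumphost          # illustrative inventory group name
  become: true
  tasks:
    - name: Persist the nested=1 option for kvm_intel
      ansible.builtin.copy:
        dest: /etc/modprobe.d/kvm-nested.conf
        content: "options kvm_intel nested=1\n"

    - name: Reload kvm_intel so the option takes effect
      ansible.builtin.shell: modprobe -r kvm_intel && modprobe kvm_intel

    - name: Check that nested support is now reported
      ansible.builtin.command: cat /sys/module/kvm_intel/parameters/nested
      register: nested
      changed_when: false
      failed_when: nested.stdout not in ['Y', '1']
```

You can of course do the same by hand with a couple of shell commands; the point is simply that the nested parameter must report Y before you start the deployment.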
So that's all about the dependencies, and they are really easy to get in place. Now we are ready to go into the customization. What I'm showing you here is not something you have to write from scratch: the template is available within DCI, and you have a lot of samples and examples. Basically, you take this template, which already exists in various combinations, with a lot of comments covering the various situations that have already been thought through, and you customize it to provide the description of the cluster we just discussed: essentially, the description of your networks and the description of your hosts.

On this slide you see part one, which addresses the networks. You provide here the addresses for the apps and API VIPs, the DNS, all the standard stuff. And here are our networks: you see the baremetal network with the DHCP range from which your IPs will be allocated and the NATted mode for the outside connection, and then the provisioning network, which will basically be used for provisioning the nodes during the installation.

Then the other part of the setup: the hosts. Here you start to describe, really node by node, what will happen. Here I'm emphasizing the network parts: for every node you specify to which networks this node will be connected; here I'm providing provisioning and baremetal. And check this out, it's a nice feature: for the baremetal network I'm also specifying the MAC address. It's not mandatory, but it's nice because it pins the MAC address on the baremetal network, which makes things really easy to debug if you do further application debugging; it's not like you get an arbitrary MAC address assigned every time. You really pin it here, and it's a nice feature. So that's about the networks.

The second part is, yes, describing the nodes. As we discussed before, there is a trade-off between how much you want to spend on the lab and how much you can get in terms of RAM and CPU. Basically, once you have decided which setup suits you, what you need to do is just customize the memory, the CPUs and the disk size. You could also, although I don't think I show it here, add a second disk for storage, for example on the workers if your application needs it. And, for sure, use the KVM driver as a base. That's what you will be doing not just for the provisioner, but basically for all the nodes: you really customize node by node. You can make worker one differ from worker two and do anything you need. Again, I want to stress that it all comes from the template: basically, what you have to write here is just some numbers. It's not like you're writing this whole YAML file from scratch; you're taking whatever is already there and customizing the template.

So what we have essentially prepared is our Ansible inventory. I told you this is written in Ansible, and this YAML file describing the cluster is our inventory. What we are going to do now is create our virtual cluster with just one command: running ansible-playbook, taking the libvirt_up playbook and providing the inventory with the description of all the nodes and the networks.
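Just to make it concrete, here is a small sketch of what such an inventory carries. The field names below are only illustrative, the real schema is the one in the template shipped with the DCI samples, but you can see the two networks with the NATted DHCP range, and the per-node sizing with a pinned MAC address.

```yaml
# Hypothetical excerpt of the cluster description (illustrative field names;
# take the template shipped with the DCI samples for the real schema).
networks:
  - name: baremetal            # connects all the nodes
    forward_mode: nat          # outgoing connectivity only, nothing comes in
    dhcp_range: "192.168.122.20,192.168.122.100"   # node IPs allocated here
  - name: provisioning         # used by the IPI installer during provisioning

hosts:
  - name: provisionhost
    memory_gb: 16
    vcpus: 8
    networks: [provisioning, baremetal]
  - name: worker-0
    memory_gb: 8               # the "home lab" sizing; 32 in the enterprise example
    vcpus: 8
    disk_size_gb: 120
    extra_disk_gb: 50          # optional second disk for application storage
    driver: kvm
    networks:
      - name: provisioning
      - name: baremetal
        mac: "52:54:00:aa:bb:01"   # pinned MAC address, easier to debug later
```

In practice that one command looks roughly like `ansible-playbook -i <your-inventory> libvirt_up.yml`, with the playbook path as it is laid out in the samples.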
What this playbook will do is create all the nodes, the provisioning node, three masters and the workers, and the networks, provisioning and baremetal. It will also do one more nice thing for us: it will generate the hosts file to be used later for the OCP IPI installation. So it will not only put the whole cluster in place from scratch, it will also generate a description of this cluster in a form that the IPI installer understands later.

And here is the demo; I hope the video will work, let me try it out. This is a bit accelerated: in real life the installation takes about 10 minutes. What this part is doing is creating the virtual cluster from scratch. What I'm showing you here is the inventory file, and I'm just scrolling through all the nodes; you see it's not too large. Once I have customized the inventory, I fire my Ansible playbook, and here it's accelerated five times, so installing our virtual cluster takes about two minutes here. Again, what it does is create all the nodes, create all the networks, and generate the hosts file that will be used later for the OCP IPI installation. I'm going to cheat a bit and move right to the end of the installation, just to show you how it goes. So let me skip ahead. What you see here, and I really wanted to come to this part, is the moment when the cluster is already created, and I'm showing you what was finally built: here is the command, and I'm highlighting the nodes that were created, then the networks and the hosts file. It was a bit too quick, so I'm going to go through it in a bit more detail on the next slide. Oh, I think the video is still running, okay, I think we are good.

Here are the screenshots from the previous output, just to look at them a bit more carefully. Here you have all the nodes created: you see the provision host running, and the other nodes, which are shut off for the moment, will be powered on during the OCP installation. And we have the networks. Again, these are in-server networks: you see they were automatically created by the playbook, and they can be automatically destroyed the same way, without doing any harm and without messing up your network. This is also an important point: since they use a NATted connection to the outside, it's pretty safe, and you will not mess anything up by adding this additional DHCP server. You stay safe, you stay within your cluster: you can only go outside, and nobody will come from outside into your cluster to grab an address. There is no mess; it's a safely configured network. So you have two networks, but they are safe, they are confined to the cluster. It's really nice from this point of view.

So now what? We have our virtual cluster; we now need to run the last step and deploy OpenShift on that cluster. What do we do now? We essentially need three files. The hosts file, which is the description of our cluster for OCP IPI, is already generated by the DCI libvirt_up playbook. The DCI configuration is required if you want to add the DCI settings to get the job displayed in the DCI UI; it's basically a way to nicely display all our Ansible logs in one place, and we do it with our partners to share the installation and to help them with the debugging. And the only thing you really have to customize is the settings file. As before, settings is a template and you don't have to write it on your own.
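Just to give a feeling of how little there is to change, a minimal settings file looks something like the sketch below. The exact key names may differ, so take the heavily commented template shipped with DCI as the reference, but essentially you fill in a name for the job, the OCP version, and the install type.

```yaml
# Hypothetical minimal settings file (key names assumed; the real template
# ships with DCI and contains many more commented options).
dci_name: ocp-on-one-server   # job name shown in the DCI UI
dci_topic: OCP-4.12           # OpenShift version to install
install_type: ipi             # we use the IPI installer in this talk
```

Choosing the version is really the main decision here; for this scenario everything else in the template can stay at its defaults.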
You just have to tweak a few things. Here you just provide the name of your lab, the information that you are installing OpenShift, and the OCP version to install. Here I'm choosing 4.12, and that's enough. All the rest, the hosts file describing the OCP cluster, is already generated for you. You choose the version and you fire the installation using the dci-openshift-agent-ctl command installed before with the DNF package. All of this is pretty straightforward.

I'm not showing you a demo of the OpenShift IPI installation: it takes about 40 minutes, and it's just a standard IPI installation that would not add a lot of value here. What I want to show you is the final result, how it turned out. We get our cluster installed: you see here a usual cluster running OpenShift 4.12, which is Kubernetes 1.25. To interact with the cluster, DCI has already generated a special folder, cluster configs, in which you have all the required binaries, and you have a kubeconfig to interact with your cluster. So you have everything in one folder, generated during the IPI installation out of DCI. Here is what you finally get: these are the same nodes we saw several slides before, shown as a usual cluster, with our three masters and two workers.

So that's pretty much it. I want to emphasize again that we were addressing this gap situation, which is really common in practice: we have an application which is too large to fit into tests on the local OCP distributions, but we don't have enough hardware, or enough budget, or enough readiness to already build the full enterprise cluster with many nodes. This pretty niche problem is quite widespread, and what we propose here is to fill the gap by instantiating, for testing, OCP on a virtual cluster. Again, I want to emphasize that it's not a production-ready solution; it's not something that will give you the best performance. Some of our partners do still run some basic performance tests on it, to be honest, but it's not something you really want to do. It's mostly a testing solution, not production-ready in many senses, but it allows you to start testing much more quickly, and to get at least some testing done in the meantime.

And yes, I also want to add a bit of advertisement for the DCI tool, which we work on in our team. It's a really nice tool: a CI written in Ansible that allows you to do a lot of things out of the box. I invite you to check it out; it's quite a nice tool. We are really open to contributions if you'd like to try it, and if you have some particular cases, we are really open to community needs. We are actively working with 5G telco partners, so the tool is evolving really fast, with a lot of new installers and a lot of new possibilities. We constantly evolve and we keep working on the tests. So it's really nice and open to the community.

Here I'm providing you all the links in case you'd like to check it out. Concerning this talk, an OCP cluster on only one server, I'm providing the source code of the Ansible role I just used, in case you'd like to check in technical detail exactly how it was done. I also provide a link to the configuration templates and samples if you'd like to try it out, so you can check the ready-made solutions out of the box and just customize them. And here is a blog post on this topic.
So if you want the content of this talk in more technical detail, really with the whole configuration available out of the box and a proper discussion, just check it out; it goes into much more detail than I can fit in here between the slides. And here is some information about DCI if you'd like to get started: an introduction and the documentation, and we also have a nice blog that we try to keep up to date, where we write articles. Don't hesitate to check it out. And thank you so much. I think we have two minutes for questions. Questions?

OK, so I have a question about the DHCP here: can we provide the DHCP service directly from the provisioner?

Yes, basically that's what we do here. We have two DHCPs: one DHCP for provisioning and one DHCP for baremetal. You have them right here.

I mean, can we provide the DHCP directly from the provisioner, not from libvirt?

You mean you want to provide the DHCP for the baremetal network yourself? I'm just trying to understand the need: you already have a DHCP in your lab and you want to use it both for the lab connectivity and for the IPs of the VMs? I understand. Yes, you can. It will be a bit more challenging in the sense that, in this setup, you have the router and the external connectivity managed for you. If you already have a DHCP there, it will probably be exposed to the outside network, so you are not protected against situations where, for example, some machines from outside come to your server and take IP addresses from it. That's one of the setups I did in the past, and it was really painful, because we basically nuked the entire lab of the partner. So I do not recommend it, but you can definitely do it; you just need to be a bit more proficient in the network setup in that case.

But as far as I know, nested VMs are not officially supported; it's just a tech preview, isn't it?

You mean the possibility to enable nested KVM? Yes. You will need it anyway if you use the IPI setup on VMs, if you use OCP IPI. So yes, it's not a recommended production setup; I always emphasize that this is testing stuff. There are a lot of things here that are not really supported at the production level, starting from running the IPI installer inside VMs. It's not something you plan to run in production with full support. So yes, it's a testing setup.

And can it also be a solution for the customer?

What do you mean, to not use nested KVM?

I mean, you mentioned that this is a solution for the 5G use case, so I guess it's a solution for customers, right?

Yes, it's a solution for testing larger 5G workloads.

Even though it's tech preview, right?

Right, yes. But do not hesitate to come to me later if you'd like to discuss more details; I see that you are hands-on with this kind of solution, so maybe I can give you more details on your questions. Any other questions?

So the installer provisions the nodes over IPMI or Redfish, if I understand correctly. How do you provide the IPMI or Redfish interface to the VMs?

We are not using Redfish here; we are using virtual BMCs. But there are some solutions with Redfish; you could check with one of my colleagues, I will give you a name. Any other questions? OK, so let's thank the speaker again.