Hello, okay. This works. All right. I think it's half past now, so let's start. Welcome everyone. Today we're going to talk about running Kubernetes on OpenStack, and a bit about bare metal as well, and how to do it in a way that's fast; that's why we do it on bare metal, among other things.

But let's start by talking about the vision, the concept of the open hybrid cloud, because it's all related, as you're going to see. When you are a developer, and I'm sure many of you here are developers, you shouldn't care too much about the underlying platform that you are using. In general you're going to have four types of footprints: the platform you do development on is installed on either a physical footprint or a virtual one, and it can be in a private cloud or a public cloud. These are essentially the four footprints, and in particular we can divide them between the public clouds, a number of them are listed here and there are many more out there, and the on-premise private clouds.

So today we're going to talk about the private cloud, and in particular Kubernetes on OpenStack, and then Kubernetes on bare metal; in this case Kubernetes using CoreOS as the operating system to run everything else, on top of OpenStack.

Okay, so let's start then. Why begin with Kubernetes on OpenStack? Well, the two platforms are really well integrated. We've been working on the integration for a number of years now, with many different components. At the same time, as I said before, Kubernetes itself is workload driven. That means that as a developer I only care about the workloads; I don't really care too much about everything else. I want it to work and that's it. But for those of us working on the integration, there are things that are very important about how to integrate Kubernetes and OpenStack. For example, OpenStack is, as it says here, programmatically driven, API driven, so we can use the APIs that OpenStack exposes to work on this integration, and we've done this with a number of projects in OpenStack, as you will see. At the same time, OpenStack in general is a platform that scales very well. It's been designed from the beginning around scaling in the data center, and it scales across all the infrastructure: OpenStack is an abstraction layer for compute, for networking, even for storage. It goes across the whole data center, and it sits on top of a very solid foundation, which is Linux.

By the way, sorry, I didn't introduce myself before. My name is Ramón Acedo Rodríguez and I'm a product manager working for Red Hat, which is why you're going to see the Red Hat logo all over here. Among other things, I'm the product manager responsible for OpenShift on OpenStack, and also for something that you're going to see in a minute, which is bare metal provisioning with Kubernetes, and for the project Ironic, which we're going to talk about here. At Red Hat I work with the Ironic team as the product manager. Okay, sorry about that, back to the slides.

Okay, let's start with the integrations, Kubernetes on OpenStack integrations. The idea is that these two platforms are complementary; they don't compete with each other, they use each other, and especially Kubernetes uses OpenStack. OpenStack exposes the resources via API, and OpenShift as a platform consumes these resources.
That difference needs to be clear, especially for those of us who are developing the integration, but also for any administrator running OpenShift or Kubernetes on OpenStack. You're going to see during this presentation that I use the word OpenShift. OpenShift is a distribution of Kubernetes; it's a Kubernetes cluster. It involves more things, but essentially the two terms are interchangeable in the context of today's presentation: I'm only talking about a Kubernetes cluster.

Okay, so then, the integration points. The essential integration points between these two platforms are compute, storage, and network. Note that by compute I mean, if you're familiar with OpenStack, Nova and Ironic, so virtual machines and bare metal. For storage there are at least two integration points: one for block devices through Cinder, and another one for object storage through, well, you can call it Swift, or the Ceph RADOS Gateway. And then there's the network; we're going to expand on the network a little bit today. You can use Octavia and Kuryr, and we're going to talk about Kuryr today.

Let's see an example of what I mean by this integration we are working on. If you are using Kubernetes and you need a persistent volume, you create a persistent volume claim. You don't really care whether it's OpenStack underneath, or AWS, or whatever it is; what you care about is the persistent volume. What happens under the hood, since the two platforms are deeply integrated as I was saying, is that OpenShift knows it needs to tell OpenStack: hey, can you create a volume for me? And once you have it, tell me what it is, so that I can present it to the pod that requested it. That's essentially it, and it makes this transparent for developers (I'll show a small sketch of what such a claim looks like in a moment). So this is how it works: in this case you have OpenShift as the platform on top, the OpenStack platform below, and a virtual machine in between. If you're familiar with how Cinder works, through the Cinder API this volume is created, it gets attached, and it ends up in the pod where the app is running.

Okay, so that's an example. Let's now see, a little bit bigger, how a typical logical architecture of OpenShift on OpenStack would look. Here you have OpenShift on top and OpenStack below, and there are a few important things. One of them is that you have your applications, which are the only thing you really care about, and these applications can run on virtual machines directly or in containers, and these containers can in turn run on virtual machines or on bare metal nodes. You also have a number of other services here, you can see Cinder, Swift, Octavia, Kuryr, that are part of this architecture and that, in a sense, are used transparently for you once this is installed. As I said, the platforms are complementary: one exposes the services and the other one consumes them. So this is how it looks in terms of a logical architecture.
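Going back to the persistent volume claim I mentioned a moment ago, here is a minimal sketch of what that looks like from the developer's side. This is illustrative only and not taken from the talk: the storage class name "standard" and the claim name are assumptions, and the actual class name depends on how the cluster and its Cinder-backed provisioner were set up.

```bash
# Create a claim; the Cinder-backed provisioner configured by the
# OpenShift/OpenStack integration creates and attaches the underlying
# Cinder volume for you. "standard" is an assumed storage class name.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
EOF

# The claim becomes Bound once Cinder has created the volume; on the
# OpenStack side the same volume shows up in `openstack volume list`.
oc get pvc app-data
```

The point is that the developer only ever touches the claim; the Cinder call happens behind the scenes.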
Let's now get a bit more technical. It's not going to be super technical, just a little bit. If you want to do this, you will need an installer, and before you proceed with the installation you're going to want to know: what do I need to deploy OpenShift, or Kubernetes, on OpenStack? When we did this, this is what we ended up with in terms of recommending what you need from OpenStack.

I put a QR code there so that you can go to the link directly, but this is what you need. We did this latest iteration with what we call the OpenShift installer, which installs a Kubernetes cluster on OpenStack (on other platforms as well, but here on OpenStack), and we did it with what we call OSP 13, which is OpenStack Queens. We tested all of this and realized that we needed three master nodes and two worker nodes with the flavors in that list. It's not that we mandate these exact sizes, but these are at least the flavors you're going to use in OpenStack. You also need object storage right now, because all of this is based on CoreOS, and CoreOS uses, how is it called, the cloud-init of CoreOS: Ignition. It uses an Ignition file, and that Ignition file is temporarily stored in Swift. That's why you need Swift, plus a CoreOS image. In terms of OpenStack resources, I'm not going to list them all, but when we finished we saw that this is what a plain cluster uses right after installation, so that you know what to have ready when you go and do this.

One more thing, and this is again part of the OpenShift installer, which is what's doing all of this (you're going to see a bit more about the installer shortly). You know that Kubernetes, OpenShift, requires DNS internally, for example so that each master knows about every other master and each worker knows about all the other workers. This is internal DNS, so with the installer we deploy CoreDNS and mDNS running on the nodes. You don't have to worry about that; it just happens. If you're interested in how it works you can look at the containers, but as an administrator you just know that this happens and that you don't need to mess with /etc/hosts or anything like that. In terms of load balancing, similarly, there's another container that does the load balancing internally for you; it uses, by the way, HAProxy and Keepalived, so that you have the VIPs for the internal API, the ingress traffic to the workloads, and the internal DNS requests. I think this is good for you to know, to understand how this works, even though as an administrator you don't have to care. And that's a good thing, because if these didn't exist you would have to come up with your own way for the cluster to do this.

Okay, so now on to the networking: Kuryr. What is Kuryr? How many of you are familiar with Kuryr, have heard about it? Yeah, some of you, not everyone. Okay, so Kuryr is, let's say, part of the network integration, and as it says here, it improves performance. Let's think about why it exists. If you think about how a pod talks to another pod, if you use what we call the OpenShift SDN, you will see that it creates a VXLAN tunnel and all the traffic goes through that VXLAN tunnel. Now, if you're familiar with OpenStack, you know that OpenStack does exactly the same thing: to let one virtual machine talk to another, a VXLAN tunnel is created. And if one virtual machine is, say, a worker node, and another machine on top of OpenStack is another worker node, with pods that need to talk to each other, what happens is that you have a VXLAN tunnel inside another VXLAN tunnel. That's double encapsulation, and that doesn't sound great, right?
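To put a rough number on that double encapsulation, here is a back-of-the-envelope sketch (not from the talk; the 50-byte figure is the approximate VXLAN-over-IPv4 overhead of the outer Ethernet, IP, UDP and VXLAN headers):

```bash
# Each VXLAN layer adds roughly 50 bytes of outer headers.
UNDERLAY_MTU=1500
VXLAN_OVERHEAD=50

echo "Payload left after one VXLAN layer:  $((UNDERLAY_MTU - VXLAN_OVERHEAD))"      # ~1450
echo "Payload left after two VXLAN layers: $((UNDERLAY_MTU - 2 * VXLAN_OVERHEAD))"  # ~1400
```

So with two layers of tunneling, packets that fit a normal 1500-byte MTU no longer fit the underlay and have to be fragmented, which is exactly the cost Kuryr removes.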
It happens all the time anyway, but we came up with a way of fixing it, so that you don't have two sets of headers eating space in every packet and you don't get so much packet fragmentation. If you're familiar with all of this, you know what I'm talking about, but essentially what you need to know, and I'm going to show you some numbers, is that this makes the communication much faster when running OpenShift, when running Kubernetes, on OpenStack. Kuryr is a CNI, a Container Network Interface, for Kubernetes, and as it says here it provides the communication between pods when running on OpenStack.

Now, this is recommended most of the time, but not all the time. It's recommended when you have tenant networks, and as I said, with tenant networks the VMs in them communicate with each other through VXLAN tunnels, and that's the problem you want to solve: the double encapsulation, VXLAN on VXLAN. So when you have that, Kuryr is recommended. But when you don't have tenant networks and you're using VLAN-based provider networks, where the OpenStack virtual machine is connected directly to the physical network that you share with other parts of your environment, not just OpenStack, then it's not needed. It's not that it's not recommended; it simply doesn't solve the problem it was designed for.

It integrates with Octavia. How many of you know what Octavia is? Okay, not everyone. It's the load-balancer-as-a-service that OpenStack provides, and Octavia, for every load balancer that you need, creates a virtual machine. That virtual machine, as small as it may be, is still a workload running on your hypervisors. So say you expect a lot of services exposed with Kubernetes: you know, when you scale an app you need a load balancer that distributes the traffic among all the little pods you have across the nodes, so Octavia will create a VM for it. If you happen to not have many hypervisors, the network performance improvement that you get through Kuryr can be great, but it comes at the cost of some overhead on the hypervisors because of the extra virtual machines they run.

So, if you're still with me: Kuryr uses Octavia as its load balancer internally, it uses trunk ports as well, and we tested it and made it fully supported with Queens, with OSP 13.

Okay, a little bit more about the Kuryr internal architecture. Here you can see an OpenStack compute node, and in this compute node you have a virtual machine. This virtual machine is an OpenShift, a Kubernetes, node. I'm sure, if you've ever debugged virtual machines on OpenStack, you've seen the br-int OVS bridge; inside the virtual machine it's just eth0 or whatever. So this is pretty much how it works: the Kuryr CNI is connected to the pods, and towards the outside it's connected to Neutron, and this is where the magic happens. This piece here is what does the translation, saying: hey, you already have a VXLAN tunnel there, so I'm going to use that VXLAN endpoint to connect these pods that want to reach another VM. That's essentially the idea.
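If you want to see what Kuryr is doing from the OpenStack side, a simple check is to list the resources it creates. A minimal sketch, assuming you have project credentials in clouds.yaml (the cloud name "openstack" is an assumption) and the Octavia client installed:

```bash
# Load OpenStack credentials; the cloud entry name is just an example.
export OS_CLOUD=openstack

# Kuryr drives Octavia: exposed Kubernetes Services show up as load balancers.
openstack loadbalancer list

# Pods are wired into Neutron through trunk ports on the worker VMs...
openstack network trunk list

# ...and each pod ends up as a Neutron port on the pod subnet.
openstack port list
```

This is only to illustrate the "one platform exposes, the other consumes" relationship; the exact naming of the created resources depends on the Kuryr version and configuration.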
And as I said, it's integrated with both Octavia and Neutron. Okay, still with me? Yeah, okay.

So recently, I think just before the summer, we did some performance tests to see the gain, because yes, we knew it was faster, but I kept getting the question: how much faster? Should I care? Well, this is one of the tests we did, between pods on the same hypervisor, not on two separate hypervisors, because initially we wanted to see whether there is any improvement at all in that case. We did it with three packet sizes: 64, 1024 and 16,384 bytes. And yes, there's a bit of an improvement, but perhaps not enough on its own to justify having a team developing Kuryr, a community, and so on. Still, it's an interesting improvement. Where we noticed the biggest improvement, though, is when you have to move data across two different hypervisors. That was really impressive. We knew it would be faster, but when we saw the numbers it was something like nine times faster. Well, it makes sense: not having fragmentation and using a single tunnel is a very decent improvement. We did many more tests, not just these, and we published a blog post with what we found and some reflections on it; here you have a link so you can read it. It's very interesting.

Okay, now, more reading. I'm going to give you more reading if you're still interested in the subject, and that's a reference architecture. We said, look, people are doing this, why don't we do it as well and make some recommendations on what we see as good practices: a reference architecture that you can use as the basis for your own architecture. We did it with OSP 13, that's OpenStack Queens, and OpenShift 3.11, that's Kubernetes 1.11, if I recall correctly. We're working on a new one, by the way, but it will be a similar idea. What you're going to find there is a lot of advice on how to do it and how the services work with each other in detail, so it's a very interesting document if you want to deploy this combination.

Now let's try to install this. Okay, ways to install it. First I wanted to talk about the OpenShift installer. The OpenShift installer installs OpenShift; that's what its name says. It creates a Kubernetes cluster for you, and there are essentially two ways of doing this, depending on the level of customization that you want or need. One is: I want my installer to do everything for me. All I want is to point the installer at the infrastructure, in our case at OpenStack, and let the installer decide everything: create the virtual machines, create all the services, the security groups, install everything, and when it finishes, tell me how to access the cluster. That's what we call IPI, installer-provisioned infrastructure. The other way of installing with the installer is: no, I'm going to create all the resources myself. I'm going to create the virtual machines here and there, of this size and that size, so that instead of being provisioned by the installer, the infrastructure is provisioned by the user. Okay, so we start with the first one. This is essentially what happens with installer-provisioned infrastructure: you have an OpenStack cloud installed, and the installer creates the OpenShift cluster for you.
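As a rough sketch of what driving the IPI flow looks like in practice (not taken from the slides; the directory name and the "openstack" cloud entry are assumptions, and the interactive prompts vary between installer releases):

```bash
# Assumption: an "openstack" cloud entry with credentials exists in clouds.yaml
# for the project where the cluster will be created.
cat ~/.config/openstack/clouds.yaml

# Generate an install-config.yaml interactively (platform, external network,
# flavor, etc.). If you want Kuryr instead of the default SDN, the generated
# install-config.yaml is where you would set the network type before installing.
openshift-install create install-config --dir=ocp-on-osp

# Let the installer provision everything on OpenStack and bring the cluster up.
openshift-install create cluster --dir=ocp-on-osp --log-level=info
```

When it finishes, the installer prints the console URL and credentials for the new cluster.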
Before it can do that, the installer needs to tell OpenStack: hey, create the networks, the internal load balancers and DNS (which, as I already explained, are not OpenStack services; they are containers, which makes our lives a little bit easier), create the instances, deploy CoreOS, and so on. So that's what happens, and, quite obviously, in the other installation experience all of this has to be done by the user before proceeding with the installation.

Okay, so far so good. If you want to do this on OpenStack, we have documented it on GitHub. In the first iteration, when we developed this, we did the IPI type of installation; right now we are actually working a lot on the UPI type of installation that you have here. Take a look, try it out, and let us know what your experience is. You can open issues on GitHub for the installer, and we are more than happy to hear your feedback and help you with the process. And if you use Red Hat and go to try.openshift.com with your Red Hat account, you will notice that there are a number of platforms, and this platform is there too. It does essentially the same as what we described for the IPI installer, everything downstream, fully supported.

Done with that; now the second, more or less, part of this talk is about bare metal, and it's still Kubernetes, OpenShift, on bare metal. First I wanted to spend some time talking about bare metal itself. In the past three years or so, since 2017, maybe a little bit of 2016, we've seen a huge interest in bare metal in general, and that's pretty cool. We've seen how Amazon provisions bare metal nodes for you if you want, and some of the other clouds as well. We've seen how, in the OpenStack user survey, the number of OpenStack deployments with Ironic, which is the bare metal service in OpenStack, has increased a lot. I haven't seen the report for 2019, but I would expect that it's still growing. The reasons? Well, there are many, but some of the ones we have identified are, in particular, Kubernetes on bare metal; that's one, and it's an important one. Then there's HPC, high performance computing; that's not new, it's been common since the beginning, for cases where you say: a virtual machine is not enough for me. Or when you need direct access to dedicated hardware devices. Yes, we do a lot of PCI passthrough and so on, but sometimes, in the scientific community for example, they need this. Or, for example, I know of a customer that's using Ironic to test different CPUs: you build a CPU, you make some changes to it, and you want to compare the results of a test on the previous CPU and the new one, and they use Ironic to do that. That's another pretty cool use case.

All right, then let me talk about the OpenStack bare metal service, Ironic; that's its logo there, which was actually designed by a colleague at Red Hat some time ago. There are a number of features here. I'm going to cover some of them quickly, because we don't have a lot of time, but it's just for you to see the power of running bare metal in the private cloud as if it were virtual machines. So, essentially: hardware lifecycle management, a great reason, right?
You have your hardware inspected, with all the details stored for you to access whenever you need them, ready for you to deploy, and to deploy with any operating system that works on your bare metal nodes, stored in your OpenStack essentially as if they were virtual machine images. It's amazing to be able to do this with bare metal; that's really an achievement. What else? We do cool things like the inspection I was mentioning, which even extracts information from the network using LLDP. It uses the same type of images as you would use with virtual machines, QCOW2. It does things like routed deployments, and we're going to see a little bit of that, multi-tenancy with proper isolation between networks, auto-discovery, and a number of power management devices are supported.

Let's look at a very cool one. This is multi-tenancy, and I wanted to spend some time on it because it's not an easy one. Essentially, what we do here is: you have multiple tenants using the same pool of nodes, but they are still separate tenants, so you want to keep them separate from one another, and you need a way to isolate the bare metal nodes at the network level. How do we do that? Well, we need to go to the switch and change the VLANs, associating one tenant with one VLAN, or more than one, and we need to do that dynamically at provisioning time, because a node will be owned by tenant A now, and maybe later, when tenant A releases that node, tenant B comes in and uses it again, and you want to make sure tenant B cannot access tenant A's network. You do that with VLANs. It's a very simple concept that we've been using for many, many years, but now you need a way to do it in an automated way, in this case in a private cloud. So essentially this is what we do, and if you know about OpenStack and Neutron: we do it with an ML2 driver that is based on Ansible, on Ansible Networking. You don't need to worry too much about that; just install it and know that it uses Ansible underneath. The way it works is that you boot the bare metal node as if it were a virtual machine, you say openstack server create and so on, and then the ML2 plugin configures the switch: it goes to the switch and says, hey, configure the provisioning network here, configure this VLAN, and during the installation of the node you use a separate VLAN that nobody can access, only the administrator (well, you decide that, but that's the idea: nobody else can access it). Once the bare metal node is provisioned, the plugin goes back to the switch port and configures it with that tenant's VLAN. So this is a pretty cool feature, and the same happens when you terminate the node; it's clean. It's a very cool feature and it essentially puts the bare metal experience on par with that of virtual machines.
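To make that concrete, from the tenant's point of view this is just the normal instance workflow. A minimal sketch, where the flavor, image, network and key names are assumptions for illustration only:

```bash
# Boot a bare metal node exactly as you would boot a VM; Nova schedules it
# onto an Ironic node, and the Ansible-based ML2 driver moves the switch port
# to the provisioning VLAN during deploy and then to the tenant's VLAN.
openstack server create \
  --flavor baremetal \
  --image rhel-8 \
  --network tenant-net-a \
  --key-name my-key \
  bm-node-01

# Deleting the instance cleans the node and reconfigures the switch port,
# so the next tenant cannot see anything from the previous one.
openstack server delete bm-node-01
```

The tenant never touches the switch; the VLAN changes happen automatically around the instance lifecycle.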
Another feature, and I also wanted to dedicate some time to this one because it's very widely used: if you've seen this network topology, how many of you have heard of the leaf-and-spine network topology? Yeah, some of you have been working on networking teams, right? It's very popular among network teams. What you do here is: you have your Ironic nodes, which are capable of provisioning all these nodes, only that this is leaf zero, this is leaf one and this is leaf two, and each leaf is a separate network, a separate subnet. And you know how Ironic works: Ironic provides networking to the nodes via DHCP, to then allow the installation of the image on them. But when you provide DHCP and you have different leaves, each with its own top-of-the-rack switch, the DHCP request, when a node wants to boot up, needs to go through that top-of-the-rack switch. Well, thanks to a technology called DHCP relay, which is available even in the most basic top-of-the-rack switches, we can forward, relay, that DHCP request, and it goes to the spine switch and all the way to Ironic. This might be simple to understand, but it's not easy to implement; we did it, though, and it's there in Ironic now.

Another one. This is perhaps not easier to implement, but it's more dependent on the driver that you use, for example Redfish: it allows you to set some BIOS settings. So that's another feature that you have in Ironic. Another very cool one is node auto-discovery. In this case, when you add new nodes to your network, to your data center, you rack them up, power them on, make sure the NIC is connected to the right ports, and so on, and they are auto-discovered and registered in Ironic. If you want, and there's an example here, you can do things like: hey, if you detect that this node is a Dell node, configure this password. You can do a number of things; there are a number of parameters, and this is just one example based on the data collected during inspection. You can play with this if you want, and if you don't do anything, at least you have the nodes discovered for you. So it's pretty nice.

And then Redfish. I'm sure you know IPMI; Redfish is a similar idea with many more features, and it's 100% API driven. Many vendors, all the famous ones, have implemented Redfish in their BMCs. So you can power nodes on and off remotely, change the boot order, set some BIOS parameters, all through Redfish, and Ironic supports that.
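As a rough illustration of what that looks like with the Ironic CLI (the BMC address, credentials and system ID below are placeholders, not values from the talk):

```bash
# Enroll a node that is managed through its Redfish BMC.
openstack baremetal node create \
  --name node-01 \
  --driver redfish \
  --driver-info redfish_address=https://bmc.example.com \
  --driver-info redfish_system_id=/redfish/v1/Systems/1 \
  --driver-info redfish_username=admin \
  --driver-info redfish_password=changeme

# Check the driver configuration and exercise remote power control.
openstack baremetal node validate node-01
openstack baremetal node power on node-01
```

Everything goes over the Redfish REST API of the BMC, so there is no need to touch the machine physically once it's racked and cabled.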
Okay, so that was an introduction to many of the things you can do with Ironic, so that, if you want, you can also use it to deploy OpenShift, and that part is pretty simple. I've put a link in there and I'll explain in a second, but essentially you have your OpenStack with Ironic in it, and you use it as you normally would to deploy bare metal nodes; in this case, the bare metal nodes you deploy are then going to be used to build an OpenShift cluster, a Kubernetes cluster. So you pre-deploy them, and once the operating system is on them, with the network and everything else needed to have them online, you point the OpenShift installer at them and everything gets installed. Do you remember the two types of experience I mentioned before for installing OpenShift, user-provisioned and installer-provisioned? This one is user-provisioned, because you, as the user, provision the operating system, and once everything is up and running, with nothing on it yet, the installer installs what remains to create the cluster, which is everything OpenShift itself needs, its dependencies. In this link you will see how we explain how to do this installation on bare metal. It doesn't talk about Ironic in particular; it says: have the nodes installed, maybe set up a DHCP server and a PXE server. You can install these bare metal nodes in different ways, and one of them, a very cool one, is with Ironic, for many of the reasons I explained before.

Okay, and to finish, how much time do I have? Some time, okay. A new project, a new project that has connections with both OpenStack and Kubernetes: this project is Metal³, "metal cubed". If you remember the four footprints I was talking about before: we've been talking about the private cloud with OpenStack, the private cloud with virtual machines, and a little bit of bare metal as well, but this project is focused on bare metal. Metal³ uses Ironic, and it also uses the Kubernetes operator framework. Are you familiar with the Kubernetes operator framework? It's a way of automating knowledge that you would otherwise have to apply manually, like creating a bare metal node or destroying a bare metal node; tasks that, as I said before, you would do as a human or via the API, an operator can automate for you. So thanks to this framework, and to Ironic, Metal³ is able to do bare metal host management with Kubernetes. What does that mean? Well, it means, first, that Metal³ runs on Kubernetes: you need a Kubernetes cluster to be able to use it, and it's managed through Kubernetes interfaces, including the Cluster API. So what happens here? You can get or manage machines through the Cluster API as if they were machines in, you know, AWS or OpenStack. You do that, and the operator receives these requests; say one request is: give me one machine. That machine could be an AWS machine, an OpenStack machine, or a physical machine here; the request comes through the Cluster API. So now the operator says: okay, I'm here to provide bare metal nodes, this request is asking me for this deployment, and I know how to manage bare metal nodes because I know Ironic, so when I want a node I tell Ironic: hey, install this image on this node, change the boot order, tell me when you're done, and then I pass all that information back to Kubernetes and say: here you go, I have a node for you. I'm oversimplifying, but that's what we care about: having a cloud-like experience with bare metal.
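To give a feel for that interface, this is roughly what registering a host with Metal³ looks like. A minimal sketch: the BMC address, MAC address and credentials are placeholders, and the exact fields can differ between Metal³ versions, so check metal3.io for the current API.

```bash
# The operator watches BareMetalHost resources; creating one tells it to have
# Ironic register, inspect and (optionally) provision the machine it describes.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: node-1-bmc-secret
type: Opaque
stringData:
  username: admin
  password: changeme
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-1
spec:
  online: true
  bootMACAddress: "00:11:22:33:44:55"
  bmc:
    address: ipmi://192.168.111.1:6230
    credentialsName: node-1-bmc-secret
EOF

# The host then moves through registering/inspecting/ready states.
kubectl get baremetalhosts
```

Everything Ironic does (power, boot order, image writing) is hidden behind that one Kubernetes resource, which is the point of the operator.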
That's not easy, and for those of us who have been in the Ironic project for a number of years, you can see that thanks to Ironic we've simplified a lot of this process. All the knowledge about automating the interaction with the bare metal has been captured by Ironic and is now used by this operator and exposed to Kubernetes.

Okay, so if you want to give it a go, go to metal3.io, metal 3, pronounced "metal cubed". That's the project in general, and if you want to install Kubernetes, OpenShift, on it, you can go to our GitHub. This is the GitHub of the OpenShift installer, and one of the documents there explains how to install on bare metal; by the way, this falls under the category of installer-provisioned infrastructure. We're working on adding more detail to that document, but if you want to try it out now, it's there for you to use. As I said before, if you try this out, feel free to open issues and ask us questions. We're more than happy to learn from your feedback, and if you want to fix code, that's always welcome.

All right, so in summary, this is a bit of everything we talked about today. Installing Kubernetes on OpenStack: you have the URLs there. Kuryr performance: Kuryr is an important integration point and provides a lot of performance improvements if you need them. The reference architecture, for you to learn how all of this looks and how to adapt it to your own use case and your own platforms. Ironic, bare metal: here you can see the whole set of features; this is more the general documentation about Ironic, with many details, including what I said about the ML2 Ansible Networking driver. Everything is captured there. And then the Metal³ metal3.io URL for you to learn more about it. With that, if you have any questions, I'm happy to answer them.

We do have another microphone. Yes: your presentation was about Kubernetes on OpenStack, so thinking about the upside-down case, OSP managed by OpenShift, is that possible?

Is that possible? OSP, OpenStack as shipped by Red Hat, already has its services in containers, but not on an OpenShift cluster. Even though it would be technically possible, it's not something that brings enough value to end users to justify all the work of putting OpenStack on an OpenShift cluster. Everything is containerized, though, just not as an OpenShift cluster. Thank you. Sure. Okay, do I have more time? Any more questions?

There's another question there. I think you mentioned the performance improvement using Kuryr, I mean the communication between different nodes. I guess that is related to something like VXLAN offload on the NIC; I guess we can use the NIC offloading feature for VXLAN if we do not have the duplicated VXLAN. So I think this is related to that feature. Is it the same? Is that correct?

They're two separate performance improvements. One is the offload that you get from the NIC, and the other one is the lack of fragmentation when you put a tunnel inside another tunnel, because you can still have the VXLAN offload, but only on one VXLAN layer, not on both of them.

Yeah, thank you. Can I just follow up on the same one? So the benchmark that was shown, the nine times faster: was that in the scenario where you have Kuryr and VXLAN offload on the NIC, or is the improvement coming only from the service you created?

No, both, but the VXLAN offload is always there. Yes, yes.
That's why the question, yeah. So it's a combined benefit, because you use both of these features. Yeah.

Okay, all right. Well, thanks everyone for attending this session, and I hope you found it useful.