I'd like to introduce Peter Klitschewski from Red Hat, and he will be talking about managing container infrastructure. Thank you very much.

Good morning, everyone. I'm really glad to see so many people, so let's start. If there are any questions, let's have them at the end of the presentation, or you can always visit the booth; I will be there, so we can take questions there as well.

This talk is not about hosted container engines, because we have no control over the infrastructure over there. I would like to focus on the VMs, that is, on the VMs that we can manage in our own infrastructure.

OK, let's move on. Here's the agenda. At the beginning I would like to introduce you to a bunch of open source projects that I will present later on, and briefly cover some of their interesting features. I have those projects up and running on my laptop, so I will show exactly how the topology looks, and then we'll see them running. After that, I would like to cover some specific use cases where we can leverage the infrastructure, and the software that manages it, to ease some of the pains of managing containers.

OK, so let's move on to oVirt. How many of you have heard about it before? OK, great. For those who haven't heard about it, here is a quick introduction. In short, it is management of VMs running on KVM. You can place it alongside VMware offerings like vCloud or ESXi. This is the simplified architecture, and I would like to talk about it because it's really, really simple: it's not hard to deploy, and it's easy to have it up and running.
So we have the main part, which is the management piece, the engine. It talks to Python processes on the hosts, which talk directly to libvirt, and through that connection we can manage VMs. We have the capability to manage all of that via the UI or the SDK.

The second open source project that I'm going to talk about: we need to manage containers somehow in our infrastructure. For that purpose I'm going to use OpenShift Origin, which is a platform built on top of Kubernetes with a bunch of add-ons. It is mainly dedicated to developers, to ease the lifecycle of the applications that they are working on. It is quite easy, actually, to have your software build a container image, push it into the container registry, and then have it up and running in your infrastructure. This is the main flow that OpenShift is aiming for, and I'm going to show you a bit of that today as well.

The third and most important piece is the ManageIQ project, which we can describe as a manager of managers. What this software does is talk to different virtualization, cloud, and container technologies and manage them all together. It allows you to define IT operations workflows, define quotas, and drive deployments, because it supports configuration management as well. There is also quite an interesting feature that analyzes VMs or containers and reports whether there are any vulnerabilities.

OK, so let's move on to what I have up and running on my laptop. I have oVirt, which manages all of the VMs; it is the underlying project that manages everything running here. We have an OpenShift Origin master and a node: I mean, a master here and the node over here. And we have ManageIQ, which is configured to talk to both, so we have a nice integration between VMs and containers. All four of those VMs are running in a single huge VM on my laptop. So let's see how that works.
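The management path just described (engine talking to the hosts, exposed via an SDK) can be sketched with oVirt's Python SDK, ovirtsdk4. This is a minimal, hedged sketch: the endpoint and credentials are placeholders, not anything from the talk.

```python
# Hedged sketch: listing VMs via the oVirt engine API with the Python SDK
# (module ovirtsdk4). Endpoint and credentials below are placeholders.

def list_vm_names(connection):
    """Return the names of all VMs the engine knows about, sorted."""
    vms_service = connection.system_service().vms_service()
    return sorted(vm.name for vm in vms_service.list())

def connect(url, username, password):
    """Open a connection to the engine (requires ovirt-engine-sdk-python)."""
    import ovirtsdk4 as sdk
    return sdk.Connection(
        url=url,            # e.g. "https://engine.example.com/ovirt-engine/api"
        username=username,  # e.g. "admin@internal"
        password=password,
        insecure=True,      # skip TLS verification; lab use only
    )
```

With a reachable engine you would call `list_vm_names(connect(url, user, password))`. Note that `list_vm_names` only needs an object exposing the `system_service()` entry point, so it is easy to try out against a stub.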
So as you can see here, I have one VM which is running locally on my laptop, and inside that specific VM I have those four VMs that I mentioned. Let's take a look at the UI.

This is the oVirt management UI. From here you can manage VMs, infrastructure, storage, and networks. Everything that you need to have your infrastructure up and running you can do from here, or from the SDKs. So we have four VMs up and running. I don't have much time to show all of the features, so I will move on briefly, and then we can focus on those interesting use cases for managing the infrastructure to ease the pain with the containers.

Here I have OpenShift Origin, where I already provisioned a project. On top of Kubernetes, OpenShift has this notion of projects, where you can take your development workflow from building your container, to deploying it to the registry, to having it up and running. Here I have two replicas of a single container which is running Python.

And this is the most important piece of this infrastructure: ManageIQ. From there we can manage the infrastructure itself. Here you can see the configured oVirt provider. We can see the VMs and a bunch of data about them, about everything. What ManageIQ does is create an inventory based on the state of each configured provider; in this case, it's oVirt. We can see how the networking is configured, how the VMs are configured, where the VMs are running, and how they are consuming resources. You can see data from monitoring as well, and you can do chargeback later on. In the same UI you can see similar data for containers: we have the provider that we configured for OpenShift, and we can see similar data for the containers. So let's take a look at some of the pods that we have.
So over here you can see that there is a bit more than I described before, because here we have a bunch of containers which are part of the OpenShift infrastructure itself, including the software that is required to have a nice experience with it. Let's take a look at one of the pods from the project that I mentioned before.

What we can see here is a bunch of information about the pod itself. But the most interesting part, from my perspective and from the perspective of this talk, is that we know exactly the underlying VM. As you know, Kubernetes and OpenShift, and much of the container management software, were created in a way to be transparent about where the containers are running. They built very interesting abstractions so that we don't need to think about whether we are running on bare metal, on VMs, or in the cloud. But here we are actually able to say exactly where those containers are running. So let's see the underlying VM for that specific pod. We can see the VM, and we can see the physical host, which here is actually yet another VM, but it plays the role of the physical host that the VM is running on. So we have information end to end: we can manage our containers that are running in the VMs, and we know where the VMs are running. I would like to show you after the demo why that's important.

And here we can see a quite interesting screen which shows the topology. I'm sorry, the screen is a bit smaller here than it's supposed to be, so let me move it a bit. This is the representation of OpenShift: we can see two nodes, one being the master and the second one running all of the containers, including the infrastructure containers. But the important part is that we can also see the VMs and the host on which the VMs are running. So we have the full picture.
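The end-to-end view just described (pod → node VM → physical host) boils down to chaining three lookups. Here is a toy sketch of that traversal with made-up names; it is not the actual ManageIQ data model.

```python
# Toy inventory: which node a pod runs on, which VM backs that node,
# and which physical host runs that VM. All names are illustrative.
POD_TO_NODE = {"python-app-1": "node1", "python-app-2": "node1"}
NODE_TO_VM = {"master": "vm-master", "node1": "vm-node1"}
VM_TO_HOST = {"vm-master": "host-a", "vm-node1": "host-a"}

def locate(pod):
    """Resolve a pod down to the physical host it ultimately runs on."""
    node = POD_TO_NODE[pod]
    vm = NODE_TO_VM[node]
    return {"pod": pod, "node": node, "vm": vm, "host": VM_TO_HOST[vm]}

print(locate("python-app-1"))
# {'pod': 'python-app-1', 'node': 'node1', 'vm': 'vm-node1', 'host': 'host-a'}
```

The point of the demo is exactly this chain: the container layer knows only the first lookup, the virtualization layer knows the rest, and ManageIQ stitches them together.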
This was the introduction, but all three of the projects have a lot of interesting features that I encourage you to explore; we don't have the time during this presentation to cover them all. So I would like to move on to the interesting part: the use cases.

I would like to start from the least interesting use case, which is nevertheless very important. When we manage infrastructure, sometimes we need to run maintenance on the hypervisors. How can we do that? We can kill the containers one by one and migrate them over to another VM running on a different host. Or we can kill the VM altogether and recreate the containers on another VM running on another host. Fine, a perfectly fine scenario, except that it causes a bit of interruption for the application that is running within the containers. From the infrastructure perspective, we could instead simply live-migrate the VM together with the containers running in it, and from the application perspective there would be no visible downtime. Containers are really fast to spawn, but that doesn't mean the applications running inside will be up fast, or will be serving the same amount of traffic immediately. So this is, as I said, one of the less interesting but important use cases: instead of killing all the containers, we can migrate the VM and move on with the maintenance.

Next use case. I briefly talked about the relation between the container and the physical host when we are running on virtualized infrastructure. Let's consider two scenarios: in one we have two replicas spread over two nodes whose VMs sit on the same physical host, and in a separate scenario we have only one node running both replicas. Is there any difference from the HA perspective for the application?
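That HA question can be made concrete by counting failure domains: the distinct physical hosts behind a set of replicas. A small sketch with hypothetical node and host names:

```python
# Which physical host backs each node VM (illustrative names).
NODE_HOST = {"node1": "host-a", "node2": "host-a", "node3": "host-b"}

def failure_domains(replica_nodes):
    """Distinct physical hosts backing the given replicas' nodes."""
    return {NODE_HOST[n] for n in replica_nodes}

# Two nodes, but one failure domain: losing host-a kills both replicas,
# exactly as if both replicas ran on a single node.
assert failure_domains(["node1", "node2"]) == {"host-a"}

# Spreading the node VMs across hosts yields two failure domains.
assert failure_domains(["node1", "node3"]) == {"host-a", "host-b"}
```

Two nodes on one host give the same failure-domain count as one node, which is the speaker's point: node count alone says nothing about HA.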
Because if we lose the host, or if we lose networking, we lose both of the replicas; it doesn't matter how many nodes there are. So it is quite important to understand the relation between the containers and the underlying VMs.

Next use case. Say we have really powerful machines and we can run many VMs on each, let's say 10. We have our services up and running, nicely packed into containers, and we replicated those containers across the board. Now we lose one of the machines with 10 VMs, which means 10 nodes where containers were running. How many services can we lose at one time? It's quite possible that losing one physical machine means that our application is down. Do we really want that?

From the infrastructure perspective we can use quite an interesting feature, which is host affinity. When we create the VMs that will become nodes for the container infrastructure to spawn containers on, we can make sure that those VMs are running on separate hosts by pinning their placement. Or maybe we have a use case where we want those VMs placed on the same host because of latency, or because of other requirements of the applications running inside the containers; then we can define a VM affinity so they will be placed on the same physical hardware. And there is quite an interesting point here as well: maybe the containers require specific devices that are available on that particular host, maybe there is a latency requirement for the storage, maybe there are graphics cards or specific capabilities of the CPU. Then we can place those VMs on specific hosts by defining host-to-VM affinity, and our containers can leverage that relationship without the container management actually knowing about it. So this is quite an interesting use case.

And another one: I already mentioned crashes a couple of times. How do we mitigate crashes of the VMs, or crashes of the physical host?
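The "lose one machine with 10 node VMs" scenario above can be sketched as a quick impact check: a service goes down when every node VM holding one of its replicas sat on the failed host. All names here are hypothetical.

```python
def services_down_after(failed_host, vm_host, replicas):
    """Services whose replicas all ran on VMs hosted by failed_host.

    vm_host maps node VM -> physical host; replicas maps
    service -> list of node VMs carrying its replicas.
    """
    dead_vms = {vm for vm, host in vm_host.items() if host == failed_host}
    return sorted(s for s, vms in replicas.items() if set(vms) <= dead_vms)

# Ten node VMs on host-a, two more on host-b.
vm_host = {f"vm{i}": "host-a" for i in range(10)}
vm_host.update({"vm10": "host-b", "vm11": "host-b"})
replicas = {
    "api": ["vm0", "vm1"],   # both replicas on host-a VMs
    "web": ["vm2", "vm10"],  # spread across hosts
}
print(services_down_after("host-a", vm_host, replicas))  # ['api']
```

Even though "api" is replicated across two nodes, both nodes die with host-a; "web" survives because its node VMs were spread. That spreading is exactly what the host affinity feature gives you.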
And bear in mind that we are in a virtualized environment and we are using containers. Thanks to the relationship that I showed in ManageIQ, we know exactly on which physical host the containers are running, even though we are running them in VMs. So by looking at the topology of our infrastructure, we can slowly migrate different VMs to different hosts to make them safer, and then we will be less worried about the HA of the applications running inside the containers. We can define the VM affinity rules, and the scheduler will migrate the VMs over to different hosts accordingly.

So we have a bit of interesting tooling to help us manage the infrastructure that runs the containers, because what we are interested in here is that the application running in the container is up. The container itself may spawn within two seconds, but maybe the application inside takes half a minute to actually be up and running. So we can leverage virtualization, and the tooling that it provides, to mitigate some of the situations that may occur in our infrastructure.

OK, so let's sum up what we learned. As I said, Kubernetes and OpenShift were created in a way to be agnostic of the infrastructure, so they don't care. But in my opinion, they should care: the relation between a running container and the physical infrastructure is important. It's fine if you are going with bare metal, but maybe that's not the optimal solution; maybe you want to virtualize, maybe you have other applications running on the same virtual infrastructure, so why not use that? Using the tooling we have, we can specify the relationship between the containers and the hosts, even though we have the VMs in the middle between those.
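The affinity rules mentioned above can be thought of as constraints the scheduler keeps true across migrations. A plain-Python sketch of an anti-affinity check ("keep these node VMs on different hosts"), with hypothetical names and no real scheduler behind it:

```python
def violates_anti_affinity(group, vm_host):
    """True if any two VMs in the anti-affinity group share a host."""
    hosts = [vm_host[vm] for vm in group]
    return len(hosts) != len(set(hosts))

# Both node VMs currently sit on host-a: the rule is violated.
vm_host = {"vm-node1": "host-a", "vm-node2": "host-a"}
assert violates_anti_affinity(["vm-node1", "vm-node2"], vm_host)

# Migrating vm-node2 to another host restores the rule; the containers
# on the node never notice the move.
vm_host["vm-node2"] = "host-b"
assert not violates_anti_affinity(["vm-node1", "vm-node2"], vm_host)
```

This is the key asymmetry of the talk: the migration happens entirely at the virtualization layer, while the container layer, and the applications inside, stay untouched.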
So, as I mentioned, we can use the VM-to-VM affinity rules and the VM-to-host affinity rules and migrate the VMs between the hosts, and from the perspective of the containers, or even of the applications running inside them, it is transparent where they are running and whether they are being migrated or not. It's really interesting how you can leverage those features.

Here is a bit of information on where to find the software. All three of the projects that I mentioned are free and open source, and I encourage you to play with them. Here are the links to oVirt, ManageIQ, and OpenShift Origin. You can play with them and create a similar infrastructure to the one I did.

OK, so let's move on to questions. Do you have any?

So the question was whether you can include physical machines into ManageIQ without having a hypervisor. The answer is that ManageIQ was designed around providers for different virtualization or cloud infrastructures, and I'm not sure whether ManageIQ by itself is able to manage bare metal hosts; I need to check that. Yes, yes, the Foreman provider is supported. So from that perspective you could spawn your nodes on those physical hosts, and then you would have a mixed infrastructure, and the cross-linking between the providers would still work.

So I encourage all of you to come to the oVirt booth. I will be there, so you can ask questions or play with the setup on my laptop and see how it works. Thank you, thank you all for listening.