Okay, now everybody, I would like to move on to my presentation about OpenStack on Kubernetes, delivered by OpenStack-Salt. First, I would like to show how you can actually run our OpenStack containers on your own laptop, for development purposes, for those of you who are used to working with this stuff; we're going to use docker-compose for that. In the second part, I'll go to our Kubernetes cluster and do a deeper dive into the things that Jakub and Michael showed before. Okay, my name is Mark Schull, I'm the Chief Network Architect at TCP Cloud, and currently I'm also on the team that is responsible for getting OpenStack into containers and running it on top of Kubernetes.

Okay, you probably saw this slide earlier, but I have to mention it again. The whole thing is based on OpenStack-Salt, and we actually did it without any big changes in our formulas; there were just minor changes. That's why we could start this project only about two weeks ago: we didn't have to change our existing deployment tools. We just used what we had and built a CI/CD pipeline to build the containers and upload them.

All right, that's where I'll start the first demonstration. I'll go to our GitHub, to the Docker repository, just to show you around. You can actually build the containers yourself just by running this command. Instead of running this command, you can also download the prebuilt images; that's faster. Then there is a file: I'm currently in my repo, and if I go into the docker-compose directory, there is an environment file. It contains environment variables that replace the values in the configuration files with these IP addresses. We use this because of Kubernetes: as you can see, something like this is Kubernetes notation, and Kubernetes automatically creates environment variables like these when you create a Kubernetes service. These environment variables are shared among all containers.
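For reference, the convention Kubernetes uses when generating these variables can be sketched in shell (this is a simplified illustration of the documented `<SERVICE>_SERVICE_HOST` naming, not actual Kubernetes code):

```shell
# For every Service, Kubernetes injects <SERVICE>_SERVICE_HOST and
# <SERVICE>_SERVICE_PORT into containers: the Service name is upper-cased
# and dashes become underscores. A sketch of that naming rule:
svc_var() {
  echo "$1" | tr 'a-z-' 'A-Z_'
}

echo "$(svc_var keystone)_SERVICE_HOST"   # KEYSTONE_SERVICE_HOST
echo "$(svc_var nova-api)_SERVICE_HOST"   # NOVA_API_SERVICE_HOST
```

This is why configuration templates can refer to services purely by name: the variable names are derived from the Service names, never from concrete IPs.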
So we can take such a variable and place it into the configuration files, which makes us IP-plan independent; I'll come back to this when I get to Kubernetes. We want to run it locally, so we need to simulate the Kubernetes environment, and we do that with this environment file: you just replace the IP addresses in this file with the IP address of your own laptop, and then you run docker-compose. We start with the core compose file, which contains the supporting services like memcached and MySQL. We just need to check whether the database has already started; as you can see, the database is already listening, so we can go and create the OpenStack services. As you can see, they are already created in the containers, so I can go, for example, to Keystone. I source my identity, and we have to wait for a while: the first run of Keystone takes some time because it is creating the tables in the database. As you can see, the user is being created right now. There is an OpenContrail compose file as well, so you can bring up OpenContrail the same way and, for example, watch what's going on.

All right, that was the first part of my presentation; I'd like to continue with the high-level view of this stuff. The first slide is about the technology we use to make it all work. I use the term underlay not just for the networking, but for the whole underlying infrastructure that OpenStack runs on: the Kubernetes cluster together with Calico for networking. On top of this Kubernetes-and-Calico cluster runs our OpenStack with OpenContrail as the SDN solution. We had to consider which networking plugin to use for Kubernetes. We could use Contrail there as well, but we asked ourselves why we would bring in the complexity and the bunch of features that Contrail provides. Because how do we deploy OpenStack today?
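A minimal docker-compose sketch of this layout might look like the following; the image names and the exact service set are assumptions for illustration, not the actual repository contents:

```yaml
version: "2"
services:
  memcached:
    image: memcached:1.4
  mysql:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: openstack   # demo value only
  keystone:
    image: example/keystone            # hypothetical image name
    env_file: .env                     # the simulated Kubernetes variables
    depends_on:
      - mysql
      - memcached
```

The `.env` file plays the role Kubernetes would play in the cluster: it supplies the `*_SERVICE_HOST`-style addresses that the entrypoints substitute into the configuration files.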
We probably have one VLAN, two VLANs, or just a simple routed infrastructure. So I think Calico suits this well: it brings all the features we need, and it's simple to deploy and simple to manage. It does plain routing; there is no overlay with Calico. It installs /32 host routes and uses BIRD to exchange the routing information, and you can extend that into your physical network. BGP is used by default, but I think OSPF or something else could be used as well, because it's BIRD. That's also why we didn't use Flannel. Flannel is simple too, but Flannel uses VXLAN encapsulation, and we didn't need to run the OpenStack control traffic through VXLAN tunnels. So for now it's Calico rather than Flannel; in the future probably both, because they are going to merge into one project.

All right, there is also a transformation from the setup you have right now. You probably do the load balancing and high availability of your OpenStack services with HAProxy and Keepalived, but you don't need that with Kubernetes, because Kubernetes gives you high availability by design. That's because of the Kubernetes Service objects, which are connected to pods through their labels, and every Kubernetes node balances traffic across all the pods in the cluster. So traffic can go to any node of the Kubernetes cluster, and it gets balanced to all the members. With Calico this uses iptables, but I think there is also a possibility of user-space balancing.

All right, in the previous slide you saw that I have Keystone pods, not a Keystone container, because Kubernetes brings another resource: the smallest unit is not a container but a pod, which is a bunch of containers that share, for example, the network namespace and things like that.
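As a sketch of the pod concept just described (the names and images here are hypothetical, not from the actual deployment), a single pod manifest can group several containers that share one network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: controller-example      # hypothetical name
spec:
  containers:                   # these containers share one IP and
  - name: api                   # one network namespace
    image: example/api
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```

All containers in the pod can reach each other over `localhost` and are scheduled onto the same node as one unit.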
So, for example, when you have an OpenContrail controller, you have something like six different Docker containers that share the network namespace and so on. Okay, and this is the infrastructure we used for the demo. We have one Salt master, which provides the configuration management for the Kubernetes cluster, as well as the manifests for OpenStack. Then we have the Kubernetes master, which is currently running as a single VM for the demo. For a production environment there is no problem deploying the Kubernetes master as a cluster: it's just about clustering the etcd key-value store, and that's it. Then we have three physical nodes for the OpenStack and OpenContrail controllers, and two Kubernetes nodes for the OpenStack and OpenContrail computes. I think I actually already covered the previous slide, so we can go to the next demo.

So, now I'm on the Salt master, which brings up the infrastructure. I'll go and access our Kubernetes master and see what I have here. I have the whole OpenStack cluster up and running in high availability; it's OpenStack Kilo. As you can see on the right side, there is the node each service is running on, and Kubernetes by design, when you scale a service, tries to put the replicas on different nodes. So we have high availability on the physical layer as well: it's not on the same hypervisor; it tries to schedule them on different hosts. If I go and look at Keystone, for example, I can get more information about the Keystone pod that is running: the node where it's running, the ports the pod is listening on, as well as the mounted volumes. As you can see, for Keystone and for Glance we are using GlusterFS, mounted from the host system. In our normal deployments with VMs we run Keystone with Fernet keys, so we deliver a Gluster volume for the Keystone Fernet keys, as well as Gluster volumes for Glance images.
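A pod spec mounts such a Gluster volume roughly like this (a sketch; the names and paths are assumptions, but `glusterfs`, `endpoints`, and `path` are the standard fields of the Kubernetes GlusterFS volume plugin):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: keystone-example
spec:
  containers:
  - name: keystone
    image: example/keystone              # hypothetical image
    volumeMounts:
    - name: keystone-fernet-keys
      mountPath: /etc/keystone/fernet-keys
  volumes:
  - name: keystone-fernet-keys
    glusterfs:
      endpoints: glusterfs-cluster       # Endpoints object listing the Gluster servers
      path: keystone-keys                # Gluster volume name
      readOnly: false
```

Because the volume lives in GlusterFS rather than on any single node, every replica of the pod sees the same Fernet keys regardless of where it is scheduled.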
So, for example, if I go to the first node, which is the first node for the OpenStack controllers, I can see that I have the Glance volume and the Keystone keys volume. We didn't do that by hand: we reused the Gluster setup for Glance from OpenStack-Salt, so it was automated as well. We just reused what we had. As you can see, Kubernetes can manage the Gluster volumes and mount them into the containers, so the volume is mounted into the containers on all three nodes. If I go to the service, or rather to the deployment, it's really simple: you just define the volumes, the name, the type of the volume, and Kubernetes knows how to manage the Gluster mount.

So, now I'd like to go to the services, which do the load balancing for our OpenStack services. For example, if I look at our Keystone service to get more information about it, you can see that it has three endpoints. These endpoints are the Keystone pods, and the pods get connected to the service through this selector, which uses the key-value pairs in the metadata to connect the pods to the service. So when you start a new pod with this key-value label, it gets added to the endpoints immediately; you don't need to do anything.

I'd like to show you the networking as well. As you can see, we have Calico running everywhere, with BGP sessions established between the nodes. What actually happens is that each node gets a slice of the predefined IP range, for example a /26, and on the node where the containers are running there is a /32 route for each container. This is repeated across all the Kubernetes nodes, and when you want to extend it into your hardware network, you just set up BGP peering with the Calico nodes and you can propagate the routes into your legacy network as well.

Okay, so now I'm just checking whether my cluster is still running. I'll try Keystone. Glance is working. I'll try Neutron. Okay, Nova as well. Because Jakub and Michael actually didn't boot instances or anything like that; they just created a network.
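The selector mechanism shown a moment ago can be sketched as a Service manifest like this (the label name and ports are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: keystone
spec:
  selector:
    app: keystone        # any pod carrying this label becomes an endpoint
  ports:
  - name: public
    port: 5000
  - name: admin
    port: 35357
```

Scaling the Keystone deployment up or down changes the endpoint list automatically; nothing like an HAProxy backend list has to be edited.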
So now we can try whether OpenStack is working correctly. I think that I have already... okay, we can upload the image. Okay, so we have an image and we have a network; we can try to boot an instance. All right, I just need to replace the net ID. All right, as you can see, my instance is running. I'll boot a second instance as well, on the second node. We also have libvirt and qemu-kvm running in Kubernetes, so I can, for example, check in libvirt whether the instance is running, whether there is a qemu process with that ID. You see, my instance is running.

So now I'm going to show one more OpenContrail feature, which is that when you boot an instance, it actually creates a link-local address on the host system. So there is a way to connect to your instance from the host system. So I'll just SSH to that address. Okay, and the password. As you can see, I'm already in my instance. The address is exactly the one I expected, and I can try to ping the instance on the other node. And as you can see, it's working perfectly. So our OpenStack on Kubernetes works.

So, what time do we have? I have about five minutes, so I'll probably leave the rest for questions. Okay, no questions about OpenStack on Kubernetes? Yeah, of course you can. "Can we still, for example, deploy it to several data centers?" Sorry? "Can we deploy it to several data centers, for example, or to different providers?" I'm not sure I fully understood your question. Do you mean whether I can, for example... "Whether you can, for example, use this deployment as the basis for running a production environment. For people who provide cloud services." Yes. Yeah, I think so, probably.
There are a few considerations: a few services that are much harder to cluster or to scale, for example RabbitMQ, because its members need to be connected and updated at runtime, and also the Galera cluster. So, for example, RabbitMQ and Galera; I think every database has a problem with scaling. But when we are talking about the OpenStack services themselves, I can scale one to, say, 20 instances right now and it will be running within minutes.

And that leaves me with the one thing I forgot to tell you: you can actually take the Kubernetes manifests we have on our Kubernetes master, take them to another data center anywhere, and run them on your Kubernetes master without any change, because, as I said, they are IP-plan independent. If I go to the definition of, for example, the Keystone deployment, as you can see, there is no IP address, nothing that actually points to a concrete service. What happens is that all the services communicate through the Kubernetes Services, and the Kubernetes Services are created before these deployments. You just apply this directory of Service definitions, creating the Services puts the environment variables into Kubernetes, and when you start a deployment, the placeholders in the configuration files are replaced by the addresses of the Services. So if you go to another data center, you run these manifests as they are and you will get your OpenStack cluster up and running.

Okay, so I think I'm done. Any more questions? "I have a question. Can you explain what changes you had to make to your OpenStack-Salt deployment to be able to run it in containers instead of virtual machines? Was it hard, and what changes were necessary?" You mean the changes in the formulas? We just added entrypoints for the Docker images. When you start the images, they run an entrypoint which executes the Salt formula and replaces the variables.
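The entrypoint idea can be sketched like this. It's a minimal stand-in: the template contents, placeholder syntax, and file names are assumptions, and the real entrypoints run Salt to render the configuration rather than plain `sed`:

```shell
#!/bin/sh
set -e

# Stand-in for a config template shipped in the image (contents are made up):
printf '[DEFAULT]\nauth_host = {{ keystone_host }}\n' > keystone.conf.template

# In Kubernetes this variable is injected automatically for the "keystone"
# Service; here we set a demo value by hand:
KEYSTONE_SERVICE_HOST=10.254.0.10

# Render the template with the service address, then (in a real image)
# exec the service process as the container's main process:
sed "s|{{ keystone_host }}|${KEYSTONE_SERVICE_HOST}|g" \
    keystone.conf.template > keystone.conf
cat keystone.conf
# exec keystone-wsgi-public   # hypothetical final step in a real entrypoint
```

Because the rendering happens at container start, the same image works in any cluster: only the injected Service variables change.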
Okay, Jakub is going to find them right now. Those are the entrypoints we just added to every formula; I think that's the only change we made. Okay, so I think we're done. All right, if anyone has more questions, I'll be here for the rest of the day, so you're welcome to stop by and ask.