Hello everybody. Welcome to the hands-on session on Magnum. My name is Saulius. My name is Brian. We're from Ericsson; we're working on Ericsson Cloud Manager. Now, there's some technical trickery with this hands-on session, because we have two layers of virtualization in your VMs. It's going to be very slow, so we had to do some things there. The current setup you have is tuned for six gigabytes of memory usage on the VM, so if you have eight gigs on your laptop, it should still work. For those who have more memory, like 16 gigs, I'll get to that later — we'll have to modify a few files. So be ready: it's going to be slow, and we'll need to go fast on certain steps. Some of them will take 15 or 20 minutes to get results, and during that time I'll go through the slides. So quickly, what we're going to do: you all have to get Vagrant and VirtualBox installed. I sent an email to everybody beforehand to get ready for that, but you can still install it now. You all got the USB sticks, and maybe you downloaded the Vagrant box. We're going to spin up the VM from the Vagrant box, which has DevStack in it; we're going to set up DevStack, and then we're going to go on with the Magnum stuff. Before that, can you raise your hands: who has used DevStack before? Okay. Who has used Docker? Okay, great. Who has tried Magnum? Okay, that's good. It seems you're all more or less familiar with DevStack, so we're not going to go into that in detail. Okay, so let's start part one. Regarding the memory: those who have 16 gigs of RAM on your laptops, go ahead and edit the Vagrantfile — that's the one you copied into your folders. There are two things you want to increase there. One is the vb.memory; it's set to 6000 now, so put that to 9000 at least. And then, depending on how many cores you have on your laptop, set the vCPUs accordingly.
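The two settings we're talking about live in the Vagrantfile. A minimal sketch of the relevant section, assuming the usual VirtualBox provider block — the exact contents of the copy on your USB stick may differ:

```ruby
# Sketch of the relevant Vagrantfile section; the box name below is a
# placeholder, and the numbers are the ones discussed in the session.
Vagrant.configure("2") do |config|
  config.vm.box = "devstack-magnum"   # hypothetical box name

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 9000   # default in the lab is 6000; raise only with 16 GB of RAM
    vb.cpus   = 2      # raise only if you also raised the memory
  end
end
```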
The thing is, though, if you don't have more than 8 gigabytes of RAM, then do not increase the CPUs even if you have a quad-core, because it will consume more RAM and you'll be in trouble. And if you've already loaded the box with 6 even though you have more, it's fine — it's just going to be a little bit slower. You don't have to destroy and recreate it if you don't want to. Yeah, once you're done with that, type vagrant up while inside that folder, and it should start up the VM. Once it's there, vagrant ssh will SSH into the VM for you. We have a question: if you've installed Vagrant and VirtualBox and you're on Windows, how do you actually run Vagrant? Okay, yeah. Windows — how many of you have Windows? Okay, did you get my email? No? I don't remember. In the session description, I believe I put that it's best to install Git: if you get the Git client, you'll get Git Bash with it, and with Git Bash you can do all the SSH and all that stuff. It will be the same as on Linux. So maybe Brian can show them how to install Git with Git Bash. I mean, if you have something like Cygwin or any kind of... Will it work from the DOS prompt, if it's in the path, perhaps? It will, but for SSH you need an SSH client. Yeah, PuTTY is a good thing to have, or any other SSH client. If you have it in your path, you can just type vagrant ssh and it will work; if not, then you can use a separate client. So who has the VM already running? Okay, the majority already. Good. Those who are inside the VM, just go ahead, cd into devstack and run stack.sh. This should take around 10 minutes to bring up DevStack. Yeah, if anybody has problems, just raise your hand. Sorry? You have more memory, right? Increase this to 9000. As soon as your VM is loaded, Vagrant automatically maps a port to your host, so you can SSH to localhost on port 2222, and the username and password is vagrant. You don't need to get into VirtualBox; you don't need to touch it.
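The bring-up steps just described, as one sequence (the folder name is a placeholder for wherever you copied the Vagrantfile):

```shell
cd magnum-lab            # the folder containing the Vagrantfile (name is a placeholder)
vagrant up               # import and boot the VM from the box
vagrant ssh              # or: ssh -p 2222 vagrant@localhost  (password: vagrant)

# Inside the VM:
cd devstack
./stack.sh               # brings up DevStack; takes around 10 minutes
```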
It's all through the command line with Vagrant. If you're on Windows, get Git Bash; that's the best way to do it. Yeah, so we had one person who didn't have VT-x turned on in their BIOS. If you don't have that, you need to reboot your machine, turn on VT-x in your BIOS, and then try again. I mean, so far we have the slides in Google Docs, but they're not shared where you can see them, I don't think. I'll show the link, actually; I should have put it on here. So if you've just come recently, what I said before is: if you don't have the box downloaded, don't try to download it now — we have the USB keys. I'm having some problems with the internet. We have all the instructions on the Etherpad. I'm trying to bring up the link somehow, but you can just follow me: it's etherpad.openstack.org, then slash p slash magnum hands-on, one word, dash lab. Sorry? Slash magnum hands-on, one word, dash lab. Here's the link to the instructions; they're all on one page there. For me, too — can't open it. Well, okay, then we'll just follow the slides as they are, and if you get that page open, fine; otherwise just try to follow the slides. Yeah, do you still have a link to the Google Drive? So, Vagrant uses the VirtualBox API, so you may or may not see the actual VM like you'd expect in VirtualBox, but that doesn't mean it's not running. No, but they say Etherpad doesn't open for them either. Can you go to the previous slide? Who has DevStack already done? Okay, so... It's kind of unfortunate that Etherpad decided to go down right now. He's going to copy all the instructions to a text file and put it on the Google share that we have, so if you want to go there... Somebody says that Etherpad loaded now, so maybe it's working. If you could throw the link up there again. So, everything we're going to do is going to be inside the DevStack VM, so you don't need to be able to...
Other than the SSH, obviously, you don't need to be able to ping it or anything, but if you're connected to a VPN or something, then I would recommend disconnecting, because some of the stuff like Kubernetes needs to be able to contact the Internet, and it's difficult with proxies and things like that. Okay, so I put the slides on the Google Drive, so you should be able to reach them through the link that you had originally. My Etherpad is still loading. But the slides have everything that you need if you want to go faster; the link is in the session description. Okay, I think I'm going to continue now. All right, so those who have DevStack running now, just go on and run the Magnum setup script, which will make some changes on OpenStack: it will decrease the flavor sizes, register a CoreOS image that we're going to use for the cluster, and create a security keypair for you. I'll go deeper into all the stuff that's needed later; I just want to get through the slow stuff faster now. So it was pointed out that the script magnum-setup.sh is actually called magnum-setup. Okay, sorry about that. That magnum-setup script should be pretty fast. Once you're done with that, we need to make sure that the VMs you're going to launch inside OpenStack will have an Internet connection, so run this iptables command inside the VM, after stack.sh is done. Yeah, you don't need to jump ahead; I'll say if you need to jump somewhere else, so just stay there. Then we need to source the OpenStack credentials: source openrc admin admin. Then, we had some problems with DNS, so the best thing to do is check the DNS IP that you get in your /etc/resolv.conf. Just type that command and you'll see the IP in there. Take note of that one; we'll need it in the next command. The next command will be a very long one. Now, like I said, we're running two layers of virtualization, and the best approach was to have VirtualBox for all of you, but it doesn't support nested virtualization.
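The setup steps just described, as one sketch — the setup script name is whichever variant is in your home folder, and the outbound interface name in the iptables rule is an assumption (check yours with `ip route`):

```shell
# Run inside the DevStack VM, after stack.sh has finished.
./magnum-setup.sh                  # smaller flavors, CoreOS image, keypair
# Let the nested OpenStack VMs reach the internet:
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
source openrc admin admin          # OpenStack credentials
# Note the DNS server the VM itself uses; the next (long) command needs it:
grep nameserver /etc/resolv.conf
```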
The second level, when you run it, is fully emulated, so it's very slow; that's why it's taking so long. It should bring up a bay in about 20 minutes, and while we're doing that, I'll go through Magnum and Kubernetes and give you more of the background and context of what we just did. So once you have the IP, just go ahead and run this command. I guess it's best to just copy-paste it — it's a pretty long one. This will actually create a cluster that... so it's going to have CoreOS as a base, and it will have a Kubernetes cluster defined there: one master and one minion, one node. And this is kind of the main thing with Magnum: getting those bays automatically up and running. In Magnum, it is called a bay. If you're still trying to find the slides, you need to go to the summit site, the list of talks, find our talk, and look at the description; the link is in there. Oh, sorry, actually, I got a bit confused. This big command — that's a bay model. That's just the definition of the bay. The one after that is where the bay gets spun up. Sorry. Who has started creating a bay now? Anybody having problems creating a bay model? It's too long, or... I'll explain it; I just want to go through it fast now, because it will take a lot of time to create, and then I'll get to Magnum and explain what you're doing now. Okay. If the last command fails, retype it — some characters get wrongly interpreted when you paste. We'll create another one. Is there anybody who hasn't gotten their box imported, at least? Can I ask one more time: raise your hands, who has started the bay creation? Okay, so it's better now. Yeah, I hope your laptop fans are working well, because the CPU is going to go to 100%. So don't blame me if it dies. So I'm going to go on now.
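Those two long commands look roughly like this — the flag spellings follow the Liberty-era Magnum CLI and are an assumption here; use the exact line from the instructions, substituting the DNS IP you noted:

```shell
# Bay model: the definition/template of the cluster
# (CoreOS image, keypair, flavor, Kubernetes as the COE).
magnum baymodel-create --name k8sbaymodel \
  --image-id coreos \
  --keypair-id testkey \
  --external-network-id public \
  --dns-nameserver <DNS_IP> \
  --flavor-id m1.small \
  --coe kubernetes

# Bay: the actual cluster spun up from that model (one master, one minion).
magnum bay-create --name demo --baymodel k8sbaymodel --node-count 1
```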
Yeah, everybody who's still not there, be sure that you have the slides from the Google Drive so you can follow along, because I need to switch to some of the background slides. All right, so this is a diagram of the whole stack that you're building up now. On the bottom we have your laptops. We have the VirtualBox hypervisor. We have Vagrant, which controls VirtualBox to spin up your VM from the box. Then in your VM we have Ubuntu running, and DevStack spins up OpenStack with a certain configuration. Like I mentioned, that's already the second layer of virtualization, so all the OpenStack VMs are QEMU-emulated only, and this is where the slowness comes from. You do have Magnum in there, installed as a plugin in DevStack, so you can use the Magnum commands. What Magnum does, in general, is abstract communication with Docker or container orchestration engines like Kubernetes. In this session we're just touching Kubernetes; it now supports Swarm as well, and Mesos. So those two VMs that you see, the CoreOS master VM and minion VM, those are VMs which have Kubernetes installed in them — all of that is done by Magnum. Once that is done, you have a Kubernetes cluster which can be expanded, scaled up or scaled down. And once you have that, this is where you actually start deploying your Kubernetes apps. Okay? I'm going to go through a little bit of the Magnum and Kubernetes concepts. Kubernetes is an open-source platform for container application deployment, scaling, operations and stuff like that — orchestration. That's what in Magnum we call a container orchestration engine. In the case of OpenStack, it sits on the VMs spawned by OpenStack. It has certain concepts. One of the concepts is a pod: a pod is one or more containers that are co-located on one single host. You define pods through YAML files; we're going to use those later to launch an example application.
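As an illustration of what such a YAML pod definition looks like — a generic sketch in the style of the Kubernetes v1 API of that era, not the exact file from the lab's app folder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-master          # hypothetical name
  labels:
    app: web-master         # labels are how services/controllers select pods
spec:
  containers:
    - name: web
      image: nginx          # stand-in image; one or more containers per pod
      ports:
        - containerPort: 80
```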
It also has the concept of a service. A service is a logical component that exposes certain services. For example, if you have a bunch of containers running and only a few of them provide a certain service, like a database: you define a logical service named "database", saying which ports and which containers provide this service. Then other containers can use this logical name to discover those services, so you don't need to hard-code IPs and stuff like that. Then there's the replication controller concept. That's basically, again, a component where you can just say "I want five instances of this type of container, always." It will make sure there are X instances out there, and if a container dies, or a node dies with a bunch of containers, it will respawn them — it just makes sure that your service is always running. Okay, so that's a quick intro to Kubernetes. Now, Magnum, like I said, abstracts different container orchestration engines. It started off with Kubernetes — that's why we're doing Kubernetes today; later Swarm was added, and now Mesos is in there. It utilizes the Docker API and the Kubernetes API; that's what it uses for communicating with those orchestration engines. You're going to use a bunch of Magnum commands later on — actually, I could go through these first. Magnum also has certain concepts, most of them taken from Kubernetes. You can see here a service, or a replication controller — the same things, just abstracted in Magnum. But we also have bays and bay models. The bay model is what we just created before: it's just the definition of the cluster, like which VM image to use — in our case we're using CoreOS; you could see that in the command. You also say what kind of flavor to use — you know, how much RAM and how many CPUs it will have.
It takes a security keypair, which will be injected into all the VMs when it spawns them, so you can connect to them — and similar stuff. One of the important ones is that you tell it which orchestration engine to use: if we were using Swarm, we would say use Swarm, and it would bring up a Swarm cluster. And then the bay is just basically a cluster of nodes that can host containers on them. A pod is the same thing as in Kubernetes — actually, when you're creating a pod, you're just giving a path to the Kubernetes pod definition. Yeah, and the container — I guess you already know what that is. Internally, Magnum uses a bunch of OpenStack services. I guess the most important one is Heat: all these clusters are brought up by Heat, so inside there are Heat templates, and the way it works today, you can modify those templates however you want. That is the main thing there. And, of course, Keystone; Nova; Neutron for networking; Glance; and Cinder. Okay, so who has a bay created? Okay, you're sure? If you do, you have a very fast machine — it takes around 20 minutes with the original setup with one CPU. So the way that you know is that the bay shows complete rather than in progress? Yeah. Did you source the... So I just got a question about what's actually happening when a bay is being created. Like I said, all those things are defined in Heat templates: it just automates spinning up the VMs for you, installing Kubernetes on them, doing all kinds of plumbing to get the cluster up and running. Later on it saves all the metadata — we'll go through it; you'll see the IP addresses of the master and of the nodes, so you can connect to them. And that's pretty much it: it's just automation of bringing up the cluster. What about keystone user-list or something? Yes, the Heat templates — they are all inside Magnum.
So if you want to look at them, they're under /opt/stack/magnum, and I believe then a templates directory. Yeah, that's weird — even this is not working. My DevStack seems unhappy. I mean, all this stuff too, it's like a... oh, my DevStack. Yeah, so here's a Magnum CLI command example. We went through baymodel-create and bay-create; now we're going to do the Kubernetes service create — sorry, coe-service-create — and rc, that's the replication controller. We're going to do all that stuff. So after you get the bay running, hopefully, here are a couple of commands to explore it. bay-list will give you a list of the bays you have. bay-show with the bay name will give you the details of the bay, in which you'll see the master and node IPs. Once you see those, you can SSH into those VMs. Once you're in there you can, for example, type docker ps, and you'll see a list of containers already running there, because Kubernetes itself is containerized — it runs from containers. Anybody else having problems? If anybody accidentally ran the bay creation twice or something like that, delete one of them, because you're going to run out of RAM and everything will fail. One thing to note is that even when the bay is complete — when things look like they're complete — they may not actually be complete. If it's behaving strangely, give it a little more time, because some of the Kubernetes stuff is still coming up. I don't know if you said this, but the bay consists of two VMs, so in the nova list you should see two active VMs. Yeah, in case you're wondering: if you're running this on bare metal, there are no problems like that — the clusters come up pretty fast and the containers come up very fast. There's just too much stuff we're trying to emulate now. If you're done with this part, don't rush too fast: like Brian said, you might get errors because Kubernetes is not up yet.
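To check on the bay, the commands are `magnum bay-list` and `magnum bay-show <bay-name>`. As a tiny illustration of pulling the status field out of the table-style CLI output — the sample row below is made up, not real output, and real output has more columns:

```shell
# A made-up row in the style of the tabular `magnum bay-list` output.
cat > /tmp/bay-list.sample <<'EOF'
| 3bd9e12a | demo | 1 | CREATE_IN_PROGRESS |
EOF

# Split on '|' and pull the status column, stripping padding spaces;
# you'd wait until this reads CREATE_COMPLETE before moving on.
STATUS=$(awk -F'|' '/ demo / {gsub(/ /, "", $5); print $5}' /tmp/bay-list.sample)
echo "$STATUS"   # -> CREATE_IN_PROGRESS
```

On the VM you would pipe `magnum bay-list` itself into the awk instead of the sample file.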
It's still coming up. Another thing — we don't want you to get a bad impression of Magnum and all this: it's not normally this slow. The reason it's so slow, like Saulius was explaining, is that it's multiple levels of virtualization. If you had a machine with OpenStack installed directly and you did the same thing, even if you were still in one VM, it would be much faster. And in production, normally, you have separate compute hosts and it would be much, much faster. I left out one thing in the instructions: all this Kubernetes client stuff is in the app folder in the home folder, so you need to get out of the devstack folder. Okay, so again, who has a bay running? Okay, growing. So we're slowly going to proceed. I just wanted to show here that once the bay is running, you can use Magnum commands to deploy your Kubernetes apps, but you can also interface with Kubernetes directly. If you go to the app folder, do magnum bay-show demo and grab the master address, and then run the kubectl setup with your master address, it will configure the Kubernetes client for you. kubectl, that's the Kubernetes client binary, and from then on it will run commands towards the Kubernetes controller inside the VMs. Hello, sir, I have a question. So the question is: do you always need to deploy these containers, these bays, onto VMs, or can you do that on bare metal? Well, I think Magnum is working on Ironic support, where you can use Ironic to provision bare-metal hosts as compute for OpenStack, but I don't think that's very mature yet, so it doesn't really work well. It's definitely coming, though, and once that's there and working fine, then sure, you can provision bare-metal machines, and the containers will be placed directly on bare metal. So this part is just to show you more of the context of what you can do. And now we're going to get to the last part, which is deploying the Kube app.
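The direct-to-Kubernetes path, roughly — port 8080 as the unsecured Kubernetes API port is an assumption from that era's defaults:

```shell
cd ~/app
magnum bay-show demo                       # grab the master address from the output
# Point the kubectl client binary straight at the Kubernetes API on the master:
kubectl -s http://<MASTER_IP>:8080 get pods
```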
So at this point we'll have the Kubernetes cluster running, and we've created a very simple Kube app for you, to see what it looks like. At this point it's much less about Magnum and more about Kubernetes, but we still use the Magnum commands to deploy it. All right, so in the app folder you have a bunch of YAML files. The first command will deploy a master pod. The master pod is just a very tiny web server that the worker containers will be posting to, and that web server shows a table of the containers that posted to it, with timestamps. This way we can see the cluster of your application. It's just a made-up example — of course, your real apps would be completely different — but it shows how you can use Magnum to deploy these Kubernetes applications and then control them. Because we're limited on resources, we're not going to do any scaling; creating another node would probably kill that VM. But Magnum provides scaling functionality, so you could scale the bay up and down: now we have just one node in it, and you could just type a command to update the node count. It would spawn another one, and then Kubernetes would take care of utilizing it for your Docker containers later on. So like I said, the first command creates a master pod. It will take a while, around five minutes, to come up properly. Then you can use two commands: you can use a Magnum command, pod-show, which will show you the state of that pod until it gets to active; or you can use kubectl directly and do get pods, and you'll see your pod there. The next command creates a service. The service will expose this master as a service, so that the worker pod we're going to deploy will be able to discover the master — we don't need to tell the other pod where its master is. The service, like I said, is a logical component.
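With placeholder file names for the YAML files you'll find in the app folder, the deploy steps look roughly like this — the flag spellings follow the era's Magnum CLI and are an assumption:

```shell
# Deploy the master pod into the bay named "demo":
magnum pod-create --manifest ./master-pod.yaml --bay demo
magnum pod-show <POD_UUID>        # poll until the pod reaches its active state

# Or ask Kubernetes directly:
kubectl get pods

# Expose the master as a service so the workers can discover it:
magnum coe-service-create --manifest ./master-service.yaml --bay demo
```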
It starts up fast, and it takes care of all the port forwarding and plumbing on the cluster VMs so that the pods can access it. There's a typo in magnum coe-service-list — there's a missing dash. I'm sure you'll figure it out if you try to run it. What is the service doing? You didn't catch that. So I just got a question: what's the application that we're running? Like I said, it's a very dummy application. It consists of the master and the worker. The master is a simple web server which accepts requests from the workers and just shows a table of the registered workers. Once we do the port forwarding, we'll get to the web page and see the list of the workers registered there. That's all it is. It just shows how service exposure works and how the containers can discover the services and work together. The slides seem to be an old version — like this slide, in the shared ones. So we updated that; I just downloaded them from... really? You know, we updated the... It's nodePort. Right — with a capital N and a capital P. I'm going to answer a question that I keep getting: when you're looking for the minion IP — if you do magnum bay-show demo, it will list the information for the bay, and you'll see master addresses and node addresses, or something like that; I forget what it is exactly. It's the second one; it probably ends in 136. That's the minion IP that you should use. So, for those who got to the last part, just to explain what's happening there and why we're doing this port forwarding: when the cluster is running and we expose the service, our master web server is available, but only within the cluster. What I wanted is for you to be able to open a browser on your laptop, so we just need to add this additional plumbing, which port forwards...
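One way to sketch the second hop of that plumbing — the mechanism (an SSH tunnel) is an assumption here, and the ports and addresses are placeholders; use whatever the lab instructions give you:

```shell
# From inside the DevStack VM: tunnel a local port to the service's
# NodePort on the minion (8000 and 30000 are stand-in port numbers).
ssh -N -L 0.0.0.0:8000:127.0.0.1:30000 core@<MINION_IP>
# Vagrant already forwards a host port into this VM, so after this,
# a browser on the laptop can reach the master web server.
```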
We actually already have, through Vagrant, port forwarding from your laptop to the OpenStack VM, and then this one does port forwarding from your OpenStack VM to the Kubernetes cluster — so we need double mappings there. That's just because we're running in VirtualBox. Everybody copy? Who has got the browser thing working? Wow, that's good — I didn't expect that. Yeah, sorry. Do you contribute to... Any luck for everybody else as well? So, the way Kubernetes works with services: when we create a Kubernetes service, and after that we create pods, all those pods get environment variables injected, and there is a naming convention for them. So, based on the name of the service your pod is interested in, it can figure out the IP. That's why our worker is just a bash script that reads the environment variable and keeps posting to it. Yeah, just env if you SSH to the minion — type env and you'll see a lot of stuff in there. It's in the VM, in the minion, in your node. So if you do a nova list, find your minion, just ssh core@ that minion IP and type env, you'll get the whole list of environment variables, and you'll see the service in there. That's the way to bind containers together without hard-coding IPs and stuff — then they can all just discover each other. I'm going to put this hat in the back: if you don't want the USB stick, just throw it in there, but you're welcome to keep it if you want to try again later or something. Does anybody have some part — maybe you got lost completely in this whole thing — that you want me to repeat? For those who haven't used Docker and Kubernetes, it can be overwhelming, I think, to put all these pieces together... Sorry, yes, you're correct — you can get into the container that way as well, yeah. I'm sure you guys enjoyed it. I know it's quite a challenge to get through all that — it's a lot of stuff. Thank you. Thank you. It was also a challenge to put DevStack in six gigs; we had to cut a lot of workers.