That's some kind of backup solution for one colleague here who doesn't run Linux. I have a few slots available to just connect to my machine and try out the same stuff. The real question is whether the network here will allow such a connection. We will need a system with libvirt inside, which will be running a couple of CentOS systems, which will be fun. I believe some systems don't like that kind of nested setup. So let's see if at least some alternative will work. Okay, so for other people coming in who would like to join, here are some flash drives with the files to copy; who wants some data to copy? And then I would like to try one thing: testing whether we can use a workaround for those where it doesn't work for some other reason. Okay, so the alternative approach doesn't look like it works, which we can at least blame on the network. Okay then. Someone says they can download the virtual image later while we're doing it, so they can manage when to stop; yeah, there are some limitations to that approach, but you can try it out and we will find some alternative.
Okay, let's maybe stop with the side chatting and get a proper start, as I was walking around here and there was quite a lot of chit-chatting and so on. So, good morning everyone. Thank you for waking up so early. Some of you I saw at the party, some of you not, but you came for the morning workshop about creating high availability clusters with Ansible. So, who knows Ansible? Hands up. Great. Whoever has configured some high availability cluster, hands up. Good, still good. Okay. We will be doing a lot of practical parts, meaning I would like to show you how to create such a high availability cluster here. We've got some flash drives running around here; if someone needs one more, I still have one here, lonely. We will need some Linux system, ideally with libvirt. We will need the paper that you have on your desk. It has two sides, just so you know: one contains some instructions, and the other side contains what we will actually be doing, in a cryptic language called natural language, I would call it. Sentences with subjects, objects, whatever else, shaped a little bit into a form that makes it easier to rewrite in Ansible.
And the most important part: my name is Ondrej Famera, so you know my name. A little bit of introduction: I work at Red Hat as a support engineer in the clustering team, so I believe it's highly relevant for me to present something with clusters. And I will show you why I use this approach for doing this kind of stuff, as I find it really fast and useful. So, a little summary with a picture of what we will actually do today. We will build a two-node Pacemaker cluster, a two-node highly available Pacemaker cluster with some cluster resources on it. As in the picture, which is taken from the documentation, we'll be setting up a highly available Apache server running on some IP address, serving a nice web page, that can survive the failure of one of the nodes so the user will still see the web page. A fairly simple thing, and this will be our objective. One important note before we dive deeper: if someone still doesn't have a flash drive and hasn't copied the files, ask for one. What is a highly available, or high availability, cluster? Or what is the difference between a high availability cluster and a zero downtime cluster? Can anyone tell? Who thinks they'd be able to tell? Hands up. One person, okay.
So, in short, a high availability cluster is trying to achieve nearly 100% availability of resources. And the important part is trying to achieve, not achieving. So we must keep in mind that when we are running something highly available, it doesn't mean it runs all the time. It means it runs most of the time. The objective of the cluster software is to make it run as much as possible, ideally close to 100%, which is not achievable in most scenarios, but it could be; imagine nothing goes wrong for a year, for example, which is very rare. So it is expected that there will be some downtime. But the objective of the cluster is to minimize this downtime and let the administrators or operators of the clusters not have to take care of it; let the software take care of problems and solve them.
So, we will try to do some actual work, as was introduced in the beginning. We will need a Linux system with libvirt, a pair of virtual machines, and some additional storage. And I will be going through it with you, showing off all the stuff, how we create it and how we go forward through it and start working on the cluster part. In between, there will be some breaks for introducing some cluster concepts which are relevant to the parts we'll be doing. I will mostly be using the breaks while something is running or something is stuck, as it might happen that some stuff just takes some time. Most of the stuff that is prepared here should be able to run without an Internet connection. So, if you get the files onto your computer, it should work. However, I can't promise that nothing changed overnight, and something might just require downloading a little bit of data. But most of the stuff should be doable without an Internet connection.
So, to what we need, and a nice table with the stuff: we'll need two virtual machines. For those that already copied the files, you've got a directory named vm, in which you have two files. One of them is the file for defining the virtual machine in libvirt; it's the small one. The second one is the disk drive with a pre-installed system. That pre-installed system is CentOS 7.3 with a few packages already installed. For those curious which packages they are, they are mostly the ones that we'll be using today, and information on the exact steps taken to build it will be published somewhere on my web page. So, a fairly minimal one. Yes? You don't need CentOS as the host, right? No. I just need to remind you that you need to extract it. As a host, we need something that can run libvirt, plus one additional thing that can run fence_virtd. Although this might not be strictly needed, it's the proper way to do one part of the clustering that I will be talking about a bit later. It might just not be named the same way on your host: fence-virtd or fence_virtd, libvirt and the other stuff. Obviously, we will need Ansible, a pretty recent one, or at least something around version 2.0. 1.9 should also do the trick in most cases, but I haven't tested it with that one; still, it might work. And you will need the paper that is lying around here. So, let's get stuff rolling. I will be trying to do the same stuff as shown here together with you, so you will see what is happening. If someone came just to watch, yeah, you will see me doing the same stuff as all the others will be doing here.
So, don't worry if you don't do it yourself; you will at least have an interactive look at how the stuff works, or, if I do something wrong, at how it breaks. In any case, if you have questions, feel free to ask or interrupt me. I expect this to be quite interactive, in the sense that we will be setting something up, something gets broken, and we might just fix it. So, yeah? If you got the image, the big file, there is also a checksum next to it, so if you have a suspicion that something is really wrong there, that is how you check. Okay. And if you suspect corruption, bring the flash drive here; I've got two corrupted ones so far, so we are doing quite well, and I should have one more spare here.
So, let's get down to the terminal. And I will have a tough decision for you to make: is this white screen looking nice to you? Or the normal black one with white text, can everyone see that? Or is it too small? Let's give you the opportunity to change it; let's try it like this. Okay. Yeah, I know it's like choosing between, yeah... But don't mind the blue one, because the blue one will not be there that much. Let's try a vote. Who likes the white one? Hands up. Okay, five. Who likes the black one? More people. So, who is sleeping? Okay, it looks like the black one got slightly more votes, so sorry, guys, for the white one. Personally I like the black one more; it's more relaxing and you can sleep with it and so on.
So, let's get somewhere: to creating some machines. As I promised, I will be trying to do the same stuff as is written down. So let's get the virtual machines up and spinning. We will need two virtual machines. You got one disk file. That means one important thing: you will have to make a copy of that file, for those who haven't figured that out yet. So, I've got the data hidden here. Let's get the virtual machine image in place. I will copy the disk image somewhere, and I will most probably need root. For those that already have libvirt, the most natural place to put images is /var/lib/libvirt/images, which is a storage location that, even on an SELinux-enabled system, should be a safe place to put files into. So let's get the data there. I will therefore take the VM disk image, make a copy of it, and unpack it. It will take some time. Believe me, this is the fast-unpacking one; I tried several of them, and the one with the best compression ratio was pretty impressive, but it took like 5 minutes to unpack. This one is 40 seconds on the same machine. Once the file gets unpacked, it's around one and a half gigs. We will need two of them. These will be the base virtual machines that we will use as the cluster nodes on which we will do some stuff. Yep? It's not online yet, but I can give you a flash drive for now if you want to get it. I believe this will be recorded, and if not then... but the image will be there. However, the unpacking here is done. So I will do the trick: I will rename it to node1.qcow2, for example. Then I will make a sophisticated clone of the virtual machine by using the cp command. Done. And there will be one additional thing that is mentioned here on the slide: I would like to have a shared drive between those two virtual machines, which is the tricky part. We can create this kind of shared drive with a nice command. For those that copied the files, the same presentation is available as a PDF named devconf-something-something.pdf, so you can copy-paste it from there.
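For reference, the disk preparation boils down to roughly this (a rough sketch; the file names on the flash drive, the archive format, and the size of the shared disk are just how I remember them, so adjust to what you actually have):

# as root on the hypervisor
cd /var/lib/libvirt/images
cp /path/to/flashdrive/vm/node.qcow2.xz .      # copy the compressed disk image from the drive
xz -d node.qcow2.xz                            # unpack it, around 1.5 GB when unpacked
mv node.qcow2 node1.qcow2                      # disk for the first node
cp node1.qcow2 node2.qcow2                     # the "sophisticated clone" for the second node
qemu-img create -f raw shared_disk.img 1G      # small extra disk that both nodes will share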
So, let's create the shared disk. We should now have three disks, and next we need to define the virtual machines. As you might notice, we will need to do a bit of editing here. So let's take the VM definition XML and copy it to node1.xml, for example. The one thing that we will need to change, and also, since you have named the files yourself, you will need to give the right paths to them. For those who really don't like XML, you can try to use something else for this. But in general, it's really just finding the line with the source file saying nodeX, changing it to whatever you have named the disk file, and checking that your path looks right. I have named it node1 here. I will call my virtual machine node1, or rather node01; who knows what else I've got on my system. And the shared disk drive is called just shared disk. So, yeah, I've made a small change from the original here, so let's put the shared disk there. The easy way to figure out whether I've done everything okay is to try to define the virtual machine in libvirt, which is as simple as virsh define. And I hope you have the virsh command. So, virsh define node1.xml. So far it looks good. So let's make the clone of the machine, the geeky way, with cp: let's change the name slightly and change the path of the disk drive slightly. If someone wants to create their own machine in some clicky interface, you can do it; you can just assign both drives to those machines. There is one small, tiny thing to do, and that is, for the shared drive you should search for a checkbox or something saying the drive should be shareable. This tells libvirt that more than one machine will be accessing the file or the disk drive, and it will not complain when you try to start both machines at the same time. Otherwise you get an error like: this disk is used by some other machine, you cannot do that. But we actually want to do that. So, I will try to define the second node as well, which looks fine. Let's have a look whether I have them turned off, and let's try to start them up. Is there someone here who is scared of the command line? Hands up? Sofia, please. Okay. So, the virtual machine that is there can also show something on the graphical output, but you can use the serial console for it, which is a great thingy. So I will do this nice trick of virsh start and virsh console to demonstrate, and you can actually see something come up. Surprisingly, it's a CentOS, the latest bleeding edge 7.3, updated yesterday evening and again this morning because of some packages. And that one should end up booting into a system for which I allegedly haven't told you the password, so no one can play around while I get there. So, we will need two machines, both of them booted up, and I will have a small checkpoint now. Who has at least one machine running? Hands up. Okay. Who is in the process of getting at least one machine running? Hands up. Great. So people will actually be doing something here; I'm happy about that. I like workshops that put work on the participants. So, in the meantime, while we get to some working configuration, or at least two working virtual machines, I will do a little bit of talking about clusters, Pacemaker, high availability and so on. If you think I've been talking for too long and you feel bored, raise your hand and I will ask whether you've already got the machines up.
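To recap the libvirt part in one place, it looks roughly like this on the command line (a sketch; the XML file name from the flash drive and the machine names are the ones I use here):

cp vm-definition.xml node1.xml      # then edit the <source file='...'/> lines to point at node1.qcow2 and the shared disk
virsh define node1.xml
cp node1.xml node2.xml              # change the VM name and the disk path inside
virsh define node2.xml
virsh start node01
virsh console node01                # serial console; leave it with Ctrl+]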
But I promise it won't be too much talking. So, the password is a secret for now; just wait a bit more and start your two machines. Once you've got two machines up, I will tell you the password. And it's nothing too uncommon; you just need to add the number at the end.
So, for those who want to hear something about the clustering part, what we actually do with clusters and how it actually works: a cluster is a gathering or group of several machines that work together to run some kind of resources. We call them cluster resources. They are managed by the cluster software, and this cluster software takes care of starting those resources, stopping them, checking whether they are still alive and responding well, and taking any actions necessary to recover from issues. That is quite important, because it usually means it's doing the job that someone else would otherwise have to do manually. So the biggest advantage of having a high availability cluster is that you can have a service that runs on one of the machines, and if something happens to that machine, magic happens and the service starts on some other machine that is working. You don't need to wake up your administrator, you don't need to pick up the phone during the night; it just somehow works, most of the time. This is, in general, how a cluster works.
And that brings us to Pacemaker, which is one possible representation of cluster software. More precisely, Pacemaker is a resource manager, meaning it takes care of starting, stopping and monitoring the cluster resources. That is what we will be creating today, as CentOS and also the newer Fedoras ship Pacemaker as the primary resource manager. In the past there was the rgmanager resource manager, and going even further back we get to Heartbeat, but that's quite far in the past. So we'll be focusing on Pacemaker today and what it can actually do. As I was mentioning, it tries to manage the resources, as the name resource manager says, and the way this plays out in some situations is: we have a service, it fails, or something happens to the cluster node, and then the resource manager needs to decide what to do. It's the one making the decisions about where a resource should run: it should run only on a machine that has, for example, connectivity, or a machine that is actually working and not powered off or something. And for most of the cases that we will need, and also for some extra safety, we will require fencing.
So again, a little bit of exercising with the hands. Who knows what fencing is, in relation to cluster software? I don't mean poking other people with swords in some nice manner; who knows it, hands up? Good, and some have heard about it, that's also good. I work in support, and when I hear from a customer "never heard about it", I'm getting scared. So why? The answer is data integrity and automated recovery. Fencing, by its idea, is something that helps a cluster determine that it's safe to do some procedure. It helps us say: okay, we have, for example, our two-node cluster, we are communicating all the time, saying, okay, nothing is happening, everything is working, everything is fine. And suddenly we stop receiving that chit-chat from the other node, and what do we do now? Imagine I'm node one, and here I've got Sofia as node two; we usually chat a lot during the day, and then suddenly she stops speaking to me.
I remember she was running an Apache resource that was serving a web page for some important site. And what now? What should I do? I would say: okay, she's not responding, so maybe it's not working there, I should start it up myself. But the question is: we have something in common, we have the web page on one file system, let's say ext4. What if she's still serving that web page and I just don't know about it? And I would say, okay, we have shared storage, so why not just mount the ext4 file system and serve it myself? Why not? Because what if I'm the second one mounting it? And you don't want to see the disaster happening afterwards. So here comes fencing, and fencing tries to prevent us from causing data corruption, or, in nicer words, to keep data integrity. Whenever I'm not sure whether Sofia is just not talking to me for some time because she's busy and will respond a bit later, which is acceptable for me, or whether she has really passed out and is sleeping, I need to ensure that whatever she was running that is shared with me, she is not touching it or using it anymore. And for that I use fencing.
Fencing in the form that we will be doing today, called power fencing, is about saying: okay, she's not responding, and it's been longer than I'm willing to wait; that limit is called a timeout. And I decide: it's over, it's going to go like this. I need to keep the service highly available, so I will fence Sofia, meaning I will power her off and power her on, in the best belief that she will recover from whatever problem she has. And the point is: once I try to power her off, I wait for the information from the so-called fencing agent, which can be some script, or some management device on the server, telling me: yes, the server is now down. Until I have that information, I won't touch any of the resources that she was running. Once I get confirmation that, yes, she is not there, she's powered off, I'm sure that she cannot be holding any resource, so I can mount it and start the cluster resources. Meaning I achieve high availability. And this is basically fencing. There are several approaches to doing it. Power fencing is the nice automatic one: when the cluster detects an issue, it will usually itself reboot the other node, and once it's confirmed that the other node is not running, it can take over the resources.
The other approach we can take, if we don't have such a device or if we have some other restrictions, usually called security considerations, saying we should not be able to just power off the other machine, is to focus on data integrity, meaning we are still interested in protecting the actual data. One of the alternative approaches, so-called fabric fencing, is based on access to the storage. Sometimes it's possible that your shared storage supports SCSI reservations, which is a horrible term, but in general you can imagine it as: everyone who is accessing the storage has their own key. So I've got my key to the storage and Sofia has her key to the storage. And in case I have doubts about whether Sofia is responding, instead of rebooting her, because we cannot do that for some reason, I will just tell the storage: take her key out, lock her out of access to the storage. Which actually means she will stop being able to access the data there. She will also not be able to take out my key, so I'm quite safe.
And no matter what state she is in, I can say: okay, once I have taken out her key, I can use the resources there, because I'm sure that she cannot access them. So again, I'm preventing the data corruption. Corruption can still happen, because we don't know what already got written there, but as a principle, I'm sure that she cannot do the corruption; the only corruption that can happen now is one that I cause myself, or whatever happened before. The little downside of this... yes? Sorry for interrupting, but is anyone else having problems with the internet? I'm trying to download a package and I'm getting something like 20 bytes per second. Is that normal? It's not normal. So... okay, so what do we do; I'm trying to download the virt packages to be able to do anything at all. I can't even do an update, I think. Maybe somebody has that package downloaded already? Yeah. I think I can share my LTE connection, but... that would be for everyone. I was able to install one package with it, but that was the last one. Okay, I'm sorry for the connection here; I cannot make it much faster. I can try to make a hotspot here with my mobile data; that might still be viable. Yeah, but...
Okay, just to have a little summary, people: who has at least one VM running right now, hands up? We've got five people. Who has two? Great. Wait, that's more; how is that possible? Okay. To not make this just a waiting session, I will move on a little bit. It's a bit unfortunate that we are not even able to download some of the packages. The images that you have are already preloaded with the packages that we will need, because I was expecting the network would be not so good; it's looking like it's even worse than not so good. So let's try to move on a little. This is the moment I tell you the password, for those who haven't already guessed it: it's devconf2017. So you can try to log in to your machines and see that you are in. And I will now get back to mine and do a little bit of work.
So let's get into my machines. The most important part: once you start a machine, check its IP address so you know what to connect to. I also don't know the IP addresses of mine. The login is root, the password is devconf2017. Works, surprisingly. And this is the IP address of my node 1, I believe; let's make a note of it and keep it somewhere aside. One thing to do once you get into your machine: give it a name. Give it some nice name, ideally one conforming to the RFC conventions for DNS names, so letters, numbers, no weird characters in there. I will not be very creative, so I will change my node 1 hostname to... oh, let's name it after myself: ondrej. As I've got an example here, you can guess what the name of the second node will be. So node 1 will be ondrej. And let's... oh, okay, that's node 2. No worries. So let's have a look at the other one. Surprisingly, I will use the same password, and I will check which IP address I got. Oh, nice, a completely different one. I will call this one sofia, to have some real feeling here about it. So again, I will change the hostname, this time to sofia. Nice. Now I believe that's everything I needed root privileges on the console for. Let's get back to my normal system on the host.
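Just to recap what happened inside each VM (a sketch; the hostnames are the ones from my demo, pick your own):

# on the VM's serial console, logged in as root
ip addr                              # note the VM's IP address for later
hostnamectl set-hostname ondrej      # first node; the second one gets sofia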
So let's try to log in to those systems with SSH; do the same on your systems, try to SSH into your virtual machines. The important part is that you will accept the SSH host key. And the other part... oh wait, I picked the wrong one. The other important thing is that you will see whether the hostname has changed. So here I see ondrej; that's good, it's the one it should be. And let's see whether this one is sofia. Yeah, that's it. So we will go forward, and we will try to log in to our machines using SSH keys, and I believe we have a slide for that. So, who has worked with SSH keys? Hands up. Great. A lot of exercising for the morning, isn't it? So, an easy task: copy your SSH keys to the virtual machines you have; we will use them. For those to whom this looks strange: if you don't have a key, you need to create one using ssh-keygen, and then you just continue with Enter, Enter, Enter. Pretty easy. Not very secure, to be honest, but for our use case, accessing the machine without a password, it will do the trick. I already have such a key, so I will go ahead and just copy it to both machines, meaning to the ondrej machine and the sofia machine. If everything worked well, I can check again that I get in without being prompted for a password. And that is the perfect setup that we will need for doing something in Ansible.
So, Ansible questions: who here has heard about Ansible roles? Hands up. Okay, some of you I was already teaching yesterday at the party, that doesn't count. Given the number of Ansible lectures and presentations during DevConf, I believe there is not much point in going deeply through how to work with it again, so just a brief run-through. Ansible is configuration management, as we know. To configure some machine with Ansible, we need SSH access to the machine, ideally with an SSH key so we don't need to type our password too many times. And we need two files. One of them is the host inventory, or as I call it, the hosts file, for Ansible; it contains the list of the machines on which we'll be doing the actions. The second one is the playbook, which describes which state of the machine we want to achieve. And yes, really, the proper wording is which state we want to achieve: we want the machine to look like this; we are giving it a little bit of a hint about how to get there, but we really just want it to look like this. And, as mentioned earlier, we have Ansible roles, which are nothing much stranger than a nicely structured playbook containing a lot of, hopefully, related stuff bundled together. You can imagine it as a role of the web server, or a role of the system that is configured for synchronizing time. It's actually something like encapsulation in programming languages. If you want to do a lot of stuff as one group and you want to group it nicely, you can have a role. So we can have a web server role that will install Apache, change permissions here, put some files there, and do some other stuff. But instead of copy-pasting that whole thing around, we put it into a role and have it nicely packaged together. We can also do includes with the playbooks themselves, which is very similar to what roles are doing, but there are some minor differences and roles have bigger flexibility in some of the stuff they provide. I won't go too much deeper into this, but we will see how the time goes; maybe we'll also get to that one. So, I hope you are able to log in to your systems with SSH keys, and now let's get some Ansible roles to create a cluster.
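The SSH key part, condensed (a sketch; the IP addresses are just examples, use the ones you noted from your own VMs):

ssh-keygen                        # only if you don't have a key yet; Enter, Enter, Enter
ssh-copy-id root@192.168.122.10   # the ondrej VM
ssh-copy-id root@192.168.122.11   # the sofia VM
ssh root@192.168.122.10           # should now log you in without asking for a password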
So, if you take your papers, the other side, the one that starts with "Creating HA Pacemaker cluster", that whole page is dedicated to describing what we want to do today in a human language, and we will try to rewrite that into Ansible. As you might have guessed from the content of the USB drive, there are some helpers that will help us a lot in achieving some of the goals, and the biggest one is, obviously, the role for creating a Pacemaker cluster. But it doesn't do everything for us; it does most of the stuff, but we will still need to give it a tiny touch at the end to do what we want to achieve. So let's start. I will start reading it out; whoever knows how to do this stuff in Ansible, feel free to start. Yeah, you can raise your hands; that's nice exercising, but that's too much hand-raising for today. You can start, in parallel, writing your own Ansible playbooks and trying to do it yourself. It should be enough to use just the documentation that is provided in the roles on the flash drive, plus the roles and modules that are in your default Ansible installation. You should not need the internet, and you can still do the stuff. So I will try to achieve it without looking into a search engine, and do it with you.
The first task we will go through: create an HA cluster using the role ha-cluster-pacemaker, with some name and using some fence_xvm key. So let's start writing. I will need the Ansible roles, and to be able to use the ones from the flash drive, I will copy them here. So, in my directory, I go to the flash drive, where I've got the roles directory, and I copy it here recursively as roles. Now I have several roles that may not make sense so far, but they will in a moment, and I will start writing a playbook. So, the folder that you need is on the flash drive, it's called roles; copy it to the directory in which you will be working. The layout should be: you have some playbook file, and next to that file you have a directory named roles. It should really be named roles, because of the particular way Ansible looks up stuff, so don't make it too fancy; or if you do, at least make a symlink named roles pointing to it.
So, we'll be writing the playbook, where we will be working on some hosts; I believe the keyword is hosts, and I will call the group nodes. I know that I will be connecting as the user root, which is not what I'm running as right now; I'm a normal user, so I need to specify that I will be using a different remote user. And I want to use the role that is specified on the paper to do the work for me. There is the name of the role, which I could copy-paste, but to have some workout in the morning, let's just retype it. And now I need some information about what to put there next, to achieve the sub-parts saying: create a cluster with the name test-xx, using the fence_xvm key. Okay, create a cluster using something that looks like it is there, but what next? Let's have a look at what the role I'm using actually has as defaults for most of the stuff, and let's search whether I can find something related to the cluster name or anything else. I see that we've got a cluster name variable here that I can change; that looks promising. Further on, if I skip through all kinds of stuff, I can see a fence_xvm key location; that will also be useful for me. And I see I didn't get it copied here completely. And that looks like pretty much it. So let's augment the role definition I have here a little, and it will really just be specifying the cluster name.
And let's call it... I will call this cluster morning; yeah, the cluster name will be morning. And let's use the fence_xvm key, which will be located, I hope, in my local directory, but first I need to copy it from the drive. There is indeed a fence_xvm key there; let's make it available right next to my playbook and let's point to it there. So, this looks fine. This is one part of what Ansible needs: telling it into what state we want to get. But the other thing that we want to tell Ansible is on which machines this stuff should be done. So we need something like a hosts file. Again, you have two examples on the USB flash drive; the second one is irrelevant at the moment, because it was prepared for the case that someone connects to my computer. So you can just follow the hosts_normal one, which is an example hosts file, or host inventory, for Ansible, that we might like to use. In this one, you need to specify two things: one is the IP address of your machine, and the second one is the name of your virtual machine as it is named on the hypervisor. And why that? The answer is hidden, again, in the defaults of the pacemaker role that is doing a lot of stuff for us. It will be trying to set up the fencing for us, and there is a small note: you need to define, in the inventory, the name the hypervisor uses for each cluster node, so the role can configure the fencing for you automagically. And by default it tries to do so. In my case the virtual machines were named node01 and node02, I believe; so name them according to what you have created.
One of the first things that you can try is to run the playbook in dry-run mode, and everything should start looking nice. So let's make a little bit of space on my screen and have a look at how this goes. Something is obviously wrong. Okay, who can guess what is wrong here? One more time? Hostname. The name of the nodes. Yeah, great: here I have a group named cluster in the inventory, and in my playbook I named it nodes. So let's run it one more time. Yes, that's it. We don't have any prizes this year, I think, for great answers... no, we actually have cookies here, so great idea: please serve the gentleman the cookie as the prize. So, who is able to get the playbook running, at least in check mode? Or who is struggling with it? Yep, I can show you the playbook. The command looks like: ansible-playbook, dash i (the small letter with the dot), space, hosts, which is the file containing the host inventory, space, the name of the playbook, and space, dash dash check, meaning: don't actually do anything, just tell me what would be done. Yep. So it will just run the playbook without doing any changes? Yes. And if you see output similar to mine, where it didn't get stuck after the first screen, it usually means everything is good. That's the first... okay, so check whether the key file is in the directory there.
And because we'll be getting to the fencing part, I forgot to mention one thing, a very important one: we need some daemon that will actually be doing the fencing on the hypervisor. And I'm not sure how many of you will be able to download or install it, so we may try to continue without it, but it's an essential part for the cluster to work properly. If you ask me whether it's really necessary, I will always tell you: yes, of course.
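Put together, the playbook and the inventory at this point look roughly like this (a rough sketch; the group name, IP addresses and VM names are the ones from my demo, and the exact variable names for the cluster name, the fence_xvm key and the VM name are whatever you find in the role's defaults and in the hosts_normal example, so double-check them there):

# playbook.yml
- hosts: nodes
  remote_user: root
  roles:
    - role: 'ha-cluster-pacemaker'
      cluster_name: 'morning'
      fence_xvm_key: 'fence_xvm.key'    # key file copied next to the playbook; variable name per the role defaults

# hosts (the group name must match 'hosts:' in the playbook)
[nodes]
192.168.122.10 vm_name=node01
192.168.122.11 vm_name=node02

# dry run first, then for real
ansible-playbook -i hosts playbook.yml --check
ansible-playbook -i hosts playbook.yml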
But there are some tricks that we can use to make it satisfied, at least for a presentation. So, for those who are able to download some packages, and I will switch to the slides for a moment: what we would like to do is install the fence-virtd daemon on your laptop. Here we've got the command, something that should work on CentOS, Fedora, RHEL; yeah, RHEL should also be fine. On some other systems, try to search for virtd or fence-virtd, with either an underscore or something else; it should be a pretty small package, not a big beast. It's a small daemon that does one thing: it runs on the hypervisor, in this case your notebook, and it listens for clients sending it requests like: reboot this virtual machine, or stop this virtual machine, or tell me what machines are there. So it is pretending to be something like a management card for the virtual machines. Our cluster would like to use this for doing the fencing, and to do that it uses the fence key that you are now copying to your machines. And for those who got the playbook into some reasonable state, feel free to run it now without dash dash check; it will take a little while. The point with fence_virtd and the fence key is that both the cluster node and the hypervisor have the same key. If they have the same key, they trust each other, obviously, and the nodes can request rebooting of the other machines if needed. So if someone has it installed and wants to use it, feel free, it will be great. I will show you what the configuration looks like; it's a pretty easy one. For those who didn't manage it due to the internet connection, don't worry, we can still continue without it, but don't consider that to be a really good cluster. I use PCS. PCS on the host? The last thing during the... okay, I will have a look in a while; just let me show the configuration on the hypervisor first and then I will get there. We are getting closer to the "things are getting broken" part, so be ready.
So, on the hypervisor side, the configuration is usually located in /etc/fence_virt.conf. It's a pretty easy configuration where you need three things, or two, depending on how much your defaults differ from what we need. The important part is to specify the interface on which the daemon will be listening, and this is the interface where your virtual machines are; on the hypervisor that will typically be virbr0 or something similar. Then you need to specify the key file it will be using; it should be the same file as the one you got on the flash drive, or if you create your own, just give that same one to the cluster. And in some cases you would have to configure the backend to libvirt with the multicast listener, but in most cases this is the default shipped with the distributions, so it should not be a problem. I have this fence_virtd here, a compiled one, so you can see a really strange path; don't worry about it, it's running somewhere, I hope. Yes, it's running here for me. So that is something that will be great to have; if you don't, it might be a bit tricky later on, but don't worry, we can continue without it. So let me have a look at whoever has a problem, and in the meantime I will let this playbook I have created do its stuff. For those who want to watch, here it is.
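For reference, the hypervisor-side /etc/fence_virt.conf I'm describing looks roughly like this (a sketch; the easiest way to generate it is to answer the questions of fence_virtd -c, and the key path is wherever you put the key from the flash drive):

fence_virtd {
        listener = "multicast";
        backend = "libvirt";
}
listeners {
        multicast {
                interface = "virbr0";                        # the bridge your VMs are attached to
                key_file = "/etc/cluster/fence_xvm.key";     # same key the cluster nodes use
        }
}
backends {
        libvirt {
                uri = "qemu:///system";
        }
}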
Yes? Does the cluster name have to be some specific string? Because I named the cluster just "cluster", and it doesn't work; maybe that's the problem. In the guide there is something written in quotes... oh, that should be okay. We'll see what happens there; I will come over and we will look. The part with the fence_xvm will be failing; the fence_virtd is not there, but that's something we can neglect at this point. So it is expected that it will fail during the check, right? It's expected, because we don't have a running cluster yet; it's actually quite surprising that it got that far. Okay, just with the check. So with the check it might fail at some point; without it, it should be okay. Yeah, okay, so that may be quick. Okay, so, people, let's try it without the check. Let's turn the safeties off. Come on. Yep. Yep. And I was initially planning to give you the solutions on the USB drive, but I decided it would be much better to hide them for a while. For those who are too interested in how the whole thing is done, feel free to stop by after the presentation and you will get your copy of the solutions for the stuff we are doing here. They will also be available online, but if someone cannot live without them, I have them here. So I will run it one more time, without the check, so we can see that nothing needs to be changed there. You can also wait for it to finish and see whether the cluster gets created. Once this whole thing finishes, you have actually created your cluster. So, not that tough of a job so far. If you want to see how it's done manually, you can have a look at the paper, at the front, in the useful resources part: the first horribly long link points to the documentation for creating the cluster. It's the RHEL documentation, but it's relevant for CentOS and should also be relevant for Fedora, unless the really new one changed something, but it should still be relevant there.
So, in my case it looks like we've got a cluster. But that was not the whole point of the task: we wanted to actually create some cluster resource, and I couldn't find a better cluster resource than the one that is shipped with Pacemaker that literally does nothing; it's a dummy one, and it's named Dummy. So let's create a dummy resource, and here we get into a little bit more of clustering and its naming. Feel free to stop me if you get lost in something; I will try to explain it first, but if you are still not sure, stop me. So... This error is expected if we are not running the fence_virtd, right? Yeah, I'm running it, but... Okay, then we have some problem communicating. One more thing: do you have a firewall on the host? Yes. Then open port 1229. Yeah, that's a good remark. For those using a firewall, which I hope is a lot of you: TCP or UDP, I'm not sure, so try both. To be honest, it's a multicast thingy, so UDP should be the answer, but I have seen it also use TCP sometimes, so I would be lying if I said I know which. Port 1229 on the host, for those running fence_virtd, should make the fencing agents happy, or at least communicating. This is the part I'm dealing with at work on pretty much a daily basis: this part not communicating well, and then the rest of the cluster not working properly. The reason is, unfortunately, that the cluster relies on the fencing working, because only then can it actually do the stuff, actually do something. Without fencing, it's something like giving your employee instructions without giving him permission to carry them out. You can imagine it like that.
The employee won't do it, because he's scared of screwing something up, and you can't give him the permission, because you cannot communicate with him. So... Can you show the hosts file once more? Yeah, sure. The hosts file you should have on the USB drive; the example is there. The playbook is something that I'm writing on the spot, but it should be doable. Is there some new error? Yeah. And mind the spaces; as we are using YAML, it can be really tricky about the spaces. This one looks fine, which also surprises me. But in the meantime, while you check your playbooks, I will try to expand mine with the cluster resource, which will be provided by the other role, pcs-modules-2. And we will actually write some kind of task here. You may notice we are already getting to customizing the stuff, not just using things that are available somewhere. So, what we want to do, or what the paper is telling us to do: use the pcs_resource module from the pcs-modules-2 role to create a resource named mydummy of type Dummy. And to understand what we actually need to do, let's have a look at the documentation. Yeah, documentation. One thing you will need for the documentation, as we will be looking into a module that is not installed in the system, is to specify some kind of path where to look for the modules, and then specify which module we are looking at. So hopefully this will work. It is something you have written on the paper, on the front side; just check the part about accessing the Ansible pcs-modules documentation. It's exactly what I have typed, and it's exactly what you need to run, in the same directory where you have your playbook. From there it gets to the roles and their modules, and we will see in the documentation what we can do.
So, unsurprisingly, I have an example here describing the dummy, but the important thing for us is that every cluster resource is required to have a name and a resource class, and, for what we want, I believe the relevant one is the resource type. So what we want to create is a resource with the name mydummy and with the resource type Dummy, and nothing else. Looking at this resource class: it's mandatory, but it's got a default, so I won't care about it. I will need the resource type, which is defined as Dummy. State is again a default, and so on. So, getting back to the playbook, I will create a pcs_resource named mydummy. It's hard to choose the right names, people; it's just the random one that came to my mind. And with the resource type Dummy, with a capital D. This is a really simple resource. I will again arrange it somehow so you can see the playbook, and I will run something in the meantime. So at the top you see the playbook, and I will be running the playbook against the cluster again. It should be pretty fast this time, because most of the stuff is already done and should come up green, and at some point we will get to creating the dummy resource. And here's one mistake I have made that went pretty much unnoticed as far as the cluster is concerned. Imagine that we now have two nodes in a cluster that are sharing the resources. We actually want to create one dummy resource, but I have written the task here asking both nodes to create the dummy resource. They were, I would say, dumb enough to both try to do that, but in the end I will have only one resource. It is possible that this fails if there's some race condition; blame me, I wrote the stuff. So at least it didn't fail.
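For the record, the added part of the playbook ends up looking roughly like this (a sketch; the parameter names are the ones from the module documentation we just looked at, and run_once is the fix I get to in a moment):

# appended to the same play, after the roles
  tasks:
    - name: create the dummy cluster resource
      pcs_resource:
        name: 'mydummy'
        resource_type: 'Dummy'
      run_once: true     # creating it from one node is enough; see below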
What we can check is whether the resource really exists there; I will have a look at one of the machines. I will have a look at the sofia machine and check the status of the cluster. Here I can also see something is failing, which is surprising... no, it's not surprising, I understand why. So here I can see I've got a mydummy resource that got created, and it's actually starting on the node ondrej, which is great. And it is there only once; that is the expected outcome. What I should actually write in the playbook is that I want to create this resource only once for the cluster. So when working with the cluster parts, we want to make heavy use of the run_once directive, saying it is okay for just one of the nodes to create the resource in the cluster, as there is no difference whether the resource is created from node ondrej or from node sofia; in both cases it will be the same dummy resource. So, who has got a dummy resource? Hands up. One, two, here. No luck? That's a YAML error somewhere then. Let me have a look. Just a space. This is the module. Yeah, if you get a huge error, it's a feature, meaning you have a typo; it won't tell you that you have a typo, but that's usually what it is. So, we have a cluster with a resource. We're good, people. You already know how to do clusters with one playbook, but let's not stop at something so easy.
What I want to show you next is adding some shared storage, because right now our dummy resource is just nothing, thin air, running either on one node or the other but actually doing nothing; and to do something that makes sense, it is a good thing to have some shared storage. So what we would like to do is use another role to set up the shared storage. There are several ways we can approach this. One of the easy ones is just to share the drive itself, have partitions on it, and put some file system there. That is a little bit clunky, or not very flexible, when you need to change something on that partition, compared to some other options, namely LVM. Who works with LVM? Good, very good. So you will also like LVM in a cluster. Unfortunately, it's not just the normal LVM; it's called highly available LVM, HA-LVM, and it needs some additional configuration. So, moving on to the second part of our paper to get to something exciting: we would like to create a highly available LVM of the tagging type. It's the one that currently works with the module you have there, as the other one I managed to break during the week. So we will have a look at the tagging variant, which allows us to have the shared storage active always on only one of the nodes, which is quite good and sufficient for our use case: we want to run our web server, in the end, on only one of the nodes. So I will start writing a playbook, then run it for a bit, and then I will walk around for more of the issues you are facing. Any questions in the meantime, not related to errors? Great. How would you list all the... when you typed your command I didn't see it, and then I had to guess it. How do you find out which modules are in a role? I will do ls on the roles directory here and find the modules; which modules are there? Yeah, because pcs-modules... Okay: to know which modules are included in pcs-modules-2, you look into the role's library directory. And it has to be like that, by Ansible standard? Yeah, this is a module bundled inside a role. Usually the modules are bundled with Ansible itself, the ones provided with the system, and it takes a moment to figure out where exactly they live in the system.
It's somewhere in the Python site-packages, ansible/modules, and at that place you can find them. I actually know how to find it; it's brute force, but... I know there is a module named filesystem, it's one from Ansible, so the directory where it is hiding in the system is... it is... okay, it's not there. It should not be. It doesn't matter, it's okay. Yeah. Yeah, lowercase. So, if you are looking for the modules that are there by default: on my system, meaning it might be somewhere else on yours, it's /usr/lib64, Python, site-packages, ansible, modules. Yeah, the thing is, I don't actually need to grep for them, but if I have them bundled in a role, ansible-doc doesn't know about them, because it doesn't include them unless the full path is specified. Yeah, the problem is that when a module is included in a role's directory, it's not picked up automatically. Yeah, that's a nice one. Okay. So thank you for the question; good thinking about how to place them more intelligently.
So, still, in the meantime we'll move a bit forward. I will make another playbook, and since I like reusing code, we'll do it a bit differently here. I will still use some of the roles I have, but I'm not willing to copy the top part again, and again we need to look up some defaults from the role that we are planning to use. We'll be using the role ha-cluster-lvm, which tries to automate setting up the HA-LVM with all the quirks and requirements that don't really bother us as long as we just want a highly available LVM; we just want it to work. So what do we actually need? We will need a shared drive, as the default in the role is most probably not the one we want to use. On the virtual machine, if you use the files I gave you, it should be /dev/sdb, the second drive, something that is not the normal system drive. Then we specify that we want to use the HA-LVM type tagging; that is the default, which is good enough. And we want to name our cluster VG somehow, so how to name it? Is there some Mike here? Good, so I will use mike.
Can I ask a question: what if your shared drive is named differently on each node? That will be a little bit tricky. That will be a lot of tricky. For different names, there should be some identifier in /dev/disk/by-id that will still lead to the same drive, but to be honest, it might just not be there for an empty drive. So you can actually make a symlink on your machines, which is a dirty approach, but it will serve the purpose. Yep. I think the dirty approach would be: create a symlink with the same name on both machines, pointing to the right device, and then point the shared drive setting at that symlink. The system will follow the symlink and set it up there, and I'm curious how well this will work, so go ahead with that one. Maybe it would be quicker to start over and create a new virtual machine? It depends. If you are missing the drive somewhere, just add another one, or you can actually add a drive to both of the machines and have that third drive named the same way; that will again be a dirty workaround. Okay. One thing: I will try to progress here before you figure out how to trick your nodes into having a shared drive with the same name, because a UUID should be usable, but it's usually tied to a filesystem or a partition, and since we have an empty drive, it might just not be there.
The example that we have in the defaults points to a device on iSCSI, which is easily identifiable because there is an IP address in it and usually you're accessing the same one, but again, it might be different. So that's quite interesting. I will therefore keep moving forward. One additional thing that is mentioned in the notes: the by-path identifier can also point to different things, because the local drive can again be on path 1 or 0 or somewhere else. And one more thing: if you use LVM on your virtual machine, and the one that I have provided to you is using it, you need to specify one more nifty thingy, and that is the list of the local volumes. This is a bit manual; it's related to the way the HA-LVM tagging works. It needs to be told by hand what is considered local, because at some point it's hard to distinguish it any other way than by just telling it. Compared to the other approach that can be used with HA-LVM, which is called CLVMD: that one has a flag through which it can detect, okay, this VG is a shared or clustered one, and that other one is not. But with the tagging one we don't have such information in the system, so we need to tell the system: this is the list of volume groups that are local to this machine. And, surprisingly, as before, that should be it.
So, looking at the playbook, which looks pretty shy and short, I will go ahead and run it, and... I got an error, wow. So, yeah, this is the same error as... no, okay, this is the one I put here intentionally. So one more thing that you will have to do: go into your roles directory and create a symlink named pcs-modules, without the -2, pointing to pcs-modules-2. That is the old name still used by the ha-cluster-lvm role, which should be rewritten soon, but it's not there yet; for our purpose it's enough to just create the symlink. After that you should be able to run the playbook; the playbook is up here so you can see it. It will do a bunch of stuff; it will, for example, rebuild your initramfs. Who knows what an initramfs is? Good. You like it being rebuilt, don't you? A lot of great stuff. Does it make sense? Yep. What is the initramfs? It's this thingy that is loaded together with the bootable kernel, the part that starts before even mounting the root filesystem. And that touches on why we are actually rebuilding it: the reason is that we have changed the LVM configuration, which contains, among other things, what should be detected or activated during boot. That is why we are putting the configuration not only into /etc/lvm/lvm.conf, where it is used when the system is running, but also into the initramfs, where it is looked at before the system starts running. And that is exactly the point at which we want to prevent our system from doing some wrong stuff: at that time the system might just think that our shared LVM is a local one and try to activate it, and you can imagine what can happen if two or three machines do the same; then you suddenly see, or rather don't see, the LVs there, or you get some errors. That's pretty much it: it would access shared data that it should not.
The HA-LVM tagging variant, in its idea of how it works, is doing one really simple thing, and that is using tags, as the name suggests. It says: this VG has a tag node1, and each node has a definition saying: you can access all LVM stuff that is in the local volume list we specified manually, plus all the stuff that bears your name. Node 1 can access the VGs that have the tag node1, node 2 can access the VGs with the tag node2. This way it is guaranteed that they won't try to access an LVM that is not local to them or not tagged with their name. And this is also the way they hand it over: one removes the tag, puts the new one on, and the other node can take it over. That's also the reason why it can be accessed from only one of the nodes at a time.
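For reference, the second playbook I'm putting together here looks roughly like this (a rough sketch: the device, VG name and local VG are the ones from my demo, and the variable names below are placeholders from memory; the authoritative names are in roles/ha-cluster-lvm/defaults/main.yml, which is exactly where we just looked):

# halvm.yml
- hosts: nodes
  remote_user: root
  roles:
    - role: 'ha-cluster-lvm'
      shared_drive: '/dev/sdb'        # the second, shared disk of the VMs (variable name: check the role defaults)
      HALVM_type: 'tagging'           # the tagging variant, which is the default anyway
      cluster_vg: 'mike'              # name of the clustered volume group
      local_volumes: [ 'centos' ]     # VGs local to each node, e.g. the system VG of the CentOS image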
So, time for a checkup: who is stuck on something broken horribly beyond repair? Great; I'm a little bit disappointed that no one has broken the system completely yet. Yes? A question about using this in a two-node cluster with live migration of virtual machines: for that one I would have to look it up, because live migration is something a bit special there. I think it works with the tagging variant and does a little bit of magic, but you cannot use it if your virtual machine disk resides on a filesystem, because that would mean you need to mount ext4, or whatever local filesystem, on two machines at the same time, and that will make corruption happen. So in the case where you have it on a filesystem, no; but if your machine is using just an LV, then I think there is some procedure that avoids this tagging issue. I would have to really check that part in the code, but for the filesystem case, for sure not. I'm aware that with live migration there is something, and it's not always necessary to have CLVMD for live migration; I believe this is a part we can have a look at afterwards. Thank you.
Yes? It is good that you see the drive on both nodes, but it doesn't contain anything yet; we still have to create some stuff on it. And has your shared drive appeared? It's not looking very good; that's unfortunate. The reboot should not be needed; we are in an environment where it should just work. Let me check whether I can even see mine; I'm just assuming my machines are doing fine when I'm running the playbooks, I'm not checking them. Also only one... okay, I don't see it here. That's interesting; it should be there, and that's an interesting question, why we don't see it. Okay, don't look now, you might see a horrible thing happening... Okay. Ah, thank you. It looks like the size is zero here, and there is something here, meaning either we don't see it or it's not the same drive, which would mean we haven't set up the machines properly, and that would kind of ruin the rest of the stuff. So let's try to fix it. Okay, one of the drives is here, and I believe the second one should be the same. Okay, it looks to be the same, so the hypervisor is playing tricks on us; that's not nice. Okay, let's try a reboot. That doesn't help. Okay, I'm really sorry for the hiccup; thinking about it, the difference is that I'm using a file here, whereas I mostly use block devices for this, and it's not complaining about anything. One more idea that came to my mind: if the node that can see the disk has actually written something to it, we would see that even after the reboot. Okay, everyone, it looks like we have a problem here, but I'll try one thing: I'll shut both of them down and start them in a different order, and I'm guessing what will happen will be a horrible thingy, but I want the one ending in 249, node 2, to start first. Okay.
So, what is still in front of us, if we manage to get the shared drive to look like a shared one: creating some LV on it, which in this HA-LVM tagging setup is a bit special, and creating some filesystem on it. And looking at the time, I think I will mostly be sorry for not getting into the last part, but we will get as close as possible to it. I can see something happened here when I powered them off and started the one that was second first, but I'm guessing it's just playing tricks on me.
So despite that, I will try to continue, and maybe we will see how it looks when you believe your storage is shared but it isn't, and something horrible happens. I will try to at least demonstrate how we can move forward, assuming that once we have the shared storage it works, and see what can be done further on. Can you try really powering them both off and starting the second one first? Okay.

So let's get further on: we will try to create the LV and the file system. As you know me by now, I will do it the quick way. Here I believe we don't need any special roles; we will just write some tasks, and pretty lame ones at that. If you want to figure out why I'm using the tasks that I'm using, it's the second horribly long link on the front page of the documentation; that one explains how to add an LV into HA-LVM properly. It is not as straightforward as you might think, because the role that we ran has limited access to the shared VG in such a way that it should not be accessed by multiple nodes at a time. So it is actually preventing us from doing a lot of things that could harm our data, and we need to circumvent that in a well-behaved manner to really create something there.

It consists mainly of adding a tag. As I was mentioning, we need to have a tag on the VG to be able to access it. Here is one thing different from what I was talking about before, as this stuff changes over time: we will be adding the tag "pacemaker", and we will then try to access our data using that tag, together with some tweaks in the configuration. It maybe sounds weird, but it is the official procedure for doing it right. The important part: you should do it on only one node at a time. Do it on both and, of course, all the nice stuff starts happening: data corruption, loss of data, and so on.

So, going a bit through this part: I will add the tag to my VG; this is not as automated as it might look from here. Then I will try to activate the VG, and I will use a special trick to get it activated, because otherwise the normal configuration will not allow it; the volume_list override will give us that specialty. So be prepared, I will now be writing more than reading, and making a lot of stupid mistakes and mispronunciations. The whole purpose of this is to allow one of the nodes to really access the VG on which we want to make changes. And I'm missing something here... no, I have too many quotes here. So this way I will actually activate the VG so that we can make changes to it.

After this I would like to create the logical volume, using the lvol module, the standard one shipped with Ansible, which takes, among other things, the parameters for how to create it. I will maybe skip showing that now, to not slow down the presentation, and show you how it looks further on. Sorry, we got a little bit lost in time, but let's have a look at the solutions, the playbook solutions. You wouldn't believe how the first one looks: the same as the one I have just written. What I wanted to write here, though, is actually creating the LV, and this will be more of a walkthrough of how it should look; you can still follow the paper and try it out at home, and I will also hand over the playbook you should end up with, so you can try it the right way. But just to get through it, so we know what will be happening, let's have a look at how it is done there. Here I am using cluster_vg as the VG name, and I am creating an LV named data with a size of 512M. The special thing that we need to pass is again the config with the activation part that we used for activating the VG.
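A minimal sketch of those tasks might look like the following. The VG name cluster_vg, the LV name data and the tag pacemaker come from the talk, but the exact form, in particular passing the same --config override to lvcreate through the lvol module's free-form opts parameter, is an assumption, not the workshop's actual solution playbook.

# Hedged sketch: add an LV to a tagged HA-LVM volume group. Run against ONE node only.
- hosts: node1                # pick a single cluster node
  become: true
  tasks:
    - name: Tag the shared VG so this node is allowed to touch it
      command: vgchange --addtag pacemaker cluster_vg

    - name: Activate the VG, overriding volume_list just for this command
      command: >
        vgchange -ay
        --config 'activation { volume_list = ["@pacemaker"] }'
        cluster_vg

    - name: Create the logical volume
      lvol:
        vg: cluster_vg
        lv: data
        size: 512m
        # assumption: pass the same override to lvcreate via free-form opts
        opts: "--config 'activation { volume_list = [\"@pacemaker\"] }'"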
That config will allow us to create the LV and also make it active. "Make it active" means this: LVM can have LVs, but some of them might simply not be available to us, meaning they are not active. The ones that are active can be accessed; there is a /dev/cluster_vg/data file, or rather a symlink to the dm device, that we can actually use. If the LV is not active, the symlink is not there and we cannot access the data at all, even though we know the LV exists. So to actually create a file system, we need this LV to exist and be accessible, and that's why we use the activation config. Then we create the file system, here an XFS, on that path. And once we have that, we run the procedure from the beginning, but in reverse: we delete the tag and we deactivate the VG. You may notice I'm not using any special configs there; LVM is much happier deactivating things than activating them, so if you want to prevent something from being accessed, that's much easier than making it work, and that's why I can omit a lot of the options. This creates for us the LV with an XFS file system that we can use further on.

The part we will look at next is how to get it into the cluster, because right now the cluster only sees that HA-LVM is out there somewhere; nothing is managing it, it's just sitting there. So what do we do with it so it can be used for something useful? We create a cluster resource for the HA-LVM volume group: we tell it which volume group it is and use a special keyword, exclusive=true, meaning we want it running exclusively on only one of the nodes. I believe this is something that is not used when live migration is in place, so that is maybe also part of the reason the option exists, but still, we can have a look at what is there. Once we have the LVM resource in the cluster, the cluster will take care of activating it on one of the nodes, meaning this tedious job of vgchange, activation and so on is done by the cluster, and one of the nodes will have the active LV that we can access. On that LV there is a file system, which we can mount, and that is another cluster resource, obviously named Filesystem; we again just specify which device it is, which directory to mount it in, and which file system is on it.

One of the three key things that I would like to at least introduce to you: even though we tell the cluster that we have a Dummy, an LVM resource and a Filesystem resource, there is one problem. The cluster will start them in some way of its own choosing, but what if we want a specific order in which they should start? For that the cluster has something called constraints, and there is a module for pcs constraints that allows three important things. First, the order: activate the LVM first and only then mount the file system, because the other way around it makes no sense. The second interesting constraint is colocation: on the node where you activate the LVM, on that same node try to mount the file system. Again, it makes no sense if on node 1 I have the LVM running and on node 2 I try to mount the file system; that will obviously fail and just generate error messages. And the third constraint we can use is a location one, which says: I would prefer the whole thing to run on node 1 if possible. Without it the cluster will just choose one of the nodes; all of them are considered fairly equal, there are some scores based on which it decides, but without any special guidance it simply picks one. So these are the constraints that we can use, and it's important to use them afterwards, as you can imagine.
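Continuing the sketch from before, the finishing steps described above, putting an XFS file system on the LV and then handing the VG back by deactivating and untagging it, could look roughly like this; again illustrative only, and it runs on the one node that currently holds the temporary tag.

# Hedged sketch, continuing on the same single node that holds the temporary tag.
- hosts: node1
  become: true
  tasks:
    - name: Put an XFS file system on the new LV
      filesystem:
        fstype: xfs
        dev: /dev/cluster_vg/data

    # Reverse of the earlier steps; no special --config is needed for deactivation.
    - name: Deactivate the VG again
      command: vgchange -an cluster_vg

    - name: Remove the temporary tag so the HA-LVM tagging stays consistent
      command: vgchange --deltag pacemaker cluster_vg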
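For the cluster side, the resources and the three kinds of constraints can also be expressed directly with the pcs command line. This is a hedged sketch rather than the workshop's own roles (which use the pcs Ansible modules); the resource names my_lvm and my_fs, the mount point /var/www and the node name node1 are made up for illustration.

# Hedged sketch using plain pcs commands; run once, from any one cluster node.
- hosts: node1
  become: true
  tasks:
    - name: LVM resource that activates cluster_vg exclusively on one node
      command: pcs resource create my_lvm LVM volgrpname=cluster_vg exclusive=true

    - name: Filesystem resource that mounts the LV
      command: >
        pcs resource create my_fs Filesystem
        device=/dev/cluster_vg/data directory=/var/www fstype=xfs

    - name: Order constraint, first the volume group, then the mount
      command: pcs constraint order start my_lvm then my_fs

    - name: Colocation constraint, mount only where the volume group is active
      command: pcs constraint colocation add my_fs with my_lvm

    - name: Location constraint, prefer node1 when possible
      command: pcs constraint location my_lvm prefers node1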
The last part of the presentation, installing Apache and running it as a cluster resource, is just an addition to what you can see here: it's enabling it, mounting the file system somewhere and putting some content on it. So I will not show it here and will leave it as homework for those interested; if anyone is interested, I will be outside to talk a bit more about it. Because I'm a bit out of time, I will end here, at the point where we at least have the cluster with, kind of, shared storage and some resources, and we know how to take it further; the rest is more of an exercise. I hope it was not too tedious or boring, and I hope you can finish it on your own; the materials will be available. See, I didn't even use this slide. I would like to really thank you for getting here, trying out some stuff and telling me some nice things about what is broken and so on. A lot of the work here was done to ease my own daily workload, and I believe it's worth sharing with others; that was the whole point of sharing this with you. So thank you for your attention, enjoy the rest of DevConf, and feel free to come grab a coffee with me for some more questions. Thank you.

About the playbooks: if you check the address up there, the one ending in /blog, you will see some examples with the whole playbooks, and actually the cluster is just that one line that we already created; the first playbook is "create a cluster". I also have solutions for RGManager, the older cluster stack, with what is needed to configure the server and the client. And yes, feel free to keep the papers, you can use them; please return the flash drives. Thank you very much.

I have seen that you have pcs libraries that manipulate the pcs resources; I have worked on this for the last three months, I'm building a cluster, we are using pcs and I'm using Ansible, and I was writing such libraries on my own. I see yours are quite sophisticated, because you always check the difference and only then push it. Yes, the part at the bottom I'm not really proud of, but the diffing, that's a nice way to do it. There is one nasty bug, though, which happens when the cluster configuration gets large: it might just take too much time and fail during the process. However, we haven't reached that size of cluster in our testing yet, I assume. Thank you, thank you very much. I have to run to the conference. Thank you very much.