Very well populated session today. Welcome to the final set of live demos. We've got four marvelous speakers, all of whom you have seen before. We're starting with James (purpleidea) talking about configuration management, or something along those lines. Okay, take it away. Thank you. Hi, everyone. This is going to go really quickly. I gave a full version of this talk, but apparently they were missing lightning live demo speakers. I love live demos, so I'm going to give you about a third of the live demos I did in my main talk, which was recorded, because there's a great video team. So really, really quickly: if you have questions, we'll see if we have time at the end, but we won't have much. I'm purpleidea on the internet. This is the lightning version of this talk. I made it just for you. A quick big thank-you slide, with all the stuff you can read, because this is the lightning version. Why are you all giggling? I didn't even make the jokes yet. So here's some stuff. If you're not familiar with Puppet: there are lots of nasty hacks you can do with it, and it makes for a very unclean design. You can read through these slides, which I'll post, or watch the main talk. And basically, I come to this conclusion: is this the right way to do things, with all these scary, nasty hacks? Come on, is it? Of course not. So here's my "nope" guy, whom I like to play for you. This is my favorite nope person. If you have a better nope person, please let me know, because I've never found a nope that's as nopey as this. And the nope thing goes off. Even in a lightning session, it is sufficiently important that I show you how nope this is. So eventually I sat down and said: I'm going to write a new tool. Oh no, not another tool. But I have a really good reason for it. It's called mgmt, because I'm really bad at naming, so I'm sorry. And this is about how Red Hat loves Debian, and so on.
Anyway, my tool has three main design points. The first is that it runs in parallel: if you think of the configuration as a graph, it will run through the whole DSL in parallel. The second is that it's event-driven, which I'll show you shortly. And the third is that it works as a distributed system. Let's see if we can get through these three demos. So here's your status quo: you go through the red lines, which is basically what one Puppet execution might look like, doing one, two, three, four, five, and six, and then seven. But in fact, you can parallelize this so that the whole left side runs at the same time as the whole right side. Make sense? Yes? Yes, hey, there we go. And also, if you look at the 2a and the 2b: once the first blob, this resource, has run, those two resources can run at the same time. So, should I show you the live demo of this? All right, let's do this one. This blob here will take 15 seconds to run. These ones should take 10, 10, and 10. So how long should the whole thing take? Math is not with us this morning. 10 seconds, then 10, and then another 10, while this one takes 15. So the whole thing together should take about 30 seconds. That's right. And we're going to ask the system that once it has been converged for five seconds, it should quit. Ooh, that's loud. OK, so this is actually running the tool. We'll time it and run it. So it starts up, and if you look here, you can see the 15-second install and this 10-second thing are both running at the same time. Ten seconds go by, and right away this first one finishes and the second one starts up. Five seconds after that, you can see that, in parallel, this other one finished running. Five more seconds after that, the third blob finished.
And the third blob at the bottom started. These blobs are really resources — something that's actually doing some work in a very specific way. It goes through, this last one finishes, five seconds go by, and boom: if you can see, it took about 36 seconds. So that's the 30 seconds, plus waiting for five, with very little overhead. Make sense? Whoever's on their email is busy focusing on their email; everyone else is awake. So that's basically the idea: you can run in parallel, and it turns out to be quite efficient and useful. The second aspect is the event-driven aspect. Think of Puppet or existing tools that run: they go through the whole resource graph, and then 30 minutes later they start up and run again, going through the whole thing, checking and applying everything. But if you want to make a change outside of that 30-minute window, or if something changes after your tool has run, it won't actually be noticed until the tool runs again. So what we actually do is start up, watch for events on every single resource, and also check and apply. Then, if something changes, we can instantly fix it right away. So: demo, demo. Live demo, all right. We'll do a very simple graph. I'll just sit here; that's more comfortable. This graph has three resources: an f1 file, an f2, and an f3 file, and they each have contents "i am f1", "i am f2", "i am f3", and so on. And this fourth file here basically says "I shouldn't exist". So we're just going to run this over here, and on the right-hand side we'll just make a directory so you can see these files. There's nothing here. A little squealing there. So we run this thing, and very quickly we go here and see that it's made the three files. Can you see that OK in the back, on the terminals? Yes? And you can hear me in the back? That's a good start. All right, good. So you can see that the three files have those contents. But you can actually just remove f2 and cat f2.
And it comes back, right? So, remove f2 — it's right there. But it works so quickly that you can actually remove f2 and cat f2 on the same line, and as quickly as you do this, the file just comes back. And of course you can do things like echoing "hey, Debian" into f2, then cat f2 — still the same thing. And if you do this — watch cat f2, which runs the command over and over — then as fast as you're running it, the engine on the other side is noticing these changes and fixing them as soon as possible. Now, this is maybe not very exciting for just files. But think about all the different resource types where we can apply a change instantly. And when we look at higher-level resources — say virtual machine resources, or container resources, or even some sort of database resource — all this stuff will happen live, very quickly, in a real management engine. Any questions, quickly? Questions? No. All right, shall we continue? How much time? Six and a half minutes. So these are just some examples. For services, we use systemd events to get this information. For packages, this wouldn't have been possible without the excellent package team — part of which, Zymian, is in the back; he made this possible and answered my stupid questions on IRC. So thanks to him. And what does this really feel like? I've said this is config management, but what does it really feel like, to you or to me? I think this is actually sort of a vague kind of monitoring, because you can think about putting together a system that does config management but also monitors the state live, so you can fix things or notify someone if something changes. So it's all trying to wrap these things up together. Really quickly, the last quick demo. This is a topology that you might be familiar with: clients and servers. What's the problem with this kind of topology?
Single point of failure. Single point of failure, right. What's another problem? Scalability, right. Here's a different topology — the arrows are actually pointing downwards; this is a central orchestrator. And what's the problem with this sort of topology? Single point of failure, that's right, same thing. And what else? Scalability again, right. So you have the answers. So in fact, skipping forward, we actually build a network like this, where everyone is a peer, and every peer can talk to any other peer in theory. But what we actually do is elect temporary primary machines, which become the etcd masters — because we built this on etcd and the Raft algorithm, and that's how we communicate. And what I'm actually going to show you is how these machines talk together. What we want is for a machine to be able to put information up somewhere — in this case, into a distributed key-value store that's managed by the cluster — and for other machines to be able to pull that information down. Each machine I'm going to show you is going to have one file which it creates on itself, and one file that it pushes up. So that first machine puts one file on itself and pushes one file up. How many files is it going to end up with on itself? One plus one: two, exactly. Very good. Let me just show you this and see it working. So I'm just going to run this. Actually, we need to make a directory for each of these machines — so, right here, and we can just tree that. You can see there are, in this case, four directories, one to represent each machine, just because I don't have a lot of machines with me right now. So we'll run each one at a time. We run this first one: it starts up, and boom, you have two files. It puts one of those files on itself; the second one it puts as a virtual representation into this database; and then it sees what's in the database and pulls it back down. So you've got two. Should we start up the second one?
Start up the second one. So we'll do the same sort of thing, but now we point it at anyone in the cluster, so that they can cluster together. We start up the second one — same thing. How many files are we going to have on this new machine? I hear a three; that's correct. So it has one on itself, pushes one up, and then pulls everything down, which is now two. And that first machine — how many is it going to have? It's going to have three. So let's run this. See how fast? Boom: they now have three each. Should we add a fourth one — a third one, I mean? OK, so let's continue this game. How many files now, when we run this one? That's right: four. You're getting it. We're counting — counting as a team; it's great to work together. So we run this, and very quickly you've got four files. And the other two notice instantly that something has happened, and they add those additional files. This kind of pattern would be used for something like this: one machine might be a web server which wants to have traffic routed to it. So it would say: hi, I've turned on my web server, I'm available at this port, please route traffic to me. And a router might be looking for these sorts of patterns and say: ah, I'm looking for web servers that match a certain pattern; I will now open a route to you on the fly when you ask for it. And similarly, if the web server were to shut down, the router would see that rule disappear and close the route. Make sense? Very important for automation. So we've actually now started three machines. Should we add one more, just for fun? All right, I have one more here, and then I definitely run out of terminal windows, so it doesn't show very well. I'm going to run this fourth one, and it starts up, and then quickly you can see that they all have those files there. And if I actually just ask the cluster how many servers are running, you'll see there are three servers participating in this cluster.
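The counting game in this demo follows a simple rule: each machine keeps one purely local file, publishes one into the shared store, and mirrors down everything anyone has published. Here is a tiny illustrative sketch of that pattern — a plain Python dict stands in for the etcd-backed key-value store, and none of these function names are mgmt's real API; this is an editor's toy model only:

```python
# Toy model of the "exchange" demo: `store` stands in for the
# cluster-wide key-value store (etcd in the real tool).
store = {}

def publish(host):
    """Each host publishes exactly one file into the shared store."""
    store[host] = "exported-by-" + host

def files_on(host):
    """A host holds one local file plus a copy of every published one."""
    return ["local-" + host] + sorted(store.values())

publish("h1")
print(len(files_on("h1")))   # first machine: 1 local + 1 published = 2

publish("h2")                # second machine joins the cluster
print(len(files_on("h2")))   # 1 local + 2 published = 3
print(len(files_on("h1")))   # h1 notices the new export too: 3
```

With n machines, every host ends up with n + 1 files, which matches the 2, 3, 4 counted together with the audience.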
Everyone else becomes a client, because we don't need an infinite number of primary servers. But what would happen if we were to kill one of these machines? So let's just pick one here. I don't like this one — it's somehow on fire — so I'm just going to kill it. And if we go back here and look, you can see that it used to be h1, h2, and h3, but now h3 has died. So the cluster has automatically said: I'm going to start up a new server, because we wanted at least three servers in this case. All right, so this sort of thing happens automatically; it's built into the code base. There are more demos, but we're almost out of time. So that is the same thing: we kill a server, and a new one comes back up right away. There are more demos; again, I'm not going to show you them. These are some slides on all the magic that happens, which you can watch in the video. It's super fast. And you can actually even run existing Puppet code on this engine — that's a little side project that a colleague, a friend of mine, is working on. Future work: lots of stuff still to do. This is really meant to be a community project, not a product. So if you want to be involved, please send me some patches. You can tweet or blog about this, or hack on it. Hack on it, right? This is a marketing slide — again, I told you, it's a project. Now let me recap. This is Arthur Benjamin, as he recaps his pen to finalize. Here are some links — I've actually written four blog posts about this so far, so if you want to have a look online, here are some links. And yeah, if you have any questions, I'm here today and tomorrow morning, so please come find me. We also have an IRC channel. Thank you very much. Thank you very much. Next up is Neilen Marais, talking about the MeerKAT radio telescope. Testing, testing. Great. The slide actually says it's not just me, Neilen Marais — these are my colleagues, Mark and Trevor.
And I will be talking while you drive. OK, so we're doing it with this laptop, which is on our VPN. So this is not just talking about the radio telescope — I'll be showing you the radio telescope live. We're going to move stuff that's hundreds of kilometers away, live on screen. I was just lent a machine for tonight, just for safety. Could I see my stuff? Can you find them in the clouds, please? It's not that kind of telescope. Right, OK. What do I push to get to the next slide when I want to do that? I don't know where it is — I don't understand this Mac stuff. Right, OK. Hi. So why are we at this conference? Because we use Debian and derivatives — you can guess which one — quite a lot. We're from what we call the control and monitoring group. So as you might imagine, a telescope consists of a whole bunch of different things, and we make them work together. Next slide, please. OK, so this is not what our telescopes look like. These are optical telescopes: this is a very serious amateur model, and this is more of a professional science telescope. Next. So radio telescopes look like this. Ours look like this. They basically look like satellite dishes, but they're not listening to satellites — they're pointed at stars and other interesting astronomical objects. Ours look more like this. Oh, sorry. So this is the difference — why radio telescopes? This is an optical image, and this is a radio image of exactly the same piece of sky. Put them together and you can see all kinds of different things. I'll get to why in a moment. So, next slide. This guy is Sir Herschel — William Herschel; I forgot his name for a moment — and he did an interesting experiment. When he was alive, people barely knew about the spectrum: you could take light, put it through a prism, and get the colors.
And then what this guy did was quite interesting: he decided to measure the temperature of the colors. That's different from the color temperature you might be familiar with from your monitor. Anyway, he had thermometers, and he put a thermometer on each color. And then he saw: my goodness, the hottest color is invisible. Here, beyond red, there's some other light which is even hotter than the red light. And that's when infrared was discovered. OK, and then this guy: James Clerk Maxwell. He's a personal hero of mine — in my other life, I'm an electromagneticist. He wrote down the equations of electromagnetics, but mainly he realized that light and radio waves and all that stuff are all part of the same thing. So we have a spectrum, right? Here we have visible light. As the frequency goes higher, your wavelength gets shorter. So light has a very high frequency and very short wavelengths, which is why you can see fine details with it easily. And at the lower end, you have radio waves, which have low frequencies and very big wavelengths. So it's all part of electromagnetics. So this is the sun. The sun is also a star, and it's essentially a very hot ball of gas, hydrogen-fusion powered. And when you make a ball of gas very hot, it acts as a black-body radiator: it radiates depending on its temperature. That curve is for 5,800 Kelvin, which is about the sun's temperature, I think. It radiates across the whole electromagnetic spectrum. Most of the energy is concentrated around the visible and infrared range, but it's also got some radio energy, and that's what we look at with a radio telescope. OK, so you can't see the whole spectrum from the ground, because we have this wonderful thing called the atmosphere — we should be very thankful for it. The atmosphere blocks most of the spectrum: X-rays and gamma rays and stuff like that. That's very good — if it didn't, we would all be dead.
But visible light, somewhat surprisingly, does manage to come through the atmosphere — and that's why we evolved to see it: because it comes through the atmosphere. But there's also what we call the radio window, and you can see it's quite wide: radio waves over a fairly big frequency range can reach our beautiful telescopes. OK, so here's another example of the difference. This is an optical image, and the same thing in radio. What you can see with radio is the hydrogen gas, and our telescope can do this kind of observation. It's the 21-centimeter line — that's about 1.4-something gigahertz. So between the stars, there's always going to be hydrogen gas, right? You might know that all stars are made out of hydrogen; it's the original element, and gravity pulls it together. And you can tell cool things from the radio. Go on, go on — we've got to speed up. OK, so this is the first radio telescope, circa 1930 — those are Ford Model T wheels. Next slide. This is the second radio telescope, Reber's radio telescope, and most radio telescopes still look like this. Next slide. But as a reminder, the energy in the radio spectrum is very small, so you want to make your telescope rather big — lots of square meters, basically. The bigger you make your telescope, the more sensitive it is — in other words, it can pick up weaker signals — and the higher the resolution of the image you can form. So people built them bigger and bigger, and — oh, this one crashed; see, that's what a telescope crash looks like. So that's not such a good idea. Right, there we go: that's the James Bond telescope — there was a fight scene at this telescope. It's pretty freaking huge. And then this is a current Chinese telescope built into a natural bowl, which is astoundingly huge. But you can't really move them, so that's not very convenient. They're very useful instruments, even though you can't move them.
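As a quick sanity check on the numbers quoted above, the 21-centimeter hydrogen line really does sit at about 1.4 gigahertz; a one-line calculation (editor's addition, using the standard relation f = c/λ) confirms it:

```python
# Frequency of the 21 cm hydrogen line: f = c / wavelength.
c = 299792458.0         # speed of light, in m/s
wavelength = 0.21       # 21 centimeters, in meters
frequency_ghz = c / wavelength / 1e9
print(round(frequency_ghz, 2))   # ~1.43, the "1.4-something gigahertz"
```

(The precisely measured line is at 1420.4 MHz, i.e. a wavelength of about 21.1 cm.)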
So there's this other approach, interferometry, where you have a whole bunch of dishes working together, and your resolution is determined by the spacing between your dishes. That one was in the movie Contact — it's a big radio telescope in America, the EVLA. And here's an example: if your dishes are one kilometer apart, that's what the image looks like; if your dishes are 56 kilometers apart, much higher resolution. There's one more thing as well: as you build a dish bigger, it gets more expensive per unit of surface area, whereas as you use more dishes, your computation goes up by n squared. So as computing gets cheaper, we're using more and more dishes. So, we're building our telescope. Around 2000 there was this international project called the SKA, to build a really big telescope. And this is where our telescope is: kind of in the middle of the country, where there's nothing going on. This is a map of the population density of South Africa — we're right here. See, here it's red; here, there are no people. Why? Because people make radio interference, and we don't want that. OK, so this is KAT-7. We finished this one in about 2012; it was feature-complete. But it's really almost a toy — it's seven 12-meter dishes. But we did learn a lot, and we did invent some new techniques, some of which we're going to use going forward. And this is MeerKAT. MeerKAT is going to be 64 dishes; at the moment we have 16. The name is actually a joke: "meer" means "more" in Afrikaans, as in better — we got more money to build a bigger KAT telescope. Right, there we go. And it's awesome. A lot of the technologies for the SKA we develop in MeerKAT, next door. OK, so there are going to be 64 dishes; this is an aerial photo of where they'll be. Next one. It's got a lot of cool stuff — digitizers right at the dish. OK, so: go, go, go.
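The two scaling arguments in this part of the talk — resolution set by dish spacing, and correlator cost growing as n squared — can be put into numbers with a short sketch. This is an editor's addition; it uses the 21 cm wavelength and the baselines quoted in the talk as inputs, and the standard diffraction estimate θ ≈ λ/D:

```python
import math

WAVELENGTH = 0.21            # 21 cm hydrogen line, in meters

def resolution_arcsec(baseline_m):
    """Diffraction-limited resolution: roughly wavelength / baseline."""
    return math.degrees(WAVELENGTH / baseline_m) * 3600

print(round(resolution_arcsec(1000), 1))    # 1 km spacing:  ~43.3 arcsec
print(round(resolution_arcsec(56000), 2))   # 56 km spacing: ~0.77 arcsec

def baselines(n_dishes):
    """Dish pairs to correlate -- this is the n-squared compute cost."""
    return n_dishes * (n_dishes - 1) // 2

print(baselines(7))     # KAT-7:   21 pairs
print(baselines(64))    # MeerKAT: 2016 pairs
```

Making one dish bigger improves sensitivity but not this spacing-driven resolution, which is why adding dishes (and compute) won out as computing got cheaper.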
OK, we should switch right to the demo. OK, so now — I see it. So we have this nice GUI. This is showing us all the resources in our subarray — you can split the telescope up into different subsets of antennas. And if you show us the webcam, you can actually see them, right: these are the live web feeds from our site, which is about 500 kilometers away. And what we will do now for you, ladies and gentlemen — oh, you're running it. OK, so we have a scheduling system, and it's going to start a schedule block. Is it going to move them for us? I hope so; we haven't tried it. Well, what should happen is that you should actually be able to see the antennas move on the screen, because it's running the telescope. A minute ago? Oh, it moved a minute ago. Oh, sorry. Is that the signal display? Yes, we're using the live signal displays. You don't actually get the images live — you have to do lots of image processing to get those. But in the operator room these displays are up, and they tell us how much radio energy is being received. And this is a pointing display: the azimuth and the elevation of each antenna. And as you can see, they are moving, right? So currently they are moving. And if you go to the webcam, hopefully you should see all the antennas moving. Can you make it bigger? I can't see. Oh, here you go. Is that ours? That's ours. Great. Thank you. That is basically it, actually. Oh, so it didn't really move — that's too bad. Oh, well. In any case, this really was live, and you could have seen it move, but it didn't. We both run Debian, so we thank you — we're doing real science with it, and it's going to be awesome. Thank you. No, no, we scheduled it. Yes — we started an hour ahead of a real observation. I was actually a little bit upset because I hadn't read our email, but that's how it was.
You may have met this man before, doing a thing. The main joke is that I only know two words of Arabic, which were taught to me by two very nice Lebanese girls, and it's nothing like what you're thinking at all, unfortunately. And also, I haven't plugged in the video. So I was going to do mime this time, but I can see from the looks I'm getting that that's a bad idea. So nobody likes mime, apparently. Hands up, who likes mime? Natty likes mime. Not the MIME thing. Oh, look. Wow. Okay. Okay. Sean says he likes mime, so he's probably going to make me do mime now, which kind of sucks for me. Oh, I don't know. Can you hear me? I'm going to put up pictures of penguins with that. Okay. Can you hear me now? All right — where's my Verizon commercial? Okay. So, right. What's the point of this? There is a big collection of extensions to Emacs. For those of you who don't know, Emacs is a Lisp virtual machine which some people use to edit text, but which also does a whole bunch of other things. And there is one more package available since the last time I gave this demo, so it's growing quickly: 3,193 packages. It's a project driven more by enthusiasm than by quality assurance. A few of these packages are still maintained on a wiki somewhere, and it's automatically packaging the wiki. So if you're not scared yet, you should be. So Sean and I have been working on some tools to help bring some of these packages into Debian — not the ones that are maintained on the wiki — and to insert a little quality control into this cycle. So let's see if Sean — if my controller — has any instructions for me. Okay, no instructions yet. So here's a package which, for some reason, Sean wants to package for Debian. I read the README — and you probably can't read the README, because it's kind of small, so let me make it bigger — and I still have no idea how I'm going to demo this. "A generic completion method based on balance." Okay. So it's a thing, right?
And it does something about completion in Emacs, and we're going to package it for Debian. All right. So I'm mildly confused. Oh — click on the link. Click on the link to GitHub. All right, going back. Okay, here's someone using this thing: they're typing a K and it's showing some completions; if you hit S, you'll start completing those things. So it's giving you this global completion view. I'm totally making this up — I've never seen this screenshot before — but okay, something like that. All right. So it lets you jump around... well, you can read the IRC thing: it lets you jump around using the first letter of words. Hmm? Right, right — like the f and t commands in Vim, but on other lines. So those of you who know what Vim is might know what he's talking about. Okay, so shall we start? Laney, are you asserting you know what Vim is? Is that what you're... Good, good. I'm not following your instructions, Laney. I'm not falling for that again. No, you. You dance. Okay. Oops, he left. Yeah, but GitHub is a big place; I need more specific instructions here, for crying out loud. Well, you know, this is part of the charm of this. Wow. All right: abo-abo/avy. Very good. Deadline. So we decided to spend more time at the beginning talking about general things, and now we regret it. Okay, so the last stable release actually has a tag — yay — which is not guaranteed with this bunch. Oops. What? Okay, there we go. Yay. Okay, so now... All right, so I think we're going to fix this. So we may run out of time this time, but actually the interesting point about this is that Sean and I have written tools that take upstream metadata and use it for the Debian packaging. So upstream is maintaining some metadata here in the file. Of course, in the charming way of upstreams everywhere, they're maintaining the wrong metadata — but nonetheless that's okay, because that's fixed. So let's go back to our shell. Now, I think we're ready.
So the --pkg-emacsen flag says: use the team defaults. The team is more or less Sean and I at the moment, but we wish it were more people. That's not quite true, but... Okay, so it says: you know what, you shouldn't really just upload this thing that you generated with our experimental script — but we don't care, we're going to do it anyway. Okay, so some things scrolled by fast there. I don't know how we're doing for time — we probably have two minutes left or so. Okay, so one thing that is good: we ran the test suite. Now, to try it out, I need another Emacs instance — more Emacsen is better. Let's see — you can't see that really well, can you? Yes, he's telling me I need a new Emacs instance. Lag is awesome. No, it's installed — oh, I cheated; it's installed from the last demo. Sorry, let's cheat; we're short on time. Okay, so I could have installed this deb, and in fact I did on Tuesday, so this is the actual thing — but I'm sure it's all totally reproducible and exactly the same, right? Because how could it not be? Let's just spend another second here. I type a key, and then it's telling me to type L, and now if I type — oh, it's like a little menu thing, right? If I press A, it'll go to that line, and if I press J — oh, I get it; I finally understand how to use this thing. Okay, cool — so sorry if you didn't, but at least I figured it out now. And it actually seems like it could be useful. Now we have one minute, and Habibi is telling me to speed up. So one thing that is cool is that we have an actually correct-ish DEP-5 debian/copyright file, generated — it's got the upstream source. And also, well, debian/rules is pretty boring, but that's actually kind of the point: "dh --with elpa" just says use our helper tool, and packaging Emacs Lisp is totally trivial, and you should all do it. And I have 10 seconds or something? Yeah. Okay, let's go back to the penguins then. One, two, one. Penguins! All right. Next up is Mr. FAI, doing some FAI.
At this point I'd like to say hello to Louisa — she's not here, but she's been to DebConf before, so: hello. Yeah, it seems to be on. So, I will show you the FAI CD. There are several flavors of the CD. FAI is the Fully Automatic Installation: you can install virtual machines, bare metal, and chroot environments. If you go to the website, just download the ISO, or get one of the other ISOs — the difference is which packages are already included on the CD. And I'll now start a virtual machine with the CD. Then there's a short GRUB menu, and we select the client installation. This is a little bit secured: you have to type "fai install", because the whole disk will be wiped. And then we get a little menu where we can select which type of installation we want to do: a simple installation without any graphical desktop, or an XFCE or GNOME desktop. We can also install CentOS or Ubuntu — with those installations, the packages will be downloaded from the network. I'll just start the XFCE installation. And while the installation is running, we have some more information, and maybe you can read it. And yeah, I'm here for questions. Any questions? The question was: how do I generate or create the CD? Once I've set up the FAI server, I have two commands: one for creating the partial package mirror, and then the other command just takes the install environment, the partial mirror, and my configuration, and creates a bootable CD. So: two commands, that's it. We support the 32-bit and 64-bit Intel architectures currently. In the past, we also had users who installed on IBM mainframes, Itanium, and SPARC architectures — Solaris. Yeah, so this is also possible. Alpha? Alpha, yeah, we also had an Alpha user. Yeah, that was you. Oh, hello — hello, Alpha user. So now you can see the packages are already installed. At the end, the customization scripts were executed, and they were also fine. And the installation took 115 seconds. I'll now reboot.
And the GRUB menu now defaults to booting from the first disk partition — this is a normal GRUB environment — and now it starts the desktop. In the configuration, I said: please create a demo user with the secret password "fai". And then you have your brand-new machine. And I also said: oh, I want to have the GIMP installed during this installation — and here it is. That's it. More questions? Soft updates — yeah, normally this was an initial installation, but FAI can also do a soft update. So if you have installed your desktop or server once with FAI and you change some things in the configuration, we call this a soft update: we do not do a complete new installation, but we do an update of the package list, and the customization scripts may change things in /etc or elsewhere. So that's also possible — this is the configuration-management part. We mostly use shell scripts for our things, but if you have a CFEngine or Puppet environment, you can also use that with FAI. Or mgmt? Yeah, in the future, also mgmt. Sure. Five minutes. Questions — come on. How would you customize things? What I can show you is, for example, the disk configuration. There's a class feature: these are files that describe how the disk will be partitioned, and they look just like an fstab file, so it's very easy to write a file for a different partitioning scheme. And for the software package selection, we have a directory, and in this directory we have several files. Each file will be selected if it matches a FAI class name, and they look just like this: in the first line, we say which package tool we want to use — we also support RPM packages — and then we just give the list of the package names. You can also add "/testing" or use APT pinning. So you just write down the names of the packages that will be installed if the machine belongs to the class — DEBIAN, in this case. What else can I show you? We also have a monitoring tool.
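For readers following along, a package class file of the kind just described looks roughly like this. This is a hypothetical example in FAI's documented format — the class name and package list here are invented for illustration; the first line names the package tool, and a "/testing" suffix pulls a package from another release:

```
# package_config/DEMO -- selected when a machine belongs to class DEMO
PACKAGES install
xfce4
gimp
somepackage/testing
```
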
So if you install a lot of machines, it will look like this. There's also an animation, if I have my network running — not yet. So every machine connects to the server and says: hello, I'm booting up; I'm doing the partitioning part; I'm doing the package installation; I'm doing the customization part. And then you have a nice tool just to see — yeah, there it is. It would run like this, maybe not that fast. And so you can monitor the machines during their installation, and the color says: everything is fine, or we have minor or major problems with it. And then you have to look at the log files. The log files are all copied onto the server, so even if a machine doesn't manage to reboot, you have the log files on the server and can see which things went wrong. Okay, more questions? Okay, then: thank you very much. Thanks, everybody, for coming. The next lightning talks will be next year in Montreal. See you then. Bye.