Hi, good afternoon, everyone. My name's Will, and I'm here with my colleague Ayan from the NOC, and we're going to show you some funny photos of what we did to build the network here. So what's the network actually for? Here's our vision, our strategic goals, and all that blah. We wanted to give everyone gigabit, really, these days, and some decent Wi-Fi, which has worked pretty well. And, in line with our aims, properly provisioned, unhindered connectivity: filter-free, net neutral, all those things you hear in the papers these days. And we wanted to have a lot of fun while doing it, and maybe top up our tans. So how do we design this? Well, we have a team which goes and does various hacker camps, so we run different-sized networks almost every year these days. But this is the biggest one ever, obviously. We had 37 Datenklos, which is maybe 15 more than the previous time we've done an event, 47 fibre cables laid across the fields totalling 7.2 kilometres, 78 edge switches, and just over 100 wireless access points. We decided to run a collapsed core for this event to reduce interdependencies, so everything, as much as possible, goes back to a central location. Now, this site you see all around us has a great industrial history. That also means there are train tracks everywhere and unknown underground services. One of the problems that appeared during our planning for this event is that, without lots of special measures, we couldn't really go very deep into the soil, because there are unknown cables down there, and there are quite a lot of train tracks as well, which make running cables difficult. So we didn't do this. We did, however, manage to do this during the build-up: fibres do not generally survive being run over by a train, and this one is well and truly broken. That was due to an accident, basically. Anyway, no harm done there.
This all required quite a lot of planning, and we did it in a lot more depth than previously. We teamed up with Avok to do the planning in OpenStreetMap. We were then able to automate a lot of the cable selection, putting the right cables into the right places, using the OpenStreetMap API and some tools we built ourselves. A side effect of having all that information available is that it was publicly available to you too, so you can see where the Datenklos are if you look at the camp map URL. It also let us check the design properly and generally make sure we don't spend ages pulling a cable only to find it's five metres too short, which is really super frustrating. Here's a plan of the site extracted from that. You probably won't be able to see it because the site is quite large, but you can check it online; the green lines are our main cable routes across the site, and really we wanted to get to everywhere you occupied. And here's another diagram you won't be able to read, which is our physical infrastructure. What we actually do is run a lot of multi-core fibre cables connected together, which means we have light paths all the way from, say, an edge Datenklo over there back to the NOC data centre. So if there's a problem with power or equipment in a more peripheral location, it won't have as large an impact, which makes the network more reliable for you. Uplink: well, we had to get the internet here. We spent some time on this because, as you noticed travelling here, this place is quite remote and there isn't a great deal of infrastructure. But then we found that about two kilometres that way there's a large high-voltage line, and using our contacts we were able to get those guys to give us a 10-gig wave back to Berlin, which is great.
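As an aside, the "cable five metres too short" check we automated is at heart just a polyline-length computation over coordinates traced in OpenStreetMap. Here is a minimal sketch of the idea; the coordinates and the slack factor are hypothetical, and this uses a plain haversine formula rather than the OSM API tooling the talk refers to:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_length_m(points, slack=0.10):
    """Length of a (lat, lon) polyline plus a slack margin,
    so a pulled cable is never cut five metres too short."""
    total = sum(haversine_m(*a, *b) for a, b in zip(points, points[1:]))
    return total * (1 + slack)

# Hypothetical cable route traced in OpenStreetMap as (lat, lon) pairs:
route = [(53.0560, 13.1300), (53.0570, 13.1310), (53.0580, 13.1330)]
print(round(route_length_m(route), 1))  # metres of cable to reserve
```

A real version would pull the way geometry from the OpenStreetMap API and round up to the cable drum lengths actually in stock; the slack margin is the part that saves the re-pull.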
For those of you who've seen our presentations about Congress networks, that meant we could do much the same thing. So yes, there's a splice enclosure on a pylon; this is it, about two kilometres from the site. But it wasn't all that simple, because there's a river there and a lake on the other side, so we had to carry the fibre quite a long way. We used a lot of this lightweight fibre; two kilometres of it only weighs 20 kilos, so you can actually pick it up and carry it around. We had to cross the lake, and our experiments showed that the two-man canoe was actually better for unrolling the fibre than the rubber boat, because you can just put a stake through the spindle and unwind it. So canoes seem pretty good for that purpose. We had an interesting time; I can't actually use a canoe, so it took some time. Here's the fibre in the canoe. We obviously had to do some splicing out in the field too, so there was quite a lot of sitting around getting stung by insects and that kind of thing. A lot of you know about the rodent damage; in fact, I have some of the offending pieces of fibre here if anyone wants to come and have a look at them. This got quite some coverage, and lots of people found it quite amusing. What we actually did to mitigate it was to go along the stretch where these animals live and hang the fibre up higher in the trees, so we weren't really obstructing where they go. But yes, it's pretty easy for this thin fibre to be damaged by rodent activity. So, back to the technical stuff. Layer-3 design: a BGP edge with a pair of Juniper MX104s, one in Berlin and one here in the NOC DC in the Ziegeleipark. And then we dusted off a very venerable Force10 E600 that I think was last used at 28C3 or 29C3, but it did a good job there and was actually just what we needed here. So why not?
We did the usual: CCC address space for IPv4 plus a temporary /16 from the RIPE NCC, and of course v6, pretty much the same as last time. Just a few photos from the NOC DC; we actually even managed to label stuff this time. Edge: we rented a lot of HP ProCurve 2530s with 24 gigabit ports on the front, and uplinked those into the core with 2 × 1 GigE. These are single-fibre optics, so they use two different wavelengths on the same fibre, which reduces the amount of fibre we need to pull around the fields and also makes troubleshooting easier. For some locations we wanted to provide 10 GigE uplinks, so we used some Juniper switches there. And we were testing some other hardware as well: a Cumulus gigabit PoE switch with 10-gig uplinks, and some Huawei switches. So although the bulk of the equipment was the ProCurves, we've been using some other stuff too. Data centre: it's often a pain doing this. It's been damned hot here, as many of you know, and it's been that way for the past few weeks. We have a DC container with the NOC and VOC services, and the uplink actually terminates in there. We had lots and lots of aircon problems; in fact, we started off with two aircon units and ended up with seven, which is more than the number of switches we had in the place. But at least it stayed cool in there. I'll hand over to Ayan for the Wi-Fi. So, Wi-Fi. We've been using a similar setup to the last Congress: dual Aruba controllers running in a high-availability setup. We deployed over 100 802.11n and 802.11ac access points, which averages out at about 1.5 access points per Datenklo. We mount one access point in the Datenklo itself, and then we have another, more outdoor-suited IP65 access point in the neighbourhood to cover the edges.
Because what we see is that around each Datenklo we have around 30 metres of good coverage, and then we still need to fill in the other gaps, so we used a couple of outdoor access points. You might have seen them hanging around in several places; they look a little bit like security cameras, the big white dome access points. And we deployed multiple access points in the track tents and workshop tents like this one to have more capacity, because in a room like this there are, what, 500 people? One access point doesn't cut it, so you need multiple access points to have enough capacity. We had a peak of 2,300 associated clients and did around 1.25 gigabits, that's RX and TX aggregated. We've seen around 10,000 unique devices over the last couple of days; that's not concurrently online on the network. Like at Congress, we were running 802.1X again, so people could use a random username and password to log into the network. On the left side we have a nice top 11, and you can see why we made it a top 11. Device types on the network: it's mostly smartphones; Android and iOS are in the top three, so a large part of the network is smartphones. Other than that we have, of course, Linux devices, and, as you'd expect at a CCC event, Windows usage is very, very low. So that's good. Regarding the types of devices that connected to the network: more than 50% were actually 5 GHz capable, which is pretty good. At the last Congress I think it was around 65%, but it's good that more than 50% of devices here are 5 GHz capable. We're also seeing quite a lot of the newer-generation 802.11ac devices: that's already 21%. And regarding use of the SSIDs: about 42% of the devices were on 802.1X.
That's a bit less than at the last Congress, but still pretty okay. So if you want some encryption on the Wi-Fi layer, use 802.1X. More pretty graphs. This one plots the number of associations per field, per region. I'm not sure the mouse pointer works, but you can see over here that these peaks in the green line are actually people getting in and out of the track tents. So we can pretty much see which talks are popular: if there are lots of people in the tent, it must be good. And at some point, during the night, we see people moving to the east side of the camp, which is where the Bear Village is; people are going to the bars. The red line here is the central plaza, where the bars also are, so it's pretty obvious where people are going. Then the lightning storm yesterday: we obviously have a lot of data missing there, because we had to power down the DC when all the generators got turned off. But it's interesting to see that people were getting into the big track tents, that's the blue line here, and moving to the central plaza, the red line. So it's interesting to see what happens in a situation like that. Challenges: we had to make a bit of a trade-off between coverage, capacity, and performance, because it's a very open field and there's not a lot of attenuation, so there's quite a large chance that access points end up interfering on the same channel. We don't want to mount the access points too high, but mounting them lower can mean less coverage, so we had to make those trade-offs to get something working. And we did end up with quite a lot of high channel utilisation in some areas.
Channel utilisation is the fraction of time the radio in an access point is busy receiving and sending traffic. As the channel gets more and more loaded, everything slows down, and at some point it can break to the point where you don't even get an association anymore. We had some radios averaging around 65% channel utilisation, which is very, very high, and peaking at 95%. Another issue we faced was that there were a lot of rogue access points around, and that caused roaming issues for some devices, because a device sees so many BSSIDs. If you're in the central plaza, for example, you can see a huge number of BSSIDs around you, and at some point your Wi-Fi device has trouble selecting the correct network because it's receiving so many beacons and so many probe responses. In the future we'd like to do more performance monitoring using Wi-Fi probes. We're looking into a solution so we can test the performance of the Wi-Fi network independently of the Wi-Fi infrastructure itself: a couple of nodes connected to the network doing periodic speed tests and latency tests, so we can see better in which areas the Wi-Fi is bad. Another problem we faced was that the space blankets that were put around the Datenklos actually caused a 20 dB drop in signal, which is very significant, so at some point we had to remove the space blankets again to improve the signal on the field. And here's another graph showing the five busiest access points by channel utilisation in 5 GHz. You can see there are access points peaking up to 90% channel utilisation even in the 5 GHz band: that was near Datenklo Utrecht, Hamburg, and, well, one of the access points in this track tent as well. Oh, and we had another tweet today which was pretty funny.
Somebody's geolocation was a bit fucked up. This was because most of the access points here had been used at a conference before, Hack in the Box at the Beurs van Berlage in Amsterdam. So that's why his location showed up as the Beurs van Berlage, and there are some sex shops around there. Do you want to take this one? So, I don't think we actually produced any "use more bandwidth" signs this time around. The uplink did get used quite well, peaking at 7.5 gigs out, so we're pretty happy with that. Next time around I guess we'll need more, because we always need more. We also did some instrumentation of what happened inside the camp, and saw a maximum backplane load on the E600 of 22.5 gigabits, so there's quite some traffic flowing around the site as well, which is nice. We made a new dashboard, which you can look at, and we're always eager to add more stats to it. We added some temperature sensors from the ICMP village and some other stuff, so more to come there. Here I've got a screenshot: number of wireless users, speed, and traffic used by the visitors. All very shiny stuff. Ticketing: we used OTRS for the pre-event, which is kind of a historical thing, and then, as for years now, we used Roundup on site, because it's really simple and people can get started with it straight away. We only had 51 tickets come through, which I think is quite low, actually. So thanks very much to the NOC help desk for fielding the end-user queries and doing all that unplugging and stuff. You may have noticed we had some lights on the Datenklos. These were originally from our own 2013 event, but they're actually very useful for us for diagnosing network problems. And they led to this interesting thread: somebody at Datenklo Dublin was complaining that the lights on the LED poles there were too bright.
They asked for them to be switched off, and, well, at least our Belgian colleague here replied: "stars are down, team has been dispatched, ETA 10 light years." So who's the team actually behind this? It's quite a lot of people: more than 30 people in seven subteams, some of them quite young and some of them rather old kids like me. We really started on site two weeks and one day ago today; I actually arrived here two weeks ago to start on the uplink. And, as I said before, the info help desk dealt with a lot of our end-user queries and that sort of stuff. A lot of the equipment and services we use for this event can't really be bought commercially: either it costs a great deal at commercial rates, or it's short-term stuff, or you need to borrow equipment and people just don't lend their stuff out. So we really rely on a lot of people who give us stuff for free. So really, really, thanks to our uplinks, KPN Stratto and CIS11; to e.discom, who supplied the 10-GigE wave to Berlin; to ECIX and Speedbone for housing the Berlin side of the operation; and for quite a lot of hardware from Bibliocumulus, CureLink, Aruba, and FlexOptix. So really, thanks to those guys for lending us quite a lot of equipment, and yes, we will send it back. So, goodbye. The network in the camping fields will be torn down starting at about 19:00 today, after the closing presentation, and we'll have everything pretty much gone by 10 a.m. on Tuesday. So please be kind to our fibres as you see them in the fields, because we want to roll them up and use them at the next event. Thank you very much. Yeah, I get to do this, yeah. We probably have time for a couple of questions; if anyone has some, come up to the mic. Is there a mic? Oh yes, the mics are here and here. I can't see anything up here. First up on my left.
You mentioned that you got the fibre to the electricity pylon, but was the fibre to Berlin already part of a network, or did you need to add it? Well, there's actually an electricity substation really near Zehdenick, just down there. So end to end, the actual photons, as it were, go as far as Zehdenick and into the DWDM equipment there. The physical spliced-through piece of fibre is about six kilometres to the camp, and from there it's transported onward; it's really too far to light directly. So it's not a physical pair of fibres all the way to Berlin; it goes into an optical network, which is pretty standard, really. Do I see any more questions? No? Then thanks very much. I'll pass over to the VOC. I don't know how to use such a device with an apple on it; if it had a penguin, I might be able to get it running. Yeah, that looks pretty nice. So it's really, really cool to see this tent filled with people wanting to see what we built and how we built it. Oh, it's running automatically. Interesting. Oh, yours is there. And maybe composite. Thank you. So it's nice to see the tent filled like this. But as you may know, there are a lot of people on the campsite tearing down their villages, or already on their way home, and some may not even have been able to come here in the first place. And we want the great experience you've had here to be shared with them, at least a little bit. So: we are the VOC, and we provide recording and streaming for the lecture halls, and a little bit for the music on the field. You may have seen our small and larger cats standing all around here. The VOC here is not alone: we don't have enough hardware to do an event as big as this, or like the Congress, and we always have helpers. This year we had the AGS and a guy from iSystems here, who provided hardware and also a lot of support, and each of them ran one of the big tents.
And we'd really like to say thank you to them, because without them this would not have been possible at all. So, as you might have heard, the action is in tents here at the camp, and tents are a little more complicated than a fully operational congress centre. The day we arrived here and tried to set up our stuff, we had a tent with no walls and no floor in it. So our guys started climbing around, hanging up all the screens and the beamers. And you may have noticed that the tents stayed in skeleton form for quite a bit of time, and that was not really planned. Between the build-up team finishing the tents and us setting up the audio and video equipment there was about 0.125 days, so only a few hours, and it was really, really hard to get everything done before the opening event. But we managed, and it was really hard on the team. So please give the team, who aren't here on the stage, a really big round of applause, because they really worked hard to get that done. We have three main stages here: the two big tents, north and south. Their names are a bit complicated; they switched places sometimes and also got new names, so there was a little confusion, but in the end we managed. And we also had the bar stage in the Berlin Village, which had a really interesting programme. Those were the three main stages we were working on. Additionally, we had our container, one of the OC's containers; the NOC had a similar one, you can see it on the screen. But the POC did it right: they had a big tent and a lot of space to party. So next year we'll try to learn from them. No, in four years we'll try to learn from the POC. As you can see, we have a lot of hardware there, but the main devices are standing in the DC. What we have in our container are these really nice tally lights.
They don't only show the time: whenever something happens, like a talk starting or finishing, or one of our encoder processes stopping, they blink and display a message about what's happening, and they're controlled over the air, so we can carry them to the different locations we go and always be notified. We also have this really nice blinking light underneath, which actually makes a sound when it turns because it's that crappy, so you can hear when something goes wrong, which is also really useful. What we have in the DC are two new devices we bought at the beginning of the year. We call them the Minions because they're really small, about 10 centimetres wide, but they have really, really powerful cores: four i7 cores at 3.9 GHz. They did the whole encoding of the master HD files for the whole campsite, and at least half of the talks they even did twice, because we missed something. Those are the two devices actually producing the files you're downloading and viewing in the browser, at least most of them. They're really nice devices, and I really like them because they're so small you can carry them around with you, no problem. We also have some interesting gear up under the ceiling of one of the buildings in this direction, the one with the high chimney: Rohde & Schwarz transmitter equipment for FM radio and DVB-T, and DVB-T2, I think. These are really interesting devices and we've played with them quite a lot. You might even have been able to listen to the talks going on here on the special radio while you were driving around Zehdenick doing the shopping for your village. The DVB-T also had a lot of special features: we had EPG running after some time, and even managed to get... oh, is there a slide missing? I think there's a slide missing. We even had teletext working, with a Twitter feed on teletext and such.
But we weren't able to finish that as nicely as we wanted. We wanted you to be able to enter your own teletext pages and have them sent out via DVB-T, and we're really looking forward to getting that working at the next Congress, so everyone can have their own teletext page then. Another thing: FM transmission is pretty old, and we like new stuff, so we had guys from the Open Digital Radio project bring DAB+ transmitters, and they also managed to stream slideshow versions of the video via DAB+. There aren't that many radios out there that can receive that, but they brought us one, and it actually seemed to work pretty well. So maybe it's really the future, I think. Maybe-ish. There were also some special projects we worked on. When the thunderstorm was announced, the CERT asked us if we could do a video explaining to the nerds how to secure their tents, and as you may have seen, there were some tents really in need of a little help there. So we produced that video, cut it, and uploaded it to our storage; we have a mirror of video files on the campsite. Then we went away, and when we came back and looked at our graphs, they looked like this. What you see there is about 1.2 terabytes of traffic produced by this single file, and we were maxing out our 1 GE link. It was peaking around 1.2 gigabits per second for an hour or two, because everyone on the campsite was viewing that one video, and looking at the connection stats, we saw something like 50,000 people watching it. It's crazy. We're not really sure; it might have been a software bug on someone's notebook downloading the file over and over again. But even then, the numbers are pretty awesome. As I'm talking about stats: we also have a nice dashboard, actually the same technology as the NOC's, but ours is not public, I think.
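A quick sanity check on those tent-video numbers; the stream duration here is an assumption (the talk only says "an hour or two"), the other figures are as quoted:

```python
# Figures quoted in the talk: ~1.2 Gbit/s peak, ~1.2 TB total, ~50,000 viewers.
gbit_s = 1.2        # sustained rate on the mirror link
hours = 2.2         # assumed duration, chosen to see if the total is plausible
viewers = 50_000    # connections counted in the stats

terabytes = gbit_s / 8 * 3600 * hours / 1000   # Gbit/s -> GByte/s -> GByte -> TByte
per_viewer_mb = terabytes * 1e6 / viewers      # TB -> MB, spread over all viewers
print(round(terabytes, 2), round(per_viewer_mb, 1))
```

So roughly two hours at 1.2 Gbit/s does land near 1.2 TB, and it works out to only about 24 MB per counted connection, which fits the suspicion above that many of the 50,000 were partial views or a client retrying rather than full downloads.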
As you can see in the top row, we peaked at about 2 gigabits per second of streaming: that's all the streaming relays added up, the local relay here on the campsite as well as the ones on the internet. That's actually not that much; at the Congress we were around 17 gigabits. But hey, okay, it's about the sunshine and camping, and I know you're likely not to watch the streams here. But there were great talks, so maybe take a look at the recordings. We peaked at around 600 viewers across all stages, but the biggest stage was actually the bar stage, with about 400 viewers watching a podcast there, more than the peak at the tents. So it seems the bar stage was more interesting. Looking at the setup: we had two relays on the internet and one here on the campsite, and we did split routing so that people watching on the campsite got their traffic from the relay here, and people watching from the internet got theirs from there. This year we used HTTP all the way down, so all streams were delivered via HTTP. That enabled us to turn on TLS and at least give everybody the option of watching the streams over an encrypted connection, because encrypting everything is the right thing to do. It also meant we didn't need Flash anymore, so we scrapped that completely. And because we know the hamsters around here, we decided to run everything required for our system on site: we did all transcoding and all release encoding here, and it turned out to be a good idea. This was a bit different from what we did at Congress, so doing everything here was new for us. We also tried to do multicast. Well, we planned to try. The problem with multicast is that you send out every packet once, and if a device doesn't receive it properly, well, it's gone.
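To illustrate the kind of repair that forward error correction buys you when there are no retransmissions: the simplest possible scheme adds one XOR parity packet per group of packets, which lets a receiver rebuild any single lost packet in the group. This is purely an illustrative sketch, not the VOC's DVB-T2 code, and real stream FEC would use a stronger code such as Reed-Solomon or Raptor:

```python
def xor_parity(packets):
    """Parity packet for a group: byte-wise XOR of all payloads
    (equal-length packets assumed)."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Rebuild the single missing packet in a group; the loss is
    marked with None in the received list."""
    missing = received.index(None)
    rec = bytearray(parity)
    for j, p in enumerate(received):
        if j != missing:
            for i, b in enumerate(p):
                rec[i] ^= b
    return bytes(rec)

group = [b"abcd", b"efgh", b"ijkl"]   # three payload packets
parity = xor_parity(group)            # one extra packet on the wire
print(recover([group[0], None, group[2]], parity))  # the lost middle packet
```

The trade-off is overhead versus resilience: one parity packet per group of k payloads costs 1/k extra bandwidth and repairs at most one loss per group, which is why broadcast systems reach for Reed-Solomon or fountain codes instead.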
So we need some kind of forward error correction on our streams, inside the video stream for example. We have the code to do that, because we use it for DVB-T2, but it seems there's no device and no program out there able to play it back: VLC doesn't do it, and FFmpeg doesn't do it. So if you're working on a media player and want to help implement forward error correction in the player, so that next year we can use multicast, please talk to us; we would really like some help there. It isn't actually necessary on the campsite, because we had DVB-T here, but at the Congress it might be interesting, and I think the NOC would like to see some multicast traffic too, wouldn't you? Yeah, multicast. We also announced a YOLO stream, so that everyone on the campsite with anything that makes any kind of noise or sound or music could share it with the internet. We didn't really get around to implementing it, but we'll try to do it at the Congress, so be prepared: if you have anything that makes any kind of sound or music, or want to share anything, be prepared to stream it to an Icecast gateway at the Congress. We'll have a gateway there for you, so everyone on the internet and at the Congress can listen to what you're producing. And that's all from the... oh, there's a screenshot of the teletext, actually. So that's all we have to say. Thank you to the cats, and thank you to all the people. And before you leave: we have our local mirror here with recordings of all the talks, connected via 10 GE to the Datenklos, so you can start your rsync now and take all the recordings home with you. See you soon. So, I should talk to you about the power here at the camp. It was quite a tricky task. We started way back in March, well before the start date set here, and talked to the big grid company here, E.DIS.
We asked whether it was possible to place some transformers outside and connect, but the capacity of the lines here doesn't work for us: you can run a museum off them, but you cannot get any more power. So we started here on site on the 31st, had our material delivered, and built the backstage. In total we've installed about 30 kilometres of cable and 224 power distribution boxes, and that's only the ones with CEE connectors; we have about 500 boxes with normal sockets out on the whole campsite to deliver all the power you needed. We planned for much more power than was really used. That's the plan of the whole campsite; it was also on the public wiki. The company doing the sanitary installation here told us they alone would use about 400 kilowatts the whole time. I don't see it, but then again, I think all the showers would be cold otherwise; I don't know. In total we have seven generators here and five connection boxes; instead of a museum building, they also got our connection boxes. And I think the power network worked out; we didn't have many problems. We had one generator failure this morning at the shower spot and the disco, and we had one burning box, a burning RCD in it, on, I think, day minus one. And we had a defective power line on day three. I'll come to the statistics later, because this computer doesn't have any internet; we had some nice graphics, which we'll see later from the POC, so thanks to the POC for doing that. And we had a whole bunch of angels helping us, going around checking that the cables aren't getting too hot and clicking the RCDs back in when you've tripped them. And we had a lot of rainproofing in the installation; I haven't had to think about it so much since the thunderstorms, when we took all the generators out and plugged them back in. Mainly everything is working. Some light installations don't work, but that's normal; the whole rest was working fine.
So, do we get an image? Yeah. That's the portal the POC built for us: you see a live screen of all the power being drawn from the generators. I think this one is out right now, but the rest is live data. And you also have, yep, these live statistics. We had one big generator standing at the NOC DC that could switch over to another generator if we had a power failure, but I don't think we had any failures at the NOC DC, only the big one during the thunderstorm. And from the tents I haven't heard about any failures either, so in total everything worked fine. I'd like to show you the power consumption of the whole network, but because of the big power failure today and the generators not working, I couldn't prepare it, so I don't think I'll find it; you can talk to the POC, they should have it. And that was it from my side. I hope it worked for you and you didn't have too many power failures. If you have any questions, you can ask them. Hi, good afternoon. Good afternoon. Good evening, ladies and gentlemen. I have a question: why is there a cat? Why is the cat standing there? Oh, okay, that's an interesting question. Some time ago, when the VOC started to record lectures, we had everything set up, all cameras running, all streams running. Everything looked fine and the hall was empty, so we said, okay, everything's fine, we can go and relax. But then the hall started to fill, people were standing on the stage, and someone asked: why is the stage still empty on the stream? The thing was that a system in between had failed in a way that repeated the same frame over and over again, and we didn't notice until the talk started. So we decided we'd have something or somebody on the stage who's moving constantly. But we weren't going to get an angel to dance on the stage all day, so we decided to get some cats that move all the time.
And this is our test that our stream and our setup are working: if the cats don't move, either the battery is empty or the stream is dead. Actually, we really like them, so they travel with us to every event and we take good care of them. We also have a big one at the cube in the office — the mother of the small ones in the tents, maybe.

I'm interested in the reasoning about medium-voltage transformers versus generators. Was it not an option to install transformers and connect to the public grid?

Well, first this one? Yes, it would be possible. As Zinox said, there was a big transformer station near Selenik, but it would have cost us more money than we had — it would only pay off if we ran the whole camp for about one to one and a half months.

And just another question: why were the showers electrically heated instead of fuel-heated?

I would really like to say why, but I don't know. I was handed a piece of paper that said 401.1 kW for the whole shower installation, and I said, okay.

Do you know how much fuel was consumed by the generators? How many liters of diesel did the camp need?

We had the tanks topped up for the last time today, and it's about 30,000 liters.

Okay, thanks. What percentage of the installed capacity was actually utilized by the camp?

Could you repeat that? — Well, you said you have lots of generators and too much capacity, that we didn't use enough power. So how little did we use?

Here at the camp we calculated with 200 watts per person, which for about 4,500 visitors is about 0.9 megawatts. We had the 400 kW for the sanitary installation, plus the lighting and other things, so we planned for about 1.8 to 2.5 megawatts. And I think the highest peak so far was about 500 kW.

Why is the mains frequency only 44 Hz instead of 50?
That's across the whole camp, so it's possible that some generators run a little lower and some a little higher — we'd have to check all the generators. I don't know where the bug is; here we read 49.98. Possibly it's a broken one that isn't connected right now. Thank you.

I was wondering — I didn't see anyone from the WOC, the water operation center. Can any of you give us stats on water consumption or supplies?

Maybe they're still drinking. They're drinking the black water, right? I think that yesterday, or maybe two days ago, the water tank was empty and we had to get a new one — so we've used really a lot of water.

First, thanks for everything; it worked perfectly, as intended. Big up, really cool. I'd be interested in the costs, actually. I'm not sure if you're allowed to talk about it, and I don't want to see exact numbers. I just want to know roughly — is the POC one third of the whole budget, the WOC some other share — you know, just the technical costs. But maybe it's a secret.

I can talk about this from the NOC point of view. As we already mentioned, we get a lot of our equipment for free from people, or there's not really a market to buy the things we need. So most of our expenditure is really on ancillaries — how many cable ties and that kind of stuff. And all of our work is done by volunteers, just like on the other teams. So it turns out the network is not that expensive.

Well, you saw the minions, the small boxes we use to encode the video. We've had those new since the last Congress, but they're not only for the camp here: we'll use them at a lot of the CCC's smaller conferences and at assorted conferences and meetups after this. So even if they count toward this budget — I don't really know which budget they're charged to — they're not gone at the end of the camp.
We'll use them for years to come to do what we're doing, without any charge to the small conferences.

From my side, I can't say — I don't know the whole budget here. I think the electrical installation is really a big part of the whole budget, because of how much diesel and oil the generators here use. But you need that headroom, because you don't know how many people you'll get — some villages said they needed 63 amps just for themselves, and they're not using it right now. So I think it's quite a big part, but yeah.

Are there any more questions? Well, then give a warm applause to all the people working here, and all the angels helping out. A really, really big thank you for all the figures, all the interesting stuff and interesting information. This was the infrastructure review. And so please, also a really big round of applause for Will, the mastermind, and all the angels.