amazing time at SHA. Let's hear it. So, how was your time at SHA? Very good. SHA is an event which is organized by volunteers and run by a lot of angels helping everywhere. So, before we start with this technical infrastructure review, please give a big round of applause, not only to thank all the angels, but to motivate them for the next days, for the teardown, in hail or in rain. One big applause.

And to give you some insight into what has happened at this event regarding all the infrastructure that was provided here, we have this talk. It's tradition to get an overview from the various teams. This year we have the — and now it gets difficult, because I'd have to speak Dutch — ...that was completely wrong. So, yeah, sorry: German. Now it gets easier. We have the ChaosVermittlung. For everybody who doesn't know what that is: it's the field phones which you found everywhere, the landlines which you could use to call each other. We have the NOC, our network operation center, the POC, the phone operation center, and Power. So, all the things you use all the time and which make this event much more amazing, much more comfortable, and give you the possibility to relax in your tent with your cooling gear and stuff, hang out, surf the web, do whatever you wanted and call each other all the time. So, let's start with our first group. One name I can't pronounce right, so I won't try at all. Bricks, I think it's you.

So, let's start with production house. We're actually two teams, team Productiehuis from SHA 2017 and C3VOC, which in total was around 12 people of core team. We did the AV here on stage and the lighting and the LED walls and the sound. We did FM radio across the road from here, and we of course did the video recording and streaming. We had four stages, and we thought we could do with two cameras, video mixers and audio mixers each, and that we could get enough angels to do that. That was a little doubtful, but in the end we had 46 angels who mostly did video mixing and audio mixing. We had three remote angels, in Berlin and perhaps somewhere else, I don't remember, and they did a lot of the editing of the talks after they were recorded and before they were released on media.ccc.de and on YouTube.

I think I can claim to have the most LEDs of everybody else here on this field. I did check it with team badge, and they admitted that I might have a million or more than them. The two screens here in PA and in NO are about half a megapixel each, and the one in RE is 2.3 megapixels, so that makes a total of 3.3 million pixels, aka 3.3 million LEDs. At the last check I did we had released 139 videos, which means that we're almost 90% complete on all the talks of this event that we're allowed to record. The most watched video on YouTube has a thousand views, which was the talk on day one from Bill Binney about what the NSA is doing to our privacy. The same talk has 9,700 views on media.ccc.de.

During the event we noticed small audio issues left and right, a little hissing and a little humming noise. One of our team fixed that in the ffmpeg source and forgot that it was on a new major release, so it was a little exciting putting that live, going "okay, we'll see" — but it worked fine, it's actually working. And the highest number of people who watched the stream at a certain point was 125 viewers, which was for the car hacking talk yesterday, I believe.
So, what went right? Once we got started, the day-to-day work worked fine. We had a few problems with the LED walls and their resolution; that was resolved on day zero, fortunately just in time for the event to start. We had some little audio quality problems, which we fixed along the way, and we hired in a little external audio processing. On day four we were at the point where we could record a talk and, before the end of the next talk, have it released on YouTube and on media.ccc.de — so we were almost real-time.

And now the fun stuff: what went wrong. That is moiré, because the cameras were picking up the individual pixels of the LED wall, so you get this interference pattern between the pixels of the camera sensor and the pixels of the LED wall. This is a photo of an LED display shown back on the LED display, so the effect is slightly overblown, but this causes a lot of change in the picture from frame to frame, which means that for the MPEG encoder each new frame is almost completely new, so you get huge amounts of bandwidth. In the end, as someone put it, we didn't have enough seconds to put all the megabits in. The encoder machines all crashed, all hung; they couldn't cope with it. So we had to change the way we recorded the video: no cameras zoomed in on the LED screen, only do a total view of the whole stage from far away, and don't move the cameras — because moving cameras changes the interference pattern, which adds to the interference, which makes everything blow up even more.

Because we had so many pixels changing every frame, we think we also found a bug in one of the Linux MPEG decoders: on one of our team members' machines — I don't know which player it was — there was a colored band at the bottom of every single video that had the interference problem with a lot of megabits. We dialed down the bandwidth that we use on the videos and that problem was solved. It's not in VLC but in some other, different player. We also had a problem we fixed: a physical bug in the power supply of a switch — of a computer — which resulted in one of our encoder cubes crashing on day one. Fortunately we had a spare power supply, so it was fixed in, I don't know, five minutes or so, but there was an actual, physical bug in the fan, and it died — both of them.

The top graph is the number of simultaneous viewers across all the streams we had, so radio and four stages, and the bottom graph is the CPU load on all the encoders. You can see there are some peaks, and those are talks where a lot changed on stage. And that's it for team production house. Thank you very much.
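To make the moiré problem concrete, here is a back-of-envelope model of why inter-frame compression stops helping once nearly every pixel changes between frames. The resolution and per-pixel bit costs are illustrative assumptions, not measurements from the event.

```python
# Back-of-envelope: estimated bitrate when a fraction of the picture changes per frame.
# All numbers are illustrative assumptions, not measurements from the event.

WIDTH, HEIGHT, FPS = 1920, 1080, 25
BITS_PER_PIXEL_INTRA = 1.0   # assumed cost of coding a pixel "from scratch"
BITS_PER_PIXEL_INTER = 0.05  # assumed cost when it can be predicted from the previous frame

def bitrate_mbps(changed_fraction: float) -> float:
    """Estimated video bitrate for a given fraction of pixels changing each frame."""
    pixels = WIDTH * HEIGHT
    bits_per_frame = pixels * (
        changed_fraction * BITS_PER_PIXEL_INTRA
        + (1 - changed_fraction) * BITS_PER_PIXEL_INTER
    )
    return bits_per_frame * FPS / 1e6

for fraction in (0.05, 0.3, 0.95):   # calm talking head vs. moiré flicker on the LED wall
    print(f"{fraction:>4.0%} of pixels changing -> ~{bitrate_mbps(fraction):6.1f} Mbit/s")
```

With only a few percent of the picture changing, the estimate stays in the single-digit Mbit/s range; with moiré flicker touching almost every pixel it approaches all-intra bitrates, which is roughly the effect that overwhelmed the encoders.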
So next up is ChaosVermittlung — do you want to plug your stuff in? So first, thank you very much to the team. Okay, again you're making fun of me, that's not right... that's okay. Yeah, we learned something: one thing we definitely learned is that LED walls make stuff harder if you want to record it. I think this was one of the big lessons we took away — LED walls might sound like a good idea, but aren't so much. So yeah, we have the perfect person to fix everything, and in a minute we come to the next team, the team ChaosVermittlung. Oh, okay, we combine that? Okay, I didn't have that information, but fine. So we will not only hear from ChaosVermittlung but also from the POC, the phone operation center. At this event it was, I think, what you called the micro POC, right? Yes. And as far as I know you're quite new — you've been doing it for a few events now — but at other events you have perhaps met Eventphone, and these are not the same guys; it's a new team. And yeah, we'll hear how it went for them at this event. So, team POC.

Yes, so, some key facts on what we did. There's a lot of cabling already — oh really? — okay. There's really a large amount of cable already in the ground across the whole field, so we just had to add roughly three and a half kilometers of extra cabling to deploy over 28 base stations for the DECT network. We did it with a hub-and-spoke model, so there was not one single point where all of the DECT base stations connected, but rather some hubs where we connected them, so we didn't have to have like 20 kilometers of cable — just three and a half. And we had the system up and running, at least in the core area, on day minus four, and it ran without an interruption, even where the power failed, because we have batteries in our stuff. So it worked quite well — even when the TETRA failed.

So basically this is our plan, what we did. There you can see all your fields and all the cabling and all the stuff, and the blue stuff — you can't really see it on the LED walls — those are our phone PBXs, the switches where all the DECT base stations connect. So: a lot of base stations, a lot of fancy stuff; it worked.

Some facts about the usage: we had over 750 registered DECT handsets, you registered over 180 SIP accounts, which were actually used quite a lot, and we provided over 35 desk phones for the stages, for the info desk, for wherever you needed a desk phone — some of them via SIP, some of them directly connected to the PBX, depending on how much cable we needed to get them running. The SIP phones were provisioned by us on day zero, and the last one got up on day two, because it took that long to get our VLANs running and so on, but they worked.

Our phones — some of the fun facts: you basically called the whole world. We had calls to India, the Vatican, China and even North Korea. And we had some nasty copper squirrels cutting the cables connecting the DECT base stations to our main PBXs, so we did a search and recovery, and it took us under two hours to get the base station running again. There were nearly 14 hours of incoming calls and over 150 hours of outgoing calls, and you basically called Germany, the Netherlands, Austria — how high a country appears in the list is how much you called it. And the longest single call was to Wallis and Futuna. I didn't know where that is, so we googled it: it's somewhere in the Pacific. The call was over two and a half hours.
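The hub-and-spoke argument above — connect each base station to a nearby hub instead of home-running everything to one PBX — can be illustrated with a toy calculation. The coordinates, hub positions and field size below are made up; only the shape of the comparison matters.

```python
import math
import random

# Toy model with made-up coordinates -- not the actual SHA2017 field layout.
random.seed(1)
FIELD = 1000  # field is modelled as a square of this many metres
stations = [(random.uniform(0, FIELD), random.uniform(0, FIELD)) for _ in range(28)]
central_pbx = (500, 500)
hubs = [(250, 250), (250, 750), (750, 250), (750, 750)]

def dist(a, b):
    return math.dist(a, b)

# Variant 1: every base station gets its own cable back to one central PBX.
home_run = sum(dist(s, central_pbx) for s in stations)

# Variant 2: every base station goes to its nearest hub, and only hubs go to the PBX.
hub_spoke = (sum(min(dist(s, h) for h in hubs) for s in stations)
             + sum(dist(h, central_pbx) for h in hubs))

print(f"home-run to one PBX : {home_run / 1000:.1f} km of cable")
print(f"hub and spoke       : {hub_spoke / 1000:.1f} km of cable")
```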
So then... the nasty copper squirrel. Hi, I'm from ChaosVermittlung; we do the field telephones. So yeah, we have this manual switchboard, and it's battery operated, so we do not have any outage and do not depend on the power supply. Some key facts: we have capacity for 60 subscribers on the switchboard, plus 10 additional for Olga. We deployed 25 field phones; there are 11 public field phones around. We also connected two teletypewriters over our switchboard. We also provide field DSL, so there were three DSL lines in use, including Wi-Fi for the logistics entrance. We deployed that at, I think, two o'clock in the morning, because they were sitting out there without power, phone and Wi-Fi, so we said: hey, let's get a golf cart, drive there and deploy a field DSL. It took until, I think, four o'clock in the morning. The longest cable was 1.4 kilometers, for the parking. In total we deployed 13.7 kilometers of cable. Starting at day minus one the switchboard was manned around the clock, normally with two angels, and sometimes they had a lot to do, because it is ringing all the time — sometimes that was not very useful for the POC, because they were thinking "hey: ringing, ringing, ringing" — but it was okay. At the public field phones we also put up a little description. At the beginning it was "hey, please press the button and crank the ringer, otherwise you can't speak", but by the end of the event it was okay. We also had a conference: we have four connections to the DECT and to the PBX, and at the end there were, I think, 20 users in a conference call on field telephones across four countries. And there were also at least five people who bought a new field telephone for the next event — and you can too, because we have at least 120 channels to connect you as well. So, thank you very much.

We definitely see that, even when messengers and stuff come up, we still stay with phones and kind of go back to reliable, good technology. So, who of you has used one of these field phones? Quite a bunch. Who will do it next time, now that you've just learned about it? Okay, so we perhaps have to get this into the opening, not just at the end, so everybody knows what to look out for. We'll do the questions at the very end for all the teams. So now we come to the NOC, the network operation center. To be honest, I haven't seen as many "use more bandwidth" signs as in other years, but I think there was still some bandwidth left. I think you will give us all the information we need now.

Hi, I'm Wilco from the NOC, and this is Arjan — I'll let him explain the vision. Yeah, so, like at previous hacker events such as CCC and EMF, our vision is mostly still the same: we want to give you gigabit and 10 gigabit internet to your tent and give you a very fast experience. We want to give you decent Wi-Fi. It should be privacy enhanced, filter free, net-neutrality(tm) compliant — yes, yes. And most importantly, we want to have a lot of fun building the network.

We're running mostly on donated hardware, which is very nice — thank you to the people who give us stuff. We mostly got data center switches and data center routers, and they're not really campus routers, so we have some design constraints there: you cannot terminate a really large VLAN on a box that cannot handle very large VLANs. That puts some restraints on our design, but we were still able to do most of it on our own equipment this time, with just a few gaps — mainly the uplink stuff and a few switches that were used in the core. So all in all pretty good.

So, the concept — well, you did most of this stuff. This time there were already three data centers on site which we could use and just put equipment in, so that was very nice: that is N0, E0 and L0 — N0 is on the street, the L0 building is very nearby here, and E0 is the one in the middle of the big field. And all of the datenklos, except for I think only six or seven, are actually uplinked with 10 gigabit over a single fiber — we're using just one strand of fiber and putting two colors on it to get a full-duplex connection. And this year we decided to do routing already in the datenklos; that's the principle of "route early and route often". The reason we did this is to offload the distribution routers, which are not really built for routing lots of users, so this is a very good way of offloading that.
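As a toy illustration of that "route early and route often" idea: give every datenklo its own routed subnet with a local gateway and a DHCP relay pointing at a central server, rather than stretching one big layer-2 domain back to the distribution routers. The prefix, the datenklo names and the relay address below are invented, not the actual SHA2017 addressing plan.

```python
import ipaddress

# Hypothetical parent prefix and datenklo names -- not the real SHA2017 plan.
PARENT = ipaddress.ip_network("10.10.0.0/16")
DATENKLOS = ["dk01", "dk02", "dk03", "dk04"]
DHCP_SERVER = "10.10.255.5"   # made-up central DHCP server

# Carve one /22 of visitor space per datenklo so each one routes locally
# ("route early, route often") instead of bridging everything to the core.
for name, subnet in zip(DATENKLOS, PARENT.subnets(new_prefix=22)):
    gateway = next(subnet.hosts())   # first usable address becomes the local gateway
    print(f"{name}: {subnet}  gateway {gateway}  dhcp-relay -> {DHCP_SERVER}")
```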
We use routing protocols, of course, to distribute routes across the whole network. We're also using DHCP relay this year, which is something we hadn't tried for years, because it always sucked and didn't work — but this year it seemed to work, so that's nice. There are some switches that don't do routing, don't do layer 3; those are just terminated at the nearest datenklo and it's routed right there. Also, for Wi-Fi we use Aruba access points, and they tunnel all the traffic to the controller, so that also fits nicely into this design.

Oh, I'll do this one as well. For the planning we used QCAD, and we're working together with Team Terrain, and all of the stuff is published on map.sha2017.org. If you're interested you can go there, enable the NOC layer, and see where all the datenklos are and where all the cables are going. We were using all of that to calculate the length of all the fibers, and then used a script to assign all the fibers. That was actually needed, because we have around 50 fibers going across the terrain, so it would be tedious to figure out the right fiber for the right path. This year we also used a web-based diagramming tool called draw.io, and it's pretty nice: you can put all kinds of metadata in there, and we actually used that to create our topology and also for scripting and config generation. Since the last CCC we've also been using NetBox for IP administration — we have some fans, I guess — and that has also worked out pretty nicely; it's good for integrating with your config-generation scripts and that kind of stuff. And like I already said, the scouting land here invested quite a bit in their on-site infrastructure: they already have these three utility buildings and 11 field boxes with eight cores of single-mode fiber, and this is really awesome because it saves us a shit-ton of work. So kudos to scouting.

So, some figures. We have three data centers, right: one that has the uplink, and two for our cores, which is essentially completely distributed, because we don't have very large boxes that can route — we have the data center routing boxes. So we separated all the visitor and org VLANs and essentially had six routers in our core, plus the uplink router, and all of that was basically fully redundant, except for the services. We made 50 fiber cables — a bit more actually, I think — and we went well over 9,000 meters of fiber, yes, it's over 9,000. There were 75 switches deployed in the field, in all these datenklos and in some tents, plus the edge routers and access points; 300 transceivers were used — that's basically how many links we had active and in use for our routing — sponsored by Flexoptix, thank you very much.

We seem to be missing some pixels, because our network is slightly larger than the screen can display, but yeah: we separated the visitor and org Wi-Fi, and we had four times 40 gig of uplinks to the edge routers, so that was enough. For layer 2 redundancy we used MC-LAG, multi-chassis link aggregation, so that if one of the data centers actually lost power, everything would still work — and that worked out pretty well. Because of routing constraints... well, we took some advice from some local village people: basically we ran OSPF in the core, and that was mostly for redundancy purposes. Yeah, the village people told us to.
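The fiber planning just described — compute the length of every run from the map data, then let a script match fibers to runs — might look roughly like the sketch below. The run names, lengths and spool sizes are invented for illustration; this is not the team's actual script.

```python
# Toy version of "assign the right fibre to the right path": for each cable run
# (longest first), pick the shortest spare fibre that is still long enough.
# Run names and lengths are invented; the real planning used the GIS map data.

paths = {"dc0-dk03": 410, "dc0-dk07": 180, "dc1-dk11": 655, "dc1-bar": 95}   # metres
fibres = [100, 200, 250, 450, 500, 700]                                      # metres

spare = sorted(fibres)
for name, needed in sorted(paths.items(), key=lambda kv: -kv[1]):
    fit = next((f for f in spare if f >= needed), None)
    if fit is None:
        print(f"{name}: no fibre long enough ({needed} m needed)")
        continue
    spare.remove(fit)
    print(f"{name}: {needed} m run -> use the {fit} m fibre")
```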
So yeah, we also had a fiber splicing party, on the 8th of July, where we basically invited people to come learn how to splice. We did about 100 splices, and I'm pleased to say I did less than a quarter of them; the rest was done by new people who just came in and wanted to learn how to do this. That's really great, because we had a lot of new fibers to unroll, retest and fix — so thanks, everyone, for showing up, because that was really helpful. One of the new fibers was just for the parking lot, to give the angels who have the long shifts there some internet: we actually made a new fiber 666 meters long. It was also used for ticket scanning there, but mostly for the angels who sit there, because they're really doing a good job.

Yeah, you cannot read this — okay, I will share the slides afterwards. So, the uplink was easy this year, I guess. It was a 59 kilometer dark fiber provided by UNET — yeah, we have a fan, okay — and it goes from the site here towards Nikhef. The fiber was already here on site, because scouting invested in getting fiber to the site and they already have a permanent connection installed here, and we could use two spare fiber cores for that. We only had 13.5 dB of loss over this distance, so that's very, very nice. Two Juniper MX240s: we had one installed on site and one at Nikhef, and we used an interesting bit of equipment to do 100 gig, which is a tunable coherent 100 gigabit transceiver; it fits in a 50 GHz channel, and that also gave us the opportunity to make a backup uplink using DWDM muxes. So we actually had 100 gig plus a 10 gig backup — 110 gigabit of uplink.
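For context on that 13.5 dB figure: a rough loss budget for 59 km of single-mode fiber, using assumed typical values for attenuation and splice/connector losses (none of these are measurements from the actual link), comes out in the same ballpark — well within what a coherent 100G transceiver can cope with.

```python
# Rough loss budget for the 59 km dark fibre -- assumed typical values, not measurements.
LENGTH_KM   = 59
ATTENUATION = 0.20   # dB/km, typical for single-mode fibre at 1550 nm
SPLICES     = 15     # assumed number of splices along the route
SPLICE_LOSS = 0.05   # dB per splice, assumed
CONNECTORS  = 4      # patch-panel connectors at both ends, assumed
CONN_LOSS   = 0.3    # dB per connector, assumed

budget = LENGTH_KM * ATTENUATION + SPLICES * SPLICE_LOSS + CONNECTORS * CONN_LOSS
print(f"expected loss ~{budget:.1f} dB vs. 13.5 dB measured on the link")
```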
So, transit and IP. We got an AS number from SURFnet and a /20 of IP space we could use for at least this year, which was very nice — that's basically what we used for bootstrapping everything. Then we got a /16 of IP space from RIPE, which is basically a temporary assignment, so we could give everyone a public IP address, basically completely unfiltered: you have a public IP, do with it what you want. For the event we have a /32 of IPv6 space, which is enough; we used, well, three /48s, I think, for this event.

So yeah, let's thank our upstreams: they provided us with 100 gig of bandwidth plus 10 gig for our backup link, Core-Backbone gave us 100 gig, SURFnet gave us 10 gig plus the AS and the /20, and then we had a 10 gig peering at NL-ix. Basically it was all done on zero budget, which is very nice.

Well, this is definitely your area. Yeah, so like at previous hacker events we used an Aruba setup again, this time with two 7210 controllers, and we were running ArubaOS 8.1 for the first time, so it's sort of a test setup. There is also what they call a mobility master, which is a virtual machine that does a lot of the coordination between the two controllers, and it seemed to work out okay, but we had some issues with the automatic channel assignment, where some of the channel assignments didn't make any sense, so we had to tune that and just do some static channel planning around it. We deployed 120 access points, and we've seen 2,800 clients at peak and about a gigabit of traffic; in the end we've seen about 7,400 devices on the Wi-Fi — I think we'll get to that, but a lot of them are badges. On the bottom graph — you can also go to the SHA2017 dashboard to see this — there's a graph that shows how many clients are connected in a certain area, where you can see which talk is possibly the most interesting; you can see it by track tent or by field name.

Well, we got some complaints on Twitter, stating things like "is this 2.4 gigahertz?", and maybe the answer is: yeah, pretty much. This is a graph that shows the top 10 most utilized access points, the ones with the highest airtime utilization. We see some access points here that are almost at 100% channel utilization, and at that point it becomes absolutely unusable. One of the causes is that people are also bringing their own access points and not sticking to a certain channel plan, which just makes the problem even worse. We've seen around 200 of those access points which do not belong to us, which people have just brought into their village, or maybe they have a personal hotspot enabled on their telephone — but that makes the problem even worse, and on a site like this where everything is open, the 2.4 gigahertz problem is kind of shitty.
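A toy model of why those ~200 uncoordinated access points hurt so much: at 2.4 GHz there are only three non-overlapping 20 MHz channels (1, 6 and 11), and an AP parked on any other channel interferes with two of them at once. The AP counts below are simulated with made-up numbers, not measured data.

```python
import random
from collections import Counter

# Toy illustration, not measured data: 120 planned APs on channels 1/6/11 plus
# ~200 visitor APs and hotspots picking 2.4 GHz channels at random.
random.seed(0)
planned = [random.choice([1, 6, 11]) for _ in range(120)]
rogue = [random.randint(1, 13) for _ in range(200)]

def overlaps(a: int, b: int) -> bool:
    # 20 MHz-wide channels spaced 5 MHz apart overlap when within 4 channel numbers.
    return abs(a - b) < 5

for ch in (1, 6, 11):
    contenders = sum(overlaps(ch, other) for other in planned + rogue)
    print(f"channel {ch:>2}: {contenders} APs contending for the same airtime")
```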
We have some obligatory username and realm stats — we also offered eduroam and SpaceNet here — so not much exciting stuff going on there, except that for some reason there are a lot of people from Norway here.

Oh yeah, so like we said, we had three data centers, and they were pretty packed, not just with our stuff but also with stuff from the POC and the VOC. Basically all the pink cables here, that's all POC, and the flight case next to it is team sysadmin and our own servers. And here you see a whole lot of our fiber patches, which are basically the fibers that we patched through to the datenklos using the field boxes of the scouting terrain. They were pretty full — just count the flight cases that have to fit in this tiny little bunker — but we managed to get the aircons in and all the flight cases stacked nicely, so that was pretty cool.

Oh yeah, IPv6 usage was a little disappointing: 21%. At some point it was actually pretty high, because somebody was apparently doing some downloads at 10 gig over IPv6, and that worked quite well. But yeah, most of our stuff is actually on Wi-Fi these days: there were, what's this, 7.83k users on Wi-Fi and only 913 on wired, which is interesting, especially for the routers that do the Wi-Fi. But a lot of traffic: we actually peaked above 10 gig on the uplink — both incoming at some point and on the upstream — so getting a hundred gig here was worth our time, because otherwise we couldn't have handled that little peak. Well, we've seen about 2,000 badges, or something we've classified as Espressif devices, so that's pretty good, I guess: more than half of the badges that have been handed out have at least been on the Wi-Fi.

Fiber to the boat — yeah, I'm not sure what I'm looking at; this is the problem with editing this slide deck with six people. So you see here fiber basically mounted to the pier with duct tape, and there's a fiber roll — and, well, that is I think one of the tires used as a spacer between the boats and the pier, where we have a little loop. But yeah, we actually spent quite some time getting fiber to everywhere, including to the boats.

We also learned some lessons, because the Wi-Fi did not really work out too well on the data center routing switches. Fortunately we had the MX240s, which we basically got on loan from Juniper, and they could fix it for us, so there was a tiny switch-over point where we moved, I think, about 3,000 users over from one router to another that could actually handle the big /18 of Wi-Fi space — /19, oh well. And doing OSPF with four vendors on hundreds of links required a bit of coordination to plug everything into the right port, because everything was numbered. We also had some crashes — I'm not sure if you can see it, but this is basically the "days since last Quagga crash" counter. We had to fix that a little bit, and if you don't recognize it: that was obviously the word, bird is the word. We switched all the Cumulus boxes over from Quagga to BIRD and then the network was stable — so BIRD is the word, actually. Thanks, whoever did that. We'll wear gloves during teardown: the datenklo locks weren't exactly secure, but at least nobody pooped in a datenklo this time.

Oh yes, you want to tell about this one? Oh yeah, so we're running these Art-Net lamps on every datenklo, and we didn't really bother securing them, and some people noticed: the Italian Embassy kind of hacked — compromised — our own nice little Art-Net infrastructure. Check out the link, it's on the wiki somewhere; there was a nice lightning talk about this. Art-Net is actually DMX over IP, and we didn't bother securing it; the lamps are basically accessible from throughout the network, because we're running this nicely routed network, and we didn't bother to put any access lists on there. So you might still have a couple of hours to try this out, if you can find out where the things are connected in the network.
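To show why "no access lists" is all it takes: Art-Net carries DMX over UDP port 6454 with no authentication whatsoever, so anyone who can reach a node can set channel values. Below is a minimal sketch of an ArtDMX frame — the target address and channel values are made up, and this is of course only for lamps you're allowed to play with.

```python
import socket
import struct

# Minimal ArtDMX frame (Art-Net: DMX512 over UDP port 6454, no authentication).
# The target controller address and the channel values are made up for illustration.
def artdmx(universe: int, channels: bytes) -> bytes:
    return (
        b"Art-Net\x00"                      # packet ID
        + struct.pack("<H", 0x5000)         # OpCode: ArtDMX (little-endian)
        + struct.pack(">H", 14)             # protocol version (hi, lo)
        + bytes([0, 0])                     # sequence (0 = disabled), physical port
        + struct.pack("<H", universe)       # 15-bit port-address, little-endian
        + struct.pack(">H", len(channels))  # DMX data length, big-endian
        + channels                          # up to 512 DMX channel values
    )

payload = artdmx(universe=0, channels=bytes([255, 0, 0] * 170))  # e.g. all-red RGB fixtures
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("10.0.0.42", 6454))   # hypothetical lamp controller address
```

An access list on the lamp subnet, or simply not routing it, is enough to close that hole.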
Right now the abuse handling is actually ramping up, so we'll probably receive some more in the next week or so — I think we had to shut down the mail client by now, because there were like 15 coming in every minute — but when we made the slides we had 380 abuse emails, mostly automated stuff, and mostly caused by a few people scanning the entire network. There were also some serious hacking attempts actually made from our network; we had to act on that, obviously, basically by making sure that that destination was protected, so any traffic to that destination was dropped from that point on. Because, well, scanning is nice, but intruding into other boxes is not a thing — not without permission.

So, we actually mentioned most of the supporters already, but these are everyone who made the network possible, so please give a big round of applause to our sponsors. Then we have a general announcement: goodbye, sad to see you go — the network will be torn down at six, so the camping fields will lose their active equipment after six, and on Thursday everything will be gone. Be kind to our fibers, and see you at Congress.

Okay, thank you very much, dear NOC; it's always a pleasure to have you, and to have internet everywhere, running all the time. Now we come to the next group: Power. So — yes, perhaps you can't see this from there, but there's a puddle on the stage and it's dripping off this lamp. Can you tell me, is this perhaps still safe? I'm sure. So perhaps it's good we're tearing down. Yeah, let's hear from you, team Power.

All right everyone, so we got a little bit of short notice on doing this presentation, so I did not make a new set of slides; I'm working directly from our master power plan. It's okay, it's okay, the important notes are on the back. Okay, so we got lots of infrastructure moved onto this terrain by our supplier — it was, I think, five truckloads worth of stuff, including 10 generators, eight of which have been running. One of them was a cold spare that we had here just in case, and the other one was at the south family village, basically as a backup for an overflow field, so it was not turned on, because, well, unfortunately some more people could have shown up, apparently. So yeah, that's the eight generators, and we had basically an installed capacity of 1.2 megawatts of power, and this power was then fed into some 300 distribution boxes through about 20 kilometers of various cabling. These 20 kilometers mostly come from the lots of Schuko blocks and Schuko extension cords that power all the emergency lighting and everything here, but there were also, what did we say, about 3.5 kilometers of really heavy, really thick 400 amp cable that we used to feed the power from the generators to some of the main distribution boxes — and of course those cables are really heavy.

Power usage: as I said, we don't have any fancy statistics available just yet. That is mostly because nobody stepped up to do some monitoring infrastructure for the generators or the generator cables, which really is a shame, but it cannot be helped. So the statistical usage data is basically from us doing manual rounds from time to time to check on the generator load. We can say roughly that we have been using between 10 and 35 percent of the generator capacity. For example, we have one dedicated generator for this field with these two track tents, because team production house — or however that is pronounced — asked us for a lot of power for the shiny LED walls, but then, as you may know, those have been turned down to 10 percent, so the power requirement is also only at 10 percent, and that generator now sits mostly idle. And the most interesting generators are the ones powering the huge field, like Olsen and the connected fields; we had put two generators there to do a little bit of load balancing and for a little bit of redundancy.
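As a quick sanity check of those utilization figures, using rounded numbers rather than metered data:

```python
# Rough check of the quoted utilisation range -- rounded figures, not metered data.
installed_kw = 1200          # ~1.2 MW installed across the eight running generators
low, high = 0.10, 0.35       # observed load range from the manual rounds

print(f"field load roughly {installed_kw * low:.0f}-{installed_kw * high:.0f} kW")
print(f"per generator on average {installed_kw * low / 8:.0f}-{installed_kw * high / 8:.0f} kW")
```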
In the evenings these generators were typically putting out about 130 kilowatts, and since we know that the pizza oven draws about 14 kilowatts, that means about 10 percent of that power was invested in very nice tasting pizza — and I think that was a worthwhile investment.

Now to the, well, not-so-party part, which is the work that was put in. It was mostly a team of about 10 angels during build-up who had been working since Saturday, basically from the morning until it was dark, laying out all these heavy cables, so that we could get everything up and running by Wednesday, more or less within time — I think we were two hours late compared to our internal plans, if I remember correctly, but that did not really matter. We had some small equipment trouble, but that could be solved. And what else can we say in terms of power: not everything was on generator power. We had some stuff that was kept powered by the scouting grid, mainly for the NOC people, so that there was some redundancy; we could use part of the power that the scouting people have here, and we have to thank them for that — or actually, you have to thank them.

And now about teardown: teardown will start tonight. After the NOC has turned off all their stuff, we will start slowly disconnecting the remote parts of the power grid, and we will see how many people are still here and still need power. But most likely you should expect that power goes out tonight, at some point, for most of the fields. As I said, some fields and some areas will continue to have power — for example, the harbor needs it until tomorrow — but that's that. And here's a final request: since we have to move all these heavy cables again, we will put out crates at various locations. Request one: please do not use these crates for trash, they are for cables — you would be surprised what we've seen at past events in our cable crates. And request two: when you see some of our power angels curling up the cables and you've got some five minutes of time — if everyone just helps with one cable, then everything can be put into the crates in essentially no time, and that would be really helpful. Thank you.

So, these were our teams, and even though you thanked all of them individually, I would like to ask for a big round of applause for all our infrastructure teams and all the other angels at SHA2017. Now we have some minutes left, so you have the chance to get your questions out if you want to know something. The first one is already running up there. Okay, so please — to whom is your question? First, to all of you, from me, two words: love it. Thank you. The second is a request: I need all of you. I want to have another event like this, be it in two years or four years, so please help me — I'm organizing it. I'm Eric, you don't need to know more, but I need all of you, and I have a present for you, and I'll do that offstage. Thank you.

All right, thank you very much. Do we have any more questions? Yes — I've got a question for the NOC: with the Pixelflut screen in the bar, what was the peak traffic of that, do you know? It was around three gigabits. So, next question. Yes, I have the same question about your location — sorry, your location: how much traffic was there from and to your location at peak? That's for the NOC, I think. Yeah, I think they peaked around 10 gigabit or something — that was all they got, so they filled it up nicely. Do we have more questions? Okay, not at this time. Yes — so thank you again for your amazing work, for the effort you put in. Thank you.
Thanks to all the angels, and to all the visitors — you're here. One last big final applause for this amazing event, and the closing is coming up.