[The opening minutes of the talk are garbled in the transcription: the speakers introduce themselves and sketch the planning of the camp network before the recoverable text picks up below.]

So, we ended up with two times one-gig uplinks to the campsite: one gig to a local telco and one gig back to Berlin, where we picked up various networks here in Berlin. The fibre ran four kilometres through the woods. We were pretty lucky, because after all that there were only about 20 metres of fibre left on the end of the spool, and we didn't really fancy picking up and moving the data centre container to make it reach. We peaked around 1.2 gig, something like that, which is lower than Congress, but we obviously had less available.

So what hardware did we use? Well, we had a Juniper MX80 as BGP router; we speak BGP with all of our upstreams. We had some Force10s as core routers in the Hack Centre colo, which is always a busy place. And for camp we actually had the opportunity to buy a lot of HP access switches, which was quite nice; there's a sort of big pile you can see on the right there. We also had the use of 12 Hirschmanns, which are industrial Ethernet switches, which we used in a ring configuration around the site.

This is the first draft of the build-up, a nice little sketch to work out where to run fibres. Do you want to talk about that? Yup. This was the first draft we drew at the CCCB, when the whole campsite was fixed and most of the things were placed. We ran a single-mode 10-gig link around here, which was a ring. On that link we ran the HIPER-Ring protocol, which is Hirschmann's ring redundancy protocol. It sends beacons every 10 milliseconds, and if one link fails, it switches the whole ring over to the other side (a toy model of this failover is sketched below). Here we have a 10G spoke, and we ran single-mode fibres here. Most of the fibres on the campsite were multi-mode, except the whole ring around the F area. The blue things are all the Datenklos, and we have three cable bridges up here. And we rerouted some stuff, because here they run 10 gig, so we laid an extra fibre around the angel area to our data centre.

That's me splicing. We did around 120 fusion splices, because we bought the fibre on a spool, laid it in the ground, cut it, and then had to splice pigtails onto it to plug it into the switches. One of the problems is that it's obviously quite a difficult environment to install fibre in, and you can't easily buy cable with connectors that will handle the sort of stresses of installing everything in the ground. So yeah, the splicing is quite a laborious task really, so it's nice to have a seat. Yeah, it was like two days, from getting up to going to sleep, spent splicing. Really boring.
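To make that failover behaviour concrete, here is a toy model in Python of a ring manager in the style described above: one of its two ring ports stays blocked to prevent a loop, and missing beacons trigger the switch-over. The timeout value and the structure are assumptions for illustration, not Hirschmann's actual implementation.

    # Toy model of ring redundancy in the style of Hirschmann's HIPER-Ring.
    # Illustration only, not the vendor's protocol: the ring manager keeps
    # one of its two ring ports blocked to prevent a loop, and unblocks it
    # as soon as beacons stop arriving via the ring.

    BEACON_INTERVAL_MS = 10          # the talk quotes beacons every 10 ms
    BEACON_TIMEOUT_MS = 30           # assumed: a few missed beacons = failure

    class RingManager:
        def __init__(self):
            self.secondary_blocked = True   # normal state: ring is "cut" here
            self.ms_since_beacon = 0

        def tick(self, ms, beacon_seen):
            """Advance time; beacon_seen is True if a beacon arrived via the ring."""
            self.ms_since_beacon = 0 if beacon_seen else self.ms_since_beacon + ms
            if self.secondary_blocked and self.ms_since_beacon >= BEACON_TIMEOUT_MS:
                # Primary path is dead somewhere: open the secondary port so
                # traffic flows the other way around the ring.
                self.secondary_blocked = False
                print(f"failover: secondary port unblocked after "
                      f"{self.ms_since_beacon} ms without beacons")

    mgr = RingManager()
    for t in range(0, 100, BEACON_INTERVAL_MS):
        mgr.tick(BEACON_INTERVAL_MS, beacon_seen=(t < 40))  # fibre "cut" at t = 40 ms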
We had around three kilometres of fibre to dig in around the campground, and it had to be about 20 centimetres deep, so that no one puts his tent pegs into it and we get no cable breaks. So we built a plow. On the left you see the first version, which broke several times; we fixed it several times, and in the end we put it on the front of the Radlader (a wheel loader) with a chain and some straps and everything. This worked very well, and we could drive at two or three kilometres per hour through the campground and dig a really deep ditch.

So, the wireless deployment. We ran a Cisco WLC 5508 as wireless controller and had around 50 APs deployed, so we had wireless coverage over most of the site. There are some holes in between, in the angel area and the workshop hangar, because we didn't map the hangar walls correctly into the map, which is a bit difficult. We had a big challenge: the access points have to be protected against rain and UV, and they have to be two or three metres high so they reach over all the caravans. So we fixed something up with some tubes and buckets and a lot of cable ties. And yeah, this is how it looked. There's a 90-degree bend in there, and we could just mount the access point horizontally with cable ties. That's because they're designed for installing in a building like this one, on the ceiling. But when you're at the camp you don't have a ceiling, so we kind of had to make our own.

Yeah, this was our data centre. It was a bit of a fuck-up. I talked with the guys, and this was a company which normally deploys data centres in a container, with raised floor, air conditioning, power supplies and everything. There was a big miscommunication, and so we got an empty shipping container with two Rittal racks and two air conditioners like you have at home, split units. So yeah, this is our data centre rack with all our NOC stuff. There we have the E600 on top; we had to fix some uplink splices. Then we have the MX80 as BGP router, the Hirschmann (the HIPER-Ring middle node), then some servers and the wireless controller, and the UPS at the bottom.

Yeah, this is a short movie. We built a weather map, and there you can see how the network expands, and you see the nice traffic flow. We'll put this online somewhere if anyone's interested; it's available in the Pentabarf. It will get bigger in a few seconds, because we were editing this video on the fly as we built the network up: we added a new box, changed the weather map, and kept on adding things. So this video basically covers the whole timeline, from us building up the camp, to the visitors arriving, and then basically through to the final day. Yeah, we start moving some things around. Up at the top here are the uplink connections; we moved the two routers up there that are normally placed in the hack centre and the data centre, because there was no place left on the map. And you can see uplink one, and it gets kind of busy towards the end of the camp. Yeah, and there you can see the links switch here. We did a test of the HIPER-Ring protocol where we just unplugged one fibre, and 10 milliseconds later it was running on the other side. There's a nice congestion down here: the people in this area of the field were obviously downloading quite a lot and uploading quite a lot, and this is just a single one-gig link. There's only so much you can do, like installing new fibres and things like that; we do sort of re-engineer the network while it's in flight, to upgrade where required, but it's not always possible. Yeah, this was the state at the end of the camp.
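As an illustration of what a weather map like this computes per link, here is a minimal Python sketch of the underlying arithmetic: two readings of an interface octet counter (for example SNMP's ifHCInOctets) turned into a rate and a utilisation percentage. The polling details are assumptions; the NOC's actual tooling is not described in the talk.

    # Minimal sketch of the arithmetic behind one weathermap link: turn two
    # readings of an interface octet counter (e.g. SNMP ifHCInOctets) into a
    # utilisation figure. Counter source and sampling are assumptions.

    def link_utilisation(octets_t0, octets_t1, interval_s, link_mbit,
                         counter_bits=64):
        """Return (rate_mbit, percent) for one polling interval."""
        delta = octets_t1 - octets_t0
        if delta < 0:                       # counter wrapped between samples
            delta += 2 ** counter_bits
        rate_mbit = delta * 8 / interval_s / 1e6
        return rate_mbit, 100.0 * rate_mbit / link_mbit

    # Example: 7.5e9 octets in 60 s on a 10-gig link -> 1000 Mbit/s, 10 %
    rate, pct = link_utilisation(0, 7_500_000_000, 60, 10_000)
    print(f"{rate:.0f} Mbit/s, {pct:.1f} % of the link")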
We had a few problems on the side. We had "pink" and "rainbow", our main boxes for DNS, DHCP and such things. They had hard disk failures, both boxes; we think it happened in transport or something. So yeah, we lost the whole ETH file at one point, which was not so nice. And we had a big thing with the Cat 5 cables. They have a shield on both ends, and on the campground there are several ground potentials with big differences between them. So when you lay a cable from outside the angel area into the bunker, you have a big potential difference, and we had to disconnect the shield at one end because there was just too much power on it. Yeah, you might feel a little tingle when you're plugging in the cable, and that's kind of an indicator that something might be a bit wrong.

So, for the next event: Datenklos everywhere. And I think we might need to get the access points a bit higher, so we'll give that some thought. So this was the camp stuff; that's the end of that, and we'll just move on to the review of the current Congress network. We'll go through the topics: uplink, core, distribution, access, wireless, colo, monitoring, some abuse, VPN, and some rules at the end.

At the uplink we had, this time, two times 10 gig over a CWDM system. We got two times 10-gig transit in the data centre in Tempelhof from KPN and Terrican Electric, and there we also patched three one-gig transits from euroTransit and Carrier51, plus peering with our permanent CCC AS. We are also on the peering LAN in Berlin, where we dropped off around 500 meg of traffic to Kabel Deutschland. We used a Brocade MLX-4 as BGP router, which was placed in the data centre and linked from there to the BCC.

So yeah, we had quite a collection of equipment this time around. Coming into the network we had an MX80 with 20 gig going out in each direction: out to the internet, as it were, and into the middle of the network. A Force10 E600, similar to camp. And we had some Cisco 6509s now, one as a router and one as a switch, for our requirement for lots of one-gig copper ports. We also had some 2960Ss, which are small Cisco 1U switches with one gig, as a stack in B90. And C57, which is the biggest patch room in the house; we have around 200 patches to do there, and the whole C level and a big part of the cabling arrive there.

Yeah, the 6509 was a bit difficult this year. At first we had the wrong fan trays; someone shipped what should have been the right fan trays, but they were the wrong ones again. Then we got the right fan trays, but then the power supplies were not big enough, so we had to get two power supplies in Berlin on the 22nd, which was quite difficult, but yeah, it worked. Cisco have been shipping these chassis for like 15 years or something, and it's one of those things where you have this broom and you've replaced the head 10 times and the handle 10 times, and yeah, it's still the original chassis, but all the individual parts in it need replacing: we had the old fan tray, and the newer line cards draw more power and need more cooling, and all this kind of stuff.

Yeah, there's a network map of the current network. On the left you see our uplinks. We speak BGP with all of our uplinks at the MLX in the Alboin Kontor; we terminate everything there, and on the inside we choose to run only OSPF, because we don't need full tables.
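A small Python toy illustrates why the inside of the network gets away without full tables: internal routers only need their own prefixes plus a default route pointing at the border, and longest-prefix match does the rest. The prefixes and next-hop names below are invented for the example.

    # Toy longest-prefix match illustrating the "BGP outside, OSPF inside"
    # split: internal routers carry a handful of internal prefixes plus a
    # default route, while only the border router holds full BGP tables.
    import ipaddress

    internal_rib = {
        "10.1.0.0/16": "core-switch",      # hypothetical internal prefix
        "10.2.0.0/16": "hack-centre",      # hypothetical internal prefix
        "0.0.0.0/0":   "border-router",    # default: everything else goes here
    }

    def lookup(rib, dst):
        """Return the next hop for the longest matching prefix."""
        addr = ipaddress.ip_address(dst)
        matches = [ipaddress.ip_network(p) for p in rib
                   if addr in ipaddress.ip_network(p)]
        best = max(matches, key=lambda n: n.prefixlen)
        return rib[str(best)]

    print(lookup(internal_rib, "10.2.3.4"))   # -> hack-centre
    print(lookup(internal_rib, "8.8.8.8"))    # -> border-router (default)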
Last year we had a ring, an additional link between A87 and A85, but the fibre was broken and we were too lazy to debug it again; it's like every year we debug it, it runs for several minutes, and then CRC errors. Yeah, you see the router behind A85. We had two colo switches which were connected with 10-gig Ethernet to A85 and distributed one-gig Ethernet ports downwards. And the WLC, the wireless controller, was on an 8-gig LAG.

Do you have some stats? Everyone loves graphs. Yeah, we had around 23 gigabit of uplink and we only used 5, so use more bandwidth. Now, the top 5: the top ASNs we exchanged traffic with. Overall we were heavily outbound: we were like 5 gig outbound and only 1 gig inbound. Most traffic goes to DTAG and Hetzner, and also to some smaller (or bigger) access providers. The only content provider was Hetzner; the rest was only leeching from us, outbound traffic.

Oh yeah, talking a bit about the wireless LAN. Again, this is quite similar to what we did at the camp. This Cisco lightweight controller solution has been working really well for us; I think this is the fourth time we've used it. Third time? Third time, yeah. Because it gives more central control of how clients roam and things like that, it's really made a good step change in the quality of the wireless LAN we can provide. So again, it's much the same, just a smaller number of APs on this occasion than at the camp, because it's a smaller building. We did find some bugs: we had to put an IPv4 ACL on the controller, and that actually broke IPv6.

There's actually quite a lot of IPv6 use now. The usage on the network is something like 7%, which was surprisingly high and is really kind of heartening for those of us who are trying to get IPv6 deployed. So maybe we should start saying "use more IPv6 bandwidth" or something like that. And I guess the fact that people are reporting leeching stuff at 30 megabits a second and video chatting and so forth on the wireless means that's a good indicator that it's working quite well for a lot of people. It's quite a challenging environment, with the density of devices.

Here are some stats. We had around 1,350 simultaneous users on average, with around 19 users per access point. Our biggest access point, I think, was in the hack centre with 140 clients. We detected 3,300 unique wireless clients, so that's more than one device per person. I suppose when I think about it I'm probably carrying like four wireless LAN devices, and many of you will be as well, so it's not so surprising. Here you see the client distribution: most people are using 802.11n in the 2.4 GHz band, but most of the traffic runs over 802.11n in the 5 GHz band; it was about half of all the traffic. I think that shows how much better the 5 GHz band is, and it's something to think about when you're buying new hardware: you really want to look for that 5 GHz capability, because you just have more oomph there, more spectrum available. And we turned off 802.11b again, like at the camp, because it only slows things down and 2.4 GHz is so crowded. There are some statistics: traffic peaks in the afternoon and the evening, and the client counts. It's quite good, because when people finish using their Wi-Fi devices and go to sleep, you can really see it; you can see when people have left the congress, when the devices are no longer associated, far more so than we see with traffic.
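A quick sanity check on those numbers. The attendee count isn't given in the talk, so only the access-point arithmetic is derived here:

    # Back-of-the-envelope check on the wireless figures quoted above.
    avg_users    = 1350   # simultaneous wireless users on average
    users_per_ap = 19     # average clients per access point
    busiest_ap   = 140    # clients on the busiest AP (hack centre)

    implied_aps = avg_users / users_per_ap
    print(f"~{implied_aps:.0f} APs implied by the averages")           # ~71
    print(f"busiest AP ran {busiest_ap / users_per_ap:.1f}x the average load")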
The colocation, I just mentioned it: we had two switches, two times ten gig each. It was smaller than last year, but the cases, they are really crap. One time I had to carry out a server because of abuse, which was a box with a switch, two plugs and tons of cables, which is not nice to carry. Or things like this: the smallest hit on the case will make the hard disk crash. So get better cases.

This was our motto for this year: we want to monitor as much of the network as we can. So we built our dashboard. I saw one at DreamHack and wanted this for the congress too. We started with bandwidth, the clients connected, the POC and the streaming clients. Then we added the radiation level, and then the wireless bandwidth usage, how many OpenBeacon packets are in the network, and the IP protocol distribution. Next year we'll run all this stuff from the beginning, and all the graphs will be bigger and fancier. If you've got any other ideas of what we can monitor easily, we could add that. I don't think we have some kind of Mate consumption sensor yet, but that could be useful.

This was the weather map of the current network. On the left we have the uplinks, then the BGP router, and then the internal network on the right. We had a 20-gig connection between most of the switches; all of this is 20 gig, two times ten gig as a set of links. I think we kind of needed it, given the utilisation there; we don't have much congestion, and we added eight or nine gig between the hack centre and the core router upstairs.

This is our Icinga map. We use Icinga for monitoring; Icinga is a Nagios fork. We had around 200 hosts monitored, with around 1,100 services. We monitored all the equipment: servers, routers, all the access switches and the access points. Additionally the environment: whether someone unplugged a switch, the fans, that the switches don't get too hot, and the uplinks, so that there is always some headroom left. We have a quite annoying IRC bot for the alarms, which was talking all the time: an uplink port is full, or there are too many MACs, and such things. We also used AS-Stats, with a small patch, for sFlow analysis on a per-AS basis. We used MRTG for the graphs, and we used the NLNOG RING (ring.nlnog.net), a project where you share servers with some other ISPs so you can debug the network or some uplinks using traceroutes.

This big graph here, with the nice pretty colours and everything, is the network traffic to and from individual networks, based on our KPN uplinks. We were sending quite a lot of traffic to DTAG; that's all this stuff here in the red. This is really useful for us when we need to do some traffic engineering, when we need to balance where we're sending traffic. It was quite easy this year, because we had around 20 ASNs with a huge load of traffic, like between 100 and 600 Mbit each, and it was really easy to shift traffic to another uplink: you just shift one of them and 200 Mbit moves away from an uplink. It's a really useful tool, and really good for us to know where we're sending traffic.
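As a sketch of the idea behind this kind of per-AS accounting, here is a minimal Python version in the spirit of AS-Stats: sum sampled flow bytes per destination AS and rank the result. The flow records and sampling rate are invented for illustration; real input would come from the routers' sFlow/NetFlow export.

    # Minimal per-AS traffic accounting in the spirit of AS-Stats: sum
    # sampled flow bytes by destination AS so you can see which ASNs are
    # worth shifting between uplinks. Records below are made up.
    from collections import Counter

    SAMPLING_RATE = 1024        # assumed sFlow sampling: 1 in 1024 packets

    flows = [                   # (dst_asn, sampled_bytes)
        (3320,  9_000_000),     # AS3320 is DTAG
        (24940, 6_500_000),     # AS24940 is Hetzner
        (3320,  4_200_000),
    ]

    bytes_per_as = Counter()
    for asn, sampled in flows:
        bytes_per_as[asn] += sampled * SAMPLING_RATE   # scale up the samples

    for asn, total in bytes_per_as.most_common():
        print(f"AS{asn}: {total * 8 / 1e9:.1f} Gbit transferred (estimated)")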
We had some abuse. Two things. There was an online shop which was DoSed via an exit node using the PHP zero-day from alech's talk, and a news portal. Yesterday night at six we had a DoS against our DNS and DHCP server from the Amazon EC2 cloud. And all of you DoSed events.ccc.de, which was quite a weak setup, with an Apache 2 and nothing more; now it's running Varnish, nginx and memcached, so it should take the next Congress better. We ran a VPN with around 20 peers of the Congress, which are hackerspaces who wanted to participate here; they did a peak traffic of around 150 Mbit. We also had DN42 and ChaosVPN peerings.

We have some rules for you for the next time. If you have a switch on your table, don't unplug it. We monitor the switch and everything, and we have a tech VLAN on the first port, so when you change the uplink port, the switch won't respond for us any more. If it's too loud, or you want to replace it with a switch of your own with more gigabit ports, or you need a quieter thing, just talk to us: if it's too loud we can simply unplug the fans, they are robust and don't get very warm, and when you want to replace it, we'll just remove it from our monitoring. All the wall sockets are usable, most as one-gigabit ports. Level A is the only floor where only every second port is patched, because we don't have enough ports on the Cisco there.

Some questions? This was a nice photo. This photo was actually from the camp, but someone left us a nice message, so thanks for that. It's nice, when you're walking through this mud and getting yourself filthy clearing up, to find a nice message from someone. So thanks.

Well, thanks to you for keeping our central nervous system running here, our network. Do you have any questions? Please raise your hand and wait for a microphone to come to your place. Any questions? Sorry, it's running in front of the loudspeaker. Do we have questions from the internet yet? No questions? How about the back of the room, just raise your hand if you have a question. Yes, the microphone is on its way.

Hey, I have a question about the camp. Is the fibre still lying in the campground? No, we collected it all, put it on spools and put it in storage. The uplink too? No, the uplink went back to the provider; they have it dug in somewhere in the city. We actually reuse some of the fibres. There's a kind of big pool of them, and some of them are as old as HAL 2001, if any of you were there. They get used every couple of years, or when required. So we don't throw this stuff away; if it can be reused, then we will reuse it.

Okay, then we have a question from the back of the room; we need a microphone to get to that place. In the meantime, let me announce, for the English speakers among you: we have a translation of the talk Security Nightmares on DECT 8004 later. It's a talk in German, but if you want to listen to it in English, there will be a live translation. Now here's the question.

Okay, first of all, thank you for your work. I would like to ask if you have any documentation for your monitoring stuff, because I want to use it on my own network. Yeah, we have a wiki for NOC-internal stuff with the scripts, and we are thinking about doing a small public page with all the NOC talks and scripts. It's quite an interesting question actually, because this year, after the camp, we realised that we sort of reinvent the wheel every time. So it could be useful for both sides. If there are any special things you want to have, just mail us, and I think we can get you everything.

Okay, another question from up here. Thank you for your work very much. How many people are working in the NOC team? We were not so many this year, but we were around 10 to 15 people who were there at the Congress. And when did you start setting up the network? In the BCC we started on the 23rd; I deployed the uplink router on the 18th of December or something. And the first patch for the first KPN 10-gig link was around October, in the colocation. Okay, so Christmas was hurt a little? Yep.

Well, any more questions? Yes, over there. Thanks. Have you considered adding encryption to the wireless?
No, not really. You can use a VPN or something; it's just too much load. Everyone can encrypt their services on their own.

Okay, there's one more question. Is there any interest from the companies manufacturing the equipment, say Cisco or HP, in how well their equipment performs? Do they have any other interest in it? Not really. It was quite difficult to get some hardware this year, because the process is that you have to find someone who would like to sponsor, a broker into a company for this event, because the management there won't sponsor the CCC officially. So it came from other companies, which want to stay unnamed, and was more or less from carrier pools or something, and not directly from the vendors. We have in the past had more of a relationship, generally with individual people in vendors who actually come here and are interested in what happens here. And that can be really valuable, when you have someone on the inside as well. But they come and go, right; it's not an official activity. But it is nice.

Okay, up front here. Hello, to add to the question about wireless and encryption: there's actually not much we can do to provide you with encryption on the wireless, because even if you put a pre-shared key on the wireless and give it to every congress participant, everyone who has this key and witnesses and captures the packets of your first association can decrypt your session keys, because they are derived from your pre-shared key. So we would have to set up an account on a RADIUS server, and everyone would have to use that account to associate to the encrypted network. But this creates a lot of work for people and for the helpdesk, because some wireless clients handle handover between wireless stations in encrypted networks quite poorly, especially when there are as many networks and base stations available as at the congress. So if you are concerned about your privacy on the wireless network, which you should be, we can only recommend that you use a VPN service.
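For readers wondering why the pre-shared key doesn't help, the key derivation makes it concrete: under WPA2-PSK the pairwise master key depends only on the passphrase and the SSID, so everyone holding the passphrase computes the same PMK, and the per-session keys are then derived from it plus nonces and MAC addresses that are visible in a captured four-way handshake. A minimal sketch; the SSID and passphrase here are made up:

    # WPA2-PSK pairwise master key derivation (IEEE 802.11i):
    # PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes).
    # Every holder of the passphrase gets the same PMK; session keys are
    # derived from the PMK plus values visible in the captured handshake.
    import hashlib

    def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                                   ssid.encode(), 4096, 32)

    # Two participants with the same event-wide passphrase share one PMK:
    print(wpa2_pmk("sharedpassword", "example-ssid").hex())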
Okay, last two questions. Thank you. So at DEF CON they have this thing called the Wall of Sheep, where they monitor all the traffic, including the wireless traffic, and then print all the passwords and usernames that are sent in plain text over the network on a big wall of monitors. Have you thought about bringing something like this to the congress? No, we don't want to look at your traffic. We don't look at the contents of your stuff; we take this privacy stuff quite seriously, I think. Last year we had a privacy officer who checked all our servers, so that we don't look at too much, to give a higher level of privacy everywhere. We have encryption on every server which has user data. We have no backup server for DHCP at the moment, because the first one is already being wiped. We don't want to keep the data, because it can cause problems at any time. Sometimes we have to record or temporarily store user data for operations; a good example is the DHCP leases file, which contains your MAC address, your IP address, stuff like that. We have to put that on disk, right? But we are quite careful to wipe it at the end of the congress. I think we feel strongly that it's not our job to be looking at the contents of people's packets.

What's the usual legal fallout? Will someone be getting a bucket full of lawyer mail after each event? No, we just get a bucket full of automated stuff, like Sony saying we should stop torrenting stuff. I still get complaints about things that were torrented yesterday from a congress network IP range we gave back one year ago. So we get lots of stuff from automated systems, IDS-type things, saying: oh, you are doing FTP, and that's bad. So actually the address block we get is a temporary address block that changes each time, and it's now assigned to someone else, so it's pretty likely that we keep on getting the abuse mails, but it's actually not our abuse at all; it belongs to whoever owns that block now. They're really poor at keeping their whois data updated, and there's no excuse for that, right? Unfortunately.

Maybe just a short comment: if you log into your Gmail account and look at the last activity on your account, you see all the logins from the IP address pool here associated with Russia. A friend of mine was very confused: oh, my account was hacked from Russia! But it was himself. Yeah, at camp we had the same problem; I think the netblock was geolocated to Denmark. It just takes a few months until it's located in Berlin, and with a temporary block it's difficult to get a correct geolocation within a few days, because of the short time frame. And when you go to the main google.com site you get some Russian characters. Keeps life interesting.

Alright, well, I think I'll wrap things up. Thank you very much. Thank you very much for keeping the network running.