Hi, good afternoon everyone. My name is Will and this is Leon, and we're going to tell you a bit about the network we've built for you. Actually, the first thing I need to say is that we are not the people responsible for running events such as the Congress itself; we just run the network. Networking involves a lot of terminology, acronyms and strange words, so we invented a new piece of terminology ourselves: the concept of the sauna. These are the various comms rooms all around this building where we put the network equipment. (Do you have a microphone for me? Microphone, microphone, hello.) Here are some photos of the comms rooms, and you can see it gets quite toasty in there. It was a bit of a problem: we're basically running at the limits, because those rooms simply aren't designed for this much equipment. The rooms are small, and the building just wasn't designed for it, so it gets pretty warm in there.

Here's a graph of some of the temperatures. We collect lots and lots of data, and this is produced in Graphite, so it's very easy for us to pull up a quick graph of the top temperatures, which go up to 50-something degrees C. That's degrees Celsius, not Fahrenheit. At one point we had trouble in one of the rooms because of a door, and above 55 degrees it really wouldn't be good for the hardware any more, but we stayed just below that.

We didn't have 100 Gbit/s this year. At the last Congress we had 100 Gbit/s, but the equipment didn't really play along properly, so this time we left it out and just took 10 Gbit/s. It's quite easy to run 10 Gbit/s wavelengths over the single fibre we have coming into the building from the datacenter. So we didn't have the famous unused uplink bandwidth that is "too damn high": we peaked at about 16.4 Gbit/s, similar to previous years, with an increasing amount of inbound IPv6, although only about 4% of the outbound traffic was IPv6.

The main network runs on hardware from Juniper, which we get on loan: mostly MX240 and MX80 routers, plus EX-series switches for the distribution network. We run MPLS and VPLS in the core, and we keep the routing in a few central places. It's essentially the same network as last year; it wasn't that big a change, so we could reuse the configurations we prepared last year. We tend to throw away the data that accumulates in the logs and keep the configuration, and being able to reuse that makes life a lot easier for us.

Here are some diagrams. They've got larger over the years, and also more complicated. You can see the various MX and EX boxes we have in the building, connected over the in-building fibre. On the right-hand side are our upstream providers, various internet service providers who each sponsor a single 10G or 20G link, which gives us an uplink capacity of around 50 Gbit/s. Also on the right-hand side is our router in the datacenter, where the fibre from this building ends up. The rest of the map is the patch rooms in this building with their various routers and switches.

We also have a colo here this year, with around 80 to 100 machines; you can host your own machines in there.
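(A quick aside on the temperature graphs just mentioned, before we get to the colo: feeding a reading into Graphite is basically one line of text sent to Carbon's plaintext port. This is only a rough sketch of the idea; the hostname, metric path and the hard-coded reading are invented for illustration and are not our actual setup.)

```python
# Minimal sketch: push one temperature reading to Graphite's Carbon
# plaintext listener (TCP port 2003). Hostname and metric path are made up.
import socket
import time

CARBON_HOST = "graphite.noc.example"   # hypothetical collector host
CARBON_PORT = 2003                     # Carbon plaintext protocol port

def send_temperature(room: str, celsius: float) -> None:
    """Send one 'metric.path value timestamp' line, as Carbon expects."""
    line = f"congress.sauna.{room}.temperature {celsius:.1f} {int(time.time())}\n"
    with socket.create_connection((CARBON_HOST, CARBON_PORT), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

# A sensor script or cron job would then call, for example:
# send_temperature("room101", 52.3)
```

Graphite can then plot congress.sauna.*.temperature directly, which is roughly how a quick top-temperatures graph like the one shown comes about.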
Back to the colo: it has 15 kW of power available, which produces a lot of heat. We got a ticket from the cleaners, because apparently somebody was thinking of sleeping in the colo and had left their sleeping gear there, and we had to remove it. We are sorry if you are looking for your stuff.

We had a much better layout in the colo this time, so we were able to make things a bit nicer. It's on the balcony of Saal 3; you can see the Saal back there. Here is another picture, a bit darker, with the lights down: a lot of machines, a lot of computers.

We had a very interesting discussion on Twitter. Somebody worked out that the Congress had more IP addresses available than the whole of North Korea, and that this year the Congress had better internet than North Korea. Most of us in the NOC are professional internet engineers, and we find it very interesting that we can build a better network than a whole country has. It is also a bit sad. Maybe some other countries will catch up in the future; we hope so. We won't pick on North Korea any more, they are too far behind us. We also did a fun analysis and looked at the traffic between the Congress and North Korea: there was no traffic at all. And the path would have had to go the whole width of the United States and across the Pacific to get to North Korea, so it would have been a long way anyway.

Now to the wireless network. This year it was built on Aruba gear: two Aruba controllers, each with a 10G uplink, and we have improved on last year's setup. We ran about 175 access points, of which 115 actually serve clients and the rest are air monitors, for example in these lecture halls. Those measure the air and the spectrum, so the controller knows what is happening around the access points.

We also had way more users than last year. Last year the peak was around 5,000, and this year we had 7,800 concurrent users. That works out to an average of roughly 68 clients per access point, which is very, very high. The interesting thing is that not everyone here has a device connected, and yet we saw about 20,000 different devices, so that is roughly two devices per attendee. On the wireless network we saw up to three gigabits of traffic.

Here are a couple of graphs. We collect numbers, but not your data, so I don't have to be nervous. If you look at this large purple curve, that is the number of users in Saal 1. Then the talk finishes here and everyone walks out into this cyan area, which is actually the foyer behind Saal 1, and their devices stay connected. Then the next talk starts in Saal 1 and everyone goes back in again. And you can see how many people are in Saal 2: what is the talk there, should we go and see what's going on? So we always knew where the interesting talks were and went there.
This is a big difference from the NSFW news show on the right-hand side: that is where the show ended and everyone went home. Some of them went to the bar first, I think, but in the end they all left.

Now, a reminder: if you don't use the crypto we provide, people will sniff your traffic. Something like 60% of you are on the WPA2 Enterprise network, where you just enter a username and password; you can use basically any name and password, and it really is easy to use. Please don't use the unencrypted Wi-Fi. We do provide it, for instance for compatibility reasons, if you have some old hardware or Raspberry Pis and things like that, but if you insist on using the unencrypted Wi-Fi then we can't help you, and your data will be sniffed and sold. The encryption from your machine is terminated on the controller itself, which sits somewhere safe in the building, and that means your traffic stays encrypted all the way across the wired network that the access points are connected to as well.

We have some more graphs. This is the traffic, and you can see the times when hackers sleep. It's not quite as dramatic as you might think, but the smallest number of users is somewhere between 7 and 8 in the morning; that's where the dip in the curve is. What you can also see is that we have a lot of traffic. This is only Saal 1 and Saal 2, the main lecture halls, and that alone is about 1.2 gigabits of wireless traffic at peak times. And what happens when you have that much traffic on the wireless? We have a pretty graph of this as well; it's pretty clear, isn't it? This is what the spectrum basically looks like. It doesn't improve by itself every year, it needs capable hardware, and the 5 gigahertz clients are in pretty good shape.

So, we had some problems too; it's never perfect. We had one of the virtual chassis that, well, it didn't literally explode, but it was very warm in there and it kind of fell apart. A virtual chassis is several individual switches stacked together so that they behave as one switch: one IP address, one SSH session, one managed switch. At some point it fell apart into its members. One of the switches was new, and we don't really know what happened; it could be a problem with a stacking cable. We didn't have time to look into it properly. Maybe after the event we'll find out what happened, but probably we won't look at it again.

We also had an issue with the router that sits in the datacenter on the other side of the city. We were enabling IPv6 flow monitoring there. We use flow monitoring not to monitor your data, but to collect data about where the traffic goes on the internet for us, so we can see where to provide the best paths and where we need more capacity. Enabling that feature caused a line card to hang, at an unfortunate moment, on the router in the datacenter. We had only just built most of the Congress network at that point and had been relying on that router for a while already, so unfortunately somebody had to drive across the city to sort it out.
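(To give a rough idea of what that flow data is, and is not: it's aggregated byte counts per network, not packet contents. Here is a minimal sketch of the kind of per-AS aggregation meant here, assuming records already exported by some NetFlow/IPFIX collector; the record format, field layout and AS numbers are invented for illustration and are not our actual toolchain.)

```python
# Sketch: aggregate exported flow records by (source AS, destination AS).
# In reality the records would come from a NetFlow/IPFIX collector; here we
# use an in-memory list with made-up values just to show the idea.
from collections import defaultdict

# (src_asn, dst_asn, bytes) tuples, purely illustrative sample data
flows = [
    (64496, 64511, 1_200_000),
    (64496, 64511,   300_000),
    (64497, 64496, 5_000_000),
]

def top_talkers(records, n=10):
    """Sum bytes per AS pair and return the n biggest pairs."""
    totals = defaultdict(int)
    for src_as, dst_as, nbytes in records:
        totals[(src_as, dst_as)] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

for (src, dst), nbytes in top_talkers(flows):
    print(f"AS{src} -> AS{dst}: {nbytes / 1e6:.1f} MB")
```

A per-AS view like this is enough to decide where extra peering or capacity would help, without ever looking inside anyone's packets.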
Then there is radar detection. On the 5 GHz band the access points have to listen for radar, and if they hear something they don't understand, they shut up: they assume there is a radar nearby and stop transmitting on that channel. What we found is that this detection was very sensitive, so access points kept changing channel, and that was a bit of a problem. We want to get more out of the network, but as far as we can tell it's simply a function of the density of electronic devices in this building. So we're working with the vendor on that, to make it more usable.

Now we need to say thank you to some companies. We want to thank the sponsors and supporters who have contributed to this network. We can't just ring someone up and say, please, can we pay you money for this; we really need people to lend us the things we need. Especially Juniper, who I think gave us about 1.2 tonnes of equipment and services, roughly 3 million euros of equipment by insurance value. And it's not just the companies; we also really need to thank the people behind them who helped us and took a lot of work off our shoulders. We had very good support from Aruba, and we've already talked about Juniper. There is a German company called FlexOptics who provided the optical transceivers; we had hundreds and hundreds of those, a very big box full, plugged in all over the network. We also got donated bandwidth from our transit carriers, including KPN, so thanks for that. And all kinds of other bits and pieces from all sorts of other people. Thank you very much, all of you. And now, over to you: does anyone have any questions?

This part is going to be a little different from the talks you heard before: we're doing a short Q&A, so if you have a question, please go to a microphone and be quick, so we can take as many as possible. Let's start with the internet. The question is: what would be the current market price for a gigabyte of Congress data? We haven't received any offers yet, so I have no idea. All right, then microphone 2, please. Hello, I just wanted to ask whether you have any numbers for the individual networks you ran. Yes, we do, but I'm not carrying them all in my head; if you come and talk to us afterwards we can go into a bit more detail, and we also post some of these numbers via the Twitter account, so you can contact us there and give feedback too. Okay, microphone 3. I'm one of the coordinators relaying questions from the internet, and we have a question from the interwebs: did you have any fallback for the internet connection, an additional line or an extra connection of some kind? Well, we actually have to pay for the fibre; we rent it for a month, so in principle that would be possible.
But this is a four-day event, and we don't have fully redundant dual paths out of the building, because we don't have the money to spend thousands more on that. We'll probably use more equipment next time and have more needs in terms of equipment, but probably not in terms of the physical fibre.

Next question: about abuse, right? Yes. Very few abuse complaints. We only had about three real cases; the rest were automatically generated reports. Dealing with those tickets can be quite nerve-racking, by the way; a few of them involved MySQL, and figuring those out was a nightmare. But in the end we got three real ones. A follow-up about abuse: do you act on some of them automatically, like fail2ban for example? We receive them all and we process them; we get a lot of emails. We don't act on them automatically, because we think that's kind of rude. We do get them, in theory at least, so we can look at them, but there is usually little we can do. We do provide one service, though: when someone sends us an abuse complaint, we can guarantee that they will never receive another packet from the Congress ever again. We just put them on an access list, you'll never hear from us again, and that usually makes them very happy.

Another quick question: did you look at what traffic or protocols were going through the network? No. We absolutely do not look at protocol data. We don't collect that information, and we really have no interest in collecting data about your packets. The only thing we have is flow data on a whole autonomous-system basis, so at the level of whole ISPs: IP address ranges and which providers traffic goes to, nothing more. Microphone 2, please. Are there any signs of spy agencies monitoring the network? That's a tough one. Not that we could detect; maybe they are simply too good for us to detect. We have a lot of upstreams, and there can always be something happening beyond our border, outside our area of influence. That's all I can say.

Okay, then thanks again to the NOC. We don't have any further questions, so who's next? The power angels, I think. All right, so you're going to talk about CERT, the emergency team? We'll need a different screen for this; it's good that we have the VOC people all here. Right, I want to talk to you about the power network we built. The building itself does not have that much power installed, and unfortunately not as many power sockets as people need. A lot of extra power was needed in the halls, and for the colo we needed more than anywhere else. For that we used our own cables and other material, like distribution boxes and so on.
There were a lot of distribution boxes, with different ratings: 16, 32, 63 and 125 amperes. Hall 3 was where we needed the most, because unfortunately the assemblies with large power consumption, 3D printers and so on, were all placed together there, so we built a large distribution for it. Altogether we installed 9,058 meters of cable and 653 distribution boxes, including about 3,200 normal power sockets. In practice there were probably more, because each assembly brought its own power strips and distributed them further.

This was also the first time we measured our power consumption. In Hall 3 we measured 1,067 kilowatts, and in the colo 31 kilowatts. In total we used 53,093 kilowatt-hours in four days. Last year it was twice as much, so I think it was a greener Congress, although I don't have any data from the building control system to compare against. Are there any questions for the electricians? No? Then please give it up for the next team.

Hi, I'm Sebastian, and I'm going to talk about the Seidenstraße. This year it was smaller than last year: we used only 600 meters of tube, 100 meters less than the 700 meters we had last year. It took three days of work to build up, with four to seven hackers working on it almost all the time, largely the same people who built the Seidenstraße last year. I wasn't one of them; I couldn't get here until the 26th, and when I arrived everything was already prepared, which was very impressive.

We also saw some new trends in capsules. There were a lot more LEDs; we couldn't really photograph them because they were too bright and the room was dark. The capsules also got heavier, so please keep the weight limits in mind, because a lot of LEDs need a lot of battery. I've seen quite a few 3D-printed capsules, or at least capsules with 3D-printed parts; some of them worked and some didn't. Some were commercial pneumatic-tube capsules that you can buy, but they only half worked and had to be rebuilt for our system.

This year we also tried to gather some statistics. We didn't keep transmission logs like we did last year; we tried to count things manually, but that didn't work well, the numbers just didn't add up. So this year everything was counted at the central nodes, and these are the data from there. The traffic was similar to last year. The peak was on day 1 at 19:00, and overall 548 capsules were sent or received. The fastest capsules were around 400 meters per second, which was in a similar range to last year. You can also see some peaks in the graph that we haven't explained yet, which would be interesting to look into, and you can see the sleep times. On day 1 there is a gap because of a power failure: someone turned off the blower that builds the vacuum.

We also experimented with automatic routing points and tried out a few ideas. The main finding is that the tube is less flexible than we expected, so we need more electricity and more motor power. And we don't have a routing protocol yet; even if there were a router, we wouldn't know where to route things to. We're working on that, so if you have good ideas, send us an email, write to the mailing list, or just drop by. If you want to contact us, there is an IRC channel on hackint, #seidenstrasse. There is not much going on there, but we do read it. We also have a mailing list with a bit more traffic; it's a normal Mailman setup.
Just write an email with 'subscribe' in the subject; I guess we all know the drill. We plan to set up a Seidenstraße at the camp next year. It could be really useful there, not just a toy, but we need more helpers for that: with 8 or 9 people it's a lot of work, and I guess we all want to enjoy the camp too. The more people help, the faster we're done. Also a short note: if you want to have a Seidenstraße at home, you can take tube material for free. After we've torn everything down, just call 4451 and we can give you a few lengths of tube. But you have to have some way to transport it yourself, we can't do that for you. And otherwise it would be nice if you could bring it along to the camp, in case we need it again. That's all I have to say.

All right, thank you, Seidenstraße. I have a quick question: no flying Mate bottles? No, no flying Mate bottles. No broken tubes? No. Well, maybe one. There was an automated capsule attack: somebody had put a 2-meter LED strip on a capsule, and I counted something like 300 to 400 capsules in a very short time. Okay, we have a question over there. Have you thought about routing, about MPLS over Seidenstraße? I'm not really a network guy, I have no idea what MPLS is, but there is IP-based stuff over Seidenstraße with NFC tags. And how do you store the packets? All right, there's another question. There was a package inspection point somewhere near the Engel area; what was that? A package inspection point? Yeah, there was a box in the tube, up at the ceiling. Okay, I didn't see that from the front. I'm not supposed to tell you about that one; maybe it's the agencies and their surveillance stuff, or maybe it was just on the wrong network. Let's just say there is a box from the GCHQ crew. Seidenstraße, thanks again.

Okay, thank you, Seidenstraße. And I think we only have the VOC folks left. All right, only the VOC left? Yeah, thank you. I'm Daniel, and I'll give you a short talk about the videos. First, please give a round of applause for our waving cat. And of course, besides her, there are quite a few people involved. It's not just one group doing C3 streaming; the C3VOC is a combination of several teams, FeM and others, who are all very interested in video streaming. We built and prepared all of this, and we improve it at every conference we go to, but the Congress is of course the most challenging conference. We have different teams: infrastructure, streaming and encoding, website and player, and post-processing and releasing. People work within their groups and across groups, and that works pretty well. We also meet a few weeks before the Congress in Berlin, where we actually put all of this together.

So what's new this year? Video on demand, which was really great; it was simply better than before and was received extremely well, with very, very few complaints. For the first time we have full HD in all the files, including the released recordings. And we have free codec support: alongside H.264 we also offer VP8 in WebM.
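(To illustrate what releasing each talk in two codecs roughly involves, here is a minimal sketch driving ffmpeg from Python. The filenames, bitrates and preset choices are invented for illustration; they are not the actual C3VOC release profiles.)

```python
# Sketch: encode one master recording into an H.264/MP4 and a VP8/WebM file.
# All parameters here are illustrative, not the real release settings.
import subprocess

MASTER = "talk-master.ts"  # hypothetical mixed master recording

def encode_h264(src: str, dst: str = "talk.mp4") -> None:
    """Produce an H.264 video with AAC audio in an MP4 container."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-preset", "medium", "-crf", "21",
        "-c:a", "aac", "-b:a", "128k",
        dst,
    ], check=True)

def encode_vp8(src: str, dst: str = "talk.webm") -> None:
    """Produce a royalty-free VP8 video with Vorbis audio in a WebM container."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libvpx", "-b:v", "2M",
        "-c:a", "libvorbis", "-b:a", "128k",
        dst,
    ], check=True)

if __name__ == "__main__":
    encode_h264(MASTER)
    encode_vp8(MASTER)
```

Repeated for every recording and every target format, this duplication is what multiplies the raw recordings into the many hundreds of hours of video the encoding cluster has to chew through.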
Our release pipeline now supports releasing to YouTube as well, so everyone who doesn't manage to find media.ccc.de can now watch the talks on YouTube. Yeah, that's about the amount of applause the subject deserves. We have subtitles on the web player, as well as in the room, and we got very good feedback on that. We have GoPros for small objects now: before, when there was small hardware on stage you hardly saw it, you had to zoom, and this year with the GoPros you can shoot those objects much better and really show them. We also had backup recordings on SSDs; without them we would have lost some recordings. And we gradually improved lots of other things. We can really count on our distribution network, because it's just amazing what people contribute. And we had DVB-T broadcasting equipment in this hall, with an official broadcasting licence. I think most of you didn't really notice it; unfortunately we didn't advertise it enough, but theoretically you could have watched the talks over DVB-T.

So how does this actually work? Let's start in the middle. We have the camera sources and the slides, which are mixed live; you might have seen the live view where the slides are shown next to the speaker, and we had that in all rooms for the first time this year. From there it goes into encoding, which splits into two paths: one feeds the streaming relays, and the other is the actual production line, where we cut the videos, add the metadata and produce all the other formats, everything from full HD downwards. All of our data was sent to Berlin for that; thank you so much to the people who made it possible to have the encoding cluster in Berlin, because without that the setup wouldn't have worked and nobody would have seen anything. To finish the production line, we then push everything to the CDN, which runs on MirrorBrain. Those of you who watched live saw it through RTMP via the relays, and we had one relay that offered a time-delayed view, because we could simply reuse the HLS snippets and serve them to you with a delay.

Now for the subtitles. We had 40 helpers, 40 Angels, covering up to three tracks, and we got very good feedback from hearing-impaired and deaf viewers. The service was very important to us, and we are glad we were able to offer it.

What else happened? You may have heard something about washing hands; we had an incident in the bathroom. But the most remarkable thing is that the downtime we had wasn't really about hardware; the hardware worked pretty well this year. And if you watched the streams, most of them looked very good this year.
That is the feedback we got through most of the channels, anyway. About the downtime that did happen: we are the Video Operations Centre, with about 25 core people, and on one afternoon, I think it was day two, we had nine people down for four hours. Many thanks to the CERT for helping us get them back on their feet again; thank you. As I mentioned, the DVB-T was technically still a test, so it was a bit flaky; it will probably be a lot more useful at the camp. We also had a front-end outage because of a dependency; it only affected the front end, so people who were already watching could keep watching their streams. We'll keep testing DVB-T for next year and see how it goes at the camp.

A few statistics. With the NOC's numbers we can't keep up, of course, and we won't even try, but we do have statistics. We delivered 80 terabytes of streams, and about nine terabytes went out through the CDN, with a great many views. We have about 4.5 terabytes of recorded video, roughly 180 recordings, and because every recording has to be duplicated into the different encoding formats, that comes to almost 900 hours of video that had to be processed by the cluster. Okay, I don't really need to explain that in detail. At the peak we had almost 9,000 concurrent viewers, and at that moment we were pushing out 10 gigabits of video and audio traffic. Another interesting thing: we also streamed the Sendezentrum. There is no video on demand for it, because that was only set up for the main rooms, but it might be a good idea to add the Sendezentrum next time. What's amazing is that the NSFW late-night show there had 3,000 viewers, which is astonishing; I thought that was wonderful. And one more thing: we are great lovers of cat content, which is why we have the waving cat, and suddenly, while we were innocently idling and working away, a real live waving cat showed up. Thank you for the donation.

Okay, so that is basically it. Finally, I would like you to give a big hand to everyone who helped make this happen. It would not have been possible without the people behind the cameras, at the video mixers, and everyone who organised things so that you and the people on the internet can actually take part in this conference. Thank you, video Angels. Okay, and if there are any questions: it's question time again. Microphone 2, go ahead. I'm wondering, for the DVB-T, do you know what the range was, how much power you were transmitting with? We transmitted with about 500 milliwatts, which covered roughly levels one to three of the building, because the Bundesnetzagentur licence only allows distribution within the building. We had a few problems in the last few days and didn't have enough people to really promote DVB-T, so this time we just wanted to test it. Go ahead with the question from the internet. Okay, a question from the internet? The internet mostly wants to give you guys a big shout-out, but there was also a question about the latency of the streams. It depends on how you watch: RTMP is faster, by about 15 seconds, and HLS has more delay; I can't say exactly. Alright, thank you. Microphone 2, please.
Also about DVB-T: I was wondering which resolution you transmitted, and whether there will be HD content at the camp. So the question is whether we support HD over DVB-T. We might do it; we didn't do it here. Maybe using DVB-T2, let's see how that works out. In theory it should also be possible with plain DVB-T, but then we would have fewer channels. Okay, I think that's it. Thank you, and a big round of applause for all the people who really made this Congress happen.