So yeah, the infrastructure review of the 33C3. Please welcome them with a lot of applause. Hello everyone. This is rixx, my name is Rami. We are the technical part of both the presale and the cash desk team, and we want to show you some interesting things about what we did during and before this event. We will start with the presale. As you know, all of you got your tickets before the event; we sold no tickets on site this year. So the presale was very important once we knew, from last year's experience, that the event would sell out very fast. As you probably noticed, we had a two-stage presale, split into a voucher stage and an open sale. We implemented the voucher system because we wanted to enable all the people who make Congress possible in the first place to be here. That is of course the angels, because we need all these angels to build this, but it is also all the other parts of the community that we need here to build this experience. Because we do not personally know all of them, we implemented a replicating voucher system: once you got a voucher and paid for your ticket, you got another voucher to share with a friend, from a different group, from the same group, from anywhere in the world. So we used the Erfas of the CCC, the Chaostreffs, and hackerspaces all over Europe and beyond, and other groups, to spread those vouchers. On the software side, the presale ran on pretix, an open-source ticket sales application based on Python and Django that we originally designed for the smaller MRMCD event. It is open source, on GitHub, and it has a very flexible plugin-based architecture that enabled us to implement this voucher replication system. The initial hardware was one dedicated server with an 8-core processor and 32 gigabytes of memory.
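The replication rule described above ("pay for your ticket, get a fresh voucher to pass on") can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the actual pretix plugin; the class and function names are made up for this sketch.

```python
import secrets

class Voucher:
    """Minimal model of a replicating voucher (illustrative, not the real pretix schema)."""
    def __init__(self, code, parent=None):
        self.code = code
        self.parent = parent      # the voucher whose redemption spawned this one
        self.redeemed = False

def redeem_and_replicate(voucher):
    """Once the ticket bought with this voucher is paid, mark the voucher
    as used and issue one fresh child voucher to hand to a friend."""
    if voucher.redeemed:
        raise ValueError("voucher already used")
    voucher.redeemed = True
    return Voucher(code=secrets.token_urlsafe(8).upper(), parent=voucher)

def chain_length(voucher):
    """Length of the replication chain ending in this voucher
    (the talk mentions a longest chain of 15)."""
    n = 1
    while voucher.parent is not None:
        voucher = voucher.parent
        n += 1
    return n
```

Tracking the parent link is what makes the "longest voucher chain" statistic quoted later in the talk computable at all.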
After the voucher phase, which ended in late October, we went into the open sale in early November. We split the open sale into three rounds and issued the tickets on three different days, to give fair chances to people who work shifts or work late and could not access the shop at certain times of day or on certain weekdays. As you know, in the first round all the tickets were bought by toasters, and one of the toasters is here. Still, we're very sorry that you saw a lot of error messages on that day. The load on the server was over 400 HTTP requests per second for a period of a few hours, even though the tickets of the first round were all gone in about 13 minutes. To cope with that load, which increased in the later rounds, we implemented a queue system that let us handle it quickly without restructuring the whole system. We put a second dedicated server as a reverse proxy in front of the original one and used it to limit the number of people actually reaching the real ticket shop. If you visited the page on that day, you were presented with a page saying "join the queue"; you pressed the button and got a queue position, say 360, meaning 360 people were in front of you. The position went down over time, and we let a few tens of people per second into the actual ticket shop to keep the load manageable. We implemented this queue inside nginx using embedded Lua configuration. And in the second round, oh yeah, you could play Snake on the waiting page, and if you want to embed that Snake game somewhere else, it's on GitHub as well. Also, we're very thankful that the VOC, the video team, offered to share some server resources: they hosted the static files, the CSS, JavaScript and images, for us. That is the number for one of the days, I guess: 20 gigabytes just of that.
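The admission logic of such a waiting room can be sketched independently of nginx. The real implementation ran as Lua embedded in nginx; the following is a toy Python model of the same idea (first come, first served, with a fixed admission rate), with invented names and numbers.

```python
from collections import deque

class TicketQueue:
    """Sketch of the waiting-room logic: visitors get a position in line,
    and a fixed number per second is admitted to the actual shop."""
    def __init__(self, admit_per_second=10):
        self.admit_per_second = admit_per_second
        self.waiting = deque()          # user ids in arrival order
        self.admitted = set()

    def join(self, user_id):
        """Register the user (idempotent) and return how many people are ahead."""
        if user_id not in self.waiting and user_id not in self.admitted:
            self.waiting.append(user_id)
        return list(self.waiting).index(user_id) if user_id in self.waiting else 0

    def tick(self, seconds=1):
        """Admit up to admit_per_second * seconds users from the head of the line."""
        for _ in range(int(self.admit_per_second * seconds)):
            if not self.waiting:
                break
            self.admitted.add(self.waiting.popleft())

    def may_enter_shop(self, user_id):
        return user_id in self.admitted
```

The point of the design is that the expensive Django shop only ever sees the trickle of admitted users, while the cheap queue page absorbs the thousands of requests per second.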
In the second round we peaked at more than a thousand requests per second, which we handled quite well: there were very low error rates on the queue page, and once you got through the end of the queue, you had a practically error-free experience in the actual shop. In the third round we peaked at 3,000 requests per second, and unfortunately we had a bug in the queue software that led to nobody actually getting a queue position. We had to restart the queue system at 10:15; nobody had bought a ticket by that time. We had announced the round about 10 minutes early; the original start time was 10 am. From then on it went like the second sale, with very low error rates. We have some figures from the presale that might be entertaining. The first one was not that entertaining for us: we got 1,763 support emails from you, which we worked through with four volunteers on the team. I want to express a special thanks to Martin, who is not even here. Those vouchers replicated, and the longest voucher chain you built was 15 vouchers long, which we think is quite a lot considering how long the voucher phase was. The next one is something we find very interesting: the average time it took for a ticket to be paid was 8.5 days. If you only look at the tickets bought without a voucher, it's 9.2 days, and if you only look at those with a voucher, it's 6.8 days. So the replicating system was very useful for speeding up payment processing. We are quite disappointed by the next statistic: 20% of you used Gmail to sign up for your ticket. We expect that to get better next year. Third place is taken by Posteo. And at about 2% we have ccc.de domains, subdomains of local hackerspaces and things like that. So I think we can improve here. With that, I'm handing over to rixx for the cash desk on site. Right, thanks.
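The three payment-time averages quoted above are internally consistent, and they even let you back out a number the talk doesn't state: the share of tickets bought with a voucher. This is just the weighted-average identity, solved for the weight; the resulting share is an inference, not a figure from the talk.

```python
# Averages quoted in the talk (days until a ticket was paid)
avg_all, avg_without, avg_with = 8.5, 9.2, 6.8

# The overall average is a mix of the two groups:
#   avg_all = (1 - share) * avg_without + share * avg_with
# Solving for the voucher share:
share = (avg_without - avg_all) / (avg_without - avg_with)
print(f"implied voucher share: {share:.1%}")  # roughly 29%
```

So, if the quoted averages are exact, a bit under a third of the paid tickets came through the voucher system.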
Right. Because just implementing a new presale shop and a fancy replicating voucher system was not quite enough for us and we were a bit bored, we decided to rework the cash desk software itself, which seemed like a good idea at the time. We called the system C6SH; obviously the old system was C4SH, and you know, version numbers go from four to six regularly. It is also based on Python and Django, and we will open-source it on GitHub in January, once we remove the last ugly event hacks, hopefully, and I'm sure the nice people at the relevant Twitter accounts will publish that fact in time. What we implemented comprises the software that actually runs on the cash desks you see when you enter the congress center, and it also handles back-office things like figuring out how many wristbands were given out and whether the money in the cash desk is actually correct or, you know, in the right ballpark. We handled this event with five cash desks, as every year. We had 22 different cash desk angels, a bit fewer than last year, because we actually didn't have that much work this year, and we had eight troubleshooters, the nice people at the side who were able to help you if you, you know, lost your wristband, lost your ticket, forgot your email address, things like that. Don't ask. We peaked at 27 transactions per minute on day one, at about, I think, 10 a.m. or noon. That means about five transactions per minute per cash desk, so a transaction every 12 seconds, which is really, really good. For reference, last year it was 20 transactions per minute, which is also really good considering that last year we actually sold tickets on site, so there was cash handling involved, which takes a bit longer. And the maximum waiting time in the queue in the building this year was five minutes, again due to us not handling any sales on site: everything we had to do in the regular case was scan the QR code, give you your wristband, and off you go.
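The per-desk arithmetic above is worth checking. Dividing the quoted peak by the number of desks gives slightly more than the rounded "five per minute, every 12 seconds" in the talk:

```python
peak_per_minute = 27          # peak transactions per minute, all desks combined
desks = 5

per_desk = peak_per_minute / desks   # 5.4 transactions per desk per minute
seconds_per_tx = 60 / per_desk       # about 11.1 seconds per transaction
```

So at the peak, each desk was actually handling a transaction roughly every 11 seconds; the "every 12 seconds" in the talk comes from rounding 5.4 down to five per minute.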
That was about 17 minutes last year, which is also really good for handling money. And the 17 minutes only counts the time in the active queue: if you arrived two hours early to get a day pass, you waited for two hours until we opened. Right, that's about it from us. Thank you for your interest, we'll see you next time, and I'm handing over to the NOC now. Thank you, thank you.

And a huge welcome to the network review of the 33rd Chaos Communication Congress. Unfortunately, we have to start with an apology. We realized after last year's Congress that our network simply didn't deliver the Fritz!Box experience that you deserved. Clearly, we had let you down. So for this Congress, we went back to the drawing board and didn't stop until we came up with something radically new, something that would profoundly change the networking experience of the demanding conference attendee. Something in line with our vision. We realized that networks are the most frequently used internet infrastructure at Congress. After all, we expect our networks to keep us connected to the Facebooks and Yahoos of our peers. We know that anyone can provide you with a connection to your daily digital world, but we are striving for more. As the leading internet provider within the CCC ecosystem, we want to provide you with a remarkable, easy-to-use and highly scalable network from renewable local sources. And we failed to deliver this last year. Here is why. This is the network design we used last year. As you can see, the network is clearly unbalanced. There are crooked lines in there, crooked links. Overall, it's not very nice to look at. The result, of course, was a disappointing experience for you, the user. So what we have done this year is: we have planned the exact same network, but with a new drawing tool. Look at those beautiful orange and blue lines with pleasingly angled links and naturally balanced distribution points.
It is amazingly simple and just beautiful to look at. But we didn't stop there. Ever since we started celebrating the network buildup in Hamburg, we have always had a dark fiber from our data center, IPHH, here into the CCH. But this year, our incredibly professional and amazing engineers found a completely new way to light this fiber. As you can see in this diagram, the fiber is now blue and slightly thicker than last time, which makes it a lot better, of course. And also, it now runs 100 gigabit ethernet, so that's good. But that still wasn't enough for us. Deutsche Telekom upped their sponsoring and enabled vectoring on their fiber, so they were able to provide us with an additional redundant 100 gig fiber uplink into this building. With this simple and innovative uplink redundancy, we knew we needed to redesign the whole core network infrastructure from the ground up. This year, we distributed the available bandwidth over not one, but two powerful carrier-grade core routers. But with this groundbreaking approach, we suddenly realized that we exceeded the physical limitations of the building's fiber infrastructure. In other words, we only had multimode fiber available, which is orange. Symbol photo. But for 100 gigabit ethernet, we needed single-mode fiber, which is yellow. Luckily, during a week-long team outing of our dedicated engineering department, we found a brilliant solution for this complicated issue. We pulled not one, but two new single-mode fibers through the building, in order to provide you with the network experience that you deserve. Now, with this innovative and strong backbone in place, we identified the second problem in our previous network design. Some of our users could only connect their equipment to a measly fast ethernet port. In line with our vision, we can now provide gigabit to all users on all access switches. But there's one more thing.
In some places, we were even able to uplink the switches with a 10 gigabit fiber connection to the table. And as you can see here, all switches were carefully pre-staged by our safety-aware engineering team. After having solved all those annoying cable issues, we felt we were held back by that antiquated concept of wired cabling, and decided that wireless is the future. So from this point forward, we put all our energy into reinventing the wireless experience for the advanced Congress attendee. We felt that the old approach of suspending access points from the ceiling was keeping the network too far away from you, the user. So this year, we finally brought the network closer to you: in lecture halls one and two, you will now find access points under your seats. But don't worry, in the unlikely case of a loss of connectivity, network cables will automatically be deployed from the ceiling. So to summarize our efforts: we had 180 gigabits of total uplink capacity. The peak uplink usage was over 30 gigabits per second, both in and out. We had almost 8,000 Wi-Fi users at peak times, and almost 80% of them were on the five gigahertz band. We had 189 access points deployed and 121 access switches with a combined 6,160 ports. And clearly, all the graphs went up and to the right. So in conclusion: we terribly failed you last year. However, through teamwork and dedication, with the help of an amazing NOC team, a helpful help desk, and with the support of all these lovely companies, we have managed to deliver, because as you can see in this scientific diagram, it went up to 11. Thank you. So, we realized that we didn't actually have any facts in this talk, so we have a bit of time for a few questions. Apparently we have already answered all the questions, which is quite good. There's one question there. Sorry? Please repeat the question. So the question was which new drawing tool we used, and it's called Inkscape.
I realized that the two of you will have to give this presentation from now on in perpetuity, right? You're on the hook now. Well done. Thank you, Neil. The next question, I assume, was: what were the most popular Wi-Fi passwords? We did that statistic, and it was about the same as last year, so we figured it's not funny anymore. But yeah, it's the same thing: 33C3, fubar, and so on. Yes? Hi, was it because of you that the CCH was for a moment located in London or Dublin? So, the issue with geolocation is that there are a lot of different services, and some of these services work by looking at the MAC address of the wireless access point. We use these wireless access points at many different events, for example at this summer's hacker camp in the UK, EMF Camp, and at previous events too. So, yeah. Oh, so it's not a feature, it's a bug. Yeah, that's a bug, but it's hard to solve: these systems are self-learning, but it takes a while. So you will get located in Hamburg at some point. And since the APs are used everywhere, it's unfortunately nothing we can fix as an additional feature. The next time you're at a conference where these access points are used, you will be in Hamburg. So that's that. Yes. OK, so people on the internet are wondering: since Juniper dropped you last year, where did you get the network equipment this year? Juniper decided to not let us down again, and they supported us in a great way this year. They gave us all the 100 gig equipment we could ask for, and I think we got 1.2 metric tons of equipment again. So that was really smooth. How much of the traffic was cat content? Well, as you know, we do not do deep packet inspection, but I assume all the content is cat content. So, I think we have time for one more question if there is one. Sorry? Yeah, what were the IPv6 statistics?
I think it's 5,000% or something, because at one point our monitoring broke. But what was it? I think the addresses were four times as long as IPv4? Yes, that. Is that a statistic? That number is on the public dashboard, at dashboard.congress.ccc.de; you should be able to find it there. We didn't include all those graphs because we figured we could use the time for doing a talk without any content, and you have these graphs anyway. Yes. Did you get any abuse complaints? So, well, there was the usual amount of automated emails. There were a few that weren't automated, and some were actually about something serious, but it was less than last year. I think there were only three calls in total. So yeah, you obviously behaved better. All right, I think that's it. Okay, one more question. The question was how many access points crashed, and I don't believe that any access points crashed. Is that correct? Well, yeah, apparently none of the access points. We had a few issues with the controller at one point, but that was worked around. All right. Yeah, there was one person still standing, one very last question: how many DDoS attacks did you have? Outgoing ones, not incoming ones, which actually is quite annoying, because you really should use the bandwidth for better things than shooting other people on the internet with packets. That's just silly. Yeah, we don't endorse this kind of behavior. We think it's idiotic, similar to running around breaking infrastructure and toilets or something. It's just stupid. So, please don't do that. Okay. All right, we'll hand it over to the VOC now, the lovely people who brought you all the streams and the video recordings. Okay, okay. Is this thing on? So, it looks like nobody has used this HDMI socket before. Oh, come on. They're all doing an awesome job, just bear with them. So, after some little difficulties, let's start. My name is Andy.
I'm from the C3 VOC. And I'm Jenny, also from the C3 VOC. Welcome to the infrastructure review of the VOC. So, this time the opening actually worked, and we will hopefully also see some slides in a moment. It looks like while our technical stuff worked pretty nicely during the Congress, it is failing us now. While we wait for the slides, I can give you some facts: this time we actually managed to get the opening working right from the start, with audio and subtitles. Maybe we'll see the slides in a few seconds, I'm not sure; it's a live view.

[prolonged technical difficulties with the slides; the presenters ask the mixer to switch to the other laptop]

Alright, so I guess I'm jumping in with some GSM stats meanwhile. We have a sad GSM spectrum situation. For several years at Congress we've run a test network, which was very helpful. And last year, in spring, the Bundesnetzagentur gave away the last free frequencies, so it's not really possible anymore to apply for a test network license the way we've done before.
Last year, we got some help from the operator who had acquired this new spectrum. But this year they were using that spectrum themselves, and it looked pretty bad for a long time; we didn't know if we were going to be able to have a network at all. In the end, we actually did get to loan five ARFCNs from Deutsche Telekom. Thanks a lot. I don't know what it took to replan the cell phone network in downtown Hamburg at Christmas to make that happen, but pretty cool. Unfortunately, it came too late to print any SIM cards with this year's theme, so, sorry about that. And it was also so late that we did not have a lot of staff working on this. But we managed to set up even more BTSs than last year: nine in total this year, in these locations. Six of them sysmoBTS units, three of them ip.access nanoBTS units; we'll get back to that in a little bit. All of this was running in the 1800 MHz band. We also did some careful experiments. Last year, we started using GPRS on one of the eight timeslots in each BTS; this year we set it up a little differently. We activated it yesterday, with dynamic timeslot assignment, but only on the sysmoBTS units; we'll get back to that in a bit. The configuration was such that depending on whether you, the users, were making calls or wanting to do GPRS packet traffic, the timeslots got allocated differently. So whereas last year we had a fixed configuration where one eighth of the available capacity was used for GPRS packet traffic, this time it was dynamic, which meant we had a lot more possible capacity for GPRS. But it ended up not being used so much; we'll get to the numbers right now. We had 3,750 subscribers signed on with SIM cards, including SIM cards from lots of old events, and sadly only 300 new SIM cards that we had left over from last year. About 1,200 created calls and 400 established calls.
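The difference between the fixed and the dynamic configuration can be sketched with a toy model of one GSM carrier: eight timeslots, of which one is reserved for signalling, voice calls take slots on demand, and whatever is left over can carry GPRS. This is a simplification for illustration (real dynamic PDCH handling in OsmoBTS is considerably more involved), and the class and method names are invented.

```python
class Carrier:
    """Toy model of dynamic timeslot assignment on one GSM carrier:
    8 timeslots, voice has priority, idle slots serve GPRS."""
    SLOTS = 8

    def __init__(self, reserved_signalling=1):
        self.voice = 0                       # timeslots currently carrying calls
        self.reserved = reserved_signalling  # e.g. one slot for BCCH/SDCCH

    def start_call(self):
        """Allocate a voice timeslot; returns False if the call is blocked."""
        if self.voice + self.reserved >= self.SLOTS:
            return False
        self.voice += 1
        return True

    def end_call(self):
        self.voice = max(0, self.voice - 1)

    def gprs_slots(self):
        """Whatever voice doesn't use is available as packet channels.
        In the old fixed setup this would always have returned 1."""
        return self.SLOTS - self.reserved - self.voice
```

With no calls active, the dynamic scheme offers up to seven packet slots instead of the single fixed slot of the previous year, which is where the "six times as much potential capacity" mentioned below comes from.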
The difference: a created call is when you dial, and an established call is someone actually picking up. There can be lots of reasons for a call not getting established: the phone you're calling is just not online, there are no free channels, et cetera. Text messages: a couple of thousand, roughly 3,200 sent and 2,800 delivered. That's pretty good; not many failed to arrive at their destination. The GPRS numbers are quite a bit lower than last year, even though we had six times as much potential capacity dynamically allocated: about a third of the bytes received and a fifth of the bytes transmitted, measured on the network side. So you were better at using GPRS last year. Then we have some fun ip.access bugs. This concerns the nanoBTS, the three units I mentioned. They're not completely stable, especially when we turn on GPRS. These are some error messages they send out, in plain text, to the OpenBSC base station controller we're using. For those who know C or C++: it's an assert checking that this queue's magic value is either the allocated magic or the not-allocated magic. So the queue is, I guess, not allocated, but it's also not not-allocated. I don't know what they've done there. I want to say thanks to the heroes who ran this network. Because we got the frequencies so late, many of the people who had helped before weren't so excited about joining, or had already made other plans. They did help with setup and teardown, but essentially the operation of this network was done by three people, two of whom were doing this for the very first time this year. And I want to say thank you to you, the people who used this network, because it helps a lot to find issues and improve the Osmocom software, OpenBSC. Thanks.
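The failing assert is easier to appreciate with a small sketch of the magic-value pattern it implements. The idea: a structure carries a known sentinel value in each of its legal states, and a sanity check rejects anything else as memory corruption, which is exactly the contradiction the firmware reported. The magic constants and names here are made up for illustration; the real nanoBTS firmware values are not public.

```python
# Hypothetical sentinel values; real firmware would use its own constants.
QUEUE_ALLOCATED_MAGIC     = 0xA110C8ED
QUEUE_NOT_ALLOCATED_MAGIC = 0x0FF10ADE

class Queue:
    def __init__(self):
        self.magic = QUEUE_NOT_ALLOCATED_MAGIC

    def allocate(self):
        self.magic = QUEUE_ALLOCATED_MAGIC

def check(queue):
    """The sanity check from the anecdote: the magic must be one of the
    two known values; anything else means the structure was corrupted,
    i.e. the queue is neither allocated nor not-allocated."""
    assert queue.magic in (QUEUE_ALLOCATED_MAGIC, QUEUE_NOT_ALLOCATED_MAGIC), \
        "queue neither allocated nor not-allocated: memory corruption"
```

When the assert fires, the only remaining explanation is that something scribbled over the structure, which is why the error message sounds so paradoxical.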
So in the end we got it working; looks like Murphy got us here instead. Okay, as already mentioned, the opening this year worked with audio, with translations, with subtitles and everything. Great. This year's streaming CDN: we had 17 edge relays and three in-house relays, Deutsche Telekom customers were fed directly as at the 31C3, and we delivered over 130 terabytes. This time we decided to support an additional translated language in halls 1 and 2, with several codecs, and with the Sendezentrum and so on we got very, very many video feeds, which we all had to mirror. Also, for the first year we used MPEG-DASH. It was only a beta test; it worked starting day two. There were some FFmpeg extensions involved, and the patches are incoming. Only about 30 people per day used it, but now we know how to do it. Great. We used 10GE everything, everywhere this time: basically four 10GE boxes and one 40GE, though we didn't use it to full capacity. So please watch our streams; soon we will start producing in 4K. Here are statistics for the stream viewers. We have three peaks, which are the Fnord-Jahresrückblick, Methodisch Inkorrekt, and the CCC-Jahresrückblick. Actually more people watched the Fnord-Jahresrückblick than Methodisch Inkorrekt, but Methodisch Inkorrekt had higher bandwidth because there was more movement, more stuff flying around. At our fifth room, the Sendezentrum, we tested our new intercom setup based on a Raspberry Pi. Everything over Ethernet worked quite well, and we now also have intercoms for our decentralized events. Yeah, great. Also, for the first time we used Voctomix, the software video mixer we developed for Congress. It was used in Hall 6 and Hall G, and it's on GitHub, so please use it. And we were also finally able to eliminate Flash.
For the time between the start of the live stream and the publishing of the first finalized recording, we have this service called ReLive, where you basically get the stream dump. For some browsers, we had to use Flash there, because HLS playlists and so on are not supported natively everywhere. But on day 2, there was a nice guy who said, hey, here is this JavaScript HLS player, why don't you use it? And our streaming team said: yeah. Great. Also this year, for the first time, we had an assembly where you could learn how the C3VOC operates, how to use and set up Voctomix, and how to build your own C3VOC setup, basically, for small events where we can't go. We had several self-organized sessions about Voctomix; the HDMI guy had a session there too, and so on. We will continue this, and it was nice to talk to you. We had overwhelming interest from the angels: in the first introduction meeting for the video angels, we basically happily filled up this hall. There were over 250 angels, and there were about 12 hours of angel shift work per talk. So each talk, each session in the schedule, had 12 hours of work behind it. Like last year, in halls 1 and 2 we had live subtitles generated by angels inside the room. You can basically read the numbers yourself. We also added live subtitles in this room, for Methodisch Inkorrekt, and you see the numbers there; I actually think 507 strokes per minute is quite good. But more help is needed, because not all of the live subtitles can be reused for the releases. So we need your help to subtitle all the talks in all the languages. If you want to help, go to c3subtitles.de, where you can see how to subtitle the talks. Really, please do that. Here you see a screenshot of the interface the angels used to generate the subtitles.
There were six subtitle angels working simultaneously for Methodisch Inkorrekt. Nearly the last slide. I don't know if you are aware of the Markov bot, which generates random tweets; we find it quite nice. All talks will go to media.ccc.de and the YouTube channel with the same name. Please don't use the other ones; if you're subscribed to them, please unsubscribe, and so on. Just read the blog post from last year about this YouTube problem. And we also have to thank our sponsors, because... yeah. So, any questions? Great. So, who's next? And by the way, this was a live presentation over Wi-Fi. The Wi-Fi died when we came on stage, and now we're using UMTS. This cable is not connected. Why? Oh, nice, the microphone works. That's a good thing. Ah, that's my slide. A little off-center, but I think that's fine; better than no slide at all. So, I'm Sebastian from the Seidenstraße team, and this year we also decided that we needed to improve your user experience. Last year we had the field telephones: you had to call when you wanted a capsule routed, and we had to do most of the routing by hand. And to be honest, this is a hacker conference; we don't call people, we press buttons. So we added more autorouters. We had three of those routers, spread out all over the building. This was also one of the largest installations we have done so far, because, hey, it's the last time in Hamburg, we know the building, we know all the secret places where we can sneak the pipes through, and we thought: okay, for the last time, let's use all of it. It was really ambitious; we had one autorouter when we started to set up here, and all the other autorouters were basically built on site with the materials that were available. The next thing we needed was to get across these huge fire doors.
On the escalators leading from the Sendezentrum up into hall 1, there's this huge fire door that's retracted into the floor, and we were basically not allowed to cross there with our pipe: the door cannot cut the pipe, so the pipe would block the door, and in case of fire we would have a problem. So we decided to build an emergency disconnect that works electromagnetically. In the normal case, the electromagnets have power and the tube is held in place by them; as soon as the power fails, for example because the door opens and triggers our switch, the electromagnets turn off and the tube comes crashing down, hopefully hurting no one. And since we already had automatic routers, some people decided we also needed an automatic capsule scanner, so that you can address capsules via Data Matrix codes that you stick onto them. We did not use the scanner this year; it worked, but we had some trouble connecting it, and I'm going to talk about that in a minute. The other thing we added was a new user interface. Instead of field telephones, we wanted to provide you with a keypad: just type the number of the station you want to send something to, press enter, wait until the display tells you to stick the capsule in the tube, suction starts, and everything is fine. Yeah, so we thought. The problem was: I basically spent the last two months working on parts of the electronics. There are a lot of custom bus transceivers in there, because we use our own bus architecture, and I worked really hard to finish them in time. Since I arrived here pretty late, I decided to send them ahead using DHL, and somehow DHL seems to be really afraid of us; they sabotaged us. The parcel held the bus transceivers, about 20 of them; I had 10 finished in advance and sent them here, and yeah, those 10 bus transceivers arrived yesterday.
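The keypad-plus-autorouter idea above boils down to routing by destination number: each router holds a table mapping station numbers to output ports, and a capsule follows the chain of routers to its target. The following is a toy sketch of that logic; the station numbers, port labels, and class names are all invented for illustration.

```python
class Autorouter:
    """Toy model of a Seidenstraße autorouter: each output port leads
    either to a station or onward to another router."""
    def __init__(self, name):
        self.name = name
        self.routes = {}            # destination station number -> output port

    def add_route(self, station, port):
        self.routes[station] = port

    def route(self, destination):
        """Pick the output port for a capsule addressed to `destination`."""
        port = self.routes.get(destination)
        if port is None:
            raise KeyError(f"{self.name}: no route to station {destination}")
        return port

def send_capsule(routers_on_path, destination):
    """Return the sequence of output ports a capsule takes on its way."""
    return [r.route(destination) for r in routers_on_path]
```

Whether the destination number comes from the keypad or from a scanned Data Matrix code on the capsule makes no difference to this routing step, which is presumably why both input methods could share the same autorouters.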
So next time we'd better send it via Seidenstraße; it's faster. In the end we had to work with what was available, which was the six or seven bus transceivers I had kept back because they didn't work when I packed the parcel and I still had to fix them. So I spent the morning of Christmas soldering in my basement, which was fine by me; I'm not a huge Christmas fan anyway. Let's talk about some numbers. We had 10 stations. Each station needed a transceiver, so we didn't have enough transceivers for all our stations: first problem. Next problem: each router also needs a transceiver, so we were another three transceivers short. We also used about 1,000 meters of tube, one kilometer, which is a bit more than last year, but I don't think it's the most tube we've ever used. We are not quite sure about the numbers, because we basically counted the leftover rolls of tube and then asked ourselves how much tube we ordered in the beginning and whether anything was left over, and yeah, it's all a bit fuzzy. Since we had these huge hardware problems to start with, we basically worked night shifts from day zero to day one and from day one to day two to get anything going at all. At least we had our automatic vacuum cleaners going at that time, and our automatic push-pull switch, and one router. The other router had mechanical problems, and around that time, for some reason, several Arduino microcontrollers failed, leaving us without enough microcontrollers for all our stations. It was 2016 all over again: a lot of hardware that was really dear and important to us just died for no reason. Another important fact: we could not count how many capsules we routed, because I used the hardware that was meant for the statistics node for something more important. So there are just three question marks; from my estimation it must be something around 100 capsules.
Our signaling bus is becoming more and more important, because we are going to automate the whole network again next time, and this was the first real test with many devices on the bus. It worked reasonably well; there were some odd things in the beginning, which we were able to fix. We used about a kilometer of cable, because everywhere we put a tube, we also needed to put a cable. Luckily we've got these really strong field telephone cables, which we can also use to tie the tube to the ceiling. So instead of ropes like last year, we could use our cable to tie the tube up, which looks a bit dangerous and not really well thought out at first sight, but this cable is really robust and it worked pretty well, so that's fine. It was also the first time we tested this bus with many transceivers that were far apart; we had up to 400 meters between transceivers, and in the end it all worked out fine. We even had some of the keypads running and some automatic routing running. The automatic routing part was delayed mainly because our network control software couldn't be tested: we couldn't get the hardware on site in time to run integration tests. And you know how these things go: you write a specification, somebody else writes another component according to the specification, and in the end everything breaks as soon as it's connected. So we were a bit unlucky this time.
So in the end we made it work for you, just in time to tear it down again. And that's an important point: we could use some help tearing everything down and putting everything back into storage. It would be really nice if some of you could spare some time after the closing event to help us tear down; that would be really cool. We would also like to improve things, because this year's user experience wasn't that good at all. Next year we want everything that we planned for this year to run smoothly. We've already distributed the hardware among us, so it won't happen again that the whole hardware is missing just because my parcel is late. We also need people who can code, and we would appreciate some people to help us with operations next year. So if you want to participate in the Seidenstraße, try our mailing list, follow us on Twitter, try IRC. We also have a GitHub account with all the code in it, so if you want to have a look at the code and see if there's something you can improve, take a look there; we'd really appreciate it. That's it from my side. It seems that Niza, Pogna and Serd are about to speak.