Yes, so, as you're aware, with the move to Leipzig we were entering a whole new area for Congress. Everything is obviously quite large now, everything is bigger. Just to give you an idea: here's the size of the CCH, and next to it the size of Messe Leipzig. So in this presentation we would like to give you an insight into the new challenges that came with the growth in size, like acquiring bandwidth, redesigning our backbone, and rolling out access and Wi-Fi over a much larger area than we've ever done before, all while adhering to new health and safety regulations.

So, given the sheer size of the venue, we needed to radically scale up our network and put our top priority on upscaling safety. We started in summer with a trip to Messe Leipzig and learned all about the existing infrastructure. We were excited to discover that this venue's impressive fiber network consists solely of single-mode cabling, which is our favorite type of fiber. As you can see, there were also plenty of free fibers available for us to use, but at the same time we were overwhelmed by the amount of cabling. We needed to find a way to reduce all that information down to a usable and understandable format we could work with. So, obviously, the only sensible choice was to turn to future technology and engrave a digestible cabling diagram onto a plate of lasagna. Having feasted on all this information, we were able to see a clearer picture of the network we were going to build for the 15,000 participants of the new era of Congress. So, finally, we were making progress with our planning.

But we weren't finished just yet. Health and safety regulations are very strict in venues like this, so a lot of thought has to go into fire prevention, emergency escape planning, and other security- and safety-related matters. Just like all the other teams, we were busy ensuring everyone's safety, and, yeah, we needed to do our part as well. On the network side, this means: make sure your data packets have a wide enough escape route towards the Internet. So we actually built a fully redundant 100-gigabit backbone over three in-house routers (a toy sketch of this redundancy idea follows below).

Furthermore, while we were busy working on the on-site network layout, we asked our procurement manager to arrange some uplink connectivity. As an expert in his field, we figured he would be well aware of what kind of connectivity was needed. As you know, we've seen uplink usage in the ballpark of 30 to 40 gigabits per second in the past, so the logical thing to do would be to arrange 50, maybe 60, or even 100 gigabits, just to be on the safe side. Unfortunately, while he was beautifully arranging uplinks, we lost track of him and forgot to tell him to stop. Next thing we knew, he had accidentally arranged 400 gigabits of uplink capacity, leaving this building in three geographically redundant directions. This was possible thanks to some very friendly providers, some of whom even loaned us their lab equipment, so we could field-test bleeding-edge technology with futuristic names such as alien wavelengths, allowing us to run 200 gigabits per second over one single wavelength toward Berlin. And, yeah, so that happened. After learning about all this, we decided to roll with it, because who are we to say no to more bandwidth? However, this obviously gave us additional challenges in acquiring the necessary network hardware.
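A toy sketch of that "wide enough escape route" idea: with three geographically redundant uplink paths, losing any single link still leaves a path to the Internet. The topology below is invented, not the NOC's actual layout, and it uses the networkx library:

```python
# Toy redundancy check: three geographically redundant uplinks mean the
# venue survives any single link failure. Topology names are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("leipzig", "pop-a"), ("leipzig", "pop-b"), ("leipzig", "pop-c"),
    ("pop-a", "internet"), ("pop-b", "internet"), ("pop-c", "internet"),
])

# Three edge-disjoint paths between the venue and the Internet:
print(nx.edge_connectivity(G, "leipzig", "internet"))  # -> 3

# Cutting any single link still leaves an escape route:
for u, v in G.edges:
    H = G.copy()
    H.remove_edge(u, v)
    assert nx.has_path(H, "leipzig", "internet")
```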
In the past we were always worried that Juniper, one of our main sponsors for the network equipment, might not have the necessary amount of equipment. So this year, knowing all the facts, we went to them early, talked to them early, and told them what we actually needed: big routers where we could connect 100-gig links, stuff like that. Unfortunately, we then learned at the very last minute that the routers we wanted were not available. The only routers they could give us were three times bigger than the ones we were asking for. As a streamlined organization, we obviously ran this through our procurement manager, who enthusiastically agreed with the proposal, because it solved another of our problems. In Hamburg, and we'd been talking about it for quite some years, we could only colocate our routers in, like, fun-sized saunas. The patch rooms here, on the contrary, are very big and well air-conditioned, and with the small routers we had in Hamburg, we were fearing we might run the risk of losing them in those spacious rooms here at Messe Leipzig. So the bigger routers also served a double purpose: we won't leave them here by accident, thinking, oh, there's nothing in there anymore.

For additional safety, we introduced a triangle-routed topology, so your data can still escape in case any one path is blocked by a firewall angel. This means that your data is now around 150% more secure and safe than ever.

Upscaling was a central theme for everyone at this year's Congress, and for Wi-Fi too, obviously. As you know, in the CCH there was no more room for adding more access points, which is maybe why we had to leave. But luckily, here in Leipzig we were able to grow our potential and cover more space with Wi-Fi access than ever. We brought 180 access points to Leipzig; that's the lower bar, and about the number we brought to Hamburg. But luckily you helped us out by running a lot of rogue access points, and that's the upper bar. So, thanks for that. Unfortunately, you thereby ruined everyone's Wi-Fi experience on the congested bands. The problem here is that SSID spamming takes away a lot of airtime from everyone in your environment. There were people spawning up to 100 or 200 different SSIDs, thus really ruining the Wi-Fi for everyone in their whole neighborhood. We saw things like this, which on paper is funny, but if you're sitting there in your assembly area and you don't get any Wi-Fi anymore, or you have a Wi-Fi signal but you don't get any packets through, this is the reason why. People spinning up their own access points take away frequencies which we could really, really use for our access points. So basically, whoever did this, and all the other rogue access points out there, were the reason that the Wi-Fi reception wasn't as good as it was in Hamburg. The underlying problem is that when you open an access point, it has to broadcast its SSID in beacons that are transmitted at the lowest possible data rate, so they take up a lot of airtime even when no data is transmitted over them (a back-of-the-envelope sketch of this beacon math follows below). So, please don't bring your own access points in the future.

Speaking of trolling: at one point we were looking at our traffic graphs and noticed some odd patterns that we couldn't really explain. We thought maybe it was the streaming traffic from the lecture halls, but then we figured it didn't correlate with the lecture schedule.
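Here is that beacon-airtime sketch. The figures are typical 802.11 defaults (roughly 250-byte beacons, 102.4 ms beacon interval, 1 Mbit/s lowest basic rate), not measurements from the congress network:

```python
# Back-of-the-envelope beacon overhead: rough 802.11 defaults assumed.
BEACON_BYTES = 250          # typical beacon frame size incl. headers (assumed)
BASIC_RATE_BPS = 1_000_000  # lowest 802.11b basic rate beacons are sent at
BEACON_INTERVAL_S = 0.1024  # default beacon interval (102.4 ms)

def beacon_airtime_share(n_ssids: int) -> float:
    """Fraction of channel airtime burned by beacons alone for n SSIDs."""
    airtime_per_beacon = BEACON_BYTES * 8 / BASIC_RATE_BPS  # seconds on air
    return n_ssids * airtime_per_beacon / BEACON_INTERVAL_S

for n in (1, 10, 100, 200):
    print(f"{n:>3} SSIDs -> {beacon_airtime_share(n):6.1%} of airtime")
```

Under these assumptions, a single SSID costs about 2% of airtime, so at 100 co-channel SSIDs the beacons alone would already demand roughly twice the channel's capacity, which is why one SSID spammer can ruin a whole neighborhood.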
Back to those odd traffic patterns: we were wondering what would cause five gigabits of outgoing IPv6 traffic in such strange patterns. But then it occurred to us: someone was sending us a message. When decoded as Morse code, the traffic patterns actually read 34C3.

Yeah, we started a new tradition last year, which was quite popular and very accurate: summarizing the necessary information for the members of our press. We have put everything on basically one slide, so you just have to take a picture of it and then you can write all the articles you want. You have the right information; you don't need to misquote anyone or just invent stuff. So, just to go through it real quick: we had a total uplink capacity of 400 gigabits per second leaving Messe Leipzig, as we told you already. We had IP capacity here within Messe Leipzig and in Berlin of up to 320 gigabits per second, from diverse Internet providers who were sponsoring us. Of all this bandwidth we were providing, you were using up to 42.2 gigabits per second, which could be better. We had a little bit of headroom left, so, you know, next time leave your access points at home, bring some servers, bring some stuff that can actually use the traffic, and then everything will be fine.

Lee, get a mic if you have one, or is there someone on the Internet with a question? We're not ready yet; we have one more thing. Yeah, yeah, sorry. Actually two more things, but anyway. We had peak Wi-Fi users of over 7,600, and we had almost 180 access points deployed. And all this wouldn't have been possible without our sponsors, of course. Thanks for helping us. So, after our sponsors had delivered 15 pallets of equipment, with the great help of the NOC helpdesk and an amazing number of motivated and dedicated angels whom we'd like to thank, everything worked out fine in the end, and we were able to actually give you the networking experience you deserve.

But during the planning phase, we encountered one more enormous problem. Just like in previous years, we wanted to apply our normal recipe for building this CCC network, obviously. But suddenly we ran into all kinds of issues we couldn't really explain. Something just seemed to be off, and we couldn't put our finger on the problem. But then it occurred to us: our vision was still the same as last year. Obviously, that was the main problem. We had upscaled and secured every last bit of the network, but we forgot to do the same with our vision. So we hurried to an emergency weekend retreat and soul-searching mission to come up with a way to cope with this gross neglect. And we finally were successful. So, let me present to you our new, futuristic, upscaled, 100% safe vision. Now, finally, the universe was in harmony again, and we can peacefully rest and go to the long-anticipated questions.

Hey, thank you. There are questions. There is one from the Internet to start with. Yes: which autonomous system did we interact with the most? Sorry, can you repeat? Which autonomous system did we at the Congress interact with the most? Do you have that number? I think it was Hetzner, but I don't really know. Yeah, it was probably Hetzner, for whatever reason. Here, microphone number one. If I wanted to leave you a message in Morse code by sending a large amount of traffic, where would I send it so that you'd think, oh, that's traffic too?
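As an aside, here is a minimal sketch of how such a traffic-graph message could be decoded. The sample values are invented; only the classic Morse timing convention (one busy interval for a dot, three for a dash, three idle intervals between letters) is standard:

```python
# Hypothetical reconstruction: the sample values below are invented; the
# real pattern was read off the NOC's uplink graphs and spelled "34C3".
MORSE = {'...--': '3', '....-': '4', '-.-.': 'C'}  # just the letters we need

def decode(samples, threshold=1.0):
    """Per-interval traffic readings -> text. One busy interval is a dot,
    three in a row a dash; three idle intervals separate letters."""
    bits = ''.join('1' if s > threshold else '0' for s in samples)
    text = ''
    for letter in filter(None, bits.split('000')):          # letter gaps
        symbols = ''.join('.' if len(run) == 1 else '-'     # dot vs. dash
                          for run in letter.split('0') if run)
        text += MORSE.get(symbols, '?')
    return text

# Fake five-gigabit bursts spelling "34C3":
pattern = '000'.join(['1010101110111',   # ...--  -> 3
                      '10101010111',     # ....-  -> 4
                      '11101011101',     # -.-.   -> C
                      '1010101110111'])  # ...--  -> 3
print(decode([5.0 if b == '1' else 0.0 for b in pattern]))  # -> 34C3
```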
So, my advice would be to send it anywhere, but with a really low time-to-live, so that the packets get dropped soon after the router. It doesn't annoy anyone, but it will show up in our graphs. And I hope this is what they did this year.

Microphone number six. Are there any extended metrics or data dumps that you'll make available, if somebody wants to dive into the graphing and data? I think we could publish some of them, yeah. We have them in our Prometheus database, and then we obviously need to check which are sensitive and which aren't. But we could work on, for example, the uplink graph; if you want to analyze that in more detail, we could upload it somewhere. Just follow our Twitter account; in case we do, you'll notice it there.

Microphone number one. Not really a question, but I couldn't saturate the 10-gigabit link I had a server in the colo connected to. Was it you? Was it you? I had the server, and it had around three gigabits of traffic most of the time coming to it. All right. Okay: the colo was connected with 100G to the backbone, the backbone had 100G, and we had 400G of outgoing capacity. So, in theory, the capacity was there.

Right, next question. Will there be a presentation about the alien wavelength and the test? So, the sponsors who did that test with us have prepared a presentation. I think they are planning to publish it, and we might even publish it as well, and it will contain a lot more facts. So, yeah, I suggest you follow our Twitter account, and then we can upload that presentation.

Number four. Or two. What is it? So, the question is about the local network: what tools are you using for automated switch configuration? Well, for the configuration itself, we didn't have any tools at all; we had tools for generating the configuration. So basically the configuration would be applied manually, but the configuration itself would come from self-written Python tools, which would generate the necessary configuration from our IPAM (a hypothetical sketch of this approach follows below). Thanks. Great.

Here, sir, please. You showed a map of all the access points; can you upload it? We can probably make some screenshots. I'll talk to the Wi-Fi guys, who have it in their controller, and see. I'm not sure; it shouldn't contain any sensitive information, but, yeah, we can look at that. Thanks. I follow your Twitter anyway.

Number seven. So, you allowed arbitrary usernames and passwords on the encrypted Wi-Fi. Did you examine those? And if so, was there anything particularly entertaining? Well, we didn't do any statistics this year, because it was basically the same. I mean, we've done so in the past; you can look it up in our previous presentations. I think it'll be similar. We stopped looking at them; it was funny in the first years. I think it's basically the same: there's random stuff, there's a lot of 34C3, and people cursing.

Question from the web: how many terabytes did we upload and download? Have you collected this number this year? A lot. A lot, yeah. So, when we publish the raw data for the uplink graphs, someone can integrate that and figure it out. I think we may have time for one more question, but then we should hand over to the other teams. So, regarding the graphs: will you publish a history of graphs, year by year, for the many previous years? Yeah, we haven't really bothered with saving all of that, and each year it bites us, because we need to do the planning of how much capacity we need where.
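The talk doesn't show the actual tooling, so here is a minimal hypothetical sketch of the approach described in that answer: IPAM records rendered into switch configuration with a Jinja2 template. The data model, names, and the Junos-style "set" dialect are all assumptions, not the NOC's real code:

```python
# Minimal sketch of "generate switch config from the IPAM". The records,
# field names, and config dialect are hypothetical, not the NOC's tools.
from jinja2 import Template

ipam = [  # would normally be pulled from the IPAM's API or database
    {"port": "ge-0/0/1", "vlan": 100, "desc": "hall2-ap-01"},
    {"port": "ge-0/0/2", "vlan": 200, "desc": "assembly-uplink"},
]

template = Template("""\
{% for p in ports -%}
set interfaces {{ p.port }} description "{{ p.desc }}"
set interfaces {{ p.port }} unit 0 family ethernet-switching vlan members {{ p.vlan }}
{% endfor %}""")

# As in the talk: the generated config is applied to the switches by hand.
print(template.render(ports=ipam))
```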
As for the graph history: what we have is the presentations, and we always look up the peak values there. That's just one of the reasons we have this fact slide in there, to help us with next year's planning. Great, guys. Thank you. Give a warm, warm applause. We made it. Now over to the other one, Leon.

A bit more exotic network, and we had completely different challenges, but this year we have UMTS, 3G. And yeah, the whole mobile network is run with free and open-source software, Osmocom. But there was a bit of a difference, because last year and the year before it was running quite smoothly and stably; we had a bit of experience, and the network was a bit simpler. This is what our network and our setup looked like. But with 3G, Osmocom had to extend the infrastructure a bit, and now it looks like this. And it was quite new for everyone, even for the Osmocom people; it had only been used for a small network with one base station and one phone connected to it. Now we have lots of base stations and even more people connected. So it took a lot of time to get all of this running. And each of the bubbles is really one daemon with one separate configuration file, and they all have to match, or else it doesn't work.

But even if it looks that complicated, it doesn't require a lot of power. We ran everything on a small APU, except the voice transcoding, which was running on a bigger server. But yeah, the whole GSM and UMTS network was on this small box. We had to add some small USB dongles, because we couldn't plug in enough of the modems; the things were crashing.

So, this is the central part, the main infrastructure, and then you need all the base stations. And this is how they look; they are quite small. Generally, at every location we have one of the small GSM base stations, the black ones, and then a UMTS one, the white ones. We have them at several locations. They also didn't require too much power; I'll explain why later. So, this is the installation. We had them a bit everywhere, but most of the base stations were really at the arrival area, in our room, the GSM room, because we needed to debug them, in the hack center, and in all of the conference rooms. So, this is what we had: seven GSM base stations and five UMTS base stations, and they were really running at low power. With GSM we could go up to two watts, but the issue is that we have too many people, and our base stations can't handle the load. On a GSM base station, for example, you can only have three calls at the same time. So if we had more subscribers, nobody would be able to call; it was already quite hard to make some calls (the blocking sketch below makes this concrete).

A bit of numbers. This year we sold 2,500 SIM cards; thanks to the POC for taking care of selling the cards, that is a huge task. Although we sold that many cards, and you could reuse the cards from past years, only 900 of them registered a token or an extension at the POC. Everyone else could still use the GSM network, because you still get an extension and you can still make calls and send SMSes within the network, just not to DECT and the outside. For the SMSes, we actually had quite a lot, because we have spammers, but it doesn't require a lot of bandwidth, so we don't really care and let the spammers through; it's not really important. We don't know how many subscribers there were in total, or how many calls; we didn't do any statistics, because we spent a lot of time on setting up the network.
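To put the three-calls-per-GSM-cell limit in perspective, here is the textbook Erlang-B blocking formula; the per-subscriber traffic figures are assumptions, not congress measurements:

```python
# Standard Erlang-B blocking probability: with only 3 voice channels per
# GSM base station, blocking rises quickly. Offered traffic is assumed.
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Probability that a new call finds all channels busy."""
    b = 1.0  # blocking with zero channels
    for m in range(1, channels + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

# Say each active subscriber in a cell offers 0.02 Erlang (~72 s per hour):
for subs in (50, 100, 300):
    e = subs * 0.02
    print(f"{subs:>3} subscribers -> {erlang_b(e, 3):5.1%} of calls blocked")
```

With these assumed figures, 300 subscribers in one cell already see more than half of their call attempts blocked, and with seven GSM cells of three channels each the whole venue tops out at 21 concurrent calls, so blocking was unavoidable.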
Over the years, one of the pain points of GSM and UMTS has been the licenses. You need some kind of license to operate the networks, and the frequencies all go to the big operators. So while in past years we had a license from the Bundesnetzagentur, that doesn't exist anymore, because all the frequencies were sold. For GSM, Telekom provided us with three channels. Thanks, Telekom. And for UMTS there was a bit of a trick: we have US base stations, and the US band is not used in Europe, so we could just apply for UMTS bandwidth at the Bundesnetzagentur. But some UMTS phones were not able to join our network, simply because this is a kind of weird band which is not the standard European band. All newer smartphones support this band, though, so it works.

The GSM and UMTS network works. It took a lot of time to start it, because we had to install a whole infrastructure, but starting with day two it was really working. GSM had no data, so you had to switch all the time in your phone between "I want to be reachable with voice" and "I want to have some data", because sometimes the Wi-Fi doesn't work; it's up to you. And there were a lot of crashes, but that's also the purpose of trying to operate it here: for us to scale up the whole network. Because when we use the Osmocom software normally, we only have one base station and one phone connected to it, and when we have crashes, we just use another phone, because we don't want to dig into why. Here we have no choice: we have lots of data, lots of base stations, lots of phones. So we traced everything, we have coredumps for all the crashes, we have already submitted some tickets, and we will submit a lot more. So thanks for pentesting our network; now we know how it works. So next year it will probably be a lot faster; we will change the IPs and fix some things.

UMTS is quite new; there was no voice over UMTS yet, and this will be added. There will be a lot more UMTS base stations, simply because they are smaller and we have a number of them, so we can spread even more of them a bit everywhere. And maybe we can have LTE. With LTE, the advantage is that the hardware is readily available; we just lack the software. So if anyone wants to play with LTE, it's a lot simpler than what I showed with UMTS and GSM. Feel free to join us and have a talk, and then we can maybe integrate LTE with GSM and UMTS and have an awesome mobile network at the next Congress. And that's it. If there are any questions, I'm happy to answer them. Are there questions?
Between the congresses, how do you test the hardware and software? Because you said you need a license from the Bundesnetzagentur. So, you don't always need a license from the Bundesnetzagentur; you only need one if you want to operate on frequencies which are already in use. Even for frequencies which are readily available, like band 5 for UMTS, you only need a license if you transmit over the air. If you just use cables, connecting your base stations to your phone with a cable, you don't require any license. You can also put both things in a Faraday cage, and this way you don't interfere with the other networks. Or sometimes you just forget, and you have a very, very small power and it still works. So these are the ways to do it. Great, is there another question here? Thank you very much. Really, a last applause before we go over to the next topic. Thank you very much, Leon; looking forward to the future.

OK, then it's time for the VOC review. The stage is yours, the audience is waiting. Thank you very much. Hello. So, you may have noticed these guys in the audience with the red tally lights. They, and a lot of other people, bring you your video from the Video Operations Center, and I might have some details about that.

As you may have noticed, we try to stream our stuff, and there has been a lot of technology development in recent years as to how modern streaming works; we tried to adapt to that. So this year we had a completely new transcoding setup, which allowed us to basically have one internal Matroska format carrying the video and the three languages. We had three languages in most talks, which was awesome. And we had a completely new way of playing out the signal in different formats: in HLS, and this year for the first time also with DASH; we even managed to make this happen for day 2. So during operations there were no big problems with that. We had adaptive playback: when you had low bandwidth, it would automatically degrade, down to SD and finally to slides-plus-audio only. I hope you could all enjoy this, thanks to the NOC. Yeah, as I said, the translations; and additionally, for everyone here in the rooms, we provided low-latency Icecast streams, so you could hear the translations on site. And finally, I think for the first time, we had community feeds, and we love them: the Freifunk community broadcast through our infrastructure, we had an OB unit at our assembly which broadcast through our network, and finally we had C3TV, which I hope you took the opportunity of watching. We tried to replay a lot of old Congress content; hint: it's coming to media.ccc.de soon. We got a bag of tapes that was suddenly dropped off, going as far back as, I think, 9C3. Let's see; eventually it may emerge on media.ccc.de, and this was a great sneak peek at this point.

Yeah, we hinted at that last year: we said we might do 4K, because why not. So we did, as a very experimental setup; this time we used the hardware mixer there, but we learned a lot concerning the delivery and the specialties of 4K streaming. Let's see what next year brings. We got a bit of fancy new hardware, because without hardware (we are geeks) it's fairly boring. We rolled out our own layer-1 network on site so that we could transport our signals to the external locations, and we had a layer 2 to the data center in Berlin, thanks to the NOC. This will allow us to tear down most of the stuff here and have the last talks transcoded in the data center, so hopefully the talks at the end of today will be
available to you much sooner than in recent years. And finally, next to the 4K streams, we had Voctomix in all rooms, our own software-defined mixer. We also took the opportunity, not just at this event but over the year, to open-source all of our components that are required to set this up yourself, including Ansible recipes. So go do your own stuff, use it in your assemblies; we want to take on a lot of your assembly streams next year. If you want to do this, come find us, talk to us, we are happy to broadcast your stuff. Great, thank you for your fantastic work. You're welcome.

We did a bit of experimenting around Voctomix. This year we had camera controls that used the current state of the mixer, so these guys can actually see what's going on and whether they are live or not. And we had real professional intercoms that were provided to us, and this really made it a lot easier to coordinate between the cameras, the guys backstage, all the mixing, and of course our room in the congress center. And for the first time we had an audio control room. What does that mean? What we do is not only video but also audio, and for the first time we have released all talks in stereo. So give us reasons, please: when we did the slides, someone said, no, don't mention anything about this, and I was like, okay, why not; there are a lot of mastering people in the audience, they definitely have ideas. Please come find us and tell us why you need 5.1. Pardon. The other cool thing is that this audio control room remotely controlled the hardware mixers in the halls, so we had a mix that was independent from the PA here. This is really a huge step forward in terms of stream audio quality, because there are simply different things you need for making sure the sound is good in this huge hall and for making sure the audio is good in the stream. So we could do that, and we had really good loudness meters, and we tried to hit a target of minus 16 LUFS (loudness units full scale), which is derived from the European Broadcasting Union's audio standard (a small sketch of this follows below). Also IP-based, of course.

We had casualties; encoding and streaming video is no easy business. One transcoder died in the process; we just sent the remains to Intel. We had one encoder throttling its CPU: we had too much technology on the encoder cubes, and we noticed that when our monitoring reports 40-something steps, things stop being healthy. We had a few broken audio embedders, but we still had enough backup chains to save the situation. We had a virtualization host die in a data center, but we were able to recover that as well.

So, finally, to the part that you like. And I want you to take guesses for each day: what do you think was the highest-viewed talk? On day one, the keynote and the PC-Wahl hack. The second day was the Jahresrückblick, and Methodisch Inkorrekt on day three. And this is the total amount: almost 7,000 viewers. The PC-Wahl talk had two distinctive peaks; we have no idea what was going on there. Yeah, and what else? I mean, there was that really interesting advertisement that popped up during the Snowden talk, and I tell you, this was totally unplanned. And people say, you know, you guys do this all the time, you bring your own, you create your own hardware; are you really doing this professionally? We tell you: yes, we are professionals.
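Returning to the minus 16 LUFS target: it can be reproduced with ffmpeg's EBU R128 `loudnorm` filter. A minimal sketch, assuming ffmpeg is on the PATH; this is not the VOC's actual pipeline:

```python
# Minimal sketch: normalize a recording to the -16 LUFS target with ffmpeg's
# EBU R128 `loudnorm` filter. Assumes ffmpeg is installed; not the VOC setup.
import subprocess

def normalize(infile: str, outfile: str, target_lufs: float = -16.0) -> None:
    """One-pass loudness normalization to the given integrated target."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", infile,
         # integrated loudness, true-peak ceiling, loudness range
         "-af", f"loudnorm=I={target_lufs}:TP=-1.5:LRA=11",
         "-c:v", "copy",  # leave the video stream untouched
         outfile],
        check=True,
    )

normalize("talk_raw.mkv", "talk_norm.mkv")
```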
And finally, a big hand to our sponsors and to Texak, who was the guy that did all the planning and couldn't be here today. He really did fantastic work. And to everyone who made this possible: thank you so much. Questions? Are there questions here? Yes, there, from the web, start with it.

The 34C3 everywhere community would like to thank you very much for making it possible to participate in the Congress experience, and they would like to know more about the hardware, software, and vendors that you use for transcoding. So, in general we use donated hardware. We use these specific small machines that really come in handy, because they are really compact and you can easily bring them to events. It's usually just standard hardware, because everything we built, we built to work essentially on all platforms, so we are quite flexible there. And, well, most of the stuff that actually encodes is FFmpeg.

Right, another question? Still from the web, yes. The question is about the slide stream: is it a bug, or is it intentional that it is of such low quality this year? It is, in a way, on purpose, because it's also a way to allow people with, you know, degraded network connectivity to actually participate. That said, we are trying to make sure that the slides have a certain amount of quality, and that's of course a trade-off. But if you have specific comments, find us on IRC so we can talk it through. Okay, thank you.

Is there another question? Thank you very much for this overview. We still have the assemblies part to do. First, a request to the audience: if you want to leave, please leave on that side, and the others try to leave that way, because it will be crowded after this lecture. Another message here: there is a call. You know, we have a teardown still waiting for us; this is not the last, last bit, but we have to help. So there is a call for angels to come and help with this teardown, to make sure that it all ends successfully. Good. Yes, a round of applause for each other. Come on.

Assemblies. In German. I think, again... actually it's not working here. Oh, no problem. Okay, here we go: Hotpatching the Alpha, all right.

Yes, hello first of all. From the assembly team, we called this "Hotpatching the Alpha", because this is simply the alpha version. Somebody said it would be no problem to do this in German; now I've seen that most talks were in English, but I'll stick with German anyway. We started from this hall, and in very many iterations, over a very short time, with a lot of back and forth, we built this version, which is what was then built as final. There were some nightly planning sessions. A few weeks before Congress, we still had to rearrange a few walkways and the like to meet all the requirements. I hope it worked out at least reasonably well for most of you; as I said, this is an alpha.

The next problem, thanks. One of the interesting things was the story with the registration. Personally, I would have liked to have it much earlier; we asked the technician several times to enable the wiki registration. It finally went live on November 1st, and we officially left it open until November 15th. The dump we then worked with to place the assemblies we pulled another 14 days later, for all the usual deadline-missers.
We worked with the nice number of 256 registered assemblies. After the extended deadline, of course, even more latecomers registered, and the official record holder asked for a few more tables on December 26th at 14:42.

Our statistics today come without any graphics. We had planned roughly 3,000 available seats; we got 3,500 registrations through the wiki. So unfortunately we had to play a little Schäuble and apply the cut list everywhere. We had about 15% more floor space for the assemblies compared to Hamburg, and roughly 20% more registrations. Due to the alpha planning, and a few rebuilt walkways in the hall that had originally been planned differently, we unfortunately lost a bit more space. In hindsight it wasn't really that bad, though, because this here is the chair storage: all empty, we have everything out on the floor. Really, only, if you look at the back, three single leftover tables that I stashed somewhere in a corner for emergencies. Although that's not entirely true either: the Messe was friendly enough to send us a furniture list, and then didn't quite deliver what was on it. Among other things, on day 2 I had to show a few people their secret chair and table storage, hidden away down in the CCL; even the Messe didn't know there were still tables there. We're going to inventory that now.

What we introduced this year, and there was a small naming problem, I'll just call center/cluster/orbit; I think most people know what was meant. We had a few bigger ones and asked around very early, back in July; answers or feedback from most came relatively late. Art and Play you may have seen directly on the left as you come in; very nice things happened there. Chaos West with a very big stage. Komona with several workshop rooms and a very open structure. The Open Infrastructure Orbit with lots of Freifunk, lots of Internet, a small stage, and workshop space. The Rights and Freedom Center unfortunately got a bit lost, in the CCL in Hall 3; maybe stop by briefly and say hello. And then a big, colorful installation, The Hive, initiated by the c-base, also with several stages and workshop rooms. It was all still a bit bumpy this year, but hey, it's the alpha.

A very, very, very big thank-you has to go to the c3nav team, quite simply because they saved us on day 0 and day 1, when requests kept raining down on us: hey, where is our assembly? And we could say: search c3nav. Thanks. We hope that next year this will run a bit more smoothly; that's one item on our list. A big thank-you also to the centers, who took a lot of work off our hands and organized themselves superbly. Consider this an open invitation: build more centers. Do something for next year, think something up; you've seen what's possible here. Hopefully we'll have a bit more floor space next year; we'll still have to scrounge up furniture somewhere, but we'll manage that too. And another big thank-you to the individual assemblies. Yes, you were super kind to us, even though you had to cut quite a few things, Schäuble-style, every now and then. It worked out very well, though.
And of course to the assembly team itself, with which I was very busy, and to all the setup helpers and angels: many, many thanks.

Great, thank you very much for this overview. Thank you. Questions are also possible in English, if somebody has them. Is there any question? And no, there are no more chairs left, sorry. Oh, the web, yes: are statistics and maybe pictures available from the kids area? That was not my department; the kids space was some kind of center and self-organized. I can ask the guy whether he can make something available, but I don't know yet; I need to ask him. Is someone willing to pay for it, for the pictures? No. OK, thank you very much. Thank you everybody for this massive event. Great. OK, over to the next step, and that's about translation.

Subtitles. Right, which is just taking what is spoken and turning it into words that are displayed on screen. It's not translating; it's just taking the original language and putting that into words. So, right. You might have noticed that this year we didn't have any live subtitles, because we just didn't have the time to organize that. So we focused on what we think is a bit more important, and that is having subtitle releases for the talks.

Well, this graph shows the number of seconds of talks in blue, and then the various stages of subtitles that have been developed during this Congress. In green are completed subtitles. Then we have the yellow ones, which are at some stage of quality control. We have the orange ones, which are subtitles currently being timed: we already have a complete transcript of the talk, but we are still adjusting the timing of when to display which subtitle. And then we have the red subtitles; those are transcripts being written or corrected. We're starting from transcripts generated by machine speech recognition, and those are usually missing punctuation and also contain mistakes, so people have to go through them and basically check that everything matches what is being said.

In numbers: we have had over 100 angels working on subtitles throughout this Congress, for a total of 336 hours, producing, at the point when we wrote the slides, 79 hours of material. That's about 4.2 hours of work for every hour of material that gets released, which is about the speed difference between spoken word and written text. Earlier today, when we wrote the slide, we had 16 subtitles released for this Congress; it should be about 20 right now, and it's still going on, so more to come. And because there weren't any recordings available for most of day one, we just took talks from last year and started to subtitle those; that also got us 10 subtitles for earlier talks. And then we have about 64 hours of material in various stages of being processed: 30 hours of transcribed talks, 22 hours being timed, and 12 hours being checked, which has probably also already changed, because people are still working on it.

One of the benefits of having finished subtitles is that you can do funny stuff. For example, you get statistics for the speed at which people talk and which words they use the most. This is one talk from last year; I'm not quite sure which one. It's about people and bias and language and examples, I'm not quite sure. It has about 800 strokes per minute, which is not one of the fastest talks we had, but it's still a lot faster than what most people type. And if you want to help make Congress accessible for everyone, then too bad.
You can also start writing subtitles with us. It's something you can do the whole year, not just during Congress: just go to our website and you'll find instructions on how to do that. You can follow us on Twitter, you can come to our IRC chatroom, and you can also follow the SRT releases account to see when new subtitles get released. Thank you.

Great, thank you. So, are there any questions? Is there a question? Okay. None on the Internet? Okay. Here, microphone 2: where can you find the subtitles? Right. So, the subtitles are on some link; it's, well, mirrors: mirror.selfnet.de and then somewhere, CCC and Congress and 34C3 and then... Wow. No, not there. Sorry? Pointing up. Yes. Ah, right, yes. Yes, under c3subtitles. Yes, 34C3. So, those are currently all the subtitles available.

Great. No one else has a question? Someone on the web, maybe? No, not for now. To the next thing, the assemblies. Okay, right, yes. Yes, thank you very much. Give him an applause as loud as you can. Please. Wow. Great, thanks, man. Right.
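As a footnote on the timing stage described earlier, this is roughly the artifact that step produces: timed transcript segments serialized as SRT. A minimal sketch; the segment data is invented:

```python
# Minimal sketch of the subtitle "timing" end product: timed transcript
# segments written out in SRT format. The segment data here is invented.
def srt_time(t: float) -> str:
    """Seconds -> SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(t * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

segments = [  # (start, end, text), produced by the timing pass
    (0.0, 2.5, "Welcome to the infrastructure review."),
    (2.8, 5.0, "First up: the network."),
]

with open("talk.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(segments, 1):
        f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")
```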