Oh, and welcome to the infrastructure review of the rC3 this year, 2020. What the hell happened? How could it happen? I'm not alone this year; with me is Lindwarm, who will help me with the slides and everything else. Let me say before I start: this is going to be a great fuck-up, like last year, maybe. More teams, more people, more streams, more of everything. And the first team I'm going to introduce is the Heralds. Are you there? Oh yeah, so let's go over to the Heralds.

Yeah, it was kind of stressful this year. We only had about 18 Heralds for the main talks on rC1 and rC2, and we introduced about 51 talks with that, all from our home setups, which was a very, very hard struggle. So we all had a metric ton of adrenaline and excitement within us. Here you can see what you have seen, how a Herald looks from the front, and how it looks in the background. That was hard, really hard for us. You can see all our different setups here. And we are very, very pleased to also have set up a completely new operation, the Herald News Show, which I'd really, really like you to review on YouTube. This was such a struggle. And we have about... oh, wait a second. As we said, we are a little bit unprepared here; I need to have my notes up. There were 20 members who formed a new team on the first day. They made 23 shows and 10 hours of video recording, the pizza man rang at the door 20 times, and 23 Mate bottles were drunk during the preps, because all of those people needed to be online the complete time. So I really applaud them. What they brought to the team and to the stream was really awesome, and this is an awesome team; I hope we see more of them. Yusuf, would you take it over? Oh no, my, my, my bad. So, is the Heaven ready? We need to go to the Heaven and have an infrastructure review of the Heaven. Okay. Do you hear me now? Yeah. Hello, I'm Raziel from Team Heaven, and yeah, Heaven is ready.
So welcome everybody. I'm Raziel from Heaven, and I will present the infrastructure review from the Heaven team. We had some angel statistics scraped a few hours ago. This year we did not have as many angels as last year, because it was a remote event, but we had a total of 1,487 angels, of which 710 arrived, and more than 300 angels did at least one shift. In total, the recorded work done up to that point was roughly 17.75 weeks of working hours. For the rC3 world we also prepared a few goodies so people could come visit us. We provided a few badges there: every angel who, for example, found our expired fire extinguisher got the first badge, which was achieved by 232 of our angels. And a smaller but still good number, 125 angels, managed to help us and extinguish the fire that broke out during the remote event. With those numbers checked off, we will also jump into our Heaven; I would like to show you some impressions of it. We had quite the team working to do exactly what the Heaven does: manage its people. So we needed our Heaven office. We also did this with respect for your privacy: we painted our clouds white as ever, so we cannot see your nicknames, and you could do your angel work without being bothered by us asking for your names. We had also prepared a secret passage to our back office; at every real event it happens that some adventurers find their way into our back office, so we needed to provide that opportunity as well, as you can see here. And let me say that some adventurers tried to find the way into our sacred digital back office, but only a few were successful. We hope everyone found their way back into the real world from our labyrinth. And we also did not spare any expenses to do some additional updates for our angels as well.
As you can see, we tried to do some multi-instance support, so some of our angels even managed to split up and serve as more than one angel at a time, and that was quite awesome. So we tried to provide the same things we would do at Congress, but now from our remote offices. And one last thing that normally doesn't need to be said, but in this year and with this different kind of event I think it's necessary: the Heaven, as a representative mostly of the people trying to help make this event awesome, thinks it's time to say the things we take for granted. And that is: thank you for all your help. Thank you to all the entities, all the teams, all the participants who achieved the goal of bringing our real Congress, which many, many entities missed this year, into a new stage. We tried that online. It had its ups and downs, but I still think it was an awesome adventure for everyone. From the Heaven team I can only say thank you, and I hope to see you all again in the future at a real event. Bye, and have a nice new year.

Hello, hello, back again. We are now switching over to the Signal Angels. Are the Signal Angels ready? Hello. Yeah, hello. Welcome to the infrastructure review for the Signal Angels. I have prepared some stuff for you. This was the first time for us running a fully remote Q&A setup, I guess. We had some experience from DiVOC and had gotten some help from there on how to do this. Just to compare, our usual procedure is to have a Signal Angel in the room: they collect the questions on their laptop, they communicate with the Herald on stage, and they have a microphone — like the headset I'm wearing, but in the room it's a studio microphone — and they speak the questions into it. Remotely we really can't do that. Next slide. Because, well, it would be quite a lot of hassle for everyone to set up good audio. So we needed a new remote procedure.
We figured out that the Signal Angel and the Herald could communicate via a pad, we could also collect the questions in there, and the Herald would read the questions to the speaker and collect feedback and so on. We had 157 shifts, and sadly we couldn't fill five of them in the beginning because there were not enough people yet. Technically it was more than five unfilled shifts, because for some reason there were DJ sets and other things that aren't talks and don't have Q&A. We had 61 angels coordinated by four supporters, so me and three other people. And we had 60 additional angels who in theory wanted to do Signal Angel work but didn't show up to the introduction meeting. As I've said, for each session, each talk, we created a pad where we put in the questions from IRC, Mastodon and Twitter, and we have a few more pads than talks we actually handled. I have some statistics about the estimated number of questions per talk. What we usually assume is one question per line, but some questions are really long and split over multiple lines, some are structured questions with headings and paragraphs, some Heralds or Signal Angels removed questions after they were done, and there was also some chat and other communication in there. So, next slide. We took a Python script, downloaded all the pad contents, read them, counted lines, and subtracted the size of the static header. In the end we had 179 pads and 1,627 lines if we discount the static header of nine lines per pad. So in theory that leads to about nine "questions" per talk — it's not really questions but lines, but it's an estimate. And what I've learned is: never miss the introduction. The next in line are the line producers. Ha, ha, ha. STV, are you there? I am here, in fact. So...
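The counting approach described here can be sketched in a few lines. This is a rough re-creation, not the team's actual script: the pad names and contents below are made up, and the real script fetched the pad exports over HTTP first.

```python
# Rough re-creation of the pad line-counting estimate described above.
# HEADER_LINES matches the nine-line static header mentioned in the talk;
# everything else here (pad names, contents) is illustrative.

HEADER_LINES = 9


def estimate_questions(pads: dict[str, str], header_lines: int = HEADER_LINES) -> float:
    """Count non-empty, non-header lines across all pads and return the
    average per pad -- a rough 'questions per talk' figure."""
    total = 0
    for text in pads.values():
        lines = [line for line in text.splitlines() if line.strip()]
        total += max(len(lines) - header_lines, 0)
    return total / len(pads)


if __name__ == "__main__":
    fake_pads = {
        "talk-1": "\n".join(["header"] * 9 + ["Q: does it scale?", "Q: why Python?"]),
        "talk-2": "\n".join(["header"] * 9 + ["Q: next slide, please?"]),
    }
    print(estimate_questions(fake_pads))  # 1.5 for this toy data
```

With the real numbers from the talk, 1,627 lines across 179 pads gives the roughly nine lines per talk quoted above.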
So, the people a bit older might recognize this melody, badly sung by yours truly and other members of the line producers team, and I'll get to why that is relevant to what we've been doing at this particular event. So what do line producers do? What does an Aufnahmeleitung (the German term for it) actually do? It's basically communication between everybody who is involved in the production, the people behind the camera and also in front of the camera. Our work started really early, basically at the beginning of November: prepping speakers in a technical setup, rehearsing with them a little bit, then enabling the studios to let them actually do the production, and coordinating on the organizational side. The technical side was handled by the VOC, and we'll get to hear about that in a minute. But getting all these people synced up and working together well, that was quite a challenge, and it took a lot of Mumble calls with a lot of people in them. We only worked on the two main channels; there are quite a few more channels that are run independently of the central organization, and again we'll hear the details of that in a minute. And we provided information: we tried to fill wiki pages with relevant information for everybody involved. That was our main task. So what does that mean specifically, the production setup? We had 25 studios, mainly in Germany, also one in Switzerland. These produced recordings ahead of time for some speakers, and many did live setups for their own channels and also for the two main channels. I've listed everybody involved in the live production here, and there were 19 channels in total. So a lot of stuff happening: 25 studios, 19 channels broadcasting content produced by these studios. It's kind of a Eurovision thing, where you have different studios producing content and you try to mix it all together.
Again, the VOC took care of the technical side of things very admirably, but getting everybody on the same page to actually do this was not easy. For the talk program we had over 350 talks in total, 53 in the main channels, and handling all that — making sure everybody has the speaker information they need, and all that organizational stuff — was a lot of work. We didn't have our own studio for the main channels; the 25 studios, or the 12 live ones, actually provided the production facilities for the speakers. On the next slide there are a couple more numbers, and of course a couple of pictures of us working, basically from today. We had 53 talks in the main channels: 18 of them were pre-recorded and played out, three had people actually on location in a studio giving their talk from there, and 32 were streamed live, like I am speaking to you now, with various technical bits that again the VOC will go into in a minute. And we did a lot of Q&As; I don't have the number of how many talks actually had Q&As, but most of them did, and those were always live. There were a total of 63 speakers we prepared at least the live Q&A session with and helped set up; we also helped them record their talks if they wanted to pre-record them. So we spent anywhere between one and two hours with every speaker to make sure they would appear correctly and in good quality on the screen. And during the four days we of course helped coordinate between the master control room and the 12 live studios, to make sure the speakers were where they were supposed to be and that any technical glitches could be worked out and decided on the spot. If, for example, the line producers made a mistake and a talk couldn't happen as we had planned because we forgot something, we rescheduled and found a new spot for the speakers. So apologies again for that, and thank you for your understanding and for helping us bring you on screen on day two and not day one.
I'm very glad that we could work that out. And that's pretty much it from the line producers; I think next up is the VOC. Thank you, SCV. Yes, you're right. Next is the VOC: Franzi, Kunzi and Alex are waiting for us.

Hi, this is Franzi from the VOC. 2020 was the year of distributed conferences. We had two DiVOCs and FrOSCon to learn how we were going to produce remote talks. We learned a lot about organization, BigBlueButton and Jitsi recording. We had a lot of other events, which were just streaming, business as usual. For rC3 we extended the streaming CDN with two new locations, now seven in total, with a total bandwidth of about 80 gigabits per second. We have two new mirrors for media.ccc.de and are now also distributing the front end. We got two new transcoder machines, and we now have ten of them for the channels' own productions on media.ccc.de. So the question is: will it scale? Next slide. Yep, next slide. You can see that it did scale. We produced content from 25 studios and 19 channels, so we got lots of recordings, which will be published on media.ccc.de in the next days and weeks; some have already been published. So there's a lot of content for you to watch. And now Alex will tell us something about the technical part.

My name is Alex. I will not start with the technical part, but with the organization. I sat between the VOC and the line producing team, and here is a bit of how it worked. We had those two main channels, as you wanted. Those channels were produced by the various studios distributed around the whole country, and those streams — this is the upper path in the picture — went through our ingest relays to the master control room in Ilmenau. There, a team of people added the translations, made the mix-down, made the recordings, and published it back to the streaming relays. All the other studios produced their own channels.
Those channels also took the signals from different studios, made a mix-down, et cetera, published to our CDN and relays, and we published them as the studio channels. As you can see, this is not the usual setup we had in past years. On the next slide we can see what this leads to: lots of communication. We had the line producing team; we had the production in Ilmenau that had to be coordinated; we have the studios, the local studio helper angels, some Mumble servers, some Rocket.Chat, some CDN people, some web team where things happen, some documentation that should exist. And then we started to plot down the communication paths. Next slide, please. If you plot all of them, it really looks like a flight-route map of the world, and sometimes it feels like you're just getting lost on the different paths: who comes after us, who do you have to call, where are you, what's the shortest path to communicate? But let's have a look at the studios, first going to Chaos West. Kunzi? Yes, on the next slide you will see the studio setup at Chaos West TV. So thank you, Chaos West, for producing your channel. On the next slide you see the WikiPaka television and Fernseh stream, WTF, who have the internal motto "absolut nicht sendefähig" ("absolutely not fit for broadcast"). But even then, some studios look more like real studios — so, on the next slide, hacc. Yeah, at hacc you will also see some of the bloopers we had to deal with; for example, here you can see there was a cat in the camera view. So yeah. And Alex, tell us about the Open Infrastructure Orbit. The Open Infrastructure Orbit show — in this picture you can see that it's really artsy, how you can make a studio look really nice even if you're alone there, a bit comfy, a bit more hackish. But you also have the normal productions, as on the next slide, Chaos Studio Hamburg.
Yeah, at Chaos Studio Hamburg we had two regular VOC cases, like you know from all the other conferences, and they were producing on site in a regular studio setup. And last but not least, we got some impressions from ChaosZone TV. As you can see here, also quite a regular studio setup — well, quite regular, no: there were corona rules in effect, so there was a lot of distancing, wearing masks, and all the stuff so that everyone could be safe. But C3 Gelb will tell you something else about that. Let's look at the nice things, for example, "the minor issue". On the second day we were sitting there looking at our nice Grafana: oh, we got a lot more connections, the load on the servers is increasing. The first question was: have we enabled our cache? We don't know. But the number of connections kept growing, so people are watching our streams, the ingest goes up, and we thought, well, at least people are watching the streams. If it's also on the website — who cares, the ingest works. But then we suddenly got the realization: well, something did not scale that well. Then you see on the next slide the issue. After looking at this traffic graph we switched pretty fast from "well, that's interesting" to "well, we should investigate": we got thousands of messages in Twitter DMs, thousands of messages in Rocket.Chat and IRC, and suddenly we had a lot of connections to handle, a lot of inquiries to handle, and a lot of phone calls to handle, and we had to prioritize: first the hardware, then the communication, because otherwise the communication won't stop. On the next slide you can see what our minor issue was. At first we got a lot of connections, first to the streaming webpages, then to the load balancers, and finally to our DNS servers. A lot of them were quite malformed; it looked like a storm.
But the more important thing we had to deal with was all those passive-aggressive messages from different people who said: well, you can't even handle streaming, what are you doing here? We worked together with the C3 infra team — thanks for that — on how to scale the CDN even further, just to provide people the connection capacity they need. So I think, compared to the last years, we didn't even need more bandwidth, and we can provide even more bandwidth if we need it. And now, tearing everything down — is it time to shut everything down? No, we won't shut everything down. The studios can keep their endpoints and continue to stream on them as they wish. We want to keep in touch with you in the studios, produce content with you, improve our software stack, and improve other things like the ISDN, the "Internet Streaming Digital Node", the project for small camera recording setups for sending to speakers, which needs developers for the software. Also, Kevin needs developers and testers. What's Kevin? Oh, we have prepared another slide, the next slide. Kevin is short for "killer experimental video internet noise", because we initially wanted to use OBS.Ninja, but there are a couple of licensing issues; not everything of OBS.Ninja is open source the way we wanted. So we decided to code our own OBS.Ninja-style software. If you are interested in doing so, please get in contact with us or visit the wiki. So that's all from the VOC, and we are now heading over to C3 lingo. Exactly, C3 lingo. Oscar should be waiting in studio two, aren't you? Yeah, hello. Hi, yeah, I'm Oscar from C3 lingo. We will jump straight into the stats on our slides. As you can see here, we translated 138 talks this time. As you can also see, it's way fewer languages than at the other Chaos events, since our second-languages team, which covers everything that is not English or German, was only five people strong this time.
So we only managed to do five talks into French and three talks into Brazilian Portuguese. On the next slide we are looking at our coverage of the talks. On the main talks, we managed to cover all talks that were happening, from English to German and German to English, depending on what the source language was. On the other-languages track, we only managed 15% of the talks from the main channels. And on the further channels — a couple of others that were also provided to us in the translation team — we managed 68% of the talks, but none of them were translated into languages other than English and German. On the next slide, some global stats. We had 36 interpreters, who in total managed to translate 106 hours and seven minutes of talks into another language simultaneously. The maximum number of hours one person did was 16, and the average was around three hours of translation across the entire event. All right. I also have some anecdotes to tell and some mentions to make. We had two new interpreters that we want to say hi to. We had a couple of issues with the digital setting that we didn't have before at regular events where people were present. For example, sometimes when two people are translating, one person starts to interpret something on the wrong stream — maybe they were watching the wrong one — and the partner just thinks they have more delay or something. Or a partner has a smaller delay, and the other thinks they can suddenly read minds, because they translate faster than the other person is actually seeing the stream. Those are issues we didn't have at the regular in-person events, only at remote events. And yeah, some hurdles to overcome. Another thing was, for example, when on the R3S stage the audio sometimes cut out.
But because one of our translators had already translated the talk twice, at least partially, because it had been cancelled before, they basically knew most of the content, could do a PowerPoint-karaoke translation, and were able to do most of the talk just from the slides, without any audio. And then there also was... yeah, the last thing I want to say is actually a big shout-out to the two of our team members who weren't able to interpret with us this time because they put their heart and soul into making this event happen, and that's STB and CUTTY. And that's basically everything from C3 lingo. Thanks.

Hello, C3 subtitles is it now? TD will speak the right text to his slides, which you already saw a minute ago. Okay. Okay, hi. I'm TD from the C3 subtitles team. Next slide, please. So, just to quickly let you know how we get from the recorded talks to the released subtitles: we take the recorded videos and apply speech recognition software to get a raw transcript, then angels work on that transcript to correct all the mistakes the speech recognition software makes, then we apply some auto-timing magic to get raw subtitles, and then again angels do quality control on these tracks to get released subtitles. Next slide, please. As you can see, we have various subtitle tracks in different stages of completion — these numbers are seconds of material. You can see all the numbers are going up and to the right, as they should be. Next slide, please. In total we had 68 distinct angels who worked four shifts on average; 83% of our angels returned for a second shift, 10% of our angels worked 12 or more shifts, and in sum we had 382 hours of angel work for 47 hours of material. So far we've had two releases for rC3, hopefully with more to come, and 37 releases for older Congresses, mostly on the first few days when we didn't have many recordings.
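The four-stage pipeline described here — speech recognition, transcript correction by angels, auto-timing, quality control — can be modeled as a simple linear state machine. This is only an illustrative sketch; the stage names and the `advance` helper are made up for the example, and the real team tracks these states on their Kanban board instead.

```python
# Illustrative model of the subtitle pipeline described above:
# speech recognition -> transcript correction -> auto-timing -> QC -> released.
# Stage names and this helper are invented for the sketch, not the team's tooling.

from enum import Enum, auto


class Stage(Enum):
    RAW_TRANSCRIPT = auto()   # output of the speech recognition software
    TRANSCRIBING = auto()     # angels correcting recognition mistakes
    RAW_SUBTITLES = auto()    # after the auto-timing pass
    QUALITY_CONTROL = auto()  # angels reviewing the timed track
    RELEASED = auto()         # published subtitles

PIPELINE = [Stage.RAW_TRANSCRIPT, Stage.TRANSCRIBING, Stage.RAW_SUBTITLES,
            Stage.QUALITY_CONTROL, Stage.RELEASED]


def advance(stage: Stage) -> Stage:
    """Move a subtitle track to the next pipeline stage.
    A released track stays released."""
    i = PIPELINE.index(stage)
    if i == len(PIPELINE) - 1:
        return stage
    return PIPELINE[i + 1]
```

For example, `advance(Stage.QUALITY_CONTROL)` yields `Stage.RELEASED`, matching the last step of the workflow above.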
We still have 41 hours of material in the transcribing stage, 26 hours in the timing stage, and 51 hours in the quality control stage, so there's still lots of work to be done. Next slide, please. When you have transcripts, you can do fun stuff with them: for example, you can see that important to the people in this talk are people. We're working on other cool features that are yet to come; stay tuned for that. Next slide, please. To keep track of all these tasks, we'd been using a state-of-the-art, high-performance, lock-free NoSQL columnar data store — a.k.a. a Kanban board — in the previous years, and because we don't have any windows in the CCL building anymore, we had to virtualize that, so we're using Kanban software now. At this point I would like to thank all our hard-working angels for their work. And next slide, please. If you're feeling bored between Congresses, you can work on some transcripts: just go to c3subtitles.de. If you're interested in our work, follow us on Twitter; there's also a link to the released subtitles here. That's all, thank you.

Thank you, TD. And before we go to the POC, where Drake is waiting, I'm sure everyone is asking: why are those guys saying "next slide" all the time? So wait — at the end we'll have the infrastructure review of the infrastructure review itself. Be patient for now. Drake, are you ready in Studio One? Okay. Hello, I'm Drake from the Phone Operations Center, and I'd like to present our numbers and maybe some anecdotes at the end of our part. Please switch to the next slide. Let's get into the numbers first. You registered about 5,195 SIP extensions, which is about 500 more than you registered at the last Congress. You also made about 21,000 calls, a little less than at the last Congress, but we are still quite proud of how you used our system, and it ran quite stably.
And as you may have noticed at the bottom, we also had about 23 DECT antennas at the Congress, or rather at this event. Please switch to the next slide. And this is our new feature: it's called the Eventphone Decentralized DECT Infrastructure, the EPDDI, which we prepared especially for this event. We had about 23 RFPs online throughout Germany, with 68 DECT telephones registered to them. But it's not only the German part that we covered: we actually had one mobile station walking out through Austria — through Passau, I think. So indeed we had a European Eventphone decentralized infrastructure. Next slide, please. We also have some anecdotes. Maybe some of you noticed that we had a working public phone in the rC3 world, where you could call other people on the telephone system. Other people also started to play with our system, and I think around yesterday someone introduced the C3 fire: you could actually control a flamethrower through our telephone system. I'd like to present a video here; next slide, please, maybe you can play it. I have quite a delay while waiting for the video to play. What you can see here is the C3 fire system, actually controlled by a DECT telephone somewhere in Germany. So, next slide, please. We also provided an SSTV server via the phone number 229, where you could receive pictures from Eventphone, like a postcard, basically: you could call the number and receive a picture, or some more pictures. Next slide, please. Yeah, that's basically all from Eventphone, and with that we say thank you all for the nice and awesome event. And yeah, bye from the first certified assembly, POC. Bye.

Thank you, POC. And hello, GSM — Linksys is waiting for us. Yeah, hello, I'm Linksys from the GSM team. This year was quite different, as you can imagine. However — next slide, please.
But we managed to get a small network running, and also a couple of SIM cards registered. So where are we now? Next slide, please. As you can see, we are just there at the red dot; there's not even a single line for our five extensions. But even so, we managed 130 calls with our five extensions. Next slide, please. So we got five extensions registered, with four SIM cards, and three locations with mixed technologies. Also two users so far, sadly. And one network with more or less zero problems. Let's take a look at the coverage. Next slide, please. We are quite lucky that we managed to get an international network running: we got two base stations in Berlin, one in the hackerspace, another one in the north of Berlin. And one of our members is currently in Mexico, and he's providing remote chaos network coverage there. So that's basically our network. Before we go to the next slide: what we have done so far is, well, we are just two people instead of 10 to 20, and we had some fun improving our network and preparing for the next Congress. Next slide, please. And now I'm closing with edge computing: we improved our edge capabilities. I wish you a hopefully better year, and maybe see you next year, remote or in person. Have fun.

Thank you. And I give a hand to Lindwarm for being the slide DJ all the time. He now has to switch to the Haecksen, who are next. They bring an image, and Meltzer is waiting for us in studio three. Hello! What are phones without people? So I'll give you an introduction here on how many people we needed to run the whole Haecksen assembly. We had around 20 organizing Haecksen, around 20 speakers in our events, and in total around 40 events — but I'm pretty sure I haven't even attended all of these. As you've realized, our world is pretty large: we needed around 7 million pixels to display the whole Haecksen world.
And that needed around 400 commits in our GitHub corner of the internet. Around 130 people received the fireplace badge in our case, and around 100 people tested our swimming pool and received that badge — so a great year for not actually going swimming. Also, around 49 people showed some very deep dedication and checked all the old memorials in our Haecksen assembly; congratulations for that, there were quite a lot of them. Our events ran over a BigBlueButton instance external to the Congress, and so, starting from day zero, we had no lags and were able to host up to 133 people in one session, and that was quite stable. We also took in new members: around 13 new Haecksen joined just for the Congress, and we have now grown to 440 Haecksen overall. We also got new Twitter accounts following us — over 200 more — so our messages are getting heard. But besides the virtual world, we also did some quite physical things. First of all, we distributed over 50 physical goody bags, with microcontrollers and self-sewn masks in them, as you can see in the picture. And sadly, we sold so many rC3 Haecksen items that they are now out of stock, but they will be back in January. Thank you.

Oh, thank you. I'm going to send thanks over to the Chaospatinnen, who are waiting in Studio One. This is Mike from the Chaospatinnen team. We've been welcoming new attendees and underrepresented minorities to the Chaos community for over eight years. We match up our mentees with experienced Chaos mentors; these mentors help their mentees navigate our world of Chaos events. DiVOC was our first remote event, and it was a good proof of concept for rC3. This year we had 65 amazing mentees and mentors, two in-world mentee-mentor match-up sessions, one great assembly event hosted by two of our new mentees, and a wonderful world-map assembly built with more than 1,337 kilograms of multicolor pixels.
Next slide, please. And here's a small part of our assembly, with our signature propeller-hat tables. Thank you to the amazing Chaospatinnen team — Fragulent, Yali, Azrael, and Lila Fish — and to our great mentees and mentors. We're looking forward to meeting all of the new mentees at the next Chaos event.

Yeah, I think that was my call. That was great. So next up we'll have, let me see, the C3 Adventure. Are you ready? Hello, my name is Roang. And I'm Matt. And we will talk about the C3 Adventure, the 2D world, and what we did to bring it all online. Next slide, please. Okay. When we started out, we looked into how we could bring a Congress-like adventure to the remote experience, and in October we started with the development. We had some trouble in that multiple upstream merges gave us problems, and also, due to Congress being Congress — or the remote experience being the remote experience — we needed to introduce features late, or add features on the first day. So that was merged just at 4:30 AM on the first day, and on the second day we finally fixed the instance jumps when you walk from one map to the next; we had some problems there, but on the second day it all went up. And I hope you have all enjoyed the badges, which have finally been updated and brought into the world today. What does that all mean? Since we started implementing, there have been 400 Git commits in our repository, all in all, including the upstream merges. But I think the more interesting number is what has been done since the whole thing went live: 200 additional commits fixing stuff and making the experience better for you. Next slide. In order to bring this all online, we not only had to think about the product itself, the world itself, but also about the deployment. The first commit on the deployer — a background service that brings the experience to you — was done on the 26th of November.
We started the first instance, the first clone of the WorkAdventure, through this deployer on the 8th of December. A couple of days beforehand, I was getting a bit swamped; I couldn't do all of the work anymore, because I had to coordinate both of the projects. So my colleague took over for me and helped me out a lot, and I'll hand over to him to explain what he did. Yeah, so imagine that on day minus 5, I get a message from a friend: hey, help is needed. So I say, OK, let's do it. And Roang tells me that we can spawn an instance, and we need to scale it somehow. So I spawn the deployer, and my music stops. I was streaming music from the internet, and I wondered: why did it stop? And I noticed that, oh, there are a lot of logs now. Like, a lot. And on day minus 4, I finally noticed that the deployer was spawning a copy of itself every few seconds, in a loop. So that was the state back then. From day minus 4 until day 1, we basically wrote the thing. And on day 1, we were ready. Well, almost ready: we had about four instances deployed. And I forgot to mention that when we were about to deploy 200 at once, it wouldn't work, because everything would time out. So we patched things quickly, and at 13:00 we had our first deployment. This worked, and everything was fine. And... why is everyone on one instance? It turned out we had a bug, not in the deployer but in the app, that would move you from the lobby to the lobby on a different map. So during the first day, we had a lot of issues with people not seeing each other, because they were all on different instances of the lobby. So we were working hard (next slide, please, so you can see that we're working hard) to reconfigure that, to bring you together in the assemblies. And I think we succeeded. You can see the population graph on this slide. The first day was almost our most popular one.
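The runaway deployer that spawned a copy of itself every few seconds is a classic failure mode, and a common guard against it is an exclusive lock so that only one copy of the process can run at a time. A sketch of that pattern using `flock` (hypothetical, not the original code; the lock path is made up):

```python
import fcntl
import os
import sys

def acquire_single_instance_lock(path="/tmp/deployer.lock"):
    """Hold an exclusive flock for the lifetime of this process;
    exit if another copy already holds it. This stops an accidental
    self-respawn loop from multiplying the deployer."""
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print("another deployer instance is already running", file=sys.stderr)
        sys.exit(1)
    fh.write(str(os.getpid()))
    fh.flush()
    return fh  # keep the handle alive; closing it releases the lock
```

The caller keeps the returned handle for as long as it runs; any second copy that starts will fail the non-blocking `flock` and exit immediately.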
And the next day, it would seem that it's not as popular, but we hit a peak of 1,600 users that day. What else? The most popular instance was the lobby, of course. The second most popular instance was hardware hacking for a while, then it was third, I think. Next slide, please. We counted, first of all, about 205 assemblies in total. The number was increasing day by day, because people were working on their maps throughout the whole Congress. For a while, CERT had over 1,000 maps active in their assembly, which led to the map server crashing; some of you might have noticed that it stopped working quite a few times during day three. They then reduced the number of maps to 255, and that was fine. At the end of day three, I counted about 628 maps. This is less than was actually available, because it was the middle of the night, as always, and it wasn't trivial to count them. But in the maps I found, we counted over 2 million tiles used, so that's something you can really explore. I wish I could have. But deploying this was also fun. Next slide, please. And... what? Yeah? Just a quick interjection: I really want to thank everyone who put work into their maps and made this whole experience work. We provided the infrastructure, but you provided the fun. So I really want to thank everyone. Yeah, and the more things happen on the infrastructure, the more fun we have, especially if we don't like to sleep. So we didn't: I basically arranged with Roang that I slept five hours during the night and he slept five hours during the day, and the rest of the time we were up. The record, though, goes to Roang, who is now 30 hours up straight, because the badges were too important to bring to you to go to sleep. The thing you see on this graph is instances being redeployed. We were redeploying things constantly, usually in the form of redeploying half of the infrastructure at any given time.
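Redeploying half of the infrastructure at any given time is a simple rolling strategy: split the fleet into batches and only take one batch down at once, so the other half keeps serving users. A small sketch of the batching step (names hypothetical):

```python
def rolling_batches(servers: list, fraction: float = 0.5) -> list:
    """Split servers into consecutive batches so that at most
    `fraction` of the fleet is being redeployed at the same time."""
    if not servers:
        return []
    batch_size = max(1, int(len(servers) * fraction))
    return [servers[i:i + batch_size]
            for i in range(0, len(servers), batch_size)]
```

A deploy script would then iterate over the batches, redeploying one batch and waiting for it to come back healthy before touching the next.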
The way it was developed, you wouldn't have noticed, since you wouldn't be kicked off your instance; but for a brief period of time, you wouldn't be able to enter a new one. Next slide. For a few days of the Congress, I had been saying that I was implementing a pseudo-Kubernetes thing, because it automatically deployed things and managed things and so on. And by day three, I noticed that I had achieved enlightenment through automation, because at some point we decided to redeploy everything at once. The reason was that we were being DDoSed, and we had to change something to mitigate that. So we did, and everything was fine. But we made a typo. We made a typo, and the deployment failed. And when the deployment failed, it deleted all the servers. So yeah, 405 servers got deleted by, as I remember it, a single line. Everything was brought back up automatically, so that wasn't a problem; it was all fine. But, well: to err is human, to automate mistakes is DevOps. Next slide. What's important is that these 405 servers were provided by Hetzner. We couldn't have done this without their infrastructure, without their cloud. The reason we got back up so quickly after this was that the servers were deleted, but they could be reproduced almost instantly, so the whole thing took about 10 minutes to get back up. And next slide. That's all. Thank you for testing our infrastructure, and see you next year. Thank you, C3 Adventure. So this was clearly the first conference that didn't clap for falling Mate bottles. If that's not a thing, maybe try next year. The lounge. I now have to ask for the next slide, too. The rC3 lounge artists. And I was asked to read out every place someone is in, because everyone helped to make the lounge what it was: an awesome experience.
So there were Berlin, Mexico City, Honduras, London, Zurich, Stockholm, Amsterdam, Rostock, Glasgow, Santiago de Chile, Prague, Hamburg, Mallorca, Krakow, Tokyo, Philadelphia, Frankfurt am Main, Köln, Moscow, Taipei, Taiwan, Hanover, Shanghai, Seoul. I think that's it, sorry. Vienna, Hong Kong, Karlsruhe, and Guatemala. Thank you guys for making the lounge. So the next is the hub, and its presenter should be waiting in Studio 2. Yeah, hi. I'm presenting the hub, which is a software we wrote for this conference. It's based on different components, all of them based on Django, and it's intended to be used for future events as well. Our main problem was that it's new software. We wrote it, and a lot of integrations were only possible on day zero or day one. Even today, on day four, we did a lot of updates and commits to the repository, and even the numbers on the screens are already outdated again. But as you can possibly see, we have a lot of commits all day and all night long, with only a small dip at 6 a.m. Sorry for that. Next slide, please. And for the numbers: you were quite busy using the platform. Some of these numbers on the screen are already outdated again. Out of the 360 assemblies which were registered, only 300 got accepted; most of the rest were events, or people wanting to do a workshop and trying to register an assembly for it, or duplicates. So please organize yourselves. Events: currently we have over 940 in the system, and you're still clicking events. Nice. Thanks for that. The events are coordinated with the studios.
So we are integrating all of the events of all the studios, the individual ones, and the self-organized sessions, all of them. A new feature: the badges. Currently, you have created 411, and from the badges redeemed, we count 9,269 achievements and 19,000 stickers. Documentation, sadly, was 404, because we were really busy doing stuff. Some documentation has already been written, but more will become available later. We will open-source the whole thing, of course, but right now we're still in production and cleaning things up. And finally, for some numbers: total requests per second were about 400. In the night, when the world was redeploying, we only had about 50 requests per second, but it maxed out at 700 requests per second. The authentication for the world, the 2D adventure, was about 220 requests per second, more or less stable, due to some bugs and some heavy usage. So yeah, we appreciate that you used the platform and the new hub, and we hope to see you at the next event. Thanks. Hello, hub. Thank you, hub. Next, Bitterlas is waiting for us. He's from the C3RT team, and he will tell us what he and his team did this year. I'm Bitterlas from C3RT, and we've been really busy this year, as you can probably see from the numbers on my next slide. We have 37 confirmed auti-angels, and today we surpassed the 200-hours mark. We had 10 orga meetings leading up to the event, and there are almost 5 million unique pixels in our repository. I'm pretty convinced we've managed to create the smallest fairy dust of rC3, provided by an actual space engineer. And the tree of solitude is not the only thing we've managed to create or contribute to this wonderful experience. On our next slide, you can see that we also contributed six panel sessions for autistic creatures to discuss their experiences, and five play sessions for them to socialize.
We helped contribute a talk, a podcast, and an external panel to the big streams. On our own panels, we had up to 80 participants that needed to be split into five breakout rooms so they could all have a meaningful discussion. All their ideas and thoughts were anonymized and stored in more than 1,000 lines of Markdown documentation that you can find on the internet. But 1,000 lines of Markdown wouldn't be enough for me to express the gratitude I have towards all the amazing creatures that helped us make this experience happen, and all the amazing teams that worked with us. I'll be happy to see you all again soon, but now I think I will need some solitude for myself. Thank you, Bitterlas. So, Lindwurm, are you ready? The next one is a video, as far as I know. It's from the C3 Inclusion Operations Center; I don't quite remember their short name, C3IOC. And I was counting down: three, two, run, go. So video is a very difficult thing to play these days, because we only used to do stuff live. Live means that a lot of pixels and traffic go from this glass here through all the wires and cables and back to the glass of your screen. This is like magic to me, somehow, although I am only a robot talking synchronously with all the heads. OK, I've spent enough time now, I think, so let's switch back to Lindy with the video. Hello, everyone. I'm NWNG from the new C3 Inclusion Operations Center. This year, we've been working on accessibility guides to help the organizing teams and assemblies improve the event for everyone, and especially for people with disabilities. We have also worked with other teams individually to figure out what can still be improved in their specific range of functions, but there is still a lot to catch up on.
Additionally, we have published a completely free and accessible CSS design template that features dark mode and an accessible font selection. And it still looks good without JavaScript; 100 internet points for that. From you visitors, we have been collecting feedback via mail and Twitter, and we won't stop after the Congress. If you stumbled across some barriers, please get in touch via c3ioc.de or @C3Inclusion on Twitter to tell us about your findings. Thanks a lot for having us. Thank you for the video. Finally, the technology is working. Does someone know computers, maybe? Kritis is one of them, and he is waiting in Studio One to tell us something about C3 Yellow, or C3 Gelb, if you will. Yeah, welcome. I'm still looking at this hard drive; maybe you remember it from the very beginning. It has to be disinfected really thoroughly, and I guess I can take it out by the end of the event. And for the next slide with the words, please: we found roughly 777 hand-wash options and 3FF waste-disposal possibilities. We checked the correct date on almost all of the 175 disinfectant options you had around here. And because at a certain point in time the people from CERT were not reachable in the CERT room, since they were running around everywhere else in this great 2D world, we had the chance to bypass and channel all the information, because there were two digital cats on a digital tree; and so we got the right help to the right place. Next slide, please. We have a couple of things ongoing. A lot of work had been done before; we had all the studios with all the corona measures going on. But now we think we should really look into an angel disinfectant swimming basin for next time, to have the maximum option of cleanliness, and we will see whether we can maybe use these MaxiCubes globally in the upcoming time.
Apart from that, in order to get more Bachblüten and everything else, we need someone who is able to help us with potentiating homeopathic substances. So if that sounds like you, please just drop us a line at info at c3gelb.de. Thank you very much and good luck. Thank you, Kritis. I'm finally happy to hear your voice; I only know you from Twitter, where we tweet our ideas at each other. And talking about messages: Chaos Post was here, too, and our next presenter, whom we already heard earlier, has more to say. Okay, welcome. It's me again; I've changed outfits a bit. I'm not here for the signal angels anymore, but for Chaos Post. So, yeah, we had our online office this year again, as we had at DiVOC before, and I've got some mail numbers for you that should be on the screen right now. If it's still the title page, please switch to the first one, which lists a lot of numbers. We had 576 messages delivered in total; these are numbers from around half past five. Twelve of them we weren't able to deliver, mostly because of nonexistent or full mailboxes. We delivered mail to 34 TLDs, the most going to Germany, to .de domains, followed by .com, .org, .net, and Austria with .at. We had a couple of motifs you could choose from; the most popular one was the fairy dust at sunset, which 95 people selected. Next slide. About our service quality: we had a minimum delay, from a message coming in, through us checking it, to it going out, of a bit more than four seconds. The maximum delay was about seven hours; that was overnight, when no agents were ready, or they were all asleep or busy with, I don't know, the lounge or something. On average, a message took us 33 minutes from you putting it into our mailbox to it getting out. Some fun facts.
We had issues delivering to T-Online on the first two days, but we managed to get that fixed. A different mail provider refused our mail because it contained the string of our domain, rc3.world, in the mail text; apparently new domains are scary and you can't trust them, or something. We created a ticket with them, they fixed it, and it was super fast; super nice service. Yeah, and also some people tried to send digital postcards to Mastodon accounts, because they look like email addresses or something. Another thing that's not on a slide: we had another new feature this time, named recipients. So you could, for example, send mail to someone by nickname without knowing their address. They also have a really nice postcard wall where you can see all the postcards you sent; the link for that is on Twitter. Thank you. Thank you, Chaos Post. Lindwurm, are you there? Yeah, yeah, I'm here, I'm here. Hello. So we are almost done. I hear you. So I have to switch some more slides; it's kind of stressful for me, really. You're doing an awesome job, thank you for doing it. So, just out of curiosity, did you have problems accepting any cookies or so? No, not really. I heard somewhere that some really smart people had problems using the site because of cookies. Oh no, that was not my problem; I could only not use the site because of overcrowding. That was often one of my little problems. And please, I hope you don't see what I'm doing right now in the background with starting our paths and so on. And what I wanted to say to all of you: this was the first Congress where we had so many women and so many non-cis people running the show, being in front of the camera, and making everything happen. I really thank you all. Thank you for making that possible, and thank you that we get more and more diverse year by year. I can only second that. And now we are switching to the C3 infrastructure team. Yeah, we need to; I'm sure a lot will be answered better by them.
And I'm trying to bring up the slides for that, but I can't find them right now. Yeah, welcome to the infrastructure review of the team infrastructure. I'm not quite sure if we have the newest revision of the slides, but my version of the stream isn't loading right now, so maybe Lindwurm can press Ctrl-R. If you're seeing a burning computer, then we have the actual slides. It's just played as karaoke, without the background music and without the PowerPoint presentation in real time. Now I'm seeing myself; let's wait a few seconds until we see the slide. You want to wait out the entire stream delay? It's just about 30 seconds to one minute. Yeah, I'm tease, and I'm waiting; and this is Patrick, and he's waiting. Yeah, but that's in the middle of the slides. Can we go? Okay. Yeah. I'm now seeing something in the middle of the slides, but it seems fine. Okay. We are team C3 infra, and we built the infrastructure. Next slide. We had about nine terabytes of RAM and 1,700 CPU cores. Over the whole event, only one SSD died, so not everything is broken. We had five dead RAID controllers and didn't bother replacing the RAID controllers; we just replaced them with new servers, and: 100% uptime. Next slide. We looked at boot screens of enterprise servers for about 42 hours; 20 minutes max per boot is what HP delivered, and we are now certified enterprise-server observers. We had only 27% of visitors using IPv6, so that's even less than Google publishes. And even though we had almost full IPv6 coverage, except for some really, really shady out-of-band management networks, we're still not at the IPv6 adoption that we were hoping for. I'm not quite sure if those are the right slides, and I'm not quite sure where we are in the text. Yeah, Patrick. Yeah. So before the Congress, there was one prediction: there is no way it's not DNS. And well, it was DNS, at least once. So we checked that box, and let's go over to the next topic, OS.
We provisioned about 300 nodes, and it was an Ansible-powered madness. There was full disk encryption on all nodes, no IPs logged in the access logs (we took extra care of that), and we configured minimal logging wherever possible. So in the case of problems, we only had warnings available: no info logs, no debug logs, just the minimal logging configuration. With some software, we had to pipe logs to /dev/null, because the software just wouldn't stop logging IPs and we didn't want that. So: no personal data in logs, no GDPR headache, and your data is safe with us. The Ansible madness I talked about was a magical deployment that stepped into the live system and assimilated it into the rC3 infrastructure while it was still running. So if you didn't touch the machine, it just kept running. When an OS deployment was broken, it was almost always due to network or routing; at least, the OS team claims that, and this claim is disputed by the network team, of course. One time the deployment broke because of our trigger-happy infra angel, but let's not talk about that. Of course, at this point we want to announce our great cooperation with our gold sponsor, DDoS24.net, who provided an excellent service of handcrafted requests to our infrastructure. There was great public demand, some million requests per second for a while, but even during peak demand we were able to keep serving most of our services. We also provided some infrastructure to the Evoque, and they quickly made use of the infrastructure deployed there; an amazing time to market overall. We had six locations, and those six locations were some wildly different special snowflakes. At one we had 816 CPU cores, two terabytes of RAM, and a 10 gigabit per second interconnect. There was also a one terabit per second InfiniBand available, but we couldn't use that, which would have been nice.
The machine store had a weird and ancient IPMI, which made it hard to deploy there, and the admin on location had never deployed bare-metal hardware to a data center before, so there was also some learning experience. Fun fact: this was the data center with the maximum heat. One server, seven units, over 9,000 watts of power, 11.6 kW to be exact, where they had to take some creative heat-management measures. Next was Frankfurt. There we had 620 gigabits of total uplink capacity, and we actually used only 22 gigabits during peak demand, again courtesy of our premium sponsor DDoS24.net. There was zero network congestion, and about 1.5 gigabits per second across the IP versions, so there was no real traffic challenge for the network engineers among you. It was a full layer 3 architecture with MPLS between the routers, and there was a night shift on the 26th and 27th for more servers, because some shipments hadn't arrived yet. The fun fact about this data center was the maximum bandwidth: some servers there had 50 gigabit uplink configured on the server. It was the data center with the maximum manual intervention; of course, we had the most infrastructure there, and it wasn't oversubscribed at any point. We had some hardware in Stuttgart, which was basically the easiest deployment. There were also some night shifts, but thanks to Noina and team, this was a really easy deployment. It was also the most silent of the DCs: no incidents from day minus five until now. So if you're currently watching from Stuttgart, you can create some incidents now, because now we've said it. It was also the smallest DC: we only had three servers, and we managed to kill one hardware RAID controller, so we could only use two servers there. Then Hamburg was the data center with the minimum uptime: we could never deploy to this data center, because the network was broken and we couldn't provision anything there. And of course, the sixth data center was the Hetzner cloud, where we deployed across all locations.
A fun fact: we received a COVID warning from the data center. Luckily, it didn't affect us; it was at another location. But thanks, Hetzner, for the warning. The team lead of a sponsor needed to install Proxmox in a DC without any clue what they were doing. We installed Proxmox in the Hamburg DC, and no server actually wanted to talk to us, so we had to give up on that; and a lorry had to be relocated before we could deploy other servers, so that was standing in the way there. Now, let's get to Jitsi. Our peak user count was 1,105 users at the same time on the same cluster. I don't know if it was at the same time as the peak user count, but the peak conference count was 204 conferences; I hope you can still beat that today, but that is data from yesterday. The peak conference size was 94 participants in a single conference. Let me offer condolences to your computer, because that must have been hard on it. Our peak outgoing video traffic on the Jitsi videobridges was 1.3 gigabits per second, and about three quarters of the participants were streaming video while one quarter had video disabled; interesting ratio. Our Jitsi deployment was completely automated with Ansible, so it was zero to Jitsi in 15 minutes. We broke the Jitsi cluster up into four shards for better scalability and resilience, so that if one shard went down, it would only affect a part of the conferences and not all of them, because there are some infrastructure components that you can't really scale or cluster; so we went with the sharding route. Our Jitsi videobridges were at about 42% peak usage, excluding our smallest videobridge, which was only 8 cores and 8 gigabytes; we added it in the beginning to test some things out, and it remained in there. And yes, we over-provisioned a bit.
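Sharding a conferencing cluster like this means pinning each conference to exactly one shard, so all of its participants land on the same signaling stack, while a dead shard only takes out its own conferences. One deterministic way to do the pinning is a stable hash over the conference name; a sketch of that idea (hypothetical shard names, not the actual rC3 routing logic):

```python
import hashlib

SHARDS = ["shard-1", "shard-2", "shard-3", "shard-4"]

def shard_for(conference: str, shards=SHARDS) -> str:
    """Stable hash-based mapping: the same conference name always
    lands on the same shard, spreading conferences across the cluster."""
    digest = hashlib.sha256(conference.encode("utf-8")).digest()
    return shards[int.from_bytes(digest[:4], "big") % len(shards)]
```

The important property is determinism: a participant joining "infra-review" from any entry point resolves to the same shard, without any shared lookup state.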
There will also be a blog post on our Jitsi Meet deployment coming in the future, and for the upcoming days we will enable 4K streaming on there, so why not use it? We want to say thanks to the FF Meet project, who contacted us after our initial load test and gave us some tips to handle load effectively and so on. We also tried making DECT call-out work; we spent 48 hours trying to get it working, but there were some troubles, so sadly there is no adding DECT participants to Jitsi conferences for now. Our Jitsi instance on rc3.world will be running over New Year, so you can use that to get together with your friends. Over New Year, stay separate; don't visit each other, please, and don't contribute to the spread of COVID-19. You've got the alternatives there. Now let's go over to monitoring. Yeah, thanks. First of all, it's really funny how you loaded this page, but reveal.js doesn't work that way until Lindwurm reloads the page, which hopefully he doesn't do right now. Everything's fine, so you can leave it as it is. Yeah, monitoring. We had a Prometheus and Alertmanager setup, completely driven out of our one and only source of truth, our NetBox. We received about 4,385 critical alerts (looking at my mobile phone, it's definitely more right now) and about 13,070 warnings (also definitely more right now), and we attended to about 100 of them; the rest were kind of useless. Next slide, please. As it's important to have an abuse hotline and an abuse contact: we received two network abuse messages, both from Hetzner, one of our providers, letting us know that someone doesn't like our infrastructure as much as we do; props to DDoS24.net. And we got one call on our abuse hotline, and it was a person who wanted to buy a ticket from us. Sadly, we were out of tickets.
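Driving Prometheus entirely from NetBox as the single source of truth usually means exporting device records into Prometheus' file-based service-discovery (file_sd) JSON format. A sketch of that transformation on already-fetched records; the field names and the node-exporter port 9100 are assumptions for illustration, not the real rC3 schema:

```python
import json

def netbox_to_file_sd(devices: list) -> str:
    """Turn NetBox-style device records into Prometheus file_sd JSON,
    one target group per site, skipping devices without a primary IP."""
    groups = {}
    for dev in devices:
        if dev.get("primary_ip"):
            groups.setdefault(dev["site"], []).append(
                f'{dev["primary_ip"]}:9100')
    return json.dumps(
        [{"targets": targets, "labels": {"site": site}}
         for site, targets in sorted(groups.items())],
        indent=2,
    )
```

A cron job or webhook regenerates the file whenever NetBox changes, and Prometheus picks it up on its own, so the monitoring targets never drift from the source of truth.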
Some other stuff: we got a premium Ansible deployment, brought to you by Turing-complete YAML; that sounds scary. And we had about 130k DNS updates, thanks to the world team, who were really stressing our DNS API with their redeployments. Also, our DNS, Prometheus, and Grafana are deployed on and by NixOS, thanks to Flipge; head over to Flipge's interweb thingy, he wrote some blog posts about how to deploy stuff with NixOS. And the next slide, please. The last slide from the infra team is the list of our sponsors. Huge thanks to all of them; it wouldn't be possible to create such a huge event and such loads of infrastructure without them. And that's everything we have. Amazing, thank you for all you've done, truly incredible, and for sharing everything with the public. So, I promised that there would be a kind of behind-the-scenes look at this infrastructure review, and I really have nothing to do with it; everything was done by completely different people, and I'm only a herald, somehow lost and teleported into this dream. So I'm just going to say: switch to wherever, show us the magic. Hello, and welcome from the last point of the infrastructure review, and greetings from Karlsruhe. So, three hours ago I got a call from Lindwurm, and he asked me how it is with this last talk; we have it, but it may be a bit complicated. And he told me, OK, we have a speaker, and I'm the herald, oh, as always. And then we realized: we don't have only one speaker, we have 24. For that, we called Karlsruhe and built up an infrastructure, which Dampfkatze will explain to you now, in a short minute, I think. So, thank you. Yes. Oh, I lost the sticker. OK, after we called Chaos West, we came up with this monstrosity of a video cluster, and we start here: the teams streamed via OBS.Ninja into three Chaos West studios; these were brought together via RTMP on our mix-one local studio, and then we pumped that into mix two, which pumped it further to the VOC. The slides were brought in via another OBS.Ninja
directly onto mix two; they came from Lindwurm. Also the closing, which you will hopefully see shortly, will come from there. Yusuf and Lindwurm were directly connected via OBS.Ninja to our mix-one computer, and mix two also has the studio camera you're watching right now. For the backend communication, we had a Mumble connected to our audio matrix, so Lindwurm, Yusuf, the teams, and we in the studio could all talk together. And now back to the closing. No, to the Herald News Show; I think Lindwurm will introduce it to you. Lindwurm is live. Is Yusuf still there, or do you come with me? So, it will take a second. Yusuf? Yes. So, thank you very much for this review. It was as chaotic as the whole Congress.