So, moving on to the last lightning talk for the day, it's Richard with the FOSDEM Infrastructure Review.

Yes, hi. I have too many of these, so I'm just going to do it that way. So, infrastructure review. There is, fortunately, not too much to talk about, at least not too much that's different. There's one thing which actually changed for us: this is the first time ever where we actually had time to sit down and reflect and not just firefight. We were kind of weirdly anticipating that something had to break at some point, because surely it couldn't just work; it had to explode at some point. And it didn't, and that's the first year ever in 19 years, so that was really nice.

So our official word of the conference is "boring", of course. We didn't have any hard problems. We have reached this magical place of stability where we can more or less take last year's conference out of storage, toss it over the new year, and most of it just fits. Basically, we're using code, we're using config files, we're doing all these things which people should be doing anyway, and we are actually seeing the benefits of it. We'll see some timelines later; it's a huge change for us. And everything we do change is just iterating on top, so we don't have to reinvent the wheel every two years, or sometimes even every year; we can iterate on what we did last year. So we actually got a lot of sleep, for FOSDEM levels: some people even got six hours or so, and that's a lot.

The core infra is pretty much unchanged from last year. We still have a Cisco ASR 1006. It does all the routing and all the more network-close features, like the ACLs protecting our infrastructure, and the NAT64, which magically turns FOSDEM's IPv6-only network into something from which you can still connect to your old legacy stuff, unlike dual stack, where you'd have two IP addresses. What we actually do at FOSDEM is lie to you: every time you request a name which only has an A record (an IPv4 address) behind it in DNS, we rewrite this into a AAAA record and hand that to your client. When your client takes this IPv6 address and tries to connect to the Internet, we intercept it automatically and push it out into the IPv4 world. All of this happens on the ASR.

We now have two servers which actually run, are actually maintained, and are modern, which is really, really nice. And everything is managed with Ansible. All the monitoring is done with Prometheus and Grafana, which works really, really well, and all the public dashboards are exposed too, so you don't need us too much: just look at dashboard.fosdem.org.

For the video stuff, we have those nice video boxes. They have been the same for three years now, and they actually do their work really, really nicely. We send our streams to a render farm in our heating area behind the NOC. It's literally a stinky heating area, but it has servers. The render farm itself is a ton of these laptops, which we buy in bulk on eBay and then just sell off after the event, same as last year. And no, they're all gone already, I'm sorry. I think one of these costs 130 euros, so for next year, if you want a really cheap laptop, talk to the infodesk early. From there, all the streams go into the cloud, and there we do the actual processing, which is the streaming.
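(To make the DNS64 trick above concrete, here is a minimal sketch of the synthesis step such a resolver performs, using the RFC 6052 well-known prefix 64:ff9b::/96. The talk doesn't say which prefix FOSDEM actually uses, so treat the prefix and the helper name as illustrative.)

```python
import ipaddress

# Well-known NAT64/DNS64 prefix from RFC 6052; a real deployment may use
# a network-specific prefix instead.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4: str) -> str:
    """Embed an IPv4 address in the NAT64 prefix, as a DNS64 resolver
    does when a name only has an A record."""
    v4 = ipaddress.IPv4Address(ipv4)
    # The IPv4 address occupies the low 32 bits of the /96 prefix.
    v6 = ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))
    return str(v6)

print(synthesize_aaaa("93.184.216.34"))  # -> 64:ff9b::5db8:d822
```

The NAT64 box then reverses the mapping: it strips the prefix off the destination address and forwards the traffic as plain IPv4.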
We also have our system where speakers automatically get an email directly after the talk, and devroom managers get an email directly after the talk, asking them to review the pre-cut raw version of it. Once that initial review is done and you, as the speaker or devroom manager, click okay, off it goes to be transcoded one last time and uploaded to the mirrors. This takes a lot of effort and time out of what we're doing, and you can actually just clone this and use it for your own conference if you want to.

Looking at the timelines, as you can see, we improved massively over the last years. Two years ago, we installed our core infrastructure Friday afternoon, because we didn't have any access before that. This improved a lot. The actual network work used to be distilled panic until we almost fell asleep in the morning; our record was Saturday, 5 o'clock in the morning. Well, actually, we came back and fixed it, I think, during the opening talk. We had a choice between two talks, one without network and one with; we made the call to have the one with network, and it barely worked. This has become a lot better: now we more or less finalize all the work we have to do on the day before FOSDEM, which is a huge difference. And again, this is due to code and configuration reuse, which all of you should do a lot more than you're probably doing. Monitoring is the same story. Video is also the same story: two years ago, we actually lost quite a bit of video. This changed massively, so now things actually work, and even for the first talk you get a full video stream and everything, which is obviously nice for all those people who are not on site.

There is one thing which we started to notice. Now that we are not always in a state of panic, not just throwing people at problems with a dozen people fixing the same thing, we've actually become kind of lazy in some regards. Two things really made it stand out that we have to improve this part. First, we forgot to switch out the background for the live video streams, so anyone watching live still saw the 2017 background for, I think, three hours until we changed it. The uploads are also going to be fixed; that part is easy. And, for example, our internal t-shirt tracker, which you can again see on dashboard.fosdem.org, thinks it's still 2017, because we just didn't change the config; we only reset the counters. So once you start automating things, especially for conferences, you really have to keep a definitive list of the parts of the configuration which you actually need to change each year. Once you have all this automation, things just work, so you ignore all the tiny bits and pieces, and that really gets you into a bad place.

We still don't have a decent CMDB. If anyone knows a good one, we'd really like to know; of course, they all suck, at least as far as I could discern. If you want to look at our monitoring, it shouldn't break down; the room is small enough. No, actually, we upped all the resources, so even if you all hit it again, it should probably hold up. And yes, all our infrastructure work is being done in the open.
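(A hypothetical illustration of the "definitive list of things to change" point: a tiny pre-conference lint that walks a config tree and flags anything still mentioning last year. This is not FOSDEM's actual tooling; the directory layout and the year are made up.)

```python
import pathlib
import sys

LAST_YEAR = "2017"                      # the value that must no longer appear
CONFIG_ROOT = pathlib.Path("configs")   # assumed repo layout, adjust to taste

def stale_files(root: pathlib.Path, needle: str):
    """Yield every file under root that still contains last year's string,
    so stale settings (stream background, t-shirt tracker year, ...) get
    caught before the event instead of three hours into it."""
    for path in root.rglob("*"):
        if path.is_file():
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            if needle in text:
                yield path

if __name__ == "__main__":
    hits = list(stale_files(CONFIG_ROOT, LAST_YEAR))
    for path in hits:
        print(f"still mentions {LAST_YEAR}: {path}")
    sys.exit(1 if hits else 0)  # non-zero exit fails a CI pre-flight check
```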
So if you want to have a look at our stuff, or at the laser-cut stencils for the video boxes and what you need to build your own video box, that's all in the open. If you miss anything, just file an issue and we'll upload it, but I think we should be pretty complete, except for the stuff with passwords in it, for obvious reasons. And that's already it. So now we come to questions, and I hope you have a few. Yes, I'll just repeat the questions.

What's a CMDB? A Configuration Management Database. Basically a database where you put: what devices do I have? What services should be on each device? How should it sit in the network? How should those services be configured? What IP addresses should be attached to what services? What DNS names, and so on. So basically, the state your whole system should have; from there you push it into the real world, to make sure the real world actually has that state. It also makes monitoring a lot easier, because you have this ideal state of the world in your database, you can configure your monitoring exactly from it, and any delta is something you need to fix.

Yesterday I saw a tweet about a man-in-the-middle attack. Do you know about this? Yes. Do you have information? The thing is, all these things quickly touch legal stuff, so we fall back to phrases like "we suggest you take care", and what we actually saw and what actually happened, we can't really tell, for obvious reasons; it would give the people who did it more insight into what we actually know and what we did. There are some things which we couldn't do because we don't have full configuration access to the university's infrastructure. We could have actually stopped it pretty quickly if we had full access; that's something which, again, we'll take up with the university. We get full visibility, but we don't get any write access to their infrastructure, which is already really good, but it's not as good as it should be. Long story short: encryption. Lots and lots of encryption. You shouldn't make a single non-encrypted connection to anywhere, no matter if you're here or at home. You should always have your firewall running, iptables or whatever, and discard everything other than related, established, and outgoing. All these things really, really matter, because as these attacks become easier and cheaper to do, more stupid people will actually try to do them. No worries.

Hey, are you planning on changing anything for next year, since it got a little stable this year? Yes, but nothing fundamental. We're thinking about upping the power of the router. We're still trying to get a second uplink in here, so we can actually have two redundant routers; as it is, if the sole uplink or our main ASR dies, you're all offline immediately. These are things which we really, really ought to do, but we are constrained by the outside world. For the rest, we would like to have more of our Grafana dashboards in actual code, so we don't do them by hand; we just push and deploy them. There are a few other bits and pieces, but basically it's just taking what we have and making it even more efficient. One notable exception is these laptops: they're still installed by hand, and I really dislike that process. I would like to have one master image and a few USB sticks, and just install, install, install, done. That might be something for next year, I don't know.
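(A rough illustration of the CMDB idea described in the answer above: one record of desired state per device, with monitoring derived from the delta against what is actually observed. All hosts, addresses, and services here are invented for the example.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Host:
    name: str
    ip: str
    services: frozenset

# Desired state: what the CMDB says the world should look like.
desired = {
    "mon1": Host("mon1", "2001:db8::10", frozenset({"prometheus", "grafana"})),
    "dns1": Host("dns1", "2001:db8::53", frozenset({"bind"})),
}

# Observed state: pretend this came from discovery or monitoring.
observed = {
    "mon1": Host("mon1", "2001:db8::10", frozenset({"prometheus"})),
}

def delta(desired, observed):
    """Any difference between desired and observed is something to fix."""
    for name, want in desired.items():
        have = observed.get(name)
        if have is None:
            yield f"{name}: missing entirely"
        elif want != have:
            missing = want.services - have.services
            if missing:
                yield f"{name}: services not running: {', '.join(sorted(missing))}"

for problem in delta(desired, observed):
    print(problem)
# -> mon1: services not running: grafana
# -> dns1: missing entirely
```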
Okay, sounds good. Sorry, just because I love breaking gags: you said that everything worked perfectly, but actually there were some Wi-Fi issues in Janson. Do you have any idea what it was? Just the number of clients, or what?

So, the question was about Wi-Fi breaking down in certain places, for example Lameere, Chavanne and Janson, which are the largest rooms. Yes, we know pretty much exactly what the reason is, but the thing is, we can't actually change it without configuration access to the WLC, the wireless LAN controller of ULB's network. The very short answer is that the access points are not configured ideally: especially on 2.4 GHz, they're all sending at max power, and there's massive cross-talk between the different channels. We sometimes even see channel utilization of 90%, which is basically just random crap floating around without any data being transmitted. So this is really, really bad. We could fix this relatively easily if we had either access to the controller, or better, sector antennas: not omnidirectional ones, but antennas covering segments of the audience. None of this is a hard technical problem, but it's hard for us to solve it within the context of FOSDEM or ULB. We always try to get more access, but obviously they are wary of this one single event, which happens once a year, changing all their configuration. I can understand that, but it's still a pity, because we could probably help their infrastructure quite a bit. But I get why they don't want to do it. Sorry.

Do you have the option of just asking them to shut everything down for the weekend? That's what you usually do in hotels and such. No, they wouldn't shut it down, and we would also need to pull a lot more cable and install a lot more hardware, which we don't even have. We are really glad that we can piggyback on top of their stuff: we connect a few more access points to the WLC, and that's about it networking-wise, or Wi-Fi-wise, because we can't do anything more. So yeah, it's definitely not an option. Thank you. You're next.

So, for all the devices currently connected to your network, you're able to see MAC addresses as they effectively roam across the campus. Do you persist any view of how people actually flow around this infrastructure, anonymized, like a heat map or something? It might help in understanding which events are more interesting to people and how people move around. There are two answers to this. Yes, we can easily see how many associations we have; that's trivial. What's kind of hard is to aggregate this into useful locations, because the naming scheme of the access points is somewhat random, so it's really, really hard to place them correctly. Sometimes they have names which would imply they're somewhere, and they aren't. So this is relatively hard for us to grasp. On top of that, and you'll see this number in the closing talk, we had 11K-ish MAC addresses last year, and this year we had almost 200,000. This is a direct function of privacy extensions: people's devices keep getting new MAC addresses regularly. We could, for example, parse out which MAC addresses actually belong to established vendors and which are obviously random, and somehow get data out of that, but at least as of right now that's a manual or semi-automated process, and it really takes a lot of work. There's one here, I think.
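(The vendor-versus-random split mentioned above can be approximated from the MAC address itself: randomized addresses set the locally-administered bit, so they carry no real vendor OUI. A minimal sketch; the example addresses are made up.)

```python
def is_locally_administered(mac: str) -> bool:
    """True if the MAC has the locally-administered bit set (bit 0x02 of
    the first octet), which MAC privacy/randomization schemes use; such
    addresses don't map to a registered vendor OUI."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

macs = [
    "3c:22:fb:aa:bb:cc",  # universally administered: real vendor OUI
    "da:a1:19:00:11:22",  # locally administered: likely a randomized address
]
for mac in macs:
    print(mac, "randomized?", is_locally_administered(mac))
```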
So my question is not related to the infrastructure, but is it possible to have larger rooms, for example, for the next FOSDEM? For example, the AW building: I tried several times to attend talks there, but I couldn't get in. We are always trying to get more rooms, and ULB is actually building out rooms, and we are always getting as many rooms as we can. We might be able to get two more quite large capacity rooms next year in the U building. They're on the fifth floor and really hard to find, so we will probably lose a few visitors and they'll die. But we would have more rooms. Also, all those people dying in some corners of U because they starved would not be in the rooms anymore, so that would help. No, but seriously: we are getting as many rooms as we can, and we are over capacity, we know about it.

Okay, also just as a reminder, you didn't estimate the number of attendees this year. The number of what? Of attendees. It's really hard to do, especially given that the privacy extensions are now making all our assumptions wrong. We don't have any tickets or anything. It's a lot.

The video render farm was a bunch of laptops. Can you say something about why it's so interesting to have them be laptops instead of, I don't know, small anonymous-looking boxes or something? It's a relatively easy reason: these have a UPS built in. And you can repurpose them; for example, we took a few, put them in the NOC, and now have them running as computers which display our internal dashboards. You can have them here, you can have them everywhere. I think we bought 40 of these, we keep five or ten spare, and if one breaks, we just put in the next one. Having the same stuff for everything is a huge benefit. What's also really nice: we buy them in bulk, so we get a good price, and directly after FOSDEM, or even during FOSDEM, we sell them off at cost. So we basically get a loan for free, attendees get cheap laptops, and we don't have to store them, maintain them, or make sure they still work next year. Of course, we buy them refurbished, with a warranty from whoever sold them. So it's just easier and cheaper.

How much of this infrastructure is translatable to other locations? There is a ton of stuff which you couldn't transpose easily. Most people won't have the exact same networking hardware, you wouldn't have the same IP addresses, you wouldn't care about our BIND configuration all too much, and so on. But the main stuff should be relatively easy. From the services side, you need a working network, and you need working multicast on the network; that's kind of icky, to be honest. Other than that, these boxes just work. If you were to build a dozen, they'd just work, and our streaming setup and everything you should be able to reproduce. And if you can't, just send an email; we are more than willing to help.

Okay, so yesterday's problem with the Internet connectivity was related to the wireless network, correct? Yes. Thanks, I thought it was some core equipment or something like that. No, the core network was always stable.

Okay, I think we're way over time anyway, so thank you very much, and see you at the closing talk.
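(As an aside on the multicast requirement mentioned in the Q&A above: here is a minimal smoke test, assuming an IPv4 network, to check whether a network actually forwards multicast between two machines. The group and port are arbitrary picks, not FOSDEM's actual values.)

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004  # arbitrary administratively-scoped group

def send(message: bytes = b"hello multicast"):
    """Run on one machine: emit a single multicast datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(message, (GROUP, PORT))

def receive():
    """Run on another machine: join the group and wait for one datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = sock.recvfrom(1024)
    print(f"got {data!r} from {addr}")
```

If `receive()` never sees the datagram, the switches or APs in between are dropping or not forwarding multicast, which is exactly the sort of thing that would keep the video boxes from working on another venue's network.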