Richard Hartmann, RichiH, will do the FOSDEM infrastructure review.

Hello, everyone. Like last year, this will be a short overview of the actual infrastructure: a bit about the changes, what we did, and what we did differently. But I expect you to also have questions, because this should basically be driven by feedback: what people specifically want to know, what they might need to know for their own conferences or whatever. So please do ask questions; the talk section is not very long, and the questions are almost the most interesting part of this.

The physical setup is pretty much unchanged from last year. We still use the wireless access controller from the university. Most of the cabling is pre-installed, with some more upgrades to single-mode fiber; this used to be multi-mode. We can actually push more than 10 gigabits out of the U building, so that's also fine. Most of the access points are ULB's; in Janson, as you probably saw, we put up our own, but otherwise it's pretty much all ULB infrastructure, and we get some say in where we want improvements over the years.

Again, this time we have our upstream sponsored by Colt. We might even have two upstreams next year; we hope so, and we'll work on that. We were supposed to have that this year, but it didn't work out. It doesn't matter; it works. We still have an ASR 1006, which is way overpowered for what we need, but we get it for free, so that's fine. We also get a few switches, which we put in place, and then we basically roll out the configuration from last year, which remains mostly static, which is a very good thing. All the custom cabling, the build-up of the access points, the placing of the video boxes and the cameras, all of this happens basically from Friday noon-ish until Friday evening or Friday night, depending on how well things go. There are a lot of hands doing things in parallel, and because this is always the same venue and we have the same people who know what they're doing, it goes a lot quicker than at a lot of other conferences, which sometimes switch venues.

So, what is new? We have even more people, which distributes the load quite nicely, so we actually get to sleep sometimes. This is really, really nice, and it's getting better every year; most of us aren't even sick of FOSDEM these days. We even managed to install the server and the main router in December, and most of the configuration in December as well, simply to have more time to do stuff. There were still some things which we could only do on the last day, because we didn't get access to everything we needed, but by and large we had quite some time to prepare and to make sure things were actually automated, not written in a hurry and in blind panic. This is obviously a good thing. So this year, the network basically worked at 16:00 on Friday, which is a record. Three years ago, or was it four, two of us, basically me and Merly, stayed until twenty past four in the morning, returned at seven-ish, and things only started working during the opening talk. So when you compare a few years ago to now, it's amazing. Same for the monitoring: it started working, with all the bells and whistles we needed, a lot earlier. Again, we could reuse part of what we put in place last year, and we can put all of it back within seconds for next year, so this is also getting better and quicker.
Video had some hiccups this morning, as you probably noticed; we lost a few minutes of talks in some of the rooms. We had to rearrange a few things, but we expect this to be better next year as well. So we are actually getting into a state where we can reuse things and actually see improvement over time. As it's supposed to be, most of this is invisible to you, which is why you're sitting here wanting to know what we are doing.

We are now a LIR. Basically, we became a member of RIPE NCC, which means we get our own permanently assigned AS number, our own /22 of IPv4 space, and our own /29 of IPv6 space if we need it. We still use the old /48, but we've got more than enough v6 address space now. And obviously, we still get the temporary PI /17 from RIPE NCC for roughly two weeks, which is where we put all the people on FOSDEM-ancient for dual stack. We also have some services still in that range, but we moved almost all of them over to our permanently assigned PA space, simply because this allows us to keep the configuration the same every year: the same host names, everything the same, instead of potentially having to use different IP space. Thankfully, RIPE NCC gave us the same IP space three times in a row, which was really good and saved a lot of work because we did not have to renumber, but we don't want to rely on that happening year after year. And at some point, maybe we won't even need IPv4 and can go v6-only, as we already do on the main SSID.

So how do we do this? If you connect to FOSDEM, you only get v6 addresses, but you still need to be able to reach the v4 world because, as we all know, you have to. So how do we do this? Basically, we lie to you. You ask for some service that only has an A record pointing at some IPv4 address. We detect this and hand you back a synthesized AAAA record with an IPv6 address that routes to our ASR; you connect to that, and on egress you get translated to v4. We actually used up several IPv4 addresses for this, because there are only about 65,000 outgoing ports per v4 address. At peak, I think we used six or eight v4 addresses for parallel connections, times 65,000 each, so there are a lot of connections going out. We do this on the ASR and on one of our servers, and that's how we lie to you. If you have any issues: a, file bugs and fix your stuff, and b, migrate to FOSDEM-ancient, where you get dual stack, native v6 and v4. This is also unchanged compared to last year.
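To make the lying concrete, here is a minimal sketch of the DNS64 synthesis step. This is an illustration, not our production configuration: it assumes the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052, while the prefix actually configured on the ASR may differ.

```python
# Minimal sketch of DNS64 address synthesis (illustration only).
# Assumes the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052;
# a real deployment may use a different, locally routed prefix.
import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(a_record: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    v4 = ipaddress.IPv4Address(a_record)
    # The client asked for a AAAA and got none, so the DNS64 resolver
    # fabricates one that routes to the NAT64 translator.
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

# Example: an upstream A record of 93.184.216.34 becomes
# 64:ff9b::5db8:d822; the client connects to that over v6, and the
# translator rewrites it back to v4 on egress.
print(synthesize_aaaa("93.184.216.34"))
```

On the return path the translator strips the prefix again to recover the original v4 destination, which is why a single /96 is enough to reach the entire v4 internet.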
Our backups are still done with Oxidized, which still runs quite nicely, and you should use it if you don't already. Our config management and deployment is still done with Ansible, but there's a lot more in Ansible now, and it's a lot cleaner. Last year it was really fragile; this year we can even deploy from several laptops independently, and it still works, which is a really good improvement. We're using Prometheus for monitoring, as should you, and Grafana for visualization, as should you. I'm not sure if this was actually announced, we might have forgotten. Did we tweet it? OK. So, on dashboard.fosdem.org you'll find a few stats and graphs about what we're doing. For IP management, we tried to use NIPAP; we probably won't in the future.

To show you even more about what we did, and this is also new: this is the video dashboard, I think. Yeah, it doesn't matter. What you're seeing here is basically the stream that goes out: you see the slides, and the output and input of the slides box. The slides box is this one, and one that looks quite similar is attached to every camera. These basically ingest the VGA. That's fine. I'm not sure what this does to the video recording if it goes back and forth, but it doesn't matter. Sorry. So basically, we ingest VGA into the box, and we output VGA to the projector. We also dump the whole stream to disk, and we stream it back home, where we turn it into the actual online streams. It's similar for the camera box. It's pre-made; you basically just put it in place and connect a few cables, and most of the time it works quite reliably. There's also a little display, which you can't see from here, giving information about the stream, the IP address, and the MAC address, which is really helpful for debugging. All the designs and all the hardware you need are open: look at our GitHub account, there's a video repository in there, and you'll find everything you need to rebuild these if you want to. Or just write us an email and we can tell you about it. What we also have, again new, is this dashboard showing the speaker, the input and output of the box, and basically charts to see if anything is breaking. We just hung this on the wall, and people can look at it.

So this is our router and server, which looks a lot better than last year, because it's not balanced on chairs; we actually have a rack, which is new. We might even be allowed to keep the rack in that place, maybe even keep hardware there, so we don't have to carry it back and forth every year. Maybe we'll even be allowed to keep it switched on. We don't know yet; we hope so.

This is something you might enjoy: our server farm, or rather our video re-encoding farm. Initially, I was supposed to tell you that you could buy vouchers at the infodesk to get these machines, because we basically bought them off eBay and are reselling them. But apparently they're all gone. I'm not quite sure, but I think that's the case. If you want one, head over to the infodesk in K and try your luck, but I'm relatively sure they're all gone. These are X220s with i5 processors. It's actually cheaper to do it this way than to rent machines, which is kind of weird, but that's the way it is.

So, this is the outlook from 2016. Yes, we managed to have even more preparation, and we have more in Ansible. We managed to migrate, well, some stuff, but no real progress there. As I said, we became a LIR, and our offsite and onsite infra is moving towards our own addresses. Our goal of having a conference-in-a-box is not there yet, so we really have to keep working on that, but we are on a good track. Hopefully, at some point, you can clone at least part of our configuration and just do your own thing wherever you are. So: prepare stuff in advance, refine it over the year, reuse the same stuff, and get some sleep. Yeah, this is the old one. Sorry, that's dashboard.fosdem.org. And this is our infrastructure: you can clone it, you can file issues if you want to, and just steal stuff and suggest stuff.

Questions. Do we have two microphones? No. So the question was how many engineers work on the FOSDEM infrastructure. Four years ago, I would have said three-ish. Now, I would say about a dozen.
But there are people who work all year round or prepare a month in advance, and there are people who basically pop in and know one specific thing, be it Grafana, be it Prometheus, be it wireless, be it whatever. They pop in, help a little, and pop out. So it's not very well defined, but by having this mix we can, a, draw on more people, and, b, on more diverse people with different backgrounds, so we get specialists for various things.

I think the next one was somewhere here. No, one here. So the question was whether we had any issues with the wireless during the first hour of the conference. Yes, we repeatedly had issues with parts of our infrastructure, especially the wireless. The main reason is that this is not our infrastructure: we get it from ULB. We can make suggestions, and we can put up a few more access points, but in Janson, for example, you simply have too many people. What you would need to do is rip out all the old access points, put in ceiling-mounted high-density ones with a lot of different antennas, and then basically segment the space below them. So this is not a technical problem; it's a funding and also a planning problem, because ULB keeps repeating that they want to renovate all of Janson, which they never do, but every year the plan is there again. So they are not really willing to invest more money, and we are so many people that we break the access point infrastructure. It would technically be possible to improve this a lot, and we would even be willing to hand money to ULB to fix it, but at the moment it's more administrative than technical. No, no, it's fine. Yeah. So the comment was that it's not a complaint, just to know. We fully understand: if it breaks, it sucks, and we would like to avoid it, but that's the reason. And no offense taken; I didn't think it was a complaint.

So, I was asked about DNSSEC. The inherent problem with what we're doing with DNS64 is, as I said, that we are lying to you. If you use DNSSEC, we break DNSSEC, because we are lying: you ask for something, there is only an A record, and we give you a AAAA. So DNSSEC and DNS64 cannot work together. If you need DNSSEC, or if you have security requirements, which is totally fine if those are your constraints, use FOSDEM-ancient; that's why we keep it, to keep things like that working. But there is no way to make DNS64 and DNSSEC work together, because their design goals are directly opposed. It's by design. I won't repeat everything, but basically I got a suggestion for how to fix it. I think I have one issue with it, which would break things, but I suggest you join #fosdem-network on Freenode and repeat what you said, because that's the better venue to discuss it. I think it might break because we are doing this on the network level, not on the host level; and if it's something I would need to do on every host, we cannot do it for FOSDEM, because I can't. But again, let's discuss this later.
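As a side note on that incompatibility, and hedged as a sketch rather than anything we run: because the synthesized AAAA is unsigned, a validating stub resolver will reject it, so translation-aware clients discover the NAT64 prefix explicitly via the ipv4only.arpa mechanism from RFC 7050 instead. A minimal illustration, assuming the third-party dnspython package and that your configured resolver is the DNS64 one:

```python
# Sketch of RFC 7050 NAT64 prefix discovery from a client.
# Assumes the third-party dnspython package (pip install dnspython).
import ipaddress
import dns.resolver

def discover_nat64_prefix():
    # ipv4only.arpa has only A records (192.0.0.170/171) by definition,
    # so any AAAA answer must have been synthesized by a DNS64 resolver.
    try:
        answers = dns.resolver.resolve("ipv4only.arpa", "AAAA")
    except dns.resolver.NoAnswer:
        return None  # no DNS64 in the resolution path
    addr = ipaddress.IPv6Address(answers[0].address)
    # For a /96 prefix, the embedded IPv4 address sits in the low
    # 32 bits; mask it off to recover the translator's prefix.
    return ipaddress.IPv6Network((int(addr) & ~0xFFFFFFFF, 96))

if __name__ == "__main__":
    print(discover_nat64_prefix())  # e.g. 64:ff9b::/96 on many setups
```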
The question was whether we have any stats on usage of ancient versus the main network. A, it's on the dashboard, or it should be: there should be v6 versus v4 usage. And b, it will be part of the closing talk, which I still have to write.

The question was why there is no IPv4 on the Wi-Fi. It's quite simple. We were pretty much the first public non-networking conference with v6-only, or even with v6 enabled, on the main SSID. Most of the other ones had IPv6 enabled on secondary SSIDs at that point; we were the very first to do v6-only with DNS64 and NAT64 on the main one. Not even RIPE or the IETF did that, as far as I'm aware. Why are we doing this? This is a developers' conference, and we want your things to break and for you to fix them, because you will need to do that at some point. Back when we first did it, there were a lot of people who had never once in their lives been exposed to v6. That's different today, but that was the main motivation: to see if it breaks and burns, or if only something burns. It went well enough. The first time we did this, the then-current Ubuntu desktop version had issues with v6-only. By chance, I knew the old main developer of NetworkManager, and I saw him walking by somewhere, so I grabbed him, put him in the NOC, gave him a laptop, and said: OK, this is broken, try to find out why. In the meantime, we sent all the users over to the Ubuntu stand to complain about it not working, because we couldn't do anything ourselves. He found out what it was, and we sent him over to talk to the right people to actually get a fix in; I think it took a day or two. And this is why we do it: to force people to acknowledge that v6 is there and it will come.

Speak up. So the question was about the portable radios we are carrying. Yes, you may ask about them. We rent them from someone; go to the infodesk in K and ask there. I have no idea: they just appear, they work, and they go away. Now they're using radio with a repeater station we put up, so we can reach AW.

So the question was, I think, again why we are moving away from v4 and more or less forcing people onto v6, correct? No, there are no technical reasons as such, at least not scalability on our side, because we've got a /17 from RIPE NCC, which translates to about 32,000 addresses, if I remember correctly. That is way more than enough, and if we needed more, we could get more. That's not the issue. It's really about forcing people to see where things break, especially as this is an open source developers' conference. A quick back-of-the-envelope on those numbers is below.
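As promised, the quick arithmetic, using only the numbers quoted in this talk (the eight addresses are the peak mentioned earlier; the roughly 65,000 is the usable source-port range per address):

```python
# Back-of-the-envelope from the numbers quoted in the talk: the size of
# the temporary /17 and the NAT64 source-port budget per v4 address.
pool = 2 ** (32 - 17)        # a /17 holds 32,768 IPv4 addresses
ports_per_addr = 65_000      # roughly; the full port range is 65,535
peak_nat64_addrs = 8         # peak egress addresses quoted above

print(f"/17 pool size: {pool:,} addresses")
print(f"peak concurrent NAT64 flows: {peak_nat64_addrs * ports_per_addr:,}")
# -> 32,768 addresses, and roughly half a million concurrent flows
```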
So the question was what hardware we use for Wi-Fi and whether we would recommend it. Basically, yes, but it depends on your scale. At large scale, you will probably want to go with Cisco or Aruba; they're quite expensive. If you have 50 people in your office, or maybe even 100, you might want to look at Ubiquiti, because they're a lot cheaper and good enough. But it depends on your use case. Anyone else? Just louder.

So the question was why we use multiple laptops instead of a cloud engine of some sort. Because we were able to get those laptops quite cheap. Basically, if we had had a few servers that were beefy enough, we could have done this ourselves, but it was just cheaper this way. Also, we don't want to rely on an outside connection, because as I said, right now we have one uplink; if that breaks and we lose anything local because of it, that's really bad. And laptops have built-in UPSs, which is quite nice.

The question was whether we have UPSs on everything else. No, we don't, because at the switches, for example, there is no UPS in that place. We do have a UPS in our main room, which we're now using because, before, someone cabled up all the video laptops and then plugged them in; that was the time when both the server and the router went off because the circuit breaker tripped. So yeah, that is now on UPS. Anyone else?

So the question was what CMDB we are currently using and why we are looking for alternatives. We are currently using a MoinMoin wiki, and we're looking for alternatives because we're currently using a MoinMoin wiki.

The question was how fast the uplink is. It's 10 gigabits, and it's way too much for us, because as we only have Wi-Fi, no one can really use it except ourselves with the video.

The question was whether there is 10-gig network cabling within the building. Our own stuff, no, because we don't have switches that are quick enough; we only have whatever we get from the Cisco demo pool, basically. But everything that is single-mode could easily do 10 gig, or 20, or 50; the fiber doesn't care.

The question was whether there is any upstream video equipment. The only thing we have upstream is a few cloud servers that do the streaming setup for us, because we duplicate the streams outside of the building. I think we're done. Thank you.