The next speaker is Richard Hartmann, and he will be talking about the FOSDEM infrastructure and give a review of it. Please welcome Richard Hartmann.

Thank you. Just a second, let me unify my output... okay, it works. Sorry, I should have done this earlier.

Okay, so: infrastructure review. The subtitle is very important. This is the third year in a row where we didn't have any major incidents, and things are settling down. Same as always, I'm going to give a quick overview of the current state, because we always have new people, and of what changed over the years. And then, uncommonly for lightning talks, we actually have questions, so we will be passing the mic around. Anything you want to know, you're more than welcome to ask.

This is almost the same talk as last year and the year before, which is really, really good for us. It means things have become more and more settled. I started doing infrastructure, taking over the video network and the attendee network, around 2015-ish. Back then everything was on fire, all the time, at the same time, in all the places, which obviously was not very nice for our nerves and sleep cycles. That got majorly improved. In the roughly twenty years of FOSDEM, this is the third year where staff could actually sit down and not run in circles all the time; the seventeen years before that were maybe not as nice. I could even spend half a day in my devroom without having to run out and put out fires. The first year we had this, people were super nervous, because something is about to happen, something must break, everything is working, this is not normal. Now people are getting more relaxed, which is again really good for us. So we have this place of stability.
We don't need to throw everything out the window and reimplement it within a week; we have a place of stability, and we actually get a lot of sleep compared to the years before, which is really, really nice.

Core infra is basically the same as last year. We still have the ASR 1006, which is doing all the network stuff: it's doing ACLs, it's doing DHCP, it's doing all the VLANs for the Wi-Fi, it's doing the BGP to the upstream. We have the same two servers, which are now really fully managed by Ansible; we actually redeployed them last week and nothing broke, which was super nice. Well, actually, one dashboard broke. We are at the point where we can really run this conference out of a box and be really quick about it. Our monitoring is still Prometheus into Grafana; we are super happy with this. And we put the public dashboard on the Cortex cluster run by Grafana Labs, so it's backed by some actual capacity: we don't get hammered to death when I tweet about dashboard.fosdem.org, it actually stays stable, which is nice.

The video box has a completely new, or somewhat new, version. You can see all the updates in the repository, and there is a talk, which ended I think forty minutes ago, that goes into detail about how those boxes are actually built. Same as last year, the video boxes stream into the render farm, which we'll see a picture of in a few seconds, and those also transcode everything for streaming off-site. From there it's distributed to everyone who wants to watch the videos. Anyone who is a speaker or a devroom manager will get emails pointing to SReview, which allows you to self-cut your talks, which is super nice. If any of you are organizing conferences, this is really, really nice, because you get an overview of the different audio streams; if you have several video streams, you can choose which one you want to see, when it starts, when it ends, whether it needs some improvement.
Maybe it needs some cleaning up of the audio track: you can give that feedback to the video team, they clean it up, do whatever, and republish it for your cutting. Then you mark exactly where your talk starts and where it ends. This is super nice, because otherwise we would have to do literally hundreds and hundreds of reviews ourselves; I think the record was that we got more or less done just before the next FOSDEM, which kind of sucked. So parallelizing this and handing it to the actual speakers and devroom managers is super, super nice. If you're running a conference, I highly suggest you do something similar.

The render farm: yes, it is literally a heap of ThinkPads. Same as the years before, we buy them in bulk off eBay, all the same model, we use them over the weekend, and then we sell them at cost at the infodesk; that usually happens on the Saturday. So if you want a cheap laptop, which has also been used for FOSDEM, you can get one. We even leave the data on them; of course, it's not secret data. And the nice thing is, every year we get a quicker machine, because every year we just buy the next generation. These are X250s. It works, and it also scales really nicely: if you have ten more devrooms, you just add a few more laptops, and that's it. And they have a built-in UPS, so if there's a power outage, they actually keep running, which is also nice.

One thing we did change: we switched our DNS64 to CoreDNS, which is super brand-spanking new. They literally haven't even cut a release; one of our team members is a CoreDNS maintainer, and this is an experimental branch running DNS64 just for here. And it worked. We load tested it, or rather you all load tested it a little, and we actually saw a 50% reduction in CPU usage compared to BIND. Of course, everyone hates BIND. I mean, it works, and it keeps the internet alive, but no one likes it.

Some timelines. Router installation is more or less static.
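As an aside on the render farm: its whole job is bulk transcoding, which in practice boils down to each worker running something like an ffmpeg invocation. This is an illustrative sketch, not FOSDEM's actual pipeline; the file names, bitrates, and codec choices here are assumptions (VP9/Opus in WebM matches the open streaming formats FOSDEM publishes):

```python
# Hypothetical sketch of a render-farm worker's transcode step.
# Builds the ffmpeg command a worker would run to shrink a raw
# room dump into a WebM suitable for streaming and archiving.
def transcode_cmd(raw_path: str, out_path: str) -> list[str]:
    return [
        "ffmpeg", "-i", raw_path,
        "-c:v", "libvpx-vp9", "-b:v", "1M",    # open VP9 video codec
        "-c:a", "libopus", "-b:a", "128k",     # open Opus audio codec
        out_path,
    ]

cmd = transcode_cmd("room_raw.ts", "room.webm")
# A worker would then hand this list to subprocess.run(cmd, check=True).
```

Scaling then really is just "add more laptops": each one pulls the next raw file and runs the same command.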
That's totally fine. Network-up time we actually improved by one full hour, thanks to the passive cabling, which was super nice. As you can see, the 2015 one was really bad: I got to leave here at like five in the morning, and things barely worked. Back then we had two opening talks, one saying "we have network" and one saying "we are sorry, we don't have network". So things really improved. Same for the monitoring: monitoring was actually up year-round. The servers ran through, and except for that redeploy, we had monitoring 24/7 for the whole year. It didn't monitor a lot, but we had it running, so that's also super nice. The video team also improved. 2016 was kind of icky, of course; we lost quite some video content, as you can see, something like 26 rooms times two hours. That hurts. They also get more sleep now; literally, a few of the video team slept in the NOC a few years ago. So all of this gets better, which is nice.

For next year we want centralized logging through Loki, so everyone can see what's happening at the same time. All the video boxes and such will do their logging to a central instance where you can dig deeper into things. We have a dashboard; I invite you to hammer it so it goes down. Well, it doesn't go down, but hammer it if you want. Pretty much all our stuff is in there or in the video repository. We are also actively talking to people, trying to help them bootstrap their own conferences. We actually had several groups of people whom we showed around our infodesk, NOC and so on, so they get a feeling for how we do things, and we explained how to run a conference. If you have a conference with the same intentions as FOSDEM, fully open source, no major interests by any companies, a community thing, feel free to talk to us. We are actively trying to reach out and spread this to other places.

That's it, and I hope you have a few questions. Thank you.

There's one.
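On the Loki plan mentioned in the talk: centralizing logs with Loki usually means each box POSTs its log lines to Loki's push endpoint. A minimal sketch of the payload shape that `/loki/api/v1/push` expects; the label values and log line here are made up, and a real video box would send this over HTTP rather than just building the dict:

```python
import time

def loki_payload(job: str, line: str) -> dict:
    # Loki's push API takes streams of (nanosecond-timestamp, line)
    # pairs, with the timestamp encoded as a string.
    return {
        "streams": [{
            "stream": {"job": job},                      # labels for this stream
            "values": [[str(time.time_ns()), line]],     # one log entry
        }]
    }

payload = loki_payload("videobox", "encoder started")
# e.g. requests.post("http://loki:3100/loki/api/v1/push", json=payload)
```

With every video box shipping to one instance like this, "everyone can see what's happening at the same time" becomes a single Grafana query over the `job` label.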
I just want to know how to get one of those sweet ThinkPads.

The ThinkPads: on Saturday at, I think, 10 or 11:30, we have a sale at the infodesk every year; I don't know if that's the exact time. For this year they're already gone, but for next year, be at the infodesk early-ish on Saturday and just ask when the sale is. Yeah, for this year it's too late; they go super fast, because it's like 30 or 40 laptops for however many thousand attendees.

Thank you for all the great work. Is there any data on how much the NAT64 has been used? Do we see more native IPv6 traffic? And is that somehow related to the proportional drop in CPU usage for the NAT64?

To answer the second question first: no. We ran BIND during this weekend and we switched over during this weekend, so we could compare directly; these are actual numbers which directly relate to each other. As to the first question, we have two networks: one is dual stack, one is IPv6-only with NAT64 and DNS64. Most of the traffic we see is on the IPv6-only network; that's already been the case since last year. I actually dropped that slide, because it's been the same for the last two years: IPv6 has more or less won. The major IPv4 usage we see is mainly VPNs, which we think are stuck on either literal IPv4 octets in the configuration, or old versions of OpenVPN, or something else which just didn't support IPv6. But most traffic these days is IPv6, except for what we think is VPN traffic.

Do you also log the video streams? I mean, a couple of years ago there was a lot of talk about how, if it was on YouTube, it was Flash, it was proprietary Adobe, whereas you're streaming in WebM with open source. Is there any difference, or don't users care today?

You mean the stats for the people watching the talks? What are the statistics about that? Yes-ish, we do, but not in a nicely integrated system.
We are moving more and more stuff into our observability platform. We have a bit of a split between the video team and the infrastructure team: the video team is independent in what they do, but they keep adding more things to our observability, so we have more insight into what's actually happening. And all of that data is completely public on, just a second, yeah, dashboard.fosdem.org. Everything we have, everything we use internally, you can see literally the same thing.

So what exactly is the render farm used for, and why not use, dare I say it at this conference, a cloud provider?

First question, what the render farm is used for: transcoding video. The raw video comes in; you have the stream from the speaker's laptop and the stream from the camera. Both are dumped to local disk here, and they're also streamed to the laptops, which transcode them, basically reduce the size, for streaming and for another disk dump. And why we're not using a cloud farm: we did use one, I think in 2014 or '15. We actually used Google Cloud back then, because we had some issues and that was quick, and we had a few people who could do it quickly. But we prefer to use open source software, and most cloud stuff is not AGPLv3, which means it's not really fully open source in the actual intention of open source, at least under my, or most of our, interpretation. So we just prefer to do it locally. Also, if our internet ever cuts out, we have everything local, so at least the people on-site could still watch things. But the main reason is an ethical one: we don't want to export this to somewhere else. The only thing we really rely on is Grafana Cloud, because people destroyed our instance in the early days, and that meant we didn't have monitoring ourselves, and that kind of stuff.

Anyone else? Any plans to further disincentivize the use of FOSDEM-dualstack instead of the main Wi-Fi SSID?
Because I was wondering: if you keep, for example, the SSID stable, people who connected to it three years ago and didn't change their laptop, didn't change anything, will still connect to that one. If you, for example, changed from FOSDEM-dualstack to FOSDEM_dualstack, they would be forced to make the choice again.

Yes-ish. We used to call it "legacy", for internal political reasons that got changed to "dualstack". We don't actually mind if someone is on dual stack, and I don't want to break anyone's connections; people are more than welcome to use dual stack. The reason we have IPv6-only on the main SSID is that this is a developers' conference: we want to nudge people towards IPv6-only, because they tend to fix stuff. We got a major distribution to fix quite a few things once we just started sending everyone who had complaints to their booth and telling them, okay, talk to your distribution, because it's their fault. They fixed it really quickly. But we don't want to be too pushy about it.

Thanks a lot, Richard, for your talk.
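For reference, the DNS64 mechanism behind the IPv6-only network is conceptually simple: when a name has only an A record, the resolver synthesizes a AAAA record by embedding the 32-bit IPv4 address in the low bits of a /96 NAT64 prefix, and the NAT64 gateway undoes the mapping on the wire. A minimal sketch using the RFC 6052 well-known prefix (FOSDEM's actual prefix may differ):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix, 64:ff9b::/96.
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4: str) -> str:
    # DNS64: place the IPv4 address in the low 32 bits of the prefix.
    v4 = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(int(WKP.network_address) | v4))

print(synthesize_aaaa("192.0.2.1"))  # 64:ff9b::c000:201
```

This is also why IPv4 literals baked into VPN configs break on that network: there is no DNS lookup to synthesize from, so no NAT64 mapping is ever triggered.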