Thanks to everyone! You did it, NorthSec 2021! We hope you liked it; I know I had a blast, and that last stretch was intense. First, I'd like to acknowledge the teams that ended with more than 100 points: Coldroot, Shopify, DCI, X-Men, Zendesk, Kernel Space Invaders, We Buy Zero Days, Panico Village, X-Ray Troop, Shared Sec, Okiak, and Gildarus Tigers. Now, I know you're all eager to know this year's winners, but you'll have to wait a bit more, because NorthSec is not only a CTF, it's also a lab for a pretty impressive LXD deployment from our incredible infrastructure team. So I'll pass the virtual mic to Stéphane Graber, who has a few things to say about that.

All right. So, I'm the head of the infrastructure team at NorthSec, and we run everything for the CTF and the infrastructure needed throughout the year and during the event. We also run things like the public website, so we're a year-long team, a bit different from some of the other teams at NorthSec. Here's a bit of a view of what we're doing. NorthSec is based primarily on Ubuntu, LXD, and Ceph these days, with pretty fast bonded networking, using Ubuntu 20.04 LTS everywhere with the current HWE kernel and Livepatch. For those who are not very familiar with LXD: it's a system container manager and virtual machine manager developed by my team at Canonical. NorthSec uses it quite extensively and has for years. It lets us very easily run virtual machines and containers, which is the whole point of it, but at pretty large scale, including clustering, advanced networking, storage, and all that kind of stuff.
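To give a flavour of the workflow described above, here is an illustrative sketch of the LXD CLI (the instance names are made up for the example; they are not NorthSec's actual names):

```sh
# System container from an Ubuntu 20.04 image
lxc launch ubuntu:20.04 web-challenge

# Same image, but as a KVM virtual machine instead of a container
lxc launch ubuntu:20.04 team-vm --vm

# List the members of an LXD cluster
lxc cluster list
```

The same `lxc` client drives both containers and VMs, which is what makes the nested setup described next manageable.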
In the case of NorthSec, we're running three levels of LXD these days: one level on the bare metal, across eight physical servers; on top of that, a bunch of virtual machines running a secondary cluster that's just used for the CTF; and then each team gets their own LXD server that runs the actual challenges.

As for the things we changed this year: we moved to 20.04 (we used to be on 18.04), so we pretty much reinstalled and redeployed everything. We changed some hardware too. We used to have dedicated machines that would run just infrastructure services; we've now moved to a unified set of hardware that runs both CTF and infrastructure services. That gives us a lot more resources and more headroom, we can easily move things around, we don't need specialized hardware, and we've got less stuff in the rack. As part of that, we also used to have dedicated machines acting as the firewalls for everything; those are now just a bunch of other containers on the infrastructure. We've got five containers that handle routing and firewalling for everyone, and they talk to each other over BGP, which gave us the ability to run anycast services effectively.

In past years, we would run, say, two DNS servers; you'd get two addresses, but everyone would always hit the first one and nobody would ever hit the other. This year, we had a single address which actually hit three servers, transparently load-balanced inside the infrastructure. If one server goes away, you won't even notice, and we had a perfect split between all three DNS servers and all three HTTP endpoints throughout the entire event, which really made things a lot simpler. It also means that if we notice something going slow, we can just deploy another one and get some extra capacity that way.
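The anycast arrangement can be sketched with a BGP daemon such as bird2: each service host carries the same address on a local interface and announces it to the routing containers, so the network picks the nearest live instance. This is a hypothetical snippet under assumed addresses and private ASNs, not NorthSec's real configuration:

```
# bird2 sketch: announce a shared anycast service address from this host.
# 198.51.100.53 and the ASNs below are documentation examples.
protocol static anycast4 {
    ipv4;
    route 198.51.100.53/32 via "dummy0";   # service address bound locally
}

protocol bgp core {
    local as 64512;
    neighbor 192.0.2.1 as 64512;           # one of the routing containers
    ipv4 { export where source = RTS_STATIC; };
}
```

With the same static route announced from three DNS hosts, withdrawal is automatic: if a host dies, its BGP session drops and traffic converges on the remaining two.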
So again, as far as scale: each team had up to 20 participants this year. We had 38 challenge containers for each team, plus two Windows virtual machines. In total, that means around 854 participants across 88 teams, and eight physical servers, as I mentioned, running 50 infrastructure containers — all of the main services like DNS, HTTP, the Discourse forum, and so on. We also had 52 virtual machines that ran the actual challenges, and inside those, 88 very large containers, one per team, plus 176 virtual machines (the Windows VMs). Within those 88 team containers, if you count every single team, you end up with 3,344 challenge containers running across the entire cluster. As far as hardware, that means 208 CPU cores and 416 CPU threads right now, and quite a bit of RAM, just shy of 2 terabytes.

The other thing we changed this year, other than removing some servers, is that we started moving from Intel to AMD. I don't know how much you managed to get out of your Windows VMs, but those were actually running on very shiny new AMD EPYC servers. Those machines give us two CPUs of 64 threads each per chassis; we've got two of those, so 256 threads effectively across those two machines. We also started moving from conventional SATA-based SSDs over to enterprise NVMe SSDs, because we were just killing the SATA SSDs too fast. At NorthSec they would usually die within a couple of years, which was a bit annoying, so hopefully the enterprise-grade drives will fare better. We're still using our six older Intel servers; those ran all of the internal services and all of the Linux challenges. We plan, in the next year or two, to be pretty much all on AMD EPYC.
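The container and thread counts above follow directly from the per-team figures; a quick sanity check, using only numbers from the talk:

```python
teams = 88
challenges_per_team = 38    # challenge containers per team
windows_vms_per_team = 2    # Windows VMs per team

challenge_containers = teams * challenges_per_team
windows_vms = teams * windows_vms_per_team
cpu_threads = 208 * 2       # 208 cores with 2-way SMT

print(challenge_containers, windows_vms, cpu_threads)  # 3344 176 416
```
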
That also has some nice advantages around hardware vulnerabilities: AMD is currently in a bit of a better spot than Intel in that regard, and that's definitely been saving us a bunch of resources by not having to mitigate quite as much as we used to.

As far as the remote setup: obviously we were not in person this year, it's the second year we're doing that, so the physical infrastructure was hosted at my place here in Montreal. Everything is UPS-backed, so a short power outage would have been fine. Public internet was routed over BGP to my home network using two internet connections: a symmetric gigabit fiber link as the main one, and a cable link as fallback in case something went wrong. When you connected, you were actually being load-balanced between eight different VPN servers, each handling 11 teams, so we could spread that load across the infrastructure and avoid the issues we had last year, where everything just died at the beginning because the VPN server hit some kind of kernel issue with too many active neighbors or something along those lines. This year we had no chance of hitting that, because we were spreading the load much better, and we also had the kernel bug fixed, so that helped. Once connected, teams would land in the exact same VLAN as they would if they were on site, which makes things very easy for us: next year, when we're in person again, there's nothing weird to change, we can use the exact same setup and just drop the VPNs.

So, what went wrong? That's always kind of an interesting one.
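The even split of teams across VPN endpoints can be sketched as a simple static assignment. This is only a sketch: the talk says 11 teams per server but doesn't describe the actual balancing mechanism, so the round-robin below is an assumption:

```python
teams = [f"team-{i:02d}" for i in range(88)]
vpn_servers = 8

# Static round-robin sharding: team i lands on server i % 8,
# which gives each endpoint exactly 88 / 8 = 11 teams.
assignment = {}
for i, team in enumerate(teams):
    assignment.setdefault(i % vpn_servers, []).append(team)

print([len(v) for v in assignment.values()])  # [11, 11, 11, 11, 11, 11, 11, 11]
```

A static split like this also keeps a team's traffic pinned to one endpoint, so a single misbehaving server only affects its own 11 teams.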
So, before the event, we noticed bugs in OpenVPN on Linux and OpenVPN on Windows, two different bugs: on Linux, it ignores the fragment option, so it just doesn't fragment, and on Windows, it forgets to set the don't-fragment flag on the outer layer. We had to work around both of those issues in different ways, changing configs and kernels at the last minute, to avoid really weird MTU issues during the CTF. We also found, well, not so much a bug as an upstream design weakness that we had to work around in our NAT64 gateway, which was hard-coding a very low MTU; we had to bump that so that VPN traffic would not get needlessly fragmented.

During the event, we noticed that the askgod Discord integration, which is one of our own pieces of software, had a bit of an issue: it was processing posts multiple times in parallel. That's why we had to take out the forum for a little while at the beginning, because posts were showing up three or four times, which obviously was not good. We also had Linux soft CPU lockups show up on a few of the virtual machines; we think a hardware or firmware glitch triggered that, but it effectively took out something like 16 teams, which is why we had to suspend the CTF for half an hour or so to go fix it. We also found that systemd-networkd has a bug when dealing with a lot of interfaces: it just doesn't know how to cope, and we were losing connectivity as a result. We also had scheduled tasks not spreading out over time properly, with logrotate, PHP session cleanup, keyring updates, and sysstat all triggering at the same time and causing a bunch of issues. And Q-in-Q VLANs (VLAN in VLAN) had some issues with duplicated MAC addresses, and our switch was not playing nice with that, so we had to change all the MAC addresses just before the CTF, which was a bit of a pain.
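A rough MTU budget shows why a tunnel that mishandles fragmentation hurts. The overhead figures below are typical values for an OpenVPN-style UDP tunnel over IPv6, assumed for illustration rather than taken from the NorthSec configs:

```python
link_mtu = 1500      # typical Ethernet MTU on the underlay
ipv6_header = 40     # outer IPv6 header
udp_header = 8       # UDP transport for the tunnel
vpn_overhead = 41    # crypto + framing; varies with cipher and config (assumed)

# Anything larger than this inside the tunnel must be fragmented,
# which is exactly what the broken clients failed to do correctly.
inner_mtu = link_mtu - ipv6_header - udp_header - vpn_overhead
print(inner_mtu)  # 1411
```

If a gateway on the path additionally hard-codes a lower MTU, as the NAT64 box did, the usable inner MTU shrinks further, which is why bumping it mattered.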
And we noticed, as you probably did too, that very high memory pressure would cause the VFS cache to be emptied, which meant every single file had to be fetched again from remote storage, causing massive load spikes.

All right, almost getting to the good stuff. As far as metrics, we were recording a whole bunch of stuff; we started doing that last year and did even more this year. These days we're tracking statistics on all the physical servers, network, DNS, HTTP, databases, askgod, and the challenges. So anyone trying to brute-force DNS, we usually know within 10 seconds, same thing with askgod or anything else. We're actually running two Prometheus databases now, one just for the CTF challenges and one for the internal infrastructure.

All right, so, the stats. We ended up reading 103 terabytes of data from remote storage and wrote 402 gigabytes, so it's extremely read-heavy, obviously. At peak, we were doing about 28,000 reads per second from remote storage; peak writes were just 158. The VFS caching issue I mentioned earlier was causing quite a bit of load: we recorded a load average of 755 at our peak, and 271 as far as what the teams would have noticed. We had four teams that didn't show up, which was kind of interesting, and a peak of 720 participants connected to the VPN at the same time. As far as public internet traffic, we did really not a lot this year compared to past editions: we downloaded 115 gigs and pushed 394 gigs to you folks. And for the flags: the team with the most duplicates was Zeras InfoSec Team, with 24 duplicate submissions, and the team with the most invalid flags was CSIC Saint-Jean, with 120 invalid submissions.

Most of our stack and tools are open source and can be found on GitHub: askgod, its Discord integration, its web UI, the badge code, and some of the tooling we have around all of this.
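Those storage figures make the read-heavy skew concrete:

```python
read_tb = 103      # read from remote storage over the event
written_gb = 402   # written over the same period

ratio = (read_tb * 1000) / written_gb
print(f"{ratio:.0f}x more data read than written")  # ~256x
```

That ratio is also why the VFS cache eviction hurt so much: nearly all of that read volume would normally have been served from local cache.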
And that's it for me. This was quite a bit longer than usual, but I hope you found it useful. I'll pass it back to whoever is next. Actually, Serge is going to talk a bit about the next part.

Hi, people from all around the world! We actually had people from about 25 countries register for the CTF this year, which is not bad. I hope you had a hard time; that was intended. A hard time, but a good time. And by the way, you never really get used to it, you just eventually get to enjoy it. CTFs are hard, and that's fine; you just have to start enjoying solving these kinds of challenges. Which leads me to a feeling that a lot of my fellow infosec workers, and tech workers in general, share: imposter syndrome. I just want to say that it's normal to feel overwhelmed by this. There is a lot of stuff in infosec and you cannot know all of it, so surround yourself with people who know better than you in many different fields. That's how you build teams and get to solve these kinds of challenges.

I want to give a big round of applause to all the team members who worked extremely hard; you have no idea the amount of hours that were put into it. You just saw the little presentation from Stéphane Graber. We have a team of volunteers who are world-class at what they do, and it's a real pleasure for me to be able to work with them. So if you can, right now, give a round of virtual applause in the Discord; these people worked so many hours to provide you with this kind of experience, and it would be an honor for them to know that you enjoyed it. Also, consider doing write-ups. Challenge designers really love to see how people solved their challenges, especially if it was in an unintended way. So thanks again, and I hope to see you in person next year.
If you've never come to NorthSec in person, it's an extremely interesting experience, because it's super intense on site. So thank you very much, and see you next year. Thanks.

Thanks a lot, Serge. So now it's time to announce the winners. But since I know you might be a bit tired, we have some honorable mentions to share with you first. First of all, team Summer Jedi solved a challenge in quite an interesting way: they used FFmpeg on the Hackers movie to extract all the frames, ran OCR on them, and were able to solve seven flags that way. Pretty good. Besides that, team Hackers and Pajamas had four people running Gobuster at the same time and filled their hard drive with 16 gigabytes of logs, which created some issues. Shout-out to Indy Chickser from Cyber Ages, who streamed their CTF on Twitch most of the weekend; that was a first for us. And also a shout-out to all the teams that tried their very best to find what the toothbrush brand in Hackers was.

So now, the winners. In third position, with 221 points: Skitties as a Service. In second place, with 223 points, so only a two-point difference: Hubert's Hacking. And, as you might have guessed, in first place with 243 points: Pwned, the winner of the NorthSec CTF 2021!

I'd like to send a heartfelt thank-you to all the volunteers who made this event possible; Serge already touched a bit on that. For the participants looking for a new kind of challenge, we're always looking for new challenge designers. Pwned, now that you have your first place, I'm calling you out; I expect you to reach out. Thanks all for playing, and hopefully we'll see you in person next year.