All right, my name is Manu, I work for Meta, and I work on the CI. I'm supposed to be doing a live demo, but I'm a bit of a chicken, so the demo is canned; essentially, the demo is printed in front of you. This is a demo of how to use the BPF CI. When you send a patch to the mailing list, the patch gets tested by the CI, but you can also bypass the mailing list and try your patches directly on the CI, so you can iterate and only send the patch to the list once it works. The link at the bottom, the tinyurl.com one, sends you to the section of the BPF documentation that explains how to do that.

The TL;DR: you clone the kernel-patches/bpf repo that we have on GitHub. You create your own fork, and from there you do your local clone. You get an updated version of the bpf-next_base branch and create your own local branch on top of it. You make a great change, like adding a new line to the README, which obviously needs to be tested. You write a great commit message ("LSFMM 2023 CI test run", and I got the date right), push that to your GitHub branch, and GitHub will typically give you a link to create a pull request which, when everything works, shows as able to merge, like this. The whole flow is sketched at the end of this section. As you click on "Create pull request", magic happens: you wait a bit, and there it is, picked up by the GitHub CI. From there, you wait some time for the build and the selftests to run, and eventually you get the results directly on your PR. And that's pretty much it, how to run the CI yourself. Any questions?

Good point: Daniel wrote automation that deletes your PR if you don't touch it for a week or so, so you don't have to care much about cleaning up. And this is not going to be merged anywhere; it just gets tested.

So first, we get all the builds. These take around eight minutes on the x86 and ARM architectures; s390x is a bit slower and takes about ten minutes. Once everything passes the build, it starts running the tests, and that can take anywhere from ten-ish minutes up to, at the moment, close to an hour for some tests on s390x.

What are the rate limits? We have a limited number of workers that run the tests, so essentially you're going to be limited by that, and your GitHub user is rate-limited in the first place as well. I guess this is for you to test, not for you to build a bot that hammers it. Typically, you also need to be approved before you can run the tests. GitHub provides some switches for this. One option is that you need to be approved explicitly. Another is that you need to have contributed first; I believe it somehow reads what is in the kernel tree and figures out whether you may have contributed to the repo, but that didn't seem to work all the time. And there is another option, which is to accept anybody whose GitHub account has existed for some amount of time, which gets rid of some bots.

Can you filter? Say you know you care about s390x and don't care about x86, so you don't waste resources. Essentially, what you want to do as part of your commit is comment out the architectures you don't need; just leave that line there. And actually, of the resources we have, s390x is the most constrained. So if you don't really want to test on everything yet and just want quick signal, limit the run to x86; you're going to wait less time, too. Then, once you're happy, you can put everything back.
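For reference, here is a hedged sketch of that trimming step. The exact workflow file name and matrix layout are assumptions; look around .github/workflows/ in your checkout for the real architecture list.

    # The job matrix that decides which architectures get built and tested
    # lives in the tree you push, so a change committed on your branch is
    # enough to limit the run. The file name here is an assumption.
    $EDITOR .github/workflows/test.yml   # comment out the s390x/arm64 entries
    git commit -am "ci: limit the matrix to x86_64 while iterating"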
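And to recap the whole flow from the demo, a minimal sketch. The repository and branch names are the ones given above; the working branch name and the commit message are just examples.

    # One-time setup: fork kernel-patches/bpf on GitHub, then clone the fork
    # (replace $USER with your GitHub username).
    git clone https://github.com/$USER/bpf.git
    cd bpf
    git remote add upstream https://github.com/kernel-patches/bpf.git

    # Get an updated version of the bpf-next base and branch off of it.
    git fetch upstream bpf-next_base
    git checkout -b lsfmm-ci-test upstream/bpf-next_base

    # Make a great change that obviously needs to be tested...
    echo >> README
    git commit -am "LSFMM 2023 CI test run"

    # Push to your fork; GitHub prints a link to open the pull request,
    # and the CI picks the run up once it has been approved.
    git push origin lsfmm-ci-test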
This is super useful, because at some point I was trying to run these GitHub Actions locally; there is something that lets you run them on your own machine, and it didn't work for me. Yeah, the "act" stuff. I tried it too. I wish it worked better, because it would help with iterating faster. But yeah, definitely try to avoid s390x when you send your first test, and once you're reasonably happy, put it back in to get the full results. It is the most constrained resource we have at the moment.

So, for that, we just have to go somewhere here... good point, we can get it from here. Once everything has run (I already showed the builds), you get all the tests. At the workflow level, you get a quick summary of what potentially failed, which you see here. But say you go into a specific test... ah, now it makes sense; that's why we weren't seeing the test failure. So here you see that the netcnt test (network counters) failed, and if you expand it, you get the actual output from the error. Back in the day, it took a lot more digging to pull that information out of all the logs; now you get direct access to which test failed.

There are some tricks I should document somewhere. I'm not sure how it works if you're not a member of the repo itself, but you can grab the artifacts that we use to build the VM and rerun the VM locally if you want, so you don't have to go through the whole CI iteration each time. But just getting used to this in the first place is a good start.

That's really cool. Are you using your own runners? Yes. It makes a big difference to run the runners on bare metal when you run QEMU, because we can use KVM. Historically, we used self-hosted runners that were still AWS VMs; we moved to bare-metal machines where we put a bunch of runners, and they have many more cores when we want to combine them. I'm trying to work with Ilya on the s390x side to improve things for them; we have some performance issues there. Hopefully, we get to shrink that number.

We've been struggling to run our CI on ARM, because GitHub doesn't have ARM machines, so it's tricky. From my empirical tests, it was faster to run QEMU for ARM on x86 than on ARM without KVM; if you do full virtualization, it was faster to run it on an x86 machine. On x86 with KVM, right? Well, no, you can't have KVM, because you're emulating a foreign architecture. My guess is that the emulation code is more optimized on x86 than it is on ARM; I didn't go deep into the why, but that's my guess. You have to be a little careful with that, because QEMU does not scale at all for full-system emulation past maybe four cores. I don't know how many we're using, but... Four, at most. Okay. I think it's not a big problem in our use case. Okay, I'm just saying that if we ever do want a larger cluster of ARM hosts, it might be better to run it the other way, or if we want to run more in parallel and whatnot; something like rcutorture may require more CPUs. Yeah, rcutorture, or even just test_progs with more parallelism. Well, maybe we wouldn't gain as much as we do at the moment, so let's say it's good enough like this. It's not a requirement at least, but good to keep in mind for later.
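Here is a hedged sketch of the "rerun the VM locally" idea from above. The run ID, artifact name, and image layout are all assumptions (the CI's test script shows where the images really come from), and downloading the artifacts may require access to the repo.

    # Fetch the build artifacts of a CI run with the GitHub CLI.
    gh run download 1234567890 --repo kernel-patches/bpf --name vmlinux-x86_64

    # Boot the CI-built kernel locally. On a bare-metal x86 host, -enable-kvm
    # is what makes this fast; drop it and QEMU falls back to emulation.
    # The kernel and rootfs file names are assumptions as well.
    qemu-system-x86_64 \
        -enable-kvm -smp 4 -m 4G -nographic \
        -kernel bzImage \
        -append "console=ttyS0 root=/dev/vda rw" \
        -drive file=rootfs.img,format=raw,if=virtio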
Manu, do you know if people can actually see the logs if they don't have access? Yes, we tested that last time. You need to be logged in to GitHub; if you're not, you just see the summary, essentially this kind of view, and you can't expand it. If you're logged in, you see the whole log. What I'm not sure about is the artifacts I mentioned earlier: I'm not sure you can download them if you're not a member of the repo. Also, because you need to be approved to execute the tests, this is the reason we wanted a few people here to try it, so we can approve you in bulk. Yeah, I opened up the restriction, so unless you're a brand-new GitHub user, I think you should be fine. You tell me.

The artifacts, I believe, are here, yes, on the summary page. The web UI was created for a rather generic use case and caters to many people, but here you've got the artifacts, which I can download; I didn't create a separate account to check whether you can access them without being a member. Essentially, what you've got here are the selftests and the kernel that have been compiled. I could document this, but from the test script you can see where the image gets downloaded from, and then you can recreate the setup locally.

Do those artifacts expire somehow? Yes, I think it's 90 days, set in the UI; I didn't check. Even the logs eventually expire. After some time, you still see the result in the left pane, but you don't see the details from that page anymore.

All right, tomorrow I will go into a bit more detail about what has gone through the CI over the last year. But yeah, now you can play with it. We wanted to make this a live demo, but it takes about ten minutes just to clone the repo, so it was easier to do it this way. All right. Perfect. Thank you so much for the demo. Thanks a lot.