This is the Fuego status and roadmap BoF, so I apologize, I'm going to go kind of quick here, and if you're not familiar with Fuego, a lot of this stuff might fly past too quickly to understand. I'm going to have the obligatory flames. But I'll give a micro-introduction to Fuego in case you're not familiar with it, then we'll jump into the BoF, and I want to leave plenty of time for discussion at the end.

So Fuego is Jenkins plus some abstraction scripts for host/target testing, plus a bunch of pre-packaged tests, all inside a container. This is the architecture diagram. You run most of this on a host; your device under test is connected to the host, usually via SSH, but we're adding some new transports.

The vision, the super high-level vision, is that we want to do for testing what open source has done for coding. If you look around the industry, you see that there are significant parts of the testing process at companies that are still entirely internal: the QA methodology, the tests themselves. That's not to say there aren't open source frameworks like Jenkins or Autotest or LAVA, and there are test programs like LTP, but significant pieces of the whole testing and QA process remain unshared. The intention of Fuego is to address some of this. We want to promote the sharing of test methods and test results the way that code is shared now: make tests easy to create, share, and discover, and make test results easy to share and evaluate. So that's the super high-level vision.

The goals: allow really quick and easy setup; support a wide variety of boards, transport types, and distributions; eventually send the data to a centralized repository; and make it possible to join a decentralized test network. People are building board farms with things like KernelCI, but there's a lot of overhead in building up an entire board farm. Basically, I want a board farm to be able to consist of a developer at their development seat and a single board that's right there, and have it be easy for that developer to join a decentralized network and share tests and results with other developers.

So, the status, really quickly: there are three main forks, which is bad. We have what I'm calling, and this is new terminology, the Sony fork, the Toshiba fork, and the AGL fork, and I want to talk about what features each one has and where they're at.

The Sony fork: I'm the maintainer of that. Most of the stuff I've been working on is in the next branch. I've got a command line tool that's working pretty well. A lot of the reason this is in the next branch is that it's prototype code; some of it is not really ready to be released to the public yet, but it's pretty close, and it shows proof of concept. I have a test package system; I've introduced a test package format in YAML, of course, why not? Well, I'll talk about "why not" later, and I'll show a sketch of what I mean in a second. There's the client side of a test server system, so I can issue test requests and send test run data back to the server, and I can do packaging of the tests, of target information, and of run information. There are some new transports that were added recently: a serial transport, so if all you have between your host and the box is a serial connection, you can run Fuego over that; and TTC, which is a Sony-internal tool. I don't expect a lot of people are interested in that, but if you happen to be running TTC, it's pretty cool.
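Just to make the test package idea concrete, here's roughly the kind of thing I have in mind for a YAML package manifest. The field names here are made up for illustration; they're not the final format from the next branch.

```yaml
# Hypothetical Fuego test package manifest -- these fields are
# illustrative, not the actual format.
name: Functional.hello_world
version: 1.0
description: Builds and runs a trivial hello-world binary on the target
license: GPL-2.0
files:
  - fuego_test.sh   # base script with the build/deploy/run phases
  - hello.c         # test program source
  - spec.json       # test variants (specs)
tags: [functional, smoke]
```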
Some of the work in progress is a new test dependency system, and I'll talk about that more later.

The Toshiba fork, which is being worked on by Daniel Sangorrin, has a refactored Jenkins integration. This has been a big issue: we have a static version of Jenkins that's baked in right now. His version uses minimal Jenkins plugins to make it easy to use the latest Jenkins version, and it actually does run on the latest Jenkins. It also refactors the directories somewhat; there were a whole lot of symlinks in there. It puts the Fuego core directory outside of the container and volume-mounts it in, which is much easier for development. For us developers that's really handy; for plain users it's probably not that big a deal. And then there was some work on outputting results to Excel files.

The AGL fork, as I'll call it, has been really focused on LAVA integration. They are also using the latest Jenkins, is my understanding, or a relatively recent one, not the super old one that my repository has. I thought I saw some stuff go by on the mailing list about test categories. Oh, okay. So there are four forks. Well, good, you should have as many forks as possible. And I thought you guys had also done some stuff with reporting features. Okay, also in the old one. Okay.

So, I'm going to go over the feature list, where we are with different things, what the current status is, and what we've got prototyped. These are some of the major feature areas. The Jenkins integration: the Toshiba fork simplifies the integration greatly, and I think we should just adopt it. We are going to lose some things, though. We're going to lose the dynamic parameters, and I don't know if that's a big deal or not. Is AGL using dynamic parameters? I mean, are you launching tests manually, or are they all triggered automatically? Okay, so you don't care about the dynamic parameters. Dynamic parameters is a Jenkins feature where you're allowed to specify some parameters before the job runs, when you launch it from the interface. What about your situation? Okay, so you just have to alter the job configuration. Okay. For me this is not a big deal.

It's definite that we want to use the latest Jenkins; I think AGL raised some security issues with older versions. We're using a simple text template in Daniel's scripts now, but there's been discussion on the mailing list about using Jenkins Job Builder. As long as we can hide that complexity from the end users, so they never have to see any of it, I don't really care which method we use. I looked at Jenkins Job Builder, and it's kind of complicated, so I don't want any of our end users to have to learn it themselves. But if we have something like a "fuego install target" command and it uses Jenkins Job Builder under the hood, that's fine. So I think we should probably look at that. And then I want to take the Fuego install tools that you've written and integrate them into ftc, which I'd like to be the central command tool. I talked on the list about renaming it, but that's not that important.
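For reference, on the Jenkins Job Builder option: JJB jobs are defined in YAML, so a hypothetical "fuego install target" could generate something like the sketch below under the hood. The job name and the shell step are invented for illustration.

```yaml
# Sketch of a Jenkins Job Builder definition that an install
# command might generate -- the job name and ftc invocation are
# illustrative, not what any fork emits today.
- job:
    name: myboard.default.Functional.hello_world
    project-type: freestyle
    builders:
      - shell: |
          # run the test via the Fuego command-line tool
          ftc run-test -b myboard -t Functional.hello_world
```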
I'm going to skip over this; this is the overview of the test framework landscape. You can look at it for the differences between the different systems. I don't care.

So let's go to containerization. The Docker container builds okay, except, of course, a guy showed up on the mailing list this week and said his container didn't build. So one of the things I want to do is have a pre-built Docker container available that someone can just download. It's a big download, like a gig, but you wouldn't have to build it yourself, and that would avoid the build issues. The Docker container is really useful; since we're doing actual test program building, I think it's critical to have a reproducible environment for that. But I did consider running Fuego outside of the Docker container. If you're not doing a build step, you don't need all that huge overhead, because you're just going to connect to the board with SSH or maybe over LAVA. I don't know if that's worth putting a whole lot of effort into, though. Is anyone finding the Docker thing too heavy or too cumbersome? Would it be nice? Okay. It's nice to have, but it's lower on my priority list because we've got a lot of other stuff to work on. It would make things really lightweight, because the script system itself is only a couple hundred kilobytes. But there are a lot of Python module dependencies, especially since I added the server support: YAML, JSON, and Python requests.

On the overlay generation, I did some simplification, and I think the way Daniel is using plans in his tree is actually different from how the Cogent guys intended, and it's what we should adopt: a plan is just a list of the tests you want to run, with whatever variant you want for each test. The specs I think we should leave alone. They're in JSON, which is overkill, I think, because most of them are just declaring some shell variables, but it's okay; it looks like we're going to end up with JSON in the system. I want to avoid, if possible, having too many languages someone has to learn to use the system. Right now we've got shell scripts, Python, JSON, and YAML, and that's too many. That's too much junk to throw at someone who's just learning a system.

One interesting effect of the overlay generator that you may not know about is that we can actually do inheritance on our base scripts. So you can define a board really simply by inheriting from another board; there's a sketch of the idea below. It's a lot like the LAVA device-type feature, where a device type declares most of the parameters and then you just declare a couple of custom ones. We actually have that capability, but I don't think anyone's using it, and I don't think it's documented very well. Anyway, that's just FYI. The script system seems okay. On overlays, I have this nagging feeling that we could accomplish what we're doing with overlays just with shell sourcing, so we may have too much complexity there. But it's not the kind of complexity the end user has to worry about; it just happens under the hood. Unless you're actually a Fuego developer, it's not that big a deal.
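Here's the board-inheritance sketch I mentioned. Board files are basically shell variable declarations; the `inherit` line is schematic, not the literal overlay syntax, so treat this as the shape of the idea rather than working config.

```sh
# Base board definition (sketch): myboard-base.board
TRANSPORT="ssh"
IPADDR="192.168.1.50"
BOARD_TESTDIR="/home/fuego"
ARCHITECTURE="arm"

# Derived board (sketch): myboard-debug.board
# Inherits everything above and overrides only what differs,
# much like a LAVA device type. "inherit" is schematic here,
# not the exact overlay keyword.
inherit "myboard-base"
IPADDR="192.168.1.51"
BOARD_TESTDIR="/tmp/fuego-debug"
```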
I have been working on simplifying the specs and plans. You now do not need to specify a default test plan, which was a ridiculous test plan anyway, because all it did was say to use the default spec. That I think is actually pretty good, and it's particularly useful for packaging.

On the transports: support for the serial port is about 80% there. You can run most of the tests now. There are a couple that have timeout issues, because it takes a lot longer to transfer large files over a serial port, and reboot has issues. But I think we're going to get a two-for-one here, because if we do the LAVA integration the way I think we should, which is to add an ov_transport_connect and an ov_transport_disconnect instead of using the hook that's in there now, the transport layer becomes a much more official layer, and we'll be able to put the commands in there. That's also what we need for the serial port to handle reboot. I'll sketch what I mean below. Is that clear? Any questions on that?

Let's see, then I added support for TTC. I still really want ADB; I think we're really close, and I don't think it would be very much work to add ADB support. The biggest issue I ran into when I tried to add it is that with ADB, as you reboot the target, the USB device nodes come and go, and that freaks Docker out. So you have to create a different... yeah, okay. Well, what about the actual USB device nodes, though? I'm not familiar with that. Right. Okay, you're saying put the ADB daemon inside the container. Okay, sorry, go ahead. So will that work, though? Will the USB nodes show up inside? Okay, I'll look into that; thank you for that suggestion. That's good information. This is why we have BoFs.

And then, in terms of transports, that's where you'd put the LAVA integration, right, in the transport layer? And yeah, because we don't have these connect and disconnect hooks yet, you put it in pre_test, right? Okay. So I think we put it in the transport, because there are a couple of places you want to call it; you possibly want to call it on reboot as well. I don't know, I'm still trying to process your presentation from yesterday. Yeah, reboot is tricky.

The test collection... okay, yeah, that's actually really cool. Is that what you used for the Docker target? Okay, that's super handy, because then, in order to play around with the system, you don't even need to have a board. Is yours inside the Docker container, or is it running on the host outside? Because theoretically, if you didn't mind all the disturbance that running Fuego causes, you could use Fuego on the host itself. So yeah, as long as there's a transport the command tool could use, it would be pretty lightweight; you wouldn't have all this Java and Docker running. Is there another question? No, as was discussed in earlier sessions, Fuego is pretty dumb about the boot loader layer. It assumes that something else gets the board up, and that the board successfully comes up to the point where you have a network or serial console. Not handling that was a conscious decision on my part, because I knew that LAVA v2 had all that junk in it. That may be a mistake, but we don't necessarily want a hard requirement on LAVA v2 for people who don't want to use it, and we had other stuff to worry about in the short term.
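Here's the connect/disconnect sketch I promised. The function names follow the existing ov_transport_* convention, but these two hooks don't exist yet, and the helper commands in the bodies are placeholders, not real tools.

```sh
# Sketch of the proposed transport hooks. ov_transport_connect /
# ov_transport_disconnect don't exist yet; serial-grab,
# serial-release, lava-reserve, and lava-release are placeholder
# helper names, not real commands.
function ov_transport_connect () {
    case "$TRANSPORT" in
        ssh)    true ;;  # ssh connects per command; nothing to hold open
        serial) serial-grab "$SERIAL_DEV" -b "$BAUD" ;;  # hold the port
        lava)   lava-reserve "$NODE_NAME" ;;  # ask LAVA for the board
    esac
}

function ov_transport_disconnect () {
    case "$TRANSPORT" in
        serial) serial-release "$SERIAL_DEV" ;;
        lava)   lava-release "$NODE_NAME" ;;
    esac
}

# Reboot could then be expressed in terms of the hooks, which is
# exactly what the serial transport needs:
#   ov_transport_disconnect; target_reboot; ov_transport_connect
```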
The test collection: I think this is one of the most important things, to actually have a whole bunch of tests that people can run. And it's shocking to me. How many years have we had Linux? Twenty-five years, and there's no organized collection of tests. Well, there are tests, but they're onesies scattered all over the place: cyclictest is over here, IOzone and Bonnie are over there. There's nothing organized. What's the new shizzle in file system testing? Ted Ts'o's thing; I can't remember the name. Anyway, we don't have many new tests in Fuego itself. Currently, of the 50 tests we have, I'd say about 20 are actually useful, and they're still general-purpose. I would like hundreds of tests, tests of all kinds of different things in the system. The reason I haven't dug in and started fleshing out the list of tests is that I want to get the infrastructure right first. Before I make 100 test packages, I want to get the package format right, or at least close enough that it's not a huge burden if we change something. But it's really important not to delay making tests too long; the whole point of the tool is to test stuff, so we need actual tests. I see that as a phase we need to do. This is the 50 we've got now, roughly divided into functional, benchmark, and stress tests.

Results parsing and post-processing: we have kind of three things, not counting the Jenkins stuff. We have log_compare. We have the parser.py thing, which is only used for benchmarks right now, to extract from the log file the values, the metrics, that you're going to use for your benchmark comparisons; there are a couple of extra little files that control how you do the comparison and whether a metric needs to be higher or lower than a threshold. And then we have the flot charts, which is the graphing. We also have this thing in there, I don't know if you've noticed it, the Functional.LTP pos/neg counts. That's for when you have these huge tests that run a thousand things; it allows you to say, of those 1200 things LTP runs, if 1150 of them succeed, I'll call that success. It's kind of a cop-out.

There's also a feature in there that's not utilized, which is a diff against a reference log. Cogent never provided the tools, or I don't know if the tools are in there, but they had some awk scripts that would let you capture the output and use it as a reference log, and you could diff against it. Basically, if you saw any differences, that was a failure. But it needs a slightly smarter diff, because there are things like timestamps and dates that you have to filter out before you do the diff. That would be super handy for finding what actually failed. Right now we just use these counts, which is pretty coarse. It's much nicer to be able to say: this is what failed, and here's the delta in the log, the difference in the message, for what failed. We need to do more here. We can talk about a unified output format.
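Just to illustrate what that benchmark parsing step does, here's a minimal sketch in the spirit of parser.py. The log line, the regex, the metric name, and the threshold logic are all invented for illustration; the real parser interface in the tree is different.

```python
#!/usr/bin/env python
# Minimal sketch of benchmark metric extraction, in the spirit of
# parser.py. The log format, regex, metric name, and threshold are
# illustrative, not the actual Fuego parser interface.
import re
import sys

# e.g. a log line like: "Sequential write: 42.7 MB/s"
PATTERN = re.compile(r"Sequential write:\s+([\d.]+)\s+MB/s")

def parse_metrics(log_path):
    metrics = {}
    with open(log_path) as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                metrics["seq_write_mb_s"] = float(m.group(1))
    return metrics

if __name__ == "__main__":
    results = parse_metrics(sys.argv[1])
    # a criteria file would normally say whether higher or lower is
    # better; a hardcoded floor stands in for that here
    ok = results.get("seq_write_mb_s", 0) >= 40.0
    print(results)
    sys.exit(0 if ok else 1)
```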
The command line: this is what the command line tool supports right now, and it supports doing these operations on the local machine as well as on the server. I've tried to use a verb-object format for everything, so it's not too hard to figure out; I'll show a quick example of the flavor in a second. It seems like a whole lot of commands, but once you get into it: you can list the boards on your machine, query a board to see what its settings are, and get and set values for it. You can list the requests out on the server for the boards, make your own request, and run a request off the server. And the same with tests: you can package a test. So if you've developed a test on your machine, something really simple with some shell script stuff, you can very easily write the YAML file, turn it into a package, put it up on the server, and now anyone in the world can grab it and run it on their machine. And the idea is that you can put a request up on the server and say, I'd like this to run on all the BeagleBones in the world, and the people who have said they're willing to run stuff off the server from, well, not unknown parties, but certified parties, would run it and give you results back. There are a couple more command line operations I want to do: I want to do "install target", which is basically your stuff, and I want to extend "install test" to support your tools. Then I want to do some query things. Let's see, when does this session go until? Because I'm going to run out of time just talking up here. One o'clock? Okay, I've got another 23 minutes.

So, the Fuego server. The idea here is that it could be a distributed test coordinator. The code here is really slapped together; in fact, I did most of it just in the last week. But it does actually do this stuff: you can browse the tests that are available on the server, see their description strings, and decide whether you want to download one. You can download a test and install it on your target. You can make a request that's stored on the server and run that request on your local nodes. I even did some stuff with wildcarding, so there's limited query functionality. The vision here is a test store, just like an app store: all these tests that people have developed, and you can go look at the descriptions. I don't know if we're going to do a rating system; five stars on the block test. The point is to make it easy for people to share the tests themselves. There are an awful lot of tests where a developer will just sit at his desk. One that I saw go by on the kernel mailing list: someone just runs dd on his block device every night, against the linux-next tree. And he happened to notice one time when it regressed. He said, I don't know what happened, but between yesterday and today the block layer developed some issue on my device, and he was able to catch it fast.
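Pulling the command-line and server pieces together, here's a sketch of what a session could look like. The verb-object style is real, but treat the exact command names and options as approximate rather than a transcript of the tool.

```sh
# Hypothetical ftc session -- command names and options are
# approximate, for flavor only.
ftc list-boards                        # boards this host knows about
ftc query-board -b bbb                 # dump a board's settings
ftc set-var -b bbb IPADDR=10.0.0.5     # set a value in the board file

ftc package-test -t Functional.mytest  # roll a local test into a package
ftc put-test -t Functional.mytest      # upload it to the server

ftc list-requests                      # pending requests on the server
ftc put-request -b 'beaglebone*' -t Functional.mytest   # ask the world
ftc run-request 1234                   # run a server request locally
```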
Let's see, let's go. Okay, that was super quick, and I'm sorry I'm going so fast. So let's talk about the projects that I think are currently in flight. We have the Fuego command line tools, the Jenkins integration refactoring, the directory and symlink cleanup, the test packaging system, and the test dependency system, although that one's not in code anywhere public; it's one of the projects I'm working on on the side. So you can't see it yet, although I could write up the spec for how I'm planning to develop it. Some of the code is written, but again, it's not suitable to put in a public repository yet. And then the LAVA integration. Just getting all of those to production grade seems like enough work.

There are some other things people have requested that are actually driving these at a higher level. Siemens said last year that they really wanted to use Fuego with something besides Jenkins as a front end, and that's been driving a lot of the emphasis on the command line tool. People wanted to make development and release management easier, and that's been driving the Jenkins integration refactoring. A couple of bugs have come up that we've been able to fix. And, oh, executing individual test phases: I think that could help a lot with the timeout issue on LAVA.

And yeah, I have an idea there. For some tests, particularly if you can build a static binary, you could keep the build results around, and then, well, you would need to know the platform, like ARM, but you could build a static ARM binary and have it available so there's no build step; nobody ever needs to build that tool. That would simplify things a lot. It makes the architecture a bit more complicated, but it makes the end-user experience more robust, because any step you can avoid is a step that can't fail. Basically, I want to treat it like a binary cache; there's a sketch of the idea below. I think it would only work for things you can build statically, because one of the big reasons to rebuild is to match up where the dynamic libraries are on all these different targets. Let me jot that down, though. Well, that's true, that's true; it's a test of the SDK. Sorry, just a second. Right, as long as the source is available. Okay.

The other thing, oh, okay, this is the big one. Kevin Hilman, who couldn't be here today, said: create and submit LAVA v2 jobs and post-process the results. Actually, I think we're really close to this. And I don't know if there's anything in LAVA we'd need to change, but it would be super cool if every LAVA lab, every KernelCI lab, was instantly Fuego-capable. Then we can keep encouraging them to build out more labs, and it gives us a place to run our tests as well. So I think that's really cool; finishing that up and making it work is high on my list.

The other thing, with test scheduling, and this had to do with yours: right now Fuego itself doesn't do board reservation. Jenkins does, but when I run something from the command line, I just have to know that the board's not busy, and that's kind of lame. That's one of those features that needs to be finished up in ftc.
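On the binary cache idea, here's a rough sketch of what the build phase could do. The cache directory, layout, and variable names are invented for illustration; this isn't code from any of the forks.

```sh
# Sketch of a prebuilt-binary cache check in a test's build phase.
# CACHE_DIR and the layout are invented; test_build is meant in the
# spirit of the per-test build function, not its exact contract.
CACHE_DIR=/fuego-cache/prebuilt

function test_build {
    local cached="$CACHE_DIR/$TESTNAME/$ARCHITECTURE/$TESTNAME-static"
    if [ -f "$cached" ]; then
        # a static binary for this architecture already exists;
        # skip the build entirely
        cp "$cached" . && return 0
    fi
    # otherwise build a static binary, then populate the cache
    make CFLAGS="-static" LDFLAGS="-static" || return 1
    mkdir -p "$(dirname "$cached")"
    cp "$TESTNAME-static" "$cached"
}
```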
Okay, so now I'm on to this massive list I got from Daniel yesterday. Some of these I've already gone through, so we may not need to discuss them, but we'll go through the list real quick and see what we have left. So: clean up unneeded stuff, overrides. "The pre_test should be able to automatically select overrides"; I'm not sure I understand that one. I may have mangled the wording when I put it in the slide. Oh, oh, okay. Oh, I'm all over this one. I agree, I totally agree.

As part of the test dependency thing, and maybe I'll just go to that slide real quick: one of the reasons for the test dependency system is to find out which tests apply to your machine. I'm already trying to address the scalability of the system. If we have hundreds or thousands of tests, some of them are not going to have anything to do with your board: if you're testing wireless and you don't have wireless, or you're testing ethernet and you don't have ethernet, or there's specific hardware or specific buses required, say a CAN bus test. So one of the ideas is that we specify the dependencies in the base script in a declarative form. You have these "need" variables that say what the test needs, and then the board provides some items: I have ethtool, I have wireless, I have a CAN bus, or whatever.

But I think it's really important not to require humans to populate this board data, because that would make it just too hard. You should never ask during installation a question that the user doesn't know the answer to. And that's the problem with a lot of this stuff: do I have a GPIO 7 that controls an LED? I don't know. Some of this stuff you can't probe for, but to the degree that we can, I'd like to make probe tests that populate the board file automatically. I actually have some stuff in the tool for getting and setting variables in the board file during a test run. So a probe test could say: oh yeah, this board uses this kconfig, it has ethernet, it's got the proc filesystem, it's got sysfs here, it's got USB, and put those in your board file automatically so the end user doesn't have to. Then a test can know ahead of time whether it applies, and the end user can also use that data to filter which tests are appropriate. That's the idea; a sketch of the syntax follows. But we should probably discuss this more on the mailing list to see if it matches what people want to do.
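Here's that dependency sketch. The variable names like NEED_PROGRAM and BOARD_PROVIDES are hypothetical, not a final syntax; the check function just shows where the comparison could happen.

```sh
# Sketch of the declarative dependency idea -- all names here are
# hypothetical, not the final syntax.

# In the test's base script: declare what the test needs.
NEED_PROGRAM="ethtool"
NEED_BUS="can"

# In the board file, populated by probe tests rather than humans:
BOARD_PROVIDES="ethernet procfs sysfs usb ethtool"

# A pre-check phase could compare the two and skip the test:
function check_needs {
    for item in $NEED_PROGRAM $NEED_BUS; do
        if ! echo "$BOARD_PROVIDES" | grep -qw "$item"; then
            echo "SKIP: board lacks required item: $item"
            return 1
        fi
    done
}
```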
Okay, how many screens of Daniel's list are there? There's actually... I stopped at five, you notice. Okay, keep it simple, I agree. I think we want to keep it super simple, and any time we can reduce the complexity or the number of steps, that's really important. One of the things I've wanted to do is make an automatic board detector, a board wizard. So it could say: I see you have a serial port, I just ran some commands over it, and it looks like it's running Debian; should I set up a Debian board definition for you? Same thing with SSH; SSH only runs on a couple of things. And there are some things you could probe for on the host. I don't know if people would be game for that or not, but it would be an opt-in thing, where you'd say: yeah, go ahead and probe my system, see if you can find my target. And if you already have some other system installed, such as TTC or LAVA, hopefully Fuego could just pull the board information out of those directly, without any setup on the Fuego side.

"Provide deploy and boot as in LAVA": I agree, although for me this is a lower priority; I don't know where it is on your priority list. Okay. Well, this would be Fuego putting in support for this stuff. But if you're already running on top of LAVA v2, then we don't really need to, right? We don't need to add that complexity to LAVA or to Fuego. So I think it would be good eventually, but it's low on my priority list.

The transports: I think ADB is really critical, just to finish out the common transports. There may be some other target agents, but I think that set will do. I need to finish the serial support anyway. And "updating and deploying the OS" is in the same category as the first one, right? Okay.

Common output format: we had some great discussions on this around November and December, but we didn't come to a conclusion. I think it would be great. I actually think we should look at what Avocado is doing, because they seem to have done this pretty well. I think what they do, I'm not sure, is convert to their own common format, and then you can specify which format to emit as an output parameter. They also support HTML, so you can dump out a page that just plops into a browser. The way they've done it, I think, is good. I don't know if we want to use Avocado itself, but we should at least look at how they're doing it.

Let's see, parallel testing on the same device types: I think your idea was that you could do this with Jenkins labels. Okay. Multi-node tests: yes, we should support that, but it's lower priority at the moment. I know you may have some stuff for CIP that needs an external node, right? Right, well, that's what we're doing with the existing Ethernet tests: the host is one endpoint, and that's pretty easy to do with our current system. Supporting non-host nodes, other nodes that participate in the test, requires some thinking about the APIs we want to use for that.

Ability to run tests already on the target: we're almost there. I don't know if you saw my is_on_target stuff, but there's something that will probe, look on the target, and find the path. The next step is caching: you don't have to cache it, but sometimes the operation to find a program takes a while if it's not in one of the normal places, so I wanted to add some support for caching that; there's a sketch of the idea below. Then, automatically prepare a TFTP or NBD root before testing: okay, I think that's what the Mitsubishi guys did; they had a presentation at ELC Europe where they talked about how they do their kernel testing. Support for a matrix of board tests is something Avocado does pretty well. A smarter command-line Fuego tool: I think we're pretty close with ftc. Creating an interface to download and install new tests: ftc actually does that, at a very gross level, except I used tarballs. Well, I don't mind making that an option, but I do want people to have a discrete package file they can just hand over and say: here's this thing, you don't need to talk to me anymore, you don't need to connect to the internet to get it. We can talk about that one offline. How are we doing on time?
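Here's the is_on_target caching sketch I mentioned: probe the target for a program, then stash the path in the board file so the next run skips the expensive search. The function and the commands it calls are illustrative, not the actual in-tree interfaces.

```sh
# Sketch of probe-and-cache for a program's path on the target.
# find_on_target, the cmd helper, and the ftc calls are all
# illustrative here, not the exact in-tree interfaces.
function find_on_target {
    local prog="$1"
    # check for a location cached in the board file by a prior run
    local cached=$(ftc query-board -b "$NODE_NAME" "PATH_$prog" 2>/dev/null)
    if [ -n "$cached" ]; then
        echo "$cached"
        return 0
    fi
    # fall back to probing the target over the transport (slow path)
    local found=$(cmd "which $prog || find / -name $prog -type f 2>/dev/null | head -1")
    if [ -n "$found" ]; then
        # cache it so the next run is fast
        ftc set-var -b "$NODE_NAME" "PATH_$prog=$found"
        echo "$found"
    fi
}
```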
Next on the list: bisection, and KernelCI integration. Okay. Well, yeah, I agree that bisection is valuable. It's also really hard. What? Yeah, I kind of wonder. Well, one thing is, you could use ftc as part of a bisect, as long as the command line tool returns appropriate error messages and error codes. KernelCI integration: is that talking about using our stuff with the KernelCI front end? Right. Well, yeah, at the place where you were integrating, creating jobs for the builds. Right.

Well, I looked at the KernelCI API, and it's pretty simple. Basically, it's four different JSON transfers for the different types of objects; I'll show the shape of it below. The only issue, I think, is that there's currently a bit of an impedance mismatch between our run information and theirs. Theirs is very specific to booting, and there are some funky, weird fields in there I need to ask Kevin about. But I think it would be good. If we're going to run tests on their system, they'll probably have to adapt to running a non-build boot test. Yeah, hopefully we could. Yeah, it would be the latter.

One thing that's important to keep in mind, and I'm going totally off on a really random thought here: the reason KernelCI is successful is that they're catching bugs right when they have the developer's attention, which is at the time of patch submission. The problem with a lot of these automated systems is that they run tests in the background and no developer actually looks at the results, or the results don't get back to the developer fast enough for them to take action while they're interested. That's actually reflected in the design of the request system, in how the architecture moves stuff through the system: you want to get results back when a developer is interested in them, because that's when they're most likely to actually do something with them and change the code based on the test results.
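To give the shape of that KernelCI API interaction, here's a rough sketch of pushing one result with Python requests. The endpoint URL, token header, and field names are from memory and partly invented, so check the actual KernelCI API docs rather than trusting this.

```python
# Rough sketch of pushing a result to a KernelCI-style backend with
# python-requests. Endpoint, token, and fields are illustrative --
# consult the real KernelCI API documentation before relying on this.
import json
import requests

API_URL = "https://api.kernelci.example/test"   # placeholder URL
TOKEN = "secret-lab-token"                      # per-lab auth token

result = {
    "name": "Functional.hello_world",
    "board": "beaglebone-black",
    "lab_name": "lab-fuego-example",
    "status": "PASS",
}

resp = requests.post(
    API_URL,
    headers={"Authorization": TOKEN, "Content-Type": "application/json"},
    data=json.dumps(result),
)
resp.raise_for_status()
```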
Let's see. Yes, quick to run. Never going to run what? Oh, LTP? Yeah, LTP is a disaster. Well, on that note, I really think we should have specs for every test: a quick one, and, I don't know if you want to call the other one "long", but there should definitely be a quick variation of every test that runs in under a minute. Right, right. And even if you're using LTP, I'd like to see it broken up so that you're not just running one huge blob. If you want to run the huge blob, you should be able to, with a --all or something, but you should also be able to ask: what are the semaphores doing, what is this set of syscalls doing, what are these tools doing?

Okay, so I have one minute left, so I'm sorry, but I'm going to scoot through these. Oh, we're almost done. I already covered the vision. So, the roadmap. I think we have some process issues, and I want to talk about the technology level and the priorities, though we'll probably have to take that offline. In terms of process efforts: we've got to merge the forks; we've got to unfork the forks. My understanding was that the main AGL requirement was to get off of stinky old Jenkins, so if Daniel and I can get our trees together on the latest Jenkins, that'll go a long way toward making you happy. And I don't think we're that far apart. You did that big refactoring, which I have to absorb, but I think the first priority should be for me and Daniel to get our stuff together, and then come back to you guys and see if we can get that LAVA stuff in. Then I think we'll be in really good shape.

I also think we need more real-time communication. I'm proposing a monthly conference call, and I'm actually proposing that we piggyback on your guys' call. Is that okay? I'm not going to come every week, because you have AGL stuff to talk about. Yeah, yours is bi-weekly, but we'd come every other time. Is that okay? The call is at 5 a.m. in the U.S., which is a little challenging, but it's not that bad. Seven? Yeah, well, there you go. No, no, I'm fine; I can make 5 a.m. work. I'm not going to drag down the whole thing because of my sleep schedule. Oh, 10 p.m. for you? Okay. It's really tough to find something that works all around the globe, and that's a reasonable time.

And then I think we should do a Fuego mini-conference. We did that thing in Japan, and I thought it was actually pretty good. Once we get the stuff merged together, it would be good to call it a v2 or something like that, roll out the features, and explain them to people. I don't know if we want to do something at LinuxCon Japan, or, that's not the name anymore, Open Source Summit Japan, whatever it's called now. May 31st. Yeah, May 31st. I'll be there. But do we want some kind of official Fuego conference, or should we take that one offline? Okay.

I think LAVA integration is super high priority. I think that will get us a lot of interest. If we could actually be running Fuego tests in KernelCI board farms, I think they'd be happy, we'd be happy, everybody would be happy. So after we get through with the merging, I think that's our highest priority.

Okay, here are some repositories that I know about. I'm sorry, I kind of trailed off because I ran out of time making the slides, but your repository, I think, was in your slides, and I don't remember if you had posted your repository location. Anyway, see Jan-Simon's slides for the other repository locations. Fuego: it's hot. Grab some candy on the way out. I'll feel bad if nobody eats it. You don't have to, though; I know it's hot. Anyway, that's it.