So the goal of this presentation — it's 50 minutes, so it's a bit long — is to not just have plain slides, but to also have a discussion. So there are slides; the plan would be about 30 minutes of slides and then 20 minutes of discussion, but we can have more discussion if you don't want to look at the slides, that's fine. So, maybe, who doesn't know anything about RISC-V? Do we have anyone? Okay, so let's look very quickly at what RISC-V is. I'm not going to go into details and very complicated stuff, like how exactly the boot flow works — let's just get an overview of what RISC-V is. So RISC-V is royalty-free, completely free. It's a modular, very extensible CPU instruction set. It started at the University of California, Berkeley, and we now have the RISC-V Foundation, which is fully managing the specification. So it has moved into a foundation with multiple members — different companies, different universities — collaborating on the specifications, releasing them, ratifying them, having different working groups to develop the specifications. It's not a single university with some contributors anymore; it has a bigger framework for how it's being managed. So again, the spec is fully open — it's under a CC license, so anyone can take the spec. There are multiple specifications — the privileged spec, the user-mode spec, the debug specification — which describe the different parts. RISC-V targets everything from very, very small embedded systems up to very large systems, like supercomputers or large accelerators. The majority of RISC-V implementations today are going to be embedded, and there are some companies working on Linux-capable systems, some targeting things like AI acceleration.
For example, SiFive has a Linux-capable chip, and they also sell IP blocks for that; and there's Esperanto, a company working on a big.LITTLE, high-core-count AI accelerator based on RISC-V. So it's a very wide range of applications. So it's not a processor itself — I've heard this question, people sometimes think that it's a CPU, a very specific one. No, it's not. There have also been recent discussions on the mailing list where some people think that if you mention RISC-V, then you run Linux. That's not the case — only a fraction of the chips are Linux-capable. Again, because it's very modular, you don't need to implement the specific extensions needed to get Linux running, especially if you're targeting wearable devices or something that doesn't even need to run Linux. So while the spec is fully open — anyone can take it, anyone can write their own cores, and multiple people have written their own cores — it doesn't mean that the cores have to be free. For example, SiFive has the Rocket core, that's what they used; it's fully open source. There's another one, the BOOM core, which is an out-of-order execution core, also based on Rocket and open source. There's the PULP project in Europe, a collaboration between Switzerland and Italy; the PULP organization has multiple cores, and those are also open source. But again, it doesn't have to be that way. For example, Esperanto, the company designing probably the most powerful RISC-V chip, with many cores, targeting AI — well, they're basing some of the work on BOOM, but it's probably not going to be open sourced; it's a commercial product. So, yes? "Can I assume the RISC-V instructions are distinct from the OpenPOWER instruction set? Did you compare the two?" Probably not in a very good way. But they're both open instruction sets. I haven't followed what was happening with the OpenPOWER Foundation.
I think you can now contribute, and the POWER10 chip is going to have some contributions through OpenPOWER — I guess that's the direction they're taking. RISC-V defines the spec in different extensions — I'm going to show you in a few slides — but it also allows you to do your own extensions, so your own proprietary extensions if you need them. So it's fully modular. The base instruction set allows 32-bit, 64-bit, and 128-bit. Of the three, only 32-bit and 64-bit are frozen; 128-bit is not fully frozen. And at least in Fedora/RISC-V, what we are focusing on is pure 64-bit. The 32-bit ABI is not frozen — there was just the glibc 2.30 release, and that didn't make the final RISC-V 32-bit changes, so that's another 6 to 12 months probably before it's going to be frozen. The way it's done is that all of these — 32, 64, 128 — are independent. That means you cannot run 32-bit apps on 64-bit RISC-V. You probably could build a chip that has two different modes, but again, there's no compatibility mode, there's no multilib. There's just pure 64-bit or pure 32-bit or pure 128-bit or whatever. So that's a nice thing for people who don't like to have multilib. The way the licensing, or the compliance, is done in RISC-V: you have to become a member, and there are different levels of membership, which give you different rights. Once you become a member, you can actually use the RISC-V trademarks and call it a RISC-V-compliant CPU. This is how it's being handled — because anyone can take the spec, since it's open source, and make some kind of changes; it can be RISC-V-based, but it might not be a RISC-V-compliant chip. So that is being managed through the RISC-V Foundation by becoming a member, and then you have the right to call it a RISC-V-compliant chip if you make one. And just a few days ago — I think yesterday — it was made public that Red Hat is finally joining the RISC-V Foundation, which is very nice. So again, it's very modular.
So just a look at what it is. We have a base instruction set, RV32I — the I stands for integer. Then you have M for multiplication and division, A for atomics, F for single-precision floats, D for double-precision floats, Q for quad-precision floats, C for compressed. So there are different modules, extensions. And basically, the way software is developed, we have a predefined target, which is usually RV64GC, where GC is defined by the Unix platform spec. So Linux, FreeBSD, whatever — everyone's targeting that RV64GC target. And if you want to have a Linux-capable chip, your chip needs to support all of that, including compressed instructions. None of the distros, including Fedora of course, support chips that don't have compressed instructions available. As far as I know, there's only one chip taped out, from the Indian Shakti team, which doesn't have compressed instructions; their next one is supposed to have them. So yeah, it's a bit cryptic. Again, it's basically known as 64-bit RISC-V, and that's it — but that same name, RV64GC, expands to a very long name, and that long name can expand to an even longer one, which includes minor and major revisions in the ISA string. And if you have a very quick look at what you get here: you have 32 registers, and the first register is hardwired to zero. Most instructions are going to be 32 bits in length, and if you have compressed instructions, those are going to be 16-bit — so some of the very popular instructions can be compressed into 16 bits. Because you have multiple extensions, you can build a chip with different extensions, different pieces; it also means that you have different ABIs — you have a lot of ABIs. The one we're using in Fedora, which is the default in GCC, is called lp64d: it's 64-bit, and it has everything up to the double floats. And yes, you have a PC, which is at the bottom — there's a separate register for that.
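As a small illustration of the naming just described, here is a sketch of how the short name expands — the G shorthand and the toolchain flags are from the ISA spec and GCC conventions; the `sed` one-liner is just for demonstration:

```shell
# Sketch: expanding the short ISA name. "G" is shorthand for the
# I, M, A, F, D extensions, so RV64GC spells out to rv64imafdc.
echo "rv64gc" | sed 's/g/imafd/'    # prints: rv64imafdc

# These are the same strings the toolchain takes, together with the ABI:
#   gcc -march=rv64gc -mabi=lp64d hello.c
```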
So building Fedora is complicated, because you need Fedora to build Fedora — a typical problem. The way you solve it: you attempt to build a very minimal rootfs file system, you want to get rpmbuild working, then start building things from that, and after many, many iterations it starts looking like Fedora. So it looks something like this: you build your toolchain — binutils, GCC — you take a lot of the common projects, like Bash, sed, and stuff like that, and you attempt to build a very minimal but bootable-under-QEMU file system, and you want to get RPM working. So you want to get the basics. It's not going to look correct, it might not work correctly, it's probably going to be missing a lot of stuff, so you're going to start adding stuff to your rootfs. And at the same time, you're going to start taking SRPMs from Koji and using rpmbuild to attempt to rebuild them — so at some point you want to rebuild that small rootfs from the RPMs. That involves a lot of hacking, going back, doing all the worst things that you're not allowed to do in Fedora, but you have to do that to get to the point where it's going to look like Fedora. And at some point you get to where you can actually import those packages into Koji and start building in a way more similar to how you do it in Fedora. The first time, it started almost exactly three years ago, by Richard from Red Hat's virtualization group, and just a few days after, I joined and Stefan joined; over a few months we built like 5,000 packages, and we had a somewhat Fedora-looking system. Richard has a lot of blog posts, so if you want to see exactly what was happening, you can look at the links below. Then the project stopped — a lot of stuff was not final, and there was an ABI breakage, which meant that whatever we had built was not something we could use anymore.
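The stages just described can be sketched roughly like this — the paths, versions, and configure flags are illustrative, not the exact ones used for the Fedora/RISC-V bootstrap:

```shell
# Very rough sketch of the bootstrap stages (illustrative commands only).

# Stage 1: cross toolchain on an x86_64 host
../binutils/configure --target=riscv64-linux-gnu && make && make install
../gcc/configure --target=riscv64-linux-gnu --enable-languages=c,c++ && make && make install

# Stage 2: cross-build a minimal rootfs that boots under QEMU and can
# run rpmbuild (glibc, bash, sed, coreutils, rpm, ...)

# Stage 3: inside the QEMU guest, rebuild real Fedora SRPMs natively
rpmbuild --rebuild bash-*.src.rpm

# Stage 4: iterate until the rootfs can be rebuilt entirely from the
# RPMs you produced, then import everything into Koji
```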
So we had to stop and wait until the final glibc patches were merged, and then restart the work. That took some time; it was finalized at the end of 2017. Before those changes landed, Richard had already started testing them, and after they landed — very close to FOSDEM — we started fully rebuilding, bootstrapping the Fedora distro for the final time. Around mid-April 2018, I got Koji running, we imported our packages, and we started actually pulling the sources from the official Koji and building from that. Of course, if you look at whatever we had before Koji, it was a mix of Fedora 25 through 28 — a very, very complicated beast. And up until now we're not using koji-shadow, so packages are not built in the same order as in official Fedora, and that comes with some problems. So, building a distro just for fun — maybe you can do it. I don't know if bootstrapping could be called fun, but I think the fun comes when you start seeing it being used in real life: people use it, people are doing something with it, people are showing something with it. So let's look at a few pictures, I guess. The first time we booted Fedora on real hardware — that was at the SiFive offices in San Mateo, California. There's a SiFive board connected to a Xilinx FPGA over FMC connectors, and then a cable goes to a box which has extra PCI Express devices. You have a keyboard and a mouse and a GPU attached, so you get a nice display, not a serial console on your laptop. That's the first time we actually booted Fedora 28; they're doing an upgrade on the system, and they're actually running off the SSD. So that's the first time Fedora landed on real physical hardware. Then we got some boards — SiFive seeded various development boards to different projects, distros, major programming languages, and so on.
So Richard has two boards — one is the official Fedora board, another one is his private board — and DJ from glibc has another one; that's the picture on the right. Again, it's a similar setup. It's probably the most expensive way to attach an SSD — probably $6,000 to attach an SSD — but at least you don't need to deal with NFS or NBD or whatever networked file storage. When I was at SiFive and we first booted Fedora, the same night I started working on the GNOME desktop. It took a while, but the same year we got it actually running. It was surprising, because it worked out of the box. We spent several days trying to figure out one major issue — we didn't have a mouse and keyboard working — and it was a very stupid thing. Nothing, just us being dumb, I guess. But other than that, it worked: it boots into initial setup, you set it up, it asks the same things as after a normal install, and it works. Western Digital then started using it in demos — there's a NAS demo — so it's nice to see it at work. This one doesn't use the Xilinx FPGA; there's an expansion board, and what that expansion board gives you is the ability to have SSDs — M.2 SSDs — and PCI Express. Another thing: you can actually boot Fedora 29 in X11 mode or in CLI mode in your browser. Bellard has a project, JSLinux, and he has multiple images available on the internet, so if you have Chrome or Firefox, you can just go to the web, boot it, and play with it. You don't need to install anything on your system — but don't expect very high performance. It's basically based on TinyEMU, which used to be called RISCVEMU. We used to support that, and it was the first emulator to actually support graphics output; at the time QEMU couldn't do any display output. This screenshot was done yesterday. The way I encourage at least new users to run Fedora/RISC-V is to use libvirt. If you go to QEMU directly, you need to pass a lot of things. Everything in Fedora 30, by default, should work.
As far as I know — I tested it — everything is already in, so it should work out of the box, including display output. If you want to do some more testing, you can use the virt-maint SIG Copr repository to pull in the latest stuff — latest libvirt, latest virt-manager, latest QEMU builds — to play with it. So the support is everywhere. I tested at least two display devices, VGA and the virtio display; both work. We also somewhat recently switched from virtio-mmio to virtio-pci, so that's nice — all the devices are now connected through PCI Express. If you run libvirt, you get proper networking, you don't need to set up any bridges or anything, and all the infrastructure that we're running now is fully libvirt-based. There was also recently a 10-minute video of how to build a RISC-V PC, and that also used the Fedora GNOME desktop. So basically, as you can see, people are starting to use it. It's not just a small CLI thing that you run under QEMU — you can actually have large-scale libvirt/QEMU-based setups, you can boot into the GNOME desktop and you can use it. You can actually also play Quake 2 — if you have hardware, you can play Quake 2. I had a match; it's decent performance, no problems. So, the current state: we have a Koji build farm, and that's a very nice thing to have — we don't have a script which builds from A to B to Z; it attempts to build all the packages. The current build farm has three physical boards — a couple of them are using NBD to load the root FS, one is using SD. We have one x86_64 node, which is the main Koji server; it has everything — the main Koji storage, all the volumes, the database, everything. We have another virtual node, which is backup storage; it's used for backing up the whole configuration and the Koji storage in case something happens, and we already had to use it once to recover the database — backups are very important.
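To give an idea of the libvirt route recommended above, a guest can be created with virt-install; the image path and the exact flags here are assumptions — in particular, on older stacks you may need to pass the OpenSBI/U-Boot firmware explicitly — so treat this as a sketch, not the canonical invocation:

```shell
# Hypothetical example: import an existing Fedora/RISC-V disk image as a
# libvirt guest. Adjust paths and flags for your libvirt/QEMU versions.
virt-install \
    --name fedora-riscv64 \
    --arch riscv64 --machine virt \
    --vcpus 4 --memory 4096 \
    --disk path=/var/lib/libvirt/images/fedora-riscv64.qcow2 \
    --network network=default \
    --import --graphics none
```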
We're running 94 QEMU instances today. Those are mostly four-core, 8 GB instances; some are eight-core, 32 GB. Everything is managed by libvirt. We have a new server coming — it's already running, we just need to finish the configuration, finish thinking about the storage — and then we can probably grow even more; there are some discussions about adding more QEMU instances. Um — on both, yes, we're mixing the boards with the QEMU instances. No, we're not cross-building; we're running emulation, so the virtual machines are QEMU. And yes, of course — there's a patch set being upstreamed right now for KVM support, so technically Western Digital already proved that you can launch multiple virtual guests on a RISC-V system — but considering the strength of the hardware, you probably don't want to run virtual machines on it, especially on a build farm. We started doing repository mirrors to official Fedora websites, so if something goes terribly wrong, the disk images and all the sources — all the RPMs — are available, so someone could take that and kickstart a new Koji instance. So there's at least some way to survive if something happens. This is the current statistics from Koji, pulled last night: we have probably 34,000 builds by now, about 7,000 failed for very different reasons — anything from an overloaded main server that cannot deliver RPMs, to generic errors, to RISC-V-specific errors, to mistakes like building things in the wrong order, and so on. We're doing a mass rebuild right now; this month we already built about 9,000 packages successfully. One thing that we don't do — or didn't do for most of the time — is run any tests. When we started the Koji instance, we only had eight VMs, and running tests was expensive, so we had to cut the tests to actually build more packages faster.
As we're increasing the number of builders, we have more bandwidth, so we can make our jobs longer and do things more properly. All the tests are now enabled in Koji. If you look at the numbers — this is FOSDEM 2019 versus the numbers from today — most of Fedora just builds; surprisingly, it builds, testing works, and you rarely get any kind of error. So that's a good result. The major missing pieces: until recently the upstream kernel was not able to boot on the board — the 5.3 kernel can finally boot on the board. We didn't have an LLVM/Clang stack, and 9.0.1 already supports it. We don't have Rust, but of course Clang happening means that Rust is going to happen very soon after. And there's no Go — but that's also basically done: Carlos, I think from Red Hat, is working on that. Once you have the Go port working, that unblocks Docker, Kubernetes, OpenShift and other pieces; it's currently blocked because Go is about to cut a release, so it's frozen for adding a new architecture. And we have problems: the servers we have for Koji are basically very, very, very old. We have memory issues, we have I/O issues — especially if you want to do fast composes, that's a really terrible thing, because you generate about 150 gigs per repository, and if you try to do two Fedora cycles, it quickly becomes a project of moving data — hundreds of gigs per day — to different places. To solve that, we got the new server: full NVMe storage, proper server-grade stuff, loads of I/O, loads of bandwidth. It's not being used yet — not until we fully configure it — and then we can switch to it and finally start building fast new composes and images.
Our subdomain was added to the HSTS preload list, which means browsers automatically redirect it to HTTPS. That's bad, because the Koji there is configured with a self-signed certificate, which of course no one has installed on their system, so people started complaining that they cannot access the website. There's a DNS alias, fedora.riscv.rocks, which solves most of the problems until we move to the new server. We still have problems with the QEMU instances: every day I have to recreate some of the VMs because the CPUs stall for some unknown reason, and with the 5.2 and 5.3 kernels on the current server we have even more problems — probably TLB flushes not working fully properly — and that causes extra instability in the kernels, which is being investigated by multiple companies right now. We think it's a problem similar to one we see in builds, and it's still being investigated where exactly the problem is. So if you're using the fedora.riscv.rocks images, you might get extra instability — meaning it might crash after a few days if you're doing a long build. If you just occasionally run it — boot, test something short — it works fine. The problem is that if you spend all this time on infrastructure, you have less time to deal with the actual porting stuff, fixing release-related stuff, GA stuff. So, what's missing? We don't sign any RPMs — I somewhat consider that a feature, so you shouldn't trust it yet. It's not up to the full Fedora standard, so that's one thing. Same with the repositories: not signed. If that's something that bothers you very much and you want to have it signed, it's possible to do; considering our current server we cannot, but once we move, we should have enough bandwidth to start signing RPMs. We have no Bodhi — I don't think we need it. We have no Pungi. At this point, Koji is generating the distribution repositories, and it's also generating images; that works.
It's basically the way ARMv7 works, but in the future we'll probably have to look into two things. One is Pungi; and if you want modularity, the whole MBS infrastructure — for which I think there's no tutorial on how to set it up, except the Ansible scripts. We don't produce Workstation or Server images. We probably could do that, and I'm going to talk about the different images we produce later on. So from an infrastructure point of view, it's going to be Pungi and modularity, because otherwise we're missing a piece of Fedora. On the boot side, we have no BLS, and Fedora is moving to BLS — I think ARMv7 might not have it either. The good part is that we have U-Boot: it works on QEMU, it works on physical hardware, and we just got GRUB 2.04, which also has RISC-V EFI support. So technically you should be able to use U-Boot's EFI support and boot the EFI binary of GRUB 2 — and then you stop there, because the kernel doesn't have an EFI stub at this point. That is on the list, but that's the only missing piece — plus bugs, of course — to get a boot flow which looks like AArch64. Because we do have U-Boot, that means we can finally actually install a new kernel and reboot a QEMU instance, which was not the case before — you had to pass the binaries directly to QEMU or libvirt. So that's a very nice addition. And again, we didn't used to run any tests, which we do now, because we finally have the bandwidth to do that. So, custom bits. We don't use koji-shadow. That's probably my fault. I did try it once: I followed the configuration — there was a wiki page from 2015, some to-do list, not fully described, of how it's supposed to work.
I did that, launched it, and — I was paranoid — waited, looked at the screen for like an hour. There's a lot of NVR comparing — comparing this NVR to that, replacing it with a newer NVR, and stuff like that — and it finally imported a single package, from Fedora Core 12, and I killed it. That was going too far back. So I'm a bit afraid of what koji-shadow could do to the current state if it starts tagging packages from Fedora Core 12, because the last-tagged package is the one which ends up in the repository. We should use koji-shadow — I think that would be a requirement — but I currently don't have correct documentation on how to set it up so that it doesn't go that deep. I just want the recent stuff that goes into the official Koji to go directly to my Koji, not to go all the way back to the dinosaur age and try to import something that I don't want to have in my Koji. We also didn't produce SRPM packages the way Koji does, because it was expensive: setting up the buildroot takes one hour, and then you have to wait one to two hours to get your SRPM. So that was done on an x86 target, in Docker and Podman, at the beginning. You also don't want to submit packages that are going to fail, because that's also expensive, especially when you have like eight virtual machines. So I had my own script — kind of my own koji-shadow — which pulled the DNF data from our Koji, checked all the requirements, tried to figure out if we could satisfy them, and then submitted the builds; the success rate of those builds was something like 90%, which is quite good. We do have an SCM overlay for dist-git — a separate repository that holds everything that I'm hacking on. So if the release has an extra .riscv component before the dist tag, that means I did something to that package.
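The x86-side SRPM generation mentioned above can be sketched like this — the container image tag, package name, and mount paths are all illustrative:

```shell
# Hypothetical sketch: build an SRPM on a fast x86_64 host inside a
# Fedora container, instead of tying up a slow emulated RISC-V builder.
# SRPMs are architecture-independent, so this is safe to do on x86_64.
podman run --rm -v "$PWD:/work:Z" -w /work registry.fedoraproject.org/fedora:30 \
    bash -c 'dnf install -y rpm-build && \
             rpmbuild -bs \
                 --define "_sourcedir /work" \
                 --define "_srcrpmdir /work" \
                 bash.spec'
```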
So it's either a patch applied which is upstream but not in a release, or I had to make changes to the spec file and didn't yet push them to dist-git. So I changed something — there's going to be a changelog entry of what I did — and there's a separate SCM for it. It also means that I can make changes without waiting two or four weeks until someone reacts on dist-git. Disk images: we basically have three now. There is "developer", which is like a kitchen sink. It has everything you'd want: RPMs, minimal X11 support, debuggers, Emacs, Vim, everything you might need — you can even manipulate virtual disk images. The problem is, if you boot a system and you want a package, you might need to wait 10 minutes to get it. And then: oh, I'm missing git — another 10 minutes; I'm missing my Vim — another 10 minutes; I'm missing this — another 10 minutes. It's very annoying. DNF, and parsing XML files, I believe, the last time I looked — DNF is slow. Then there's a GNOME disk image: basically "developer" plus whatever you need to run a GNOME desktop. And there is a "minimal": some people don't want to download one gigabyte; they just need to boot and see that it works — for example, virtualization people just need something small that boots. So that's basically it. Again, we probably could build Workstation or Server at this point; I don't see anything exactly blocking us, but it needs to be investigated. So — yes, yes, if you have hardware, it's somewhat fast. Yeah, it's decent. On QEMU — well, unless you have something like an Intel 5.2 GHz system running, it's going to be slow; with one of those you're probably not going to have any problem, but if you're running on a laptop, like I tend to do, then it's going to be slow.
These images — we started to have blessed disk images. That means I take an image, boot it, let QEMU run, start playing with it, see if things work, if everything on the system looks green. Most images are posted on fedoraproject.org, so you can get those. You can also pull images directly from Koji, and if you want to look at all the stage4 disk images, those are also still available. We don't provide disk images for the board, because there wasn't support in the upstream kernel, and I didn't want to pull in a large number of moving patches — so anyone who wanted to run Fedora on the board had to build their own kernels. This is going to change very soon, because 5.3 can do it. After we finish the mass rebuild — I don't know how long it's going to take yet — I'm going to start looking at the 5.3 kernel and actually having an image that can boot on the board out of the box. I also set up a virt-builder repository. Again, it's not signed, but it makes your life easier, because you can just use virt-builder to generate your disk images, do modifications to them — change the size, hostname, passwords, whatever — and then you can do virt-install to get it into your libvirt. The only thing you need to extract from the disk image is the firmware blob — that's OpenSBI with U-Boot. It's also available on the web page if you follow the first link on the disk images page; it's available pre-extracted too, so if you don't want to deal with the disk images, you don't need to. The targets we support are QEMU — the libvirt/QEMU setup — and we used to support TinyEMU with at least the older disk images, but we've moved so far ahead that it's not fully working now. If you take an older Fedora image, that's still going to work with TinyEMU; newer ones are not going to work.
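The virt-builder flow just described might look something like this — the template name, sizes, and hostname are assumptions, and the unofficial Fedora/RISC-V virt-builder repository has to be configured first:

```shell
# Illustrative virt-builder flow (template name and options are
# assumptions; requires the unofficial RISC-V repo to be set up).
virt-builder fedora-riscv-30 \
    --output fedora-riscv.qcow2 --format qcow2 \
    --size 20G \
    --root-password password:changeme \
    --hostname riscv.example.com

# Then extract the firmware blob (OpenSBI + U-Boot) from the image and
# import the guest into libvirt with virt-install.
```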
There's the physical board, as I mentioned — 5.3 is going to support it — and we're going to have it in disk images as soon as we finish the mass rebuild. If you want to run the Microsemi expansion board, you're probably not going to have those patches in — so we're going to support the board itself, not the expansion stuff that you can add to it today. All the instructions are available on the Fedora wiki page. Now, annoying bits: configure scripts. Outdated config.guess and config.sub files are still a thing. There is a configure macro which attempts to replace them, but it doesn't always work, so that's still a bit annoying. There's also somewhat of an issue with atomics. RISC-V only supports word-size atomics, and everything else goes through libatomic. And people tend to use -lpthread, which is not the correct way — you have to use -pthread, which expands to different things on different platforms. Apparently a lot of packages do this: they use -lpthread, and then what happens is you get undefined references to libatomic calls, because the compiler is not going to take care of it — the current GCC does not inline those libatomic calls. You can fix that by manually linking to libatomic, or — I'm somewhat considering this, but not doing it yet — by replacing the pthread library with a linker script, similar to the way libc.so works, so that it automatically includes --as-needed -latomic --no-as-needed. Then if you do use -lpthread on RISC-V, the linker is going to do the correct thing and link -latomic. Another thing: for example, the dynamic section being read-only on RISC-V. The other architecture that does that is MIPS, and when upstreaming was happening, the glibc bits were copied from the MIPS files. So technically that behavior is wrong — the spec doesn't require it to be read-only — but now it's part of the ABI.
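The linker-script idea mentioned above — making -lpthread itself pull in libatomic, the way glibc's libc.so is a text file rather than an ELF object — could look something like this; the library path inside the script is illustrative:

```shell
# Sketch: a GNU ld linker script standing in for libpthread.so, so that
# -lpthread transparently adds libatomic only when it's actually needed.
cat > libpthread.so <<'EOF'
/* GNU ld script (illustrative paths) */
GROUP ( /usr/lib64/lp64d/libpthread.so.0 AS_NEEDED ( -latomic ) )
EOF
cat libpthread.so
```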
Another thing is that the libraries live in a slightly different location: you have /usr/lib64, and then an ABI directory under it — again, we're using lp64d as the ABI. The way we solved that in Fedora is with a symlink to the parent directory, so that works: we can use the official RISC-V-defined location for the libraries, and we can still use the same /usr/lib64, and they point to the same thing. We've found no problems with that. Many ideas: again, we have to move to the new server. There are still ongoing discussions about audit — I upstreamed the audit framework support in the kernel, but the user-space library part is a bit blocked, because the maintainer wants to refactor the library and doesn't want to take the tables for the RISC-V architecture, because then someone might not maintain them and... you know, typical problems. The seccomp patch also exists — I have it included in the Fedora disk images; I have to send out the v2 version to the kernel mailing list, but basically it works, except that one kernel self-test is failing, which is a new addition, so I still need to look at it. We still have packages in the dist-git overlay — probably a hundred of them — which I still need to go and check and either submit to dist-git or push patches upstream, but that's going slowly, because infrastructure is number one: if that's not running, there's no Fedora/RISC-V. And we've had a lot of new changes: we have D now working, we have the Ada compiler, we have Free Pascal working, we have the Haskell compilers. The missing pieces are Rust and LLVM/Clang, which is basically done — once it lands in the official Koji, I'm going to pull that in, and that's supposed to work. And Go, which also exists, but it's not yet merged, because it's blocked by the upcoming Go release.
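Back to the library layout for a moment — the symlink arrangement described above can be illustrated with a small sketch, using a scratch directory rather than a real root:

```shell
# Sketch of the Fedora/RISC-V library layout: the psABI location
# /usr/lib64/lp64d is a symlink back to its parent directory, so both
# paths resolve to the same files. Demonstrated under /tmp, not /.
root=/tmp/lp64d-demo
mkdir -p "$root/usr/lib64"
ln -sfn . "$root/usr/lib64/lp64d"
touch "$root/usr/lib64/libc.so.6"
ls "$root/usr/lib64/lp64d/libc.so.6"   # same file, reached via the ABI path
```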
I'm supposed to use koji-shadow, but I'm a bit afraid of it, because I'm not fully sure what it's going to do to the current Koji state. We used to have the Berkeley Boot Loader before we had the proper firmware, which is, you know, OpenSBI and U-Boot. So that's also solved: you can install new kernels and reboot, and you don't need to kill a VM and change its command-line arguments to boot a new kernel. That's very nice. And we'll officially produce the images for the board.

There were a number of companies supporting Fedora RISC-V, directly or indirectly. Most of the servers we use as builders today are old Facebook servers, hosted by the GCC Compile Farm project. SiFive donated the boards, another company donated the initial Koji servers and the backup servers, and Western Digital donated the storage for the new server. Plus a lot of individuals, a lot of individuals from Red Hat and different companies. So it's moving very fast and it's expanding very fast.

Oh, that ran long. But we still have time for questions.

No, it's a separate one as far as I know. I think there are still discussions about potential changes for virtualization, so I don't think that part of the spec is frozen at this point.

Does the integration still work? Yes, there's KVM. We started working on an implementation, and there are some ideas on how to improve the performance and things like that; there are discussions of what can be done and improved. It's still going to work, but in the future it might work a bit more efficiently.

Yeah, well, the baseline is just integer. That's it: you don't get atomics, multiplication, floating point, nothing. I'd need to re-read the spec, but basically you can build a very, very minimal chip without much of anything. For Linux, it's RV64GC; that's defined in the RISC-V Unix platform specification. So it's not the instruction set spec.
It's the platform spec that defines the basic software target, which is RV64G. The G stands for integer, multiplication, atomics, floating point; it just defines a set of extensions that you want to have. And then the Unix platform spec adds that you also need the compressed instructions, you need, you know, this virtual memory stuff, you need this and that for your Linux system.

Sorry, yes: it doesn't talk about that. The spec is very short today. The RISC-V Foundation has started a group to define it properly, to have a well-written document of what a Unix-class platform even is. It's currently all named the Unix platform specification, but that's a bit wrong, because you want a specification not only for Unix-like systems but beyond that. You might have different profiles for the different kinds of systems you want to target; embedded, you know, needs a different target.

It's not frozen. Again, there's a working group within the RISC-V Foundation which is working on that specification, which also covers the SBI, the Supervisor Binary Interface. So that's still very much a moving target; we just needed to have something as a base. It started out as basically two paragraphs; it was reformatted recently, but it hasn't moved very far from that. So it's very basic, nothing like what you could get from Arm.

If you have custom patches, yes. There's no hardware virtualization, so, yeah.

The way SiFive at least positions the chips: the current HiFive Unleashed board has, I think, four U54 cores, and their performance is like a Cortex-A53. The next generation, the U74, I think, is marketed as similar to a Cortex-A55. What we found, surprisingly, well, it depends on your host machine, but compared to our builder systems, the board is still faster at things like building kernels or GCC.
So if you have a very complicated C++ project, the board is still better than running under QEMU. Unless, again, you have a high-performance Intel chip boosting to 5.2 GHz or something; in that case I don't know the numbers, but you might be as performant as the board, or maybe even better. You can go up to eight cores on a QEMU instance; that's the limit. My suggestion, from years ago already, is that you need something like the Linaro developer cloud, so that people can actually get allocations of boards to run CI, do porting, or run performance checks. Today that's not the case. SiFive was the first company to seed development boards, physical boards, for free: Debian people, Golang people, anyone working on some major project could get a physical board and have direct access to it. The rest, it depends. I mean, yes, I would like to have that, I've mentioned it, but we don't have it yet, and I'm not sure if it's going to happen in the future, but I think that would be great.

So this checks my Koji instance, and if my Koji instance has a newer package, or? [inaudible] Okay.

Does anyone plan to write full documentation for koji-shadow? That would be great. That was my idea too; today the only way to figure out what koji-shadow is doing is to take the code and read it. I think that in the future koji-shadow is something you need if you ever want to become an official secondary architecture. So I would kind of want to avoid it, but I can't avoid it. Yeah, that would be great. And you could update the wiki page too. Maybe.

Yes, cgo just works; that's the first thing we tried, and it works. There might be a small issue within Fedora in the future related to libffi.
The problem is that Fedora uses the last official libffi release, which apparently some distros don't even use, because libffi is not being released and the maintainer is very slow, I believe. The RISC-V support is merged in; the patches are even merged into the GCC copy of libffi. So at some point it would be nice if Fedora could bump to a specific git commit of libffi, because I don't expect any release in the next whatever, a few years, if ever.

Well, that's an interesting question. I think we were first with some things. We work very closely with Debian, believe it or not; I would say the port is partly a Debian and Fedora baby. You'll even find Debian people in the Fedora RISC-V channel, and we discuss all the things. We had more bandwidth, more human power, to push it through, so I think we got into better shape, and we may still be delivering some things faster than Debian. Then again, I'm not fully sure, because I don't run Debian. I was working on this full time for a year, so I had the bandwidth to push things faster. And Fedora, to some people, is interesting mainly as Rawhide, a fast-moving target. So we want to be fast here, and the dist-git overlay, for example, allows me to be fast. If someone comes in and says "we need this package for academic research", or "I'm a developer and I need that", then I can do it without waiting however long it takes, days or weeks, to get that into dist-git, even if it's not yet fully merged upstream. For example, I'm planning to ship Golang without waiting for the next Golang release, because people want to use it.

There's no board specification, but at least the SiFive board's chip ships with something called the FSBL, the first-stage bootloader, which also carries a DTB file that can be flashed on the board, and which may or may not be replaced in the future with U-Boot SPL.
So in that case you most likely want to end up in the RISC-V Foundation platform specification task group and say that's something you want in the spec. Today I don't have an idea how to support multiple boards; I mean, there's a single board. There is an Andes platform that also has U-Boot and MMC support, so technically we could do that, but they don't produce chips for developers; they're the same as SiFive in that they target custom solutions. Only SiFive has released a chip that can be used by developers. So the boot flow is not yet defined; it's being worked on as part of that working group, as far as I know. If you want it defined a certain way, or you have concerns, you should raise them. Even so, RISC-V has been added to the ACPI, UEFI, Redfish and SMBIOS specs. There is a TianoCore port which can boot on an FPGA and get into a shell; GRUB 2.04 also supports RISC-V; and U-Boot supports UEFI on RISC-V too. The only missing piece in the kernel is the EFI stub. Yeah, it has to happen at some point; then again, it's a work in progress. If you have concerns, that task group is probably the best place to raise them, or at least the relevant open mailing lists: say what you would like to have and what your reasoning is. I personally don't expect to have a full TianoCore firmware running on the boards; the best I could expect is something EBBR-like. And my personal target is something like what AArch64 has; we're slowly moving towards that. More questions? Okay, thank you.