Good morning, everybody. My name is Karsten Merker. I'm one of the Debian developers working on the Debian RISC-V port, and today I'm going to tell you the story of how the Debian RISC-V port came to life: what we have achieved so far, which obstacles we had to tackle to add a completely new architecture to a binary distribution like Debian, what we are planning to achieve, and an outlook on what we hope to have within the next one to two years.

I'll start with a short introduction to the RISC-V architecture, because I know there are people here who come from the Debian side rather than the RISC-V side and are not familiar with the details, so I'll keep it short. What is RISC-V? RISC-V is a CPU instruction set architecture that was designed at the University of California, Berkeley, originally as a student project, and it has grown quite a bit since then. Everybody is free to implement CPUs based on the RISC-V ISA, and the specification that defines the ISA is available under a free license (CC-BY). To make sure that implementations actually conform to the specification, the RISC-V Foundation has registered trademarks on the name RISC-V, and you are only allowed to call your implementation RISC-V if it actually conforms to the spec, so that we don't get incompatible implementations.

The RISC-V ISA exists in three variants: 32-bit, 64-bit, and a still-work-in-progress 128-bit variant, so for practical purposes it's 32-bit and 64-bit right now. It has been designed to be modular and to scale from microcontrollers up to supercomputers, and the ISA designers have tried to include only non-patentable technology in the specification, so that it is possible to implement a RISC-V CPU without having to buy patent licenses. There is no formal legal guarantee for that, but the designers have put quite a bit of effort into making sure there are no patent problems.

So we know what RISC-V is. What is RISC-V not? RISC-V is not a specific CPU. It's just an instruction set architecture, the set of instructions and their behaviour, not a specific chip. And while the specification is under a free license, it is of course possible to implement proprietary CPUs based on the ISA; there already are some. One thing that is somewhat of a problem for open source projects is that the stewardship of RISC-V in the hands of the RISC-V Foundation is open in the way the semiconductor industry defines the term "open", but not exactly in the way open source projects define it. All the specifications, once they are released, are available under free licenses, so no problem there. But taking part in the specification process itself requires a RISC-V Foundation membership, which requires signing a non-disclosure agreement, which is why many open source developers like me are not members of the foundation. Most of the software work is actually handled outside the foundation, but the actual ISA specification work happens inside the foundation, and getting access to draft documents requires membership. Everybody can become a personal member, but personal members don't have voting rights, so the actual decisions are made by the corporate members of the foundation.

So why is it called RISC-V? That's rather easy: it's a classical RISC architecture, of which there have already been many, and it's the fifth major RISC architecture designed at Berkeley, so the name is rather obvious.
There have been other RISC architectures, for example MIPS: if you have a DSL router at home, chances are about 95% that it's MIPS-based. If you're carrying a smartphone, it's about 95% ARM-based, and there are several others. Most of them are closed architectures designed by companies. MIPS is only now slowly changing and opening up a bit. Sun Microsystems actually specified SPARC in an IEEE standard, but that standard itself is not freely available, so one has to buy it, which again is not nice for open source communities. And there has been OpenRISC, an actually open CPU instruction set architecture.

So why yet another one, when there are already so many to choose from? RISC-V was designed at a university, and one of the goals of the architecture was teaching, and closed architectures are not suitable for teaching because you are legally not allowed to just implement an x86 CPU without having a license from Intel and AMD. SPARC has the problem that it's only halfway open, and the SPARC designers made some choices that people wouldn't make today. OpenRISC is a fully open design, but OpenRISC has a historical problem: originally, one of the GCC porters for OpenRISC did not allow a copyright assignment of his code contributions to the FSF, and the FSF requires a copyright assignment to get code into GCC upstream. Without this copyright assignment, support for OpenRISC could never go upstream, which is why all efforts to broaden the use of OpenRISC have failed. This has changed by now: an OpenRISC developer is currently re-implementing all the OpenRISC GCC support and getting it upstream, but in the meantime we have had several years of RISC-V, so we'll see.

RISC-V has been designed to avoid some of the decisions that older architectures made. It's designed to be easy to implement and to be extensible at the same time; we'll get to the extensions in a few minutes. One thing that's important is that RISC-V follows a strict upstreaming policy, so all the toolchain work is done upstream. We don't want 5,000 forks of binutils, compilers, whatever; we want everything to be standard and upstream.

The ISA itself consists of several modules, which are labelled with letters, as you can see there. There is the base ISA, which is just the integer operations; there is hardware multiply, atomics, and single and double precision floating point, and all of those together are called the general purpose set, which is what one would expect for a system that runs Linux, BSD, whatever. Then there is the compressed instruction set, which does not add new instructions but provides a shorter encoding for existing ones: normal instructions in RISC-V are 32 bits long, and the compressed extension provides a selected set of instructions with a shorter 16-bit encoding, which means less cache pressure, less memory bandwidth, and better performance. If one wants to specify which parts of the ISA a CPU supports, one writes an ISA string like the one you can see below: the base integer ISA plus the letters for the extensions, for example RV64IMAFDC, where the IMAFD combination is abbreviated as G, so that becomes RV64GC. When the Linux distributions started porting to RISC-V, we had to decide whether to assume the compressed extension is available or not, and there were some discussions about that at the time. All the binary distributions that are working on RISC-V support right now assume that the C extension is there.
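As a small illustration of the modular ISA, here is a minimal sketch, assuming a GCC-based RISC-V cross toolchain: the __riscv_* macros follow the RISC-V toolchain conventions implemented by GCC, but their exact availability may vary with the compiler version. Built with -march=rv64gc, it prints which of the standard extensions the compiler was told to target.

/* isa-probe.c - print which RISC-V ISA modules the compiler targets.
 * A minimal sketch; the __riscv_* macros follow the RISC-V toolchain
 * conventions implemented by GCC, but their availability may vary with
 * the toolchain version. Example build with a cross compiler:
 *   riscv64-linux-gnu-gcc -march=rv64gc -o isa-probe isa-probe.c
 */
#include <stdio.h>

int main(void)
{
#ifdef __riscv
    printf("RV%d base integer ISA (I)\n", __riscv_xlen);
#ifdef __riscv_mul
    printf("M: hardware multiply/divide\n");
#endif
#ifdef __riscv_atomic
    printf("A: atomic instructions\n");
#endif
#ifdef __riscv_flen
    printf("F/D: floating point, FLEN = %d\n", __riscv_flen);
#endif
#ifdef __riscv_compressed
    printf("C: compressed 16-bit instruction encodings\n");
#endif
#else
    printf("not a RISC-V target\n");
#endif
    return 0;
}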
There is work on a platform specification which says that all RISC-V Unix-style platforms shall have the C extension. The ISA has three privilege modes; machine mode is the highest level. Microcontrollers usually only implement machine mode; if you want to run Linux or BSD, you need all three modes. The 32-, 64-, and 128-bit ISAs are actually independent from each other. That's probably strange for people who are used to the Intel world, where every 64-bit CPU also runs 32-bit code. That's not the case on RISC-V. We have a similar situation on ARM64: ARM64 CPUs can optionally support 32-bit code, but they don't have to, and there are actually quite a few ARM64 servers that only run 64-bit code and no 32-bit code.

A short side note on Meltdown and Spectre. In discussions about RISC-V, people often claim that RISC-V is the solution to Meltdown and Spectre because it's an open architecture and we won't have any problems with that. That's just nonsense, because Meltdown and Spectre are implementation bugs; they are not bugs in the design of the instruction set architecture but in specific chip implementations. Right now we have no RISC-V chips that do speculative execution, which is what Spectre and Meltdown are based on, so currently there is no vulnerable RISC-V chip, but nobody can guarantee that this won't happen in the future when we get chips that do speculative execution.

So now we have an ISA, but if we want to use an ISA, we'd like to have hardware, or at least some other way to run code. There are multiple options. We can do QEMU emulation; QEMU supports both user-space and full-system emulation. We have CPU designs that can be instantiated in field-programmable gate arrays, and I'll get to that later on. And we also have real CPU chips. Last year at FOSDEM we had the presentation of the SiFive Freedom U540, which was the first Linux-capable RISC-V chip in the world. There is the lowRISC project, which is working on designing a community SoC, but that's still a lot of work, and until we actually have those chips on our desks it will probably take three, four, five more years; doing chip design is hard, it needs work, and it needs to be paid for. Then there is Shakti, a project in India that aims at designing a full family of RISC-V chips, from small IoT-style embedded parts up to supercomputer hardware, and the Shakti team has recently taped out their first locally produced Linux-capable chip.

What you see here is a picture of actual RISC-V hardware. The left one is the prototype that was presented at last year's FOSDEM; the right one is one of the early development systems. That is the prototype chip, that is an FPGA board which acts as a southbridge and provides PCI Express, and that's a PCI Express NVMe SSD.

So why port Debian to RISC-V? From a philosophical point of view, Debian and RISC-V are a perfect match: we have open source hardware, we have open source software, and we would like to combine the two. There have been efforts to port Debian to OpenRISC, but those were not successful because of the GCC problem, because we would have had to ship different GCC versions on different platforms, which is something Debian doesn't do: Debian uses the same versions on all platforms, so that was not an option. The RISC-V ISA is probably the first open ISA that has a realistic chance of actually gaining mass-market traction. There are a lot of companies supporting RISC-V right now, and things in the semiconductor industry are slowly changing.
It is slowly becoming possible to actually have community-led chip production. It's still very expensive, but things are changing in the industry, and within a few years it might be possible to have community-led chip designs mass-produced.

The first work on porting Debian started in early 2015, and at that time a lot of things were still in motion. binutils and GCC were changing every day, glibc was in its early infancy, and so was the Linux kernel support. Things changed about every two days, and code generation actually became incompatible from one week to the next because of changes in the compiler and in binutils, which meant that if I built something today, I probably could not link it against something built two weeks later, which is rather sad if you want to build a binary distribution. Then there was somewhat of a culture clash between people from the hardware design world and people from the software world, and it took about half a year to sort all that out. Another problem was that at that time we had no QEMU support; there was only Spike. Spike is a CPU architecture emulator which emulates the basic RISC-V CPU. It only emulates a block device and a dumb serial console, but no network. Now imagine you want to bootstrap a distribution: you build a root filesystem with something, you want to compile something and find, okay, I need another piece of software. Normally you just download it, but without a network that's rather difficult. So: shut down the emulator, unmount the block device, mount the block device on the host, copy the files in, unmount the block device, start the emulator again, continue working. If you have done that twenty times, you want something else.

Due to the ABI breaks and the incompatibilities in the toolchain, all early attempts at actually building packages were not successful. So both Fedora and Debian developers chose instead to work on getting things upstream and getting binutils, GCC, glibc and the kernel stabilized, so that we would have a stable ABI and could be sure that what we build today still works tomorrow. That included some modifications to the linker for Debian-style multiarch support, and it included getting an agreement on using device tree for hardware description, similar to what we use on ARM and PowerPC and recently also on MIPS.

Things were looking quite good until early 2016, when we had some form of déjà vu. We wanted to get the GCC support upstream, and the people who wrote the GCC support were happy to do a copyright assignment. Unfortunately, some of them had work contracts with the University of California which gave UCB the copyright on some of the contributions. UCB was happy to license the code under a free license, under the GPL and the BSD license, no problem, but they were not willing to agree to a copyright assignment. So we were in exactly the same situation that the OpenRISC people had faced before, and things were looking rather dire. It took nearly a year to get the legal issues sorted out, but at the end of 2016 the code could finally go upstream, and from then on binutils and GCC moved rather fast: just a few weeks after the legal questions had been settled, binutils went upstream, and three months later GCC was upstream. Then came glibc and the kernel. glibc and the kernel depend very much on each other because of the low-level interfaces they share, and getting those specified in a way that made both the glibc people and the kernel people happy took nearly another year from there.
So effectively at last year's FOSDEM, exactly one year ago, we had reached the point where we had the toolchain and the kernel, things were working, things were upstream, and we could start with the bootstrap of the distribution. Some tools were still missing. GDB support for Linux was still missing: we had bare-metal debugging support, but not Linux debugging. As you have probably heard in the previous talk, LLVM and Clang support is still work in progress; we hope to have a release sometime this year that will work for distribution users. Java has been a problem: we have Java support now, but only the interpreter, not the just-in-time compiler, so Java works but it's dog-slow, and getting JIT support is still quite a bit of work. Rust depends on LLVM. Golang is something that, to my knowledge, nobody was working on; okay, I hear somebody is working on Golang now. And then there are of course other languages and other compilers; V8, the JavaScript engine for Node.js, is also not working yet.

So now we have compilers, we have binutils, we can build code, we can build packages. But we are starting from scratch: we have nothing for the RISC-V architecture. How do I get a distribution out of thin air? I need a cross compiler, so I run a compiler on a PC but generate code for RISC-V, and I make use of the Debian multiarch support, which allows us to have code for multiple architectures within one root filesystem, so I can gradually generate packages for the new architecture. That of course requires getting RISC-V support into the toolchain packages that we have in Debian, but as the upstream support had just been released a few weeks earlier, the binutils and GCC packages in Debian did not yet have RISC-V support, so the first step was packaging new versions of all of that and getting them into experimental.

Then, how does one start bootstrapping a new architecture? I need a base set of packages. Which packages, and how many, are that? Debian has package priorities, and the packages with priority "required" form the base package set. Priority required, okay, that's easy, just a handful of packages, 55 packages, no problem. Well, actually it is a problem, because those packages have dependencies, they want libraries, and those libraries have dependencies themselves, and to build those libraries I need a lot of other stuff. So I need dependencies of dependencies of dependencies of dependencies to build everything, which ends up at several hundred packages; I stopped counting at nearly 400. One example of these problems: if you have source code that uses Meson to build, Meson uses Python, and building Python is a problem insofar as Python has lots of dependencies, because Python has modules for a ton of things, and to build the Python package you have to have all of that, so you get a rather large dependency chain.

The dependency issues are two-fold. One is deep dependency chains, as I've just shown with an example: to build one of the simple command-line packages in the base system, we had to cross-build some 80 or 90 other packages just to get the build environment set up for a single simple package. The other problem is circular dependency chains, which is where the cat bites its own tail. Among the packages in the base system is, for example, audit. audit requires OpenLDAP for building. OpenLDAP, in turn, requires Cyrus SASL for building.
Cyrus SASL requires PAM for building, and PAM requires audit for building. So the cat has bitten its own tail. How do we resolve that? We have to build packages with smaller feature sets, i.e. remove certain features from a package to break the cycle, and doing that by hand is rather tedious. Debian has a mechanism for that, called build profiles: you can build a package with different feature profiles, and during the bootstrap process we have actually added several build profiles to a number of packages to be able to break those dependency cycles.

What you see here is the dependency graph of Firefox, just to give you an impression. The box at the top is the Firefox source. The boxes reached by the brownish lines are direct build dependencies of Firefox. Each of those build dependencies has runtime dependencies, which are all those bluish lines going to other packages. And that's just the first level of dependencies, because all of those packages have build dependencies of their own, which have runtime dependencies again. So looking at that graph, you can probably imagine that building Firefox comes very, very late in the bootstrap process.

I said that we started with cross-building because we had no native code, and cross-building is not as easy as doing native builds. Several packages that build fine natively don't cross-build properly, and there are differences in cross-build support between build systems. People often say GNU Autotools is old, ugly stuff, please leave me alone, but GNU Autotools has one major advantage: cross-building effectively works out of the box. Practically everything built with Autotools just cross-builds. That is unfortunately something we cannot say about many CMake- and Meson-based projects. Both CMake and Meson have, in theory, support for cross-building, but many packages that use them don't make use of the infrastructure those two provide, and in practice we had to fiddle around with many of the Meson- and CMake-using packages to actually make them cross-build. Perl is, well, just strange. Perl claims to be cross-buildable, but the way Perl cross-building works is that the Perl configure script requires an SSH connection to a system already running the architecture you want to build for, and then runs the configure script natively on that architecture, which obviously does not work in a bootstrapping scenario. So the only way to build Perl was to build everything else first, so that I have a native compiler, and then build Perl natively.

Another common problem is generators. Makefiles sometimes compile code which they then execute to generate further source code. That works in a native build: if the compiler runs on a PC and the binary I'm creating is for a PC, no problem. But in a cross-build setting, I'm running the compiler on a PC while generating code for RISC-V, and the PC cannot simply execute RISC-V code, so that does not work. Proper build systems should take care of that and, in a cross-build setting, use the build-architecture compiler for the code generators. Another option, if one has QEMU support, is using QEMU user-mode emulation; that helps, but we didn't have it in the beginning.
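To make the generator problem concrete, here is a small, purely hypothetical sketch, not taken from any real package: a tiny helper that the build compiles and then runs to emit a header used by the real sources. In a cross build, such a helper has to be compiled with the build-architecture compiler (the CC_FOR_BUILD convention), while the rest of the package uses the host-architecture cross compiler.

/* gen-tables.c - a typical build-time "generator": the build compiles
 * this helper and then *runs* it to emit a header used by the real
 * sources. A hypothetical illustration of the cross-build pitfall
 * described above, not taken from any specific package.
 *
 * Native build:  cc gen-tables.c -o gen-tables && ./gen-tables > tables.h
 * Cross build:   the helper must be compiled with the *build*
 *                compiler (the CC_FOR_BUILD convention, e.g. plain gcc),
 *                while the rest of the package uses the *host* cross
 *                compiler (e.g. riscv64-linux-gnu-gcc). A Makefile that
 *                uses the same $(CC) for both ends up asking the build
 *                machine to execute a RISC-V binary, which fails unless
 *                QEMU user-mode emulation steps in.
 */
#include <stdio.h>

int main(void)
{
    /* Emit a small lookup table (bit counts for the values 0..15)
     * as C source on stdout. */
    printf("static const unsigned char bit_count[16] = {\n");
    for (int i = 0; i < 16; i++) {
        int n = 0;
        for (int v = i; v != 0; v >>= 1)
            n += v & 1;
        printf("    %d,%s", n, (i % 8 == 7) ? "\n" : "");
    }
    printf("};\n");
    return 0;
}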
Another thing is that many upstream packages don't properly separate between host-architecture and build-architecture tools. If I, for example, want to call pkg-config to get the compiler parameters for linking against a library, those are at least somewhat architecture dependent, so I need the pkg-config for the correct architecture, and many makefiles just don't take that into account and always call the pkg-config for the architecture the compiler runs on, which is wrong when cross-compiling. Then there are cases of packages that are not multiarch co-installable with themselves. That can happen when I'm using a tool to build a package which has a library dependency, and the code I'm trying to compile for RISC-V also requires the same library in the RISC-V version. With most libraries in Debian it's possible to co-install both at the same time, but there are libraries where that doesn't, or doesn't yet, properly work, so we had to work around that.

Then there's of course general portability stuff. Data type sizes are architecture dependent: how long is an int, is it 16 bits, 32 bits, 64 bits? That changes from architecture to architecture. There's endianness, the way numbers are represented in memory. A very common occurrence is that upstream ships outdated config.sub and config.guess autoconf files. The FSF recommends that autoconf users always regenerate config.sub and config.guess from the current upstream versions so that new architectures are automatically supported, but many upstream authors just ship the versions they used when building the release tarball, which are old and don't know about RISC-V, so we had to replace those.

Another thing that's rather common is pthread support. There is a tiny but important difference in how to link in pthreads: there is the -pthread flag to the compiler, and there is plain -lpthread, which just links the library. The problem is that on PCs just linking the pthread library is enough; that works on x86 and amd64, but it's not the case for quite a few other architectures, and as most developers develop on PC hardware, they just don't notice something like that, so we had to patch quite a number of packages for that. Then there are packages that actually require hand-crafting header files for each new architecture; libgpg-error is one of those examples.

Then there are type constraints. We have atomics, but on RISC-V atomics are only supported on native word sizes, not on smaller objects. The PC architecture allows atomics on arbitrary object sizes; that's not the case on RISC-V. We have libatomic, which abstracts that away, so if people use libatomic that's not a problem, but again many upstream projects just assume that atomics work on arbitrary sizes, because that's the case on PCs, but it's not true for all other architectures.

And then there is stone-age stuff like the X11 sources, which don't make decisions based on properties like "if int is 32 bits, then do something". If you look into the X11 sources, it's literally "if the architecture is foo, then this block of code; if the architecture is bar, then that block of code; if the architecture is baz, yet another block of code", and so we had to write new code for that. We tried to get it upstream, but that didn't work, because the X11 developers don't want to add code for new architectures; they say X11 is in maintenance mode and they don't want new architectures upstream.
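As a hedged illustration of that difference, here is a hypothetical C sketch (not code taken from the X11 tree): the first style enumerates architectures by name and has to be patched for every new port, while the second asks the compiler for the properties that actually matter, word size and endianness, via predefined macros that GCC and Clang provide.

/* arch-detect.c - the two styles of architecture handling discussed
 * above; a hypothetical illustration, not code taken from the X11 tree.
 */
#include <stdio.h>

/* Style 1: enumerate architectures by name. Every new port (riscv64,
 * for instance) has to be added by hand; until then the fallback value
 * is silently wrong. */
#if defined(__x86_64__) || defined(__aarch64__)
# define WORDSIZE 64
#elif defined(__i386__) || defined(__arm__)
# define WORDSIZE 32
#else
# define WORDSIZE 0   /* unknown architecture -- e.g. riscv64 */
#endif

/* Style 2: ask the compiler for the properties that actually matter;
 * this keeps working unchanged on any new architecture. */
#define IS_LITTLE_ENDIAN (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)

int main(void)
{
    printf("style 1: long is %d bits\n", WORDSIZE);
    printf("style 2: long is %d bits, %s-endian\n",
           (int)(__SIZEOF_LONG__ * 8),
           IS_LITTLE_ENDIAN ? "little" : "big");
    return 0;
}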
And then, of course, there's the problem of compiler support. We lack Rust, so building, for example, Firefox and some other things currently does not work, because we don't have a working Rust compiler yet.

Doing all the package bootstrapping, getting the ordering right, patching packages, is rather tedious, and of course we would like to have that automated. Fully automating it is way more complicated than one would expect at first sight. A Debian developer wrote his master's thesis some years ago about bootstrappability issues, which has helped a lot, and there is a tool by Helmut Grohne called rebootstrap, which automates the early bootstrap as far as possible: getting binutils, getting the compiler, getting various basic tools and several basic libraries. We were very happy to have that.

So now we have packages, but we want a proper distribution, so we need to get those packages into the Debian infrastructure. Debian has two major classes of architectures: regular architectures, which have regular stable releases and get security support for those releases, and ports architectures, which are somewhat of a kindergarten for Debian architectures. New architectures usually start in ports, because they're incomplete, there might be stability issues at the beginning, and bugs in ports architectures don't influence the regular architectures, so the unstable-to-testing migration for the regular architectures keeps running even if a package is broken on one of the ports architectures. For ports architectures we also have a bit more freedom; for example, the ports archive has an additional suite called "unreleased", which is allowed to carry architecture-specific patches that are not in the main package source. That is only for temporary stuff; the aim is for unreleased to be empty in the end, but while we're working towards that, there are some things which we currently can only keep in unreleased.

So that's where we are right now: we have a bunch of packages, and we aim at becoming a regular Debian architecture sometime in the future, but that requires being able to build about 95% of the archive, and we are not there yet. This is a package-build graph. The grey line is the percentage of packages built for RISC-V; the graph is several months old. The spike that you see here was actually created by porting the Haskell compiler: at that point we got Haskell bootstrapped and built a bunch of Haskell packages, which caused the jump in available packages. Unfortunately, the number of packages has dropped again in the meantime, because there are packages which we were able to build in the past but cannot build anymore. One of the problems is that several upstream projects that used to be written in C or C++ are moving to Rust; we could build the old C or C++ based versions, but we can't currently build the new Rust-based ones, so we had to drop some packages. Another problem is the Qt ecosystem, which in the past had its own C++ parser but has now moved to using LLVM, which brings us back to the problem that we don't yet have LLVM. We are currently at 84 or 85 percent of packages, which gives a rather usable system, but of course it's not everything yet.

To become a release architecture, an officially supported stable release architecture, we need to have Debian installer support; that's work in progress but not finished. We need to have enough buildds, with redundancy.
Buildds are the package builders Debian uses to automatically compile packages for all architectures. And we need to have everything hosted on infrastructure managed by the Debian system administrators, who have requirements: they want rack-mountable server-grade hardware with remote management capabilities, which is somewhat difficult with small prototype development boards. That's also something we hope to get within the next years, but we don't have it yet. And of course there need to be people to take care of the architecture; okay, we have those.

The Debian installer support is work in progress. It's possible to build a Debian installer for RISC-V from the debian-installer git, but it's not really usable yet. There are two issues still being worked on. One is that we have problems with klibc; somebody is working on those, and that's the last big blocker for getting the Debian installer released. The other is that we currently don't have bootloader support yet: the U-Boot port for RISC-V is still rather new, and just a day or two ago the first OpenSBI code base, which we would need for that, was published, so that's something I will look into in the next weeks.

Then there's the topic of 32-bit support. Currently we only support RV64, 64-bit RISC-V, because that's where most of the work has gone and where we have hardware. 32-bit support is still work in progress, because we don't have 32-bit support in glibc yet, and the kernel support for 32-bit is also not complete, because it needs to be coordinated with glibc. And there is the problem you might have heard of, the year-2038 problem: the time_t type overflows in a few years, and we want RISC-V to do this the right way from the start, but that requires infrastructure in the kernel and in glibc which wasn't there in previous glibc releases and is only now getting into glibc, and we didn't want to release a 32-bit port that would become binary incompatible in a few years because of an ABI break to handle the year-2038 problem. So that takes a bit. Fedora and Red Hat probably won't release 32-bit ports; Debian might do it, we'll see, if interesting hardware becomes available for 32-bit platforms.

The way such hardware could become available is small FPGA implementations. Designing CPUs on field-programmable gate arrays is a common technique in commercial CPU development, but in the open source world it hasn't been done very much, because for all available FPGA types one had to use the proprietary FPGA toolchains from the big FPGA vendors, and open source developers don't want to build this stuff on top of proprietary tools. In recent years there has been a lot of effort to reverse engineer the FPGA bitstream formats and to implement free toolchains for FPGAs. We actually have one that's fully working, Project IceStorm, which is packaged in Debian, so you can actually implement a CPU with tools from Debian and run it on an FPGA. There's Project Trellis, which is currently work in progress; there's a talk tomorrow, just upstairs from here, by the Project Trellis project lead about his support for the ECP5 FPGA series. What's interesting about the ECP5 is that it's about ten times the size of the small iCE40 that is supported right now. The problem with the sizes is that I can implement a CPU on a Lattice iCE40, but that's more microcontroller class; something Linux-capable with a memory management unit in that size is very, very difficult.
David Shah has actually implemented an OpenRISC CPU on an ECP5 and is planning to do a RISC-V implementation on one as well. Then there's Project X-Ray for the even bigger FPGAs from Xilinx, which is at a rather early stage. Things are moving forward, but I wouldn't hold my breath; it will probably take one or two more years until we have something that really works there.

FPGAs are nice: I can update my CPU in place. If I have a better version of my CPU, I compile it, load it into the FPGA, and I have a new CPU. But FPGAs have disadvantages. The clock rates I can achieve are rather low: 200 MHz is already rather fast, and more realistic on cheap FPGAs is something around 50 MHz, which by today's standards is just slow. Fast FPGAs are rather expensive, and compared with actual mass-manufactured ASICs they draw a lot of current. Another problem is memory. Modern memory interfaces are very complex, and attaching modern double data rate memory to an FPGA is difficult: it requires a specific PHY layer and an appropriate memory controller, and that's not easy. So currently we only have the choice of either using memory that's easier to attach but slower and more expensive, or, if somebody manages to reverse engineer the DDR PHYs in the bigger FPGAs, we might be able to use standard DDR memory, but that will take time.

We have FPGAs, but there are also projects for ASICs. I have already mentioned the lowRISC project, which is working on an RV64 community SoC. The current development version of the lowRISC SoC runs on an FPGA, and it actually runs Debian; the software development for that chip is done on Debian. Then, of course, if I want to mass-produce a chip, there's not only digital logic in it. The CPU itself is digital logic, but I also need voltage regulators, analog-to-digital converters, a random number generator, brownout detection to make sure the voltage doesn't drop too much. So there's a lot of analog technology that you also need to produce such a chip, and there's a spin-off of the Universidad Industrial de Santander in Colombia which is working on providing open-source-licensed analog components for such a CPU.

There's also another interesting project; several people will say it's just insane, but I nonetheless find it interesting. The Libre Silicon Alliance is a group of people from the chip design world who are trying to define a standard silicon manufacturing process. Currently you have the following problem: you want to produce a chip, you go to a chip fab and say, okay, here's my design, build that for me. You have to use the standard cell libraries from your fab, and you're not allowed to disclose what they look like, so if you want to build your chip at another fab, you can't just take your design there; you have to redesign your chip with their standard cell library. There is no portability between fabs. The Libre Silicon Alliance is working on defining a standardized process that is actually transferable from one fab to another. They've started very, very small: they have done their first prototype chips based on a one-micrometre node size, which is technology from the early eighties, so nothing you could build a modern CPU with, but it's a long-term project. They're estimating a timeframe of perhaps 10 or 15 years, so nothing for tomorrow, but interesting nonetheless.
And then there's Chips4Makers, a project to improve the software side of ASIC design, which also has a talk tomorrow upstairs. They are aiming to produce a community-funded small CPU, 16-bit/32-bit, 68k/Z80-type stuff, and they actually managed to get the design done, but in the first round the funding for the actual mass production of the chip didn't work out. They're trying another run, and we'll see how far they get. So that's my talk so far. Questions?

[Audience] In some cases Rust is the problem, and Rust uses LLVM as its backend, and because you don't have LLVM you cannot have Rust, you cannot have Firefox, and so on, and you can't have any other project that is Rust-based. What are you doing to handle the Rust-specific patches to LLVM? Because even if you have LLVM... [It still needs to be ported, yes.] ...you still need the Rust-specific patches to LLVM, because the Rust community has a set of patches which they didn't get upstreamed into LLVM, so even if you were in a situation where you had LLVM, you don't necessarily have Rust on that LLVM.

Okay, I'll try to repeat that, because the people on the stream probably haven't heard the comment. To get Rust, one does not only need to have LLVM working; there are also Rust-specific patches to LLVM which would also need to be ported and, ideally, upstreamed. The question was how we are handling those patches: currently not at all, we haven't actually looked into that yet. Further questions?

Sorry, I didn't understand that acoustically. Okay, the question was: Debian has rather recently bootstrapped arm64, and how much of that bootstrapping procedure have we been able to reuse for the Debian RISC-V bootstrap? Actually quite a bit. During the arm64 bootstrap, several packages gained build profiles to make dependency cycle resolution easier, and that has helped a lot. The RISC-V-specific things of course can't be taken from arm64, but the infrastructure has become a lot better, and people like Helmut Grohne, who wrote the rebootstrap tool I mentioned, are permanently working on making bootstrappability better by fixing cross-architecture bugs and adding more build profiles, so that has definitely helped.

I actually don't know; that has been handled by lawyers behind closed doors, and I have no information about it. Not within Debian. Okay, I'll just repeat that: how was the legal problem with the copyright assignment of the GCC code to the FSF resolved? I really cannot tell, because that was handled between the RISC-V Foundation and the University of California, Berkeley, and they haven't told the public in which specific way an agreement was reached, so I can't say anything about that.

I'll repeat that for the stream listeners. The observation was that the base package set appears to be growing compared to past ports: when 32-bit ARM was ported, the base package set was around 200-something packages, and now we have reached nearly 400; and which problems have been the major pain points in porting? The first major pain point was Perl, the lack of cross-bootstrappability of Perl, because all of the packaging infrastructure in Debian is based on Perl, so you need Perl for almost everything, and getting Perl ready was a real problem. The second one, in my view, was Meson, but, where's Manuel? Perhaps you can comment on that. Can you talk a bit louder? I just can't hear you.
Okay, I'll repeat that for the people listening on the stream. One of the pain points in the early bootstrap was that glibc for RISC-V was only available in experimental, because we had to use a newer version than all the other Debian architectures, and that caused some compatibility problems in the early bootstrap. Another problem was indeed coreutils, because coreutils uses help2man to generate man pages, and help2man does not work in a cross setting; that took quite a few workarounds. It should be resolved in a newer coreutils version than we had at the time, but when we did the bootstrap it was indeed a problem.

There is of course another problem when one bootstraps an architecture in emulation, in QEMU, and has to handle a bug: is the bug in the code I'm trying to debug, or is the bug in the emulation I'm using to run the code? We had some cases where the code was actually fine and the bug was in the emulator, which of course cost time and a lot of debugging. Having real hardware is of course better, but when you start such a port you often only have emulators; I suppose the arm64 port had the same problem, they also started on an architecture emulator. That's something that's nearly unavoidable if you do an early bootstrap. It's easier if you wait with the bootstrap until the architecture is already in broad use, but of course you want to provide software early in the life of a new architecture to make the architecture actually usable. Further questions?

We have tried to get the cross-compilation patches upstream. In some cases that worked and they were accepted; in some cases they weren't, because some people consider cross-build support a maintenance burden, because you have to think of things you don't have to think about when building natively, and some upstream authors don't see bootstrapping a new architecture as a relevant problem for their software. That's unfortunate, but that's reality. Yes, we carry the patches in the Debian packages, but we would have preferred to have them upstream; they just weren't accepted, because the upstream developers consider X11 to be in maintenance mode and don't want feature additions, even if it's just a new architecture. Any more questions?

We have a CI system running rebootstrap, Helmut Grohne's bootstrap tool, which works with cross-building, and we have Jenkins jobs doing that, so if you maintain packages that are part of the base system, it's helpful to look at the Jenkins results and see whether your stuff fails. Debian has porter boxes for multiple architectures, on which you as a package maintainer can log in for each architecture Debian supports and try to build your stuff, and of course you can cross-build: we ship cross compilers for all architectures that we support in Debian, and the Debian packaging tools actually support cross-building packages, so you can just say "sbuild --host riscv64" and it runs a cross compiler and tries to cross-build your package. So as a package maintainer it's rather easy to check whether your stuff is cross-buildable; upstream is more of a problem. We had some discussions yesterday evening about ways to provide RISC-V machine instances to upstream developers, to enable them to test-build their code and debug issues on native hardware; that's being worked on. I'll just repeat that for the stream.
The comment was that we should try to avoid bootstrapping RISC-V 32-bit, because the other 32-bit architectures in Debian are already struggling with keeping up and with building large packages; one known problem is building something like a debug build of Firefox, which is indeed a problem on 32-bit architectures. Yes.

[Audience] An RV64 CPU, even on an ECP5, is going to take most of your FPGA, whereas RV32 is something that can fit very nicely on an FPGA.

Okay, I'll repeat that for the listeners on the stream. The counterpoint to that argument is that when one is targeting CPU implementations on FPGAs, 64-bit architectures are a problem because they use a lot more FPGA resources than 32-bit architectures, way more than double. Getting a 32-bit Linux-capable RISC-V CPU onto an FPGA is quite doable, even on smaller (not tiny, but smaller) FPGAs, but a 64-bit CPU is a problem in terms of resource usage. Any more questions? Okay, thanks.