So, I'm Martin from Germany, and this talk is about Spunky. I've been working at Genode Labs in Dresden, Germany, for ten years, and I was mainly involved there in the Genode framework. Does anyone here know about the Genode framework? Yeah, someone? Okay, it's an operating system framework for creating operating systems. I was involved in creating the first in-house kernel for Genode, and in drivers and hardware support, especially for ARM. After that I switched a little bit to testing infrastructure and built an automated test system. And then I finally ended up with Ada and SPARK last year, and to be honest I'm a total newbie to this. I visited the Ada devroom for the first time last year, and I was very excited about experimenting with it. Then I thought about how I can leverage this, how I can use some Ada in my own system, my working system on Genode, and I thought: why not write a kernel with it, because it's the most fundamental task, the thing I want to trust the most.

First, a little bit about Genode in general, because not everyone knows about it. I said that Genode is an operating system framework; it's not actually an operating system. This framework consists of three parts. First, you see at the bottom of the picture that Genode runs on several kernels, mostly microkernels. There are a lot of third-party kernels like NOVA, seL4, Fiasco.OC or Linux, and there is also the in-house kernel I mentioned, which has the beautiful name base-hw. On top of that we have the first component of Genode: core. What core does is translate from all these different kernel interfaces to the generic Genode API abstraction, so that every component running on top of it is kernel-agnostic; normally you don't even have to recompile it to use it on a different kernel once you decide to switch. On top of that you have a vast collection of components that you can put together like Lego bricks to create your own individual system. This individual system can scale from embedded systems up to desktop systems; actually, in our office we all use our own desktop system built on Genode for working. You can see there are a lot of drivers for the most common hardware on x86 and on ARM. Then you have the native resource multiplexers, like a router for networking, an audio mixer and so on. Then there are system services, for instance init. This is a very special one because it is used very broadly in Genode: it can dynamically load subsystems, so you can start other applications with it, control and manage them, and also the services and resources between them. On top of that you have several applications. The green bricks here are native applications that run natively on Genode, for instance depot_query and depot_deploy, which are tools for the package management that Genode brings, or the GUI manager, which is a managing component for our desktop system. Then there are also a lot of ported programs, because Genode also provides a libc abstraction and POSIX for porting components; you can see we have GDB, VirtualBox, GCC and all the stuff you normally need to work independently of other systems. And then there are a lot of libraries, which I have mixed in among the different bricks. What I wanted to show with these bricks is that all the components are very well separated from each other. The system applies a very strict organizational structure so that the complexity of the whole system can be managed.
Normally, these components don't know about each other. They only see services and they know how to use them, but they don't know where they come from. So normally you only have client-server relations and parent-child relations between them, and the components themselves don't know where the other side comes from.

I wanted to start with the base-hw kernel because it's an in-house kernel: I know a lot about its internals, and it's very easy to modify, because the base-hw kernel is not really a self-standing kernel. It's a library that is linked against the core of Genode, and it's less than 10,000 lines of code, a lot less if you specialize on one architecture. The modules that the base-hw kernel is made of are also pretty well separated from each other, so you can pick them out and translate them, for instance, or modify them without having to change all the interfaces. One special thing about the base-hw kernel is, as I said, that it's linked against core. So it runs in the same address space, but it is separated from the core functionality through a syscall interface. The syscall interface is very small, because we wanted to keep things very minimalistic in the base-hw kernel: it has about 25 public syscalls for user components and 20 private syscalls that only core can call, so those are for internal operation only. Another nice thing about it is that it's single-threaded. Our approach was: if you have SMP or something like that, it's enough to have one thread per CPU and to simply lock the whole kernel while one CPU is in the kernel, because kernel passes are very, very fast. You get into it, you get out of it, done. You have nothing to block on in there, and normally you don't do a lot in there, so it should be fine with a big kernel lock. This makes the whole kernel just one big state machine that is really easy to control, and especially really easy to translate for a newbie to Ada and SPARK like me. So I thought it's the perfect goal to go for.

Now I have to step back a little. We had an Ada and SPARK project last year that was the main reason why I started to learn this language and get into it: a block encrypter. Unfortunately, block encryption is a little bit complex, so it's not ready yet; I'd like to show it maybe next year, I don't know. But what we had during the design phase of this block encrypter was an in-depth discussion about how to find a good design for making C++ and Ada and SPARK work together, because the native Genode environment is in C++ and we don't want to change that in the first step, of course. We came up with this design: for every SPARK package that shall interact with C++, or that shall be called from C++, there is a C++ class that has only the public interface declarations, so you can do the public calls on it, and it is padded to the size of a record in SPARK; this record is the object that contains the object layout. It's like a placeholder type, if you want. That way you can leave all the allocation out of SPARK and Ada: you allocate your C++ class in the C++ world and call your functions on it, and what they do is go through some kind of intermediate package that is prefixed with CXX, for instance. This package does all the conversion of the arguments, which are still in C++ without any restrictions: they are all translated to a SPARK-compliant form to fulfill the type safety, and then the same function is called on the real SPARK package. What this gives you is that in your C++ world this class is interchangeable: you can have a C++ implementation or a SPARK implementation behind it, it doesn't matter for the C++ code, it doesn't even see the difference. And on the other side it's the same: in the SPARK package I can do all my cool stuff with contracts and so on without being bothered by the fact that it's actually allocated and called by the C++ side.
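To make this a bit more tangible, here is a minimal sketch of what such a binding could look like; all names, types and the exported symbol are made up for illustration and this is not the actual code:

    --  Hypothetical sketch of the binding scheme described above. On the
    --  C++ side there would be a class that declares only the public member
    --  functions and is padded to the size of the Ada record below; objects
    --  are allocated and owned by the C++ world.

    package Scheduler is

       type Context is limited record
          Prio  : Natural := 0;
          Quota : Natural := 0;
       end record;

       procedure Set_Priority (Obj : in out Context; Prio : Natural)
         with Pre => Prio <= 255;  --  contracts stay on the SPARK side

    end Scheduler;

    --  Intermediate package, exported to C++: it converts the unrestricted
    --  C++ arguments into SPARK-compliant types and forwards the call to
    --  the real package.

    with Interfaces.C;

    package Scheduler.CXX is

       procedure Set_Priority
         (Obj : in out Context; Prio : Interfaces.C.unsigned)
         with Export, Convention => C, External_Name => "_scheduler_set_priority";

    end Scheduler.CXX;

    package body Scheduler.CXX is

       procedure Set_Priority
         (Obj : in out Context; Prio : Interfaces.C.unsigned) is
       begin
          --  narrow the raw argument to the range the contract expects
          Scheduler.Set_Priority (Obj, Natural (Prio mod 256));
       end Set_Priority;

    end Scheduler.CXX;

In practice the exported names would have to match whatever symbols the C++ side calls; the point is only that the C++ code cannot tell whether a C++ or an Ada implementation sits behind the class.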
I wanted to use this approach for base-hw as well, because I wanted to take the base-hw design but not re-implement it all in one step; otherwise I wouldn't have any intermediate steps where I can test it and see whether my approach is really successful. This lets me pick out single modules of the base-hw kernel and replace them with other implementations. Actually it's not SPARK yet; I will come to that later.

So how far did I get? First of all, I had to move the Ada runtime into the kernel. This was a bit of work, but fortunately a lot of preparation had already been done by the guys from Componolit, by Johannes Kliemann and Alexander Senier, and they helped us a lot with this. They always had an open ear if we had any problems with the Ada runtime on Genode, and there were some quite difficult problems that I couldn't have handled with my level of knowledge. Once this was done, I started with the data structures on the right side. That's pretty simple: it's essentially only a list and a queue, a doubly linked list and a queue in base-hw. But there I had one reason why I couldn't go for SPARK directly. I wanted to, but the problem is that base-hw depends on pointers and uses them a lot, and it's a little bit hard to get rid of them in the first step. So I decided to use access types, and unfortunately we don't have borrowing support yet in the Ada runtime on Genode. So it's fine with Ada, but we can't go to SPARK directly. The second step was the signalling and the RPC. My experience with this was: once I had translated the modules, I had to deal with a huge amount of compiler errors, which almost made me crazy, like hours of compiler errors. But the cool thing is that once the modules were implemented, they worked out of the box. And I could not only run some little single tests or something like that; I could directly put a huge system on it, run it, and it worked. That's really cool, and it's something you're not very accustomed to if you're developing in C or C++, I guess. And the third thing, which I started this winter, was the scheduler. It was essentially the same. The scheduler is pretty complex compared to the other modules in the base-hw kernel, because base-hw is the only kernel that implements the quota-based CPU scheduling that Genode has, so you can trade CPU time between components. That makes it a little bit complex, but anyway, it worked out of the box, and that's really cool. And in between, as you may have figured out, all the other modules that are not yet implemented in Ada I take directly from base-hw, and they work together with the Ada modules without any modification.
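Coming back to the list and queue for a moment: to give a rough idea of why access types come into play here, a sketch of such a doubly linked list in plain Ada might look like the following; the names are made up and this is not the actual Spunky code:

    package Double_List is

       type Item;
       type Item_Ptr is access all Item;

       --  the items point at each other, which is why plain access types
       --  are used; moving this to SPARK would need the borrowing support
       --  that the Ada runtime on Genode does not provide yet
       type Item is limited record
          Next : Item_Ptr := null;
          Prev : Item_Ptr := null;
       end record;

       type List is limited record
          Head : Item_Ptr := null;
          Tail : Item_Ptr := null;
       end record;

       procedure Enqueue (Lst : in out List; Itm : Item_Ptr);

    end Double_List;

    package body Double_List is

       procedure Enqueue (Lst : in out List; Itm : Item_Ptr) is
       begin
          --  append the item at the tail of the list
          Itm.Prev := Lst.Tail;
          Itm.Next := null;
          if Lst.Tail /= null then
             Lst.Tail.Next := Itm;
          else
             Lst.Head := Itm;
          end if;
          Lst.Tail := Itm;
       end Enqueue;

    end Double_List;

With SPARK's ownership and borrowing rules such pointer chasing could eventually be checked by the prover, but as mentioned, that support is not available in the Genode Ada runtime yet.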
Okay, so what are the plans? This year I definitely want to put all of Spunky into Ada; that's the first thing. I want to get rid of the residual base-hw code and all the C++ stuff. Then we want to publish the block encrypter; maybe next year I can show it here, hopefully. And until the end of 2020 I want to do a second step. Once I have gotten rid of all the base-hw stuff in Spunky, I'm free to redesign it, so essentially I can try some of the strategies I have for getting rid of pointers. That's the main thing, I think. And then convert all of Spunky to SPARK. And then, of course, I'd really, really like to prove some basic assumptions about the code. And in 2021 I'm looking for new territory. I'm thinking about porting the base library of Genode, so to say the native Genode environment, so that you can have pure Ada components on Genode. This would also allow me to do another project, namely converting core, the whole core component of Genode, to Ada. That would be really cool, because the core component is, like the kernel itself, something that everything on Genode depends on. And it would be a nice experiment for me to learn new stuff with Ada and SPARK.

So yeah, that's it from my side. You can try it out if you want. Actually, I wanted to show this and forgot: the system you're watching, the one I'm running on right now, is Spunky itself. And you can see that it's already pretty productive, I would say. You can see here, okay, I want to start a window manager, for instance. Then I have to grant it the rights to access the services it needs, because the system doesn't decide this on its own. I give it the GUI and pointer-shape services and some clipboard stuff, I add the component, and it starts here. And now that I have a window manager, I could start, for instance, the top view that was programmed by Alexander Böttcher, who also works in our office. I give it the WM that I just started and the fonts file system. I forgot to start the fonts file system, sorry. Where is the system font configuration? I start this font server and then hand it to the top view. So I can try it again, whoop. And where is it? Okay, so it should start, and we can go out of the control panel and see something here, whoop. And now we could start this, and actually, whoop, this is a bit fast. So you see, it's working pretty snappily, and it's already running all the communication, the RPC and the signalling and the scheduling, with Ada in it. That's really cool. I like it. So thank you for listening, and try these things out.

Okay, any questions?

You said that you have converted some parts from C++ to Ada, and then you said that you were a little bit happy because it worked out of the box. In your opinion and experience, what in the language helped to reach this, or what are the things that helped to achieve this?

I think that's only my view. The question was what helped so that my implementation worked out of the box with Ada and SPARK. What do I think helped a lot? At least in my case, it's that the language disciplined me to find a clear solution to the problem, and not to try to find some shortcuts when I get into trouble, because the language doesn't allow for this. It says: no, stop, this is the wrong way, and then I have to turn back, go again and find another design or something like that, and that helped me a lot. The other thing is that I felt I got a certain way of programming while doing this for a longer time, another kind of thinking about problems, and that's really cool.
It encourages you to break things down into simple solutions and not to just reach for tooling that is very complex and fancy. I think that's it.

Excuse me? With Ada? I started at the beginning of last year. I mean, I had a lot of support: I had some books, and I had Johannes Kliemann and Alexander Senier, who already know a lot about SPARK and Ada, and you may have seen the presentation by Johannes Kliemann. So it's not the highly abstract stuff, let's say, just the simple stuff, but yeah. That's it.

These are actually not my questions; they come from Reddit. Okay, that's nice. Some of them might have been answered already. The first question was: if the effort goes well, would you consider having the SPARK/Ada version replace the original C++ one, and if not, what would it take to convince you enough to make such a move?

What was the beginning? Replace what? If the effort goes well, would you consider having the SPARK/Ada version replace the original C++ kernel? Ah, yeah, okay. No, I don't think that this will happen, because the first thing is that we have always benefited from having multiple kernels underneath Genode. What you normally get from that is a very broad testing basis: one kernel behaves one way, another the other way, especially the scheduling, for instance, and the timing, it's all different. So for us this has a lot of benefits regarding development and testing, and I don't see the point of removing the base-hw kernel. It's a nice approach and has its benefits. And the other thing is that when I start to redesign the Spunky kernel, I think it will end up with an approach that is quite different from the base-hw approach, because all the memory layout has to change; I have to rethink this. Therefore I don't think that this is a replacement for base-hw; it's just another kernel that emerges.

Then there was a question: what hardware are you using? Excuse me? What hardware are you using? It's x86 with 64 bit only. I specialized on this platform for the first step, of course; unfortunately, I don't have the time to do it all at once. But I'm planning to go to ARM as well if this is possible, and the base-hw kernel already supports several platforms, like ARM, x86 and RISC-V and so on. We'll see with time. But for me the most important thing is x86, because I'm using it on my laptop.

So in your particular case, with the Spunky kernel, would it basically be a question of the level of Ada runtime support? Sorry? So from your side, for the Spunky kernel, would it basically be a question of the support of the Ada runtime? Yeah, yeah. The question was why I specialize on a certain architecture, and whether from my view it was only a decision because of the support of the Ada runtime. I mean, I haven't thought about it a lot. The thing is that the project we had is developed for x86 now, and so it was natural for me to keep this, also because it has the biggest benefit for me. Okay, next question. Yeah, sorry, are there any other questions here in the room? Sorry, yeah, I intermixed them. Okay.

Yeah, there might be, maybe you've missed it, I don't know: there is borrowing support for proving contracts with SPARK, but the problem is that this borrowing is not yet supported in the Ada runtime of Genode. So this would be a thing, yeah, I don't know; it's a little bit more work.
This is another thing I want to go into, which I missed in the plans: broadening the support of the Ada runtime on Genode, because there are several things still missing. Okay, yeah.

So, what were some of the significant challenges when converting to Ada? Okay, the question is what the significant challenges were when converting to Ada. I must say there weren't a lot, because I restricted myself a lot in Ada, since I wanted to make it fit well to SPARK later. For instance, I wanted to have pure packages only, and I didn't want to use inheritance yet, because I don't know a lot about inheritance yet. So I changed the base-hw design a little bit, for instance, to not use inheritance that heavily, and did this kind of preparation work; for instance, I also didn't want any procedures or functions without parameters, and for that I had to change the C++ code a little bit. But I think the main thing was a little detail where the devil was hiding: generics didn't work out of the box because we had no finalization. Not generics in general, but generics with, how is it called, incomplete types. If you have an incomplete type and want to put it into a generic, you need finalization by default, and this isn't supported yet. So Johannes helped me with this a lot again; after some research he knew that there was a pragma for no finalization. Thanks. Johannes, you had to... I just blanked it out at first, actually. Okay. No problem.

Is the conversion to Ada helping to identify issues that weren't apparent before? Not real issues, I must admit, because we already had productive systems running on base-hw before and it was tested very well. But what I definitely found were some, I don't know, gray areas, where we were lucky that we didn't have problems with the C++ code before, because it was programmed a little bit dirtily. When I started to fix all the compile errors I had with Ada and SPARK, I had to clean up these gray areas. But there were no real issues with base-hw.

Last question, not about the normal Ada code but, from what I understood, about the glue code between C++ and SPARK: are you using any static analyzers, and if so, which one? For the glue code between them? No, I didn't analyze it. You mean with a proof or something like that? Yeah, well, I just combined a couple of questions there already. This will be a future task. I want to get rid of the glue code and the C++ code anyway, so I didn't see a point in analyzing it. Thanks. Any other questions? Okay, thanks.