[Transcription unintelligible; the opening fragments introduce the project's main goals and its microkernel multiserver design.] Let's dig a little bit deeper. So, open source means that our own code, the bulk of the code base, is under the BSD license, the new BSD license, and we also use some third-party components, which we don't link to directly, but which we use indirectly, and which are under the GPL. General purpose means that we don't want to be biased towards any particular deployment, whether desktops, servers or embedded systems. Multiplatform. [Transcription unintelligible.] You may notice that we actually list 8 architectures when there are 7; that's because we created one ourselves, a 32-bit SPARC (V8) variant. [Transcription unintelligible.]
[Transcription unintelligible; the fragments discuss the decomposition of the system into isolated components and mention a figure of 10 to 15 %.] One note, for example: the networking is also componentized. If you know the networking reference model, the physical layer, link layer, transport layer and so on, that is exactly how our networking stack is structured physically: in the networking stack there is a separate component, a separate task running in isolation from the other tasks, for each of the networking layers. Just to give you some rough idea: in total, and this is really not just the kernel but the size of the whole code base, this is a rough estimate of the work effort. Of course it is totally wrong, because the basic COCOMO model is not really suited for systems software, but it can give you some idea. And apart from, let's say, 6 to 8 core developers, this has been developed in the course of some 24 master theses, 4 bachelor theses, 11 Google Summer of Code projects and so on. So yeah, I should really mention our contributors here, because we are eternally thankful to them.
OK, now you might think these are some crazy people who are just trying to reinvent the wheel, and that this doesn't deserve any real attention. Let me explain it from a different perspective. The legacy operating systems, of course, work quite well, I would say, but their designs and APIs are in many cases broken and insecure; at the very least they are morally obsolete. That doesn't necessarily mean that everything is wrong with Linux, or that there is no hope of taking inspiration from principles or implementations that have been around for many years; I spoke about this topic two years back. But many things can be designed in a better way with current software engineering standards and principles, by thinking out of the box, by not implementing yet another Unix, by not trying to reinvent Linux or create a new Linux. And also, if I can slightly separate myself from my respected colleagues from other microkernel systems, we also think it's important that we write the bulk of our system's core components ourselves, that we don't import drivers from Linux using some device driver kits or rump kernels or whatever, because we are really afraid of franken-components. We don't want them. We want our components to suit our architecture, not to somehow glue them together using adaptation layers. If you don't get my idea, I'll show you these three pictures and let them sink in for a few seconds. So: we don't want this. We want a nicely designed and engineered operating system. OK, let's move on. There's also another issue with borrowing components from other systems, which we would like to avoid, and that's the maintenance burden of these forked components. Once you fork something external, you basically create what Jakub Jermář calls a software fossil: a snapshot of the code which the upstream is no longer required to maintain or to update.
You are required to do it, and this creates the possibility of you forgetting, or not managing, to backport security fixes and new features; sometimes you have to deal with diverging licenses and all that kind of stuff. So the HelenOS mainline, the microkernel and the core components, is always fresh. They always work in sync, and they don't age, while the HelenOS coastline repository, which contains the ported software I have spoken about, tends to age. For us this really shows the difference between creating the system in our own image and porting franken-components from other systems. Because the franken-components don't follow our basic development principle: design something, then implement it, then run some verification tests on it, and then reiterate, learning from our own mistakes. Let me do a short intermezzo here. We would really like to cross-pollinate; I think we try every time we can. So last year, Jakub Jermář created this website, microkernel.info, and the idea behind it is really to join forces with our fellow microkernel friends from other projects. It was inspired, by the way, by the website unikernel.org from the unikernel community, and the purpose is the same. It basically lists, in alphabetical order, the currently alive and progressing microkernel operating systems. Everybody can be included; there are no hard criteria for being accepted. There's just a brief overview of each of the systems and a link. Of course, the unikernel guys have already gone a little bit further, so their unikernel.org website also contains some tutorials and some other helpful materials. We can have that too; we can also move in that direction, no problem with that. Nobody has just done it yet. So again, input from everyone here is more than welcome. Really, the purpose is to create some quasi-neutral ground for our own microkernel advocacy.
Of course, it will never be totally neutral, because everybody is somehow involved in some of these systems more or less, but we can at least try. And if I can even say so, we also have a common enemy, right? So we can join forces. Well, that was really more of a joke, but I would say that we are more friends with the unikernel guys than we are with the monolithic kernel guys; let's put it that way. So again, please, any contributions are welcome. And the other suggestion for cross-pollination, which we have already discussed on the mailing list of this devroom: we would really like to apply for this year's Google Summer of Code as an umbrella organization. I mean, HelenOS has some experience with Google Summer of Code. We have been accepted three times, and we were quite successful in GSoC; we weren't accepted five times. But still, Google Summer of Code is, at least as we see it, a very valuable thing for a niche software project, because it widens the awareness of the project more than anything else, I would say. As for the ratio between acceptance and non-acceptance: we don't know why we were accepted in some years and weren't accepted in others. Obviously our application was consistent; we didn't just screw up in the other years. Everybody knows that Google does not share its internal criteria for accepting projects into GSoC, so we don't know what the problem is for them. But we think, or at least we have a hypothesis, that one issue might simply be an abundance: there are too many operating system projects in general applying for GSoC. I mean, there is Linux, FreeBSD, NetBSD, OpenBSD, illumos, you name it, and probably they don't quite distinguish between the monolithic systems and the microkernel systems. So maybe, and I really believe that we should at least try even if we don't succeed, an umbrella organization for all microkernel systems might help in this regard.
So we have already had some, I would say, positive feedback from some other guys. If somebody wants to add something to this, let's discuss it here or during the dinner; just think about some of the technicalities. The deadline of the application is approaching quickly. We can manage it, but we really have to agree on this sooner rather than later. So the first question is: who should fill in the application? Are there any volunteers? We are volunteering. We have some experience, so we can give it a try. Obviously we won't just copy-paste our own application; we will try to give it some twist. Again, any input is more than welcome. Where should the ideas page for this organization be hosted? I really think that microkernel.info is the perfect place to host it. Again, please do send us your topics, your ideas for GSoC, if you want to be included. The more topics, the better; that's probably the key. If you need some inspiration, I can send you links to our previous ideas pages, even from those years when we were actually accepted, so you can see a working example. And the last question I would really like to ask here to my colleagues: should we discourage individual applications from the individual projects? What's your opinion on that? Anybody who wants to join this, is it okay or not? [Audience:] We haven't planned to. I think we have tried once or twice, but they rejected it for reasons that I don't know; nobody knows. So we did not consider applying another time. I would like to know your experience: you say that it's very valuable for awareness, so I would like to know how sustainable the effect is. Obviously Google is paying money for people working on specific topics, but that's just a one-shot of money. The question is: does it suffice to ignite the interest of those people, so that they stick to the project, or do they just do this kind of three months of work and then disappear magically?
I would say the investment in mentoring is not so efficient, because it does not have a sustainable effect. [Speaker:] So that's your experience. In my experience, there are two layers to your question. One layer is whether it is worth the short-term investment. I would say still yes, because, at least in our case, the usual GSoC project creates roughly the same amount of code as a master thesis. But a master thesis can take a year or two to be finished, and sometimes it never gets finished. In GSoC, the same amount of work gets done by the student in three months. So if it succeeds, it's quicker; if it fails, it's also quicker. From that point of view, I would say it's worth the investment, and you probably also supervise some master theses from time to time, so you know what I'm talking about. And about the long term, whether it really ignites some students to stay with the community: I would say the return ratio is not very high. In our case, it's something like 10% to 20% of the students. But it's still more than zero, meaning that these students, these people, would not know about us without participating in GSoC. So it's still a net benefit, even if it's just marginal. So, if this should fly, all that is needed is to come up with, I don't know, four interesting project ideas from each of the participating projects under this umbrella, which is probably not so hard; you probably have some ideas flying around for other purposes anyway. So just putting them into some shape that is acceptable might take, I don't know, two or three hours. We will see. Of course, I cannot guarantee anything, but I really think we should try. And if I may answer my own question, I think we should not discourage the individual projects from still applying individually, because we don't know what Google's criteria are. They might see it as a problem that somebody is trying to have more stakes in the ballot, or maybe not. We will see.
We can try year after year with different settings and hope, but we are fighting an oracle which changes every time. I mean, we can just try. So if somebody is in, please email me or email Jakub, and we will make it happen somehow. And everybody is welcome; don't get me wrong. [Audience:] I like this idea, because you mentioned cross-pollination. And so far, I think, after all these years, this really hasn't happened so much between our projects. We had once this kind of project [unintelligible], or at least attempted it. So I like this as a chance to bridge between the projects in some way. [Speaker:] This is not even necessary. I mean, the umbrella organization is really something that covers multiple individual projects. GNU is doing that: GNU has participated in GSoC in many years as an umbrella organization for various GNU-related or even GNU-unrelated projects. So it is just a single organization that supports them. [Audience:] I'm with the Apache Software Foundation, and I really agree with your point. What happens a lot of the time is that Google Summer of Code looks for formal organizations. I don't know why; maybe it's because they like how the money is managed, or it guarantees that it won't be some random people, or whatever. But if you have a nonprofit in the United States, it's actually much easier to get your projects accepted. On top of that, in the ASF the trick is that the ASF always gets accepted, because we are one of the well-known open source organizations, and then you can basically have as many projects within the ASF umbrella as you want. So it's actually very beneficial. [Speaker:] That's my point exactly. So perhaps we should move this discussion to the offline time, but I just wanted to show this here. And again, everybody's welcome. So now let's move to the real status update of HelenOS.
We are talking about the year of the fire monkey, which somehow coincides with the period from last FOSDEM to this year's FOSDEM, so that works nicely. Some general observations: there is slightly less activity on our side than in previous years, mostly because the core contributors are somewhat distracted now, me included, so we don't have so much time to contribute to HelenOS. Also, there are many fewer students: we didn't participate in Google Summer of Code, we didn't participate in the ESA Summer of Code in Space, and there were only two master theses running, compared to, say, six or eight in the previous years. And I would say we are somehow plateauing; really, this is the key message here. Of course, I cannot claim that HelenOS is perfect and finished. That will never happen, but it is finished in the sense that we don't lack any major subsystem now. We have networking, we have sound, we have USB, you name it. There is no big chunk of code totally missing, so we are switching more into a phase of optimizations, refactoring and so on, and that's probably not so attractive to many people. That's my explanation for why the output is smaller. This can also be illustrated by our traditional HelenOS camp, which is our hackathon, held since 2005. Not as long as OpenBSD's hackathon, but still a long time. This year only some three developers participated. There were some overlaps, but on average three, so this is quite poor, but we managed to implement something. So, what we are currently working on, or what has been done recently: I was hoping to make a release before FOSDEM, but then Jakub fell ill and I had to write these slides and so on, so I didn't manage. But I might manage tomorrow, so it might be a FOSDEM release. We have made some improvements to our SPARC64 support for Sun4U hardware. This has been done jointly with parallel developments in QEMU, so we are very...
It's hard to quantify, but we are somehow close to really being able to run HelenOS on Sun4U, in QEMU as well as on physical hardware. This has been linked with somewhat generalizing our user space serial console code: previously there were different ways to use the user space serial console on different platforms, but this has been unified. We have a simple installer, which is non-interactive, maybe a little bit dangerous. It tries not to overwrite anything on your local hard drive, but I would still not run it without consideration. It's basically a first step towards really deploying HelenOS as a permanent operating system. It currently targets QEMU and works on these two platforms. And it includes a reproducible build of GRUB. We are pushing this idea that our builds should be reproducible: you should be able to run a single command on any platform, on any guest system, and this single command should create the same binaries, the same composition of components and so on. And of course GRUB is one of the components that we don't link to, but we use it as a bootloader, so we build it too. Finally, we have USB 2.0 support, which has been in the making for three or four years, because one of the original authors of our USB stack decided that he would like to rewrite it completely, and he more or less did, with some input from us. So it not only enables USB 2.0, but it also opens a much easier path to USB 3.0 and so on. We have optimized our processing of hardware interrupts in user space drivers. Originally there could be a large overhead if the device was generating too many interrupts in a single unit of time, because each of the interrupts was translated into an IPC message, and for each of these IPC messages a new worker fibril was created and then disposed of, which created the overhead. Now it works in a much more optimized way.
Basically it coalesces the calls, so that the worker fibrils do not have to be created and destroyed every time. There is again much more to do here; some fibril pooling might help even in different workloads. We have improved our dynamic linking, especially with respect to thread-local storage handling, and we finally have dynamic linking enabled by default on IA-32. So if you compile HelenOS for x86, dynamically linked binaries will be created by default now. Another small improvement, which is somewhat related to thread-local storage: for a very long time, from the early beginnings of HelenOS, we had this ugly special syscall for setting the TLS base address on x86 and on AMD64. Basically, this sets the FS or GS register, the selector, which is then used as the pointer to the base address of the thread-local storage, and this cannot be done, or was not possible to do, from user space, so there had to be some way to deal with it. And we really didn't like it, of course. It's an ugly thing to have this stupid syscall just for the sake of one architecture. So our current solution is to have a different configuration of the thread-local storage that can live with a static configuration of the GS and FS segments: only a pointer in the thread-local storage structure needs to be changed, and that is then used as the base address for the thread-local area.
There is an even better solution. We have discussed with Jakub Jermář how to do it in an even more elegant way: there are these two new instructions, available since Ivy Bridge, which can be used to set these two registers, or rather not the registers themselves, but the base addresses of the segments, from user space. Of course, on older CPUs this does not work, so how to do it in a generic way? Well, it can be emulated: if the kernel receives an invalid-instruction exception and it's this particular instruction, it can just be emulated. In theory it works nicely and it's elegant, but the performance is worse than our current implementation, so we are still thinking about what we should do with that. And of course we have been doing, as I already said, a lot of code refactoring. We have fixed many bugs that have been discovered by verification tools; we are still on the bleeding edge of the GCC toolchain, and we have helped to discover a regression there. On to RISC-V. I know I promised that this should be ready in 36 hours last year, and frankly, I really didn't spend 36 hours on it, so it's still not there. There is some skeleton code in the mainline, so at least you can try whether the RISC-V compiler still compiles the sources, but it boots and does not do anything past the initial kernel initialization. If I may offer one small excuse for that: there have been some changes in the privileged specification of RISC-V during the year, and there have also been changes in the Spike simulator, which is something like a reference implementation of the RISC-V platform, and some of these changes were not explicitly tracked and documented. So it was really a bad surprise that you compile the code, run it, and then you have to debug it for an hour to really understand what went wrong, why it's not working as it used to work a week ago. Two final items I would like to mention in a little more detail: the service manager and user space
pagers. So we have, where is he? there he is, Michal Koutný, my former master student, who implemented the service manager for HelenOS as his master thesis. I have already mentioned the term microservices, and naturally a microkernel multiserver operating system is composed of services, but this is an overloaded term. We are talking about the system services now, like the microkernel servers, and there are also logical services. If you run an HTTP server or, I don't know, an NFS server on your machine, that's a different kind of service, not necessarily tied to the architecture of the operating system. This service manager for HelenOS tries to combine these two kinds of entities together, to have a unified view of the microkernel services and the logical services. It is remotely inspired by systemd (please don't kill me). Let's move on. Basically there are two new components. One does the dependency resolution: it starts and stops the logical services according to some dependencies. These dependencies can be explicit, specified in a configuration file in some declarative way, similar to what you might know from other system management frameworks. And of course there are the implicit dependencies that follow from the architecture of the composition of the microkernel services; these come more or less for free, because this used to work without the new component previously, and now it's just integrated into one package. There are some unit types, individual service instances, which can be a service (meaning a process), a mount point, a configuration, or a target. And then there is the taskman service, which monitors the life cycle of the services and provides a monitoring API. It manages the logical relationships between the physical processes, because there is no such thing in HelenOS as a parent and child process, so this has to be created anew, and this service can
later be used, for example, for restarting a service if it fails, or something like that. So there is the dependency part and the life cycle part. And the final implementation piece that has been done is the user space pagers. This is one thing in which the HelenOS architecture differs from, I would say, most of the microkernel family of operating systems, which really focus on removing all memory management from the kernel except the privileged parts, like accessing the page tables and managing the TLB. In HelenOS we decided to take a different approach, to have a slightly more traditional architecture of the memory management, with a kernel frame allocator, a kernel heap allocator (which is a slab allocator, by the way) and a virtual address space manager. It's still mostly mechanisms; there are almost no policies in the kernel, except maybe some basic algorithms like first fit and so on. Our motivation for implementing memory management in this way was that we believe it's still a single point of failure: even if we pushed it out into user space, the kernel would not survive, even for its own purposes, if the memory management server failed in user space. In that case we don't see a major point in not having this in the kernel itself. Originally there were three address space area backends: one for mapping physical memory for devices (because the kernel also needs access to the hardware, at least for the timer), one for anonymous memory (which is basically used to give memory to user space tasks), and one for mapping ELF binaries (because during bootstrap the kernel needs to be able to access that structure in some structured, intelligent way). And now there is another backend, which can forward page faults to user space. The current implementation is really quite straightforward: there is a task that needs to access some memory which is provided by some other user space task, and it can be anything.
It can be, for example, the virtual file system providing memory-mapped files. Each of these tasks has its virtual memory map in the kernel. First, a new address space area is created by the client; it's a special kind of virtual memory area backed by the user pager. This creates a dedicated IPC connection to the server, and the server creates its internal representation of this client, of this request. Then, when the client accesses, touches, the memory within that area that needs to be served by the pager, it naturally creates a page fault, and this page fault is forwarded to the pager as an IPC message while the client is blocked. Now the pager asks the kernel to give the client some part of its address space: it answers the IPC call with a return value saying that it's okay to provide the memory to the client, and supplies the virtual address of the piece of its memory that it wants to provide. This virtual address is then translated to the physical address by the kernel, this physical address is mapped into the client, and the client is woken up. That's it: this page is now an alias of that page. So it's a really quite straightforward implementation. We have some additional ideas for making it more generic in the future, but just for the purpose of implementing memory-mapped files, this works pretty nicely. So, any plans for this year, the year of the fire rooster? We have some research papers in the pipeline; we would like to finish them and submit them. I would really like to finish the RISC-V port. We have an ongoing cooperation with CZ.NIC, the Czech national domain registry. They have this nice project called Turris Omnia; you might have heard about it at last FOSDEM and this FOSDEM. It's basically a very well designed home or small office router, and we would really like to fine-tune HelenOS on this particular piece of hardware and then start some interesting collaboration with them. We would like to
switch to a new release cycle, just because we are no longer in a phase where we are waiting for this major subsystem or that major subsystem to be written. It's probably easier to take inspiration from the GNOME guys, who simply release often, with smaller updates. And we have already done some IPC optimizations; now it's time to really dig deeper and, for example, implement some wait-free algorithms to reduce the overhead, and so on. So that's basically all from me. Thank you very much for your attention, and if there are any questions, I will be happy to answer them. And we can still discuss the GSoC stuff later if there is some interest. Thank you.