Okay, so as Justin already introduced me, I'm now going to talk a bit about what's new in systemd in 2016. I'll actually cover two things: more than half of it is going to be about the stuff we did in 2015 and the beginning of 2016, and the rest about what we're working on in 2016. It's going to be a fairly technical talk, actually, but I'll initially talk a little bit about the community, since we had the GitHub talk right before mine. One of the things that changed in 2015 is that we moved the project to GitHub, which I think is much better than what we had before. It also has quite a few drawbacks, but the major goal we wanted to achieve with the move was to grow our base of contributors by making it easier to contribute, because most people already have a GitHub account, while the infrastructure we used before, on freedesktop.org with the bug tracker and so on, is something people generally don't have accounts for, so it was much more difficult for people to contribute. And that worked out: now that we are on GitHub we get a lot more of these drive-by contributions, people who just found some bug or want some improvement in one component of systemd, and they quickly hack it up, push it as a PR, and we can merge it. These drive-by patches are, I believe, one of the most important kinds of contributions to open source projects, because they do the polishing of a project; they fill in the little gaps that make the software nice to use. So yeah, GitHub is much better, but it still has major issues. It's kind of incompatible with our workflow in some ways.
So it was actually surprising to us that GitHub, which is the thing everybody uses these days, still has the issues it does. I'm not going to go into too much detail about what those are. Also, systemd has by now been adopted by all the big distributions. The one major exception people know about is Gentoo, but everything else, all the commercial distributions like RHEL and SLES, and all the community distributions like Debian and Fedora, have adopted it. So if you're running a distribution from the last year or two, you are very likely running systemd. It's commercially and community adopted, I guess, but of course that raises the question whether all the controversies around systemd, which I think most of you probably noticed going on in the community, are over now, and whether we are boring now, because, well, it's done, right? The decisions have been made. So maybe there's something else to discuss over the next years. Anyway, we always define systemd as this basic building block, this basic set of tools that you can build an operating system or a distribution from. One of the more recently added components of systemd is networkd. networkd is, as the name suggests, a network configuration service. There are of course many of those around already, but networkd I think is a much nicer one, because it is very generic and is capable of applying configuration that you write once to many interfaces, and it has all this interlinking. In many ways it does what NetworkManager does in most Linux distributions, but it does it differently, in a way that we think is actually nicer and generally more compatible with much of what administrators and embedded people want to do with it. networkd is far from complete, right?
You cannot even talk to it dynamically yet; everything you do is write rules, and it will execute them. But that didn't stop many embedded developers and many cloud distributions from adopting it already. This is one of the more recent successes we had: both Fedora's and Ubuntu's cloud editions now install networkd by default and use it for network configuration. In cloud systems it's usually pretty easy, right? You typically have just one interface, you want to do DHCP on it, and it should be dynamic. networkd is perfect for that, because it can do all of that, dynamically and nicely, and it doesn't have many dependencies. Another big success we saw was with another component of systemd, nspawn, which is a minimal container manager. My second talk today is going to be pretty much about that. It got adopted by CoreOS's rkt. I'm not sure if you know rkt: you know Docker, Docker is the hot thing on Linux these days, and rkt is something like that, developed by CoreOS. They decided that for the actual containerization, the lowest part of the backend, there was no need to develop that again, and instead decided to use this minimal functionality that's built into systemd right away. We like to see this kind of thing, because we consider ourselves the providers of these basic building blocks that other people build stuff from. Whether it's an operating system or a container manager like rkt doesn't really matter much; we see our position as the ones who provide the building blocks, and we do not necessarily build the finished product.
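To make the cloud use case concrete, here is a sketch of what a networkd configuration for the one-interface-with-DHCP setup might look like. The file name and the interface match pattern are my own example, not from the talk:

```ini
# /etc/systemd/network/20-dhcp.network -- hypothetical example
# Match every interface whose name starts with "en" (typical for
# Ethernet-style NICs) and configure it dynamically via DHCP.
[Match]
Name=en*

[Network]
DHCP=yes
```

One such file covers every matching interface, which is the "write once, apply to many interfaces" idea mentioned above.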
That's what other people should be doing. Another big success, which is a little bit surprising actually: for networkd we wrote a minimal DHCP implementation. Many people here know what DHCP is; it's the thing that assigns you an IP address. It's actually one of the most trivial protocols we have: every device in the world implements it, and it's basically an exchange of about five messages and that's it. Now, most of the network management solutions before networkd used one of the standard external implementations, and if you look at those, most of them are crazy, and they are not integrated, because you have to call out to them. We decided, okay, let's just do it as a library, do it minimally, do it as something that can be reused by network management solutions directly instead of calling out to these external projects, so that they don't have these dependencies and this asynchronous behavior. So we wrote that for networkd, and we wrote it in the style of a library that we could eventually even make public. We never actually made it public yet, because we weren't really sure whether the API should look like that when we make it stable, but that didn't stop NetworkManager from adopting it too. So now both networkd and NetworkManager use our internal library for DHCP, which I find pretty cool, because in a way they are the competition to networkd. Another project we have been working on is systemd-resolved. systemd-resolved is a stub DNS resolver. You might ask, of course, what does a stub DNS resolver have to do with systemd? Again, our definition of what systemd is, is really a toolbox of the basic components that make up an operating system, and we believe a DNS resolver should be part of that. Now, glibc, the GNU C library, already contains a DNS stub resolver.
So what systemd-resolved adds on top of that is that it is DNSSEC-capable. DNSSEC is the thing that allows authentication of DNS lookups, so that you can actually prove that this IP address really is the IP address for the domain you tried to look up; so that if you go to your banking website, let's say it's called foobank.com, you actually end up at the address foobank.com maps to, and nobody can interfere with that. It's something that has been deployed on the internet for a while, and we thought, okay, it should actually end up on people's systems so that it actually gets used, because this information is by default not used on these systems. systemd-resolved also adds a couple of other things: it does mDNS and LLMNR by default, so it does not only DNS, the internet name lookup stuff, but also local name resolution. It's relatively closely integrated with networkd, so it gets all its DNS configuration from networkd. Most of the time, hopefully, people will not even notice that systemd-resolved is there; it's mostly just a low-level component and an API for applications. That's basically what I put here. Among other things, it enables you to embed certificate information in DNS. DNS, for those who don't know, is the naming system that is used, for example, so that www.redhat.com is resolved to IP addresses, and by using DNSSEC, this authentication information, you can actually embed security information like certificates in the DNS and make sure they are only trusted if the signature actually matches. Anyway, I don't want to talk too much about all this, because it's fairly specific and probably not too interesting for many of you. So the next thing I would like to talk about is the unified control group hierarchy.
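To give an idea of the knobs involved: the lookup features described above map to a few options in resolved's configuration file. This is a sketch, not vendor defaults, and the server address is a placeholder; the option names are taken from resolved.conf as documented, not from the talk:

```ini
# /etc/systemd/resolved.conf -- hypothetical example
[Resolve]
DNS=192.0.2.1
# Authenticate lookups for signed domains, but fall back to
# plain DNS for unsigned ones:
DNSSEC=allow-downgrade
# Local name resolution on the LAN:
LLMNR=yes
MulticastDNS=yes
```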
That's a fairly technical, fairly low-level topic, but it's one that has been driving us for a while. Control groups are a Linux feature that basically allows systemd to take services, with all their processes, and apply resource limits to them. Resource limits mean, for example, that they can only have that much memory and that much CPU, or that they have other constraints during runtime. It also allows systemd to track the runtime of services, so that we know exactly how many processes a specific service has running at a certain point in time and when those processes die. And because we can enumerate the processes, it allows us to kill the service cleanly, and things like that. cgroups, which is short for control groups, has been around for quite a while. It's a kernel interface, and quite frankly, API-wise what the kernel provided there has always been a disaster. We always had to deal with that in the systemd context and try to abstract it away and integrate it nicely with service management, so that administrators don't have to care. Administrators can write in their service files that this service gets so much CPU and so much memory and so much disk, without having to think about the mechanism. But under the hood in systemd we always had to deal with the fact that the kernel APIs for all of this were horrible. Now the kernel community, and specifically Tejun Heo, has been working on cleaning that up on the kernel side. That, however, involves completely redesigning the kernel API, and that project is called the unified control group hierarchy. It basically looks at this control group stuff, throws out all the really badly designed API parts, and turns it into something that is much, much nicer. What we have been working on in systemd is making sure that systemd can work with that. It's not complete yet.
The new unified hierarchy was actually released for the first time as a stable API in the kernel that came out about two weeks ago, whose version number I don't actually remember. systemd actually doesn't yet run on that newest kernel, because the API in that area changed again in the last weeks, but basically all the groundwork in systemd is done to make it work with the unified hierarchy. With the unified hierarchy everything gets so much nicer, because the code base in systemd gets much nicer, but it also means a couple of new features become available for users, because we can actually start exposing a lot of resource management to them. In the long run that for example means we can even do firewalling in systemd, and these kinds of things. It also means we have the pids controller, which is something very simple, actually: if you do service management, which systemd does, you want to be able to put a limit on the number of processes a specific service can have, and the pids controller is exactly that. It's a simple thing. But there's also the concept of safe delegation. It's fairly technical; it's really about the fact that these cgroups are organized in a hierarchy, and if you run Docker or some other container manager, you can actually delegate parts of that hierarchy to the container manager, so that it can run its own stuff there and can even start systemd below that, recursively. I figure you probably can't make much sense of all of these things I'm talking about here, because it really is fairly low-level and very technical. But yeah, just things we worked on.
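For the administrator-facing side of this, a sketch of what the resource-management knobs look like in a service file. The service name and the numbers are made up for illustration:

```ini
# /etc/systemd/system/myapp.service -- hypothetical service,
# showing resource limits expressed declaratively in the unit,
# with systemd translating them to cgroup settings underneath.
[Unit]
Description=Example resource-limited service

[Service]
ExecStart=/usr/bin/myapp
# Cap memory usage and CPU time share:
MemoryLimit=512M
CPUQuota=20%
# Limit the number of processes/threads, which is what the
# pids controller mentioned above enforces:
TasksMax=128
```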
So effectively, the only thing users will see of that, if they ever interface directly with the cgroup hierarchy, which you can do because on Linux it's exposed as a file system, is a path change: previously you had the controller name in the path under /sys/fs/cgroup, and then below it the hierarchy that systemd manages; afterwards it looks the same, just with the controller component removed. That's kind of the essence of it. But anyway, I'll go to the next thing now; if you want to know more details about the unified hierarchy, ask me on Sunday. Another thing we did in systemd recently is sd-bus. sd-bus is a D-Bus library implementation. For those who don't know, D-Bus is the IPC system that we generally use on Linux systems these days. It's what Upstart used; it's what systemd uses. It basically allows applications to talk to each other and to talk to systemd. We used a generic implementation, the classic D-Bus reference implementation, for the longest time in systemd. But it's not very nice code, and it's very baroque: you need to write a lot of code to use it, while we always thought it should do much more automatically for us, so that we don't have to write that code ourselves. So a while back we wrote sd-bus, which is our own D-Bus library, and it's much, much nicer. I blogged about it a couple of times. So if you actually have to talk to the D-Bus system, and I think most Linux developers probably have to sooner or later, then consider using that library. It's a C library, so it's only relevant for C people, and maybe C++ people. But yeah, it's really written to make D-Bus nice to use, so that the code you need to invoke a method remotely on the local system is shortened as much as possible. Related to this, we actually provide a library called sd-event. It's an event loop library.
That's because if you do D-Bus, you always get these method calls coming in and going out, and you get the replies asynchronously, so you need some kind of event loop. An event loop is basically the core part of every process you have on Linux. We publish that thing as sd-event. There's no need for people to use it; they can use their own event loops if they want to, either write them themselves or use an event loop provided by some other library, like glib. sd-event is simply the event loop we use inside of systemd, and we think it's a really, really good one, for many reasons. I don't want to go too much into detail, because my time is already mostly over anyway, but if you're writing a low-level component, like an embedded component, for Linux, consider having a look at it. I already talked about networkd and this network management stuff. We added a couple of things there too, like SLAAC and IPv4 ACD. SLAAC is IPv6 stuff; it's about assigning addresses. We have our own implementation of that now, which basically means that networkd, in the end, doesn't have any dependencies: it doesn't pull in external DHCP or autoconfiguration components to function in the most basic way, it just does it natively. It also does IPv4 ACD, which is address conflict detection: automatic detection of two hosts on the same network using the same IPv4 address. Some operating systems have that functionality; Linux never had it built in, and we added it to networkd, so you will actually notice it, which is really interesting information to have if you have misconfigured your network. It also has a D-Bus API now; it's not complete, but it's something we're going to work on. And nspawn, I'm going to talk about that in more detail in my other talk, so I'm going to skip over it mostly.
Something completely random: we added support for USB FunctionFS. This basically means you can now have services in systemd that don't do socket activation, for those who know what socket activation is, but USB function activation. That basically allows you to build, for example, an embedded device that has a USB connector so you can connect it to your laptop, and that device can provide services over USB that don't have to run all the time, but are only started the moment somebody actually connects the USB cable to the device and tries to make use of them. Most USB functionality is like that, after all: it's used only occasionally, not all the time. So we built that natively, so you can do automatic activation of that functionality: you have a service and you just say, don't run this service normally, but the moment somebody plugs in and actually requests this USB function, start it. This actually came from the Samsung people, who did it for embedded devices. Something else completely different: we added kernel keyring support for LUKS keys. The kernel keyring is functionality that lets you upload cryptographic keys required by the system into the kernel, and we closed one gap there, so that the cryptographic keys for disk encryption can also be placed there. Something for administrators: we have systemctl reboot --firmware-setup, which allows people who run EFI systems, at least, when they want to reboot the system, to specify --firmware-setup, which puts them into the firmware setup.
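Going back to the USB FunctionFS activation for a second, the wiring looks much like ordinary socket activation. This is a sketch with made-up unit and endpoint names, assuming the ListenUSBFunction= socket option described in the systemd.socket documentation:

```ini
# myusb.socket -- hypothetical unit; systemd watches the
# FunctionFS endpoint and starts the service only when the
# connected host first uses the USB function.
[Socket]
ListenUSBFunction=/dev/ffs/myfunc
Service=myusb.service
```

The matching myusb.service would then stay stopped until a host plugs in and talks to the function, which is exactly the on-demand behavior described above.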
This is actually really, really useful functionality, because many of the newer laptops that boot very, very fast achieve that fast boot by not initializing the USB controller, and not initializing the USB controller means you cannot use the USB keyboard you might have to interact with the firmware and get into the firmware setup. So your USB keyboard can be used to interface with Linux, but not with the firmware. This option gets you out of that misery: you can get into the firmware setup anyway, simply by specifying that flag. And that's actually the last slide. Something we have been working on: systemctl revert and systemctl refresh. It's not out yet, but revert allows you, if you have a service, say MySQL or httpd, and you did resource management with it, like you specified memory limits or CPU limits or whatever else, and then you finally figure, okay, now I want to go back to the original vendor-supplied defaults, you can use systemctl revert and we will drop all of that. And systemctl refresh is something we have been working on for a while, which is actually really, really complex, but hopefully very useful for people. It's basically a way to reload the configuration of specific daemons only, instead of the entire systemd configuration. To be clear, it reloads the configuration about specific systemd services: if you write systemctl refresh httpd, it will actually reload the stuff that systemd knows about that service, not the service itself. Previously, to explain that, the only way you could make systemd reload its configuration was by doing a full reload, systemctl daemon-reload, where all the service information would be reloaded. That turned out to be a problem if you have large setups, because some people run like 50,000 services on a single system.
And if they do systemctl daemon-reload, the entire reload takes ages; with systemctl refresh you can pinpoint it to specific services. Anyway, my time has been over for a while now, so I'm very sorry for that. If you have any further questions, please come to the workshop on Sunday and I'll answer anything about this stuff, and about anything else about systemd. I know this was very technical; I still hope many of you could follow at least some of the topics. Thank you very much, and talk to you in either an hour or two, when my other talk is, or on Sunday for the questions. Thank you. And sorry for that.