Everyone, this is Josh Triplett, speaking about a glimpse into the systemd future. Hopefully it will be lots of fun. Thank you. So we've had a few discussions about systemd this year. And thank you, I'm glad we can get this started on a fun note.

We did actually manage, shockingly, to make some decisions this year. It's very uncharacteristic of us. The biggest decisions we managed to make this year, the biggest discussion topics, were: what should the default init be, so what should /sbin/init point to, and what gets to interpret or replace /etc/init.d? We talked a lot about whether you're allowed to use systemd-specific functionality or whether you have to stay generic across all inits. And we also talked about whether packages should support inits other than whatever was chosen as the default. We did end up with decisions for all three of these. The answers turned out to be: systemd, yes, and yes. That seems like a pretty reasonable set of answers. The first one was a foregone conclusion, but it was nice to see, and the remaining two are pretty much the result we get most times we make a decision in Debian: sure, you can use any of them, but you can't demand that a package maintainer go out of their way to support something they don't use.

So we have talked a lot about those items. In particular, we've talked a lot about systemd the replacement for /etc/init.d, and systemd the harbinger of the apocalypse. We've also talked a lot about systemd-logind and systemd-shim and how we can usefully support desktops on platforms that don't have systemd. But in all this vigorous discussion about systemd, we've not actually talked much about the technology of systemd itself. We've talked about the pieces that touch on defaults and the interpretation of /etc/init.d, and logind and the various services, but that's really the tip of the iceberg of what systemd has. Key things we haven't talked about are Debian policy for systemd, which is currently in progress, being developed, and hopefully will get written and published in the near future; the other systemd components, apart from the one or two that have featured in all of the major discussions and flame wars; and how we can get systemd as well integrated into Debian as possible, and vice versa: how we don't just work, but fit together very well. So part of the goal of this talk is to get a lot of information out there on the things that haven't been talked about.

One of the big goals systemd has been pushing for is common infrastructure across many different environments. This isn't just about being common across distributions; it's also about common infrastructure across desktop environments. As much discussion as we've had about GNOME and systemd and the interactions thereof, one of the big goals is actually to pull a lot of logic out of GNOME and out of other desktops, and to put system logic in a system daemon, where it belongs. That means we get one common implementation, instead of "here's how it works in GNOME or KDE or Xfce or IceWM", or, in many minimal environments, "here's how it doesn't". And it makes all of this useful functionality something anyone can opt in to. This includes session management, which has been widely discussed in the form of logind, but also things like power management, so suspend/resume functionality.
That was previously very specific to a particular desktop environment: go invoke the desktop's suspend tool, which does desktop-specific things to clean up before suspending and resuming. There are things like backlight levels or mixer levels, various fun bits of system configuration that, again, should not be desktop-specific. And then there are things like rfkill, where there's been a longstanding bug with GNOME and other environments not remembering "I've turned off Bluetooth" or "I've turned off Wi-Fi" from boot to boot. That really isn't a desktop thing at all; that's a system thing. It ought to be remembered at a system-wide level and then just controlled from within a desktop environment. So that's one common case of factoring out a lot of infrastructure and putting it in a common location where much more than just desktop environments can usefully use it. All of this functionality works just fine on a text console, for example.

One component that's been reasonably well talked about is journald. It's intended as a replacement for syslog, but notably it includes structured metadata, similar to the latest syslog standard, and quite different from the classic text-only syslog approach, where "structured data" meant it's designed to be read with sed and awk. One of the nice features is that it records a lot of trusted metadata about processes: you can ask the kernel what user ran this process, where it came from, what binary it ran, and record that information in the journal in a way that doesn't just come from the process you're supposedly trusting. It handles rotation automatically; you don't have to run cron jobs to do rotation, and it handles not just time-based rotation but also "I'm filling up my disk and I'd rather have new data than old data". And one of the really interesting features is that it captures logging from daemons automatically: you emit data to standard output or standard error and it goes into the log, without your going out of your way to call syslog or write to /dev/log. So if your program crashes at a system-wide level and emits random junk on standard error, you'll actually capture that and have something useful for error analysis and recovery later.

Now, it's not that we need to mandate that you must use journald, but it's reasonable to say: I've got a persistent journal, maybe I don't want syslog. We may in the future want a package that turns on persistent journal logging and at the same time provides system-log-daemon, so that other packages depending on system-log-daemon will be satisfied. Journald has a compatible implementation of /dev/log, so you don't necessarily need a separate syslog if you've already got one source of logging. So that's a fairly simple one; that's end-user choice. That's pretty easy.
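To make that concrete, here's a quick sketch of querying the journal (the unit name is hypothetical); the underscore-prefixed fields are the trusted metadata journald records from the kernel, rather than whatever the logging process claimed:

    # All messages from one service, including anything it wrote to stderr:
    journalctl -u exampled.service
    # Filter on trusted, kernel-derived metadata rather than message text:
    journalctl _UID=1000 _COMM=exampled
    # Show every stored field, trusted metadata included, for each message:
    journalctl -u exampled.service -o verbose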
logind has been pretty widely discussed, but one of the key details is that it does session tracking properly: a session actually shows up as busy or idle or disconnected, instead of apparently being there until you try to send it a message, and logind will nicely clean stale sessions up. And an application doesn't have to carry GNOME-specific "here's how you inhibit suspend" or "here's how you inhibit idle" logic; it can ask systemd to handle inhibition across any number of desktops. The same goes for the desktop itself: effectively, "don't actually suspend until the desktop has gone idle and locked the screen", so you don't have a race condition where you wake up and briefly see screen contents before the lock engages. This is another area where we're likely to have a number of individual transitions in the future. Some of the latest desktop environments use logind already. A lot of others have not yet moved to logind, so we have issues like the double-suspend problem, where the desktop suspends and systemd suspends, so you wake up and go right back to sleep the first time. It's something we're going to want to fix in desktop environments: if not to rely on logind, at least to be aware of logind and say, okay, if it's around, then I'll delegate suspend/resume to it, and just take an inhibitor lock or a delay lock to get some work done before suspending.
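As a rough sketch of what that looks like in practice (the wrapped commands are hypothetical), the same inhibitor mechanism is exposed both over logind's D-Bus Inhibit() call and via the systemd-inhibit command-line wrapper:

    # Hold a block inhibitor: suspend is refused while the backup runs.
    systemd-inhibit --what=sleep --why="backup in progress" run-backup
    # Hold a delay inhibitor: suspend waits briefly so state can be flushed.
    systemd-inhibit --what=sleep --mode=delay --why="flushing state" flush-state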
One thing that I don't think has seen any discussion at all is what the initramfs should look like in light of systemd. We've talked a little about "the initramfs should mount /usr" and a couple of details like that, but one thing we could do is replace some of the existing, somewhat intricate shell scripts we have now in the initramfs with a systemd-based, event-driven init. Systemd also supports going back into the initramfs at shutdown in order to cleanly unmount the root file system: once you've gotten rid of all the processes on the root file system except for the shutdown process itself, then rather than just remounting root read-only, you can unmount it completely, so you get a very clean shutdown. That's a really appealing proposition if you've ever had an unclean shutdown on your system and suddenly had to sit through an fsck on the next boot; this makes that a lot safer. So this is one where we again have the option of a fairly gradual, slow transition. We could do one of two things. We could start porting initramfs-tools itself to use systemd, moving some of the individual scripts to service files and replacing them incrementally. But initramfs-tools is looking like a bit of a dead end, frankly. More likely, I think: dracut has support for using systemd already, so what would happen is switching to dracut as the default. The problem is that we have a lot of packages that integrate with initramfs-tools and don't yet integrate with dracut. Fair enough; that is the large part of the transition, I think. Even though we could just switch over to a system that already does systemd, and even if initramfs-tools is not necessarily what we want to be using in the future, there are a dozen or two packages in Debian that install initramfs hooks and expect them to run successfully, and they don't all have hooks for dracut. So we'd want to make sure we don't break the existing integration in the process of doing that transition. Was there a question in the middle here somewhere, or did it get answered?

Audience: Well, actually, with initramfs-tools, we are already discussing dracut versus initramfs-tools for Debian. The main problem currently, from my point of view, is that few people are actually interested in working on initramfs-related stuff. So if anyone feels like we could do better and wants to get it up and running, we would be very much interested in getting systemd support into the initramfs, so please talk to us.

Fair enough. So, yeah, there's our first call for help of the talk. I think that's likely to be a common theme: a lot of these go faster if somebody else is helping. Speaking from experience, the initramfs is a really interesting environment to work on; you learn a lot about the boot properties of the system, so you might enjoy it. Give it a shot, it's fun. The initramfs-tools maintainer is shaking his head, but... the initramfs is fun. I can't speak for initramfs-tools, but the initramfs is fun.

Going in a bit of a different direction, here's one that's actually been around longer than systemd, but has more recently been integrated into systemd and used to enable various interesting functionality: nss-myhostname. One of the interesting problems in Debian is: how many edits do you need to make to change the system hostname? You need /etc/hosts, /etc/hostname, and any number of other places where the hostname has been hard-coded. It would be really lovely if there were only one of those, /etc/hostname. nss-myhostname just provides name resolution for localhost, localhost.localdomain, and the hostname itself, making sure they resolve to sensible IP addresses you can reach the local host with. So it's a fairly simple thing. It would be easy to install by default. This is one where the only transition needed is: why are we not already doing this? It seems like it would be really handy and would eliminate the need for duplicate configuration in /etc/hosts, and any time we can cut down on duplicate configuration, that seems like a feature. It's also ridiculously tiny. Question?

Audience: Does this help with the problem of something like Postfix mail to my hostname? Postfix mail handling for your hostname?

I don't know the details of that problem, so I'm not entirely sure. If the issue is that it can't successfully figure out that the hostname is, in fact, local, then it may help, especially if it has problems when the network interface is down as opposed to up. That's the other nice feature of this: it will successfully resolve your hostname even if all external interfaces, everything other than localhost, are down. So you don't have to rely on your hostname being in DNS or otherwise associated with some particular interface. Beyond that, I don't know the details of that particular problem, sorry. So this seems like a fairly, I hope, uncontroversial case where we could just get this package into priority standard or better, start using it by default, and eventually drop the default /etc/hosts entries in favor of it.
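For reference, enabling it amounts to one line in /etc/nsswitch.conf; a sketch (the exact placement of the module relative to dns is a policy choice):

    # /etc/nsswitch.conf
    hosts: files dns myhostname

    # Then the local hostname resolves even with every interface down:
    getent hosts "$(hostname)"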
So, to go from the hopefully relatively uncontroversial to the other end of the spectrum: systemd-networkd. Long, long ago, in a flame war far, far away, we talked about NetworkManager. One interesting thing is that NetworkManager got a lot of fairly legitimate flak for being a kind of desktop-only tool: "I can bring up network interfaces beautifully if you have an Ethernet interface or a cell-modem interface or a Wi-Fi interface, but if you have a bridge or a bonding interface or a VLAN, or really anything non-trivially complicated, NetworkManager is not the tool for the job." And there was always discussion of "oh, we'll get around to that at some point in the future". In fairness, it did the job it was intended to do, which was: I want to plug in my wireless or wired interface and have things more or less just work on a laptop. And it mostly did that. networkd went in a completely different direction; that is not its initial target. It's trying to handle servers, virtual machines, and similar configurations. It does handle bridging, bonding, containers, tunnels, VLANs, all of those types of things, very well, actually. It's working on support for wireless configuration; there's a plan for iwd to handle wireless networking and integrate with networkd.

One of the big issues here was that NetworkManager always spawned off other tools. It would run dhclient in order to do DHCP; it would run dnsmasq in order to run a DHCP server and hand out a network. Question?

Audience: And the horrifying thing about dhclient is that, because it uses raw sockets, it puts your network interface in promiscuous mode all the time.

I agree with you completely, other than that you said "the horrifying thing" about dhclient. Well, yes, exactly. dhclient gets the job done, more or less, but it's difficult to fork and exec a tool and monitor its progress when you're trying to work with it interactively: find out the status of the interface, see when it goes down, deal with all those types of information. You get very little feedback, and NetworkManager does have a tendency toward failure modes where it just says "oh look, the interface went down". It's especially infuriating when the log says "network interface brought down by user choice". No, it was not. I'm the user, and I did not choose that. So one of the nice things about networkd is that it has a built-in DHCP client and server, as a library rather than as a separate tool. And one very lovely side effect of this is that they put in all the support other platforms have had for a while: bringing up an interface shouldn't take seconds, it should take milliseconds. In fact, on virtual machines it brings an interface up in fractions of a millisecond; on physical hardware it brings it up in, I believe, tens of milliseconds. That's certainly appealing. It means that if you have a transient interface that is not terribly reliable, you'll actually get useful connectivity in the moments when it's up, rather than "I see some interface, let me try to run the DHCP client... oh, it's gone now." So, quite handy. And I'd expect within the next two or three years to see most higher-level networking tools like NetworkManager, ConnMan, and similar utilities become fairly thin front ends on top of something like networkd and iwd and similar tools. iwd, for example, is likely to take over the role of wpa_supplicant: rather than going and spawning off a daemon, again, it has the support built in. Those types of things: let's build it in, let's make it reliable.
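As an illustration of the declarative style (names hypothetical), a bridge configured via DHCP is two small files rather than a pile of scripts, and the DHCP client involved is the built-in one:

    # /etc/systemd/network/br0.netdev -- create the bridge device
    [NetDev]
    Name=br0
    Kind=bridge

    # /etc/systemd/network/br0.network -- configure it via built-in DHCP
    [Match]
    Name=br0

    [Network]
    DHCP=yes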
Another interesting and fairly obscure one, actually, is managing virtual machines and containers. There is machined, and a related NSS module to resolve the host names of those machines. This way, if you want to spawn off a container, you have something tracking: I have these three containers, they have these host names, they have these IP addresses, here's how I route packets to them. So if you want to containerize a set of services on your system, machined, and the related tool machinectl, will manage those containers for you. And the nice thing is that they'll be brought down when you bring down the system, you can set them up to be brought up with the system, and their host names will always resolve nicely, again without needing a local DNS server. So this is quite handy.

One that just went in, in the version of systemd that was released yesterday or today, I believe, is systemd-resolved, which is a local caching DNS resolver that fits with networkd. So again, you're bringing up network interfaces and you want DNS to more or less magically work, whether you have a VPN or a VLAN or similar; resolved is designed to handle that case, again replacing tools like dnsmasq. And there is an associated NSS module for this as well. The idea is that rather than being associated with one particular interface, this runs all the time across several interfaces, notices as they come and go, and deals with "wait, I can't really resolve that name the same way anymore, because it was cached on this interface". If you've ever run a DNS cache and had the problem of "I've got an entry in my cache from when I was on this network, but now I'm on that network", hello, captive portals, then this solves that problem quite nicely. And as an added bonus, it handles link-local connectivity as well: hook up a network cable between two systems, and you don't need Avahi, you don't need any form of DHCP server, you just get connectivity. So that's handy.

One thing I think everybody has heard to death about systemd is "socket activation is the wave of the future, and everything should be using it". This is definitely one that has had no shortage of discussion and hype, but I think one thing that gets missed in a lot of that discussion is what exactly it's there for. There's been a lot of mention of "you can bring up a service as needed, and if it's infrequently used, you don't have to bring it up at all". What gets missed is that this is really about eliminating explicit dependencies. Rather than saying "let me declare that this service depends on such-and-such other service, and when that service is done being brought up, I can start", you can simply bring up multiple services that depend on each other in parallel, because all of their sockets are available before the services even start, and a service simply blocks if it tries to talk to another service that isn't ready yet. So rather than the lovely graphs of parallel boot-up that say "first we launch this, then we launch these five things and these twelve things", in nice little batches but still with a bunch of bottlenecks, the same graphs in the face of socket activation tend to look like: we start this thing, and we immediately start the thing that depends on it, and they start up in parallel, and only when they actually start talking to each other do they start waiting on each other. That's a degree of parallelism we have not been able to get with the mechanisms in sysvinit or, for that matter, Upstart.
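A minimal sketch of the unit pair involved (names hypothetical; the daemon itself has to accept the socket systemd passes it, for instance via sd_listen_fds()). The socket unit is what other services effectively depend on, and it exists before the service ever starts:

    # /etc/systemd/system/exampled.socket
    [Socket]
    ListenStream=/run/exampled.sock

    [Install]
    WantedBy=sockets.target

    # /etc/systemd/system/exampled.service
    [Service]
    ExecStart=/usr/sbin/exampled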
Another one, which is not yet in Debian or in the version of systemd available in Debian, and is still in development upstream, is kdbus. systemd has a fairly close relationship with D-Bus, in that it launches it as early as it can, and it supports activation of services via D-Bus, as in "I provide this service; if someone tries to connect to it on the bus, launch this". And that's been a fairly large problem for systemd, in trying to manage the circular dependency between PID 1 and D-Bus. On top of that, D-Bus itself has a lot of overhead: if you want to send a message from daemon A to daemon B via dbus-daemon, you're going to have approximately four context switches, bare minimum, just to get a message back and forth. kdbus eliminates that by creating a new kernel multicast messaging bus instead, and does all sorts of clever tricks, like using Bloom filters to say "okay, I'm going to broadcast to just the things that care, rather than to everything on the system", but without needing a daemon to moderate the limited broadcast or multicast. This is actually available now as an out-of-tree kernel module, and the pieces needed in user space to manage it are being developed in the systemd code base. It's definitely too late for jessie to migrate over to using kdbus instead of dbus-daemon; that would be far too late at this stage, we would run up against the freeze. However, it seems very likely that in the course of jessie-plus-one, we're going to migrate entirely from dbus-daemon over to kdbus, at least on architectures that have it available, so on all Linux architectures. Question?

Audience: A lot of packages have test suites that use private D-Bus instances, private system buses or private session buses, so the test suite doesn't interfere with the system when it's actually trying to send messages between processes.

So kdbus is not like, for example, the control-groups case in systemd, where there is one and only one system instance. systemd runs the system bus, and a systemd user session would moderate the user bus, but with kdbus available in the kernel you can create a new bus master for a new bus, and then other processes can talk on that bus. I believe you can even do that without root privileges, or at least you can set the permissions on kdbus so that it's possible. You simply create a new bus, of which you are automatically the bus master, and get other processes to connect to it. And the various D-Bus libraries have all been ported so that they can talk to kdbus as a socket type, as opposed to talking to dbus-daemon. So there should not be any missing functionality there; if there is, that is a key use case, being able to run a private bus. I would recommend trying that out with kdbus: poke at the available third-party module, and make sure a package you care about that creates a private bus is capable of doing so. That should work, and if it doesn't, we'll fix it so that it does.

Audience: The libraries really aren't ported yet.

Sorry, where was that? The library porting is somewhat in progress. I know the client libraries work; I would not be overly surprised if the pieces needed to run a D-Bus server are much more "go poke at the files in /dev and /sys yourself".

Audience: Which client libraries? There are out-of-tree patches for D-Bus, and then there are the in-tree patches for the libsystemd D-Bus client that is supposed to be more or less compatible. Those patches are not in a very good state for libdbus or GLib, which is what everything is actually using. So to say that you could just go and do this today is a bit premature. This is currently out of tree, and one of the main blockers for getting it in tree is making sure all the various implementations are stable.

I had the impression that the GLib bits were further along, though I was aware the libdbus bits were not. Okay, we should talk more about that; I'd be interested to know what it would take to get that into shape. But again, this is one of many reasons why I think that by the time jessie-plus-one rolls around, we will have this all sorted out.

Audience: You may have just answered my question.
What are the prospects for kdbus in mainline Linux?

So, two blockers for kdbus. One was memfd, which just went into 3.17. That lets you create a temporary in-memory chunk of data, referenced by a file descriptor, which can then be sealed so it can't be changed any further, and shipped off over a Unix socket. So that makes it fairly easy to ship large quantities of data efficiently across kdbus. That has gone into the kernel. The other bit is just nailing down the API and saying: okay, this is the thing we want to apply the kernel's we-will-never-break-user-space stability guarantee to. And I don't believe kdbus is quite there yet. We need, again, to nail down a couple of the user-space libraries, make sure they work rock solid, and so on. So I don't know whether that's targeting 3.18, or whether it's more likely to hit 3.19. I'd be shocked if it waits much longer than that. So I think we're likely talking this year or early next.

Audience: I thought Linus had strong objections.

Not anymore, as far as I can tell. It was more "make sure you have it right before getting it in", but that wasn't a no; that was a "make sure you've got it right". There were, last I've seen, no philosophical or religious objections to kdbus, just technical ones. All right. So, a couple of other items. systemd makes it fairly easy to containerize services, and in particular... oh, there's a question.

Audience: Just to point out, you can ask Linus yourself about it tomorrow.

Yes, there is a session with Linus, and that would be a fine question. I suspect the answer you'll get is "nobody has sent me patches and said put it in the tree yet", because nobody has, and they're not going to until it's ready. But I would be interested to find out if there are strong opinions other than that.

So, one interesting bit: people frequently talk about systemd being Linux-specific, how it's only ever going to run on Linux. Well, one of the big reasons for that is that we have a lot of really interesting features in Linux for locking down services, for compartmentalizing services, for reducing privilege as much as possible, and systemd goes out of its way to expose all of those features that make sense. If you want to apply a seccomp filter that says "you can only make these syscalls"; if you want to reduce the set of capabilities you have; to say that these file systems aren't accessible, or that these file systems are mounted read-only; to have a private /tmp directory; or to have your entire process run in a separate network namespace limited to localhost because, hey, you have no need for network access: a lot of those types of things can be put into a systemd service file or other unit file. There's a common set of systemd.exec directives for "how do I run a service" that applies to just about anything systemd is capable of spawning. So this makes it really easy to wrap a service in a container. And one of the things I'm hoping we see, very incrementally over time, in Debian is locking down more services by default. We want security to be our default, and if a service doesn't need certain privileges, it shouldn't have them by default. So the more we can put things in empty chroots with no permissions, where the only thing they can do is listen on the network, or vice versa, where they can do their work but can't touch the network at all, the better our defaults will be.
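As a sketch of the kind of lockdown that's just a few declarative lines in a service file (the daemon is hypothetical, and exactly which directives are available depends on the systemd version):

    # /etc/systemd/system/exampled.service
    [Service]
    ExecStart=/usr/sbin/exampled
    # Private /tmp, invisible to and from the rest of the system:
    PrivateTmp=yes
    # Root file system read-only for this service; no access to /home:
    ProtectSystem=full
    ProtectHome=yes
    # A private network namespace with only loopback available:
    PrivateNetwork=yes
    # Drop all capabilities and forbid regaining privileges:
    CapabilityBoundingSet=
    NoNewPrivileges=yes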
And I think as we migrate from init.d files to service files, we want to take a look at some of those directives and say: okay, let's not just do the minimal port of the init.d script; let's start adding more of those lockdown features. This is likely to be a notable feature of the new systemd section in Debian policy: a list of particular directives you should look at. Should you lock down the network? Should you lock down /tmp? That kind of thing. The answer is usually yes, if you can.

One that I don't think has gotten any press at all, really, is systemd timer units. systemd has, effectively, the functionality of cron and, more recently, anacron, to make sure you can run services on a regular basis. An interesting feature of this, as opposed to cron itself, is that it's well integrated with systemd's logging, its service-launching capabilities, and various other pieces like that. So if you want to say "this service runs at system startup time" or "on this timer", that's really easy to do with systemd, whereas with cron you would need to install a separate script run from an /etc/cron.* directory or a crontab. The other really nice feature shows up if you've ever looked at how anacron works; anacron is something we actually install by default on laptops, I believe. If you have anacron installed, then you have cron jobs running daily and weekly and monthly that go check whether you have any cron jobs that need running on an anacron-style basis, and go run them. Even if there is no work to do, you get processes spawning off on a regular basis just to see whether there's work to do. So this is a case where systemd can much more easily say: okay, one of the 47 things I'm throwing into my big poll loop is this timer, so go read a timerfd and see whether I need to spawn off some timer unit.
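A sketch of the cron-plus-anacron replacement (the job name is hypothetical): a .timer unit paired with a .service, where Persistent=true gives the anacron behavior of catching up on missed runs, without anything waking up periodically just to check for work:

    # /etc/systemd/system/example-cleanup.timer
    [Timer]
    OnCalendar=daily
    # If the machine was off when the timer should have fired,
    # run the job once at the next opportunity (anacron-style):
    Persistent=true

    [Install]
    WantedBy=timers.target

    # /etc/systemd/system/example-cleanup.service
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/example-cleanup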
So this is nice from the perspective of waking up as little as possible, improving performance and power management.

A couple of other items have been added recently. Transient service units: you don't just want to run a service you've installed, you want to create a service on the fly, something you're launching right now. So if you're a daemon that launches other services, you don't need to implement the logic for daemon management and child management yourself; you ask the init daemon to "go launch this for me, let me know what happens to it, and relaunch it if needed". There is a sysusers facility to declare "I need this user as part of my system service; please create it if it doesn't exist yet". So that's one more case where we could replace maintainer-script snippets with a declarative "I need this user, and it should have these properties, like this home directory". This is one where we may have an interesting transition in the future for how we handle these files in Debian, but it seems rather welcome from the perspective of one less thing to put in a maintainer script. And finally, there has been a lot of work recently to support first-boot, fresh-system configuration as part of systemd: "I'm booting up, and some of the things I expect aren't there, so let me go off and create them". One of the really interesting consequences is that with some of the most recent changes, you can boot up a new container with an empty /etc, and it will successfully create the small handful of files that are actually needed, and then go launch services. Likewise, with systemd's unified /usr approach, you could just mount /usr inside a chroot, launch it as a container, and everything else will just work.

So that's a bit of a whirlwind tour of systemd features. We've been looking at a bunch of individual pieces, individual components, case by case. Now I'd like to go back up a level and talk about how a lot of those components fit together and what integration between them can provide. I'm going to give a couple of examples. One of them: I've mentioned containers a few times, and a lot of the new services in systemd were designed for the purpose of launching containers without having to recreate a lot of system configuration inside them, or install a full, separately managed distro inside them. Right now, the best-known method for dealing with containers is "let me install another distro inside the container and manage it that way", and that's kind of painful. So machined, for example, was created to manage containers. The dynamic and transient units were designed so you can spawn off a container on the fly, as needed. networkd was designed in large part as an easy way to provide network services to a container, along with the host-name resolution and dynamic DNS handling; the caching DNS resolver was designed largely to ask "why do I have several independent containers all talking to my upstream DNS server?". Add service lockdown, journald with per-container logging, the minimal file-system handling, the first-boot work, the sysusers work, and, for that matter, mounting up an entire container on demand, including its own init, just because I got a connection on a socket: let me spin up a new container to run this web service.
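A sketch of that workflow with the current tools (paths and names hypothetical): systemd-nspawn boots a directory tree as a registered machine, and machinectl is the management front end backed by machined:

    # Boot a container from a directory tree and register it with machined:
    systemd-nspawn --boot --directory=/srv/containers/web --machine=web

    # From another terminal, machined tracks it like any other machine:
    machinectl list
    machinectl status web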
Another case: again, we'd really like to kill off maintainer scripts. This is a pattern we've had in Debian for a really long time. Any time we can take dynamic scripting out of preinst, postinst, prerm, and postrm, and put it in some declarative configuration file that says "I need this; make it happen", whether that's handled by dpkg or by some /etc/foo.d directory processed by a trigger, that's a win. Triggers are one step, but they still run a script; it's even nicer when we have plain configuration files of some kind. So this is really handy, and a number of the tools coming out of systemd are designed in that direction.

A couple of others: we've been talking about power management a few times at DebConf, and one of the goals here is, well, why would you need more than one event loop on a system to run all these various system daemons that need to wake up for various purposes? Everything from timers to sockets to dependencies to signals, to any number of other things that daemons need, gets thrown into one big "wake up when I have work to do". And a lot of features that have gone into the kernel were added for the specific purpose of "let me stop polling for this; let me create a file descriptor I can throw into a select loop instead". A number of those features have actually been driven by systemd.

The last big unified use case I'd like to talk about is systemd user sessions. This is likely to be a rather notable transition we'll be going through in the future. We want to handle all these various graphical session startups: there are tools like startkde or gnome-session or various other tools for other graphical environments, and they're designed to spawn off a bunch of processes, handle them in order, handle dependencies, that kind of thing. That sounds a lot like what systemd is supposed to do, and that's exactly what systemd user sessions are for: let's replace those with launching services out of a clean environment, without hand-writing it for each different environment; handle respawning them if they fail; handle spawning new ones that get installed; handle the parallelism and the socket activation; and in general try to bring up your desktop quicker and more reliably. This also has an advantage that most of the graphical session scripts don't tend to get right: per-user rather than per-login services. I don't need one foo-agent per login session; I need one per user, one for me across the whole system. One thing that has not been looked at, as far as I can tell, by anybody in the systemd or distro community is the idea of unifying user sessions a little. GNOME is talking about replacing gnome-session with systemd; KDE will be replacing startkde with systemd; we have X11's Xsession.d to launch an X session. It would be really helpful if all of these were not just separate mechanisms built on top of systemd, but instead one big user session that is configurable for "what am I trying to launch?". That would mean you, as a user, can have your own services that integrate with the environment, and Debian can install system-wide services that integrate with the environment as well, things like autostarts. So this is one case where we may want to look at putting in a little more unification.
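A sketch of what a per-user (rather than per-login) service looks like under a systemd user session; the agent here is hypothetical:

    # ~/.config/systemd/user/foo-agent.service
    [Unit]
    Description=Example per-user agent

    [Service]
    ExecStart=/usr/bin/foo-agent

    [Install]
    WantedBy=default.target

Enabled with "systemctl --user enable foo-agent.service" and started with "systemctl --user start foo-agent.service", it runs once per user, no matter how many times that user is logged in.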
So we've covered some of the high-level use cases that make integration attractive. Now I want to come full circle: a lot of the discussion about systemd was all about user choice, and, you know, I even came prepared with the shirt, "systemd is about choice". It turns out that while choosing between systemd and sysvinit is not necessarily the thing systemd is designed to cater for, helping you pick among components and other tools actually works really well with systemd. It's really easy for a user or an administrator, in the user or system sessions, to override individual service units or socket units with their own, either individually or in large groups, or to override and edit little pieces of configuration, without having to go in and edit a very long shell script. It's also possible in systemd to provide and depend on virtual units. We've had discussions about gpg-agent and ssh-agent versus gnome-keyring: which one do you want in your session? Well, it might depend on what environment you're in, or on your personal preference. So it's really easy to say "I have this gpg-agent.socket unit that I want something to provide, and I don't really care whether it's gpg-agent or gnome-keyring; if I'm depending on it, I just need something to connect to", while the services that provide it can choose what they want to provide. Was there a question? No? Okay. There's also the possibility, similar to .d directories, that if you have a particular target for system bring-up, you can create that target's .wants directory and install service links in there, so it's easy to pull in various units like that. It's easy to extend services with local configuration, as shown in the sketch below, so if the sysadmin wants to lock a service down further than we ship it in Debian, or containerize it further, that's fairly easy to do without overriding Debian's existing configuration. It's pretty easy to mask services. And finally, because we're going to have systemd user sessions, we'll easily be able to move services between "do I really need to launch that system-wide?" and "should I launch it on a per-user basis and shut it down when the user is gone?". So there are a number of cases where systemd can potentially help us there.
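Picking up the local-extension mechanism just mentioned, a sketch of how an admin tightens a packaged unit without replacing it (the unit name is hypothetical): a drop-in directory next to the unit, which systemd merges with the Debian-shipped file:

    # /etc/systemd/system/exampled.service.d/local.conf
    [Service]
    # Tighten the packaged defaults: this deployment needs no network.
    PrivateNetwork=yes

And "systemctl mask exampled.service" is the corresponding "never start this here" switch.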
And with that, any further overall questions?

Audience: You were talking about networkd. I wonder if you have any opinion on what's going to happen to ifupdown.

Much the same thing that's likely to happen to sysvinit. It depends greatly on how much people deeply care about keeping it alive. The biggest issue with ifupdown is that it is not at all event-driven and does not handle dynamic configuration of any kind. Static configuration is more or less just fine with it, but it does not handle dynamic interfaces at all. So I'd really expect that unless somebody rewrites it from scratch, and even if someone does, networkd is more likely to win that particular one.

Audience: One of the really nice things about ifupdown is that it's extremely extensible. For example, packages like OpenVPN or VDE can install scripts into /etc/network/if-up.d, and all of a sudden they've got first-class support in /etc/network/interfaces for bringing up interfaces for VPNs or VDE and such. Is networkd ever going to match that functionality?

Yes and no; half of it. Right now it's easy to detect "my interface went up, my interface went down" from ifupdown, and to the extent you want those notices, you can very easily hook interface-up and interface-down events with systemd as well. As for extensibility of the network interface types themselves, it's unlikely that networkd will add arbitrary script call-outs, for the same reason it doesn't spawn off dhclient. It's much more likely to link in libraries, or possibly plugins with a well-defined interface, to say "I want to bring up this kind of network". But I would suggest looking closely at what iwd does with wireless and how it integrates with networkd. My understanding of the proposed architecture is: go bring up the wireless network however you like, then hand a more or less configured network interface to networkd, which then runs DHCP or sets up addresses or similar. So if you have a VPN, that would probably be the way to implement it: first bring up the interface, then hand it to networkd to manage in its configured state. That seems likely to be the main extensible approach, apart from getting support into networkd itself. And I think that's all the questions we have time for, but I am more than happy to have further discussions with people offline or via mailing lists or however else. Thank you.

[Informal discussion follows; much of the recording is unintelligible.]

One good example of this that people were looking at is the recent new systemd support for resume: for resume from hibernation, the system needs a resume unit that knows how to write the major:minor of the resume device into the appropriate sysfs file. And it did go in? Okay, excellent. So that's one case of handling system startup and shutdown that's ideally run from the initramfs. NFS root is another case where you need more in the initramfs: the NFS mount for / depends on various NFS services, which depend on the network, which is another good argument for having networkd in the initramfs. And while we might not necessarily have networkd as the one network manager for everything, I think we're likely to have it running and handling local system containers.
So that would be running as part of the system; whether we use it for anything else by default is a separate question. No, I agree completely: it would be crazy to force a transition for almost any of those things. We want to provide them so that people can start transitioning at their own pace. Yeah, exactly.