Okay, good afternoon everybody. My name's Scott James Remnant. I've met a few of you before, I'm sure, but I'm going to be talking today about Upstart, which is a sort of personal project of mine and one that Canonical has been sponsoring for a while now. More particularly, I'm going to be talking about the roadmap to Upstart 1.0, which is going to be the next major version, I hope. So I just want to start off by talking a little bit about the purpose of Upstart, and then a little bit about its history. There's a lot of common misconception about why we did Upstart. The first one of those is that we did it because of boot performance — and with some of the recent push towards the 5-second boot and so on, people say Upstart doesn't help that. Well, that's true, because that's never been one of the purposes of Upstart. So I want to go over again why we did it.

One of the first and obvious things Upstart is designed to do is provide a true service manager. We don't have one at the moment in Linux. If you boot up Solaris, you'll find you've got Solaris SMF in there, which gives you a true service manager. If you look at Apple, they have launchd. If you look at Windows, obviously Windows has a very well-established service manager in there. But we don't have one at all. In fact, if you look at your Linux box, you'll see that what you think of as services are actually just shell scripts that run a bunch of commands to try and start a particular process, and when you want to stop it, there are shell scripts that go and try to find that process in the process list and then try to kill it. It's not really doing service management at all. We recently added the status command to a lot of our init scripts in Ubuntu, and we found that to do this, we had to edit every single init script — we had to add a status command to every single one.
And that status command was different for just about every single init script, because different processes are found differently. So there's no service manager. What I mean by a service manager is something on which you can perform four basic commands. You should be able to start a service. You should be able to stop a service. You should be able to restart a service, obviously — and restart is not quite the same as stop and start, because restart has to be atomic: when the restart finishes, it should be started again. And then status: you should be able to see the status of a running service. You should be able to ask, is Squid running? It should be able to tell you yes, and tell you its process ID and various things like that. So that's one of the first goals: provide something we don't have today. Well, kind of — ironically, the System V init daemon has some of this functionality, but we don't use it in Linux, probably because it's quite hard to do.

Another main goal of Upstart was to provide an API for other processes to communicate with the init daemon. It's all very well having a service manager, but if your service manager keeps itself to itself and doesn't really talk to the rest of the system, then it's not really that useful. So one of the main goals of Upstart was that it not only provides the service management commands, it provides them in a way that other processes can access without having to fork and execute the start command on disk and parse its output. The earliest versions of Upstart used a private API for this; the later versions simply use D-Bus, so you can communicate with the service manager over D-Bus. In particular, you've got various commands. You can ask Upstart to get a job by name — give me the Apache service, give me the Squid service. You can list all the jobs.
Obviously, you can start a job, stop a job, restart a job, and query its status and so on from the D-Bus interface. And most usefully, of course, you can create a job from the D-Bus interface. One of the things Upstart doesn't require is that the configuration is on disk. So if you want to just create a service and have it maintained by the service manager, you can provide all the details of that service over the D-Bus interface and start and stop it and so on. If, for example, you have a long-running server or a long-running monitoring service, you can have Upstart manage that for you, so you don't have to worry about the details of the service management.

And also — this is kind of where the confusion comes from — when we're talking about boot performance work, we always talk about Upstart, but that doesn't mean Upstart itself makes you boot faster. What Upstart does, through the primitives it provides and by providing a true service manager, is allow you to eliminate busy loops, sleep loops and race conditions from your boot process. A typical example is that we have in our boot sequence a loop which waits for the root file system device node to appear, and then another loop after that which waits for that device node to be set up — because if that device node is LVM or RAID and so on, we actually need to keep spinning until the RAID is activated. With Upstart, the idea is that by making these things services, by managing the tasks, you get notification that something is complete, you have atomic status notification, and things can chain off each other — services can chain off each other.
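To make that concrete, a minimal Upstart job definition might look something like the sketch below — the service name and path are hypothetical, and whether the definition arrives on disk or over the D-Bus interface, the fields are the same:

```
# Hypothetical job "my-monitor" -- an illustrative sketch, not a shipped job
description "long-running monitoring service kept alive by Upstart"

# restart the process automatically if it dies
respawn

# run the daemon in the foreground so Upstart can supervise it directly
exec /usr/local/bin/my-monitor --no-daemon
```

The point is that the job file says *what* the service is; the starting, stopping, restarting and status-tracking are the service manager's problem, not a shell script's.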
You can effectively eliminate all the sleep statements and busy loops and so on in your boot sequence, and by stating what the requirements of a particular service are, you can get rid of race conditions — and nobody likes race conditions.

The other point about Upstart, which wasn't quite in its original design — but only because the term didn't really exist when it was designed — is that Upstart is intended to be part of what we call the plumbing layer, which is quite day-glo and colourful here. I do apologise, because I'm sure there are pieces of the plumbing layer which have been missed out of this graph, but this is roughly the stack I need to bring up an X server and use a desktop session right now on this laptop. The plumbing layer is a sort of new term; the first time I really heard it being used was last year — Upstart's two and a bit years old now, so that's why it's not part of its original design. The plumbing layer has only really been described as a single group of processes in the last year or so. The kernel sits beneath the plumbing layer, your desktop session above it. It's all of the individual pieces that allow you to have a desktop running as an ordinary user while system hardware is being controlled by privileged processes. So you have udev sitting there: it talks to the kernel and receives notification of changes to the hardware and to system state. udev feeds this information to HAL or DeviceKit — I don't think David's here, but if he was, you could ask him about DeviceKit; it's a replacement for HAL coming in the next couple of years. Those announce and provide services and objects for the pieces of hardware on your machine over D-Bus, which allows other system daemons to talk over D-Bus as well and provide service information.
Avahi provides information about services available on the network — printers, other machines, SSH servers, HTTP servers, iTunes shares and so on. NetworkManager obviously uses the hardware information from HAL to gain networking information, bring up network cards and set those up. PulseAudio lets you manage multiple sound cards and multiple channels on those sound cards, and lets multiple applications talk to them — a sort of sound mixer, but one that's aware of multiplicity in both directions. These use the ConsoleKit and PolicyKit libraries to do authorisation, which lets us handle systems that have multiple seats or even multiple CPUs. So we can deal with systems that have five consoles, each of which has three keyboards and mice and so on. We actually know, when a USB stick is plugged in, which console of which seat it was plugged into, and can provide authorisation only for the user logged in at that seat — and all of these services are aware of it.

And Upstart naturally fits into this plumbing layer. In particular — I have the worst clicker in the world, sorry — the point at which Upstart appears is alongside the lot of it. It doesn't provide services, it doesn't provide networking, it doesn't provide hardware; what it does is keep the rest of the system running. So Upstart would be supervising the udev daemon, making sure it's working, supervising DeviceKit, supervising the D-Bus daemon, supervising the other policy daemons, and by communicating over D-Bus it in effect provides a plumbing-layer service for services. You can use it to start and stop services on demand. One of the areas where that comes in useful is that you can start doing things like having plumbing-layer components like the Bluetooth stack only started if you've actually got a Bluetooth device, and so on. So it fits in somewhere alongside the lot of it.
So that's kind of what Upstart's for and where it fits into the ecosystem at the moment. Now I just want to give a bit of a tour over the history of Upstart up to this point. The first version of Upstart came out in August 2006 — two and a bit years ago now. In fact, it never really came out: we didn't put the packages in many places and it never went into a release. It was actually developed at the Ubuntu developer sprint — in Wiesbaden, I think it was, either just before or just after LinuxTag. It was quite simple in terms of jobs — you couldn't define a service by much more than a command to run and maybe some scripts to set it up and tear it down — but the events part of it was quite complicated. It actually turned out to be far too complicated at the time; it maintained a history of events and so on, and it had a syntax which would look quite awkward by today's standards.

The version we first really released in Ubuntu was in Ubuntu 6.10, and that was 0.2.7. The event system was vastly simplified. The event system is one of the cores of Upstart: you have a service manager which can start and stop your jobs, and you have an event system that can automatically start and stop jobs. The events are generated by hardware on your machine and by software on your machine, and thus you can build up service management and manage services which communicate with each other, as well as with the rest of the system, and start and stop themselves. The IPC — the actual communication with Upstart — was very home-brewed. It was a Unix domain socket you talked to, with various messages you were expected to pass. It was never really adopted: it was very odd and arcane, very much designed for the initctl control tool which comes with Upstart, and it turned out to be quite difficult to use.
We tried writing a simple GTK front end, and it turned out that the asynchronous, out-of-order messages it tended to deliver were really not suitable even for a graphical program which was itself expected to be out of order and asynchronous. But Upstart 0.2 was probably the first version that got the start on / stop on syntax that Upstart still carries to this day.

The major milestone in Upstart's early development would be the 0.3 series: 0.3.8, which was in Ubuntu 7.04 and 7.10, and 0.3.9, which was in Ubuntu 8.04 and 8.10 — the last two releases — and also in Fedora 10 and 11. I think it was Fedora 10 that it first went into, and Fedora 11 is obviously the next release, due out about now. This is the very stable version of Upstart. It was intended for distributions to be able to deploy, test out and experiment with. One of the key things about it is that it completely backwards-emulates System V init — the actual sysvinit daemon, not the scripts — and by doing that it can run the actual System V rc scripts just as well as the System V daemon can. This has allowed us to keep it deployed for a couple of years, making small changes to the code and running experiments — we have experimental complete Upstart-based boots and so on — without ever worrying about a flag day. At no point in its development do we need a flag day where everything switches to Upstart; we can keep support for init scripts around forever and just gradually phase them out over time. Fedora is operating much the same way: they've replaced the System V init daemon with Upstart but haven't yet replaced their init scripts with Upstart jobs. The slide there has a description of some of the changes. There's actually been a major rewrite since then: Upstart 0.5 came out last year.
This was a fairly large rewrite of Upstart, basing it around D-Bus. The D-Bus version came from the realisation that in the plumbing layer, D-Bus was really becoming the central process. It's now no longer really conceivable that you would have a machine without D-Bus installed on it — minimal servers and so on, maybe, but even in the embedded space D-Bus has become the standard daemon for communication between the different parts of an embedded mobile device and so on. This had a large effect on the development of Upstart, and I re-engineered it to be based on D-Bus: it uses D-Bus internally between initctl and Upstart, and all jobs and all properties are available over D-Bus.

There was a kind of controversial feature in 0.5. One of the things I've been trying to work out is how to supervise daemons. Daemon processes have an annoying habit of forking and going off into the background, disconnecting themselves from the process that ran them, and this is annoying because you can't really supervise them. Many other init daemons have said, well, just don't fork off into the background — but that doesn't work either, because then you don't know that the daemon is actually ready. If you take most servers, they don't fork into the background until they're listening on their well-known port or they've got their well-known name registered on the bus, and this is very useful because you can use the fork as a notification of readiness. So we wanted to be able to supervise these daemons as they are, but the kernel doesn't really provide many interfaces for this — anyone who follows LKML will know I've sent a few patches over the years to try and remedy this, the latest of which needs to be rewritten again. But Upstart 0.5 has a feature to do this from user space using the ptrace syscall: it can ptrace daemons and follow their forks and execs — and then, shortly afterwards, it tends to hang or crash the machine.
It turns out ptrace isn't really that reliable. One of the other main changes that came in 0.5 was operators for events, so you can say "or" and "and" in the start on and stop on expressions. But this didn't work out so well, which I'll go into in a bit. So that's the history of where we've been. The other thing about Upstart 0.5 is that it's not actually been deployed anywhere yet. We've deliberately not deployed it in Ubuntu because it's not as stable as 0.3, and I know Fedora has deliberately not deployed it because Fedora 11 is going to be the base for RHEL 6 — so again, they don't want a relatively unstable daemon being as core a part of the system as process 1. But it's the base for current and future development.

So one of the problems with Upstart, which I want to talk about, is that while we've got the service manager part of it very stable and very solid, we haven't really got the syntax for defining when services are run very well. In System V init, it's easy: you just put the init script in a particular directory — rc2 or rc5, depending on whether you're using a Debian-based system or a Red Hat-based system. In Upstart, you have to define it using events. A simple multi-user service — well, it's not called multi-user if you're on Red Hat, but a simple service that runs in many runlevels — would be defined in Upstart with something like the slide: start on the runlevel event with 2, 3, 4 or 5, but stop on the runlevel event changing the runlevel to something that's not 2, 3, 4 or 5. There are lots and lots of problems with that. You can easily get it wrong. You can easily mismatch the two sets of arguments — and if you do mismatch the two sets deliberately, it's often not obvious why you did that. It really isn't the nicest syntax. And that's the simplest possible example.
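Rendered as a job file, the current-era form being described looks roughly like this — my reconstruction from the talk, and the duplicated runlevel lists are exactly the mismatch hazard in question:

```
# runs in runlevels 2 through 5 -- 0.3/0.5-era event syntax (approximate)
start on runlevel 2
start on runlevel 3
start on runlevel 4
start on runlevel 5

# ...and must separately list every runlevel it should NOT survive into;
# nothing checks that the two lists agree with each other
stop on runlevel 0
stop on runlevel 1
stop on runlevel 6
```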
So one of the big changes we're going to be making for Upstart 1.0 is in this kind of syntax. In Upstart 1.0, the syntax for that will be just: while runlevel 2, 3, 4 or 5. You could also express that as while runlevel 2 or runlevel 3, and so on. It merges the two syntax lines into one, and it matches on the current runlevel as a state.

Another example is a single dependency of a service. This would be a service that just depends on D-Bus and needs to be running while the D-Bus daemon is running. Right now, you'd have to say start on started dbus — started is an event that the D-Bus job emits when it has started; it's one of the internal events of Upstart — and you'd have to say stop on stopping dbus, which is an event emitted when D-Bus is about to be stopped, or when it has been detected that it's died. Again, there are lots of little problems with this. The difference between started and stopping is not entirely obvious — and OpenOffice throws a cunning error into the slide here. If it had died, you'd want to start and stop on different events again; it's tricky to express. So 1.0 has a much simpler syntax, again just: while dbus. While the D-Bus daemon is actually running, you want to be running.

Upstart has a neat feature: you can insert things as dependencies of other things. An example of this kind of job is Tomcat. If the Tomcat service is installed, it needs to be running while Apache is running, as a dependency of Apache. I don't think there's any other init daemon that lets you do this, but in the Tomcat job you can actually define that it runs all the time Apache itself is running. If you started Apache, Tomcat would get started automatically and would be running before Apache finished starting. If you stopped Apache, Tomcat would come back down again.
Now, the interesting thing is that the syntax for this is very slightly different from the syntax for a normal dependency — quite confusingly so. You just swap the -ing and -ed endings: started becomes starting, stopping becomes stopped. That's not exactly the easiest syntax in the world to use. In 1.0 I've introduced before, so you just say before apache. That's actually still slightly contentious — I'm not entirely convinced before is the right keyword, but it suffices for now. You simply put before apache in the Tomcat job. It has an interesting side effect, which I'll come to later, but it does make the service you're defining a dependency of the apache service.

I want to show where the difference lies, if you're used to the current version of Upstart. Jobs have a waiting state and a running state. They move from waiting to running when they're started, and from running to waiting when they're stopped. starting is emitted when they're moving from waiting to running, started when they reach running; stopping is emitted moving from running to waiting, stopped when they're back at waiting again. So you have this four-step process. The starting and stopping events are interesting because they actually block the service: if you start a service, the starting event has to complete before the service actually starts. This is what makes the inverse dependency case work. The while part of the syntax matches when the job is actually in the running state, and the before part matches from the waiting state — and again with the blocking, so if you start a process and it has things listed as before it, it waits for those things to be started first. And if you want to define your own spans — running from when something's starting until it's stopping, or from when it's first started until it's first stopped — you can do this with from and until.
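Putting the proposed 1.0 forms side by side — the exact tokens here are my rendering of what the talk describes, and the syntax was explicitly still in flux at the time:

```
# state-based runlevel matching: one line replaces the start on/stop on pair
while runlevel 2 3 4 5

# simple dependency: run only while the dbus job is in its running state
while dbus

# inverse dependency (the Tomcat case): become a dependency of apache,
# started before it finishes starting, stopped after it stops
before apache

# custom span: active from one transition until another
from apache starting until apache stopping
```

Each of these is a condition on job *state*, not a pair of events to catch, which is the core of the change.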
It's an easy way to do it. I don't really know of any reason you'd ever want to do that, but there might be people who do. So this brings us to one of the bugs of 0.5, and one of the main reasons 0.5 never really got deployed. Say you wanted a multiple dependency — a dependency on both D-Bus and udev. This is actually more or less what HAL looks like today: it depends on both D-Bus and udev, and udev and D-Bus have no interdependency. You might try something like start on started dbus and started udev, and stop on stopping dbus or stopping udev. And it would mostly work. You'd put that on your system, you'd boot up: HAL would wait, D-Bus and udev would both be started, then HAL would start. And when you shut down again, HAL would stop first, before D-Bus and udev got stopped.

The point at which it doesn't work is if you then restarted D-Bus or restarted udev. With this job, if you restart D-Bus, HAL stops — HAL does exactly what you'd expect. D-Bus starts back up again. HAL does not start. And this turned out not just to be a bug; it turns out to be a basic problem with the way Upstart was processing events. The reason HAL wasn't restarted is that it was waiting for udev to start. It didn't know that udev was already running, because it was waiting for an event — it wasn't working on a state. So it would just sit there waiting. If you restarted both D-Bus and udev, HAL would restart, but you very, very rarely want to do that. Now, the 1.0 syntax here is obviously exactly what you'd expect: while dbus and udev. You get a much easier syntax, but without the restart bug. while works on state: it actually makes sure that D-Bus is running — sorry for the dry throat — it matches against the dbus job inside Upstart.
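The HAL example, old and new — the 0.5 form below is the one with the restart bug being described, and the while form is the proposed fix; both are sketches reconstructed from the talk:

```
# 0.5-era HAL job: works for boot and shutdown, but if you restart dbus,
# hal stops and then waits forever for a "started udev" event that never
# arrives, because udev never stopped
start on started dbus and started udev
stop on stopping dbus or stopping udev

# proposed 1.0 form: evaluated against job *state*, so restarting either
# dependency just toggles the condition false and back to true
while dbus and udev
```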
So if you were now to restart D-Bus, "while dbus and udev" goes false because D-Bus isn't running; D-Bus restarts and becomes true again; udev has remained true all along, so the while condition is again satisfied — so HAL restarts, which is what we actually wanted.

And you can combine these in quite interesting ways. Here's a very silly job: it runs while udev, and before hal or before devicekit, and from sunrise until sunset. I was writing this as the silly example when I realised there's an interesting side effect of the before: HAL and DeviceKit now depend on udev as a result, because you've inserted your job as a dependency of HAL or DeviceKit and your job depends on udev — and you can't run either of them at night, either. You can create very interesting things. And as soon as I did this, someone came up with a use case for exactly why you would want to do it: it comes up for battery power status, which I'll talk about in a minute. So you can create very interesting side effects with this, but it works.

Now I want to go through a basic example of what you might do as a server administrator. Imagine you've got a job in Upstart 0.5 which starts on Apache and MySQL, and stops when either of them stops. That might be a lamp job. This is something you can't really do today: you want to define a runlevel, but a runlevel just for your LAMP server. So you might define it like the slide — start on apache and mysql, stop on apache or mysql stopping. You then ask Upstart 0.5 to start lamp. It'll tell you lamp is now running. But, annoyingly, neither Apache nor MySQL is actually running. This is because the current versions of Upstart are very much event-based by design: they rely entirely on the knowledge and passage of events to process things. And when you start lamp by hand, you override it as a system administrator.
When you started lamp, you overrode its start on condition. So it would still stop if Apache or MySQL stopped, but starting it doesn't do anything that makes them start. In fact, to make them start, you'd need to go and edit the Apache and MySQL jobs to put in start on starting lamp, and so on. It doesn't do what you want it to do at all.

So 1.0 is a bit more interesting. In our lamp job, we could put while apache and mysql — run while both of those are running. And if you try to start lamp: "cannot start lamp, Apache and MySQL are not running". Well, that was the first idea. In fact, that's completely wrong — you don't want to do that at all. I tend to think any software which tells the system administrator off for doing something, or complains that it can't do something and then gives exact details of what it can't do and how to fix the problem, is just being annoying. If the software can tell the system administrator exactly what commands they would effectively need to run, it's just being annoying at you — it's telling you what it won't do itself. What we actually want to see is lamp running. If we've got a job called lamp defined on Apache and MySQL, and we try to start lamp, we want to see it running, and we want to see Apache and MySQL running as a result. This is something a dependency-based system tends to do very well, but an event-based system doesn't. And this is something that does work in Upstart 1.0. It works because the while condition works in both directions, basically.
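The 1.0 lamp job is then just a state defined by its dependencies — a sketch, with the job and service names as used in the talk:

```
# /etc/init/lamp -- a named state defined purely by its dependencies
description "LAMP stack state"
while apache and mysql
```

Because the condition reads in both directions, starting lamp would pull apache and mysql up, and starting both of them by hand would make lamp show as running.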
If you try to start the lamp service, Upstart knows apache is false, it knows mysql is false, it knows they are Upstart-defined jobs and it knows how to start them, so it can start them for you. So you start lamp, and it brings Apache and MySQL up; and if you were to stop lamp again, they would subsequently go back down, because the service that brought them up has gone away. It's almost more interesting the other way up, as well: if you start Apache and start MySQL without lamp actually being started, it would still say lamp is running. So the system works in both directions. If you start lamp, it starts MySQL for you and Apache for you; if you start Apache and MySQL separately as a system administrator, the lamp service is also running, because its conditions to be running are fulfilled. So you can define little states like this.

This has a nice side effect: runlevels can now just go away. You don't need 2, 3, 4 and 5 with some arbitrary difference between them. You can define the specific states that you want. If you want a lamp state that holds while your LAMP services are running, or a customer-facing-website state for your customer-facing website and its service dependencies, you just define a file called customer-facing-website and list its dependencies, and it works in both directions to start those individual services. It will say your customer-facing website is running; if you stop customer-facing-website, it will also stop those services again. So you can quite easily get rid of levels two, three, four and five, except for compatibility. Playing with this, you can define arbitrary states — system states start to become easier when you define them as these lists of dependencies. You can define a battery state.
So a battery state, or a power state, would define what services should be running when you're on battery power and what services should be running when you're on AC power. To make a service not run on battery power, you simply omit it from the battery state; then when you switch from AC to battery, the dependency goes away and the service stops. If you didn't want your database server running on battery, you'd only have it listed in the AC-power state. And the AC-power and battery-power states can themselves depend on network and hardware and all sorts of similar things.

Other things coming in Upstart 1.0 are some changes to the way that events work. Events get used in various ways — what started off as just a simple string grew arguments and environment as it went along, and it turns out you can do a lot with them. So you have things like signal events. These signal that something has occurred; that's all they do. You don't have to remember that it occurred afterwards — it's a transitory event. The typical example of this is the Ctrl-Alt-Delete key. The kernel sends a signal to init — to Upstart, in this case — and Upstart emits this as an event and allows you to hook on to it. Every time you press Ctrl-Alt-Delete, you get a control-alt-delete event. The interesting thing is that you don't care whether the user pressed Ctrl-Alt-Delete at some point in the past; you want to do something when it's pressed. You could define quite a simple Upstart 1.0 job: it runs while multi-user — multi-user being a runlevel-like state — and on control-alt-delete it would run wall to beep at all the users. More interestingly, if you just keep holding down Ctrl-Alt-Delete, you'd get incrementally more and more wall messages — it doesn't stop the first time it's run.
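The Ctrl-Alt-Delete job just described would be something like this sketch (the 1.0 syntax and the wall message wording are my rendering, not a shipped job):

```
# beep at everyone on each Ctrl-Alt-Delete press, but only in multi-user
while multi-user
on control-alt-delete
exec wall "Ctrl-Alt-Delete pressed"
```

Note that each control-alt-delete event spawns a fresh instance of the exec, which is exactly the repeated-wall-messages behaviour being described.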
You might even get overlapping wall messages if you press it fast enough — each individual Ctrl-Alt-Delete press spawns a new instance. So if you ran a dbus-daemon on control-alt-delete and held the keys down, you'd get a lot of dbus-daemons. Don't do that.

Another way events get used relies on the property that when you ask Upstart to emit an event on your behalf, it doesn't reply to say the event is complete until services have been stopped and any tasks run on that event have completed. This allows many daemons to use Upstart as a dispatch daemon. NetworkManager can say "the interface is up", and Upstart will respond only when any changes to the system resulting from the interface coming up are complete. acpid is another example. Right now it has a .d directory in which it runs various shell scripts; you can quite trivially turn that into an Upstart event and have those shell scripts inside Upstart's configuration instead. And here you do want to know about the completion of these — unlike Ctrl-Alt-Delete, where you don't care that the event completed, it's just run every time the key is pressed. So acpid might emit the suspend event into Upstart; Upstart runs various shell scripts and tasks, it might stop services, it might start services — there may be services you want started or stopped on suspend — and then it reports to acpid when all of the side effects of the suspend event are complete, which allows acpid to take further action. In fact, I think in this case acpid doesn't do much more than run the scripts, but reboot and shutdown are the typical example of this kind of event: you run the reboot command, it emits rebooting to Upstart, there's a whole bunch of processing, and when Upstart comes back and says the reboot effects have been completed, the reboot command actually calls the reboot syscall.
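One of acpid's .d scripts, recast as an Upstart task, might look like the sketch below — the event name and the script path are assumptions for illustration. The key property is that acpid's emit call would not be answered until every such task had finished:

```
# run a former /etc/acpi/*.d script from Upstart instead; acpid's
# "suspend" emission doesn't complete until this task has exited
on suspend
task
exec /etc/acpi/local/prepare-suspend.sh
```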
So that's how those dispatch-style events get used. Then you have state change events, which indicate that a service has changed state, or a network card has changed state, or something else. Probably one of the biggest changes in 1.0 is that these got reversed from 0.5: what used to be 'stopping postgresql' changed back to 'postgresql stopping', which was how it was before. One of the biggest complaints about Upstart is that it's not very well documented, and it's not very well documented because we change things too often. It's still 0.x, so we've still never deliberately asked people to move over to Upstart jobs, because we want to get things right first.

So this allows you, for example, to run something when the PostgreSQL server stops with a failed result; failed means that the daemon crashed, and you can obtain the reason it crashed from the exit signal or exit status. You might want to run some other command: back up your database, grab the logs, vacuum it, store a backup, who knows. It allows you to run things when other things change state. Again, if PostgreSQL was repeatedly stopping, you'd get multiple copies of the script running. You can do other things as well. You can do this when something starts, so you could have 'on exim starting, do some script', which maybe delivers previously held messages, or 'on exim started'. The difference between that and 'while exim' is that with 'on', if exim was to stop, your script keeps running, which may or may not be what you want; if it's not what you want, use 'while' instead of 'on'. That's the difference between the two of them.

Another thing we've finally introduced is something we've been talking about for quite a long time.
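A state-change hook like the PostgreSQL example might be written along these lines. Again this is a sketch: the event ordering ('postgresql stopping'/'stopped') follows the talk, but the `RESULT`, `EXIT_STATUS`, and `EXIT_SIGNAL` variable names are borrowed from later released Upstart versions and may not match the 1.0 design exactly.

```
# Hypothetical job: react when the postgresql daemon crashes.

on postgresql stopped RESULT=failed

script
    # The event's environment carries the reason for the crash.
    logger -t pg-watch "postgresql died: status=${EXIT_STATUS:-?} signal=${EXIT_SIGNAL:-?}"
    # ...back up the database, collect logs, and so on.
end script
```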
We've been talking about it for a long time because it was never clear whether it's what we actually wanted to do or not, but I've taken the decision that it is what I want to do, which is to replace cron and atd properly. At the moment, we've just been leaving cron and atd running and not integrating them with Upstart. We considered integrating them with Upstart, so that they used Upstart services on timers, but we've now made the decision that they're going to go away and init itself will provide these features. There turn out to be a lot of good reasons why you would want to get rid of these two daemons and fold them into init. The behaviour doesn't seem to overlap initially, but when you actually look at it, cron has behaviours you want init to have, and init has behaviours you want cron to have.

So there are timed events: there'll be a sort of daily, hourly, weekly, monthly kind of event that Upstart will generate, so you'd be able to have something run daily; an Upstart job would just say 'daily' and then a script or whatever. You can have specific timed events; the syntax for that isn't quite worked out yet, but 'at 8 p.m.' is a good example: if you want something run daily at 8 p.m., you can just say that. You've got at-like behaviour as well, the 'in two hours' kind of behaviour, so I want this to run in two hours' time; 'every two hours' step-like, repeating behaviour; and you can then start offsetting events from each other. This is where the at-like behaviour is something you want in Upstart: you want to be able to say to init, I want to run something 45 seconds after startup. You don't want to run it at startup, you don't want it in the boot sequence, but 45 seconds later might be about right, and you'd run it heavily ionice'd and nice'd so that whatever process you're running doesn't take up any particular amount of CPU.
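The speaker says explicitly that the timed-event syntax wasn't worked out yet, so the stanzas below are purely illustrative guesses at what the examples he gives might look like.

```
# Hypothetical timed jobs; none of this syntax was final.

# cron-like: run every day at 8 p.m.
on daily at 20:00
exec /usr/local/bin/nightly-report

# at-like: run once, 45 seconds after startup, niced down
on startup +45s
nice 19
exec /usr/local/bin/deferred-cleanup
```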
You can combine them with other 'while' states, like every 10 minutes while a network device is up, to run a job over and over again. So you can build these up. And we're not only talking about this overlap: cron has, for example, an @reboot command, which is just 'on boot', and cron also sends mail on failure and on bad return codes. It would be really nice, I think, if init sent you a mail with Apache's output when Apache crashes. So I think there's a lot of overlap between those two daemons.

I'm just going to quickly jump through actions here. This is something that's less well defined. An action would be, for example: you want to be able to support reloading the syslog daemon, so when you run 'start syslog reload', you want it to send a HUP to the master daemon. You want to be able to execute another process, so you want an 'apache graceful' that runs apache2ctl graceful. These are defined inside the Apache job file. You might want to have a rotate-logs kind of sub-job action that's run daily, so you can define cron events inside the same file that defines your service. And you can define, for example, an rsync sub-daemon which runs while there's a network device up. So you can have a daemon running, and when there's a network device up, you start up a second daemon that runs and pairs with it. You can actually have them unattached: right now those can only run when the parent daemon is running, but unattached actions can run at any time, so even if Apache is stopped, 'apache backup' or 'apache test-config' would still work. You can even define completely separate services in this manner; there's an argument whether Samba should be defined as two jobs, smbd and nmbd, or just one job with smbd and nmbd blocks inside it, since they're sufficiently related. So that's actions. I'll just jump over that quickly, because it's not well defined enough yet to be a plan.

Now, Upstart 0.5.x.
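Pulling the action examples together, an Apache job file with actions might look roughly like this. The speaker stresses that actions were not yet well defined, so the `action`/`unattached` stanza names here are invented for illustration; only `apache2ctl graceful` and the general idea come from the talk.

```
# /etc/init/apache2 (hypothetical syntax throughout)

exec /usr/sbin/apache2 -DFOREGROUND

action reload
    exec apache2ctl graceful        # run via 'start apache2 reload'
end action

action rotate-logs
    on daily                        # cron event inside the service file
    exec logrotate /etc/logrotate.d/apache2
end action

unattached action test-config       # works even when apache2 is stopped
    exec apache2ctl configtest
end action
```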
The plan is to do very rapid releases, at least monthly from now until 1.0 is ready, maybe even weekly if particular things land quickly enough. So 0.5.1 was released a couple of weeks ago; it's feature-identical to 0.5.0, but it has some major code-base improvements, and it was designed to bring in the code-base improvements needed to do the further development. 0.5.2 is intended to arrive this month, and that brings some major changes to the D-Bus API, allowing you to get various properties on the objects that you can't get right now. Then there's 0.5.3 and so on planned after that, with whatever changes occur; there's no particular full list of changes in a particular order, we're just developing those as and when.

Upstart 0.10 will be the first release with the new job syntax, and I expect that around June 2009, so there'll probably be four or five releases of Upstart 0.5 before then. Then we'll switch to a 0.10 development series for bugs and everything else that comes up, and this is intended to arrive in the next version of Ubuntu. So if you want to play with this version, you can just use what's going to be Ubuntu Karmic, and I suspect Fedora 12 will probably pick it up as well in its development process. Then for Upstart 1.0, the current target release date is the next Linux Plumbers Conference in September. That's ambitious, and if I wimp out before then and decide it's not 1.0 because it's not feature complete, it might be declared something less than 1.0, but I doubt that. So a lot of these features should be ready in time for Plumbers later this year.

Okay, so that's my quick tour, so we've got about five minutes left for Q&A. Yes? Sorry, I didn't catch the last bit. So the question was: would it be easier to get it integrated into the distributions if we didn't keep futzing with the syntax? Yes, probably.
It's one of the reasons there's not much documentation on the current syntax: we recommend that any distribution that wants to pick it up doesn't deploy Upstart jobs just yet, that they stay very close to upstream, and that we do it in step releases. This does cause some problems, because there are some distributions which aren't yet happy to deploy it because it might change, especially since some upstreams want to be able to ship Upstart job files and come to me to find out how. It would probably be easier, but at the same time, it's release early and release often. If we'd set the job syntax in stone when we did 0.1, then we would right now have something that didn't work. So it seems better to release it early; while it's 0.x it's still an alpha really: test code, alpha code, not beta code. When we actually declare a beta release later on, and eventually a final 1.0, then the job syntax will be set in stone. It would be easier to get it integrated, yes, but it would be much harder to develop it and get it right if the job syntax had to stay set in stone all the time. Once Upstart is 1.0, the intent is that there's a settled job syntax that will be documented and won't change. If it changes after that, it will be Upstart 2.0, a major rev, to make it obvious.

Yep, there's a question there. So if you start Tomcat, how do you know that it's running before you start Apache? Is that the question? Right, so how do you know when Tomcat isn't merely starting but has actually started, so that it can be used by Apache? Right. Yeah. The thing is, most daemons, most services, aren't ready when you've just run the process. They need to listen, they need to set up some state, they need to have a socket open or need to connect to something. So how do you know they're actually running? There are various tricks. First of all, many of them, if they daemonise, don't daemonise until after they've done all this.
That's very good behaviour; there seem to be two schools of thought, and most processes don't do this, but the ones that do it do it because it allows them to report errors if they have problems. This is one reason why I've been very keen to get tracking of daemonising processes working, because then you just tell Tomcat that it can fork off into the background and be a daemon, and it won't do that until it's actually ready. Otherwise, if you ran 'tomcat; apache' in a shell, that wouldn't work either. You can look for a socket, so you can actually monitor for when it's actually listening on a port; you can do that. Tomcat might open the socket before it's ready, but it's not going to be calling accept until it's ready, so anything connecting would block anyway. In the case of D-Bus services there are bus names, so you can say 'expect dbus name' and then wait for the service to publish its name on the bus. It might not be ready, but it's not going to be accepting and processing D-Bus messages until it is ready, so you can get away with that. Many processes might not be fully initialised, but fortunately most of these calls tend to block, so it's not so much of a problem.

Time for probably one more question. Yes, Daniel. Yes: if we replace atd and cron, how do we intend to do user partitioning, so you can set cron jobs up as a normal user? It's actually relatively easy. We're going to allow users to define any kind of job and service. We allow users to define their own services: users can define their own Apache jobs, their own everything else, provided they've got permission to run those. It doesn't run them as root; it runs them as the user. D-Bus tells us the username of somebody making a user request, and PolicyKit tells us whether they're authorised to make that request, so we can provide very simple authorisation for jobs. Most probably there'll be a .init directory in users' home directories, or something along those lines.
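The readiness tricks described here map onto job stanzas. The `expect daemon` form exists in released Upstart; the D-Bus form is described in the talk as planned, so its exact spelling is a guess, and the Tomcat path is illustrative.

```
# Readiness tracking for a Tomcat-style service (sketch).

# 1. Daemonisation as the readiness signal: Upstart follows the
#    fork and only considers the job "started" once the process
#    detaches, which a well-behaved daemon does only when ready.
expect daemon
exec /usr/share/tomcat/bin/catalina.sh start

# 2. D-Bus name as the readiness signal (planned, hypothetical):
# expect dbus org.example.SomeService
```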
Either of the two; both have advantages and disadvantages, and anything in there is run as that user. That's exactly the point. PolicyKit is there to govern things like a user emitting a control-alt-delete event. If a user emits a control-alt-delete event, they're allowed to do that; however, it would only start and stop their own services. If they're authorised by PolicyKit, that control-alt-delete event could start and stop root's services as well. PolicyKit is what allows you to breach the user barrier, and that allows us to have non-privileged parts of the system sending events to privileged parts of the system. If a user isn't authorised, it doesn't refuse the event; it just only affects that user's services. You can muck around with your own services, and it doesn't let you do anything that you couldn't do with a shell anyway.

OK, I think that's all we have time for. Thank you very much.