All right, thank you. Everyone can hear? Good. If I speak too quickly, or can otherwise not be understood, please let me know. Sometimes I tend to go a little bit fast when I get excited about a topic, and I can get excited about this topic. OK, I'm going to break this down into four areas. I'll give a brief overview of our project, give an introduction to what SCons configuration is actually like, talk specifically about how to apply it to open source projects, and then talk about some things we have on our future agenda. Before getting to that, though, I'd be interested to find out how many people here have used SCons on a project of theirs. OK, not too many. How many people here have never even seen an SCons configuration file? Great. OK, good. Then the introduction is going to be exactly what we need. Last question: how many people actually used SCons on KDE through the bksys project? Anybody here? Nobody today. All right. OK, very short history. SCons has been around since about 2000, when the predecessor design won the build tool competition at the Software Carpentry contest. That led to actually founding the project in 2001. By 2003 we'd reached about 1,000 downloads a month. Fast forward to last year, which is when we released the last major point release, and now we're in the range of about 6,000 to 8,000 downloads a month. I don't know how many users we have from inclusion in various Linux distributions, but we're still working towards something that we want to call 1.0, and we can get into that if there's time and if people want to ask questions about it. This is what I think of as the 60-second summary when I'm describing SCons to somebody for the first time. The key things that really stand out for people are that the configuration files are, in fact, Python scripts. That comes from at least my personal belief that the problems we all have building software are not language design issues.
There are tons of tools out there where someone got an idea to try to solve their build problem, and they invented some additional language to do it. And what almost always happens is you end up wanting to put programmability into those languages. James Duncan Davidson, for example, is on record as saying he would not use XML if he were doing Ant from scratch again, because you get to some point where static definitions of the dependencies aren't enough. You want flow control. You want loops. You really want first-class data types. So within those Python scripts, SCons is really an API for programming the entire dependency graph. That sets it apart from Make and other traditional build tools. And then the last thing is that we actually implement all of SCons in Python. There are no compiled C modules. That's mainly because we don't want to get into having to support pre-compiled binaries on all sorts of different platforms. We go ahead and let Guido van Rossum and the Python people worry about that for us. Now, within that, these are what I think of as the four key design principles that we try to adhere to. First one, correctness of a build. What do we mean by that? Our default behavior is a correct build. Interestingly enough, there's almost no academic literature about building software, and therefore there is no formal definition of what a correct software build is. Or at least I've not found one. Nevertheless, as software developers, we all intuitively know what a correct build is. And when in doubt, we jump through hoops under the covers to make sure that we detect anything that might make a build incorrect, and build the right things. Secondly, backwards compatibility. I'm famous within the SCons project for treating us as if we were a 7.0-release project. We go ahead and bend over backwards to make sure that we do not break the files that other projects that have chosen to use SCons have actually shipped to their customers.
The reason why is because, of course, if we ship something, even though we're pre-1.0, that's going to break configuration files that people have distributed elsewhere, it's not going to reflect on those people. It's going to make SCons look bad. So we take very seriously the commitment that we have to people who've chosen to use SCons, by making sure their configurations are going to continue to work no matter what we release. Third, performance. Because the default behavior is a correct build, and doing a correct build in all corner cases sometimes takes a lot of extra cycles, what we do is provide configuration options that give the person writing the SCons configuration the chance to say, I want to speed this up, and I want to speed it up by sacrificing a correct build in a weird corner case. The best example of this: we actually go through and look through the entire list of directories that are going to be used to construct the -I options on a compile line. Because if you've got a #include of foo.h, and last time it was found in the sixth directory on the list, and somebody created a new foo.h in the first directory on your list, you need to rebuild, because the compiler would actually pick up a different foo.h file. Now, does this actually happen in real life? 99.9% of the time, no, it never does. So skipping that check is a good trade-off to make for most projects, but you have to explicitly configure the trade-off, saying we're going to accept the fact that the build might be incorrect if somebody puts a new foo.h file in a different directory, in order to get the speed-up for most of our builds. Last thing is to try to make things convenient for people. From day one we've had a dead simple installation and made absolutely sure that people could install it as easily as possible, on any Python version. SCons will install and work correctly even if you're running Red Hat 7.3, which shipped Python 1.5.2 as its default.
This question comes up periodically, about whether we should drop support for old Python versions, but in practice it really doesn't have any impact on people writing SCons configuration files. We write our code so that it works all the way back on 1.5.2, but you can go ahead and use Python 2.5 constructs or anything else from the version of Python you have installed on your system. Second thing for convenience is that simple configurations should really work out of the box, so we try to search for the tools that are going to be used, and we want to make it easy to configure whatever build behavior you desire. I'll go through this pretty quickly. What SCons does well: the real sweet spot for SCons right now is a full-tree build of the entire thing, kind of what a build master or release engineer would do in a commercial environment, especially when those builds get really complicated, because then they can use the Python programmability. Making sure that the build is reproducible; I'll talk more about what I mean by that when we talk about how variables get between different environments. Another thing that SCons does quite well is generated header files; I think we handle them better than any other tool that I know about. You can go ahead and have a dependency on a generated .h file that in fact includes other generated .h files, as many levels deep as you want, and SCons will automatically detect that and correctly decide when the .h files need to be rebuilt. It does very well with creating multiple variants within a single build tree. If you want to push the button and have, underneath one build tree, your debug version, your optimized version, your cross-compiled version for Windows, there are constructs that make it very easy to do that, although I'm not going to go into detail on those in this talk. We handle cross-platform builds, abstracting out the differences between Windows and non-Windows systems. And, as I mentioned, backwards compatibility is a big deal for us.
However, there are things that we really need to do better. First off, single-directory builds. When you've got a whole build tree, SCons wants to know about the whole dependency graph. In fact, it's actually very rigid about wanting to know the whole dependency graph, or has been in the past. That makes it a little bit more difficult when the developer wants to build just one particular thing. The complaint that you'll hear is that it takes SCons 40 seconds or even minutes to figure out that nothing needs to be rebuilt. That's because SCons has been historically rigid about wanting to have the entire dependency graph in memory. That's connected to the next one, which is that we need to do better with performance. That is probably the biggest complaint that we hear from people, and I know that's why a number of projects choose not to use SCons. Related to that, memory consumption is sometimes an issue. In order to try to speed things up, we've done a lot of pulling things into memory, and so we're not necessarily good about freeing that up at the moment, but we're working on that. And then there are some quirky points in the interface that people will pop up on the mailing lists and complain about, but I think that's probably true of any tool. Okay, so let's talk about SCons configuration. What I'm going to do is contrast things that you've seen in Makefiles, and then show you how they'd be done in SCons, and we'll build up a basic knowledge of how SCons does things. This Makefile example is probably as simple as you can get. We're building f1.out from f1.in by copying the source to the target. And this is the equivalent in SCons; that's about the simplest example. There's actually, sorry, there's one simpler, but this works. What we're doing is creating a construction environment, and then calling the Command builder method, which is the generic "I want to execute a command to build something" builder.
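As a sketch of the slide just described (the file names are the illustrative ones from the talk), the Makefile rule and its SConstruct equivalent would look roughly like this:

```python
# SConstruct -- equivalent of the Makefile rule:
#   f1.out: f1.in
#           cp f1.in f1.out

# Create a construction environment...
env = Environment()

# ...and use the generic Command builder: build f1.out from f1.in by
# running the given command line.  $SOURCE and $TARGET are expanded
# by SCons when the command is executed.
env.Command('f1.out', 'f1.in', 'cp $SOURCE $TARGET')
```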
And in this case, it's the same cp command, although we spell out the variable names $SOURCE and $TARGET instead of using Make's cryptic sigils. If we want to parameterize how this works and let people override CP, you do it in Make in the way I'm sure you're familiar with; in SCons, you do it this way. Note that the variables actually exist within the context of a specific construction environment. The other thing is that we can actually enclose a variable in curly braces, ${}, and the curly braces allow you to do some other interesting things, because what's inside gets evaluated as a Python expression. Again, that's outside the scope of this talk. Variable expansion is more like = than := in Make, because the variables are recursively expanded. So if we do this in SCons, notice on the bottom that we go ahead and use $COPYCOM; that doesn't actually get evaluated until it's needed, and then the other variables in the construction environment are recursively expanded. Again, like Make's = assignment, not :=. One of the things that you can do pretty easily with this in SCons, which I tried to remember enough Make magic to figure out how to do in Make and couldn't, is you can go ahead and say, for this particular target, I want to override a variable. In this case, we're doing it on the second line for the f2.out target, just overriding the CPFLAGS setting. But one thing that gets overlooked is that you can actually go ahead and include $CPFLAGS itself in the override. You can include the variable itself, and that will actually stop the recursive expansion of the variable. So it really ends up being very similar to what += does in Make. Now, a difference between Make and SCons: Make imports environment variables automatically, which makes it very convenient to use.
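A sketch of what that slide might look like; the variable names (CP, CPFLAGS, COPYCOM) follow the talk's example, and the self-referencing-override behavior is as the talk describes it:

```python
# SConstruct -- recursive expansion and a per-target variable override.

env = Environment(
    CP='cp',
    CPFLAGS='',
    COPYCOM='$CP $CPFLAGS $SOURCE $TARGET',  # expanded only when used
)

# Uses the environment's (empty) $CPFLAGS.
env.Command('f1.out', 'f1.in', '$COPYCOM')

# Override $CPFLAGS for this one target only.  Referring to $CPFLAGS
# inside its own override pulls in the original environment value,
# much like += in Make.
env.Command('f2.out', 'f2.in', '$COPYCOM', CPFLAGS='$CPFLAGS -f')
```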
The example here is, if we go ahead and set the environment variable CPFLAGS, then the second time we execute Make, the $(CPFLAGS) expansion of course picks up the -p -f from the environment. SCons does not do this; it does not import your environment variables automatically, which is a source of confusion. Among other things, that means your $PATH is not necessarily available to the commands that are executed, unless you make it so explicitly. We do this so that full-tree builds are completely repeatable unless you configure them otherwise. So in SCons, if you set CPFLAGS to -f and then export an environment variable with the setting -p -f, -f is actually still all you see, because that's what's configured in the SConstruct file. I'll show you later how you can actually do that import of the external environment. Now, there are things called builders. We've already seen that Command is one. Command takes an action argument that says, this is the command I want to execute in order to build this particular target. But the fact that we repeat that action in all of those calls makes this a really good candidate for writing a custom builder of our own, so that we can get some other benefits that we'll cover later. Here's how that's done. The first red line on the bottom is where we just make a Builder, saying that the action is the same $COPYCOM variable, which will then get expanded later. We attach it to the environment by adding it to the BUILDERS dictionary. And once that happens, we can go ahead and call a FileCopy method, using the name that we associated with it in the previous statement, and not have to repeat $COPYCOM every time we want to make one of these targets. Part of why you want to do this is because builders can do interesting things. The fact that we're repeating the f1 base name, from f1.in to f1.out, and f2.in to f2.out, can actually be handled by a suffix argument when we create the builder.
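The custom-builder step just described might be sketched like this; the builder name FileCopy and the $COPYCOM variable are the talk's own example names:

```python
# SConstruct -- wrapping the repeated action in a custom builder.

env = Environment(COPYCOM='cp $SOURCE $TARGET')

# A Builder whose action is the (lazily expanded) $COPYCOM command.
file_copy = Builder(action='$COPYCOM')

# Attach it to the environment under the name FileCopy...
env['BUILDERS']['FileCopy'] = file_copy

# ...and now we can call it like any built-in builder, without
# repeating the action each time.
env.FileCopy('f1.out', 'f1.in')
env.FileCopy('f2.out', 'f2.in')
```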
We can say, I want my target suffix to be .out, and then all we have to do when we call FileCopy down below is give it the source name: do a file copy from f1.in, and with the default target suffix, it'll strip off the .in and add .out for us. Notice, however, that the f3 target already has a suffix, so it's not going to get overridden. Now, this seems a little esoteric. Why are we worrying about saving just a little bit of an extra argument that's actually readable because it's explicit? The reason is that all of the stuff I've shown you about how we define builders is how SCons defines the pre-supplied builders that know how to build programs, object files, libraries, LaTeX files, everything. Using that suffix argument, we abstract out the suffix differences between Windows and non-Windows systems, for example. So here, we'll actually get f1.o, libfoo.a, and bar when we run this SConstruct file on a non-Windows system, but it'll build f1.obj, foo.lib, and bar.exe on a Windows system. There are other interesting things that builders can do as well, but now we're going to move on. Oh, one more part of what the predefined builders have: each set of predefined builders has predefined construction variables that it knows how to use. Most of them correspond to things that you're probably already used to from Make, so the Object builder that knows how to compile an object file does so by calling whatever's in $CC. So if you set CC to gcc, that's how you can go ahead and select the specific compiler you want to use, instead of whatever default SCons picks for you after looking around the system. Similarly, the Program builder knows how to use the $LIBS variable, which is where you define what libraries you want to link with. Notice here, we don't have to specify file prefixes or suffixes, in the same way that I showed you how you can set suffixes.
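Here is a sketch of the kind of SConstruct just described (file and library names are illustrative), showing the pre-supplied builders and their platform-dependent suffixes:

```python
# SConstruct -- the pre-supplied builders abstract platform suffixes.
# On POSIX this builds f1.o, libfoo.a, and bar; on Windows it builds
# f1.obj, foo.lib, and bar.exe from the same configuration.

env = Environment(CC='gcc')          # pick a specific compiler via $CC

env.Object('f1.c')                   # -> f1.o   / f1.obj
env.Library('foo', ['foo.c'])        # -> libfoo.a / foo.lib
env.Program('bar', ['bar.c'],        # -> bar    / bar.exe
            LIBS=['foo'],            # no lib prefix or suffix needed
            LIBPATH=['.'])
```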
There's magic that goes on behind the scenes that knows that libraries have a lib prefix on non-Windows systems and no prefix on Windows systems. Now, another thing you can do to help make these SConstruct files more portable is to reuse the objects that are returned by the builders. Whenever a builder is called, what you get back is a list of objects representing the targets that will, eventually, be built by that call. You can then use those as input to another builder. So that's what's happening there: the target object, in this case an f1.o file or an f1.obj file, is the thing that we want to link into the program. The reason you might want to do this is, suppose you've got one particular object file that needs an extra variable defined when you're compiling it. You can call Object multiple times with different variable settings, then string the results all together this way. Now, the last key capability that we're building up to here is that, because all of these things take place in the context of a specific construction environment, and these things are just objects, you can create as many of them as you want, and they stay separate from each other. So you don't have the phenomenon that often occurs in Make, where somebody sets a variable way over in one file, and all of a sudden it's had some unintended side effect in somebody else's build file, because of Make's global variable namespace. So here, what we've done is create one base environment up on top, in the first line, and then create two separate environments: one that sets the -g flag in $CCFLAGS, and another one that sets the optimization flag. We can then go ahead, in this case by hand (there are other ways to do variants, but in this case we're doing it by hand), and in the opt subdirectory build our optimized version.
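A sketch of that by-hand variant setup, combined with the node-reuse idea above (flag values, directory names, and file names are illustrative):

```python
# SConstruct -- separate construction environments for two variants,
# built by hand into opt/ and debug/ subdirectories.

base = Environment()

# Clone the base environment; each copy is independent, so setting
# $CCFLAGS in one has no side effects on the other.
opt = base.Clone(CCFLAGS=['-O2'])
dbg = base.Clone(CCFLAGS=['-g'])

# Reuse the object nodes returned by the Object builder as the
# input to the Program builder.
opt.Program('opt/hello', opt.Object('opt/hello', 'hello.c'))
dbg.Program('debug/hello', dbg.Object('debug/hello', 'hello.c'))
```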
In the debug subdirectory, we'll build our debug version of the same program from the same inputs. Okay, that should give you, I hope, a flavor of what SCons configuration looks like. Actually, at this point, are there any questions that anybody really needs answered before we push on to specifics for open source projects? Okay, keep thinking of them; we'll obviously have time at the end to handle more questions. Okay, when I think about how to do the build and packaging for open source projects, particularly since I'm releasing an open source project myself, the thing that I keep coming back to is that the build system has multiple different types of users whose needs it has to satisfy, and this is where I think we get into a lot of problems with software builds. We've got the project developers, obviously, but we've got our software's end users as well, because hopefully we're writing software because we want people to use it, not just for our own amusement. There are the people who are going to be packaging the project, and then also, in the open source world, we've got the downstream maintainers: the people who are going to take our stuff and not just use it, but put it into Fedora and openSUSE and Ubuntu. For me, this ends up leading to the following requirements, which I think go in this priority order. The end users, I think, are the most important. I think it's more important that we as developers and programmers take on more work so that the end users have a good experience. They want to be able to download and build the project on any system, and maybe they need to provide their own settings when necessary, but hopefully this should be very easy for them to do. Secondly, the developers, the next most important audience: they want that edit-build-debug cycle to be as reliable and quick as possible. And most developers that I know will actually prioritize quick over reliable.
I don't necessarily think that's always the best, but the reality is, when we're trying to get a bug fixed, we want to be able to rebuild quickly. Downstream maintainers should be able to repackage easily, and the internal packagers really should be able to just push a button and go ahead and release the product. The upshot, the summary of all this, is that ideally the build system should just get out of the way and let everybody do their job. So why has this historically been a challenge for SCons with different open source projects? I've already alluded to the first reason. The biggest one is that SCons has historically been very rigid about saying, I've got to have the whole dependency graph in memory in order to make the same build decision this time that I made last time. That's changed recently, but we still have some work to do to really make it easier to subdivide a build into individual per-directory developer builds. But the other reason is that we really didn't do our Autotools functionality right. I'll show you in a little bit that we do have a layer that is similar to autoconf, but we did not provide, on top of that, an automake layer. And right now, for my money, in the Autotools world it's automake that really makes it possible for all of us to provide projects that are packaged in consistent ways, so that end users know that all they have to do is download, ./configure, make, make install. There's an underlying process problem that we had here, and that is that when we did our autoconf layer, we didn't go back and ask, okay, what do the users of this stuff really want from this configuration layer? Instead, what we kind of did is look inside the code and say, hey, this is a really clever way to do it. In contrast, here is something that I think we got pretty right.
If you take a look at the help output from Make, and I'm just going to arbitrarily take the first eight lines here and compare the output from SCons, the fact that they look similar is no accident. We started by saying, okay, the configuration files are going to be different, but everybody out there using Make knows that they can use the -j option to do parallel builds, and they know that they can use the -C option to change to a directory before doing anything. Well, it doesn't make sense for us to go off and reinvent a whole set of new command-line options just because we think we know better. Let's go ahead and use those. So consequently, we actually support all of the command-line options that Make supports, wherever we can do it the same way. We didn't do this with the autoconf stuff, and that's made it more difficult than it should be for people trying to use SCons on open source projects. That's something that we're working on changing, and I'll show you some of the things that we're doing. Okay, so let's go back into the code. Say I've got a really simple project hierarchy here, to show how to do a hierarchical build in SCons. The SConstruct file itself, in the top directory, might look as simple as this. We create a construction environment like we've already seen, and then we go ahead and export that environment. The variable env actually becomes available for other SCons configuration files to use. And in fact, this calls the subsidiary SConscript files. They each get executed, in order, to call the API in whatever way they want to configure the global dependency graph. But each of those files is also executed in its own Python namespace, so that we're insulated from unintended side effects on the whole build. Now, you'll notice this is the source/SConscript file, in this case, down in one of the subdirectories. What we end up doing is import that variable, which lets us actually use this base construction environment object.
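The hierarchical setup just described might be sketched as the following pair of files (the source/ and lib/ directory names follow the talk's example; the per-directory settings and target are illustrative):

```python
# SConstruct (top directory) -- a hierarchical build.

env = Environment()

# Make 'env' available to the subsidiary configuration files...
Export('env')

# ...and execute them, each in its own Python namespace, to populate
# the one global dependency graph.
SConscript(['source/SConscript', 'lib/SConscript'])


# source/SConscript (subdirectory) -- import the exported variable and
# clone it with settings specific to this directory.
Import('env')
env = env.Clone(CPPPATH=['#include'])   # illustrative per-directory setting
env.Program('myprog', ['main.c'])       # illustrative target
```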
And then it's pretty usual — you don't have to, but it's pretty usual — to go ahead right away and just clone that environment, with whatever settings are specific to the software being built from this subdirectory, the source subdirectory, as opposed to the other subdirectory we had up there, lib. And we can go ahead and reuse Python variable names here. We don't have to play games with that because, again, we're in our own namespace in this subsidiary file, which insulates the full build from unintended global side effects. Got some extra chart junk here; yeah, ignore the empty balloon. But this is what our autoconf-like layer currently looks like. The Configure call there creates what we call a configuration context, and within that, you can start calling different checks that are roughly analogous to the AC_ macros in autoconf. We don't have anywhere near as many as autoconf itself does, but there is a mechanism for defining your own custom checks. They return true or false based on whether the check succeeded. And in this case, I'm just doing something dumb and simple and saying, if any of them fail, just go ahead and exit at that point. It would obviously be better to actually put a message of some sort in there. In the second one, the CheckLib, that second argument is saying we're looking for a C library. You can also look for Fortran libraries, D, and various other languages. What we need to do, among other things, is add to this functionality by adding more checks that are autoconf-like, and also make the way it functions more autoconf-like, because — oh, there we go — right now the configuration checks actually happen when the SConscript files are read, automatically, as part of evaluating the dependency graph.
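A minimal sketch of such a configuration context; the specific header and library checked for here are illustrative, not the talk's slide:

```python
# SConstruct -- an autoconf-like configuration context.

env = Environment()
conf = Configure(env)

# Each check returns true or false; on failure we just bail out.
# A real configuration would print a friendlier message first.
if not conf.CheckCHeader('math.h'):
    Exit(1)
if not conf.CheckLib('m', language='C'):
    Exit(1)

env = conf.Finish()
env.Program('calc', ['calc.c'])   # illustrative target
```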
This is another way in which we deviate from the configure-make-make-install model that we're all used to from Autotools, because these checks are not something that you, as the person building the package, initiate specifically by saying ./configure; instead, it's something where SCons says, hey, I'm going to do this for you as part of doing the whole dependency graph walk. That's kind of nice and convenient, except for the fact that you don't necessarily want to do that on every build. So that's another thing we're going to be looking at, figuring out how to make it more autoconf- or Autotools-like. Okay, now, I alluded to the fact that SCons does not automatically import environment variables the way Make does. So if we're going to take an SCons-ified project and try to make it behave as much like Autotools as possible, and we want to go ahead and pick up $CC from the user's environment, here's how you'd do it. Import os, the stock Python os module; os.environ has the user's environment at the point where we executed scons. And what we do is set this ENV variable, capital ENV. What this does is make the environment that SCons was executed with be the same — sorry, the other way around: when SCons executes a command, a compiler, it makes the environment there the same as the environment that you executed scons with. So you'll have the same $HOME, $PATH, and all of that. If you don't do something like this, then one particular failure mode that's very common for new SCons users is: hey, I can execute my compiler from the command line, but SCons says it's not found. What's going on? It's because SCons, by default, does not import the environment, so that the build is completely repeatable by default. But it's a one-liner if you want to do that. Another thing is, if you want to let the user override variables on the command line — say, CC=something on the command line instead of having it be an environment variable — here's how you would do that, in two lines.
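A sketch of both steps, based on the talk's description (e.g. running "scons CC=gcc"); exactly how the slide applied ARGUMENTS to the construction environment is an assumption here:

```python
# SConstruct -- explicitly propagating the external environment, and
# letting the user override variables from the scons command line.
import os

# Commands that SCons spawns (compilers, linkers, ...) will now see
# the same $PATH, $HOME, etc. that scons itself was run with.
env = Environment(ENV=os.environ)

# ARGUMENTS holds the VAR=value settings from the scons command line.
env['ENV'].update(ARGUMENTS)   # visible to spawned commands...
env.Replace(**ARGUMENTS)       # ...and to the construction environment
```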
ARGUMENTS is a dictionary in which we store all of the VAR=value settings from the command line, like CC= or DESTDIR=. We go ahead and update the execution environment with it, so that it's available to the compiler or other tool that will execute to build something, and then we also apply it to our construction environment, so that it's available to us. Yeah, sorry, I'm getting ahead of my balloons here. Okay, another difference between Make and SCons — this one functional, not on the configuration side — is that by default, SCons makes its decisions about what is up to date, or has changed since the last build, based on the file contents, an MD5 checksum of the contents, not on the timestamps. This leads to the following behavior that's a little surprising to people at times: whereas in Make, if we sleep and then touch the input file, we'll go ahead and remake our output file, SCons will say that everything's up to date, because the contents of the file have not in fact changed. It'll actually do this anywhere in the tree. So, for example, if you change a comment in a .c file, and your compiler does not insert timestamps in the object file header, it's obviously going to recompile to exactly the same object file as it did the last time. SCons will recognize that and say, oh, you know something, I really don't have to re-link here. That saves time on the build, but it can be startling, because we're very used to Make's behavior of, hey, I rebuilt this input file, so all of my downstream files should be rebuilt. And we see people on the mailing list periodically working through this scenario and finally figuring out, oh, SCons really did know what it was doing. Okay, so if we're doing all of this extra stuff, the next question is, what can we do to speed up the build? This is a relatively new interface, as of, I think, just last December.
What this line does is configure our Decider function — the function that takes a target file and a source file and uses whatever metadata is available (the content signature, the timestamp, or anything else) to decide: has this source file changed since the last time this target file was rebuilt? MD5-timestamp tells SCons to assume that the MD5 checksum is the same if the timestamp is the same. It's basically putting a timestamp check in front of the MD5 checksum, so you actually get the benefits of looking at the contents without having to open up the file and read the contents every single time. There is one side effect of this, but it's pretty tolerable, and that is: if you build, and then modify the contents of a file within the same second after that file was written, it won't be detected. But again, this is another example of how you can say, oh, I recognize what that corner case is. And it is in the documentation that that's what you're giving up: the ability for a build to be absolutely correct if files change within one second after a build. Another thing that affects the speed of the build is how SCons figures out, or tries to figure out, what compilers and other tools you have on your system, in a set of what we call tool modules. By default, it's a pretty complete, or pretty extensive, set of tool modules. So one of the things SCons does is actually go out and look to see: okay, they may want to compile Fortran, so is there a Fortran compiler here? Maybe they want to do some LaTeX; is there a latex binary here? Maybe they want to do M4; is there an m4 binary here? Over time, we've built up a lot of stuff that gets checked. You can speed up SCons startup times pretty significantly by being explicit about which tool modules you actually want to use.
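The Decider line just described would look like this (the target built afterwards is illustrative):

```python
# SConstruct -- trading a small correctness corner case for speed:
# only recompute a file's MD5 checksum when its timestamp has changed.

env = Environment()
env.Decider('MD5-timestamp')

# Everything below now makes its "has this changed?" decisions with
# the timestamp check in front of the content signature.
env.Program('hello', ['hello.c'])   # illustrative target
```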
In this case, we know that the only things we're going to do are compile with gcc and link with the GNU linker, so we configure the environment with just those tool modules, and don't have SCons go and look for Fortran and all these other things. This is another default behavior we're looking at making better, by changing the checks for the Fortran compiler and M4 and everything else from being done all up front, just in case you want to use those tools, to being just in time, so that SCons would recognize, oh, he's now trying to compile a Fortran .f file; I'd better go out and look for the Fortran compiler, and if there isn't one, then I'll give an error. So at some point we're going to speed up the base behavior so that this isn't as much of an advantage, but it's still a pretty good practice to cut the tool list down to just what you want to use. Other things that I won't go into here: if you leave this talk and want to start experimenting with SCons, we've got a pretty extensive wiki with a lot of recipes that users have contributed. In particular, search for this page, or just search for "go fast"; there are a number of tips that different users have contributed about how to speed up different parts of an SCons configuration. Okay, another thing for open source projects that's useful — and this is, I think, part of why we've gotten as much acceptance on different projects as we have — is that one of the things I absolutely hate is when you get into that tool-upgrade hell: you want to go ahead and download and start using some cool thing that you've read about, and all of a sudden you can't build it, because you need about three different tools, or you need the latest version of such-and-such a library.
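The explicit tool-module configuration described a moment ago might look like this; 'gcc' and 'gnulink' are the standard tool-module names for compiling with gcc and linking with the GNU linker, and the target is illustrative:

```python
# SConstruct -- listing tool modules explicitly, so SCons doesn't
# probe the system for Fortran compilers, LaTeX, M4, and so on at
# startup.

env = Environment(tools=['gcc', 'gnulink'])
env.Program('hello', ['hello.c'])   # illustrative target
```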
We tried to take steps to allow people who are going to make a commitment to using SCons for their projects to avoid this, so that nobody walks away from their project saying, oh, I don't know how to build this, I don't have SCons installed, I'm just not going to deal with it. What we did is make one of our packages the scons-local package. It's a repackaging of the exact same libraries and modules that you get when you install SCons on your system, but it's specifically designed to be extracted into your project: you do the svn add or cvs add, check it into your source tree, and that becomes the SCons that you build with and that your users can build with, without having to have anything else installed. If they've got another version of SCons installed, they can use that too, because again, we take backwards compatibility really seriously, so if it's a newer version it's almost certainly guaranteed to work. We're not perfect, but quite good. But those users of yours who want to experiment with your system don't have to get into tool upgrade hell with different versions of SCons, or even get SCons onto their system if they don't have it. Ah, yep, getting ahead of my balloons again. The way this all works is that these scripts down here know to look in the scons-local directory in preference to the system directories for all of the libraries. Okay, another thing: packaging. This is new as of just two months ago. We've got new package builders that know about a variety of formats. In this case, we're setting a name, a version, a summary, the stuff you've seen in packaging systems. The RPM support actually knows how to do most RPM things. This is the list of package formats that we support right now.
Microsoft Installer (MSI), RPM, ipkg, and then the traditional source and binary tar.gz and zip archives. Then there are the two helper functions here, FindSourceFiles and FindInstalledFiles. FindSourceFiles will walk your dependency graph, and the default behavior is to assume that everything that doesn't have a dependency itself is a source file. That's what should go in the source package, so that someone downloading the source package gets the same things that presumably you're building from. If you get your dependencies right, then you've automatically got a package that anybody else should be able to build; if you test it by extracting that source and trying to rebuild from it and it fails, that indicates there's a problem with your dependencies somewhere. FindInstalledFiles takes note of all the target files that get installed with a builder I didn't show you, called, amazingly enough, Install. The presumption is that those are the things that should actually be installed on the user's system, and those get packaged up into a normal distribution file. Another thing that we haven't had until recently, and that would have made life easier for a lot of open source projects, is being able to add your own command line options. This is how you go about supporting something like --prefix now: the AddOption function over there. Its arguments are exactly like the Python optparse library; in fact, it's implemented as a subclass of optparse. Under the covers we've got some additional logic that goes through your command line and says, oh, is there a --prefix up there? Okay, great, I'll use that. Because if you don't do this, --prefix would look to SCons just like any other target. Then you can use GetOption to fetch the value. The command line wins.
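Under the assumption of SCons 0.98-era APIs, the --prefix handling and the packaging helpers described above might be combined like this (hello.c and the package metadata values are hypothetical examples):

```python
# SConstruct -- a sketch of a user-defined --prefix option plus the
# packaging helpers. AddOption takes the same keyword arguments as
# Python's optparse; without this call, --prefix would look to SCons
# like an ordinary build target.
AddOption('--prefix', dest='prefix', type='string', nargs=1,
          action='store', metavar='DIR', help='installation prefix')

env = Environment(tools=['default', 'packaging'],
                  PREFIX=GetOption('prefix'))   # the command line wins

prog = env.Program('hello', 'hello.c')
env.Install('$PREFIX/bin', prog)

# FindInstalledFiles() collects everything passed to the Install
# builder, which is what should land in a binary package.
env.Package(NAME='hello', VERSION='1.0',
            PACKAGETYPE='targz',      # or 'rpm', 'msi', 'ipk', ...
            source=env.FindInstalledFiles())
```

No default is given here, so GetOption('prefix') returns None when the user doesn't pass --prefix; a real configuration would normally supply a default= keyword as well.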
If the user explicitly gives --prefix on the command line, that's always the winner. If not, it can use a default; I don't have a default on this particular example. All right, we're getting down to the end here, so let's talk a little bit about things that are coming up in the future. Boy, do I want to get into this? Okay, only quickly. This is what the signatures used to look like in SCons, where we store metadata about each target that's built. The big long strings there are MD5 checksums of the build signature of the target; that's what we used to compare to decide whether a target file needs to be rebuilt. We actually significantly changed this file format in order to make SCons less rigid about needing the whole dependency graph. The reason SCons used to need the whole dependency graph is that this signature had to be derived this time in exactly the same way as it was derived last time for the MD5 checksums to come out the same and say, oh, we don't need to rebuild. The problem with that is that you're not just storing metadata about the state of the files, you're storing metadata about the state of the build decision the last time you built. The new file format stores metadata about the files themselves instead. Each entry here is now just a checksum of the contents of foo.o, its timestamp (there's an option that can display it in readable form), and the length of the file; since we're statting the inode anyway, that's additional metadata we can save for free. With this, we no longer have to have the whole dependency graph in memory in order to make build decisions that are consistent with the last time we built.
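The decision logic enabled by that per-file metadata can be sketched in plain Python. This is only an illustration of the idea, not SCons's actual implementation, and the `stored` dictionary is a hypothetical stand-in for an entry in the signature database:

```python
import hashlib
import os

def source_changed(path, stored):
    """Sketch of the 'MD5-timestamp' decision: cheap stat-based checks
    first, content checksum only when the timestamp differs.
    `stored` holds the metadata saved for this file at the last build:
    its integer timestamp, size, and MD5 content checksum."""
    st = os.stat(path)
    # Cheap check first: same timestamp and size -> assume unchanged,
    # without ever opening the file.
    if int(st.st_mtime) == stored['mtime'] and st.st_size == stored['size']:
        return False
    # Timestamp differs: fall back to comparing content checksums,
    # so a mere 'touch' does not force a rebuild.
    with open(path, 'rb') as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest != stored['md5']
```

Because only per-file state is stored, this decision can be made for any one file without reconstructing the whole dependency graph, which is exactly the flexibility the new format buys.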
Now, this already exists in SCons today; the reason I put it under future directions is that it makes it possible to write an SCons configuration where you have not one big tree but an SConstruct file in every directory, each able to behave independently, and they can still be composed into one big tree when you're a release engineer who wants to push a button and build the whole thing. What still remains to be done is that nobody has really worked out the best-practice way to do that. You would have to do a lot of experimentation on your own to figure out how to set up the configuration to make use of this underlying capability. So it's a low-level capability that needs some additional work to be widely useful, so that everyone doesn't have to figure it out on their own but can just take an example and copy it. Secondly, I've alluded to this: our automake layer. It's being added by Masayaj, I don't know if I'm pronouncing his name right; he was our Google Summer of Code programmer last year. I wanted to have this complete for this meeting, but it's not there; it's in that state of being 90% done, and we have to finish the other 90%. There are some outstanding bugs, and as I mentioned, we didn't really think the autoconf layer through from a user perspective when we did it. So this time we're going to think it through exactly from how users currently use automake and try to make it as much like that as possible; right now the documentation is lagging the code. So in conclusion, SCons is a good choice for open source projects as well as the large enterprise projects that it really shines at. We're adding a lot of things to make this easier, including the stuff that I've shown you, and we're going to continue to do so. So definitely consider SCons if it meets your requirements. And with that, I'd be glad to take any questions people have.
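The composition half of that per-directory idea already works with the existing SConscript() mechanism; a minimal sketch, with hypothetical directory and file names, and with the per-directory-independence best practice still an open question as noted above:

```python
# SConstruct (top level) -- compose per-directory builds into one tree.
env = Environment()
Export('env')                # make the environment visible to subsidiaries
SConscript(['libfoo/SConscript', 'app/SConscript'])

# libfoo/SConscript -- a subsidiary build description:
#   Import('env')
#   env.StaticLibrary('foo', ['foo.c'])
```

The release engineer runs scons at the top and gets the whole tree; the open problem is arranging things so each directory's file also works when invoked on its own.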
So I have a project where, like many people, we construct the dependencies automatically, but the tool that constructs the dependencies is in the same source tree and needs to be compiled before you can compute the dependencies. I've had to do some significant deep voodoo, waving the fried chicken, with GNU make to make that work. Would SCons help in this scenario? In other words, is it intelligent enough to see, oh, I need to work on this partial dependency tree, build the tool, and then dynamically, incrementally complete the dependency tree? You use a tool that you build inside the project in order to build other parts of the project? Do I understand that correctly? In order to construct the dependencies of the rest of the project. And we actually have that several times, because it's done in stages. Yeah, let's discuss more offline if I'm not understanding correctly; I'm sorry, I'm having difficulty hearing. SCons also has implicit dependencies on the commands and utilities that are used to build anything. For instance, we actually look at the C compiler that you use, so that's a dependency. So if one of these tools is part of your overall project tree and you add it to the dependency graph, then SCons will walk the dependency graph and check whether or not the tool itself is up to date. But that is in your control, because you don't have to make the call to add that dependency to the graph if you don't want to. So you could set something up where a variable or a flag that the user sets on the command line controls whether or not you add that dependency to the graph, giving the user control over whether they want to check the dependencies of the tool itself. So if I didn't understand your question correctly, please talk to me afterwards. Yes? Okay, I'm a porter. I port software and make packages for my operating system.
And what we do normally is install into a staging directory: we set the DESTDIR variable and everything gets installed not under /usr/local but under the staging directory plus /usr/local, for example. Can SCons do that easily with built-in rules, or do you have to do something in your SConstruct yourself? No, it should be pretty easy to do. The normal way I think of to use SCons for any kind of hierarchy like that is, as I showed you there, to include the subsidiary files in the one tree. There are, however, times when it actually does make sense to have SCons just call SCons in some other directory in order to build it. But if you're doing that, you're partitioning your dependency graph in a way that usually is not what you want to do, though sometimes it's the right thing. Install into a staging directory... oh, I see what you're saying. Yeah, like DESTDIR. What you would have to do is write the installation; I didn't show it, and I wish I had now. When you call the Install builder, you can tell it which directory you're going to install into, and typically what you'd want to do is make that something like $DESTDIR and give the user the ability to set that from the command line. Then everything works exactly as you would like. Okay, so we have time for two more questions; we don't want to cut into the next speaker's time, and of course Steve will be here to answer more questions offline. In our project, we're currently thinking about switching build systems, since the autotools cause too many problems, and we want to keep it as multi-platform as possible with one build system. Currently we really have problems with the Solaris compiler, since it's a different compiler, not GCC, and we currently have to hack the autotools a little. How's the multi-platform support with SCons in this regard, switching compilers easily?
Okay, how would you go about switching compilers? It's basically just replacing the C compiler, for example with a call to the Intel compiler, or the Sun compiler, or whatever compiler. Okay, if I'm understanding correctly: what I've seen many projects do is create their own tool modules to give them whatever control they need over the selection. Because really, all those tool modules are doing is setting variables in whatever way is appropriate. The default behavior for most of our tool modules is just to look through the path and say, okay, I found a C compiler, great, I'll set $CC to this. But you can use the same technique to control it in any way you would like, by setting whatever variables you'd like. There isn't really any magic that way; it depends on what your builders are expecting. Okay, so our last question, and I hope it's short. I just wanted to mention, about DESTDIR, that I wrote the automake extension for the Summer of Code, and DESTDIR is supported out of the box in it. Okay, thank you. Well, sorry, there's no more time for questions, so let's thank our speaker. Okay, thank you very much.