Okay, I had quite a fancy abstract for this talk; it was written back when I was hoping to have everything described in it finished by now. That didn't quite happen, but we did make some progress. So: how Scratchbox started, where it is now, and where we are going. Most of you here probably haven't been involved with the early Scratchbox, so perhaps a little bit of history will help to understand it.

Originally, Scratchbox was a build environment for an embedded distribution that was compiled from scratch, with makefiles defining what went into it. The developers figured out that there is quite a bit of work you need to do yourself to make things cross-compile, and it was boring, especially when build times were long. Sources were typically taken directly from gnu.org and the like, so they often did not have the architecture patches and other fixes needed for cross-compiling, the kind of fixes that come with the distributions' RPM and Debian packaging. The developers were lazy, so they decided it was easier to create a workaround than to fix everything. That workaround was CPU transparency; I will explain later what it actually means. A few other workarounds were needed to make applications believe they are being compiled natively while they are actually being cross-compiled. That is the scratch in Scratchbox. This eventually opened up the possibility of compiling and modifying arbitrary packages, which is where I became involved.

What is Scratchbox? This is one of the top five search terms by which people find scratchbox.org, so I think it's quite unclear to people what it actually is. This slide is from Timo's presentation; he was one of the original, or at least quite early, Scratchbox developers. While the slide is technically true, it's more useful to break Scratchbox into pieces: CPU transparency, which I already mentioned; a large pile of host tools; having the target libraries in their normal locations; binary redirection; and so on, plus of course how all of this is packaged together.

CPU transparency allows you to execute binaries of the target architecture you are cross-compiling for. When a configure script comes to a run-time test, in theory, if the tests are written correctly, they are simply skipped when cross-compiling. In practice this was not true: developers either didn't know how to use autoconf and automake properly, or they just didn't care about cross-compiling. And then there are things like monolithic X, which first compiles imake and then uses imake to build the rest, which rather defeats cross-compiling.

Originally this was implemented by SSHing to the target machine, NFS-mounting the build tree, and running the binary there. It worked, but it was slow and fragile. So SSH was replaced by sbrsh, which contains just the code for connecting to the target host, mounting, executing the binary and passing the data back and forth. And now that QEMU is there, it's even easier: you don't need a target machine at all to cross-compile. This matters because when you're compiling a package, much of the wall-clock time goes somewhere other than GCC itself, especially with big packages where configure scripts and other tooling run for a long time. Those tools are located under the Scratchbox tools and the Scratchbox devkits.
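Mechanically, CPU transparency is a dispatch on the executable's architecture. Here is a minimal sketch of the idea in C, as a hypothetical launcher, assuming a 32-bit ARM target and qemu-arm as the emulator; both are assumptions for illustration, and the real Scratchbox also covers the sbrsh remote-execution case, which this sketch leaves out:

/* Minimal sketch of the CPU transparency idea: look at the ELF header
 * and either run the binary directly (host architecture) or hand it
 * to an emulator. qemu-arm and checking only EM_ARM are illustrative
 * assumptions; this is not Scratchbox's actual launcher. */
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s binary [args...]\n", argv[0]);
        return 1;
    }

    /* e_machine sits at the same offset in 32- and 64-bit ELF headers,
     * so reading an Elf32_Ehdr is enough to identify the target. */
    Elf32_Ehdr hdr;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0 || read(fd, &hdr, sizeof hdr) != sizeof hdr) {
        perror(argv[1]);
        return 1;
    }
    close(fd);

    if (hdr.e_machine == EM_ARM) {
        /* Foreign binary: run it under the emulator. */
        char *qargv[argc + 2];
        qargv[0] = "qemu-arm";
        for (int i = 1; i <= argc; i++)
            qargv[i] = argv[i];        /* also copies the final NULL */
        execvp("qemu-arm", qargv);
    } else {
        /* Host binary: just execute it normally. */
        execvp(argv[1], &argv[1]);
    }
    perror("exec");
    return 1;
}

In practice you don't invoke a launcher like this by hand for every exec; the kernel makes the decision automatically, which is what the binfmt registration discussed below is about.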
They are compiled specifically for Scratchbox, with some patches and hacks to make applications work correctly from a location that is not the regular one. The positive side is that Scratchbox comes with everything you actually need to develop. The dark side is that it makes Scratchbox huge, and it takes ages to build.

Target libraries are located in their usual locations. In other cross-compiling solutions they live in a separate prefix, which is more difficult to handle. This means we don't need to manually fix the packages we install; we can just take whatever target library packages we want. Now, if multiarch existed and worked, we could forget this part; we've been waiting for it for quite a long time. Even when it appears, it doesn't help until the base distributions adopt it. A nice side effect is that the target is self-contained: you can use the target directory as an NFS root, or just tar it up and boot into it. Is everything inside that target? Yes, the contents are basically the whole target system with everything that has been built.

The binary redirection is needed because we moved everything under /scratchbox, which means that if we have something like #!/usr/bin/perl in a debian/rules file, it won't simply run: either the path refers to something outside the Scratchbox tree, or it's an ARM binary, or whatever target architecture you have, and running that emulated is slow. So you want to redirect it to the correct perl among the cross tools. For this there is a shebang-redirecting hack that rewrites the path between several alternatives. This has the problem that if you're trying to build perl or python modules, it usually redirects to the wrong perl version. There are ways to work around it, but it's not as easy as building normal C libraries and such.

Later we also created a separate tool, a cross-build script that builds everything you give to it, attempting an iterative order: first the things that don't depend on anything, then the things that depend on those, and so on. It allows you to compile a set of packages completely from scratch, so you don't have any target libraries available when you start. The C libraries are taken from the toolchain instead of being compiled, because glibc builds are quite complex and you already have the libraries in the cross toolchain anyway. The dependency loops are broken because we already have the tools available as host tools; we can't break the loops for libraries, but we can for any kind of tool.

That's an important property when you're bringing up a new architecture: you aren't dependent on anyone already having built a native version of the tools. You build the cross toolchain first so you can compile for the target, and you can build that in Scratchbox already; you can then use the new toolchain to build everything else. This is also what we use to test new toolchains nowadays. Since all the host tools are built against a particular host environment, we currently probably can't rebuild everything from inside Scratchbox itself.

There are alternatives which are probably worth reviewing, because other people have also tried to solve cross-compiling.
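The kernel mechanism that makes the exec redirection automatic is binfmt_misc: you register an interpreter for any binary whose first bytes match a magic pattern, and the kernel then routes every exec of such a binary through it. Here is a sketch in C that writes a registration for 32-bit little-endian ARM ELF binaries; the magic and mask bytes are the commonly used values for this case, and the qemu-arm path is an assumption:

/* Sketch: register qemu-arm as the binfmt_misc interpreter for
 * 32-bit little-endian ARM ELF binaries. The magic/mask are the
 * commonly used values and /usr/bin/qemu-arm is an assumed path;
 * needs root and a mounted /proc/sys/fs/binfmt_misc. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Format :name:type:offset:magic:mask:interpreter: with \xNN
     * escapes decoded by the kernel; the magic matches the ELF
     * e_ident bytes plus ET_EXEC/ET_DYN and EM_ARM. */
    static const char rule[] =
        ":arm:M::"
        "\\x7fELF\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00"
        "\\x00\\x00\\x00\\x02\\x00\\x28\\x00"
        ":"
        "\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00\\xff\\xff\\xff\\xff"
        "\\xff\\xff\\xff\\xff\\xfe\\xff\\xff\\xff"
        ":/usr/bin/qemu-arm:";

    int fd = open("/proc/sys/fs/binfmt_misc/register", O_WRONLY);
    if (fd < 0) {
        perror("binfmt_misc register");
        return 1;
    }
    if (write(fd, rule, strlen(rule)) < 0) {
        perror("write rule");
        return 1;
    }
    close(fd);
    return 0;
}

After this, exec'ing any ARM ELF binary transparently starts it under qemu-arm; shebang scripts follow too, because the interpreter they name is itself an ELF binary.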
The nice part is that you don't have to worry about cross-compiling problems at all; it just works. With some really rare architectures it may be that no one has ever actually used the toolchain that way. For example, I think one of the recent examples was somebody trying to bootstrap a whole architecture, and apparently nobody had ever run those binaries before, because it didn't work.

The exact opposite end is to fix the build systems and just build everything with the correct flags. This is of course the proper way: instead of working around things, you fix them. It's not as big a problem as it used to be when Scratchbox was originally developed, because OpenEmbedded has come a really long way and many other people have also fixed their packages. Also, things like imake have finally died their horrible death. So why are we not doing just that? Because managing the libraries and target tools is still a big problem: with hundreds of packages it becomes really tedious, and there aren't really proper tools to manage them. There is dpkg-cross, which is quite a help, but it doesn't get all the build dependencies right, and it isn't easy. For example, GTK+ 2 generates some module data during its build by running freshly compiled binaries, and for everything that runs target binaries like that, there is generated content that the build system never handles correctly when cross-compiling. And of course there are always new build systems appearing that need cross-compile support; OpenEmbedded is one answer to managing all this.

[Audience] You said new build systems keep appearing, but at least some of them are already designed to support cross-compilation. For example, in my company I use SCons for cross-compilation, and it works easily. SCons was just an example of a newer system; I'm not aware of it having any problems. I think the people behind the newer build systems are more open to these problems than at the time imake was invented, hopefully.

[Audience] You said earlier that you can fix configure tests. But if you fix a test by not running the binary, how do you get the result of that test? Exactly. Basically you provide preset answers to fill in where things don't work at all, and of course that has limitations: if you're actually trying to discover something that changes dynamically, you can't do it that way. OpenEmbedded actually has a complete scheme for this, site files with scripted answers that are stored in a cache file for configure to pick up.

Back to what this was actually all about. One of the compromise approaches is to run the actual package build natively, on the target or under emulation, but instead of executing GCC there, the compilation is executed by a cross compiler on a faster machine. It is really easy to set up, and it only diverts the compiler, so everything else builds just as it would natively. But like I mentioned earlier, it's not always GCC that takes all the time, and this doesn't help you break the dependency loops if you're trying to bootstrap something, because you still need all the build tools on the target system.
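That compromise is essentially what distcc gives you. As a sketch, here is a hypothetical shim installed as the gcc the build system finds in its PATH; it hands the actual compile to distcc, which runs the cross compiler on the fast machine. The arm-linux-gcc name and the use of distcc here are assumptions for illustration, not part of Scratchbox:

/* Sketch of the compromise: the build calls what it thinks is gcc,
 * but this shim forwards the job to distcc, which runs the real
 * cross compiler on a faster machine. "arm-linux-gcc" and the use
 * of distcc are illustrative assumptions. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* Build "distcc arm-linux-gcc <original arguments...>". */
    char *dargv[argc + 2];
    dargv[0] = "distcc";
    dargv[1] = "arm-linux-gcc";
    for (int i = 1; i <= argc; i++)
        dargv[i + 1] = argv[i];        /* also copies the final NULL */
    execvp("distcc", dargv);
    perror("distcc");
    return 1;
}

With distcc, the fast machines come from the DISTCC_HOSTS environment variable; everything that is not a compiler invocation, such as configure runs, perl, and install scripts, still executes on the slow side, which is exactly the limitation just mentioned.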
However, this has been used, for example, by the Debian NSLU2 people, because they have the pretty slow NSLU2 hardware available and want to compile faster. Another one I saw was a presentation available on the CELF wiki, where they were fighting compile-time problems and decided to go this way.

And so, finally, to Scratchbox 2.0. Scratchbox 2.0's basic idea is to extend the exec redirection into open redirection: once we have decided that we can redirect the execution of binaries, we might as well redirect the opening of files when binaries try to access particular paths. This would allow us to use the host's own tools directly, instead of having these massive tool packages compiled as part of Scratchbox. We theorized that if we do this, the code that really is Scratchbox becomes small enough to be included in Debian, and could probably be used in official Debian package building. Unfortunately it's not as ready as I had hoped it would be for this talk.

As I mentioned earlier, we compile the host tools specially for Scratchbox because they live in a non-standard location, and updating them means repeating that work every time a package is updated. In the end you would have to maintain the Scratchbox host tools a bit like the ia32-libs package that is currently used to provide i386 support on amd64, except an order of magnitude larger. So we started thinking: why can't we avoid building these tools entirely? To make that possible, we would have to be able to run unmodified Debian binaries inside Scratchbox, even though they have their paths baked in at compile time.

The first part is actually executing the binaries inside Scratchbox. Every binary in Debian has its ELF interpreter set to the dynamic linker, ld-linux.so. We can't use the one named in the binary, because inside the target that one is for the target architecture. So instead of exec'ing the binary directly, we invoke the host's ld.so explicitly, with the proper library path. This was already done for Scratchbox 1.0.3; it is the part that lets you run things that were never compiled for Scratchbox.

The second part is to let those binaries read all the data files, plugins and so on, which are either not under Scratchbox or are under Scratchbox in unexpected places. For this we need redirection of the file calls, open() and the rest. We started this work with it being just an idea that we should get to some day. The first implementation was basically a proof of concept: an LD_PRELOAD library which already wraps all the functions accessing the file system, so that instead of one static mapping there is a configuration file that selects what gets redirected where. This part works at the libc level. Then, instead of polishing that into something proper, we made a second implementation, this time based on FUSE, that is, Filesystem in Userspace, which allows you to implement file systems in user space and has been part of Linux since 2.6.14. This was also done very quickly. And since we are quite unsure what kind of configuration format we want and where we actually need redirection, we are using rule scripts to figure out where we want to redirect things. The rule scripts live under the Scratchbox tree, and the hope is that with this framework in place we can start experimenting instead of guessing, because the hard question we are left with is how to come up with the proper rules.

Both wrappers funnel into the same place: the preload library and the FUSE engine each call a function named sbox_translate_path. Its view is very simple: it gets the binary name and the function name (the libc function in the preload case or, for the FUSE engine, one of the VFS-level operations) plus the path, and it returns the path to actually use.
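Here is a minimal sketch of that preload proof of concept, assuming an invented two-entry rule list and a /target prefix; finding the right rules is exactly the open question just mentioned. open() is interposed, the path goes through one sbox_translate_path() hook, and the real libc open() is looked up with dlsym(RTLD_NEXT):

/* Minimal sketch of the LD_PRELOAD proof of concept: interpose open(),
 * pass the path through one sbox_translate_path() hook together with
 * the caller and function name, then call the real libc open() found
 * via dlsym(RTLD_NEXT). The rule list and the /target prefix are
 * invented placeholders, not the real Scratchbox rules.
 * Build: gcc -shared -fPIC -o libsbox.so sbox.c -ldl
 * Use:   LD_PRELOAD=./libsbox.so some-host-binary */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>      /* program_invocation_short_name (glibc) */
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static const char *rule_prefixes[] = { "/usr/share", "/lib", NULL };

/* Both engines would call this: binary is the caller's name, func is
 * the intercepted call ("open" here, a VFS operation under FUSE). */
static const char *sbox_translate_path(const char *binary,
                                       const char *func,
                                       const char *path,
                                       char *buf, size_t len)
{
    (void)binary; (void)func;   /* real rules may also key on these */
    for (int i = 0; rule_prefixes[i]; i++)
        if (strncmp(path, rule_prefixes[i],
                    strlen(rule_prefixes[i])) == 0) {
            snprintf(buf, len, "/target%s", path);
            return buf;
        }
    return path;                /* anything else is left untouched */
}

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))
                        dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {      /* mode is only passed with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    char buf[4096];
    return real_open(sbox_translate_path(program_invocation_short_name,
                                         "open", path, buf, sizeof buf),
                     flags, mode);
}

A complete wrapper set has to cover the whole family (open64, stat, execve, opendir and so on), which is much of the work in the preload approach.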
The translation function has the nice property of being very small and fast to call, so it doesn't add much overhead even in the LD_PRELOAD case.

Then there is the choice of whether we want to use FUSE or LD_PRELOAD. Both have pluses and minuses, and we are not sure which one we will continue working on. FUSE is really fast in steady state, because the FUSE engine is started only once. It's really robust: you can't tamper with it from the process's own address space the way you can with an LD_PRELOAD library. On the other hand, it requires a relatively new kernel (2.4 has no FUSE), and it knows less of the context of the binary, because you can't look into the memory of the application; you don't see the libc-level functions. [Audience] VFS is a lower level? Yes, it's a lower level, so you know less about what the application intended.

LD_PRELOAD: we are more familiar with working with it, it works with any kernel as long as you use glibc, and it gets the complete context of the process being executed. It's probably not a good idea to go probing the memory of the application you are wrapping, but it seems like it would be possible, and it might be needed anyway; for example, we may want to wrap the binary names, in case some programs use argv[0] to figure out what they are. The minus is that it's slower, since a new wrapper is initialized every time a process starts, though we could work around that with some kind of daemon.

Then we have the option of starting from a clean slate versus retrofitting the current Scratchbox: revolution versus evolution. There are some scripts along these lines already in the Scratchbox 2 repository, but it is not a complete system yet. All the pieces we have right now could be put up so you could get started with it, but to reach the level of the current Scratchbox, with all of the same tools and everything ready, is going to take quite a while; making the same set of packages compile will not be done in a month or so.

The evolutionary path would be to just add open redirection as a feature of the current Scratchbox. We would then be able to stop installing ever more tools, or simply pick up the right host tools whenever something new is needed. The system would keep working, and it wouldn't break the habits of current Scratchbox users. On the other hand, it would take quite a while for people to move to that version of Scratchbox: replacing the tools one by one, first installing the redirection, making it work, then removing the old compiled tool, would take some time. Usually an evolutionary approach is better than a revolution: if things don't work for a long time, it doesn't make people happy.

Any extra questions? [Audience] So in reality, how broken are the packages out there for cross-compiling, really? Do you have any estimate on that? Probably some of you actually build them; I haven't had time to measure recently. I mean, there are about 1,300 packages in OpenEmbedded, and last time we looked something like a thousand of them built, so I think a lot of packages actually work. And then some of them are fundamentally weird; an older version
of OpenOffice, for example, in the beginning had a logo which was in GIMP source format, and during the build process it started GIMP inside a virtual X server and used that to render a PNG out of it. That really was GIMP running inside the build.

[Audience] There are projects working on producing patches for cross-compilation, so it should not be that hard to join that effort and do the right thing. I think that needs to happen eventually; keeping the fixes only in OpenEmbedded is not that good. Somebody has to take the pile of patches and get them into the Debian packages, and preferably upstream. Yes, you would like to see that happen. Otherwise you're managing that forest of target libraries and patches forever: you keep recreating the same work, and unless everybody has an incentive to make cross-compiling work, there will always be new breakage. Some of the work is actually quite generic, but people often don't take it to the correct upstream lists; it's not just a development problem.

[Audience] What about build speed? I would assume that with emulation, running a couple of dozen configure tests and whatever should take some time. There was a measurement done on the QEMU-plus-Scratchbox combination: a build that ran in Scratchbox came out around 20 times faster than the same build running natively on a 400 MHz ARM board. So the overhead is not that big: the slowdown only applies to the things you actually execute under emulation, and most of the time there isn't much of that. If you use sbrsh, which ships each binary over to the target host, that slows things down quite a bit, but QEMU is very fast; it depends on how the package builds, essentially.

The practical problem is that at the moment there isn't a single command that sets all of this up, which is what you would really want. It's not at all difficult, it just needs doing; right now it's incredibly annoying to do manually for each target, but it isn't actually hard. And much of what sbrsh deals with, the mounting and the dependencies, is not really needed in the QEMU case.

Then there's the problem of running install scripts without having an actual target. For most of them I don't know any other solution than running them inside the emulated Scratchbox environment, which is still very useful at that stage. Also, there are several packages that build a tool and then build with it; the simple example is GCC, which builds itself with itself.
The intermediate compiler that GCC builds and then runs during its own build would have to be emulated by QEMU, and that would be quite slow. On the other hand, if you're cross-compiling anyway, you don't really need that: instead of the full bootstrap where one stage builds the next, you just do a one-stage cross GCC build.

[Audience] So how do you handle cases like that? There are several build systems where the first thing built is a tool that is then used to process a lot of files. Perl is the specific example: the build produces a cross-compiled miniperl, and when the build process then tries to run that cross-compiled miniperl, you substitute a native one, which is quite a bit faster; see the sketch after these questions.

[Audience] How do you handle this for Perl, Python or whatever: is it automatic, or just a future plan? Right now it's being done for Perl, but it is not automatic; you have to set it up by hand. You could probably do the same for Python, but I have no experience with that.

[Audience] Isn't that version-dependent, depending on which version of Perl you want to compile? There are loads of Perl versions. Yes: with a given native miniperl you can only build the matching version of Perl, so it has to be updated in step.

[Audience] Is Scratchbox in general tied to glibc, or can you use it with something like uClibc? Yeah, you can use it with uClibc; for example, we did that earlier for the CRIS architecture, a CRIS uClibc Debian-based compilation. The difference is quite big, roughly from 4.5 megabytes down to 2 megabytes.
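To make the miniperl substitution mentioned above concrete, here is a hypothetical shim: you let the build produce its cross-compiled miniperl, then replace that file with a small native program that executes the build machine's perl with the same arguments. The native perl path is an assumption for illustration:

/* Hypothetical shim for the miniperl trick: installed in place of the
 * cross-compiled miniperl, it runs the build machine's own perl with
 * the same arguments. The native perl path is an assumption, and the
 * native perl must match the Perl version being built. */
#include <stdio.h>
#include <unistd.h>

#define NATIVE_PERL "/scratchbox/tools/bin/perl"   /* assumed path */

int main(int argc, char *argv[])
{
    (void)argc;
    argv[0] = NATIVE_PERL;     /* keep the remaining arguments intact */
    execv(NATIVE_PERL, argv);
    perror(NATIVE_PERL);       /* only reached if the exec fails */
    return 127;
}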