I'm Adrian Schröter from SUSE in Nürnberg, and I want to speak again about the openSUSE Build Service. This time I want to focus a bit on our ideas, our current possibilities, and our future plans for creating packages which build for different distributions: not only SUSE, but also Mandriva, Fedora, Debian, Ubuntu. There are problems, obviously, and I want to focus on this area of the Build Service. But I'm flexible. Who knows what the openSUSE Build Service is? OK, who has used it? OK, some. Good. So, very briefly: what is the openSUSE Build Service? In the first place, it was suddenly needed when SUSE decided to make the openSUSE distribution really a community distribution. Until then, for 10 years, SUSE was developed in-house. If you wanted to join SUSE distribution development, you needed a local NIS account inside the SUSE network, among other limitations. We had a quite automated build system at that point, but it obviously wouldn't work for open-source development. The good thing is we had 10 years of experience, and we learned from a lot of mistakes we made during those 10 years. So when we started from scratch, we were able to avoid these mistakes. But we didn't want to limit ourselves to developing only openSUSE. We wanted to make it a platform that is also interesting for people who just want to offer packages, because our idea is: if we can get the software authors to come to us and build their packages here, then we automatically also get a better distribution, because the authors already work in our system and are best connected with it. So we want to offer them something you don't get elsewhere, and that is building also for Fedora, for instance. The idea is to have all sources, all the material, in one place; you change it, and you get packages for all distributions. And of course to connect with existing infrastructure, so that, in the extreme case, with each SVN commit you get a new package.
You see that your package, your commit, has now broken something, for instance. For this reason, the Build Service has very many faces. There are the users seeing it; then there are packagers, only interested in their single package; then there are people like our release manager, who sees it when it builds the entire distribution: ISO images, FTP trees, and so on. ISVs use it for creating add-on products, live systems; so really pretty different use cases. We run it at build.opensuse.org, obviously. This is the place where the openSUSE distribution gets developed, but everybody can also go there, create an account, and create packages for any distribution. So build.opensuse.org also supports Fedora, Ubuntu, and so on. Then we have at Novell an internal Build Service, which builds the enterprise products; the same software, just installed a second time. Novell plans to offer an ISV Build Service: a Build Service hosted at Novell, where every ISV can come and build packages for the enterprise platforms. And there are plenty of examples where people simply run the Build Service at their own site. For instance, one guy is currently trying to port openSUSE to the SPARC architecture. He can install the Build Service the same way we have it on our site, recompile it for SPARC, and fix the SPARC-specific things. Other companies are creating entire distributions with it as well: Intel is creating Moblin with it; Cray is creating some kind of distribution, I don't know the details, actually. And many small ISVs, like Open-Xchange with its groupware product, use build.opensuse.org for publishing their open-source product and run a Build Service in-house for the commercial product. This is possible because really everything in the Build Service is designed to be as open as we can make it.
The code itself is licensed under the GPL. The API is very simple; it's a REST-based API, so you can operate it with plain HTTP operations. That means you can easily integrate it into any tool chain you have. You can call it from the command line with curl, or integrate it into your Java stack easily, because it's just HTTP, and HTTP is supported everywhere. And the Build Service itself is not limited to SUSE and not limited to one package format. Besides RPM, it also supports Debian packages or, for instance, KIWI: the image descriptions are handled like a package format too. When you place a KIWI file there, for the Build Service it's just another package, and the result happens to be an ISO image. One concept we have is the project model, and this project model gives you a lot of power and flexibility; I have just one example here. A project organizes a space in the Build Service. It defines which people have write access there and can submit packages, so they can organize themselves. You put the sources in, and you add the descriptions of how you want to build them: for openSUSE, for Fedora, for Mandriva, whatever. You can reuse different sources. For instance, you say: I want the SUSE kernel, but I want to kick out this one patch, which is fine for an Oracle database but bad for my multimedia stuff. So you can easily follow an existing source with a different patch set, and every time the upstream kernel, the upstream package, gets updated, you get a respin of your own package with your changes. The result, in the end, is always a repository with all the packages, and people can just add it to their tool and install the packages from the repositories. Here is one example of how you could use it in real life. At the time, there was the KDE4 project. It consisted of multiple packages, and they were compiled for different targets. Simple thing.
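To illustrate how simple that HTTP access is, here is a hedged sketch with curl. The host matches the public API at api.opensuse.org, but the project name, credentials, and exact resource paths should be treated as assumptions for illustration, not a definitive reference:

```shell
# List the packages of a (hypothetical) home project via the REST API
curl -u myuser:mypass "https://api.opensuse.org/source/home:myuser"

# Ask for the build status of one package in one repository/architecture
curl -u myuser:mypass \
  "https://api.opensuse.org/build/home:myuser/openSUSE_11.0/x86_64/mypackage/_status"
```

Because every operation is plain HTTP, the same calls work from a Java HTTP client, a Ruby script, or any other stack.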
The good thing is that you can reuse it. The Amarok people wanted to build the then-current Amarok, but they had the problem that KDE4 was not part of any of the distributions. So they were not able to compile Amarok against openSUSE 10.0 alone, for instance. But what they could do is compile it against the KDE4 project, which in turn builds against openSUSE 10.2. So you can easily reuse existing sources and collaborate on them. OK, that was the generic part. Now the specific part: if you create packages which should compile on different distributions, you have problems. One part of the problem is simply the age of a distribution. An older distribution has a completely different API set, usually, because an old version of glibc is packaged, an old version of Qt is packaged; so you don't have the API interfaces you may need. Also, GCC changes over time, and the newer versions suddenly complain about programming errors which were always there but were just not detected before; now, suddenly, the build aborts. Each distribution may also have QA scripts to check that the package is not doing something evil, or to require something specific. The policies of the distributions evolve over time as well, and that brings changes. Then there are distribution-specific things. Mandriva, for instance, has the lib64 rule: all library packages have to start with lib64, and this must not happen on other distributions. So you really have a number of problems, and you can either solve them or just ignore them. There are some generic solutions. The LSB is a nice idea: it standardizes the base set of libraries up to GTK and Qt. But one of the problems is that it's a runtime standard; it does not standardize build time. So you still have to deal with the build problems.
Yeah, there is the package-name problem: there are packages which have the same content but a different name on each platform. And sometimes you want exceptions in your build descriptions: a specialty for Fedora or for SUSE, to support something that only exists on that platform. We have some methods to support this. There is an example, the cranalypse package, for instance: it's really named differently on every distribution, and that would mean your spec file needs different BuildRequires lines for each distribution. That's a pain for a packager. The point is: when we know such things, we can have substitute rules. They are stored inside the projects, and the base distributions like SUSE, Fedora, and Mandriva are also projects in the Build Service. So, for instance, we could add a substitute rule in the Fedora project that says cranalypse is written with a capital C there. The packager just submits the package with BuildRequires written the way SUSE writes it, and it gets mapped automatically on Fedora. The nice thing is that you also get another source RPM, which differs from the SUSE source RPM and actually works on Fedora. Other common problems are macros, for instance the macros for the run-level scripts that start services. SUSE has one standard, Mandriva has a different one, Fedora another; actually, Fedora doesn't use macros for this at all, for some reason. Of course, you could again write specific versions for each distribution, but instead we define some default macros in the Build Service. We hope that we can push them upstream into RPM at some point, so that they really become a standard inside RPM; right now they are only defined in the Build Service. But that means when you submit a spec file using these macros to the Build Service, you get the right expansion of the macros for each distribution. You don't need to handle this manually in your spec file anymore.
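Such a substitute rule lives in the project configuration of the base-distribution project. A minimal sketch, with purely hypothetical package names (the `Substitute:` directive follows the Build Service project-configuration format):

```
# In the Fedora project's configuration: whenever a submitted spec file says
# "BuildRequires: foo-devel", satisfy it with Fedora's "Foo-devel" instead.
Substitute: foo-devel Foo-devel
```

Because the rule sits in the Fedora project, every package building against it inherits the mapping without any per-package changes.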
Then there are the exceptions. For instance, you want to do something special when you build against a SUSE version older than 10.1. You can construct such things, and with this way of writing it, the spec file even works on any other system where the SUSE version macro is not known. It looks a bit ugly, but it works pretty well in the end. Or, when you want to follow the Mandriva policy, you can rename your package completely, so the resulting package gets a completely different name, via this kind of writing. But of course, all this is too complicated. It is nice that you have the power to do specialties and to use these special things, but most authors, frankly, don't want to package. Most software authors just say: I spend all day hacking my C/C++ code; I don't want to learn how to write spec files, and I don't want to learn how to write Debian files. So there have been many approaches to generate these spec files and so on automatically. There are several generic approaches. One guy thinks he can rule the world and automatically create spec files for any kind of package; I was also one of these guys, but we all failed, basically, because nobody has insight into every software stack in the Linux environment. Then some people said: OK, no problem, we ask the user, we create a wizard. And then another problem appeared: suddenly the wizard is so complicated that it's easier to write the spec file. Didn't work out. Then there are people who tried scraping and guessing from the source code how the package might look, but the success rate of getting a working package out of that is very low. Autospec was something from Matthias Kettner which I took over. It was nice to read if you want to learn Bash and what you can do with Bash. It tried to compile a tarball with generated spec files multiple times, learned from the failures, and retried.
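The kind of guarded conditional meant here can be sketched like this. `%{?suse_version}` is the macro the Build Service defines on SUSE targets, and the leading `0` is what keeps the expression valid on systems where the macro does not exist; the package name in the body is a made-up placeholder:

```
%if 0%{?suse_version} && 0%{?suse_version} < 1010
# special handling for SUSE releases older than 10.1
BuildRequires:  some-compat-package
%endif
```

On a system without `suse_version`, the conditional expands to `%if 0 && 0 < 1010`, which is simply false, so the spec file still parses and builds.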
In the end, it didn't work out either. What does work are specific approaches. For CPAN there is already a spec file generator; Gem has one. And this means the authors in these software stacks maintain all the information you need to generate a spec file or a Debian description. The information is there, and it is an upstream standard, so the authors really take care of it. Then you have a very high success rate for automatically generating working package descriptions. Authors, unfortunately, very often ignore this, because they use their CMake, their Autoconf, their QMake, whatever, and they set it up just so that they can run the application: they use their IDE, they click a button, and there is some result, some tarball. On the other hand, that means most of the information we need to build a package is inside this build environment. The build environment contains information like: I link against the Mesa library, for instance. And when we know there is a link against the Mesa library, we can find out that for this you usually need the Mesa-devel package, so we can map this. We can of course also do some guessing. But the point here is really that we have a specific generator for a specific build environment, which may be CMake, QMake, whatever. And these generators, if they are for a specific language, can be written in that language, and they should be maintainable by the people who work with that language, not by some single guy who has no clue about the software stack. So what we designed, and where we have a prototype working at the moment, is the QMake example. I picked QMake because it is very complete and has no outside dependencies, and it generates applications not only for Linux but also for Windows, Mac OS, and so on. With QMake, you have a profile, and it defines the files to install.
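The mapping step just described can be sketched in a few lines of Ruby (the generator prototype itself was a small Ruby script). The library and package names in the table are illustrative assumptions, not the Build Service's real mapping data:

```ruby
# Hypothetical "library mapper": from the libraries a profile links against,
# derive the -devel packages to put into BuildRequires, per distribution.
LIB_TO_DEVEL = {
  "GL"    => { "suse" => "Mesa-devel",   "fedora" => "mesa-libGL-devel" },
  "QtGui" => { "suse" => "libqt4-devel", "fedora" => "qt4-devel" },
}

# Return a sorted, de-duplicated BuildRequires list for one distribution,
# skipping libraries the table does not know about.
def build_requires(linked_libs, distro)
  linked_libs.select { |lib| LIB_TO_DEVEL.key?(lib) }
             .map    { |lib| LIB_TO_DEVEL[lib][distro] }
             .uniq.sort
end

puts build_requires(["GL", "QtGui"], "suse").inspect
```

The real generator asks a server-side mapper the same question; the point is only that the link list in the build environment is enough to derive per-distribution build dependencies.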
That is very important for the file list; the version, of course, and the libraries to link. My generator reads this profile and asks the library mapper: I see it links against Qt, so which package is that? And it generates spec files; not yet deb and dsc files, but it is straightforward to generate those as well. The good thing is that it's not much work to provide such a generator, because there is already something which reads profiles: QMake itself. So what I did is use the QMake that is there, add a few lines of patches and a small Ruby script, and I was done. That was done within an hour, and I had a working generator for this specific approach. And the good thing is: when QMake evolves, I can easily follow, because it's just a patch against QMake. Except that they want to drop QMake now. Bad luck. But just to visualize it: it should work with any other build system as well. These plugins are designed so that they are packages themselves in the end. The good thing is that we can support them on the server side, but they work the same way on your workstation. You can develop a plugin on your workstation and run it on your workstation; it's not limited to the Build Service, it's really a generic approach. Duncan saw this slide yesterday, for instance, and said: well, why is Rails not yet integrated? And he just made a plugin based on the gem generator; there is a gem generator, based on RubyGems, which generates spec files. And within an hour we already had support for Rails. That's why I hope it will work out this way. I want to show it to you. This is the Build Service web interface; there is also a command-line interface and so on, but this is the web interface. And a technical guy, me, implemented a web wizard, which is the reason why it looks so ugly and is not ready for end users. This is an instance actually running on my notebook, a Build Service I brought here.
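To make that concrete, a qmake profile of the kind the generator reads could look like this; the target name, version, and install path are made-up values for illustration:

```
# Hypothetical .pro file: everything a package generator needs is here:
# the target name, the version, the Qt modules linked, and the install list.
TEMPLATE = app
TARGET   = fotowall
VERSION  = 1.0
QT      += opengl

target.path = /usr/bin
INSTALLS   += target
```

From `QT += opengl` the generator can derive the Qt and GL build dependencies, from `VERSION` the package version, and from `INSTALLS` the file list of the resulting package.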
It's not online yet. So, for instance, I can say I want FotoWall; it's a QMake application. I just say: FotoWall is my package, I paste the URL, and at the moment you need to say which generator to use; I hope we can skip that in the future. Well, yeah, it wants some descriptions, whatever, and it's done. If this works for most of the simple applications, so that we can package them this way, I'll be quite happy in the future. What it has actually done is write a service file. So it's pretty generic: it just says, use two services. One is a download URL plugin, which just downloads the file in the end; there could also be some verification plugins, which don't exist yet. And in this case, a QMake generator; just use it. The result is the tarball, of course, and the spec file. What you see here is that it knows about specialties of the distributions. For instance, Fedora has no dependency on the C++ compiler even when you install libqt-devel; OK, we can handle it here. Then Fedora has a different way to call QMake; handled. SUSE has some special macros to support translations and desktop files; OK, handled. And we have the list of files which get installed, which is documented in the profile. So everything is there, and we actually get a running build. The build is already running, because the Build Service always starts a build when the source changes or when depending packages change. We could watch it here. Yeah, it sets up the build environment: for each package, the Build Service creates, in a chroot or a VM, a completely new system, to avoid older builds influencing the build; the build is really repeatable. I think we won't wait until it's finished, but I promise you that this particular example really compiles successfully for SUSE and Fedora, at least. Any questions on this spec file generation?
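The service file the wizard writes can be sketched roughly like this; `download_url` is a real Build Service source service, while the generator service name and the parameter values here are assumptions for illustration:

```xml
<!-- _service: source services run whenever the package sources are processed -->
<services>
  <!-- fetch the upstream tarball; download_url takes the URL as host + path -->
  <service name="download_url">
    <param name="host">www.example.org</param>
    <param name="path">/releases/fotowall-1.0.tar.gz</param>
  </service>
  <!-- hypothetical generator plugin: read the qmake profile, emit a spec file -->
  <service name="generator_qmake"/>
</services>
```

Each `<service>` element is one plugin in the chain, which is why a verification plugin could later be inserted between the download and the generator without touching either.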
Otherwise, I'll just tell you where we are with the Build Service in general and where we want to go. [Question:] Sorry, my point is not about the desirability but about the possibility of generating multi-distribution packages. Distributing a package is not just putting a file somewhere for download; it's also giving the end user some kind of guarantee that it will work in the target environment. When I create and maintain a package, I do it for Mandriva. I imagine, and I have already had such remarks about SUSE: does your wonderful package work under my own environment? I have no way to test it, and even if I had one, I wouldn't make the effort. So my point is: what's the advantage of providing packages for a distribution when you have no way, and no desire, to check that they work? [Answer:] You mean it would be nicer to have one package, one RPM, which works on all distributions? [Question:] My point is, I don't care whether it works in an environment I'm not targeting myself. [Comment from the audience:] Absolutely. But if you're an upstream developer, your goal should be to make it easier for distributions to package your software, not to distribute it yourself. [Answer:] Yeah, I think when we get the software authors to use this, they will see the bugs in their build environments and hopefully fix them there, so we don't need a patch in the package anymore. That would be the best way, of course. And what we also want to have in the future is a QA system, so that we can test more and show more problems. On the other hand we want, of course, fast success for people, so there should always be a package, however broken it may be. I mean, there are examples of packages which are forced to install to /usr/local, ignoring everything else. [Comment from the audience:] If I may add to that answer: the interfaces and the results of the Build Service are open. So, for example, there is a notification system for when packages are built.
So we're talking to people like the CMake and CTest developers, so they could automatically download the packages, install them, run their test suites, run a UI testing tool like Squish, and check that the software actually installs and runs correctly. In practice, the openSUSE Build Service won't be of that much interest to other distributions themselves, but rather to developers who want to test on those distributions. That said, there are people rebuilding Fedora and Ubuntu with the Build Service to adapt them to their own needs, so they maintain them on the Build Service. [Question:] OK. So if you are a distribution, you can get a pretty simple service to host your builds? [Answer:] Theoretically, yes. And usually they install their own instance. But we could also do it at opensuse.org. [Question:] Yeah, because for some it might be a lack of resources in terms of hardware or bandwidth, or even different architectures; I don't know whether you support more than just the two normal ones. [Answer:] The instance at opensuse.org only offers the normal ones, 64-bit and 32-bit. But the software works for all other architectures as well; the internal version also builds for s390 and Itanium. [Question:] OK, but do you offer those in the Build Service? [Answer:] That's also in the Build Service; in another Build Service instance, yes. [Question:] OK, so you can actually build for multiple architectures on the same Build Service. Oh, cool. [Answer:] And there's a guy, for instance, porting the openSUSE distribution to SPARC at the moment, just by installing the Build Service somewhere and reusing the remote-connection feature, because you can interconnect instances that way. He automatically gets all the Factory updates and rebuilds them on SPARC hardware at his place. [Comment from the audience:] Maybe we should move our dead SPARC port there. May I just add something? His question was: you can build for other distributions, but you have no way to test them, so how can you guarantee that it works?
But for me, a big advantage is that I can build packages of my applications for other distributions and have users test them. I have users on other distributions who would not want to do packaging, but who will give it a try and tell me whether the package runs, so I can put it on my download page. [Answer:] I mean, when it compiles, it can't be that broken, actually. Of course there can still be runtime issues, and you can integrate QA tests automatically, but what we don't offer is a live instance you can log into; that we just don't offer. There are other offers; I think SourceForge offers this, and we simply see no point in offering remote login shells to people. It's not our focus, basically. It's a resource question, because we would rather use the same hardware for building more packages. [Question:] One more question. I know this service works on Linux; is there a possibility to make it work with other kernels, like OpenSolaris or the BSDs? [Answer:] Well, we currently have support for certain packaging formats and build-description formats. The OpenSolaris format is not yet supported, but you basically need to adapt two places to add another format. So if someone spends the time implementing a parser for the OpenSolaris format, it could easily be extended. [Question:] It's not so much about the packaging format, but about the hosting: the kernel that runs the builds. [Answer:] You mean running the Build Service itself on OpenSolaris? That should work; it's Perl and Ruby on Rails, so I don't see a reason why you shouldn't be able to run it there. [Question:] OK, and how about building the packages themselves, for example Debian packages for OpenSolaris? [Answer:] That's the tricky part. Well, Debian is supported; the question is how much such a system would differ from Debian. The different kernel shouldn't matter, at least if you don't use any virtualization at build time.
The point is that we really rebuild inside Xen or KVM for security reasons, but you can also use just a chroot, and in such an instance it would be simpler to use a chroot. As long as you have all your libraries and tools available as Debian packages somewhere, it should work out of the box. OK, let me just tell you what we plan with the Build Service in general, besides these features. Right now, everybody can come to build.opensuse.org; support for Mandriva, Debian, and Ubuntu is there and building. You can install your own Build Service; with the release on Tuesday we also offer appliances now. That makes it much easier to install, because the Build Service consists of, I think, about 20 or 25 processes running in parallel, and you need to set up Rails and Perl. There are packages, but it doesn't happen automatically, so the appliance is really the easy way to start your own Build Service. What we are working on at the moment is improving the developer-collaboration parts. We have started a review system, which makes it more transparent, especially for our openSUSE Factory distribution, what happens before a new package, or a change to a package, gets accepted. There are basically a legal review and a source review, and these should become transparent in the system. We also want to add maintenance support, so that the patch RPMs and the extra metadata files needed for official maintenance updates of our official products get generated by it. And a very big point is redoing the web interface. The openSUSE Boosters team is unhappy with the web interface as it is right now, because it only exposes a very limited feature set of the Build Service, and it could be much better. So the next version will be a basically complete new web interface.
Then we have an attribute system, a very flexible system to store any kind of extra information on packages, projects, and so on. It's used to add information and to integrate with external systems. And the automatic source processing is what I was speaking about here. What we also want to do this year, maybe, is to really enable the trust system. There was a diploma thesis about it, which was successful, but it's not really accessible yet. In the trust system, projects and people get a rating, and based on the changes to a package you know how much you can personally trust it, because everybody can build a package, so every evil guy can also come to the Build Service, and you need to know about that. LSB-conformant builds are another target. It would be really nice to have such a build target, even though it's really hard work for the packager to create a successful build against it; but it should be there. Then we want to separate the built-in QA tests into a separate framework, so that you can see the difference between a package failing because of a compile error and failing because of a QA check. And there are also ideas from people who want non-Linux platform support; Windows and Mac OS are obviously the biggest targets. That depends on finding people who have time for it. The plan has been there since we started the Build Service, but so far no one has made a serious approach. Yesterday the KDE-on-Windows people approached me and asked. Oh, it actually might even happen now. I talked to them before, but we should talk again, maybe. In general, if you want to know anything about the developments: we have a concept page and the feature tracker. You can also request your own features there, but given the limited resources on our side, it's better if you implement them yourself. And we have of course a mailing list, the instance, and the IRC channel, where people get help getting into the stuff.
Okay, any last questions? Yeah, the software itself is completely GPL, and it's open to everybody; you just need to create an account. No money, nothing: just create an account and submit your packages, and you're fine. Yes, there's Jan-Simon Möller running around here, though not in this room. He worked on adding ARM support. Of course ARM works when you compile natively on ARM hardware, but that's not what you want, because ARM hardware is most likely too slow. So he extended our build script with a very clever trick in the end: it uses QEMU to emulate ARM, but the compiler backend runs natively. So you only have about a 10% speed reduction, you can compile ARM packages very fast, and you don't need to adapt your packages, because it looks like a native build. You don't need to deal with all the broken build environments of all the packages which just are not designed to work as a cross build. So he had pretty much success with that. Any last question? Okay, if you have a personal question or want to see something, I'm at the SUSE booth most of the time. So thank you very much.
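The clever trick mentioned above, transparent emulation of ARM binaries inside an otherwise native build root, is typically wired up via the kernel's binfmt_misc mechanism. A rough sketch of the idea (requires root; the interpreter path is an assumption, and the magic/mask bytes are the standard ELF/ARM header pattern):

```shell
# Make the binfmt_misc registration interface available (ignore if mounted).
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc 2>/dev/null || true

# Register qemu-arm as the interpreter for ARM ELF binaries: from then on,
# any ARM executable started in the build root runs under emulation
# automatically, while natively compiled tools (such as the compiler backend)
# keep running at full speed.
echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm:' \
    > /proc/sys/fs/binfmt_misc/register
```

To the build scripts this looks exactly like a native ARM machine, which is why packages need no cross-build adaptations.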