So, as some of you may remember, I gave a talk with this name in Albania four years ago. When I was submitting talk proposals for this conference, I was thinking: it's been four years, some things have changed, so maybe I could update the talk; that's why it has the year in the name. Then I decided I would be repeating a lot of the material, so instead of repeating it I would just update the wiki and do a lightning talk. And yet you are, correctly, here for a full talk. That's what happened: I found out only when I got here to Milan that, although I had decided on the lightning-talk version, the slot had already been assigned as a full talk, so I have extended some of the material. I'm not quite sure how much time I will actually need, so let's see how this goes.

Someone points out that some of you have not seen the talk from four years ago, so I could plan my time to focus a little more on the reminder part. Yes, I can take that into account for the people who weren't there. Still, this is technical stuff, so there will be some repetition anyway. The idea is that I will talk about the topics, but if you are actually going to use any of this, you will still need to look it up, so it doesn't make much sense to go into the exact specifics. I meant this more as a review, and the point is that I think there's nobody who will say: yeah, LibreOffice is so quick to build. I hope all of us have at least some tricks, so if you have comments during any of the topics, things you would like to share, or specific problems and there is time, we can have a look at them. This is just some of the stuff I have collected, found, or come up with myself, so you probably know parts of it already, but let's go over it and see how it goes.

I still remember times when it took several hours to rebuild all of LibreOffice. I think the wiki still says somewhere that the build time is roughly 8 divided by the number of CPU cores, in hours. Right now I have 8 cores, so a normal full build takes me about an hour, which is acceptable, but it used to be worse. So, one of the things we don't quite like to hear: if you want things to be fast, it helps to have fast hardware. That's not always an option, but it is in a way the simplest one. It also generally helps to get good tools. Perhaps surprisingly, getting a newer compiler generally tends to make builds faster. On Linux, or generally on platforms where it's available, Clang, at least for debug builds, seems to be the better option as far as build time goes, and it keeps improving: I'm not sure if you can read this, but when I updated to a newer Clang, my build time for the sc library, the Calc library, went from more than 4 minutes to less than 3 minutes. I have no idea what changed, but something like 30% of the build time just disappeared; it simply got better.

So if you want stuff to build faster... yes? Sorry? What about the PCH? Precompiled headers, that's other stuff I will get to. So, one of the things: I suggest you get Clang, and preferably a new one. I also built the one I use myself; you can look at the details on my blog, here I'll just show the picture from some tests I did. The topmost result is, I think, either the compiler from the distro or Clang built with just the default options; then I optimised it specifically for my CPU, which may or may not be worth it for you, because it actually takes time. But if you build it with LTO, link-time optimisation (openSUSE, for example, builds all its packages this way), you still save some time. And PGO is profile-guided optimisation: you first run the program, in this case the compiler, it records where it spends its time, and when you compile it again that information is used to make it even faster; this saves maybe 30%. By the way, we could probably consider PGO even for building LibreOffice if it helps this much, but that's just a note. The last two bars are precompiled headers, which I will get to. It's a question whether building your own compiler is worth it for you; if you want an improvement there are other ways, this is just one of them.
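If you want to try building such an optimised Clang yourself, the recipe is roughly the following. I'm writing the CMake option names and the profile location from memory, so treat this as a sketch and check the LLVM documentation for the exact PGO workflow:

    # 1. an optimised Clang, with ThinLTO applied to the compiler itself
    cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_PROJECTS=clang -DLLVM_ENABLE_LTO=Thin
    ninja clang

    # 2. PGO needs two passes: build an instrumented Clang, use it to
    #    compile some real code (ideally LibreOffice), then rebuild Clang
    #    with the merged profile
    cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_PROJECTS=clang -DLLVM_BUILD_INSTRUMENTED=IR
    ninja clang
    # ... compile something with the instrumented clang, then merge the
    # collected .profraw files (their location depends on your setup):
    llvm-profdata merge -output=clang.profdata profiles/*.profraw
    cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_PROJECTS=clang -DLLVM_PROFDATA_FILE=$PWD/clang.profdata
    ninja clang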
So what is the longest time and what is the shortest time? The difference is basically just getting the compiler built the right way. This is just one source file, picked more or less at random, something big from Calc: the slowest compiler takes about 14 seconds on it and the fastest one less than 4 seconds. As far as I know, on Windows the Microsoft compiler is actually faster than Clang, or at least Clang is not as fast there, as far as I remember from the last time I tried.

In a similar vein, you can also use some compiler tools. We by default pick up ccache if it's available, which I recommend; I suggest getting at least version 4, because there have been some improvements: it switched its internal compression to zstd, which has very little overhead and is enabled by default, so the cache takes way less disk space. ccache is not a magic solution, the rebuild has to be exactly the same, but if you often use git rebase, or you are bisecting, you repeatedly rebuild the same thing and it can save a lot of time. In some cases you can also distribute the build: I have a desktop PC and a work laptop, and I use icecream to distribute compilation between them. Back when computers had one or two CPU cores, in my KDE times, I remember we set up an icecream cluster at a conference and it was so much faster. It's not as impactful today, but it's still an option if you have more hardware lying around.

The linker, which actually creates the binaries as the final step of the build, can also have an impact; configure already warns, at least for debug builds, if you use the default GNU linker, which is horribly slow. So I suggest getting the LLVM linker, LLD, or the new linker called mold. It doesn't make a huge difference overall, but if you just edit one file, it can still save a second or two, or ten if you were on the GNU linker.
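To make the ccache and linker parts concrete, the setup I have in mind is roughly this. The cache size is just an example, and whether --enable-ld accepts lld or mold depends on your configure and toolchain, so double-check configure --help:

    # give ccache enough room that LibreOffice object files actually stay cached
    ccache -M 30G
    ccache -s      # check the hit rate after a couple of rebuilds

    # use a faster linker than the default GNU one
    ./autogen.sh --enable-ld=lld      # or mold, with a new enough toolchain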
Another way is simply building less. For some reason we, by default, build all the external libraries ourselves, which means we rebuild them all the time. I myself try to use the system libraries instead, which is a bit more complicated to set up, because the configure run takes a long time and, when it's halfway done, tells you: I cannot find this system library, please install it and run everything again. So whenever I do a distro upgrade, or even just update some system packages, I spend ten minutes or so running configure: it tells me something is missing, I install the missing dependency and have to run it again, repeatedly. But in exchange I don't spend time building external libraries that we normally don't really care about.

If you develop LibreOffice, you should normally use --enable-dbgutil, which enables extra debug checks and so on. As a side effect it also sets up some useful things that are generally not a good idea for release builds, like split debug info: some of the debug information is moved into separate files so that it's faster for the linker to process. That's bad for packaging, because you end up with extra files, but for local builds it's faster, and --enable-dbgutil just tries to do the smart thing out of the box for you.

Then the Clang plugins. Unfortunately, if you use Clang, we have quite a number of them, and the way I originally designed them, I don't remember how many years back, they add quite an overhead. It's quite lame coming from me, but I actually no longer use them by default; I normally build without them, and only when I'm about to submit, I basically re-apply the last commit so the files count as changed and then explicitly force the plugins on, so everything gets compiled with the plugins once and they warn me before I submit to Gerrit. I don't know how to make the plugins faster without completely reworking them; it is what it is. I'm not sure what the overhead is right now; I know at one point it was at least 15%. At some point I reworked the plugins to share the AST pass, because back then every single plugin walked the entire AST of the compiled program, and I changed it to walk it just once; I don't remember whether that was before or after the 15%. There was a time when the plugins were useful, but by now I very rarely actually trigger a warning from them. So right now I just check before submitting, because the warnings are still useful, just not useful enough to run them every single time.

Here is another trick, which you may already know, about the build system: we have so many source files, and we use GNU make, which is slow with that many. If you just run plain make, even when everything is already built, it still takes something like 10 or 20 seconds at the very least, so it pays to limit how much gets considered. If you type, for example, make sc, it will not only build only Calc, it will also only consider Calc, so make has much less to think about. You can do this for basically any target; the second line from the bottom, for example, shows how to build just the sc ucalc unit test, and it's much faster if you tell make to first change into the Calc module directory and then build only that specific target. The target is basically the name of the makefile without the .mk. There is a little complication: our build system tries to be a bit smart and automatically does silent and parallel builds, and I personally don't like that, also because it doesn't work when you go into a subdirectory, so I use it without the automatic parallelism and have a small wrapper script which sets it explicitly; I'm just used to my own wrapper.

Also, the last line: sometimes you have dependencies between modules, and then it normally feels like you have to run plain make and rebuild everything. But if I'm, for example, modifying some library like vcl and then testing only in Calc, I need to rebuild only vcl and whatever Calc needs; there's no point rebuilding, say, Writer. For that I run make sc.allbuild, which rebuilds only Calc and everything Calc depends on.
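Spelled out as commands, the targets from those slide lines look like this; sc and the sc_ucalc test are just the Calc examples used above:

    make sc                               # consider and build only the Calc module
    (cd sc && make CppunitTest_sc_ucalc)  # one unit test; the target is its makefile name minus .mk
    make sc.allbuild                      # Calc plus everything Calc depends on, nothing more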
So let's get to precompiled headers, one of my favourite topics. As you could see in the previous picture, and in one of my blog posts there is even a video you can watch, I built one of the Calc libraries without precompiled headers; the middle measurement is with precompiled headers but without some extra work I did that lets Calc compile even faster with them, so the middle one is roughly how GCC with precompiled headers would do; and the last one is with those extra improvements. Anyway, the slowest case is something like 12 minutes, and you go from 12 minutes, to 8 minutes with normal PCH, to 4 minutes with the faster setup. You can also look at it this way: you can spend all that time improving and recompiling your compiler, or you just turn on precompiled headers and all the previous stuff basically stops mattering; what remains is just the difference between the last two, which is still a couple of percent, but a couple of percent of a much smaller total.

It's not enabled by default, but as I mention here, we already use --enable-pch=full, which is the highest level, on Windows with the Microsoft compiler, so it probably makes sense to make it the default for a new enough Clang as well. I have been using PCH for years basically without problems. The only practical problem is that Gerrit builds without precompiled headers: PCH essentially works like one huge include file that includes everything, so when you use it you sometimes forget to actually include the headers you need, and then the file breaks for people who don't use PCH. That's the second line from the bottom: it's useful to occasionally rebuild with BLOCK_PCH, which temporarily disables PCH, just to check that everything still builds without it. I think I will bring up on the next ESC call that maybe we could make it the default, since it can make a huge difference. As I showed, rebuilding the huge Calc library is now 3 minutes; I still remember times when I tried hard to avoid rebuilding Calc because it took so many minutes, and now I just change a main header and, yeah, okay, 3 minutes, who cares.
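Putting the configure side of this together, the options I have been talking about look roughly like the following; this is a sketch rather than my exact setup, and BLOCK_PCH is the switch I mentioned for temporarily building without precompiled headers:

    # developer build: extra checks, split debug info, PCH at the highest level,
    # and system libraries instead of the bundled ones
    ./autogen.sh --enable-dbgutil --enable-pch=full --with-system-libs

    # before pushing to Gerrit (which builds without PCH), check that
    # everything still compiles with precompiled headers disabled
    make BLOCK_PCH=1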
As I noted, and as you may have heard, C++20 got C++ modules. I have done some tests, though I haven't looked into it that much, but it's a huge step that is not backward compatible and it's a lot of work to change over, so if we want faster compiles we are probably stuck with precompiled headers for quite a while.

A question from the audience: of course it is a bunch of effort to switch from the include-file approach and rewrite all those headers as modules, but do you think it's not worth it at all, not worth it right now, or worth it but it will take a long time to happen? Well, as I understand it, and I have only tested this on small projects, so I may be missing something, you basically need to convert the header files into module form, and it's not compatible either way, so we would be stuck requiring a compiler that can handle modules and you couldn't build without one; we would need to spend time rewriting all of it, and I don't know how difficult the transition would be, whether you could keep just one form or would need copies. Additionally, modules need build system support: with PCH you can just run the build in one pass, but with modules, as far as I know, you actually need a second pass first just to find out the dependencies, so it would also mean rewriting gbuild, or moving to a build system that already supports them. So it should eventually happen, but somebody needs to do it, it will be quite a lot of work, and I'm not sure it's worth the effort right now. Someone in the future? Yes. Right now, if somebody feels like it, sure, but I expect it's going to take quite some time.

Another question: could we use the preprocessor to have a single header that serves as the module? Possibly, but there is still the problem with the build system; I don't know how much work it would be to change gbuild to do that second pass. And yes, it requires a new toolchain: C++ modules are an official feature of C++20. Clang also had a different implementation which, as I understand it, was actually backwards compatible; they had some way of importing header files into their modules. I'm quite disappointed that that isn't what C++ finally settled on, for whatever reason; I don't know the internals, I'm personally disappointed with the way it turned out, but it is what it is. I don't know all the details, but this is the way I see it.

This is just a mention of something that didn't really turn out to be that good, but I tried it: LLVM, the project that also provides the Clang compiler, provides a C++ standard library as well. It's possible to switch to it, and it's again faster to build, but it's somewhat problematic; if you want the actual details we can talk about it later, for now I'll just skip the slide.

That was build time, but building is not the only thing we do; we also, for example, use debuggers. As I said, you should use --enable-dbgutil, because then you get extra useful stuff like the gdb index, which changes GDB from horribly, horribly slow to just horribly slow, which is still a huge improvement. Or you can do this: disable the automatic loading of all the shared libraries and then use the sharedlibrary command to load them selectively when needed. I sometimes use that when I just want to attach GDB quickly and don't want to wait for it to parse all the libraries. And for those of you who use GDB and don't know it: you can switch it to the text user interface, which makes working with it much faster; I suggest you google "gdb tui", it saves so much time.
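The GDB commands behind that are standard ones, roughly like this; soffice.bin and the Calc library are just examples:

    # attach without reading symbols for every shared library up front
    gdb -iex 'set auto-solib-add off' -p $(pidof soffice.bin)
    # then, inside gdb, load symbols only where you need them:
    #   (gdb) sharedlibrary sclo
    #   (gdb) bt

    # the text user interface: source and command windows in one terminal
    gdb -tui instdir/program/soffice.bin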
I have also tried using the LLDB debugger, which is again part of the LLVM project. It's way faster; in some ways it seems better, in some ways worse, and the workflow is slightly different. I still haven't actually switched, even though I have already submitted a bunch of features to LLDB. The whole LLVM project personally and repeatedly annoys me by being really pedantic in reviews, and I repeatedly get demotivated by all the reviewing, but generally their tools are better, so maybe at some point I will force myself to switch.

And I think this is the last slide, another trick you may know, about ease of use and disk space. As you may have noticed, a git checkout of LibreOffice is very large, even just the .git directory. So you can clone it just once somewhere and then use the git worktree feature, where you check out additional worktrees into separate directories and they all share the one .git directory, and you can cherry-pick between the trees. If I, for example, want to update my master, I just go to the main checkout and run git pull there, and it doesn't disturb the worktree where I actually build. So that is another trick.
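In git terms this is just the worktree command; the directory and branch names here are made up for the example, and the clone URL is the one I remember from the wiki:

    # one full clone that owns the large .git directory
    git clone https://git.libreoffice.org/core libreoffice
    cd libreoffice

    # extra working trees that share the same .git
    git worktree add -b bugfix  ../core-bugfix
    git worktree add -b feature ../core-feature

    # running "git pull" here updates master without touching the worktrees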
And yeah, that's all from me. We still have a few minutes, so if somebody has a comment, a question, or some tricks of your own...

A comment from the audience: these are all for developer builds, where we're iterating and building continuously, but the RPM builds for Red Hat are throwaway builds, built once and thrown away. In that case there is the configure option to disable dependency tracking: the dependencies are not collected in the first place, which gives you a faster build. If you then try to rebuild a small portion afterwards, it won't rebuild what should be rebuilt, but for a one-time build for distro packages it's an extra option. Yes, thank you. So there is --disable-dependency-tracking, and as I was saying before, one of the reasons make is so slow is that so many source files create a huge number of dependencies that take a long time to process; if you disable dependency tracking, that whole step is simply skipped. The problem, of course, is that you then have no dependencies, so it's not useful for normal development, but I remember occasionally using it even for my own things, I don't remember why, and for one-off builds it's actually useful.

Another question: one of my key use cases for GDB is that something crashes, and it would be nice to get a stack trace with real symbols in it. Currently it seems you can attach with no shared libraries loaded and manually load each symbol library as you go, to avoid a five-minute wait, but that seems pretty pathetic. Is there any way GDB could just load the symbols for the shared libraries it knows are in the stack frames? When you crash you have all the frame pointers, you know which shared libraries you need symbols from; is there any lazy loading that reads symbols only when it needs them to show you your stack trace? I have looked into this a bit, and the short answer is no. The symbol loading in GDB is such old and complicated code that it's very hard to improve. I remember the GDB maintainer blogged about improving it a year or so ago, and yes, that's another case where getting a newer GDB version helps a bit, although I think with the gdb index it doesn't matter that much. But basically, symbol loading in GDB is horrible; as far as I can tell it simply loads everything in a shared library. One of the reasons LLDB is so much faster is that it's newer code and it's better at actually being lazy about this, and I think it also has better indexes; I think LLDB without indexes is faster at loading symbols than GDB with indexes.

One other thing about backtraces: we have this sal backtrace function, whatever it's called exactly. It is not triggered automatically during a crash, which I think could be added, and it normally uses the glibc backtrace() call, which is next to useless, but in dbgutil mode it tries to be smart. I implemented that two or three years ago: if you call it now in an --enable-dbgutil build, it will try to print a nice backtrace. It resolves the addresses with addr2line or something similar outside the process, preferably again the LLVM tool because it's faster; basically you get all the stack pointers in the call, and I added some caching and grouping, so it resolves them and prints a nice backtrace.

OK, so thank you.