Good, then let's get started. This is essentially a rehash of the FOSDEM talk, an update on CI improvements and suggestions. All of that was funded by the Prototype Fund, a budget from the German Federal Ministry of Education and Research that funds open source projects twice a year, so we're very grateful for that. The work was done by Svante, my son Linus, and myself.

The mission was: for LibreOffice's continuous integration, develop some glue code so that we can integrate more data providers easily, and get new and shiny tools that way without too much disruption to the setup that we have. And with those nice shiny tools, perhaps create some incentives for developers to do the right thing, which is better work: better code, better patches, changes that create fewer problems.

There was also the idea that perhaps we can use this for some automated feature location. Someone comes onto IRC in the middle of the night when no one is around and asks: there's bug X, where is the code for such-and-such? No one is there, and grepping for getAttribute in LibreOffice gives you thousands of hits, so that's not really going to fly. The idea is: if you have something that is a feature, you can run it against the code and see which part of the code base is touched by that feature, and then you have a much smaller set of lines or files to look into to find the functionality.

When we started with this, there was a ton of tools, both data providers and analyzers and Jenkins plugins that render something nicely, and you quickly end up with the old n-to-m problem. You've got n languages: we've got Java and C++ and Python and C# and what have you.
All of those need to be covered by some data provider, like coverage analysis. Then you have different CI systems; for us that's Jenkins, but there are other projects out there who use GitHub Actions or Travis CI or something else. And then you have the actual analysis tools that need to operate on that data and be plugged into the CI systems, or that sit on a separate website but still need to be triggered.

What we thought would be nice is to make this orthogonal: a data provider that sticks things into a database, a CI implementation that triggers the database filling, and an analysis tool that is also separate and pulls from the database. Instead of something that only works in Jenkins and only works with Cobertura XML, a coverage format that's prominent in Java land, where you always have the problem that you can generate this XML but only use it in Jenkins, and anyone who wants to use it somewhere else needs to reimplement things.

Shiny new tools: that's quite a chunk of what we could use. This is mostly Java. There's one coverage analysis, there's another coverage analysis; that's actually what triggered the idea, a coverage API abstraction, for Jenkins at least; plus logging and several tools that feed this rendering and this display. So you can use this Coverage API plugin, which is meanwhile installed; one of you installed that on our CI. You get some nice drill-down lists showing how much coverage you get in total when you run make check or make slowcheck, with some nice bars for how much of your code is covered, or this nice grid view where you can drill down into what is touched and what is not touched.
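As an illustration of that orthogonality, here is a minimal sketch: one adapter converts real lcov `.info` output (`SF:`/`DA:` lines) into a common record, and an analysis function consumes only that record, so n providers plus k tools never need to know about each other. The `{file: {line: hits}}` record layout is a made-up assumption for illustration, not the actual cov-rest format.

```python
def from_lcov(lcov_text):
    """Data provider side: convert lcov .info text into the common
    record {filename: {line_number: hit_count}}."""
    record, current = {}, None
    for line in lcov_text.splitlines():
        if line.startswith("SF:"):          # source file section starts
            current = record.setdefault(line[3:], {})
        elif line.startswith("DA:") and current is not None:
            number, hits = line[3:].split(",")[:2]  # DA:<line>,<hits>[,<checksum>]
            current[int(number)] = int(hits)
    return record

def covered_ratio(record):
    """Analysis tool side: works only on the common record, so it is
    independent of lcov, JaCoCo, coverage.py, or any CI system."""
    hits = [h for per_file in record.values() for h in per_file.values()]
    return sum(1 for h in hits if h > 0) / len(hits) if hits else 0.0
```

A JaCoCo or coverage.py adapter would just be another `from_*` function producing the same record.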
That grid view is also something we want to use for this feature map, the "where is the code" thing; I'll get to that in a second.

More tools from that ecosystem: a long laundry list of things. Some of them we're already using, for example cppcheck, which we have installed as a plugin for Jenkins. Jenkins, for the higher-level analysis, tends to skew towards Java, for better or worse; Jenkins is written in Java, so Jenkins developers probably have some tendency to target their nice shiny tools at Java. For something that is C++ or Python there's a bit of an extra mile you need to walk to use them. But it's also a nice carrot dangling in front of your head: those Java developers have something nice, so we want something nice as well.

On incentive creation: it's useful to automate. In a review you don't want to point out the things we handle with clang-format, for example; you usually don't want to do that manual work as a reviewer. It's also useful if you push your patch and get the review very quickly, not after two days when somebody finally looks at it and says "I don't like this, minus one" because something isn't quite right. Whatever the computer can do for you, let the computer do for you, so you want to automate. Whether that should give your patch a minus one or just a warning is a matter of debate; I'm more into the nudging thing. It should give a warning, a nice metric, something red, and thereby create incentives so people just write better patches.

So there are some metrics things. This, for example, is some code coverage, in this case for some Java in the ODF Toolkit.
It's nice: when your patch goes in, the metric shows uncovered lines going down and covered lines going up, so it's rewarding.

Then this feature map thing. The idea behind it: if you have a coverage analysis, you know what your code touches. So you have a baseline, let's say you just open an empty document, and then you open a document with a line of text in it, and the difference between the two runs is probably indicative of the difference in content. Say there's a document that contains an image and one that doesn't. Image loading code in general is probably always touched, because you need to load icons, but there is document-specific image loading code that is only touched when there's an image in the document. The same is true for something like bold or italic or any kind of feature, or just clicking a button so a dialog pops open: then you know where the dialog's code is.

You could say that's not so hard, you just grep for the string in the dialog, but that's fuzzy science and depends on the strings. And it's harder to explain to a newcomer than "click the button" or "search for the help blurb". This way there's a nice ready-made database of code locations where you can start looking, or put a breakpoint, that will lead you to the right place.

So: you have a baseline, and you have a specific orthogonal feature like text that is bold. You run it, create the coverage, compute the difference, and you get something you can see in the grid view, this guy here, that only contains the bits that are different.
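The baseline-versus-feature difference described above boils down to a set difference over per-file covered lines; a minimal sketch, with the data layout assumed for illustration:

```python
def feature_lines(baseline, feature_run):
    """Return the lines covered by the feature run but NOT by the
    baseline run: the candidate code locations for the feature.
    Both arguments map filename -> iterable of covered line numbers."""
    diff = {}
    for path, lines in feature_run.items():
        extra = sorted(set(lines) - set(baseline.get(path, ())))
        if extra:
            diff[path] = extra
    return diff
```

For a "bold" document versus an empty one, the result would be the handful of files and lines where a newcomer should start reading or set a breakpoint.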
And that is a significantly smaller amount of code than checking all of Writer. We all have our heuristics for finding code, but they're not easy to teach and they're not written down, so having something that automates this would be quite useful. That again is some illustration of how it's meant to work.

A quick update on where we are. Most of what I said, I already said in February, but we did make some progress. We started setting things up on ci.libreoffice.org: installing the extensions, running the first jobs, figuring out and working around all the problems, annoying Cloph a little bit by breaking an installation or two; thanks for fixing that. The challenge is that CI is a very important system, and breaking it is not something I want to be guilty of. On the other hand, it's very easy to break, on the builder side and elsewhere. In one particular case I was installing a Python version, and the result was that the Android build broke because it suddenly picked up the system Python, which was missing a development package. The brittleness of the system is on a level where I was quite hesitant to touch much of it.

So I took one particular builder and equipped it with the prerequisites to run C++ coverage analysis, and I installed a bit of Python code we developed that parses the coverage data, compresses and optimizes it a little, and sends it out to a database server. The only thing we wanted to do on the CI itself was the minimal thing, data generation and collection; everything else happens somewhere else.
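The "compress it a bit" step could, for instance, collapse runs of consecutive covered lines into ranges before sending them off; a minimal sketch, not the actual script:

```python
def compress_lines(lines):
    """Collapse covered line numbers into inclusive (start, end) ranges,
    e.g. [1, 2, 3, 7, 8, 10] -> [(1, 3), (7, 8), (10, 10)], which is
    much cheaper to ship to a database than one row per line."""
    ranges = []
    for n in sorted(set(lines)):
        if ranges and n == ranges[-1][1] + 1:
            ranges[-1][1] = n          # extend the current run
        else:
            ranges.append([n, n])      # start a new run
    return [tuple(r) for r in ranges]
```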
That way we could have a nice greenfield project and break it as we like, without affecting daily operations much. As such, the CI footprint was minimal. We had Cobertura, that's the coverage format plugin, and the Forensics API plugin; I don't know what that was needed for, but I think we just added it as a dependency. There are two demo jobs. One is this lcov thing, I think that is from Martin: it just drives the Jenkins lcov script from the LODE repo. And we did a second setup that I can show you; it essentially duplicates the lcov setup and runs it with a few more bells and whistles, generating a baseline and a number of feature tests on top.

The first change there was some minimal setup: just a bunch of ODF files generated by the ODF Toolkit, synthesized, so it's easy to generate many, many more of them, because at least for ODF there's a pretty good idea of what a feature is. You can just generate those files and load them one by one; with the usual tricks you can run them one by one, saying "this is the CppUnit test name, run just this one". First you run the baseline, then the bold and the italic and the image and the table, etc., convert the output, and stick it into the database. That's the code running down here.

Very usefully, there was already a credentials provider installed, so I didn't need to touch that; you can inject the API write key into the script so it's not visible or hard-coded anywhere. That is working, mostly, and it's feeding the database. Per commit, you get n runs.
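Running the synthesized documents one by one could look roughly like this, using the usual CPPUNIT_TEST_NAME trick to select a single test per invocation; the module name and the testLoad_* naming scheme here are made up for illustration, not the actual job's code:

```python
def make_commands(feature_docs, module="sw_odfimport"):
    """Build one shell command per feature document, each running a
    single CppUnit test so coverage can be collected per feature.
    A baseline run would be generated the same way."""
    cmds = []
    for doc in feature_docs:
        name = doc.rsplit(".", 1)[0]   # bold.fodt -> bold
        cmds.append(
            "CPPUNIT_TEST_NAME=testLoad_%s make CppunitTest_%s" % (name, module)
        )
    return cmds
```

A driver script would then execute each command, capture the coverage output, diff it against the baseline, and push the result to the database.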
So there are, say, three levels: there's LibreOffice, then there's the master branch, then there are commits, as many as you run this for, and per commit you get the number of tests that were run. On the other end there's a test Jenkins, a demo server from allotropia right now, that also hosts the database where we can run those analysis steps. That's work in progress, but we're quite happy that the LibreOffice side is working now. The code is on GitLab, in this cover-rest project. There's some frontend thing, a generated database API and the database backend, and some conversion scripts for ingesting the several coverage data formats. And the Jenkins plugin for the visualization is this Coverage API thing, which we also need to extend, because we want it to read directly from the database.

Right, so, did you actually see what I was telling you? Sorry for that, I was looking at this. That's a shame; you were looking at a static slide while I was rambling on, I'm sorry, that is clearly not ideal. I was showing you this guy here; that will have to do, otherwise we're running out of time. That is the lcov coverage Linux 64 job, tied to one specific builder, somewhere here, via this coverage label, and that's served by, I think, tb58 or 85 or something, which has that specific setup with the Python and the code installed. In theory, if we want to roll it out, we just need more build power there, the usual tinderbox with the LODE setup, which would then pull that in. The only prerequisite really is a reasonably recent Python 3, 3.6 I believe, which should also be available on CentOS 7, and the code itself, which is just a git checkout from this repo here.
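The branch → commit → test-runs layering might be sketched as a small SQLite schema like this; table and column names are assumptions for illustration, not the actual cov-rest schema:

```python
import sqlite3

def make_db():
    """In-memory stand-in for the run database: commits per branch,
    test runs per commit."""
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE commits (branch TEXT, commit_id TEXT PRIMARY KEY);
        CREATE TABLE test_runs (
            commit_id TEXT REFERENCES commits(commit_id),
            test_name TEXT,
            covered_lines INTEGER,
            PRIMARY KEY (commit_id, test_name));
    """)
    return db

def add_run(db, branch, commit_id, test_name, covered_lines):
    db.execute("INSERT OR IGNORE INTO commits VALUES (?, ?)",
               (branch, commit_id))
    db.execute("INSERT INTO test_runs VALUES (?, ?, ?)",
               (commit_id, test_name, covered_lines))

def runs_for_commit(db, commit_id):
    """What an analysis step would pull: all runs recorded for a commit."""
    return [r[0] for r in db.execute(
        "SELECT test_name FROM test_runs WHERE commit_id = ? ORDER BY test_name",
        (commit_id,))]
```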
So, also to show you that: that's a git clone of the cover-rest repo, with all the code in one place. OK, back to the slides.

The initial idea was to actually run all of this on the Jenkins host, and that failed quite spectacularly. First we ran it with the full coverage, not just the difference, and that generated some 400-something megabytes of XML, and the thing was just keeling over. I'm quite glad I didn't try that on the production Jenkins but on some other host; that would not have been so nice. In the end that turned out to be a bit of a blessing in disguise: with an already pretty loaded, complex and slightly creaky system, where a reboot sometimes takes half a day, better not to add more complexity, and rather put this on the side. That's the idea, just as a visualization: data collection on CI, which is minimal, and data processing on the demo server.

This is the concept of the mapping, "where is the code that does something". The test we're running there is this very simple thing, the most trivial thing to do: just load the document. It has some nice names, and then there's a little script that parses them out of that file and runs them one by one; it can certainly be made nicer. The make target for building all of those feature tests is make coverage; that landed recently, two or three weeks ago, in master, and the change is this guy here.

The same thing for the ODF Toolkit is a bit easier, because it's much less code and it's already Java, so we get lots of those tools for free. It would be nice to do that for the ODF Toolkit, which has been a TDF project for a while now, but so far it's only been built, maintained and released on GitHub.
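The 400 MB report that keeled the parser over is the kind of thing a streaming parse avoids; a sketch using Python's iterparse over a Cobertura-style report, yielding covered lines without ever holding the whole tree in memory (element and attribute names follow the Cobertura layout, but this is illustrative, not the plugin's code):

```python
import io
import xml.etree.ElementTree as ET

def stream_covered_lines(source):
    """Yield (filename, line_number) for every covered line in a
    Cobertura-style XML report, clearing elements as we go so memory
    stays flat regardless of report size."""
    current = None
    for event, elem in ET.iterparse(source, events=("start", "end")):
        if event == "start" and elem.tag == "class":
            current = elem.get("filename")
        elif event == "end":
            if elem.tag == "line" and int(elem.get("hits", "0")) > 0:
                yield current, int(elem.get("number"))
            elem.clear()  # drop the element once processed
```

The in-memory DOM approach blows the input up by a large factor; streaming keeps only one element at a time.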
But we do have some Java stuff collected over the years, so that could be an idea. Then the question is, and I think we do have five minutes: this demo server is not something production-ready, so turning this into something production-ready probably means running it on TDF premises. And then the question comes up whether it would make sense to have a second Jenkins instance for those smaller projects, also so as not to get in the way, and to run the C++ analysis stuff there, because it's a heavy burden; at least for the next half year I don't feel comfortable putting that on the main instance.

Right, and that's the end of, well, not the slides you're seeing. So that's the end of that pitch. As I said, it's work in progress, and it will always be work in progress, because there are endless opportunities there. My commitment is that I will continue working on it; I will occasionally have a talk filed, so that will kick me into action to work a bit more on it. So: making slow progress. The database is filled now; the analysis is not working yet, because the extraction bit, running this coverage plugin and pulling from the database, doesn't work yet, but it's probably just a week or two before that starts, and at some stage it's probably worth announcing to the project. In any case, your feedback is appreciated: whether this is a good idea, whether you've got something ready that we should use instead, or anything of that kind.

There must be some sort of feedback; at least Cloph must have some sort of take on this. Should we put it all on one server? Should we have more than one? Should we have ten? Should we have it separated between C++ and Java? I mean, that might be an option.
It depends a bit; it's hard to say, and at some stage we need to try it on the live instance to see whether it blows up or not. For certain, if you want to show the full coverage, meaning run all the tests, see where we lack coverage, and use this nice visualization, that is going to blow up, at least on the current setup. That easily eats north of 100 gigabytes of memory, because it's extremely silly: it parses the XML into memory in Java, works on that, and blows it up by a factor of, I don't know, 100, whatever; it eats a lot of memory. It also tends to saturate the machine, and I don't know if there's any way in Jenkins to "nice" this plugin, to say "don't take everything the machine has just because some plugin wants to do something". And even then it takes time; if Jenkins is not responding for 15 minutes, I think people will be quite unhappy. So we will see. But it's good to hear that otherwise you're not terribly worried. And for the ODF Toolkit, that's like two orders of magnitude smaller in terms of load and pain.

OK, thanks a lot. If you've got any kind of feedback, I'm around, you'll meet me in the hallway, or shoot me an email. Thanks, everyone.