Welcome to my talk about the LibreOffice automatic updater work. I've presented some of this already in Boston; this is now an updated version with a bit more detail.

Why did I decide to work on the automatic updater? About three years ago I worked on the crash reporting work, which was also presented before, and we had one really, really bad release, 5.3.2, with some shutdown crashes that have been dominating our crash reports for a long time. Even today there are still around 1,000 daily crash reports for that release, despite it being released in early 2017, I think. So even now we get up to 1,000 crashes a day for a really, really old release, and nobody is updating despite it crashing. There are 1.1 million crash reports for that version in total, and still people are using these old versions.

The other part of the motivation — I'm not sure if you can see it — this slide shows the numbers of crash reports for the newer versions. The two latest versions here have 400 to 600 crash reports, but we don't know what that really means. Are there just no users, or did our newer version introduce a regression that causes many more crashes? Can we detect an improvement in quality, for example in 6.0 or 6.1, from the number of crashes that are reported? We can detect a huge spike like the one for 5.3.2, but if the change is much smaller we can't detect it.

The solution would be to divide the number of crash reports that we get each day for each version by the number of users. For that we need an accurate number of the users running each version each day — and as soon as we have an updater that checks for updates every seven days, we can get an accurate number of active users for each version just by counting the update requests.
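The normalization idea above is simple arithmetic, but it is the whole point of counting users: raw crash counts are meaningless without an install base. A minimal sketch (all numbers here are illustrative, not real telemetry):

```python
# Sketch: normalizing raw crash-report counts by active users.
# The figures below are made up for illustration only.

def crash_rate(reports_per_day: int, active_users: int) -> float:
    """Crashes per 1,000 active users per day."""
    if active_users == 0:
        raise ValueError("need a non-zero user count")
    return 1000 * reports_per_day / active_users

# The same 1,000 daily reports mean very different things
# depending on how many people still run that version:
print(crash_rate(1000, 10_000_000))  # 0.1 per 1,000 users: rare crash
print(crash_rate(1000, 50_000))      # 20.0 per 1,000 users: serious problem
```

This is exactly the comparison that is impossible today: without the denominator from update pings, a drop from 1,000 to 600 reports could mean better quality or just fewer users.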
So let's look at the different components involved. First, I decided not to write the updater code myself. Instead I took the Mozilla updater code that they use for Firefox. It's well tested: they have thought through the security model of the code, they have the binary diff tools and the logic for applying the diffs, and so on. It has been pushed to master behind a build flag, and nowadays there's a daily build of LibreOffice for Linux that includes the updater, which you can use. On Windows the situation is much more complicated; I'm currently producing these builds from time to time on my local machine, because in the end we can't yet handle Windows installations in system locations. The incremental updates that I'm generating between daily builds are in the range of 1 to 10 megabytes per build, which is much better than the roughly 200 megabytes a full installation would be to download.

During startup, the process requests an update check from our update server. The request is based on our version and also provides the OS information, and the server answers with JSON containing information about the potential update files. If an update is available, the LibreOffice instance can go to the download server and fetch that file. We split this into two services instead of directly serving the file because we only need to make sure that the update server is under our control; the download server can still be one of the mirrors that we use, because the update server provides the hash information, which is checked after the download from the download server. To make sure the files are valid, we check the signatures on them and check that the size and the hash are correct, so that everything is okay before we go further and actually apply the update.
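The check-then-verify flow above can be sketched in a few lines. The field names (`url`, `sha256`, `size`) are assumptions for illustration, not the actual LibreOffice update-server schema:

```python
import hashlib
import json

def parse_update_response(raw: str) -> dict:
    """The update server (trusted) answers with JSON describing the update."""
    info = json.loads(raw)
    return {"url": info["url"], "sha256": info["sha256"], "size": info["size"]}

def verify_download(data: bytes, info: dict) -> bool:
    """Check size and hash from the trusted update server against the
    payload fetched from an untrusted mirror (the download server)."""
    return (len(data) == info["size"]
            and hashlib.sha256(data).hexdigest() == info["sha256"])

# Example: a fake payload and the metadata the update server would send.
payload = b"fake update payload"
response = json.dumps({
    "url": "https://mirror.example/update.mar",
    "sha256": hashlib.sha256(payload).hexdigest(),
    "size": len(payload),
})
info = parse_update_response(response)
print(verify_download(payload, info))          # True
print(verify_download(payload + b"x", info))   # False: tampered download
```

The design point is the same as in the talk: only the metadata endpoint must be under project control; the bulk download can come from any mirror because integrity is verified after the fact.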
Applying the update is a multi-step process. First our running LibreOffice instance notices that an update is available and downloads it to the side. In the next step our updater, at least on Windows, starts the update service, because in the end we want to update a LibreOffice instance in a system location that our normal LibreOffice process is not able to write to. The update service does nothing else than check the signatures and then launch our updater, which now runs as a privileged process that can actually write to the system location. That updates our LibreOffice installation, and then we start LibreOffice again. We need this complicated setup, at least on Windows, because on Windows we are not able to write back files that are still open, so we can't update a running LibreOffice instance.

On Linux we don't need the update service, as we are only going to support updating user locations there, and on Linux it's also much easier because we can replace files that are still open. The update service on Windows is special in that it's not yet deeply integrated into my updater work: I still have some work to do to get it into our MSI installer files, and there's still a bit of work necessary to make sure that it's registered correctly in Windows, but it's one of the high-priority missing pieces before we can bring this to users.

Then there's the update server. As I said, it provides all the information about our updates. It's written as a Django server and just serves hash, size and location of our update files, and for incremental updates also the source version that the patch applies to.
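The privilege separation in the middle of that chain can be sketched as follows. This is my own illustration of the idea, not the real code (the actual service and updater come from the Mozilla tree), and `check_signature` is a hypothetical stand-in for the real signature verification:

```python
# Sketch of the Windows privilege-separation step: the privileged
# service only verifies and delegates; everything else stays in the
# unprivileged updater. Names and return strings are illustrative.

def update_service(update_file: str, check_signature) -> str:
    """Minimal model of the privileged update service."""
    if not check_signature(update_file):
        # Refuse to elevate anything that is not properly signed.
        return "rejected: bad signature"
    # The real service would now spawn the updater executable with
    # elevated rights so it can write to the system location.
    return "launch updater (privileged)"

print(update_service("update.mar", lambda f: True))   # launch updater (privileged)
print(update_service("update.mar", lambda f: False))  # rejected: bad signature
```

Keeping the privileged component this small is the point: the service never parses or applies the update itself, it only gates who may run the updater with elevated rights.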
The idea that I have — and that's part of the server work here — is that each request is registered in the database with just the OS, the version and the language, so that we can get accurate user information, i.e. how many users were using LibreOffice on a given day, with a simple database query. The update server is available on GitHub; it's also already running on one of the TDF VMs and actually works quite well for the daily builds, even though it is not finished yet.

When one of our builds checks for an update, it sends a request; one configurable part is the URL of the update server to check against. That way we can, for example, support enterprise users running their own update server. There are also people updating only from the stable builds and people updating from the fresh builds, so we need different channels. Here we have part of the response from the server describing the update file to fetch.

On Windows with MSI, it is a bit more complicated. Mozilla is not using MSI for their builds, so for the installer part I have my own update path based on MSP files, and these MSP files require special handling to generate. Currently we have some support for that in the LibreOffice build, but it's tightly integrated into our build system. What I would actually like to do is take two MSI files, give them to the MSI/MSP tools, and generate the MSP file just from those two MSI files. That's not possible right now, because in the LibreOffice build system we generate a special kind of installation file that's different from the normal one, which makes it really difficult to change. So in the end we kept the functionality but changed it: now I use normal MSI files generated by the LibreOffice build and extract from them the MSP configuration, which contains all the information about how to generate the MSP file and what goes into it. That's also the way the LibreOffice build generates the Microsoft MSI and MSP files. My MSP files are not perfect yet as of today;
we still have some general problems. For example, the product code is not allowed to change if you want to generate MSP files, but in our builds it is currently regenerated, so you have to force the generation. I'm still looking into solutions for that, to generate stable MSP files that always work.

One more point is that having these automatic updates introduces a totally new problem for testing, for making sure that our binaries are correct. Suddenly we have the normal builds, which are tested before the release by executing many tests, but what happens with the old builds that are updated? Testing every one of them is impossible, because we don't have every build that came before, so we need to find a way to at least make sure that a build produced from any old build by an update is basically the same as a freshly installed new build. My best idea for that is using a bit of the UI testing: we could run all the UI tests against both kinds of builds, and all the UI tests should pass against both kinds of builds. I'm still working out this part, on how to make sure it actually works. That's all.
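One straightforward way to check that an updated installation matches a fresh one is to hash every file in both trees and compare. This is my own sketch of that verification idea, not the testing approach settled on in the talk:

```python
import hashlib
import os

def tree_digest(root: str) -> dict:
    """Map each relative file path under root to the SHA-256 of its contents."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[rel] = hashlib.sha256(f.read()).hexdigest()
    return digests

def same_installation(fresh_root: str, updated_root: str) -> bool:
    """True only if both trees contain exactly the same files with
    exactly the same contents."""
    return tree_digest(fresh_root) == tree_digest(updated_root)
```

A byte-for-byte comparison like this is stricter than the UI-test approach from the talk (which checks behavior rather than bits), but it would catch an updater that silently leaves a stale or corrupted file behind.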