Good morning. This is Sandeep, and we'll be looking at Transtats, to track translation string changes. So basically we will be covering the tracking-translation-string-changes part today, and we'll look at how to run jobs in Transtats and how they contribute to different levels of processes in l10n and i18n. So let's get started.

These are the topics we are looking at. First is why Transtats, and the challenges it tries to solve. Second is what the touch points are through which users can interact with it. Then how to use jobs to actually track something and create diffs, at the server and through the client; what is coming next in Transtats at the level of development and integrations with different systems; and how we can get involved, at two levels: maybe just to try and test, or to start contributing to the project.

So first, why Transtats? What is the need for Transtats, and why is it required in the l10n process, I mean in the localization life cycle as a whole? There are some challenges we are facing. The first is how to track whether the strings of packages have been duly pushed to and pulled from the respective translation platform. The second is whether the latest strings are being packaged and built in Koji. These are the two major bottlenecks: how can we actually track this so that packages stay in sync? Transtats tries to solve this by comparing translation statistics derived from different places: the upstream repository, the translation platform, and the build system. The second challenge is: what is the volume of translation required for the next Fedora release? This is actually required by the l10n team at a very early stage, to have an estimate of what the volume would be and in which languages.
The answer to this is that the Transtats release summary can tell us the volume of translation required per language. The third point is: can major translation bugs reported after the translation deadline and before the final freeze be fixed? This is from the QE point of view, and the answer is that YAML-based jobs can parse the packaged SRPM to check the criticality of translation bugs. Everything will become very clear in the demo, so for now I'm just explaining what the challenges are and how we get these answers in Transtats. The fourth is: which set of languages need attention? And the fifth: is there any way to automate a few steps? The answer is that in Transtats we have a system where we can find the packages which are out of sync, which have differences between the translation platform and what is being built in the build system. And yes, we can have some automation by adding a few tasks to YAML-based jobs.

So these are the challenges and the respective answers to them. Now we'll see how we can actually get to these answers. What are the touch points in Transtats? On the left-hand side we have the release milestones. From the translations point of view there are two major milestones: the first is the software string freeze, and the second is the translation deadline. The software string freeze is the point where developers have to stop making changes to strings and translators start their translation work. The translation deadline is where translators are supposed to finish all their tasks, and then QE people test everything so that things can be packaged and shipped. In Transtats we have features like translation update volume and coverage, and the packages which are out of sync. Two features are linked to out-of-sync packages: the first is string change, and the second is languages which need attention. So these are the features which
are available in Transtats, and which can actually solve many of the challenges we discussed on the previous slide. So now let's look at how to use jobs in Transtats. Jobs are really very flexible in terms of creation: we can run them, we can schedule them, and we can share them. Scheduling is in progress and not available yet, but the others we can do. There are two basic job types: the first is predefined jobs, and the second is YAML-based jobs. In predefined jobs everything is defined and we can't change them. They are basically for syncing with different sources, so that we can fetch different data and store it in Transtats. Among the predefined jobs we have: sync build tags, to get all the build tags available in Koji into Transtats; second, sync with the translation platform, to get all the information from the translation platform regarding projects, their details, and a high-level overview; and third, sync with the release calendar. This one is important because certain dates keep slipping by a week or so, so we should always have the latest milestones in Transtats, and this job handles that.

Next are the YAML-based jobs. These are a very interesting part of Transtats: they are YAML based, and we can actually go and define them. The first is sync with upstream, that is, git repositories. The second is sync with downstream, that is, the Koji build system. The third is string change, and the fourth is verify translations; verify translations is in progress, I mean development is in progress, it is not yet available. The last one is inspect jobs: there is a section where we can inspect all the jobs and see what happened while running them. This is actually a log view, with a list view and a detailed view. We are going to see everything in detail now. Sync YAML job templates: these are actually job templates.
So you can see the upstream sync job template: what it does is clone the git repository, filter all the PO files, and calculate statistics on the basis of those PO files. This is the downstream sync job template: it takes the build system (it supports two build systems, the first is Koji and the second is Brew), the package name, and the build tag we are running the job for. What it will do is get the latest build info, download the SRPM, unpack that SRPM, load the spec file, then unpack the tarball and apply the patches listed in the spec file, then filter the PO files and calculate the statistics. The beauty of this is that we can actually go and remove one of these steps, or add an additional step, so it is a lot more flexible.

Now let's move to the other templates; these are the templates to track string changes. This is the first template. For string change it requires two things: the first is the package name, and the second is the release slug. Why is the release slug required? Suppose you are tracking for Fedora 29. We keep a branch mapping for each and every package (I'll explain what the branch mapping is), and through the branch mapping we work out which project version on the translation platform holds the translations for Fedora 29 for that package, where the translations for Fedora 29 are kept or are being done. That mapping we derive from the release slug. Now, for the tasks: we clone the git repository with the branch master. It is well configurable; as of now it supports git repositories (SVN and others are not yet supported), but you can mention a branch here. Currently it is master; you can mention whatever you want. Then, after cloning, it will generate the POT file from the source itself. Now, why is the domain required?
This is the command you have to give to generate that particular POT file. By default it will show the intltool command, which is used by almost every GNOME package. A few packages are slowly moving to gettext-based systems where this command will not work, like anaconda; for those systems I'll show which command will work. So here we have to give the command to generate the POT file. And the domain is required because some packages do not have the same name for their POT file, so we need to give the domain so that it can generate the POT file for that particular domain. Then the job downloads the POT file from the translation platform for the same release we are running the job for, and it calculates the differences between the POT file at the translation platform and the POT file we generated from the source repository. So if we create a diff of these two, we get the strings which have changed.

Now, this one is upstream, the git repository, versus the translation platform; and the next one is the translation platform versus what is actually being built. That one is under development, but the steps are written here. The first step is to download and unpack the latest SRPM from the build system. Then create a fresh translation template, a POT file, from the source tarball, so we repeat that step here. Then pull translations from the translation platform.
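The diff at the heart of this string-change template comes down to comparing the msgids of two POT files. Here is a minimal sketch of that idea, with a deliberately naive parser; a real implementation would use a gettext library and handle multi-line strings and plural forms:

```python
import re

def extract_msgids(pot_text):
    # Naively collect msgid strings from POT content; the empty
    # msgid "" header entry is skipped because the pattern needs .+
    return set(re.findall(r'^msgid "(.+)"$', pot_text, re.MULTILINE))

# POT generated from the source repo vs. POT from the translation platform
source_pot = 'msgid "Install"\nmsgstr ""\n\nmsgid "Reboot now"\nmsgstr ""\n'
platform_pot = 'msgid "Install"\nmsgstr ""\n'

added = extract_msgids(source_pot) - extract_msgids(platform_pot)
print(added)  # {'Reboot now'}
```

Anything in the source-generated set but not in the platform set is a new or changed string that translators have not seen yet.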
This is very tricky, because we have to pull each and every translation, for all the locales, from the translation platform. Then we merge those translations into the newly created POT file; that is set one. Then we apply patches and filter the PO files. So we have two sets: the first is set one, which we generated just now, and the second is the set we found in the SRPM. If we create a diff between these two, then for each locale we find the diff between what has been packaged and what has been translated. This will solve most of the problems we are facing in the localization cycle, and if we look at the results of these two, we can answer many of the questions, like what was breaking and what was not in sync.

So now let's see them in action. This is an important slide: if we find some differences, that is, string changes are detected, and it is after the string freeze date, then it should be a breakage, so we should send a warning to the maintainer. And if it is after the translation deadline, it could be an error, so we should alert a quality engineer. Once Transtats is integrated with different systems, we can redirect those messages to Bugzilla, and then we can handle several errors as well. There are just four steps at the server through which we can run these jobs. The first step is that we have to pick the template. The second is that we have to define the tasks in a YAML file. Then we have to set values in the YAML file.
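The breakage rule described above (a string change after the string freeze is a warning to the maintainer, after the translation deadline an error for QE) can be sketched as a simple date comparison; the milestone dates below are made up for the example:

```python
from datetime import date

def classify_string_change(change_date, string_freeze, translation_deadline):
    # Map a detected string change onto a severity per the milestones.
    if change_date > translation_deadline:
        return "error"    # alert a quality engineer
    if change_date > string_freeze:
        return "warning"  # warn the package maintainer
    return "ok"           # landed before the string freeze

freeze = date(2018, 8, 28)    # illustrative milestone dates
deadline = date(2018, 10, 9)
print(classify_string_change(date(2018, 9, 15), freeze, deadline))  # warning
```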
We have some variables like the package name, the release slug, and so on, so we have to set those values, and then we can trigger the job. So now it's time for the demo. I have recorded everything and I'll keep explaining.

This is the landing page of Transtats, in which we have just five packages; these are the messages, and what has been translated and not translated. In the packages section we can see each and every package in detail. We have selected one package, the first one, that is abrt, I think. This is the page for abrt: this part is coming from upstream, and the one with the yellow background is coming from downstream. Now see anaconda: in anaconda everything is coming from the translation platform, because the background color is just normal. These are the statistics we are getting from the translation platform, and this is the branch mapping I was talking about, for anaconda. Since we don't have statistics from downstream, we go and select the downstream sync job, and this is the YAML file for downstream sync. Now we are going to run this job. It will do everything written in the YAML file: first it will go and see which is the latest build in Koji and fetch those details, download the SRPM file from the Koji build system, unpack that SRPM, and load the spec file. This is actually loading the spec file into a Python object, so it is very flexible. Now it is unpacking the tarball, then applying patches (no patches were found), then it filters all the PO files, and at last it calculates the statistics. Why is it flexible? Because it loads each and every PO file into Python objects, so if there is some error, like a syntax error in a PO file, it will show right here that we found an error. So it is a very important and very interesting tool for inspecting SRPMs.
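The "calculate statistics" step at the end of these jobs is essentially counting translated versus untranslated entries per PO file. A rough sketch of that counting, again with a naive parser rather than the actual Transtats code:

```python
import re

def po_stats(po_text):
    # Pair each msgid with the following msgstr; the header entry has
    # an empty msgid and is skipped. Real PO parsing also has to deal
    # with multi-line strings, plural forms, and fuzzy flags.
    entries = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', po_text)
    return {
        "translated": sum(1 for mid, mstr in entries if mid and mstr),
        "untranslated": sum(1 for mid, mstr in entries if mid and not mstr),
    }

sample = 'msgid "Install"\nmsgstr "Installieren"\n\nmsgid "Reboot"\nmsgstr ""\n'
print(po_stats(sample))  # {'translated': 1, 'untranslated': 1}
```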
So this is how we get the translation statistics of anaconda at the build-system, packaged level. Now let's move back to the anaconda details page. Yeah, this is the job success URL: here you have the YAML file which was run, and this is the log output of each and every step that was performed at the server level. The good point is that we can actually share this URL. This is what I was talking about: you can just copy that URL and share it around, so that everybody can know the current statistics of the anaconda package, at the mapped branch, at the build level.

Now let's move back to anaconda. Here we have the statistics of anaconda from the Koji build system for Fedora 29, and now we generate the statistics diff. Once we generate it, it will show that these languages are not complete at the Koji level, so these languages need attention for the anaconda package for Fedora 29. Now let's move to gnome-initial-setup. In gnome-initial-setup everything has been synced: we have statistics from the Koji build system and we have statistics from the translation platform, so we can directly go and generate the diff. Once we generate the diff, one entry is in red color, that is Chinese Traditional, and it is showing one percent. That means there are some strings translated at the translation platform which are not being pulled and packaged; that is why it is red and showing some differences. This is for gnome-shell, and for gnome-shell we just have Japanese that is not complete at Koji. Let's move to python-meh: for python-meh the statistics diff has been generated and no languages were found needing attention, so all is well and good for python-meh for Fedora 29 as of now. Let's move to the settings part, like how Transtats is keeping things in its database.
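The "generate diff" action shown in the demo boils down to comparing per-language translated counts from the translation platform against what landed in the build system. The function and data shapes here are illustrative, not Transtats internals:

```python
def find_out_of_sync(platform_stats, build_stats):
    # Both arguments map locale -> number of translated messages.
    # A positive difference means translations done on the platform
    # were never pulled and packaged.
    return {
        locale: count - build_stats.get(locale, 0)
        for locale, count in platform_stats.items()
        if count != build_stats.get(locale, 0)
    }

# e.g. Japanese fully pulled, Chinese Traditional lagging behind
platform = {"ja": 120, "zh_TW": 118}
build = {"ja": 120, "zh_TW": 113}
print(find_out_of_sync(platform, build))  # {'zh_TW': 5}
```

Locales with a non-zero difference are exactly the "languages which need attention" shown in red.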
So these are all the languages, and these are the translation platforms Transtats currently has. In products we have Fedora, this is Koji, and this is the release schedule of Fedora 29. We can actually keep multiple products, like RHEL and Fedora at the same time. This is the list of packages. Let's go and add one package, passwd. For adding a package we need the upstream URL; copy the upstream URL, that is on Pagure. And we need to give the name, passwd; the name has to match whatever the package name is at the translation platform, since validation is done over there. Then we select Fedora and add the package. So now passwd is being tracked for every release of Fedora.

You can see that everything is empty here: it is a newly added package, so we don't have any statistics from any of the sources. Go and sync with the translation platform; it will sync with Fedora Zanata. Currently Transtats supports two translation platforms, Damned Lies and Zanata. Now it is fetching all the statistics available from Zanata, and these are the statistics: we have two project versions, master and passwd. Now let's go and sync with the upstream repository. What it will actually do is clone the git repository, filter the PO files, and calculate the statistics. So it is cloning the Pagure repo, these are the statistics we are getting from the repository, and this is the job success URL. Once we land on the passwd details page we can see that we have statistics from upstream, and this is the branch mapping. Go and sync with downstream as well; then we have statistics from the translation platform, from upstream, and from the build system. Mostly for the Koji build system we don't have any patches; for the Brew build system
we do have, I mean, various backported patches which have to be applied. This is the job success URL for the passwd downstream sync. Now let's get back and calculate the differences for passwd. This is all red, because for passwd things have been translated at Zanata but they have not been pulled and packaged, so we are getting differences here. This solves some of our problems, for example at the package maintainer level. Once we refresh this, we have statistics like how many packages need attention, for this instance currently. Once we land on the Fedora 29 translation statistics, we can see that passwd has been added and the diff statistics have been refreshed as well. This is the update volume: what volume has been updated, and which languages need the most attention in terms of translation. Once we refresh this, we have statistics for each and every language, like the number of untranslated messages. This is a shortcut to land on the details page of each and every language for the translation part.

These are the predefined jobs I was talking about; there are three: build tags, translation platform, and release schedule. And this is the YAML-based job section; it has three templates as of now, and there are four steps: pick the template, define the tasks, set values, and trigger the job. Let's see how things are stored: this is the YAML for upstream sync, as we have run for so many packages, this is the YAML for the build-system sync, and this is the YAML for tracking string change. Now we will run a couple of string change jobs to see how it actually works. These are inspect logs.
This is the list view of logs, like the downstream sync for passwd which we ran just now. These are the job IDs, this is the type, and these are the remarks; in remarks we'll mostly have the package name. This shows success or failure, the time taken, and the job details page URL, so from here you can land on the job details page. It has most of the details, like the time, the output, the input YAML, and everything. Yeah, here is what I was talking about: this is the command for anaconda. It is different from intltool; this command is required to generate anaconda's POT file. And these are the steps which occurred when we fired that command to generate the POT. A zero-message diff has been found for anaconda, and in the full diff you can actually see the entire line-by-line diff of anaconda's POT file, from what has just been created from the source to what we found at the translation platform. This is what I was talking about: since the message-id difference is zero, we are safe for anaconda.

Now let's run it for some other package. We keep the command as intltool itself, and we are just looking into abrt. First we go and see whether we have any branch for Fedora 29; as of now we don't have any branch for Fedora 29 in abrt, so we just go ahead with the master branch itself. So here the branch is master, recursive is true, and the command is intltool. Now let's select the package abrt, and then the release slug, that is Fedora 29, and run this job. First it will clone the entire repository, and then it will go and generate the POT file. The POT file has been generated, and then we can see the differences we are getting. These are the differences for abrt: for the current master branch, if we compare it with the Fedora Zanata abrt project, there are 14 messages which differ. So either we have 14 new messages, or we have some new messages and some that have been updated. So we can
have an answer to that as well. In the output, these are the 14 messages which differ, with their line numbers, like at which line they differ. Once we click full diff, we have a line-by-line diff of the POT file. On line number 176 we have this line added, I mean we have found this difference. So here in the full diff you can see that this has been added to the POT file, and this is how we can actually go and see which string has been added, or which string has been updated, and where.

Now let's go ahead and run one more job with a different command. Now we'll be looking at python-meh. In python-meh we don't have the usual po setup; there is one POT file already in the repository, and intltool-update will not work here, so we need to figure out which command to give to make it work. For python-meh it is simply `make potfile`. Here we keep the branch as master. The interesting part is that we are adding one thing, overwrite as true, because we have a POT file in the repository and we don't want to use that POT file: we want to generate our own. For that you have to add overwrite: true. Once that is added for python-meh, with the release slug Fedora 29, run the job. What it will do is clone the repository, delete that POT file, create the new POT file, and then go and see the differences. Here we are safe, because no new messages were found for python-meh. The job success URL is this, and you can see here that no messages differ. So this is the message-id diff, and the full diff is the line-by-line diff.
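The overwrite: true behaviour used for python-meh can be sketched like this; the `generate` callable stands in for whatever command the job actually runs (`make potfile`, `intltool-update`, and so on), and the file names are made up for the example:

```python
import os
import tempfile

def ensure_fresh_pot(repo_dir, pot_name, generate, overwrite=False):
    # With overwrite=True, a POT file already checked into the
    # repository is discarded so the job's own template is used.
    pot_path = os.path.join(repo_dir, pot_name)
    if overwrite and os.path.exists(pot_path):
        os.remove(pot_path)
    generate(pot_path)  # stands in for `make potfile`, intltool, etc.
    return pot_path

repo = tempfile.mkdtemp()
with open(os.path.join(repo, "python-meh.pot"), "w") as f:
    f.write("committed POT")  # the POT that ships in the repo
fresh = ensure_fresh_pot(
    repo, "python-meh.pot",
    lambda p: open(p, "w").write("freshly generated"),
    overwrite=True,
)
print(open(fresh).read())  # freshly generated
```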
So everything is here on this page, and this is the URL which you can copy and share, basically. In archive we don't have anything yet, because jobs only move to the archive once more than 15 or 20 jobs have run. And this is how it is navigated: from the list view you can go to the details page. We have a quick start page where you can go and read how to get started with Transtats, and some of the FAQs are there as well, which answer most of the common questions. So that was the demo. Next is transtats-cli; what I was explaining at the server, Parag is going to explain with the CLI as well. So, over to Parag.

I'm going to talk about transtats-cli. Basically, why is transtats-cli needed? It is needed by developers if they want to quickly track their translation status, or even by translators if they want to know the current status of a package to which they want to contribute translations. It looks better now. Okay. Basically, in transtats-cli five commands have been developed. The first command that has been developed is ping, to test whether the Transtats server is alive or not. So this is the ping server command. Basically, Transtats exposes a REST API, and transtats-cli is just using those REST APIs; this is a GET kind of API, and ping is a simple command. The second transtats-cli command is getting a package status: package status returns the translation stats of a package for the enabled languages. I will show you the demo as well, so let's just go through these commands and their descriptions. Package is one of the commands of transtats-cli. The third one is graph rule coverage: you can create your own rule, and you can get the stats for that package, so we have developed this command called coverage. It returns the translation coverage according to the graph rule.
So you can create your own custom graph rule there; it's also a GET kind of command. The fourth one is release status: it returns the translation stats of the packages which are being tracked for a given release. So suppose you now want to track a whole release branch; let's say the current release branch is Fedora 29, then you can use this release command. The most recent one, which we have developed in the current transtats-cli release, is the job command, and this was one of the biggest commands developed in transtats-cli. For jobs, we have two sub-commands: job run and job log. With job run, you can run jobs of three different types: the sync upstream job type, the sync downstream job type, and the string change job type. When you run these through transtats-cli, you get a URL, and in order to view the log from that URL you need to use the other sub-command, transtats job log, giving it the log ID that is returned by the run command. Like in the example that has been added: a big job ID is generated for each job that you submit, and you can just substitute that job ID in that field and you will get the log. So these are the commands that have been developed in transtats-cli. Yeah, like I said before, transtats-cli is written in Python, it uses the REST API, and the code is hosted on GitHub: we have a transtats organization on GitHub where you can find the code for the Transtats server as well as transtats-cli. It also provides output in JSON format as well as plain text format.
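Since transtats-cli is just a thin wrapper over GET calls to the server's REST API, its core can be sketched as below. Note the endpoint paths here are illustrative placeholders, not the documented Transtats routes; check the Transtats API docs for the real ones:

```python
from urllib.parse import urljoin

class TranstatsClient:
    """Sketch of the thin REST wrapper transtats-cli amounts to.
    The endpoint paths below are assumed for illustration only."""

    def __init__(self, server="https://transtats.xyz"):
        self.base = server.rstrip("/") + "/api/"

    def url(self, *parts):
        # Every CLI command boils down to a simple GET on such a URL.
        return urljoin(self.base, "/".join(parts))

client = TranstatsClient()
print(client.url("package", "anaconda"))
# https://transtats.xyz/api/package/anaconda
```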
So basically, the output that comes from the Transtats server is in JSON format, but to make sure the end user can read it properly we have made the default output format plain text. To all these commands we have added a `--json` option; if you add `--json`, the output is in JSON format only. So there are two types of output format available through transtats-cli. Yeah, in summary: coverage, job, package, release, and version are the five commands that are available. So I can show you. We do have a man page; the transtats-cli package is available in Fedora as well as for EPEL 7. And this is the man page: the same information is available here, like the commands coverage, package, version, release, and job, and we also have examples for end users showing how to run them. So suppose you want the release status for Fedora 27; if you want it in detail, you can use the detail option. Version, I think, is better: if you just want to test which version of transtats-cli you are currently using, you can just use the command transtats version, and it will return the client version. But if you want to know the Transtats server version, you can add a `--server` option and you will get to know; currently the Transtats client is at version 0.2.0 and the Transtats server at 0.1.6-rc2. This is one of the big-output commands: if you want to know the status for any package, say anaconda. I mean, I have already run this command here, and it has already returned the output.
So, uh, see So for every branch that is available Like rail seven is there rail six is there so It will give you the output in this format like, uh, for African language say the completion status is 1.09% only for Albanian It's a 79.92 So it's it's a bit easy, but it's uh, but uh, yes, I can understand it's lengthy So we are just adding a sub options for easy to get the information in the output like Like if there are you can see if there are five or six branches It will give you the all the output along with all the enable languages for that package So the package command can be used to get the status for all the available branches for all the available languages Yeah, so I I think uh, I will cover this because this is very important thing a transfer job. So trans stats job So, uh, how to run this command? So trans stats job run And then which kind of a job type you want to choose say suppose string change You want to test whether there has been any string changes happen for the gnomeshell package And then that too into the fedora 29 branch stream Then this is the command. So once I run this command, I got a job log and In order to see the log Yep, so this is the command Transstat job log a job ID So it will just return in a normal text format like what is the job ID? What was the job type for this job? What is the start time end time and the YAML template that just this Previously we have seen different different YAML templates that one can use to submit the job. So it also returns what YAML YAML text has been used for this job. 
So this is that: it just cloned, generated a POT file using this command, and downloaded the POT file. So yeah, this is how we are trying to make it as pretty as possible, so that the end user, developer or translator, can understand it or parse it at the CLI level only, instead of going to the web and looking into the log files. I mean, the server generates some job ID once you submit the job, and as I'm using a REST API, I'm just getting that thing. Even, like I explained, let's see how much time it takes. JSON outputs are also available; maybe I can show you this quickly. If you want output in JSON format, you can get it. It takes some time... something is going wrong... see, here we got the output in JSON format: it's success, job created, and the log, and we got the job URL and job ID. So yeah, this is the way you can use transtats-cli, and it will be easy for developers and translators to quickly interact with the Transtats server and get the status.

Yes, we have a release command, so Fedora 29, and... yeah, let's see. It's going to take some time. Ha, it broke. So, this is the way: release, and release with the log, and then we have added a locale sub-option to get only the information for one particular locale. Do you want to address that? Sure. So, actually, when we go to the server we can see that; can you navigate to the server? Maybe transtats.xyz. Yeah, click on the dashboard, basically. Here we have the Fedora 28 and 29 statistics; just scroll down, yeah, and scroll this bar. So here we have all the languages which have some untranslated messages, and once we click on any of the languages, here we can see what the packages are, and what is the total translated
package, I mean messages, and what are not. So this is the navigation, from the entire release to languages to messages. And click on the dashboard once: basically we have two tabs, the first is releases and the second is packages. So if you want to get into the packages navigation, just select the second tab, and then you can select any of the packages and see all the package details. So basically it is categorized by releases and packages, and similarly we have two commands, release and package, in the CLI as well; hence the categorization is done the same way. So, should we move further?

Thank you, Parag, for covering the CLI part. Now, what's next? The next development plans are these. The first and highest-priority item is to deploy Transtats in Fedora infrastructure. Second is to have some more templates for YAML-based jobs, like verify translations, which is still in progress. Third is a better UI to eliminate navigation confusion, and for this we are developing a new dashboard. Fourth is a Transtats job scheduling feature, to avoid all the manual go-and-run-jobs work. Fifth is notifications about string change breakage and translation deadlines to their counterparts, plus a better user guide. We have a very short demo, I mean a screen recording, of the new dashboard as well. This is how the new dashboard will look. It is created using Patternfly, and here we have all the navigation: dashboard, packages, jobs, languages, platforms, and custom graphs. By default we'll land on the releases page, and here the contents are similar to what we have currently. We have planned the migration from the old to the new dashboard somewhere around September. And this is everything you have currently in the old UI. This is the locale-to-details-page navigation, and these are just the tooltips for different fields. This is the release schedule page, where you can see the release schedule of Fedora, I mean the milestones which are very much linked to translations. This is the summary page of packages, and this is the list view of
all the packages. Once you select any of the packages, you land on its details page; for example, going to abrt, this is the details page for abrt. The inner content is very similar to what we have currently, but the outer skin is being changed, so it will avoid the navigation confusion and give us the opportunity to include everything that is actually required at the UI level.

This is the languages page, where we can filter everything using shortcuts and see the language list, and similarly for platforms. These are the platforms we can support, any number of instances of each engine: if we have four instances of Zanata, we can support all four; we support Damned Lies; and for some parts we support Transifex as well. So several translation platforms are covered.

These are the custom graph rules, on the basis of which the coverage graphs are generated, and these are the filters being added, so in the coverage view we can add different filters and narrow the rules accordingly. And this is the coverage graph. We are in the process of modifying this graph to be more meaningful, which it currently is probably not. Let's select one... and this is the About dialog of Transtats. This much has been incorporated, and some of it is still in progress, I think.

Let's move to the next slide, which is about integrations. We have some planned integrations for Transtats. First, with Bodhi, so that we can have an automated test plugin which should tell us whether there is any string breakage or not, and if there is a string breakage, it should flag the maintainer. Second is message queues, like fedmsg. It is very important to stay in sync: if a translation error is found in any of the packages, we can respond to that error if we have the answer in Transtats. Third is chat bots, like an IRC bot.
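To make the message-queue integration idea concrete, here is a minimal, purely illustrative sketch of what a consumer for string-breakage events could look like. The topic name, payload fields, and diff logic are all invented for this example; they are not the actual fedmsg or Transtats schemas.

```python
# Illustrative only: neither the topic nor the payload shape comes from
# fedmsg or Transtats -- both are assumptions made for this sketch.

def handle_message(message):
    """Decide what a string-change consumer might do with a bus message."""
    topic = message.get("topic", "")
    body = message.get("body", {})
    # React only to (hypothetical) string-change events.
    if not topic.endswith("stringchange"):
        return None
    upstream = set(body.get("upstream_strings", []))
    platform = set(body.get("platform_strings", []))
    added = sorted(upstream - platform)
    removed = sorted(platform - upstream)
    if added or removed:
        # A real integration would notify the maintainer here, e.g. via
        # Bodhi or an IRC bot; the sketch just returns the diff.
        return {"package": body.get("package"),
                "added": added, "removed": removed}
    return None
```

The point of the sketch is only the shape of the flow: filter on topic, compare string sets from two sources, and act when they diverge.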
This is interesting: if we can link an IRC bot to Transtats, then we can have some commands through which we can go and see what is happening behind the scenes, and why things are breaking, through IRC itself.

As for getting involved, you can get involved with Transtats first by testing and trying deployments. For deployment, you can just pull the Docker image using this command, docker pull docker.io/transtats/transtats, and run that image; the application, the server with some sample data, will be available at port 8080. And these are the steps for an OpenShift deployment. You can go to github.com/transtats for all these things, and if you want to contribute, you can get the development environment up in just four commands.

These are the technical specs of the project. The project URL is transtats.org, the docs are at docs.transtats.org, and the source is on GitHub. So anytime you are interested in looking into it, you can go to these URLs and find the details there. And if you like the idea, you can go and submit feedback at feedback.transtats.xyz. Why I'm looking forward to this: because I have some questions about the development of Transtats, so that we can decide on priorities, like which things we should do first. And we are available in the Fedora globalization and Transtats channels on Freenode IRC. So now I'm open for questions, if somebody wants to discuss.

[Question about documentation] We can integrate it with documentation, but what is the actual use case? For example, most probably you have some projects at Zanata where you push all the docs files to get them translated and pull them back. We can add those repositories into Transtats and then we can have tracking there. So this is one use case; are you talking about this one, or do you have another? ... Currently it does not, but we are looking forward to it. Thanks. Thank you so much.
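The tracking just discussed, for docs repositories as much as for packages, boils down to comparing per-package translation statistics from the platform against what was actually built. A toy sketch of that comparison follows; the data shapes (package name mapped to locale counts) are invented for illustration and are not Transtats's actual internals.

```python
def find_out_of_sync(platform_stats, build_stats):
    """Return packages whose translated-message counts differ between
    the translation platform and the build system.

    Both arguments map package name -> {locale: translated_count}.
    The shapes are illustrative, not the real Transtats data model.
    """
    out_of_sync = {}
    for package, platform_locales in platform_stats.items():
        build_locales = build_stats.get(package, {})
        # Keep only locales where the two sources disagree.
        diffs = {
            locale: (count, build_locales.get(locale, 0))
            for locale, count in platform_locales.items()
            if count != build_locales.get(locale, 0)
        }
        if diffs:
            out_of_sync[package] = diffs
    return out_of_sync
```

Anything this returns is a candidate for the "out of sync" and "languages which need attention" views: the platform has translations that never made it into a build, or vice versa.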
So, any other questions?

Basically, I would say this part is ready, so you can have these things in Transtats as of now: the translation update volume, the out-of-sync packages, string change, the languages which need attention, and coverage.

[Question: is it available?] Yes, at transtats.xyz it is available, and it covers almost 90 packages and 70 languages. I'm thankful to Rafa; he has submitted one bug for Portuguese, so I'll fix that. I've seen it, and it needs some backend changes, so I had kept it on hold, but I'll fix it, most probably today.

[Question about adding projects] Sure. For that, you actually need to get into the details, like which projects you are handling, and under what. Suppose you have ten projects: then you need to create those ten projects at Zanata, or whatever your translation platform is, and then add those projects in Transtats so that tracking can be enabled for them. Whatever packages are added to Transtats get tracked. So we would need to look into that. Do you have anything to add to the question?

[Question about other platforms] Basically, for this system, we don't have any plans as of now on that front. We are just creating an instance for Fedora releases, and then we'll look at other platforms as well. For example, if we look at Weblate, first we need to enable all the Weblate counterparts in Transtats, as we have enabled them for Zanata. If Weblate has APIs, we have to consume those APIs, and then we can add projects which are on Weblate to Transtats. So this could be one use case, but it will require a small study before we go ahead.

Does anyone have further questions, or should we break for lunch? Okay, thank you. Thank you for joining.
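On the "Weblate counterparts" answer above: the idea of enabling a new platform by implementing its API consumer next to the existing Zanata one can be sketched as a small interface. Every name here (classes, methods, the stubbed statistics) is hypothetical; real backends would call the respective platform's REST API instead of returning static data.

```python
from abc import ABC, abstractmethod


class TranslationPlatform(ABC):
    """Minimal interface a platform backend would implement (hypothetical)."""

    @abstractmethod
    def project_stats(self, project: str) -> dict:
        """Return {locale: translated_count} for a project."""


class ZanataBackend(TranslationPlatform):
    def project_stats(self, project: str) -> dict:
        # A real backend would call Zanata's REST API here;
        # stubbed with static data for the sketch.
        return {"pt": 40, "ja": 55}


class WeblateBackend(TranslationPlatform):
    def project_stats(self, project: str) -> dict:
        # Likewise, this would consume Weblate's API.
        return {"pt": 42, "ja": 55}


def track(platform: TranslationPlatform, project: str) -> dict:
    """Tracking code stays platform-agnostic: it only sees the interface."""
    return platform.project_stats(project)
```

With this shape, "enabling Weblate" means writing one more backend class; the tracking side does not change.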