So, let's start with the presentation on using Transtats to ensure package translation completeness. First, let's look at the software package localization cycle. Suppose we have an internationalized application. From it we extract all the translatable strings into a translation template, for example a POT file. We then push that POT file to one of the available translation platforms, where translators translate the strings and make them available in different languages. Then we pull those translations back: we get several files, one PO file per language, which we need to merge and compile back into our application. That is one complete, very simple cycle of software package localization. After that, the package has to be built in a build system and shipped through different channels, for example the repositories used by YUM and DNF, and several others as well. So this is the basic cycle: translations are extracted, translated, merged and compiled back, and then the package is built and shipped.

Now let's talk about some of the challenges we face within this cycle. There are three major ones. First: is everything that was translated actually packaged? Suppose we extracted 10 strings from our software and pushed them to the translation platform, and 9 of them got translated. Have all those translations been pulled back and merged well before the software package gets built in the build system? That is the first challenge. The second challenge is: are the strings pushed to the translation platform up to date with the source repository?
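The "internationalized application" at the start of this cycle can be sketched with Python's standard gettext module. This is a minimal illustration, not Transtats code; the domain name "myapp" and the "locale" directory are made up for the example.

```python
import gettext

# Load a translation catalog for French. fallback=True returns the
# original (untranslated) strings when no compiled .mo catalog exists,
# so this sketch runs even without a locale/ directory.
fr = gettext.translation("myapp", localedir="locale",
                         languages=["fr"], fallback=True)
_ = fr.gettext

# Strings wrapped in _() are what tools like xgettext extract into
# the POT template during the "extract" step of the cycle.
print(_("Save the document"))
```

With a real compiled catalog under `locale/fr/LC_MESSAGES/myapp.mo`, the same call would print the French translation instead.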
To see this challenge, suppose we have 10 strings in the source repository that have been pushed to the translation platform, and then during the development cycle two extra strings get added but never pushed. Now there is a mismatch: the source repository has 12 strings but the translation platform only has 10. That is an out-of-sync condition, and it really does happen. This was the second challenge. The third challenge is: is there a way to speed up this whole push-and-pull process?

Now, let's have a closer look at the software package localization cycle. If we look at it closely, for an application to be internationalized, an i18n framework has to be used. There are several frameworks out there; the most famous one is GetText. These frameworks are backed by various i18n components at the operating system level, but at the application level the framework has to be enabled for each application, so that the application can be localized for different languages like French, Japanese, or Malay.

Now let's look at the different file formats that get extracted through the i18n frameworks. Different frameworks support different file formats. One family is the POT, PO, and MO files; others are INI, JSON, properties, and DTD files, and Microsoft has resource files. Properties and DTD files are mostly associated with Mozilla applications, INI with PHP, and JSON mostly with JavaScript-based applications, for example Angular, React, and similar frameworks.
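The out-of-sync condition described above can be detected mechanically as a set difference between the strings in the source repository and the strings on the translation platform. The message sets below are hypothetical stand-ins for the 12-versus-10 example:

```python
# Hypothetical message sets: 12 strings in the source repository,
# but only 10 of them were ever pushed to the translation platform.
repo_strings = {f"msg{i}" for i in range(12)}
platform_strings = {f"msg{i}" for i in range(10)}

# Strings present upstream but missing on the platform -> out of sync.
missing = repo_strings - platform_strings
print(f"{len(missing)} strings not pushed:", sorted(missing))
# -> 2 strings not pushed: ['msg10', 'msg11']
```

This is essentially what the string change job shown later in the demo does, except that it builds the two sets from freshly generated POT files.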
There can be other file formats as well, like XML and YAML. The translations stored in these files have to be pushed to a translation platform, and there are many translation platforms out there: for example Damned Lies, which is GNOME's translation platform; Pootle, which Mozilla applications and LibreOffice use; and Transifex, Weblate, and Zanata, among several others.

If we look deeply, we can see that everything is happening in, and can be tracked through, one of the SCM (source configuration management) tools like Git, SVN, Bazaar, Mercurial, or others. And once the cycle is complete and every translation is back in our application, it has to be built and packaged in one of many formats: SRPM, RPM, and Debian packages; PKG, APK, and MSI.

So now we can see the complexity: many different translation file formats and many different translation platforms. At each and every level there is diversity. There are several options out there and people follow them differently, so there is a lot of mixing and matching; things are hybrid rather than standardized. Coping with these challenges was itself very complex, so we came up with a solution, which we are going to discuss here. These package formats are shipped through channels we can hook into using YUM, APT for the Debian packages, Brew, etc. Next, let's have a look at i18n frameworks, which are the next level of complexity.
Among i18n frameworks there are many options, and the most famous and oldest one is GetText, which supports the PO and MO file formats. GetText has been implemented in different programming languages, for example C, Python, and .NET; most programming languages have a GetText implementation. There are several other i18n frameworks as well. For example, from Microsoft we have the System.Globalization library, which handles i18n through resource files for .NET; for Java we have i18n libraries built around ResourceBundle and properties files; and for Rails there is the Rails i18n API.

So there are many i18n frameworks for different programming languages, and they differ per application framework as well. For C and C++ it would be GetText; Django natively supports GetText; and for Python there is Babel, a framework written entirely in Python. For .NET, ASP, and MVC we have System.Globalization. Spring wraps the Java i18n libraries in its own i18n servlet support. In Ruby on Rails, the Rails i18n API is supported, but several other APIs are available as well. Similarly, we have Revel for Golang, and several others. In PHP there are many different implementations across frameworks: Drupal has its own mechanism for supporting i18n, and if we look at Moodle or Joomla, they have different implementations; natively the file format would be INI, but the mechanism, the implementation, is different in each. For JavaScript frameworks like Angular or React, the native file format would be JSON, but implementations may vary.
So there are variations in each framework, tied to one or more programming languages, but at their core they are all i18n frameworks, and they have to be handled well at different levels so that we can actually see what is happening where. Now let's look at one tool for this: Transtats.

What does Transtats support? At the platform level, we have support for Damned Lies, Transifex, and Zanata. For i18n frameworks, Transtats currently supports only GetText, that is, the POT, PO, and MO file formats. In build systems, the Fedora build system, Koji, is supported; SRPMs and RPMs are produced from Koji, shipped to repositories, and can be fetched with YUM and DNF. In version control, only Git is supported, so we can work with Pagure, GitHub, GitLab, Bitbucket, and other hosting platforms.

I have a quick demo of Transtats; let's see how it goes. This is the landing page, and here we can see the list of languages configured in Transtats. Under translation platforms we have Damned Lies and two instances of Zanata, the public one and the Fedora one. In packages we don't have anything yet. We have two jobs; I'll explain what jobs are. In products we have Fedora, with one release added, Fedora 30. Now let's go back to packages and add one. Adding a package requires Fedora authentication; this redirects us to log in, and once authenticated we can create packages in Transtats. Here you can see that we are logged in with Fedora. Now let's add one package: abrt. We also need its upstream URL, that is, the source repository URL, which is github.com/abrt/abrt. This is required because a number of jobs will use this link.
We can select one of the available translation platforms; we are selecting the Fedora Zanata instance. The package is added. Now let's go back and see how it looks. This is the entry for abrt, which was just added. We can sync with three different places: first, the translation platform; second, the upstream repository, which is GitHub in our case; and third, the build system, which is Koji. Currently it is going to the translation platform, Zanata, fetching all the statistics and showing them here. We have two different project versions at Zanata, so we can see statistics for 10 languages for abrt coming from the translation platform.

Now we sync with the source repository. What does that do? It clones the Git repo, filters out all the PO files, which contain the translations, crawls through those PO files, and generates statistics. This is how a job runs. We run the job; it is currently cloning abrt. The clone completes and 106 PO files are filtered out. At the bottom we can see that statistics will be calculated from these collected PO files. The statistics are stored, and we can open the URL of the job that just ran: on the left-hand side we have the YAML that was pushed, and on the right-hand side the output from each of the tasks defined in the YAML file. Going back to abrt, the first row of statistics now comes from the upstream repository. So we have statistics from the translation platform and from upstream.

Next we sync with the build system. This is the second job. What does it do? It goes to the build system, collects the latest build details, downloads the SRPM, that is, the source RPM, and unpacks it. It then loads the spec file, which is the recipe used to build the package in the build system.
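The "crawl the PO files and generate statistics" step can be sketched in a few lines of plain Python. This is a deliberately minimal parser, not the one Transtats uses: it handles only single-line msgid/msgstr entries, whereas real PO files also have multi-line strings, plural forms, and fuzzy flags.

```python
import re

def po_stats(po_text):
    """Count translated/untranslated messages in simple PO content.
    Minimal sketch: single-line msgid/msgstr entries only."""
    pairs = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', po_text)
    # Skip the header entry, whose msgid is the empty string.
    entries = [(i, s) for i, s in pairs if i]
    translated = sum(1 for _, s in entries if s)
    return {"total": len(entries),
            "translated": translated,
            "untranslated": len(entries) - translated}

sample = '''msgid ""
msgstr "Project-Id-Version: demo"

msgid "Save"
msgstr "Enregistrer"

msgid "Quit"
msgstr ""
'''
print(po_stats(sample))
# -> {'total': 2, 'translated': 1, 'untranslated': 1}
```

Running a function like this over all 106 filtered PO files, one per language, yields the per-language statistics shown on the package page.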
The job then performs all the tasks required to get the PO files ready; once they are ready, it filters them out and runs the task that calculates the statistics from those PO files. Here you can see that we got the SRPM, the tarball was extracted, and no patches were found. Then 106 PO files were filtered out from that SRPM, and at the end we have the statistics. This is how we get statistics from the build system.

So now we have statistics from the translation platform, from the upstream repository, and from the build system: all three levels, that is, where we have the code, where we have the translations, and where we build. If we combine the whole picture, it shows us where we are lacking, or whether everything is fine. This is how the output looks for each step defined in the YAML file.

Let's go back to abrt and look at the statistics; the second row comes from the build system, so we now have statistics from all three sources. Next we check whether there are any differences. There are no differences between the translation platform and the build system, so for abrt we are good: what was pushed to the translation platform has been built, and there is no out-of-sync situation between those two places. What we'll do next is check whether everything in the source repository, that is, all the strings from the source repository, has been pulled to the translation platform or not. This is how the release summary is displayed in Transtats; these numbers are counts of strings, and we can navigate from the Fedora 30 release down to a particular stage and back to the package level. That is the very high-level view in Transtats. Now we can run the string change job, which looks for differences between the source repository and the translation platform.
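Combining the three sources can be thought of as a consistency check over per-source statistics. The numbers below are invented for illustration; the shape of the check is the point, not the values:

```python
# Hypothetical translated-string counts per language from the three
# sync targets; disagreement in any language flags an out-of-sync spot.
stats = {
    "platform":     {"fr": 100, "ja": 96},
    "upstream":     {"fr": 100, "ja": 98},
    "build_system": {"fr": 100, "ja": 96},
}

for lang in sorted(stats["platform"]):
    values = {src: stats[src][lang] for src in stats}
    status = "OK" if len(set(values.values())) == 1 else "OUT OF SYNC"
    print(lang, values, status)
```

Here "fr" agrees everywhere, while "ja" has strings upstream that never reached the platform or the build, which is exactly the situation the string change job pins down message by message.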
Here we first need to check whether the source repository has a branch for Fedora 30, because we are looking for differences specifically for Fedora 30. We have to give the Git branch, and we go with master because there was no Fedora 30 branch. We select the package and the release name, and the job runs. What does it do? First it clones the repository, then it creates a fresh POT file, that is, the translation template file; it also fetches the POT file from the translation platform, and then it matches the two POT files. It is matching, and it found 18 messages that differ. That means that for abrt's master branch, 18 messages were not pushed to the translation platform; we found a difference there.

Now we can go and look at the job log. The log shows a line-by-line diff as well: for one of the added translation messages, you can see that at line number 2051 there is "print verbose information", and if you scroll down to that line you can see it in the generated template. So you can inspect the line-by-line differences and see exactly which messages were not pushed to the translation platform.

I'll stop the presentation here, because I have one slide left and I'm running out of time. That was the demo. To conclude: first, is everything that was translated also packaged? We can verify this through package translation completeness, the combined picture of translations from the different sources. Second, are all the strings pushed to the translation platform the latest? For that we run the string change job to detect string changes. And third, is there a way to speed up any of these processes? Yes, we can speed things up using YAML jobs.
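The POT-matching step of the string change job can be sketched as extracting msgids from the two templates and taking the difference. Again this is a minimal single-line-msgid sketch with made-up template content, using the "print verbose information" message from the demo:

```python
import re

def msgids(pot_text):
    """Extract non-empty msgids from simple POT content
    (single-line msgids only; a real parser handles more)."""
    return {m for m in re.findall(r'msgid "(.*)"', pot_text) if m}

# Template freshly generated from the cloned source repository.
upstream_pot = ('msgid "Save"\nmsgstr ""\n\n'
                'msgid "Print verbose information"\nmsgstr ""\n')
# Template fetched from the translation platform.
platform_pot = 'msgid "Save"\nmsgstr ""\n'

diff = msgids(upstream_pot) - msgids(platform_pot)
print(diff)  # -> {'Print verbose information'}
```

In the demo this comparison is what reports the 18 differing messages for abrt's master branch.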
This is how Transtats works. It collects data from different sources: the translation platform, the release schedule, the upstream repository, and the build system, using various jobs. There are two main predefined YAML jobs, which we have seen. Once we have the translation data, we run analytics, and several reports, graphs, and charts are generated that are useful for translators, for developers, for QE, I mean quality engineers, and for project managers as well, whether translation project managers or general project managers. The application is available at transtats.fedoraproject.org, the sources are on GitHub under transtats, and the docs are at docs.transtats.org. Feel free to join the Fedora globalization or Transtats channels on IRC. I'm Sundeep, and I'm open to questions.

Q: Can you use YAML jobs to translate something other than GetText content, to ensure it is properly packaged with Transtats? What I have in mind is, for example, SVG.

A: Actually, the jobs go and bring translations from different sources; they are not for translating themselves, they just pull out data. Sorry, could you please repeat the question? I think that was not the correct answer.

Q: If I have text inside an SVG that a developer might want translated, because it is an asset, can we have that in this process?

A: That would depend on translation platform support. If the translation platform supports SVG translation, then the answer would be yes. As for Transtats itself, we currently don't have support for SVG.

Q: Okay. Wow. Thank you.

Thank you.