Thank you everyone, and thanks for joining this session on Transtats. This is an introductory session in which we will cover what Transtats is, how we can use it, and a basic overview of the project.

Let's start with the problem and the proposed solution. The problem is tracking translation position across packages for release launches with respect to current development. As we progress through releases and milestones in different projects, many packages are being translated; some of them miss translation deadlines for a release and some face unwanted delays. To address this, the proposal is Transtats: an attempt to tie up the loose ends and, in later phases, automate some of the localization steps.

The concept is based on five steps: syncing with translation platforms for statistics, comparing statistics with release streams, managing translation differences, keeping upstream updated, and creating notifications. We have to go through these steps gradually. Syncing with the translation platforms, Zanata and Damned Lies, is almost complete, and we are now stepping into comparing statistics with release streams, just after the integration of the Koji build system. After that we will move towards managing translation differences, keeping upstream updated, and creating notifications, and there are many more steps which have to be added to this list.

Now let's move to how we can achieve this. To achieve this, we need to create a mapping between three things.
First is upstream, second is the translation platform, and third is releases. For upstream we have identified the data we need for the first phase; for translation platforms we have Zanata and Damned Lies; and for releases there are two different aspects: first the schedule, and second the build system.

Why the schedule? We need the schedule in our system to create notifications and to take stats snapshots, so that we can compare the status across different schedule milestones. It also lets us estimate the work for a coming release: what the actual work estimation is, generated per team, for example what the translation estimate is for the i18n and QE teams, what has to be tested, and the development perspective as well.

The build system integration is a very important step. Why do we require it? We need it to track translation differences: what the actual difference is between the translations we have on the translation platform, what is upstream, and what has been built in the build system. Somewhere down the line we can even create a scratch build with the latest translations from the translation platform, and then send a notification to the developer or package owner: this is the build report for the latest translations, and if you are fine with it, you can go ahead and push whatever might be required.

This is the actual mapping we need to have in place to make the system at least partially functional. We have made some progress here: the things in green are done and the things in orange are in progress. The build system integration is in progress, and with the planned work the rest is attaining some stability.
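The three-way mapping described above can be pictured as a small data structure. This is a minimal sketch only; the field names and example values are illustrative assumptions, not the actual Transtats schema.

```python
from dataclasses import dataclass, field

@dataclass
class PackageMapping:
    """Ties one package's upstream repo, translation platform
    project, and release-stream branches together (illustrative only)."""
    package: str
    upstream_url: str        # upstream git repository
    platform: str            # e.g. "zanata" or "damned-lies"
    platform_project: str    # project slug on the platform
    release_branches: dict = field(default_factory=dict)  # release -> platform branch

# Hypothetical example entry for one package
mapping = PackageMapping(
    package="anaconda",
    upstream_url="https://github.com/rhinstaller/anaconda",
    platform="zanata",
    platform_project="anaconda",
    release_branches={"fedora-27": "f27-branch"},
)
print(mapping.release_branches["fedora-27"])  # → f27-branch
```

Keeping all three sides in one record is what lets later steps (schedule snapshots, build-system diffs) be computed per package.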
Now let's move to the demo part. A demo instance has been deployed at transtats.xyz. It is a temporary instance, just to try things out, and we will take it down when the public one is up. The demo instance tracks just one release, that is Fedora 27, and one can go and have a look at these particular settings. I have a small screencast recording which I made while configuring this environment, so let me play it and explain as we go.

On the initial landing page we have three tabs. First is Status: the current status of all the packages which have been registered with Transtats. Second is Coverage: coverage based on different rules, which means graphs based on graph rules, custom graphs that a developer, or anyone with a need for a custom graph, can create. Third is Translation Workload. Currently Transtats only has workload estimation for translation, so that is the Translation Workload tab.

Now let's move further. The translation workload depends upon the release, so the release has to be selected. Let's select anaconda: these are the statistics we get from the translation platform, that is Zanata, and currently we have the Fedora Zanata instance configured for this. Second is gnome-initial-setup: this data is coming from Damned Lies, for the GNOME 3.26 branch, while the upstream statistics are coming from the GitHub repo, that is the actual translation status of upstream. So we have both upstream and this particular branch. We have systemd as well. systemd is currently maintaining their own translations; although they have one project at Damned Lies, they are still maintaining their own translations, so we are relying on upstream statistics, and that is almost 100%.
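All of these per-package numbers ultimately reduce to message counts from PO catalogs. As a hedged sketch of how a completion percentage could be derived from such counts (the function and its policy on fuzzy entries are assumptions, not the actual Transtats implementation):

```python
def translation_percent(translated: int, fuzzy: int, untranslated: int) -> float:
    """Completion percentage of one catalog, counting only fully
    translated messages; fuzzy entries are treated as incomplete."""
    total = translated + fuzzy + untranslated
    if total == 0:
        return 100.0  # an empty catalog has nothing left to do
    return round(100.0 * translated / total, 2)

# e.g. 37 messages done, 3 fuzzy, 10 untranslated
print(translation_percent(37, 3, 10))  # → 74.0
```

The same counts serve both the status tables and the workload estimation shown later.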
That's actually quite handy, and that one is at 100% too. So we can have these things in place: statistics from the translation platform and from upstream are currently being shown.

Now let's move to the graph presentation. This is more eye-catching: it can show you the trends across different branches. For example, for Fedora 25, what was the position, for Fedora 26, what is the position, and likewise for the coming release. You can see the trend for a particular language as well: the trend across different branches for different languages, and you can go back as well. Now let's look at anaconda: the y-axis is translation percentage, what has been achieved, and the x-axis is languages. Here we are comparing upstream and the anaconda branch for each language, so we can actually see the position of the translations.

Now let's go back to the inventory. The inventory is a very important aspect of Transtats because it holds several kinds of data, and the Transtats architecture is divided into three parts. The first part is the inventory, which contains languages, translation platforms, and release streams. The second is jobs: programs which are run against the inventory to generate data. The third is the representation of that data in different forms: tables and graphs. In the inventory we have 10 active languages, two translation platforms, one release stream, and 14 packages, and five jobs have been run as of now.

Now let's move to languages. There are several languages, 10 are available, and each has a locale plus an alias; the alias is required to sync with the translation platform. Plus we can create sets of languages, like a master set and an all-available set; we have two sets here. Here you can see our two translation platforms, with their engine types, aliases, and API details. And in release streams,
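The languages-versus-percentage graph just described needs rows of (language, upstream %, branch %) values. A minimal sketch of building that chart data, assuming per-language percentage dictionaries as input (illustrative, not the actual Transtats graph code):

```python
def compare_series(upstream: dict, branch: dict) -> list:
    """Build (language, upstream %, branch %) rows for every language
    seen in either source; a missing entry counts as 0 percent."""
    langs = sorted(set(upstream) | set(branch))
    return [(lang, upstream.get(lang, 0.0), branch.get(lang, 0.0))
            for lang in langs]

rows = compare_series(
    {"fr": 95.0, "ja": 80.0, "ko": 60.0},   # upstream percentages
    {"fr": 90.0, "ja": 85.0},               # branch percentages
)
print(rows)  # → [('fr', 95.0, 90.0), ('ja', 80.0, 85.0), ('ko', 60.0, 0.0)]
```

Plotting languages on the x-axis and both series on the y-axis gives exactly the upstream-versus-branch comparison shown in the demo.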
you can see that we have one release stream. In packages we have 14 packages, and these packages are currently mixed: some of them keep their translations upstream and some have their translations in Zanata. For each package you can see the sync status for upstream, the sync status for the translation platform, the current release stream for which the translation is being tracked, the mapping, and a settings button. The mapping is required to tie a package to the particular releases, and I'll show how the mapping works.

The statistics have been generated from those PO files: there are 172 locales being tracked.

Let's register a new branch. The difference between a branch and a release is that we can have several release streams, like Fedora and RHEL, and under one release stream we can have several branches; branches are the releases of that particular release stream. So let's add one branch, Fedora 27. The system predicts the slug, which is displayed on the right, and we can attach a language set to the particular branch. It requires an iCal URL to sync with the calendar; you can fetch the iCal URL from the schedule page, from the "your ICS link" item. Just copy and paste that URL into the system and add the release branch. It will parse the data at that URL and fetch all the releases and milestones. So here we have all the milestones. See that the language set is "all available": the Fedora 27 branch is currently mapped with the "all available" language set, that is ten languages, calendar sync is on, and track-translation notification is enabled for this branch.

Now, back in packages, let's add one package. On the right-hand side we can see what slug the system is currently taking.
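The calendar sync hinges on reading milestone dates out of the iCal feed. A hedged sketch of pulling (summary, date) pairs from a simple, unfolded iCalendar export; real feeds can use line folding and richer fields, which this toy parser ignores:

```python
def parse_milestones(ics_text: str) -> list:
    """Extract (summary, date) pairs from a simple, unfolded
    iCalendar feed such as a release-schedule export."""
    events, summary, start = [], None, None
    for line in ics_text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            summary, start = None, None          # start a fresh event
        elif line.startswith("SUMMARY:"):
            summary = line[len("SUMMARY:"):]
        elif line.startswith("DTSTART"):
            start = line.split(":", 1)[1]        # value after the colon
        elif line == "END:VEVENT" and summary and start:
            events.append((summary, start))
    return events

ics = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:String Freeze
DTSTART;VALUE=DATE:20171003
END:VEVENT
END:VCALENDAR"""
print(parse_milestones(ics))  # → [('String Freeze', '20171003')]
```

With milestones like a string freeze in hand, the system can schedule stats snapshots and notifications around them.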
We have to provide the git URL so that it can clone the repo. You can select the translation platform; all the translation platforms which have been added will be shown here. Select one release stream and add the package; the package will be validated against the translation platform. Then in settings we can sync it with upstream and with the translation platform.

Currently the package is being cloned: all the PO files are being filtered and the statistics are being generated. Those statistics are stored as-is in the DB, and we just filter out the data we require. In the logs you can see that 106 PO files have been filtered, so upstream this package is currently maintaining 106 PO files. Now the last sync timestamp is there, and it has been synced with the translation platform as well; both have been synced. Now we can see the statistics for the package: the upstream statistics, plus the statistics we received from Zanata.

Now let's go back and create the mapping; we are mapping branches. We have one release branch in Transtats, that is Fedora 27. We have to map that branch with one of the branches we have from the translation platform, so that we can map the statistics: this branch is responsible for this particular platform version. Once we do the mapping, it will map Fedora 27 to master, just because it is not finding anything nearby; if we had an f27 branch in Zanata or something nearby, it would map to that branch. On Zanata and other platforms, most of the packages are not maintaining Fedora branches, which is why things get mapped to master or something Fedora-like, and that is also why we have to create notifications. Now that branch mapping is in place for some packages, we can see the workload.
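The nearest-branch matching with a master fallback can be sketched as a simple heuristic. This is an assumed illustration of the behaviour described above, not the actual Transtats matching logic:

```python
def map_branch(release_branch: str, platform_versions: list) -> str:
    """Pick the platform version whose name contains the release's
    number, e.g. 'fedora-27' -> 'f27'; fall back to 'master' when
    nothing nearby is found."""
    digits = "".join(ch for ch in release_branch if ch.isdigit())
    for version in platform_versions:
        if digits and digits in version:
            return version
    return "master"

print(map_branch("fedora-27", ["f27", "master"]))  # → f27
print(map_branch("fedora-27", ["master"]))         # → master
```

When the fallback fires, the stats shown for the release branch are really master's stats, which is exactly why a notification to the package owner is useful.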
The workload is the total, summed over all the languages, of what remains to be translated. Currently everything is shown in messages, because from the templates we get statistics in messages and from Zanata we get statistics in messages, so the workload the system shows is in messages.

You can see here that for ibus it is showing zero. This is just because the branch mapping is not correct for ibus. I'll show you how we can identify the ibus branch-mapping issue; we have to enhance that particular logic so that it works in most such cases. We can quickly go to ibus and see: currently this particular branch is not mapped with any of the translation platform versions, so ibus shows zero for upstream and only HEAD; upstream is zero and HEAD is coming from the ibus repo. That is why it shows as not mapped.

In between we can see the percentage which is remaining for different languages; this is more of a per-language percentage representation. In individual languages we can see the statistics per language, for example what percentage of work is actually remaining for a particular language. In Korean we have a larger amount of work, and in Chinese traditional we have some work. And if we look at French for an individual package, we can see the messages which currently have to be done: we are almost done, with 37 translated and a number of messages still to be translated. So here we can see that we can get a workload estimation.

Now let's move to the coverage screen. Coverage depends on a rule: you can create a rule for a release branch, and you can select some packages plus some languages. The rule in this case, let's say, is one we are creating for the rhinstaller packages.
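The workload figure just described is a straightforward aggregation. A minimal sketch, assuming per-language (translated, fuzzy, untranslated) message counts as input; the data shape is an assumption for illustration:

```python
def workload(stats: dict) -> dict:
    """Sum remaining (fuzzy + untranslated) messages per package
    across all languages.

    stats maps package -> language -> (translated, fuzzy, untranslated).
    """
    totals = {}
    for package, langs in stats.items():
        totals[package] = sum(f + u for (_t, f, u) in langs.values())
    return totals

totals = workload({
    "anaconda": {"fr": (37, 3, 10), "ko": (20, 0, 30)},
    "ibus":     {"fr": (0, 0, 0)},   # unmapped branch: no stats, so zero
})
print(totals)  # → {'anaconda': 43, 'ibus': 0}
```

Note how an unmapped package like ibus naturally shows zero: no statistics were attached to its branch, which matches the behaviour seen in the demo.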
We are selecting some of the rhinstaller packages, and then you can select languages from this list or you can pick the mapped language set. I'm picking the mapped language set, and the graph rule has been created, named rhinstaller, as you can see over here. Once you click on rhinstaller it will take you to the coverage graph and generate a graph for you. This is the graph in which we have all the packages; it is a stacked graph, and it gives you an idea of the percentage which has been covered in each language, for each package, for a particular release. So this is very much a custom graph for a custom requirement, and this is our templates coverage. Now we have covered all three: status, coverage, and workload.

We are also in the process of developing the client, so I'll show you a demo of the client; we will be releasing it soon, maybe by the end of this month. For the client, you can specify the server's URL through an environment variable and start it, or you can give it on the command line or in a config file; all three will work. I'll be using the environment variable.

Currently we have four commands: status, coverage, version, and workload. Version will show the version of the client and the server. Status shows the current status of a particular package. So let's say anaconda: running the Transtats status command for anaconda will just dump the JSON for all the branches, the same data which is shown as a graph. Maybe we can add some flag for the representation and change this output to another format; JSON is chosen so that it can be consumed by other services as well. systemd will have just the one upstream entry. Let's try gnome-initial-setup: here we can see the share of different branches in that particular view. These commands are just calling APIs on the server, and whatever data they get back, they put on the console.
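The "environment variable, command line, or config file" behaviour is a classic configuration-precedence pattern. A hedged sketch of how the client could resolve the server URL; the variable name `TRANSTATS_SERVER` and the precedence order are illustrative assumptions, not the documented client interface:

```python
import os

def resolve_server_url(cli_value=None, config_value=None,
                       env_var="TRANSTATS_SERVER"):
    """Resolve the server URL, preferring an explicit command-line
    value, then the environment, then the config file.
    Variable name and precedence are illustrative assumptions."""
    return cli_value or os.environ.get(env_var) or config_value

# Environment-variable style, as used in the demo
os.environ["TRANSTATS_SERVER"] = "https://transtats.xyz"
print(resolve_server_url())
# Command-line value wins over the environment
print(resolve_server_url(cli_value="http://localhost:8080"))
```

Whichever source wins, every sub-command then just issues API calls against that base URL and prints the JSON it receives.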
The coverage command with the rhinstaller rule we created will show all the statistics, language-wise, for each package; the number is a percentage, the percentage of work which has been covered, that is, the coverage of that particular rule. And similarly workload: a workload for the Fedora mainline branch. This is the workload estimation for each package, along with the total estimation, the sum of the figures, and the unit is messages. There is a help sub-command, and we can have a detail view for workload as well: it will dump the very detailed view, the workload for all the packages and for all the languages. So this is the statistics of workload in different languages.

Let's get back to the dashboard. The last link is docs; it contains the Transtats docs, and I still have to add many pages there. It is very short and contains a description of what Transtats is all about. I think the demo is over, so I'll switch back to the slides.

What is next? There are so many things we have to get completed across the coming releases. 0.1.3 has been released; it contains the revamped environment and the pieces that have been created and integrated so far. In 0.1.4 we are targeting core APIs, and the jobs have to be scheduled or automated; that is the release plan. In 0.1.5 we are targeting the jobs engine and notifications.

Now I'll explain what the jobs engine is and why it is required. The jobs engine is going to play a vital role. We have several jobs coming up in Transtats, like syncing with upstream, syncing with the translation platform, and syncing with build systems, so that we can run these operations and get a lot of useful data. We want to do this in a more dynamic way, and that is what is planned with the jobs engine.
We will then define a job with a template like this. This one is a sync job with the build system, that is Koji: what the strategy for exceptions should be, what the name is, what the item type is on which the job runs. Tasks define what should be done, for which build system type, which task is to be followed, and what the type of the job is. Thereby we can make things more generic, and jobs can be scheduled, shared, and saved. These are the kinds of flexibility which can come with the jobs engine. The development for this has started, and soon we can see it functional in coming releases.

I think I'm done with the slides. We have some minutes left, so I'll open it up for questions.

[Question partially inaudible.] As I mentioned earlier, we are moving through the main steps of the release streams, and the next one is managing translation differences. For managing translation differences we have to keep PO files so that we can manage the differences. These are the steps I've identified, and that is exactly the next step, so we'll be dealing with PO files soon.

Thank you so much for the suggestions. Actually we keep thinking about this as well, and the things you have mentioned we have planned to include in jobs, so that we can have custom steps, write those steps by hand, and the system will play those steps for us. So in jobs we will try to cover these things; thank you for all the suggestions.

[Question about resources and discussions with other teams, partially inaudible.] Yes, we have a discussion planned for this month, and we will relay the decisions for the releases to come. Exactly, you can review them. Sure, I'll give you the link.
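The job template just described (name, type, build system, item type, exception strategy, tasks) can be sketched as plain data. All field names and values here are assumptions based on the talk, not the actual Transtats jobs-engine schema:

```python
# Illustrative job-template shape; field names are assumptions,
# not the actual Transtats jobs-engine schema.
job_template = {
    "name": "sync-with-build-system",
    "type": "sync",
    "build_system": "koji",          # which build system to talk to
    "item": "package",               # the item type the job runs against
    "exception_strategy": "halt",    # what to do when a task fails
    "tasks": [                       # ordered steps the engine plays back
        {"command": "download", "what": "latest build"},
        {"command": "filter",   "what": "po files"},
        {"command": "calculate", "what": "translation stats"},
    ],
}

print([task["command"] for task in job_template["tasks"]])
# → ['download', 'filter', 'calculate']
```

Because a job is just data, it can be saved, shared, and scheduled, which is exactly the flexibility the jobs engine is meant to provide, including the custom hand-written steps mentioned in the Q&A.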
Yes, we can; I'll have to set that up. Yes. You can go to transtats.xyz right now and have a look at the things that are tracked, and we can discuss the rest from there. Yeah.