And now, if there are no questions, I'm going to move on to the second part of the talk, which I think is going to be more interesting: what the software is, how you can use it if you want, and how you can deploy it elsewhere if you want. Everything is based on GrimoireLab, which is, of course, free, open source software, produced mainly by the company in collaboration with some specific people who have contributed.

GrimoireLab has a very simple architecture in the end. On one side, you have the repositories: Git, mailing lists, and so on. And we have Perceval. Perceval is basically the data retriever. It can retrieve data from about 20 different data sources related to software development. In your case, that's Gerrit, Git, and Bugzilla. Perceval gets the information, produces JSON documents from it, which include basically all the information in the original data source, and stores those in Elasticsearch. We call that the raw index, because it tries to keep all the information that is in the repository. The reason for that is that once it is in the database, it's much easier to work with: you don't have to go to Git every time you need something, and you don't need to hit the original servers again, because you have everything in Elasticsearch. Perceval can work incrementally, so you can run it now and again ten minutes from now, and it retrieves only the difference. It's very efficient from that point of view, so you can basically put it in a loop and retrieve everything new in the repository.

Then we have GrimoireELK. GrimoireELK takes those raw indexes with all the information and produces indexes specific for Kibana. Basically, those are summaries of the activity, and in them we try to produce the information that we want to represent in Kibana. Again, GrimoireELK stores the new index in Elasticsearch. At the end, we have a fork of Kibana, which we call Kibiter, which basically gets information from Elasticsearch and produces what you saw in the first part of the talk.

Developers have three options to work here. First of all, they can use Perceval themselves: they can point Perceval at their repositories, get the output, and do anything they want with it. It is written in Python, so it's very easy to write simple Python programs that use Perceval. For instance, for most data sources, what you get is a Python generator that you can just call in a loop, and you get all the activity, all the items in the repository. The second option is using the raw index, which basically means querying Elasticsearch to get the same information that is in the data source, but without needing real access to the data source. You don't need to go against the infrastructure, because usually the infrastructure is not designed for people to download everything in it; instead, you can access Elasticsearch. The third one is to go to the enriched indexes. For most things, the enriched indexes are probably good enough; they are already prepared for being queried, and they are more comfortable to use.

You can find more information about GrimoireLab at grimoirelab.github.io, and there you have access to all the components, to all the source code, of course, and to some documentation. Most of the components are here; some of them I already talked about. Perceval is the one retrieving information from the repositories.
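To give an idea of how that generator works, here is a minimal sketch using Perceval's Git backend from Python. The repository URL and clone path are placeholders, and the exact item layout may vary between Perceval versions, so treat the field names as assumptions.

```python
from perceval.backends.core.git import Git

# Placeholders: any Git repository URL, plus a local path for the clone.
REPO_URL = 'https://github.com/grimoirelab/perceval.git'
REPO_DIR = '/tmp/perceval.git'

# The Git backend's fetch() is a Python generator: it yields one
# JSON-like dict per commit, with the raw data under the 'data' key.
repo = Git(uri=REPO_URL, gitpath=REPO_DIR)
for item in repo.fetch():
    commit = item['data']
    print(commit['commit'], commit['Author'], commit['AuthorDate'])
```

Running this prints one line per commit; the same loop works for most other backends, just with different fields under 'data'.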
SortingHat is the one dealing with affiliations: it tries to sort out the affiliation for every person, and it handles unique identities. Unique identities means converting identities into persons, because as you know, people use different identities in different data sources, and even in the same one; for instance, people change email addresses. SortingHat tries to keep track of that. It uses some heuristics, and it can also be fed with manual information, for instance gitdm files or other files that include information about affiliations. GrimoireELK is the one enriching the information and producing the Kibana indexes. Arthur is now in beta, and it is designed to orchestrate everything, to deal with thousands of repositories at the same time; if you are only dealing with tens of them, you probably don't really need Arthur. Kibiter is the fork of Kibana, and Panels is the configuration for it: the actual definitions of the visualizations and so on that you have in the dashboard. So it's basically configuration for Kibiter. There are some more components in the works, but for now it's basically that.

This is the list of backends that Perceval and GrimoireLab are supporting right now. There you can see the ones you are using, but also things like Meetup, Phabricator, Pipermail, Stack Exchange, Supybot, Gmane, and many others.

And this is the main source of documentation right now. In fact, it's a training guide that should get you up to speed to do your own Python scripting on top of all this in maybe half an hour. I'm going to show you some simple examples, but basically here you have how to use Perceval to retrieve information, and how to produce simple dashboards with a couple of commands, and it's literally a couple of commands. All the tools are on PyPI, so you can very easily get started with pip. And I would say that it's easier than it seems at first.

And now, your turn: how you can play with the dashboard. Of course, you can just go to the dashboard and play with it, and see whether you find something curious, or check whether the information is accurate; you can look yourself up and see whether it really corresponds to what you did. You can also play with the dashboard data directly. For that, you currently need a password, but you can ask the foundation; as far as I know, they are very interested in developers using this information, so it's only a matter of sharing the password. You can, of course, produce scripts and link the data to other programs, because you can do all of this in Python. And you can collaborate to improve the dashboard if you want, or if you spot any problem, report it back so that we can improve it.

So, three specific examples of how to use the data. The first one is very simple: it's just downloading the data from the UI itself. On tables, you have this button, which basically means "retrieve all the data from this table in CSV format". You know that CSV can easily be imported into a spreadsheet, for instance into LibreOffice, and you can work with all that information there. This is probably the simplest option, but the dashboard needs to already have the table that you need. If you are looking for information like participation by organization or participation by developer, you already have the table.
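As a quick, hypothetical example of that CSV route: suppose you exported a participation-by-organization table. The file name and column headers below are made up for illustration; they depend on the table you actually download.

```python
import csv

# Hypothetical export from the dashboard UI; the file name and the
# column headers ('Organization', 'Commits') are assumptions.
with open('organizations.csv', newline='') as f:
    rows = list(csv.DictReader(f))

# Rank organizations by number of commits, highest first.
rows.sort(key=lambda row: int(row['Commits']), reverse=True)
for row in rows[:10]:
    print(row['Organization'], row['Commits'])
```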
The second one requires access to the database, and the simplest way in is curl. Elasticsearch has a very simple HTTP interface, a REST interface, so you can use curl or similar tools to access the data. The only extra step here is learning the Elasticsearch query language, which is not that difficult anyway. So this is a very simple query. You can see the query up there; let me mark it on my screen. This part is the Elasticsearch instance, so you have to substitute the Elasticsearch URL for the dashboard. This is the git index, which means: get information from Git. And this is the search query. In this case it's very simple: give me one result, whichever, and pretty-print it, so that the resulting JSON can be read by humans.

And here is what you get. First of all, you get the size of the index; in this case, it was 407. And this is one example of a hit. A hit is each of the documents that is retrieved; I only asked for one, so I get one. Here you can find the kind of information that you have. For instance, for a commit, in this case, you have the hash, the commit, the author, the author date, the commit date, and more. Basically, what you would find if you ran git log with all the parameters to get as much information out as possible. So if you need something like "I want to know how many commits this person did", it's very easy: you just substitute that query with a query for that field, and that's it. It's not rocket science.

And this is how to do the same thing with Python. For Python, you can rely on a couple of nice packages for dealing with Elasticsearch: elasticsearch and elasticsearch-dsl. Both allow you to query Elasticsearch in a simple way. This is actual code for getting information from Git: basically the number of commits per quarter, except for merge commits, I mean, commits not touching the code, since some date, and unified by hash, so that if you have the same commit in several repositories, it only gets counted once. You can see how simple this code is. You just get an instance of an Elasticsearch object; you can see here that I build a Search object saying that I'm interested in the git index; then I add some filters, metrics, and buckets. This is very similar to SQLAlchemy for SQL; you can see the same structure of adding components to the query. And then you have a loop where you basically go through the answer from Elasticsearch, which in this case, as I said, is data per quarter.
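To make that Python route concrete, here is a minimal sketch along the lines of the code just described, using the elasticsearch and elasticsearch_dsl packages. The host URL is a placeholder, and the field names (author_date, files, hash) are assumptions about the layout of the enriched git index.

```python
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

# Placeholder: substitute the dashboard's Elasticsearch URL (and credentials).
es = Elasticsearch(['http://localhost:9200'])

s = Search(using=es, index='git')
# Assumption: merge commits touch no files in the enriched index,
# so requiring files > 0 leaves only commits touching the code.
s = s.filter('range', files={'gt': 0})
# Only commits authored since some date.
s = s.filter('range', author_date={'gte': '2015-01-01'})
# Bucket commits by quarter, counting unique hashes so that the same
# commit appearing in several repositories is counted only once.
s.aggs.bucket('quarters', 'date_histogram',
              field='author_date', interval='quarter') \
      .metric('commits', 'cardinality', field='hash')
s = s[0:0]  # we only want the aggregations, not the individual hits

result = s.execute()
for bucket in result.aggregations.quarters.buckets:
    print(bucket.key_as_string, int(bucket.commits.value))
```

The structure is the one described above: build a Search object on the git index, add filters and buckets, and loop over the per-quarter answer.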
And if you want to have a try at something quite similar, done with the same software, for random repositories on GitHub, you can use cauldron.io. That's another venue, and all of this is done with the same software that is used for the Document Foundation dashboard; the thing is that there, for free, you can go and analyze any GitHub organization that you may want. You only have to go there and log in with your GitHub account, because the GitHub API uses tokens and we need to use your token. But otherwise, you get a complete dashboard for that project. That's if you want to play a bit with the tools without actually having to install them. Enjoy.

So these are the links to the software and to the dashboard. And that's all from my side. I don't know if you have questions or comments, or whether you just want to finish. Okay, I have a scarf for whoever asks a question, even "what's the time" or whatever. Okay, you get the scarf.

So, what's the most interesting thing that you found so far?

Honestly, we didn't look a lot at what there is to find in it, because we were busy trying to produce it. For this talk I was looking at it, and maybe what surprised me most is the structure that you have in Bugzilla. My personal impression, from looking at the data, is that either a bug is fixed in, say, a couple of months, or it sits there forever. That's my very personal opinion. If you look at the transitions between states, you can see a lot of issues going from whatever state to fixed or closed, but there are also a lot of them that stay there for a while, and it seems that after some time they have very little chance of being fixed. I don't know, but when we were looking at the data, that was something that stood out.

The other thing is probably with respect to Gerrit: you have a very short time to merge compared to other projects. Again, you saw that there are some code reviews sitting there for one year, but if a change is dealt with during the first two or three months, the time to merge is very, very short compared to other projects. So, in summary, I would say that, but we didn't do a full analysis of the project. Okay, anything else? Any other comment? Thank you very much.