Hello everyone, my name is Elwin Waman, and today I will present Qichwabase: when Qichwa meets Wikibase. The outline of my presentation starts with an introduction, where I state my motivation and the problem I want to solve. Then I present the approach, that is, how we are tackling this problem. Third, I show the results, basically Qichwabase and its feasibility, both on the community side and on the technological side. And finally I present some use cases where Qichwabase can be used.

Starting with the introduction, I will tell you why we are doing this. The point is that knowledge in our lives can be studied from different points of view, and depending on how it is spread it can produce positive or negative results, which affect the development of nations. The two pictures at the bottom show two examples of how knowledge is spread. On the left you can see the Qichwa community trying to preserve the snow-capped mountains, because we consider them gods or, in some cases, guardians. In the middle image you can see a mountain that was basically consumed, because it was considered a resource, like a mine of gold or some other mineral. So which knowledge is well spread? When you see a mountain, do you see a resource or a guardian? Which of those views is well spread in your nation or in your country?

Then, why are we doing this? Because Qichwa is an endangered language, even though it still has about 10 million speakers around the world. These speakers are mainly concentrated in six countries in South America. However, there are only a few resources available for the Qichwa community, and they are not present as linguistic linked open data, because there is no consensus between the different Qichwa varieties in South America or in the world. On the map on the right side you can see the distribution of the Qichwa communities in South America.

So what is the problem? The problem is that the new generation of Qichwa families is not speaking Qichwa anymore. My generation is probably one of the last generations that can speak and understand Qichwa. And technologically speaking, technologies are not developed for Qichwa communities. Basically, everybody in the world assumes that in South America we speak Spanish and that this is enough for us, but this is not true. In our identity, in our Qichwa heritage, we first speak a native language, which is Qichwa, Aymara, Shipibo, or one of the 48 native languages of Peru, for example.

Then, how are we trying to solve this? We propose an approach that consists of four steps. The first one is the identification of adequate sources: when we talk about data, we are collecting corpora, dictionaries, bilingual texts, and everything that can be preserved. Then, where do we collect this? We have to choose a database or knowledge base where we can store it. At first we were thinking about a Wikibase, Wikidata, a Wikipedia, and so on. But we wanted something that can make use of semantic technologies, because this is very useful if you want to develop personal assistants, or go a little bit further and not only store the data but also make use of it. We also wanted something that is community driven, because we do not want to just store the knowledge and keep it protected and locked away.
We want the knowledge to be built by the community and exploited by the community, and we want something that is open source as well. That is why we use Wikibase.

In the second step, we process the data and set up the knowledge base. On the knowledge-base side, we set up Qichwabase and program a bot to ingest the data. Regarding the data, we first identify the sources, and then we normalize, clean, and refine the data. So basically we have Qichwabase set up and a bot that can ingest, clean, and refine the data.

Then we model and populate the knowledge base. We describe, for example, lexical entries, lexical forms, and so on, but it can also integrate knowledge like biographies, persons, locations, places, and so on. And then we populate them. For ingesting lexemes we use WikibaseIntegrator, a Python library, to build a bot that ingests data into Qichwabase (a minimal ingestion sketch is shown at the end of this section). You can see in the image at the bottom the data ingested after the 3rd of June, and also some command lines at the top right.

Then the exploitation is very simple, because you can access Qichwabase through the link qichwa.wikibase.cloud, run SPARQL queries against it, and exploit it in different use cases like language-learning resources, dialogue systems, or NLP tasks.

One of the results is Qichwabase itself, which you can access from your desktop or mobile through the link qichwa.wikibase.cloud. Currently there are around 25,000 items ingested into Qichwabase, which so far contains linguistic data such as lexemes for different parts of speech (substantives, verbs, adjectives, and so on), but the plan is to also ingest data like biographies, places, and restaurants of the Qichwa towns in Peru.

For example, here is a query that shows lexemes that have multilingual sense descriptions (a query sketch is given below as well). We have a lexeme here, L106, where the lemma is achka, which also has sense glosses in different languages like German, Spanish, English, and so on. There is an example of how this lemma is used in context in some phrases, and also where this example was found. For that we developed a bot and ingested the data accordingly.

Then, regarding the feasibility of Qichwabase: on the community side we talked with the community, and on the technological side we basically exploit the features of Wikibase. Wikibase provides means to model the schema and to populate it, and it can also be integrated with OpenRefine to clean or interlink the data. On the social and organizational side, we talked with the Qichwa community, and we think they can build a collaborative mentality where Qichwa speakers and users are better off if they participate and contribute. Because if we have a knowledge base in the Qichwa language, we can exploit it, and if it is correct, we also get correct results. This is very good for the community, because we can work on it together.

Some use cases that can make use of Qichwabase: for example, Google can index Qichwabase and present Qichwa pages or Qichwa results for some questions. I also propose that, once it is indexed, it can be used by search engines to provide knowledge panels with summarized content, like in this case Peru on the right side: a simple infobox that contains important information about a place.
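As an illustration of the ingestion step, here is a minimal sketch of how a single lexeme with multilingual glosses might be written to Qichwabase with the WikibaseIntegrator Python library. The item IDs for the language and lexical category, the bot credentials, and the example glosses are placeholders for illustration, not the actual values used in our bot.

```python
from wikibaseintegrator import WikibaseIntegrator, wbi_login
from wikibaseintegrator.models import Sense
from wikibaseintegrator.wbi_config import config as wbi_config

# Point the library at the Qichwabase instance on Wikibase Cloud.
wbi_config['MEDIAWIKI_API_URL'] = 'https://qichwa.wikibase.cloud/w/api.php'
wbi_config['SPARQL_ENDPOINT_URL'] = 'https://qichwa.wikibase.cloud/query/sparql'
wbi_config['WIKIBASE_URL'] = 'https://qichwa.wikibase.cloud'
wbi_config['USER_AGENT'] = 'QichwabaseBot/0.1 (example)'

# Log in with a bot account (placeholder credentials).
login = wbi_login.Login(user='ExampleBot', password='example-bot-password')
wbi = WikibaseIntegrator(login=login)

# Create a new lexeme; the item IDs for the language (Qichwa) and the
# lexical category are placeholders and depend on the local schema.
lexeme = wbi.lexeme.new(language='Q123', lexical_category='Q456')
lexeme.lemmas.set(language='qu', value='achka')

# Attach one sense with glosses in several languages.
sense = Sense()
sense.glosses.set(language='en', value='many, much')
sense.glosses.set(language='es', value='mucho')
sense.glosses.set(language='de', value='viel')
lexeme.senses.add(sense)

created = lexeme.write()
print('Created lexeme:', created.id)
```

A real ingestion run would loop over the cleaned and refined source entries, but the write path per lexeme would look roughly like this.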
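And here is a sketch of the kind of SPARQL query mentioned above, sent to the public query service to list lexemes together with their multilingual sense glosses. The endpoint path is assumed from the standard Wikibase Cloud layout, and the property paths follow the standard Wikibase lexeme RDF model; treat both as assumptions rather than the exact query shown on the slide.

```python
import requests

# Public SPARQL endpoint of Qichwabase (standard Wikibase Cloud path).
ENDPOINT = 'https://qichwa.wikibase.cloud/query/sparql'

# Lexemes with their lemma and sense glosses, restricted to a few
# gloss languages, using the standard Wikibase lexeme RDF vocabulary.
QUERY = """
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?lexeme ?lemma ?gloss WHERE {
  ?lexeme a ontolex:LexicalEntry ;
          wikibase:lemma ?lemma ;
          ontolex:sense ?sense .
  ?sense skos:definition ?gloss .
  FILTER(LANG(?gloss) IN ("en", "es", "de"))
}
LIMIT 20
"""

response = requests.get(
    ENDPOINT,
    params={'query': QUERY},
    headers={'Accept': 'application/sparql-results+json'},
)
response.raise_for_status()

# Print each lemma with one of its glosses.
for row in response.json()['results']['bindings']:
    print(row['lemma']['value'], '->', row['gloss']['value'])
```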
Qichwabase can also be used to develop personal assistants, and there is an example in a YouTube video that you can access through the link in the presentation. There are also chatbots that could be developed for Qichwa if you feed them with a Qichwa knowledge graph or knowledge base.

To validate Qichwabase we also talked with the community, and for that we organized several events. I traveled to Peru and organized several workshops with students, teachers, and the community. Their first impression was that Qichwabase is perfect for them; however, they need technical skills to use it. They can provide insights, for example by recording audio for the lemmas or items contained in Qichwabase, by validating the correctness of the examples in Qichwabase, and also by populating Qichwabase with, for example, descriptions of places, stores, paths, hiking routes, and so on. They want to do this, and they think Qichwabase is the perfect place for doing it, because it is something we are developing together and they feel we are on this path together. And of course the final result can be integrated into several Wikimedia projects like Wikipedia, Wikimedia Commons, and so on. This is the collaborative part, but the first step we want to take in the Qichwa community is to build this and prove that it is good enough. Once we have the community, we can keep on collaborating with Wikipedia, Wikimedia Commons, and the other Wikimedia projects.

I also want to thank all the collaborators who made this talk and this event possible. And I want to finish by saying that the Qichwa communities are alive. We are in the mountains, in the Andes of Peru, but we are also in the world. Thanks for your attention.