Yeah. Welcome to our presentation. I am Mustafa Jarrar. I did this work with my colleague, Mustafa Al-Hajj, from the Lebanese University. Before actually starting, I would like to mention that we at Birzeit University have several lexical resources that we developed over the past years, such as a lexicographic database with about 150 lexicons, the Arabic Ontology, which is an Arabic WordNet, and also several annotated dialectal corpora.

For those who are not familiar with WSD, word sense disambiguation, the task is the following: given a word in a context like this one, we would like to know which meaning, or which sense, this word denotes among a set of senses. Our contribution in this paper is that we used BERT for WSD. We first developed a dataset that consists of 167,000 context-gloss pairs, labeled something like this: you have a gloss and you have a context, and there is a target word, where we want to label whether this is true, that is, whether this gloss is the sense of this word or not, and so on. We extracted this dataset, then fine-tuned three BERT models on it, and achieved an accuracy of 84%. So in this way, we converted the word sense disambiguation task into a binary sequence-pair BERT classification task.

There is related work, which you can see in the paper, but I would just like to say that the use of BERT in this way is very recent, and most of it was done for English. Most of the work that was done for Arabic used static embeddings.

Our dataset was constructed from the Birzeit University lexicographic database, which contains the Arabic Ontology and about 400,000 dictionary definitions. But these are raw definitions, so we had to extract pairs from them. We extracted about 60,000 pairs, which we consider true pairs. From the true pairs, we generated false pairs by cross-relating the contexts and the glosses.
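The cross-relating step just described can be sketched as follows. This is a minimal illustration with toy English data, not the actual extraction code or dataset: each context is paired with its own gloss as a true example, and with the glosses of the word's other senses as false examples.

```python
# Sketch of generating true/false context-gloss pairs by cross-relating
# contexts with the glosses of other senses of the same word.
# All names and data here are illustrative.

def build_pairs(senses):
    """senses: list of (gloss, contexts) entries for one ambiguous word.
    Returns (context, gloss, label) triples: label 1 when the gloss is
    the correct sense for the context, 0 otherwise."""
    pairs = []
    for i, (gloss, contexts) in enumerate(senses):
        for ctx in contexts:
            pairs.append((ctx, gloss, 1))  # true pair
            for j, (other_gloss, _) in enumerate(senses):
                if j != i:
                    pairs.append((ctx, other_gloss, 0))  # false pair
    return pairs

# Toy example with the English word "bank":
senses = [
    ("sloping land beside a river", ["we sat on the bank of the river"]),
    ("a financial institution",     ["she deposited cash at the bank"]),
]
pairs = build_pairs(senses)
```

With two senses and one context each, this yields two true pairs and two false pairs; the ratio of false to true pairs grows with the number of senses per word.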
These are statistics about our dataset: 26,000 unique undiacritized lemmas, 32,000 unique glosses, 60,000 unique contexts, and so on. We also divided the dataset into training and testing. The majority is of course for training: 152,000 pairs for training and 15,000 for testing. It is very tricky and very important to note that every context selected for the test set was not used in training, while a gloss could be used in both training and testing. The dataset and the tuned models can be downloaded from this link.

In order to tell BERT to focus the learning process on the target word in a certain way, we had to annotate the target words in context, which was done semi-automatically and then verified manually for the 60,000 contexts.

We fine-tuned three BERT models on this task, binary sequence-pair classification. As you see here, the three models are AraBERT, CAMeLBERT, and QARiB, and AraBERT achieved the best results. We then fine-tuned BERT again with different taggings of the target words in context, but we could not get more than one percent improvement here. And as you see, the scores are also not bad.

To sum up: we showed how to tackle the WSD task as a binary sequence-pair BERT classification task. We constructed a dataset of 167,000 pairs labeled as true or false, and we annotated the target words; we consider this a relatively large dataset. We fine-tuned the three models and achieved 84% accuracy. This is the end of my talk. Thank you very much for listening, and I am happy to take some questions. Thank you.
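The input formatting behind this setup can be sketched as follows. This is an assumed illustration of how a context-gloss pair might be packed into a BERT-style sequence-pair input with the target word surrounded by marker tokens; the marker names ([UNUSED0]/[UNUSED1]) and the helper are hypothetical, not the exact tagging scheme used in the paper.

```python
# Hypothetical sketch: pack a context-gloss pair into a BERT-style
# sequence-pair input, marking the target word so the model can attend
# to it. Marker token names are illustrative.

def pack_pair(context_tokens, target_index, gloss_tokens):
    # Surround the target word with marker tokens.
    marked = (context_tokens[:target_index]
              + ["[UNUSED0]", context_tokens[target_index], "[UNUSED1]"]
              + context_tokens[target_index + 1:])
    # [CLS] context [SEP] gloss [SEP], as in standard sentence-pair input.
    tokens = ["[CLS]"] + marked + ["[SEP]"] + gloss_tokens + ["[SEP]"]
    # Segment ids: 0 for the context half, 1 for the gloss half.
    segments = [0] * (len(marked) + 2) + [1] * (len(gloss_tokens) + 1)
    return tokens, segments

tokens, segments = pack_pair(
    ["she", "deposited", "cash", "at", "the", "bank"], 5,
    ["a", "financial", "institution"],
)
```

A binary classifier head over the [CLS] position then predicts true or false for the pair, which is exactly the binary sequence-pair classification framing described in the talk.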