Hello everyone! My name is Thadros Diakonides, and I would like to present my package, ScreenMed R, a package for automating the screening of publications for systematic reviews and meta-analyses. To start, I would like to thank the organizing committee for giving me the opportunity to present my program.

Let me begin with the idea behind the ScreenMed R package. Its task is to find all the publications relevant to a meta-analysis study as quickly and as accurately as possible, with the program doing the screening instead of someone reading all the abstracts or publications one by one to end up with the most relevant ones. To accomplish this task, the program runs an unsupervised machine-learning algorithm which, in conjunction with cosine similarity, provides the user with the most relevant publications, abstracts in our case, for his or her study. The program works through PubMed: the output of a PubMed database search is the input for the program. At the end of the day the user ends up with a small number of publications, about 30% of the initial ones, for manual inspection, and the user can reduce this number further with some extra functions that are already included in the ScreenMed R package, which I will show you shortly.

So how does the program work? The input is, as I said before, the output of a PubMed database search, usually a CSV or TXT file which includes the PMID numbers. The PMID numbers are what the program needs as its input. I will also provide a video vignette in which I implement a specific case, so that one can become more acquainted with the program.
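To make the input step concrete, here is a minimal sketch of pulling PMIDs out of a PubMed CSV export. The package itself is written in R; this Python sketch is purely illustrative, and the column name `PMID` and the sample layout are assumptions about the export format, not a specification of it.

```python
import csv
import io

# A toy stand-in for a PubMed "Save as CSV" export. The column name "PMID"
# and the overall layout are assumptions for illustration only.
sample_export = """PMID,Title
31234567,Example title one
31234568,Example title two
"""

def read_pmids(csv_text):
    """Extract the PMID column from a CSV export of a PubMed search."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["PMID"] for row in reader]

pmids = read_pmids(sample_export)
print(pmids)  # ['31234567', '31234568']
```

The list of PMIDs is all the program needs to retrieve and compare the corresponding abstracts.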
So this is the first input. The other input for the program is four or five publications, or rather the PMID numbers of publications that the user is quite sure belong to his or her study. These two inputs, together with the number of groups into which the user would like to divide the total set of publications at the beginning, make up the input for the program. The program then divides the abstracts of all the publications into the user-defined number of groups in terms of text similarity. The measure for this text similarity is cosine similarity. Cosine similarity is a number between zero and one: if it is very close to one, or equal to one, the two texts are identical; if it is zero, there is no connection between one text and the other. So you end up with a number between zero and one for each group. The output of the ScreenMed function is these cosine similarities: if you have, for example, three groups, you get three numbers between zero and one, and the group with the largest cosine similarity is the winner, that is, the most relevant group of publications. Is it safe to discard all the other groups and keep only the publications of the group with the largest cosine similarity? It is safe when the difference between the first, the winning group, and the second is greater than 0.2; in that case you can discard the second, the third, and all the rest, and keep only the first.
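The cosine-similarity measure and the 0.2 "safe to discard" rule can be sketched in a few lines. This is a minimal term-frequency illustration in Python, assuming plain word counts as vectors; the package's actual R implementation may weight terms differently.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of two texts using plain term-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Identical texts score 1.0; texts with no words in common score 0.0.
print(cosine_similarity("randomized trial of aspirin", "randomized trial of aspirin"))  # 1.0
print(cosine_similarity("randomized trial of aspirin", "galaxy cluster dynamics"))      # 0.0

def safe_to_discard(group_scores, margin=0.2):
    """True when the winning group leads the runner-up by more than `margin`."""
    ranked = sorted(group_scores, reverse=True)
    return ranked[0] - ranked[1] > margin

print(safe_to_discard([0.78, 0.41, 0.35]))  # True: keep only the winner
print(safe_to_discard([0.55, 0.48]))        # False: the difference is too small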
This is a round of a screen map function we will apply this screen map function again in a second round taking this time as an input the output of the first run and keep going until there are you cannot find the cosine similarity greater than 0.2 This is the idea it takes a very few it takes a couple of minutes to in case for example a modern computer let's say 1000 abstracts you can run the whole thing in less than a minute ended up with a smaller group of publications and you keep running it and you can end up with let's say 30 or 20% of the initial ones after that it would be quite difficult to have such a difference such a cosine similarity difference and it's not working more or less it's not safe to work extra functions that are included in the packets this mesclin bq function what it actually does let's say that you have a group of publications and you want actually to to have in common d descriptors and q qualifiers more than d descriptors and more than q qualifiers with the publications of a comparing group let's say that the comparing group is this 405 publicans that we had and you enter as an input another bigger group and you want to see how many pages of the bigger group have these numbers in common so in this way you filter even your publication your publications in terms of the very relative comparing group another function is the mess by name this is more specific let's say it has to do with the name of the mess term if someone is familiar with mess terms there are two parts the descriptor and qualifier of a mess term we will show everything also in the video vignette so you define the exact name of the descriptor and the qualifier that you want your publication to include and the program actually filters all the publications that you enter to the ones that have these specific descriptors and qualifiers so this is the whole idea of the program and these are the functions that are included and someone can find everything in this web page it's 
actually in kit hub you can download it and install it from here and there is also a vignette pdf vignette that is more analytical and you can find a case study here and this thing the program actually was implemented to this meta analysis here and you can find more information for this specific program in the appendix of this public case that is more or less what i would like to say if somebody has any more anything more if you want wants to learn more about the program i would be very glad to help him thank you very much for your time goodbye