So hello everybody, welcome to this Wikimedia session about alternative texts to images with Structured Data on Commons. I'm Mathieu Lovatov-Sturmfgunz; you can find me online as CycosLive within our Wikimedia projects, and I am a member of the Wikimedia Accessibility User Group. By the end of this session, hopefully you will be able to understand what we are talking about here, why it matters, and how to make things go forward.

So first, some context to better understand why it matters. Within our movement we have a vision, which is that every single person on Earth should have access to the sum of all knowledge. With a narrower focus, we have the 2030 strategy, which states that we want to welcome people who were left out by structures of power and privilege, to lead them to a situation which is better from a knowledge equity perspective, and at the same time to make our community more diverse and break down the social, political, and technical barriers that prevent people from accessing and contributing to our projects.

So what is a textual alternative? We have the definition here given by the W3C in its accessibility guidelines, which states a lot of things, but I will just focus here on the fact that it should serve the same purpose as visual or auditory content, but be provided as text. So even if we are more focused here on images, you can keep in mind that it also applies to videos and to audio content.

We also have to better understand, or at least highlight, what it means to be blind or visually impaired. For that I took here a part of the definition given by the National Federation of the Blind, which is an American association; it highlights the fact that it's about people using alternative methods to access and manipulate information, in our case.
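To make the W3C notion of a text alternative concrete: on the web it usually takes the form of the `alt` attribute on `img` elements. Below is a minimal sketch, using only Python's standard library, of an audit that lists images on a page whose alt text is missing or blank; the class and function names are my own, not from the session.

```python
from html.parser import HTMLParser

class AltAuditParser(HTMLParser):
    """Collect the src of <img> tags whose alt attribute is missing or blank."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        # An absent or blank alt leaves screen-reader users guessing.
        if not (attrs.get("alt") or "").strip():
            self.missing_alt.append(attrs.get("src", "<no src>"))

def images_missing_alt(html: str) -> list:
    """Return the srcs of images on the page that lack a usable alt text."""
    parser = AltAuditParser()
    parser.feed(html)
    return parser.missing_alt

page = (
    '<p><img src="butterfly.jpg">'
    '<img src="palangan.jpg" alt="A village girl in Palangan, Kurdistan, Iran"></p>'
)
print(images_missing_alt(page))  # only butterfly.jpg lacks an alt text
```

One caveat: an intentionally empty `alt=""` is valid HTML for purely decorative images, so a real audit would treat that case separately; this sketch simply flags both for human review.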
And we also have the World Health Organization, which gives us some statistics: there are more than 2 billion people who have near or distance vision impairments, more than 100 million people with low vision, and more than 40 million blind people. So that makes a lot of people for whom we need to be sure that we provide alternative methods to access the knowledge we are providing to the world.

Just a little focus on the Wikimedia Accessibility User Group: we were born in May, during the last hackathon. We have already exchanged more than 1,000 messages within our Telegram group, where we are more than 50 people, and we were able to identify 8 main scopes and 6 topics on which we are currently working, including this very one we are talking about right now.

To better grasp what we are talking about, we can make a little experiment. I will read out loud two descriptions of two different images; you can close your eyes if you want to extend the experience. The first states: "O'Malley 34794 Czech McCoy, Istanbul, Turkey". You will pardon my pronunciation, especially about locations; it's not something that is always easy to come up with, and it's another possible accessibility issue that we won't deal with today, but it's still interesting. And the second description is: "A village girl, Palangan, Kurdistan, Iran. At the beginning of spring, according to their customs and culture, Kurdish people celebrate a celebration called Nowruz. Fire at Nowruz and Kurdish clothes are symbols of Kurdish culture."

And here are the two images that were just described. If you tell me that you were expecting a butterfly for the first one, I will be very surprised. And the second one came with a description which is already far better, but still, we are missing a lot of details: the girl is holding a flame, she has a lot of color in her clothes, she has a specific hat (I don't know its name), we see that there are people in the background, we have a specific architecture, and we have some background with a landscape, maybe mountains, I'm not sure. That's a lot of details that are not given within the description we just saw. And that can be fine in certain contexts; it's a point where we can state that we don't necessarily provide the same alternative text for the same picture in all contexts, but we should always have at least one default description to fall back to, and that is not the case today.

By the way, the second picture took third place in the 2019 Picture of the Year contest, so it's a picture that received a lot of exposure, and many people were able to access it; still, we don't have the greatest description for it.

So what can we do to improve this current situation? Well, it would be nice to have a dashboard of statistics about descriptions: how many images currently have one, and maybe an assessment of their quality, if there is one. If something like that exists today, I'm not aware of it. We can also improve the state of Structured Data on Commons, that is, data stored in a Wikibase; that's something that is already integrated within Wikimedia Commons, but which is still often empty. We can develop partnerships to try to improve this situation. We might maybe even use some artificial intelligence to leverage an existing description. More generally, we can have some structural process through which we could achieve better things.

But for that I will now pass over to Kasten. Kasten Hein is a documentary filmmaker, writer, and photographer; he has worked with blind people on photo projects since 2011, and sees Wikipedia as embodying the good side of the internet.

So, in the group of blind and sighted people in our photo studio, we have seen some incredible stories. The most spectacular one is blind people who are constantly dealing with image descriptions developing an image comprehension and image knowledge that surpasses that of many sighted people. They are proving and developing competence in
their jobs as blind museum reporters and in their work as blind photographers. A single image description has not much impact, but when image descriptions are omnipresent, when every image that is around is described, the situation changes completely, and blind people get access to photography; they get access, or re-access, to the visible world. Imagine an internet with all images described: by casually hearing or reading alt texts, blind people can get an intuitive understanding of what is visible. We want the internet to become accessible for blind people, and we want Wikipedia to be at the forefront of this enterprise.

In order to make Wikipedia accessible for blind people, we want to supply its images with short alt texts and, successively, with longer image descriptions. I think everybody understands that, and everybody agrees with it. But providing every picture with an adequate alt text takes a lot of time; every alt text needs some minutes of work. And even if we aren't talking about the uncountable number of images on the internet, we are still talking about 70 million images in Wikipedia. How can we reach that goal?

First, we need many volunteers, and we need a good environment. But I am still new here, and I am still coming to understand the structure and the possibilities of Wikipedia; therefore I am thankful for every bit of help. In our specific project, we reach out to schools, universities, community colleges, etc., to let the students of art and art history write and edit alt texts and image descriptions for Wikipedia. In all of these subjects, images and descriptions are on the curriculum anyway, and there are good pedagogical reasons too: describing an image with a person in mind who is not able to see it makes us look more closely and sharpens our perception; it precisely develops our articulacy. And to know that you are helping blind people is an additional incentive. Our experiences with students as image describers are very good: you have to tell them to do so, but once they are doing it, they even like it; it's a pleasant task.

Our project is supported by the federal office for accessibility and many organizations for the blind in Germany. Their goal is the accessibility of the internet, and we convinced them that Wikipedia is the best place to start. We are also supported by some German museums: due to the accessibility policy of the European Union, they are obliged to get their images described, and they are interested in getting their images presented in Wikipedia, because the Wikipedia articles are often visited more than their own websites, even more so since Wikipedia's infoboxes are featured in Google search. Google prioritizes images in Wikipedia articles, and it prioritizes images with alt texts. So in terms of search engine optimization, everybody who wants their image to appear further up in Google search should attach an alt tag to it.

Finally, we are also reaching out to tech companies. They are developing image description AI, and they need what they call ground truth, i.e. image descriptions and corrections made by people, to feed and train their algorithms. Image description AI is lagging far behind image recognition AI and translation AI, but if we want all images to be described, we will need automatic descriptions. In order to facilitate these corrections, it will be necessary to structure the ground truth; therefore Structured Commons would be vastly helpful. This would furthermore allow the combination with translation AI, which is also far better developed than image description AI and which can also help to develop it further.

But I understood in Sunday's session on Structured Commons that it is designed to help authors find fitting images for their articles. They may search for pictures by Leonardo da Vinci; they will not search for all pictures with smiling ladies in a Mediterranean landscape. So currently, as I understand it, there is no need for visual search, and hence no need for alt text. How could we bridge that gap? Of course, if we needed alt texts of defined qualities, we would
write them. In order to make Wikipedia accessible for blind people: one, we need very many volunteers to write alt texts and image descriptions, and how could that be organized? Two, we suggest an accessibility editorial team. Three, we have to store alt texts and image descriptions for blind people on Wikimedia Commons, which is not yet the case. Structured Data and Structured Commons should get every support available, in my point of view.

So now we are nearly done, and I have to announce our unconference, which is taking place in a minute in the Rainbow room, building 6, floor 5. That's what I had to say. Back to you, Matthew.

OK, thank you. Can I also show my... I don't know if the slide is still being shown. Yes? Great. So I wanted to add some thanks, first especially to the Wikimedia staff that make all that possible, then to the Wikimedia Accessibility User Group and the Wikimedia community at large. I wanted to thank you for your attention, and to make you further aware that you can join us in our Telegram group if you want to discuss this in the long run. I would also like you to maybe try to play a bit: there is a random media link on Commons; try to click it a few times and look at the descriptions. That may be fun, and maybe you could improve the descriptions where there is room for that.

And finally, as said, we are going into an unconference just after this; it will be in building 6, floor 5. And we still have... one minute? Yes, I guess. So here are the credits for the pictures that were used and not yet credited. And if you have a question... let's see, 30 seconds left. So if you have a question, I invite you to join us in the unconference. Thank you very much.
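A small technical footnote to the structured-data discussion above: the captions mentioned there are stored as multilingual labels on MediaInfo entities (M-ids) in the Wikibase instance behind Wikimedia Commons, and they can be fetched with the standard `wbgetentities` API. The sketch below only parses a response of that shape offline; the sample payload and the helper name are illustrative, not taken from the talk.

```python
def caption_for(entity_payload: dict, entity_id: str, lang: str = "en"):
    """Return the caption (Wikibase label) of one MediaInfo entity,
    falling back to any available language, or None if the entity
    has no caption at all -- the common case the talk laments."""
    labels = entity_payload.get("entities", {}).get(entity_id, {}).get("labels", {})
    if lang in labels:
        return labels[lang]["value"]
    for other in labels.values():  # fall back to the first available language
        return other["value"]
    return None

# Illustrative payload mimicking the shape returned by
# https://commons.wikimedia.org/w/api.php?action=wbgetentities&ids=M...
sample = {
    "entities": {
        "M123": {
            "labels": {
                "en": {"language": "en",
                       "value": "A village girl in Palangan, Kurdistan, Iran"},
                "fa": {"language": "fa", "value": "(Persian caption)"},
            }
        },
        "M456": {"labels": {}},  # an image with no caption yet
    }
}

print(caption_for(sample, "M123"))  # the English caption
print(caption_for(sample, "M456"))  # None: nothing for a screen reader to fall back to
```

Because the labels are multilingual, the same structure is also what would let translation AI, as suggested above, carry one human-written caption into many languages.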