Hi everyone, my name is Kai, and thank you for joining us today for this presentation on accessing complex images for readers who are blind or visually impaired. I'll be discussing several methods, in addition to image descriptions, that readers can use to intuitively access charts, graphs, maps, and diagrams.

Many of you are probably familiar with image descriptions. If not, that's totally fine; we have several presentations and resources on what they are and how to do them well. Simply put, image descriptions are text descriptions that convey, to someone with a print disability such as a reader who is blind or visually impaired, the same or equivalent information that a sighted reader would get from looking at a picture. Image descriptions can be included in digital content in two ways: alt text (short descriptions) and longdesc (long descriptions). Depending on the type of image (photos, graphs, and cartoons, to name a few), some will require lengthier descriptions, while others can be described with a short phrase.

I think many of you would agree that language is versatile and can communicate concepts and ideas quickly, but it may not be very intuitive for communicating the precise spatial elements and relationships that are vital to the reader's understanding of things like art, cell structures, DNA, the shape of a graph, and assembly diagrams. This is where alternative methods like sonification, tactile graphics, and 3D models can be more precise. These approaches can be more intuitive for some readers and can convey spatial information much more efficiently.

Before I discuss each of these methods in greater detail, I would like to talk a bit about markup in the context of images. I've seen some publishers provide tables and math equations as pictures. Some have added image descriptions, while others have not. But even with image descriptions included, this is problematic.
This is because the description will be imprecise and lengthy, which can overwhelm the reader. So instead of providing image descriptions for tables and equations, I recommend marking tables up with semantic HTML tags and using MathML if you're dealing with equations.

Let's now look at an example of table navigation. Right now I've shared my screen, and you can see a long table with several columns and rows. If I were to access a text description that laid out this entire table, it would be incredibly lengthy and difficult to read. But because the table is marked up, I can move from cell to cell and row to row and get individual pieces of information that are much easier to process. I'm going to slow down my speech, and now I'm just going to move around. I've moved up, and you can hear it read "one," which is trial one of this table showing reaction times to visual and auditory stimuli. As I move around, you can hear each individual piece of information, and because I can move down and to the left, I can understand spatially how each cell relates to the others.

Let's now turn to equations. Here we have a document that shows a typical quadratic equation. If this were provided as an image, the spoken rendering might be incorrect or inconsistent. With MathML, everything is standardized, and the equation can sound like this. Again, I can look through each element of the equation individually, and as I continue to drill in, I can get each piece of information.

Next, let's talk about sonification. Sonification refers to the use of sound other than speech to convey information, such as the height of a bar or the curve of a line. For example, the pitch of a musical note can indicate the height of a bar on a bar graph, or a sine wave can indicate the shape of a line on a line graph.
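As a rough sketch of the markup approach described in the table and equation demos above, here is a simplified table in semantic HTML together with the quadratic formula in MathML. The caption, headers, and cell values are invented for illustration; they are not the actual data from the demo.

```html
<!-- Simplified reaction-time table; headers and values are invented. -->
<table>
  <caption>Reaction time (ms) to visual and auditory stimuli</caption>
  <thead>
    <tr><th scope="col">Trial</th><th scope="col">Visual</th><th scope="col">Auditory</th></tr>
  </thead>
  <tbody>
    <tr><th scope="row">1</th><td>250</td><td>170</td></tr>
    <tr><th scope="row">2</th><td>240</td><td>165</td></tr>
  </tbody>
</table>

<!-- A quadratic equation (here, the quadratic formula) as MathML instead of an image. -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
      <msqrt><msup><mi>b</mi><mn>2</mn></msup><mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi></msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
```

Because the structure is explicit, a screen reader can move cell by cell through the table and element by element through the equation, as in the demos above.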
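The bar-to-pitch idea behind sonification can be sketched in a few lines of code. This is a minimal illustration of the general principle only, not how any particular tool implements it; the frequency range and the case counts are arbitrary assumptions.

```python
def value_to_frequency(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Map a data value linearly onto a pitch range in Hz.

    Taller bars get higher pitches. The two-octave range
    (220-880 Hz) is an arbitrary, illustrative choice.
    """
    if vmax == vmin:
        return f_low
    t = (value - vmin) / (vmax - vmin)  # normalize to 0..1
    return f_low + t * (f_high - f_low)

# Hypothetical daily counts for an epidemic curve.
cases = [5, 20, 80, 140, 90, 30]
tones = [value_to_frequency(c, min(cases), max(cases)) for c in cases]
```

Playing those frequencies in sequence, one short note per bar, lets a listener hear the rise and fall of the curve; navigating to a single bar would play just that bar's note.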
For my sonification examples, I'm going to demonstrate two tools, the SAS Graphics Accelerator and the Desmos graphing calculator, which use sound to convey the shapes of graphs. This is the epidemic curve for COVID-19 in chart form; it is a bar graph. When I play this, you'll hear musical notes of different pitches conveying the heights of the different bars. Let's take a listen. So that is a sonification of the epidemic curve. We can, of course, navigate to individual bars and hear the different values just by using our arrow keys.

Let's now turn to Desmos. Desmos, as I mentioned, is a graphing calculator, and it can convey line graphs using sine waves. This is the flattening-the-curve graph, also for COVID-19, and here we have a nice description of what this graph is and how to use it. But we also have sonification features, so we can take the three lines on the screen and hear their shapes. You'll hear a little popping sound; that indicates that the lines intersect. Let's take a listen. This is the flat line. This is the second line. And finally, let's take a look at the third line. You can hear that the second line hit the flat line and then went down, but the third line went up beyond the flat line.

Let's now turn our attention to tactile graphics. Tactile graphics are images that can be felt with your fingers. They are made up of raised lines and textures; some people call this 2.5D. It is vital to understand that taking a print image and directly translating it from a visual form to a tactile form doesn't make it accessible. Because our hands have different properties than our eyes, such as a difference in resolution, we need to optimize images so that they can be interpreted tactually. When we don't do this, images can be difficult or impossible to interpret. So what do we need to do to make them accessible?
Depending on the image, we may need to enlarge it or clean it up; as with text descriptions, borders and decorative aspects can be omitted. If there are labels, they may need to be converted into braille. Tactile graphics can be made from different materials and by different methods, but a popular approach is to use a printer that can produce braille and tactile graphics.

This is an image of a volcano that has been optimized for tactile readability. As you can see, there are braille labels, the shapes are clearly defined, and each component is fairly large. The line connecting a label to its component is called a guideline. You may have noticed that the image uses different colors. This is because some embossers, like those from ViewPlus, can use different colors to represent different dot heights. However, each dot height is mapped not to a specific color but to the relative light and dark areas of the image: when the image is embossed, darker areas get higher dots while lighter areas get lower dots.

Braille labels are great, but depending on the complexity of the image, they can make it much larger, taking up a lot more space. To address this, you can use abbreviations or numbers with a legend, but an emerging practice is to add audio labels. This means readers can identify what they are exploring as they touch specific parts of an image, and because there is no physical space limitation, we can even include information beyond labels, such as facts and commentary, making the image multimodal.

Next, let's look at how 3D models can help convey complex information. 3D models can be another wonderful approach because they can easily convey depth, size, shape, and the relationships of objects to each other with a high degree of detail. Tactile graphics can do some of this, but when we are trying to show height and orthogonal projections, it can be challenging for a reader to interpret.
This is because the reader can't see the optical illusion, so when they read this type of image, they have to map out each line and how it fits in 3D space. That is not an easy thing to do cognitively; it requires a high degree of patience and training. Even expert readers with this skill would agree that 3D models are a much better approach to conveying this type of information. If we use this model of a boat as an example, it is much more efficient to show a 3D model that you can wrap your hands around to get the whole shape, rather than having to interpret a tactile graphic with imaginary lines or one split into multiple perspectives.

There are many ways to create 3D models. Models can be made of wood, metal, or plastic and shaped with additive or subtractive processes using 3D printers or CNC machines. I would like to focus on 3D printing, specifically fused filament fabrication printers, as they have grown in popularity and become much more affordable. In the last few years, teachers working with blind and visually impaired students have used 3D printers to create models that assist in teaching concepts. For example, as a child is learning how a 3D object translates into 2D, a 3D model like this frog can facilitate their understanding when paired with a tactile graphic.

So what is 3D printing? 3D printing is typically used for prototyping and is commonly found in makerspaces and in some homes. It involves building objects and parts with a 3D printer that additively builds the components layer by layer. These printers have nozzles that extrude malleable materials, such as plastic from a filament spool, onto a bed. Depending on the printer, the nozzle and bed can move on the X, Y, and Z axes to allow layers to be added.

Are there constraints, and if so, how do we overcome them? Well, because of the topography, it is not as easy to add braille labels to a 3D model as it is to a tactile graphic.
We can enhance a model by attaching audio labels via embedded sensors or by linking a description with an NFC tag. Some museums have worked with organizations to create tactile models for their exhibits, where visitors can touch different parts of a model and hear information about what they are exploring. This makes it easy to convey detailed information and promotes independent exploration.

In conclusion, text descriptions are helpful but cannot convey precise spatial information. Methods such as sonification, tactile graphics, and 3D models, along with image descriptions, can fill this gap. Providing alternative ways of consuming complex images points toward designs that are universally accessible and engaging for all readers. My hope is that, no matter which methods are used, they will be included in books with plenty of images.