Right, let's get started. So, I'm Léonie Watson. I work for a company called The Paciello Group, TPG, and I also co-chair the W3C Web Platform Working Group, which looks after specifications including HTML and DOM. What I want to talk about today is inclusive design, and the open projects that make accessibility on the web possible.

In the talks just now, we were talking about usability. That's come up a few times already today. Usability is definitely part of inclusive design. In short, inclusive design is all the skills you need, design, engineering, usability, accessibility, that come together to make products, interfaces and technologies that are enjoyable and possible for people to use in different situations, no matter what their abilities or environmental context.

The bit of accessibility I'm going to talk about today is very much focused on the engineering, and on the stack that comes together to enable someone like me, who is blind, to use the web. That stack is: an assistive technology called a screen reader; the code that we write to create the interfaces I want to be able to use; the browser that I choose to look at those things in; and lastly, accessibility APIs, which hook all three of those things together to make it all work. Mostly.

So the first project is NVDA, Non-Visual Desktop Access. It's a project developed on GitHub, led by a couple of guys who are themselves blind, and they have created this screen reader, which is of course freely available, as it would be as an open project.
A screen reader, if you're not familiar with it, is a piece of software that converts pretty much everything you see on screen into text, and then spits that text out either as synthetic speech or as refreshable braille. What's really remarkable about the NVDA project is that until it came along, about ten years ago, the average cost of a screen reader for a Windows user was somewhere in the region of €600 to €800 or more: seriously expensive pieces of kit. NVDA came along and completely changed that. Other screen readers have since appeared as integrated parts of both Windows and Apple products, but this is still the only real open source screen reader of any particular note.

Now, this is where we get to the "never work with children, animals, or screen readers" moment. I wanted to play a screen reader demo, which of course requires some sound, but we don't have any speakers. So I've turned my volume up and I'm going to hope that you can hear it, but listen up, because it's going to be tricky. Can you hear that? What it said, very very quietly, as the highlighted headings showed up on the screen, was each of those headings: the screen reader was reading them out. A screen reader is very good at reading all kinds of different content. In this particular case, it was looking at the heading tags in the HTML and announcing the text inside them. These headings happen to be links as well, because it's a pretty standard blog format, and the screen reader, if you could hear it, was also announcing the fact that each one was a link.

Another quick demo: screen readers, as well as navigating between different chunks and elements of HTML, can just read content straight through. So screen readers are very versatile. Pretty much anything you can see on screen, if it has a reasonable degree of accessibility about it, a screen reader will be able to identify and, in my case, translate into that synthetic speech. It's not the most beautiful sounding speech.
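Incidentally, the blog page in the headings demo was built from ordinary markup, something along these lines (a sketch; the post titles and URLs are invented):

```html
<!-- Each post title is a heading that is also a link; a screen
     reader announces the heading level, the link role, and the
     text inside the tag. -->
<h2><a href="/why-accessibility-matters">Why accessibility matters</a></h2>
<h2><a href="/screen-readers-101">Screen readers 101</a></h2>
```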
Some text-to-speech engines are much more human-like, but they tend to be a bit slower, so if you're an impatient user like me, you tend to stick to one of the voices that sounds a bit more robotic, because it's a bit quicker off the mark.

The next project I want to tell you about is HTML, which probably doesn't fall into the classic definition of an open source project. It's developed by the W3C, but the W3C has changed a lot in recent years, and it's now possible for anyone who's interested and willing to contribute to do so, on any W3C specification. You'll find the HTML spec on GitHub, and you can do all the things you'd expect to be able to do with an open project: you can file issues, you can suggest new features, and you can even contribute actual patches and PRs back to the core HTML spec. The only condition is that, if you're not a member of the W3C in any capacity, we just ask you to please sign away your patent rights to your contribution. Something that's very dear to all W3C projects is that we ensure all the technologies are available patent-free and royalty-free, for anybody who wants to use them, in perpetuity. Beyond that, the more the merrier; we want help from lots of different developers.

HTML is important in the accessibility stack because it's where screen readers get a lot of the information you just heard being spoken. Most HTML elements have implicit roles, so if you use something like a link, you'll find that the screen reader can identify it automatically, just from the anchor tag in the HTML. If you were to inspect the accessibility information about a link, you'd discover it has a role of link. In this case, it also has an accessible name, which is the link text; it's a link off to my blog. If you delved in a bit deeper, you'd also find that if you'd previously visited that link, it would have a visited state. If we were to look at an image, we'd find the same kind of thing.
In this case, we've got an image with a source file, and it has an alt attribute on it that describes what the image is. Again, if we looked at this in the accessibility sense, we'd discover it has a role of image, and it has an accessible name, which is how a screen reader identifies it; in this case, the name comes from the alt attribute. So a screen reader would say something like "image, bottle of Tumuca's tequila".

Something similar happens when we have interactive elements like inputs. In this case, this particular form field has a role of checkbox, and it gets its accessible name this time from the label element that's associated with the field. All this says to the screen reader is, basically: this is a checkbox, tell the user that, and read the label text as the accessible name, so the user knows what the checkbox is all about. All that accessibility comes for free from the HTML.

My next project is Firefox. I'm pretty sure this one, too, is going to be known to all of you. It's a browser, of course, that's developed in the open, and of all the open source browser projects, this one probably does the most hard work for accessibility. The people on the Firefox accessibility team are absolutely brilliant, Marco Zehe and David Bolter in particular, and everyone who works with them. But all the contributions that come in, whether it's issue filing or actual patches and fixes, really make a difference. Firefox is a terrific browser to use, especially with the NVDA screen reader.

So what do browsers do in terms of accessibility? Well, they give you focusability for free. If you're dealing with interactive elements like links or buttons or form fields, the browser will make sure that if you're a keyboard user, i.e. you don't have a mouse, you can focus on those things. As developers and engineers, we don't need to do any of the hard work, because the browser is just going to do it for us, for free.
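Pulling those examples together, the markup behind them might look something like this (a sketch; the URL, file name and text are placeholders, not the ones from the slides):

```html
<!-- Link: role "link", accessible name from the link text.
     The browser makes it keyboard-focusable automatically. -->
<a href="https://example.com/blog">My blog</a>

<!-- Image: role "image", accessible name from the alt attribute -->
<img src="tequila.jpg" alt="Bottle of tequila">

<!-- Checkbox: role "checkbox", accessible name from the
     associated label; also keyboard-focusable for free -->
<input type="checkbox" id="subscribe">
<label for="subscribe">Subscribe to the newsletter</label>
```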
You also get expected interactions. If you're a keyboard user and you focus on a link, your expectation is that you can activate it using the Enter key. If you find a button, your expectation is that you can hit it with the Enter key or the space bar. The browser provides all of that for free. So, again, coming back to using HTML as it was intended: you get a whole bunch of stuff, courtesy of the browser doing the hard work.

So if we look here, we've just got a piece of straightforward HTML code: just a checkbox, nothing particularly remarkable about it at all. What happens in the browser is that it creates the DOM. You all know about that, I'm sure. What you might not know is that the browser also creates something called the accessibility tree. As the document is loaded, the browser simultaneously creates the DOM and the accessibility tree. They are both hierarchical tree structures of the document in question, but the accessibility tree contains just the accessibility information. All the stuff we saw earlier, the roles and names and states, is information that the browser makes available in the accessibility tree.

What happens is that when the DOM gets updated, perhaps in response to a script or a user interaction, it triggers the accessibility tree to be updated too. The browser then fires a change event, and an assistive technology like NVDA listens out for that. When it detects the change event, it goes off and looks at the accessibility tree to find out what's changed, and makes that information available to the user.

A good example is a notification that gets added to a page. Visually, it's really obvious: a bit of content just appears, probably slap bang in the middle of the page. But for me, who can't see it, I might never know that that piece of information has appeared on the page.
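In practice, the usual way to make a notification like that available to a screen reader today is an ARIA live region, along these lines (a sketch; the id and message text are made up):

```html
<!-- Content inserted into a live region is announced by
     screen readers without the user having to go and find it. -->
<div aria-live="polite" id="status"></div>

<script>
  // Updating the DOM here updates the accessibility tree;
  // the browser fires a change event, and a screen reader
  // listening for it speaks the new content.
  document.getElementById('status').textContent =
    'Your changes have been saved.';
</script>
```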
This cascade of changes through the accessibility tree, and the way the screen reader deals with them, means it's possible for my screen reader to tell me that that piece of content has appeared.

The last piece I want to talk about is accessibility APIs, and in particular one that is very much in incubation at the moment. It isn't actually a thing yet, but it's well on its way, and it's called the Accessibility Object Model. We have a whole bunch of accessibility APIs already, available on different platforms: on Linux, on Windows, on all the Apple platforms, and on Android. These APIs are how screen readers currently get at all that information in the accessibility tree; they query the accessibility tree using one or more of them. And these APIs exist at the platform level. So if you have a platform control like this checkbox in Windows (I know, I probably shouldn't have Windows at a conference like this, so you can take me out and shoot me later), you'll find that, like the HTML we were looking at earlier, it has a role of checkbox, an accessible name, and all the bits of information we're coming to expect. We can then recreate exactly the same thing in HTML, and the information available to a screen reader through the same API is absolutely identical. So a checkbox is a checkbox is a checkbox, as far as a screen reader user is concerned.

If you're curious, all of these relationships between the platform-level controls and the ones that appear in HTML are documented in another W3C document, so if you want to know the roles and mappings for the different HTML elements, that's the place to go.

But there's a catch. The problem at the moment is that none of those APIs can be used by us engineers and developers. They are exclusively for use by screen readers.
The reason I like the idea of the AOM, the Accessibility Object Model, is that it's a JavaScript API, and it's setting out to do some really interesting things. It's going to give us, as developers, access to the accessibility tree in the browser. So we, like screen readers and other assistive technologies, will be able to query the accessibility tree and find out what accessibility information is available about existing nodes. More than that, we're actually going to be able to create virtual nodes in the accessibility tree. So if there's something in the DOM that isn't in itself particularly accessible, we can make it more accessible with scripting, by updating the information that's available about that same thing in the accessibility tree. It's going to be an extremely powerful next stage, if you like, for accessibility.

So that's really your open accessibility stack. You need a screen reader, or another assistive technology if you're someone like me, to be able to get to the web page and the browser in the first place. We need people using good technologies like HTML that have a lot of built-in accessibility; you can fix accessibility if you choose not to use HTML in its native sense, but it's hard work for you and hard work accessibility-wise too. Use good browsers like Firefox that really do champion accessibility and support a great many features. And lastly, keep an eye on the AOM, because it's really going to be a game changer, particularly in the way we code complex web applications and interfaces over the next few years. And that's me done. Thank you very much.

APPLAUSE

I have no idea if I have time for questions. Somebody will... Thank you. Oh, very cool. I wasn't sure. You mean between NVDA and the ones that you have to pay for? Or... No. Very little difference at all, which is the other remarkable thing about NVDA.
So you compare two guys, plus an open community of perhaps 20 regular contributors working with them, to some of the companies that make the proprietary screen readers, Microsoft, Apple, Freedom Scientific, who have all this income coming in, and actually, feature for feature, there is very little I could really point to as a difference between NVDA and any of them.

[Audience] To implement accessibility, we read the specs, and we just had a single blind user, and he had problems in very unexpected places. So our question is: is there some kind of community of disabled and blind people who are willing to help projects that really have no experience in this stuff?

Yes, there is a Slack channel called A11Y Slackers. If you're not familiar with it, "a11y" is a bit like "i18n": the 11 is the number of letters between the A and the Y of "accessibility". There you'll find 200 or 300 people, some of whom have disabilities like myself, but all of whom work in accessibility or have some interest in it. That's a really good place to ask questions, get technical solutions, and have people test things out if they've got a few minutes to spare. If you're looking for more of a mailing list option, the W3C has WAI-IG, which is an email forum with about 2,000 people on it, from all different backgrounds, from academia through engineering to people with disabilities, and multinational as well. The other email-based forum that's a very good place to get help and ask questions is called WebAIM, W-E-B-A-I-M, and that's another free email forum. That's a good place to go. Thank you very much. No worries.

Right. Yes, so most browsers on most platforms now have tools that you can use to inspect the accessibility tree: Chrome, Firefox, some of the unmentionable Windows ones too. So those are tools you can use to look at the accessibility tree and compare it to the DOM.
I'm not sure if this is also what you meant, but Chrome as a browser also has an extension called ChromeVox, which is another type of screen reader. They've actually just released an update, I saw some tweets going past this morning, that makes it very much more capable than it used to be, which is terrific. The only problem is that, because it only works in the browser, you don't find many blind people using it: we still need a screen reader to turn on the computer, open the browser, and get to the web page. So a screen reader that only exists in the browser covers just a tiny part of what we need.

Absolutely. On Linux, as you said, you've got Orca, and you've got ChromeVox if you just want to test in the browser. Windows has a built-in screen reader called Narrator that you can just turn on: Control+Windows+Enter toggles it on and off. On iOS devices, you can triple-click the Home button to turn the screen reader there on or off. On a Mac, it used to be Command+F5 until they did that thing with the Touch Bar recently, and I have no idea what it is now, I'm sorry. On Android, there's a screen reader called TalkBack that you can enable as well. So pretty much whichever platform you've chosen for development, or whatever you're asked to develop on in your work environment, you should be able to find a free screen reader that you can turn on and probably get really lost using. But do have a go; it's really important, you're right. Thanks very much. Thanks, guys.