Thank you very much. It's great to be here this evening. My name is Léonie Watson. I'm not a developer, and I'm definitely not an Ember developer. What I am is an accessibility engineer. That means I spend a lot of time pulling apart code to find out why it doesn't work in the browser or with assistive technologies like screen readers, which translate on-screen content into synthetic speech. Once I've hopefully figured out what's going wrong, I try to find solutions, file bugs, and work with various people to find ways to make things better.

What I'd like to talk to you about today is accessibility mechanics. A lot of accessibility is soft science: it's about user interaction, and it's very subjective. But a huge amount of accessibility is actually pretty cold-blooded, code-level stuff. I'd like to explain how a lot of that works, and how a lot of accessibility happens without you really needing to do anything at all. At the end, we'll look at some Ember examples that hopefully demonstrate some of the ways you can build and test for better accessibility.

The first thing I want to touch on is HTML. It will be really familiar to all of you, but here's a question: how much do you know about what HTML is really doing for you when you use it the way it was originally intended to be used? When you use HTML, pretty much every element you choose has a role that defines its purpose. Paragraphs, tables, whatever element you choose, it has a role that tells us what that piece of content is supposed to do. For example, the image element, IMG, has a role of graphic or image. It's pretty evident visually when a graphic is displayed on screen, but if you use a screen reader because you can't see what's on screen, then the role is what tells a visually impaired person that they're dealing with some graphical content in their web page or web application.
We have elements in HTML like main, which defines the main content area of the page. From a visual or structural point of view, it's not that important to use this particular element; you could just as easily use a div. But from an accessibility point of view, knowing that this element represents the main content area, and as a blind computer user having that information communicated to you, is incredibly important, because it's the equivalent of information you can see just by looking at the page and assessing its overall layout and structure.

There are a couple of exceptions to the rule, of course. Div and span elements don't really have particularly strong roles. If you were to put some content into a span or a div, it would pretty much just end up being plain text. No matter how much scripting and styling you throw at some divs and spans, to a screen reader looking at the code it's still just going to be some pretty neutral content. That's a key theme in accessibility, and it will come back again when you think about this in terms of Ember and other JavaScript frameworks as well.

When you use HTML, you'll find that a lot of elements have a name, an accessible name, like the names of people. An accessible name is what differentiates one thing from another of its kind in a web application interface. For example, if we use a link, an anchor element, the content of the link becomes its accessible name: the text, or the graphic with an alt text, in the middle of it. Visually, that's how you tell one link apart from another on a page. A screen reader will use that same content to do exactly the same job for someone who isn't able to see. If you're a speech recognition user and you use voice input, then you can use that accessible name to target the link in order to click on it. Accessible names can also come from attributes.
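A minimal sketch of those two ideas in markup, assuming a hypothetical page: the main element exposes its role to assistive technology, and the link's text content becomes its accessible name.

```html
<!-- <main> tells assistive technology this is the main content area;
     the link's text, "accessibility guide", is its accessible name. -->
<main>
  <p>Read the <a href="/guide">accessibility guide</a>.</p>
</main>
```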
If you have an image on the page and you use the alt attribute to give it a sensible text equivalent, that text equivalent becomes the accessible name, and a screen reader user knows what differentiates this image from another. In this case we're imagining a bottle of tequila, and the alt text is Chimooka's tequila. Again, if you're using other forms of assistive technology, you can identify that image using this information.

You can also get an accessible name by association with another element. If you use a pretty standard text input and associate a label element with it using the for and id attribute pairing, the text content of the label becomes the accessible name for the form field. Again, if you're using a speech recognition tool, you could just say "focus on the username field": call it by name, just like we people do.

You also get information about state when you use HTML. An obvious one is the required attribute that you can put onto form fields. Visually, it changes the UI to indicate that the field is necessary before the form can be submitted. But again, programmatically that information is available to screen readers, which can communicate it to someone who can't see the changes to the visual UI.

You also get keyboard focus thrown in for free when you use HTML. For example, if you use an anchor element, the browser automatically makes it possible to tab onto it so that you can use it. It also knows that if you hit the Enter key, you're going to activate the link. You don't have to do anything as a developer, other than use the anchor, to get that functionality in place. Similarly, if you use a button element, it can be focused with the keyboard; the browser takes care of that for you. In the case of buttons, the expected keyboard interaction is the Space or Enter keys. Again, you get that for free from the browser when you just use the right HTML element for the job. So, how does this all hang together?
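The naming and state examples just described, sketched in markup (the file name and field id are illustrative, not from the talk's demo):

```html
<!-- Accessible name from an attribute: the alt text. -->
<img src="tequila.jpg" alt="Chimooka's tequila">

<!-- Accessible name by association: the label's text names the field.
     The required attribute exposes state programmatically as well. -->
<label for="username">Username</label>
<input type="text" id="username" required>
```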
When you throw some code into a browser, the browser creates the Document Object Model, the DOM tree. It's a hierarchical representation of all the objects in the interface, and you can manipulate them: change them, add stuff, take stuff away. Every time a DOM is created or changed in some way, the browser also creates an accessibility tree. This is another hierarchical model of the interface, but this time it contains information specifically about the accessibility of objects within the interface. News hot off the press yesterday is that very soon Edge from Microsoft is going to radically change this model and do something quite revolutionary with it, but for now, things still work pretty much on this two-tier basis.

What happens is that on every platform you have an accessibility API: MSAA or UI Automation on Windows, the NSAccessibility protocol on OS X, and others on Linux, iOS, and Android. These APIs exist at the operating system level, the platform level, so they're inherent in pretty much every kind of interface you get on a system, be it the operating system itself, an application within it, or web content rendered within a browser application.

If you look at a platform control, in this case a checkbox taken from the Windows 10 operating system, and you query it using one of the platform accessibility APIs, the information you hear about this checkbox is: its role is checkbox, its name is Bold because that's the text label associated with it, and its state is focused, focusable, and checked. The interesting thing is that if you were to create exactly the same checkbox using an HTML input with a label, you'd get exactly the same information if you queried it using the same platform accessibility API. The relationships all the way through are mapped by the W3C. If you're curious, search for "accessibility API mappings" and you'll find them for Core and HTML, with a new one on the way for SVG as well.
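The HTML equivalent of that checkbox, as described in the talk: queried through a platform accessibility API, it reports the same role, name, and states as the native Windows control.

```html
<!-- Reports role "checkbox", name "Bold", and states
     focusable and checked, just like the platform control. -->
<input type="checkbox" id="bold" checked>
<label for="bold">Bold</label>
```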
ARIA is an interesting technology because it lets you manipulate the information in the browser's accessibility tree. It's a bunch of attributes that you can add to your HTML or SVG to change the accessibility information the browser makes available, because as developers you'll know that it isn't always possible to use the right HTML for the job. Sometimes we use frameworks that don't necessarily do that by default. Sometimes we want to recreate a widget that doesn't exist in HTML, like a set of tabs, for example. And increasingly we're getting into the territory of web components, where all we can do is create widgets from primitives like div and span, and where no accessibility natively exists within the components, so we have to build it all in ourselves.

What ARIA lets you do is use the role attribute to manually define the role of an element that you're using. There are more than 30 roles now in ARIA 1.1: roles for things like sliders, dialogues, checkboxes, radio buttons, a whole bunch of things. Pretty much every standard application interface control you can think of has a role, either in existence or on its way in future versions of the spec.

There are also different ways that you can provide accessible names for elements, and accessible descriptions too. The aria-label and aria-labelledby attributes let you provide an accessible name for something. aria-label takes a string value; aria-labelledby takes an idref that references some text that exists somewhere else in the application interface. If you need to provide some more information, aria-describedby is another attribute, and it too takes an idref of information somewhere else in the interface; it can be used to provide additional hints or extra descriptions if something needs them.

Lastly, ARIA lets you explicitly inform the API about states. There are twenty-odd attributes that let you do this: aria-pressed, aria-expanded, aria-checked.
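A small sketch pulling those attributes together, with made-up ids and labels: role, name, description, and state supplied manually on a primitive element.

```html
<!-- role gives the div button semantics; aria-label names it;
     aria-describedby points at extra hint text; aria-pressed
     exposes its toggle state to the accessibility tree. -->
<div role="button" tabindex="0" aria-pressed="false"
     aria-label="Bold" aria-describedby="bold-hint">B</div>
<p id="bold-hint">Toggles bold formatting for the selected text.</p>
```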
Most of the common states that you can think of in application interfaces, you can apply manually at an accessibility level with ARIA.

Now, let's take a bit of a look at Ember. I will stress again, I'm not an Ember developer, but there are some really good examples out there by some talented people rather more clever than myself. There's a GitHub project that's looking at the accessibility of Ember components. If you're familiar with Ember.Component, it's pretty similar functionally to Web Components and the Custom Elements specification, although I believe at the moment it doesn't really utilise the shadow DOM that Web Components do. But you can use Ember.Component to create, effectively, a custom element of your own.

The example on screen creates a tequila button. It does something pretty simple when it's activated: it just flashes up an alert on the screen. There's really nothing much in here apart from basic Ember.Component code. What we get on screen if we render this button is a tequila-button element, and it has a tabindex of 1, because that's what's been put in there by default. That's pretty much all there is to it, but there are some fundamental problems with this code.

(Screen reader demo.) What the screen reader announced was: "Tab. Reposado tequila. Good." The "tab" is just the echo of the key being hit, so all it really said was "Reposado tequila. Good." That's fine, it's what's represented on screen in terms of text, but there's no information there, from the screen reader user's point of view, to say that it's a button, so there's no cue that you can actually interact with this thing. If you were to go back and look at the component code, you'd also have noticed that, although click functionality was defined, no keyboard interaction was provided.
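What that unmodified component renders might look something like this (the element name and label are taken from the demo; the exact markup is an assumption):

```html
<!-- No role, no button semantics, click handling only,
     and a positive tabindex that jumps the tab order. -->
<tequila-button tabindex="1">Reposado tequila. Good.</tequila-button>
```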
The other catch is that, although tabindex was used to make the thing focusable with a keyboard, it had a positive integer value of 1. That means that wherever this button sat in the interface, it would be the first thing to receive tab focus. If it happened to be some way down through the interface, and you started at the top of the page, hit Tab, and the first thing you got to was miles away, that would be really quite disorienting if you were a keyboard user.

The basic problem we've got here is that, as far as the accessibility APIs are concerned, the tequila button doesn't exist. In other words, it's just like using a span or a div: it has no role, it has no particular accessible name, and it gets no keyboard focus or interaction thrown in for free. It's pretty inert in every respect as far as accessibility goes.

The good news is, we can do something about that. We can amend this Ember component so that it does a whole bunch of things. We're going to use it to apply the ARIA role of button, so from a screen reader user's point of view, and at the API level, this will now be a button. It won't be a button in any other sense; in the DOM this will still be our tequila-button element, but at the accessibility level, screen readers and other ATs will know that it's a button. We're going to use the aria-label attribute to make sure the button has a proper label. The label is "Reposado tequila. Good." The text content was working pretty well in the previous example, but this is just belt and braces to really make sure it's available to assistive technologies. We're going to change the tabindex from 1 to 0. Tabindex with a value of 0 just places this button into the tab sequence of the page based on its location in the DOM. It's a very natural way to add keyboard focus to something.
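A framework-free sketch of the fixes described here. The function names are illustrative, not the actual component code; the real component would apply these attributes and handlers through Ember.Component's APIs.

```javascript
// The attributes the amended component applies to its element.
function accessibleButtonAttrs(label) {
  return {
    role: 'button',      // exposed to the accessibility tree as a button
    'aria-label': label, // belt-and-braces accessible name
    tabindex: '0',       // in the tab order, at its natural DOM position
  };
}

// Native buttons activate on Enter or Space; mimic that for the
// custom element ('Spacebar' covers older browsers' key value).
function shouldActivate(key) {
  return key === 'Enter' || key === ' ' || key === 'Spacebar';
}
```

In the component, a keydown handler would then trigger the same action as a click whenever `shouldActivate(event.key)` is true.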
Lastly, we're going to add some keyboard interaction, which mimics what I mentioned earlier in the presentation: you can activate this tequila button using either the Space or the Enter key. Coupled with the visual styling, what we hopefully end up with now is a button that has all of these pieces mapped into it: the role of button, the aria-label to provide the name, and behind it all the scripting that should make this functional with a keyboard and with a screen reader.

(Screen reader demo.) This time the screen reader announced: "Reposado tequila. Good. Button." and, on activation, "Dialogue. Yes. OK button." Now we have a whole bunch more information. We know it's a button, so a user who isn't able to see the visual styling that makes it look like a button knows that it's a button, and can therefore expect some kind of interaction. Better still, if you use it with the Enter or the Space key, you actually get something working: a dialogue in this case, which you can then shut down. And the last piece of the puzzle is that focus returns to the button that triggered the alert in the first place, so you don't have to go searching to find your place on the page again.

So what can you do to get some of your own Ember stuff up to scratch in terms of accessibility? Mary was telling me earlier that there's a growing, concerted effort to look at accessibility within Ember and make it more integrated, and they're looking at a whole bunch of tools, a couple of which I'm going to mention now. The first is a set of test suites available on GitHub. They're really easy to install; you can just use the Ember command line to install the a11y test suites. If you're not familiar with the term, a11y is a truncated version of the word accessibility.
There are 11 characters between the a and the y, much like i18n for internationalisation. It came about because trying to fit "accessibility" into a tweet is next to impossible if you want to say anything useful about it. So you can run a full test suite this way. You can just put it inside an assertion and run the full tests; I think there are maybe 10 or 11 different tests available at the moment. Run them inside the assertion and it will punch out information to let you know whether each one has passed or failed. Occasionally it will throw stuff out to the console if you need it, but it's pretty basic. It has some really useful tests in there, though.

You can also run individual tests. For example, the individual test here asks whether all actionable elements are focusable, coming back to that keyboard focus I was talking about earlier. To do something, it's really important that you can actually focus on it first, before you can even hope to use it with a keyboard, and that's what this particular test checks for. There are other tests that look for things like alternative descriptions or accessible names on images, the same for labels on forms, one that checks colour contrast, some more that look at keyboard interaction, and another that looks at ARIA roles. So it's not a comprehensive test suite, but there's certainly some really good stuff in there to help you narrow down some key accessibility issues.

There's another API called Tenon. It's developed by my friend and colleague Karl Groves at The Paciello Group, where I work. It's an API that has a web application interface, so you can go and throw code or URLs at it directly. But as an API you can also integrate it into your development and build processes, which to my mind is really where it comes into its own. You can install a Tenon module; there's a link through when I share the slides. There are modules available for Grunt, for Gulp, and a whole bunch of other things.
There's also a Node module that you can use as the starting point for building your own plug-ins. There isn't presently an Ember-specific one, but it wouldn't take too much work to turn the base Node module into an Ember plug-in. You need to give it two required parameters: your API key and the URL of the source to be tested. Tenon is a paid-for product, but it's incredibly reasonably priced. Unlike many of the accessibility tools out there, they price things for individuals as well as large companies, so it's not pitched at an unreasonable level.

It works best taking a URL as the source to be tested. You can throw code at it directly, but what Tenon does, unlike many other accessibility testing tools, is actually use a headless browser, so it absolutely replicates the experience of the DOM, the accessibility tree, and the assistive technology. Many accessibility tools will just scrape the DOM without really going anywhere near the accessibility tree, which is why you quite often get misreadings or unreliable results.

You can throw in some optional parameters as well. You can ask for a certainty level. With accessibility, as I said, things can sometimes seem to be a little bit of a grey area. Tenon returns a certainty level and you can set a threshold: only return issues with at least 75% certainty of actually being a real problem for users. You can set priorities as well. You can also choose which level of the guidelines you're testing against, the Web Content Accessibility Guidelines being the most commonly used. Interestingly, you can also set the viewport. This is really useful if you're testing responsive designs, because you can just give it screen resolutions and it'll test whichever version comes out based on the responsive breakpoints you want to test.

There's another tool I'm just going to mention very briefly, which is Ember Axe. It's an open source tool. It's a plug-in, and again an API that you can incorporate into your development cycles.
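The Tenon parameters described above might be assembled like this. This is a sketch only: the field names (key, url, certainty, level, viewPortWidth/viewPortHeight) are assumptions based on the talk's description, not verified against Tenon's documentation.

```javascript
// Build the request body for a hypothetical Tenon API call.
function tenonRequest(apiKey, pageUrl, options = {}) {
  return Object.assign({
    key: apiKey,    // required: your Tenon API key
    url: pageUrl,   // required: the source to be tested
    certainty: 75,  // only report issues at least 75% certain to be real problems
    level: 'AA',    // WCAG conformance level to test against
  }, options);      // e.g. priority, viewPortWidth, viewPortHeight
}

// Testing a responsive breakpoint by setting the viewport:
const request = tenonRequest('my-key', 'https://example.com',
                             { viewPortWidth: 320, viewPortHeight: 480 });
```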
It functions very similarly to Tenon's API, and because it's open source there isn't a price tag attached. Again, it's developed, as is Tenon, by a bunch of people who really understand accessibility; these guys have worked in accessibility for many years, and both are extremely good tools worth checking out.

The last thing you can do is actually just test it yourself. You don't need to be an accessibility expert to try and get accessibility right, far from it. So what can you do? You can just abandon your mouse or your trackpad. Go and see what you can accomplish in your Ember applications with your keyboard: what you can focus on and what you can't, what you can activate and what you can't. Then just start fixing some stuff.

If you have a Mac, hit Cmd+F5 and you'll turn on the integrated screen reader in your computer, so you'll get the synthetic speech experience. You won't be able to use it like a regular screen reader user; don't worry about that. Just play around with it and get a feel for it. Remember, the Control key will stop it talking, and Cmd+F5 will turn it off again, because I've come across lots of people who've turned it on and not been able to turn it off.

In your browser, just try zooming in on stuff, whether you're on a mobile, a tablet, or a desktop or laptop computer. Just go and find the zoom capability; Ctrl and + does it on most platforms. Whack it up to the maximum zoom level, like somebody who's partially sighted might well do, and see how your layouts respond, and again whether you can still get to all the functionality on the page.

If you just do a few things like that every time you develop something or make an update, and do your best to fix what you find, trust me, you'll be well on your way to making things a hell of a lot better in terms of accessibility. People often think that with accessibility you need to be perfect, and you really don't. Perfect is the enemy of good, whatever you're talking about.
Just try and make a couple of things better and, trust me, somebody somewhere will be very, very happy that you did. That's me done. Thank you very much.

[Q&A]

The three key ones are screen readers, screen magnifiers, and speech recognition tools, so voice input, in terms of assistive technologies. There are other variations, but by and large those three, plus just keyboard accessibility irrespective of any assistive technology, will cover most of the groundwork for you. The one thing is that ARIA at the moment is still best supported by screen readers: a little bit supported by Dragon NaturallySpeaking, a speech recognition tool, and not generally supported outside of that by other ATs, which is why the focus is mostly on screen readers through the middle chunk of them.

Q: What percentage of top websites actually uses ARIA?

Absolutely no idea, to be perfectly honest with you. The BBC, for example, uses it, Google uses it, LinkedIn, Twitter, to name a few, so I couldn't remotely give you a percentage, but basic ARIA: a good number. If you look at web applications, most of the Google application suite, Google Docs, Gmail, all the rest of it, uses ARIA. So it's pretty widely used, and growing all the time, I think.

Q: You should include this in your talk, because a pragmatic developer wants to use technologies that employers are using.

Sure, good point, thank you. Any more questions?

Q: One of my concerns as I try to make my applications more accessible is, is there ever a case where there's just too much chatter? Like you said, you use a button element, which is already self-descriptive, but then you also add the ARIA tagging for role. Does it start to become noisy when you do too much, or is that not an issue?

You should not worry about that. It can do, but that's quite a refined level of thinking, if that makes sense.
The specific example you gave, using aria-label on a button that already has a text label: there's such a thing as the accessible name and description computation, and it's like the worst game of trumps you ever came across. It's horrible. But basically, if you've got a text label on a thing, it'll use that, unless there's a title attribute, unless blah blah blah, and ARIA trumps everything, basically. So in theory, in that respect, you'd only get one label read. At a wider level, yes, too much noise can be a thing, but I wouldn't let that slow you down too much, because you can always stop a screen reader from talking if you need to. I'd say it's better to make things a little bit too noisy sometimes than to not try and have it not work at all.

Q: I was in the US last week and I bought an Amazon Echo, and I want to play around with it. I don't know if there are any, but in the voice recognition category, is there anything exciting and interesting that people could tap into?

In terms of speech recognition, the Web Speech API is pretty cool; it lets you bring speech recognition functionality into your web apps directly. I'm not suggesting you do this as a replacement for those tools themselves, but in terms of capability it's quite exciting. Microsoft also has some really good stuff coming out of its conference last month in terms of its intelligence APIs and its bots: speech capture and recognition, image identification, and a whole bunch of other stuff like that. There are a few different exciting things at the moment. How are you getting on with the Echo? I like it, but I don't know.

Q: What about accessibility at the W3C? I know that ARIA 1.1 is being built; is the main focus the areas that weren't covered well enough in 1.0, and is it being improved?

1.1 is just a point upgrade, so it's not a huge evolution.
There's an attribute called aria-current, which is going to be a way to identify the current link within a set of navigation links. At the moment that's basically done with CSS, and that's fine visually, but it's not available to screen readers, so we go through all sorts of strange methods to try and make that information available, using hidden text and other things, whereas this attribute will do it a lot more neatly. There's also an attribute for keyboard shortcuts: if you provide shortcuts, as some scripted applications like Twitter or Facebook do, you can let a screen reader know that's coming. There are a few new roles as well, but nothing really earthshaking. I think the really exciting stuff is going to come with 2.0, when we're going to look at the extensibility of ARIA. Coming back to the idea of web components and custom elements: what happens if you want to create something that doesn't exist? It won't be possible to choose from a predefined set of ARIA roles, because you'll be dreaming up something new for the first time. So how do we make ARIA extensible, so it can tap into the web component space?
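The aria-current attribute mentioned above might look like this in a navigation block (the links are made up for illustration):

```html
<!-- aria-current identifies the current item within a set of links,
     replacing hidden-text workarounds. -->
<nav>
  <a href="/">Home</a>
  <a href="/talks" aria-current="page">Talks</a>
  <a href="/contact">Contact</a>
</nav>
```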
SVG 2.0 is going to have a whole bunch of ARIA capability in it as well. You can use some ARIA with SVG at the moment, although it will chuck a wobbly if you try using a validator or conformance checker on it. But there's a whole bunch of stuff that's just landed in WebKit behind a flag that will make a lot more of the semantic information in SVG available using ARIA. In SVG it will do mapping: the svg element itself will map to a group role, because it contains a whole bunch of stuff; graphics elements like rect and circle will map to an image role, because that's essentially what they are; and the title and desc elements already map pretty well to accessible name and accessible description in the APIs. So it's pretty solid already, but there are a few bits and pieces like that coming along. One of the big things for SVG is charts: being able to actually interpret a chart, and get information spat out to the user that tells them what's being represented, so libraries will be able to produce SVG that's explorable and readable without following every last line of the SVG.

Q: If you are implementing a component from a visual design, would you say it's advisable to look for a UI pattern from the operating system that it maps most closely onto, so that you can use the appropriate roles?

Yes, it's not a bad way to do it. If you look up the mapping guides, that's where that information comes in really handy. If you want to create, to use a simple example, a checkbox like we had on screen, look at the platform controls and see how they're represented; those are generally the roles to apply.

[From the audience] If you don't mind, I'd like to just give everyone an idea of what's going on in Ember regarding accessibility, and in particular some of the add-ons that exist currently. There is an org on GitHub that appeared in the last couple of weeks called Ember A11y, and this will be the consolidation of all the experimentation that's gone on so far: all the things we can learn from other communities, from the standards bodies, from the vendors. Right now there is one add-on, called ember-a11y, and it's going to pack in a bunch of tools that are particularly tuned to the mechanics of single-page applications, and Ember applications in particular. Right now it gives you a focusing outlet. If you use an Ember app today, and you have a nav which brings new content into an outlet, and you click a link in your nav, a screen reader will have no idea that new content has appeared and won't be able to announce it to the user. What this does, the focusing outlet, is bring the browser's focus to the first element in that outlet when new content appears, so that clicking the link flows naturally onto the content it introduced. It also handles things like error states appearing in outlets, and loading states appearing in outlets. The intention is that this add-on becomes part of the default blueprint for Ember CLI eventually, once it's been proven out by the community, as we usually do, and also that we connect up our efforts with the efforts of people in the Angular community and the React community. So go check it out on GitHub. There is a topic-a11y channel on the Ember Slack; go have a look in there as well, there are some great people in there with all sorts of different experiences in the area. So I think another round of applause for Léonie. Thank you.