Hey yo, this is Melanie Sumner. I'm a disabled military veteran turned software engineer and I'm delighted to speak with you all today. I wish I could be in person, but I'm looking forward to next year. I'm a member of the Ember Framework Core team and the WAI-ARIA working group, and I do these things as part of my job as a senior software engineer at LinkedIn, where I work on digital accessibility products at scale. As many of you will be familiar with my usual style of talk, I want to let you know that this talk is a bit different. In my other talks, I've spoken about what accessibility is and the things that we can do now, and if you're interested or new to accessibility and want to learn more, you can find my previous talks by visiting my speaking profile. But this talk is different because it's my vision for how I want the world to work and how I think we can get there. And just in case this is the first time you're hearing about accessibility, let's run through a few common terms, since over the next couple of days you're likely to hear other people talking about accessibility too. When we say accessibility, we're specifically focused on making inclusive websites so people with disabilities can use what we create. You might also hear it referred to as A11y, an abbreviation where the 11 stands for the eleven letters between the A and the Y in the word accessibility. I'll also refer to the Web Content Accessibility Guidelines, or WCAG for short, the standard by which our applications are evaluated through the success criteria it provides. And finally, assistive technology includes hardware or software that people with disabilities use to access the web or mobile devices. For example, a person who is blind might use a screen reader to browse and use websites through desktop browsers or mobile devices. So today, I'm going to share with you my vision, a vision I'm calling continuous accessibility. Let's jump right in.
I want you to take a deep breath and let yourself imagine. Imagine having greater confidence in the accessibility of your code. I want you to imagine more easily delivering accessible experiences at scale. And finally, I want you to imagine not being afraid of losing customers or facing expensive lawsuits due to accessibility blockers on your site. It's a pretty great feeling, isn't it? In software engineering today, we have continuous delivery, and we also have continuous integration. And these two things have drastically improved our lives, unless, of course, you're the engineer working on bugs for these things, in which case you probably swore a lot, and I'm sorry, but we really appreciate your sacrifice. But now I want us to turn our thoughts to this new idea, continuous accessibility. How will we get there? How will we deliberately achieve this vision? As I like to say, we have to be on purpose. The well-established principles of continuous software engineering remind us to build quality in, to work in small batches, to let computers do the tedious stuff so humans can do the hard stuff, to always be improving, and to remember that everyone is responsible for doing these things. In this talk, you will learn the essential points of strategy for continuous accessibility. So, what does that strategy look like? Well, this three-part strategy includes a plan for the code we already have, a plan for the code we'll have in the future, and a plan for how to measure our work. And we'll discuss why that is so important. But before we dive into that, I have some exciting news to share with you: we have already started on this continuous accessibility journey with the newest version of Ember Template Lint. As of the latest version, no longer in beta, we deliver template linting in a way that ensures the code we already have has a path to becoming more accessible, and that our future code will be more accessible by default.
And that's really exciting to me. At LinkedIn, we manually rolled out this feature to some of our apps already in production. We also took what we learned through manual implementation and built automated tools to do this work in the future. And we are also building trackers to ensure that the configurations remain correct over the lifetime of the codebase. It's pretty cool. Any plans we make to improve accessibility in our products should include plans for the code we already have. We need to think about the age of our codebase. We need to think about how we plan for library upgrades, because it's cool that we improved accessibility in that popular UI addon, but what does this mean for existing users? What does the upgrade path look like? We need to think about how we will deliver new features or developer tools. The code that already exists will not be able to consume the latest and greatest upgrades unless we've done this thoughtfully and purposefully. And it might take some trial and error. This is especially true for apps at scale. It can be tricky to update dependencies, especially when they include breaking changes. Depending on the size of your codebase, taking in a new version of a dependency can mean extra developer coordination about which new features to use. And there are product and business priorities to consider. Features that we create for accessibility automation should make it simpler to deliver improved products and tooling to support accessibility as a facet of our craft. For the code we have now, we can wait for our users to report issues to us, but at the risk of losing customers. We can also rely on audits to tell us where the issues are. Ideally, we're automating our code linting and tests and periodically checking the code we already have. This gives us higher confidence that as technology progresses, our sites will still work as expected. Of course, there's a lot of improvement we can do in this area.
But this approach allows automation to be our first line of defense, backed up by accessibility audits and user reports, because users will still let us know when they run into blockers. We need to plan for the code we will write in the future, or we will be much more likely to repeat the mistakes of our past. Who knows what comes next? The way we wrote code yesterday is different from the way we'll write code tomorrow. What does our strategy need to include to prepare us for these unknown unknowns? Of course, we'll continue to follow the principles of continuous software engineering. One of these, letting computers perform repetitive tasks so people can solve hard problems, is what I want to focus on. This is something that we have improved and will continue to improve on. Let's take a look at the way automation and accessibility can help. Today, developers have access to automated testing through the axe-core library, which can be integrated into the CI and CD mechanisms we already use to test and deliver our code. We have a lot of options, too. We can use Lighthouse, or Microsoft's Accessibility Insights, or the ember-a11y-testing addon, or even the axe-core library itself. For static analysis, we can use ESLint, although it doesn't have accessibility support built in, and we can also use Ember Template Lint. Developers can get linting feedback while they're writing their code. It also supports a plugin system that allows teams to define and use custom rules. Some rules have automatic fixes built right in, and running the --fix flag will clean up all of the auto-fixable issues. Finally, it supports sharing configs across projects, so you can ensure that all of your teams are on the same page. Before version 3.0 of Ember Template Lint, the --print-pending flag gave us a way to roll out new rules.
You could take the list of current errors and tell the linter to ignore everything on it, but it relied entirely on teams being proactive and treating that list as a burn-down list, not an ignore-forever list. Of course, teams understood that we didn't really want to ignore these forever, but sometimes it's easy to forget something if it doesn't have a deadline. One approach was to turn a rule off entirely until all existing instances of that error were fixed. Of course, this led to a never-ending cycle of trying to turn a rule on, finding new bugs, fixing those bugs, trying to turn the rule on again, and starting all over. This is because by the time you fixed the errors, new code would appear with those same errors. At scale, this is time-consuming and costly and can really disincentivize teams from keeping up with best practices. So we had issues that could either disappear into the void forever or get stuck in a never-ending cycle of almost ready to turn on, and we needed something different. Enter the todo. In the latest versions of Ember Template Lint, we now have the todo. Instead of only having the option to set a rule to a warning or an error, we can now instruct the linter to find all existing instances of a rule being broken and create a todo for each one. After a period of time, the todo turns into a warning, and then into an error. Look at this from a different perspective, from the perspective of the code we already have and the code we will create. When a new lint rule is released, existing code will have todos created, and teams can plan to fix them incrementally. New code is linted as it is written, so no todos have to be created and no extra planning is needed. How great is that? And this feature was intended to mirror the QUnit todo feature, which gives better consistency for developers who already use that. You can try this out yourself today.
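To make the decay idea concrete, here is a toy model of it in JavaScript. The function name, parameters, and day thresholds are all hypothetical, invented for illustration; this is not ember-template-lint's actual implementation or API.

```javascript
// Toy model of todo decay: a lint violation starts life as a "todo",
// becomes a "warning" after daysToWarn, and an "error" after daysToError.
// Names and thresholds are illustrative, not ember-template-lint's API.
function todoSeverity(createdAt, now, daysToWarn = 30, daysToError = 60) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const ageInDays = (now - createdAt) / msPerDay;

  if (ageInDays >= daysToError) return 'error';
  if (ageInDays >= daysToWarn) return 'warning';
  return 'todo';
}

// A todo created 10 days ago is still just a todo; at 40 days it has
// decayed to a warning; at 90 days it is a full error.
```

The point of the deadline is exactly this decay: the list cannot quietly become permanent, because time alone escalates each item.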
Once you've installed the latest version of Ember Template Lint, you can run the linter to create todos with the --update-todo flag. You can see the todo count when you run the linter with the --include-todo flag. And once an issue is fixed, you can automatically remove the todo with the --fix flag. It's so incredible that we can try this out now, and even more incredible that it's built into all new Ember apps by default. This kind of accessibility consideration is just one of the many reasons I enjoy using Ember. But let's now turn our attention to the third piece of this puzzle, the metrics, because we need a way to quantify our progress. Right now, there's no common shared standard for metrics in accessibility engineering, but my vision is to change that. Metrics play an essential part in our strategy because our goal is to demonstrate the business value of accessibility and make it a default part of our engineering practice. Just in case you haven't had to work with metrics quite yet, let's review four of the commonly accepted key criteria of quality metrics. Wow, say that five times fast. Anyway, metrics must be meaningful: they must be connected to the goals and strategy of our organization. They must be controllable: if a metric is not under our control or our influence, it's not meaningful to report. They must be visible: easy to find, easy to identify, and visible to management. And metrics must be actionable: every metric that we define must have an actionable outcome. But what do we mean when we say actionable outcome? Well, we can diagnose a problem, or improve a process, or set goals, or even observe trends that inform future work. But before I get into the specific metrics I think we should be measuring, I want to remind us: metrics themselves should never become the target. We won't reach our goal if we're gaming the system. Now, we all know that there is a nearly infinite number of ways that anything can go wrong in our application.
And that's not what I mean here when I say potential violation count. I think this metric should be the baseline metric. We want to make an unknown problem a known problem, and then solve for each part. This baseline metric is the total number of individual ways an application could fail legal accessibility requirements. But how do we get that itemized number? A few things feed into it. We've got the web standards from WCAG: success criteria, known techniques, common failures. We have location-specific legal standards; for example, in the United States we have Section 508 and the Americans with Disabilities Act, and there are different standards for the European Union, Canada, and other countries around the world. We also have failures identified by audit findings, because no matter how much spec anyone has written or how many laws anyone has passed, developers will find a way to make an inaccessible interface. Of course, all of these potential violations represent a massive effort that has already been completed by user researchers. So how do we gather this information in a practical way? We do it by making an itemized list. There will, of course, be some overlap, and we should document that. But this work is worth doing because it gives us peace of mind that we really know the edges of this problem. We're turning an area of ambiguity into an area of clarity. We're making the unknown edges of this problem known, and that will give us confidence. And before you think, oh my gosh, Melanie, this is so much work, let me share with you the effort that's already begun to itemize potential failures. It's, of course, an open source project, called the A11y Automation Tracker, and it's just for developers. It intends to compile each one of these itemized details.
There's an overall details list, but there's also the ability to dive into each potential violation and see the status of how it can currently be tested, the possibilities for the future, and the relevant criteria with links to documentation. Of these potential violations, my initial analysis has indicated that about half of them are either already automated or potentially automatable. But what about the other half? Well, this means that the rest still require manual testing. And I want to emphasize this again: you still need manual testers. If you're a small company, just remember that accessibility is a journey. Do what you can now and do more as you're able. If you're a large company, I want to encourage you not to outsource this work. In my experience, a few well-trained testers who share your company's values will be much more resource-efficient than an entire army of outsourced testers. No matter what size company you are, though, I strongly encourage you to make manual testing an essential part of your accessibility strategy. Some of you who are already familiar with WCAG might be wondering, why not just use the WCAG success criteria? Well, here's why: they're generic. They cover generalities rather than specifics. For example, WCAG Success Criterion 1.3.1, Info and Relationships, is a single success criterion, but it can relate to at least 25 different failure scenarios. When I write a linting rule, it needs to cover one specific failure, even if that failure can present in different kinds of syntax. By identifying the edges of the potential violation count, we can then determine several related metrics: violations for which we can provide automated linting for static analysis, violations for which we can provide automated testing for dynamic analysis, violations that require developer-authored tests, and violations that require manual testing.
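One way to picture those related metrics: start from the itemized list of potential violations and partition it by how each one can be checked. The data shape and the category names below are invented for illustration; a real tracker would carry much richer detail per violation.

```javascript
// Partition an itemized list of potential violations by verification
// method. The 'method' values and the sample entries are hypothetical.
function countByMethod(violations) {
  const counts = {};
  for (const v of violations) {
    counts[v.method] = (counts[v.method] || 0) + 1;
  }
  return counts;
}

const potentialViolations = [
  { id: 'img-alt-missing', method: 'automated-linting' },
  { id: 'contrast-too-low', method: 'automated-testing' },
  { id: 'focus-order-logical', method: 'manual-testing' },
  { id: 'custom-widget-keyboard', method: 'developer-authored-test' },
];

const related = countByMethod(potentialViolations);
```

Once the edges of the list are known, each of these buckets becomes its own trackable number, which is what makes "about half are automatable" a measurable claim rather than a guess.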
There are also metrics that we can look at from the accessibility audits we receive. What is the total bug count? What is the bug severity count? How severe an impact do these bugs have for our users? Some are worse than others. What is the time to fix? How long does it take a team to resolve an issue? And violation frequency: how often is a particular violation occurring? So I've given us some essential metrics to track, but what sorts of things are we hoping to learn? What impact do we expect to have? We should expect, at a minimum, to see some trends as a result of the actions we take. We can expect, for example, to see an increase in the number of automated linting rules and tests, since we have a better way to identify specific potential violations. We can also expect to see a decrease in some things: a decrease in support requests, a decrease in accessibility issues in new code, and fewer issues in code that has the new automation applied. Our audit-related metrics could also inform process from a business and legal perspective, since they could be used to help quantify the risk of legal action. When risk is quantified, we can then reduce it by taking specific steps to remedy the issues. And while this might be a business justification, the result is that our users have an improved experience. Improved monitoring also means that development teams know sooner if a product's digital accessibility conformance deteriorates at a rapid rate. It can also help us determine a threshold for new learning opportunities. If a high percentage of developers keep writing the same non-conformant code, we can produce materials that will help them on their learning journey. We can also use these trends to determine an app's overall accessibility health. We could even maybe put that in, well, what's that thing Ember is really good at? A dashboard?
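As a toy sketch of computing the audit metrics just listed, here is a small aggregation over a list of findings. The shape of a finding (`rule`, `severity`, `daysToFix`) is invented for this example; real audit data would need mapping into something similar before a dashboard could consume it.

```javascript
// Compute simple audit metrics from a list of findings.
// Each finding looks like { rule, severity, daysToFix }; this shape
// is illustrative, not a standard audit format.
function auditMetrics(findings) {
  const bySeverity = {}; // bug severity count
  const byRule = {};     // violation frequency
  let totalFixDays = 0;

  for (const f of findings) {
    bySeverity[f.severity] = (bySeverity[f.severity] || 0) + 1;
    byRule[f.rule] = (byRule[f.rule] || 0) + 1;
    totalFixDays += f.daysToFix;
  }

  return {
    totalBugCount: findings.length,
    bySeverity,
    byRule,
    averageTimeToFix: totalFixDays / findings.length, // in days
  };
}
```

Feeding successive audits through a function like this is what turns "are we getting better?" into a trend line you can put on that dashboard.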
In addition to trend analysis, we can use the metrics we gather to inform future work, such as which possible violations currently require manual testing but could reasonably be automated. What violation is happening the most? Is there a tool we could create to make that problem go away? How could we make it faster for developers to fix the issues that seem to take a long time to resolve? There are additional metrics to be considered here, but I think the ones we've discussed give us a solid start to quantifying accessibility in our engineering practice, bringing us closer to continuous accessibility. I think there might be a new tool that our community could create. I have a vision of this maybe being available as an open source tool, something that anyone in the community could use to improve the measurement of accessibility in their codebase. Of course, this slide is a little ugly, and maybe a little bit on purpose. If you're offended by the design, please consider helping out with this dream by designing something better. With our metrics in hand, though, we could use this data to see clearly and quickly how we're doing. As we have looked at how to think about our code and what metrics to consider, I hope you've seen a path forward to the vision of continuous accessibility. I hope I have inspired you to create something new or to contribute to an effort already in progress. I hope I have empowered you to think about accessibility and your code in a new way. A lot of the strategy I have talked about today is of my own creation, but I'm part of a team, and I stand on the shoulders of absolute giants. I'd like to take a moment to recognize the contributions of others on the project. Robert Jackson, among other things, helps maintain Ember Template Lint and directed the idea to add the todo feature to it. My colleague Steve Calvert drove the todo implementation and designed the decay-days feature.
And internal teams at LinkedIn have been enthusiastic early adopters. That's really allowed me to work out the kinks of our strategy to implement template linting at scale. I'm really grateful to every one of you. And of course, my husband Joseph supports me and my career, and he's the reason I can take time to speak with you today. As a research scientist, he has taught me to think more like a research scientist, and I am really grateful for that. In closing, I'd like to say this: this talk has been focused on providing a strategy to achieve continuous accessibility. But when a measure becomes a target, it ceases to be a good measure. It's so important, I'm saying it twice: when a measure becomes a target, it ceases to be a good measure. Keep the end goal in mind. We're doing this work for real people. Accessibility is for everyone.