Hello, I'm EA Drafton from Southampton University, UK, and I've worked in the area of ICT accessibility and assistive technology for many years. As someone exploring artificial intelligence and inclusion, I feel we're only just scratching the surface of the Accessible Europe 2019 agenda with the use of big data, algorithms and machine learning. People talk about artificial intelligence automating tasks. This may be true in some cases, but when it comes to the diversity of disabilities and the range of barriers individuals face when using technology, I think at this stage we need to think about augmentative and assistive intelligence rather than just artificial intelligence.

It's estimated that over a third of the global population will be living with some sort of disability within the next 30 years, so we have to find ways of improving access to ICTs. We need to think about the way individuals with a range of dexterity, motor, sensory and cognitive disabilities can control devices and use applications, as well as about the content we put into them. With the increasing use of touch tablets we can be creating more barriers, with gestures that are hard for some to achieve. But pause for a moment and think how haptic (touch) feedback and speech input are beginning to appear; then remember how we can dictate to and listen to speech output on our devices. Many have found this helpful. The improved accuracy of these types of ICT has been largely down to data collected about the way people touch screens or speak. This data has allowed computer scientists to enhance these applications to the point where many individuals with disabilities are successfully activating technologies independently. Moreover, these apps have often become part of the operating system of a device that can be bought in the general marketplace. This is AI supporting universal design and digital accessibility, whilst hopefully making assistive technology more affordable.
The use of media in all its formats has also expanded over recent years. There is hardly a web page without a video or a collection of images, so we can offer many alternative formats to suit all users. However, if we fail to offer such useful things as captions, which can help us all when a video is in another language, and especially those who have hearing impairments, we are creating yet more barriers. The accuracy of automated captioning is not there yet, and nor is that of image recognition. But it's getting there, and I feel we need to realise how much quicker it can be to correct an automated set of captions than to create a completely new set. The same is true of alternative text for images in documents and on websites. We just need to be reminded to spend the time making these corrections once we have uploaded our video or image. Assistive AI can speed up some accessibility processes.

If you're thinking about personalisation and localisation, the chances are you have seen a chatbot or have been offered automatic translation when you've reached a website in another language. These tools can be used to support accessibility: the chatbot can answer questions and clear up concerns about the content or application, and the translation may turn into a summarisation for someone who has literacy difficulties such as dyslexia. In this case, the potential is for augmented AI to enhance access and offer just-in-time support.

As a final thought, if we accept that humans are hard to categorise, and that the variety and degree of disability cannot be put into separate boxes, then AI still has some way to go in supporting our needs. But if we allow ourselves to think about the way AI can augment and assist our skills, it is an exciting time to be around. This background paper is available on the Accessible Europe website, and I very much look forward to discussing the subject more at the Regional Forum on ICT for All in Malta this December.