Due to the profile of strengths and weaknesses indicative of autism spectrum disorders (ASD), technology may play a key role in ameliorating communication difficulties in this population. This paper documents coding guidelines established through cross-disciplinary work focused on facilitating communication development in children with ASD using computerized feedback. The guidelines, referred to as A3 (pronounced A-Cubed), or Annotation for ASD Analysis, define and operationalize a set of dependent variables coded via video annotation. Inter-rater reliability data are also presented from a study currently in progress, along with related discussion to help guide future work in this area. The A3 methodology is well suited to examining and evaluating the behavior of low-functioning subjects with ASD who interact with technology.
ABSTRACT: Effective tutorial systems can help promote products by reducing the barriers to learning new applications. With dynamic web applications becoming as complex as desktop programs, there is a growing need for online tutorial/help systems. For visually impaired users, the key limitations of traditional help systems are (1) poor access to help content with assistive technology, and (2) frequent reliance on videos/images to identify parts of web applications and demonstrate functionality. In this paper, we present a new interaction model, targeted at screen-reader users, that describes how to embed an interactive tutorial within a web application. The interaction model is demonstrated in a system called DTorial, a fully functional, dynamic, audio-based tutorial with embedded content. While remaining within the web application, users can rapidly access any tutorial content, injected inline near relevant application controls, allowing them to quickly apply what they just heard to the application itself without ever losing their position or having to switch windows. The model and implementation are grounded in the literature on help systems for sighted users and in an analysis of screen-reader and web-application interactions. Lessons learned from the incremental design and evaluations indicate that providing visually impaired users with dynamic, embedded, interactive, audio-based tutorial systems can reduce the barriers to learning new web applications.
Real-time computer feedback systems (CFS) have been shown to impact the communication of neurologically typical individuals. Promising new research suggests the same for the vocalizations of low-functioning children with autism spectrum disorder (ASD). The distinction between speech-like and non-speech-like vocalizations has rarely, if ever, been addressed in the HCI community. This distinction is critical as we strive to most effectively and efficiently facilitate speech development in children with ASD while simultaneously helping decrease vocalizations that do not facilitate positive social interactions. This paper extends Hailpern et al. (2009) by examining the influence of a computerized feedback system on both the speech-like and non-speech-like vocalizations of five nonverbal children with ASD. Results were largely positive: some form of computerized feedback differentially facilitated speech-like vocalizations relative to non-speech-like vocalizations in four of the five children. The main contribution of this work is in highlighting the importance of distinguishing between speech-like and non-speech-like vocalizations in the design of feedback systems focused on facilitating speech in similar populations.