So everybody, thank you for being here. My name is Jessica Reiko, and I am a dancer and a designer. I coexist across many spaces at ASU, including the School of Film, Dance, and Theatre; the School of Arts, Media, and Engineering; and the Human Security Collaboratory, which I co-direct with two amazing women, Dr. Jackie Warnemont and Dr. Marisa Duarte. With them, I conduct research that explores our physical interactions with technology and how those interactions affect our everyday ways of being in the world.

To explain a little bit of why I'm interested in this area of research, I want to start by telling you a story about Google Glass. So Google Glass came to the market with great hype in 2015, but as you might remember, it failed to fully take off. Google expressed surprise when people began connecting its failure to surveillance concerns, claiming that these fears couldn't really be about the device's on-board camera, because there are many cameras in public and private spaces, and this is only one. Now, logically, Google's right. If we're just looking at the cameras, and we're just looking at this from a design perspective, then these fears don't make rational sense.

But let me reset the stage from the perspective of a socially conscious movement practitioner. The voyeuristic nature of this camera is not just about the camera itself, but about the way in which it is worn. It is a publicly recognizable camera, permanently facing outward on a moving body. It sits at eye level, and it roves and seeks with its wearer, stopping to stare directly into your eyes during a conversation. If you look at a Google Glass wearer, you do not see two eyes; you see three, and that third eye is unblinking, and it could be recording everything. It feels alive because it is connected to a living being. It cannot recede into the background. It cannot be put away. It just watches.
Google Glass rubbed up against a world that was felt mostly by those who were proximate to Google Glass wearers, hence the emergence of the derogatory name "Glasshole." These negative sentiments are also likely a resurfacing of existing negative feelings toward video surveillance more broadly. Because now, this camera is not connected to a lamp post or a building; it is connected to a person: a person to whom I can voice my frustrations, a person we can kick out of a restaurant, a person we can define as a Glasshole.

These are the seemingly subtle human-computer relationships that we are starting to study, partially because we're interested in how this can help us design better technologies, but also because it helps us understand the very real and potentially harmful repercussions of our traditional design methods. Ask yourself: just because you choose to adopt a technology, does that mean that you trust, value, or condone it?

Oftentimes when something like Google Glass fails, the gut response is, you know, people just aren't ready for it yet. But this is dangerous, because it makes the assumption that the best plan of action is to facilitate, create, or wait for the right conditions in order to try again. But what if we as designers took a step back and asked: what's wrong with my process? Why are my designs creating fear, anxiety, and paranoia? And lastly, how might my practices and my own lived experience not relate to those whom I intentionally or unintentionally design for? Rather than begrudgingly pushing society forward to be "ready," I ask designers to critically reflect on the limitations of their own design practices and to remember that designing for one intersection of society, namely affluent, middle-to-upper-class, white American men, does not mean that those designs will work for those who do not identify as such, even with modifications. This is more than bringing the right people to the table.
This is about changing who gets to make decisions, and how. Because we know that our design practices do not come from neutral or acultural places. They represent the implicit and explicit identities, values, histories, habits, and cultures of those who make them, which in the tech industry is predominantly heterosexual, cisgender men who identify with Western schools of thought. If we're going to make change, we have to make room for change in our methodology. This does not mean that our existing practices and methodologies are bad or invalid. It just means that they, like all practices, have limitations.

To address these issues in my own work, I collaborate with other people who are interested in creating research from a multi-dimensional perspective of what it means to be human. In this, we celebrate a multiplicity of embodied identities, including queer, feminist, racially diverse, and differently-abled perspectives, because we think it's important. To summarize: our work is compassionate; we do our work with an ethos of care and joy. Our work is kinesthetic, which means that we foreground movement-based and bodily knowledge. And lastly, we see ourselves as disruptors; we work to question, interrogate, investigate, and break apart structures that we see as needing radical change.

If we want our technologies to represent all of us in society, then we need to make room for radical change. This means doing wild things like inviting a socially conscious movement practitioner to the design table. Our technologies are too much a part of our everyday lives to have only certain sociocultural perspectives and certain practices embedded within them. So we need to make room for change. Thank you.