Welcome, I'm Caroline Baumann, Director here at Cooper Hewitt, Smithsonian Design Museum, and we're really happy to welcome all of you this evening for a discussion on a critically important topic: AI and how it might revolutionize the museum sector. Though AI has existed in some form for decades, as you all know, we've reached a pivotal moment as we witness its transition from narrow, research-based applications into ubiquity. In just a few days, we will be sharing our interactive exhibition, Face Values, exploring artificial intelligence with our public just upstairs in the Process Lab, right above us. Cooper Hewitt represented the U.S. with this installation at the London Design Biennale last fall, and we came home with the gold medal. Face Values presents technologies like face detection, emotion recognition, and eye tracking that invite audiences to consider the pervasive but often hidden role of surveillance technologies, whereby governments and corporations monitor identity, classify emotions, and assess individuals, often without permission or oversight. With this in mind, it's not surprising that the dominant cultural conversation about AI often leans towards dystopia, most often depicting it as something odd, something to be feared. Meanwhile, researchers, designers, ethicists, technologists, elected officials, and more grapple with the present-day implications of AI, things like data bias and questions of consent, transparency, and accountability. For us here at Cooper Hewitt, and in the museum sector more broadly, exploring and critiquing AI's ethical implications is a pressing need. How might we engage with this technology around our mission to empower people through design? What are our opportunities to make meaning with AI? How might we design ethical and equitable applications of AI in our sector and well beyond it?
Tonight's program welcomes three people to present provocations from three very different perspectives: those of a curator, a computer, and a creator. These provocations are meant to frame and introduce this crucial discussion, so we might begin to forge a path together as a community. This evening's event marks the first public program of our newly launched Interaction Lab, the purpose of which is to reimagine Cooper Hewitt's visitor experience with interactivity as the core principle. The launch of the Pen established Cooper Hewitt as innovators in interactive experience in the museum sector, and it could not have been realized without the generous support of Bloomberg Philanthropies, who are in this room with us tonight. Building on that success and our strategic aim to be a platform for design, part of the Lab's mandate is to explore how we might continue to innovate with emerging technologies like AI. With that, I'd like to hand it over to my colleague and our organizer tonight, Carolyn Royston, who will tell you more about the Lab and what we're doing at Cooper Hewitt.

Hello, hi. I'm Carolyn Royston, confusingly another Carolyn, and I'm delighted to welcome you here tonight for the very first Interaction Lab public program. It's so exciting to see the room so full, with such a wide range of people and organizations, and I'd also like to extend a warm welcome to our audience members and colleagues visiting from cultural organizations in the UK. They're here for a UK-US collaborative project on museums and AI, and you'll be hearing a little more about that project very shortly. But first, I'd like to introduce the Interaction Lab, a brand-new initiative here at Cooper Hewitt. It's a new kind of R&D space, a lab without walls, where we are reimagining the museum experience for the 21st century. We're bringing a holistic interactive design methodology to the very heart of our visitor experience, across digital, physical, and human interactions.
Rather than calling ourselves an innovation lab, we've opted to focus on the interaction as the most essential unit of engagement with our visitors, conscious that an entire visitor experience is the sum of many interactions across platforms. To do that, we're living our mission as a design museum. We're creating a space where design is a living process, where we can tackle our own design challenges openly and transparently and invite visitors to co-create and test out ideas with us. One of our goals is to embed a prototyping culture into the museum, supported by a research and evaluation framework to ensure that we remain grounded in data-led audience insight and meet visitors where they are, whilst also opening up new and unexpected opportunities. Alongside this, we will be actively and openly documenting our progress, sharing our learning as we create opportunities for new interactions, test out new and emerging technologies, glean critical visitor insights, and look at the impacts for the sector as a whole. Tonight's event is the beginning of what will become a regular event series where we present questions, ideas, and provocations out in the open, both for discussion and as collaborative design activities, with a wide variety of audiences. We softly launched the lab in April this year with Rachel Ginsberg as founding director, and since April, Rachel and I have been leading brainstorming sessions internally and workshopping with the support of some very talented and generous designers and friends of the museum, who have helped us to develop a clear understanding of the role of the lab and our opportunities and priorities. After having our heads down for the last few months, we're now at the point where we're ready to make some noise and share what we're doing.
We're just about to kick off prototyping of a co-creative audio experience for the museum, and very soon we'll be launching a number of exploratory design commissions to solicit provocative ideas from the design community, and we'll also begin collaborating with cohorts of students and researchers to explore visitor experience opportunities at the museum from all angles. Working in an open and participatory process, we will develop, iterate, and test prototypes with audiences, then optimize successful products for a wider rollout. So we have a very busy and exciting time ahead, and we look forward to sharing our progress with you, and perhaps even having the opportunity to collaborate and co-create with some of you here tonight, as you help us to reimagine the visitor experience at the Cooper Hewitt. So now I'd like to introduce Dr. Elena Villaespesa, who is an assistant professor at the School of Information at Pratt Institute and international co-investigator for the Museums + AI Network. Elena, along with principal investigator Dr. Oonagh Murphy, lecturer in arts management at Goldsmiths, University of London, has brought together a range of senior museum professionals and prominent academics to develop the conversation around AI, ethics, and museums.

Thank you very much. Thanks, Carolyn. We are going to be very brief, as we want to leave a lot of time for our three brilliant speakers tonight. So we just wanted to say how excited we are about this event. We have two events, one in London, one now at Pratt in New York, working with the museum professionals in the network, but we don't want to keep the conversation closed to that group, so we wanted to open it up to more people. We organized an event at the Barbican in London, and now we have collaborated with the Interaction Lab at Cooper Hewitt. So thanks, everyone, for coming. We are very excited about this conversation. Have a great evening.

Hi everyone.
I just want to thank you all for coming and tell you how incredibly inspiring this very full room is, and for those of you who are standing, my apologies, and thank you for being with us. My name is Rachel Ginsberg. I am the Director of the Interaction Lab, and as some of you who know me know, I'm quite an AI enthusiast, have done some work with this as an artist myself, and generally spend a lot of time thinking about what the possibilities are with AI in the museum sector specifically. But that has been well covered this evening, I'm not going to continue covering it, and it will continue to be well covered by the three amazing speakers we have. So just some quick notes about the format. We're going to have three speakers, as you might have guessed, a curator, a computer, and a creator, and after each of them speaks, the three together will do a Q&A with the audience for as much time as we have. And just a note: all of you have, if you chose to grab them, some cards and a Sharpie that you got at the check-in desk at the front. Those are just for making notes, and if you'd like to share your thoughts about the event with us, you can feel free to leave them on your chair afterwards, and we'll be collecting and reviewing them. We're so curious, with this new audience, and thinking about the Interaction Lab: what are you thinking about when you come to an event like this? And I think that that's all I have to say. Sorry, it's been kind of a long day, but I'm so excited to have you all here, and I'm going to kick it off by introducing our first speaker. Andrea Lipps is Associate Curator of Contemporary Design at Cooper Hewitt, Smithsonian Design Museum, where she conceives, develops, and organizes major exhibitions and books.
Most recently, Andrea has authored and edited publications and curated exhibitions including Nature: Cooper Hewitt Design Triennial; The Senses: Design Beyond Vision; Joris Laarman Lab: Design in the Digital Age; and Beauty: Cooper Hewitt Design Triennial 2016. Additionally, she spearheads the museum's efforts to acquire born-digital works into our collection. Andrea is a regular visiting critic, lecturer, and thesis advisor, participates on international design juries, and frequently moderates and speaks at events, symposia, and academic conferences on contemporary design and curatorial practice. Andrea, thank you so much.

Hi. Welcome, everyone. So I have the pleasure of being the curator in our triumvirate of this discussion tonight. So thanks for all the introductory remarks, and thanks to this packed room full of people who are here with us this evening. As Rachel mentioned, I am Andrea, the Associate Curator of Contemporary Design here at Cooper Hewitt. And I have to admit that when I was asked to participate in this event, I initially hesitated. And that's because, for me, with AI, I have so many more questions than I have answers. Where does the human end and the AI begin? And does it? What is the impact of AI on our lives right now? How is AI being engaged with creatively and critically? How can we ensure diversity, inclusion, safety, and even human rights are maintained with AI? What role will AI play in our future in one year, in 10 years? What is the landscape going to look like? And how can public institutions like museums critically engage with this technology? So this literally is just the initial subset of questions that I started jotting down as I was thinking about all of this. And it all made me realize how deeply ambivalent I am about AI, as a curator and as just a human in the world. To be sure, AI, or artificial intelligence, already pervades our lives.
Computers now complete many tasks that formerly required human thought. They are able to transcribe phone calls. They can analyze legal documents. They can play chess. So here is IBM's Deep Blue computer as it defeated Garry Kasparov, the world chess champion, in 1996. So in 1996 AI was able to beat a human at chess, and it still continues to do so. These advances are examples of AI, and many AI systems rely on machine learning: software that looks for patterns in a large group of images, text, and other data so that it can learn to do something useful, whether that's tagging images, suggesting new products, or finding signs of cancer. There was recently a study that discovered that AI is just as good as, and sometimes better than, doctors at diagnosing lung tumors in CT scans. So indeed, AI can be a beneficial technology. For the blind and low-vision community, for instance, there are AI apps. There's one called Seeing AI that uses machine learning for scene and object recognition and then narrates the world to the user, so it can identify currency, it can scan barcodes, it can describe nearby people and even predict their mood, it can audio-describe images in the app, and so much more. So the uses of this technology are incredibly promising, and of course there's the dystopic end of it as well: AI can be biased or manipulative. Governments and corporations are collecting images from social media and public spaces, and they're using these to monitor identity, to predict and influence behavior. Perhaps the most egregious and pervasive use of this is in China, where, let's say, it's unburdened by concerns about privacy and civil rights. So the government there is using AI to build a surveillance state. As an example, here are surveillance photos of jaywalkers. These are jaywalkers, and they're publicly displayed in an effort to embarrass them into compliance.
And these eight people, 15 minutes after the infraction occurred, received a notification on their phone with a fine. So using just public cameras, it links their face to their phone, and they get a notification as well as, of course, this sort of public shaming, if you will, for jaywalking. So China uses CCTV, closed-circuit TV, to capture images. This is just a shot of a facial recognition system display that is being used in Beijing. I mean, it's almost like the digital panopticon. There are face scanners that are used to access homes and offices, so rather than, you know, a key card or your phone or whatnot, it's your face that allows you entry, and there are mandatory check-in kiosks at airports, for instance, that use facial recognition software to check you in for your flight. So here you can see someone using that. So all of this data, travel data, data from criminal and medical records, online purchases, social media commentary: it's been reported that all of this is being aggregated and fed to a database that links it to citizens' faces. Here in the US, face-scanning technology is beginning to be rolled out a bit more, and we actually are beginning to see it in our airports. In 2018 Delta inaugurated its first biometric terminal in Atlanta. So you walk up to one of these kiosks, it scans your face, you don't have to put in a boarding pass or your ID, and that's it, you're checked in and you can walk through. One look and you're in. JetBlue also is allowing you to trade your face for your boarding pass or your ID. And I mention all of this because, perhaps, the uses won't be as egregious as they are in China, but this technology still is largely unregulated, and it's important for us to consider how these incremental uses of the technology normalize us to our face becoming a data point, and to all of this other data being connected to our face. So it's just something for us to really keep in mind.
Of course, flawed algorithms also amplify bias. In 2015 there was the Google Photos debacle that labeled a picture of Black Americans as gorillas, which is harmful and hurtful. Faulty law enforcement tools can trigger harassment and false arrests. We really are living in this era of the self as database: to be mined, to be measured, to be monetized. Our data is sold, it is circulated, and it is surveilled without public oversight. And all of these AI systems and technologies are designed. There are humans that are writing the algorithms and selecting the datasets. There are decisions that are made about what data to collect and how to use it. And so what's really unique about having this conversation here tonight at Cooper Hewitt is that, as the nation's design museum, we present in our galleries the very things that we as a museum sector might deploy, AI for instance, because these things are designed. So as Caroline Baumann was mentioning, upstairs we are opening in just a few days an exhibition, Face Values, which premiered at the London Design Biennale last year. And with this exhibition we really are engaging visitors' curiosity and play with facial recognition and AI. You're asked to sit in a chair, perhaps you make an expression, you express an emotion. A camera records your expression, and then it employs software tools to judge your age, to judge your gender, to judge your emotional state. And systems like these, which are used commonly in surveillance, are biased because they learn from datasets that focus on limited populations or that classify people using narrow categories. So for instance, one of the pieces upstairs, I believe, uses some data from IMDb. So if I was to walk up to it and have my face recorded, it would tell me probably that I'm 50 years old, because it's comparing me to, like, Julia Roberts. And she looks a hell of a lot younger than I do, even though she's a lot older.
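The skew in that IMDb anecdote can be sketched in a few lines. Below is a toy illustration, with entirely invented numbers and not the exhibition's actual software: a one-nearest-neighbour "age estimator" trained only on styled celebrity photos, which look younger than their labelled ages, will systematically overestimate the age of an ordinary visitor.

```python
# Toy demonstration of dataset bias: all numbers are invented.
# Each training pair is (apparent-age score from a photo, labelled age).
# Celebrities are styled to look younger than they are, so the apparent
# score runs consistently lower than the labelled age.
CELEBRITY_TRAINING_SET = [
    (30, 38),
    (35, 42),
    (40, 48),
    (45, 52),
]

def estimate_age(apparent_score):
    """1-nearest-neighbour: return the labelled age of the closest example."""
    closest = min(CELEBRITY_TRAINING_SET,
                  key=lambda pair: abs(pair[0] - apparent_score))
    return closest[1]

# An ordinary visitor who both looks and is 40 gets matched against
# a styled celebrity photo and is judged to be 48.
print(estimate_age(40))  # → 48
```

The point is not the algorithm but the data: the estimator is internally consistent, yet because its training population is unrepresentative, its answers are skewed for everyone outside that population.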
So you know, here we aren't presenting the answers, but what we're able to do is really poke and probe at the technology to raise questions with our visitors. And that's precisely what we're doing here tonight. We really are a community of concerned stakeholders, and AI technologies are increasingly a part of our daily lives. I mean, anyone here use Siri or Alexa or Google Home? These technologies, again, hold a lot of promise, but how do we deploy them responsibly, ethically, safely, and with transparency for our visitors? So as a contemporary design curator, I like to say that design is the externalization of our values. It is the manifestation, the tangible form, of our priorities. So sure, bad data can happen to good people, and just because something can be done doesn't necessarily mean that it should. I would encourage all of us to consider what are the things that we are seeking to solve, as museums in particular, and is AI the right technology to do this? And perhaps it is. I mean, maybe AI would be fantastic at helping us to explore and make connections in our collections. Perhaps AI can help us to discover new ways to see and experience work. So we start with our goals first and employ AI as needed. And to that end, I actually would like to point out some frameworks that people are using to think about AI that I think could potentially be adapted for museums. So for one: is it active or passive? Are you knowingly using it, or is it something that happens as soon as you walk through the door? If it's passive, is it being disclosed? If it's active, do you have a choice to engage in it? Is your data and/or your face being linked to a real-world identity, or is it just used as an anonymous ID? For example, is it used just to generate statistics on flow through the galleries? And is this being done to provide the user some utility, or is it for someone else's benefit? And then there's a question I didn't put on here because I didn't have room.
And this is really much more about databases: where is the data being stored? Who has access to it? Can a user see it, and can they demand that it also be deleted? So just as museums employ a level of criticality in acquiring works for our collections and in developing exhibitions for our galleries, how can we maintain that same criticality in the development of digital technologies for the museum experience? So I encourage us to continue having conversations in public like this. This is fantastic. You know, really, the possibilities and the benefits with AI are there, along with the concerns and the problems. So to close, when I was invited to give this talk I was reminded of this quote from Norbert Wiener, an MIT mathematician. He says that if we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, we had better be quite sure that the purpose put into the machine is the purpose which we really desire. And so, in my ambivalence, I encourage us to continue talking and to tread thoughtfully. Thank you.

Thank you so much, Andrea. So, our next speaker. Actually, just by way of a little bit of introduction: we had some conversation, quite a lot of it actually, when we were talking about the computer, about who to tap to be the computer, and initially people were like, are we going to actually have a computer, and have a computer give a talk? And where we ended up landing is that computers, before they were machines, were people, and that the nearest thing to a computer in this world that is a human is a data scientist. And so, on that note, I would like to introduce Harrison Pim. Harrison is a data scientist at the Wellcome Collection in London, and his background is in computational physics and machine learning. Thanks, Harrison.

This is an interesting kind of position to be given in the three.
I'm going to try and share some perspective that I've gained over the last couple of years of working as a data scientist in a museum with AI. I'm going to go through where I work, what I do, the kind of things that we're already deploying machine-learning-wise at Wellcome, and then give a couple of provocations, or things that I think might be interesting to this audience. So, I work at Wellcome. This is Wellcome Collection. We are a museum of, kind of, medical history and health. We're based in Euston. We look like this. This is the reading room inside, where it's all very interactive and touchy-feely friendly. We have the kind of more classic galleries as well, where you can ask very smart people very smart questions about all sorts of interesting health-related things. We have this new exhibition gallery called Being Human, which is all about what it means to be healthy and human in 2019: lots of stuff about environmental breakdown and infection and genetics. We also have quite a big focus these days on accessibility, and we've got all sorts of lovely write-ups like this for the new gallery. We have our own website. This is mostly what I work on, where we have all the kind of standard information about exhibitions and events. We publish editorial content on that same theme of what it means to be healthy and human, on quite a diverse range of topics on that broad theme. So we had a week on masculinity and female masturbation: all sorts of things that people love reading about. We also have this collections page, which is where most of my work lives, which allows you to search for things in the collection that have been digitized or have digital records. So if you're interested in witches, you can type in the word witch and get something like this back: evidence for witches in the 1600s. We digitize a lot, and quite fast, at Wellcome. We have a whole studio, and they get through about 10,000 images a day.
It's quite a clip that we're digitizing at, and that's much too much for the traditional cataloging processes to add detailed catalog records for. So this is where the kind of machine learning stuff comes in. We have loads of images. We have loads of text, because you can turn those images of book pages into the corresponding text, and that's what I work with. We can do lots of interesting things with the images, but I'm only going to give one example of the stuff that I've been working on over the last year and a half, and that's related to text. It's kind of illustrative of why this kind of thing is useful in a museum context. It's also worth saying that we are only working with the collections data at the moment. We don't do any kind of machine learning or AI stuff with our visitors. It's all about learning from the collection, rather than the people who use it. So we have a lot of text, text like this, that it would be nice if we could link up with other text records. We can use machine learning to annotate this text with entities: not only recognizing that there are entities in this text, but recognizing who and what they are. Is this the right slide? There we go. So this is the annotated version, and if I click on... there we go. There's South Africa, which has been disambiguated from this. We can do more complicated things, like the MRC unit: it knows that that's referring to the Medical Research Council because of the context around it. So that's kind of useful. It's hard to appreciate why that's useful at face value, but this was demoed earlier today. Once you have those linked records, where you recognize that entities are linked by the text that they appear in, you can put together these lovely dynamic graphs of the entities, how they're connected, why they're relevant, and then you can find all of the things that they're related to. This is a Wikidata example. Our stuff is based heavily on Wikipedia and Wikidata data.
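The annotation-and-disambiguation step demoed here can be illustrated with a toy sketch. This is not Wellcome's production pipeline (which, as noted, is trained on Wikipedia and Wikidata); it is a minimal, hypothetical version of the idea: spot a mention like "MRC" in catalogue text, then choose between candidate identifiers by overlap with the surrounding context words. All IDs and context vocabularies below are invented placeholders.

```python
# Toy entity linker: identifiers and context vocabularies are invented
# placeholders, loosely in the style of linked-data IDs.
CANDIDATES = {
    "MRC": [
        {"id": "Q-MEDICAL", "label": "Medical Research Council",
         "context": {"medical", "research", "health", "unit"}},
        {"id": "Q-RAILWAY", "label": "Midland Railway Company",
         "context": {"railway", "train", "locomotive"}},
    ],
    "South Africa": [
        {"id": "Q-ZA", "label": "South Africa",
         "context": {"country", "africa", "cape"}},
    ],
}

def link_entities(text):
    """Map each known mention in `text` to its best candidate ID,
    scoring candidates by how many of their context words appear nearby."""
    words = {w.strip(".,").lower() for w in text.split()}
    links = {}
    for mention, candidates in CANDIDATES.items():
        if mention in text:
            best = max(candidates, key=lambda c: len(c["context"] & words))
            links[mention] = best["id"]
    return links

text = "The MRC unit ran a medical research programme in South Africa."
print(link_entities(text))
# {'MRC': 'Q-MEDICAL', 'South Africa': 'Q-ZA'}
```

The same surface form, "MRC", resolves differently in a sentence about railways, which is the whole point of using context rather than a fixed lookup table; real systems replace the word-overlap score with learned embeddings.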
The models are trained using that data. Now, into the next part of this: the provocations, the things that I think are interesting. These machine learning models have kind of a parasitic relationship with our data, as we've heard. They are dependent on that data, and they will absorb all of the kind of poisons that exist in the original datasets. And there's a lot of risk there. If we take historical datasets and train machine learning models on them, we risk inheriting all of those biases from history and using them in models that we then perpetuate into the future, which is obviously bad. We have skeletons in our cupboards in that historical data. But there's also a risk in training models like the one that I just demoed on contemporary data and projecting that back onto the past: we risk altering the historical record. And if we are going to deploy these models, we need to be aware of those decisions that we're making. The second point is about AI monoliths, and this monolithic perception that we have of them. I think that sci-fi over the last couple of decades has been really useful for figuring out a lot of things about AI, asking interesting questions and kind of puzzling them out. But I also think there are problems with that, in that we imagine that the thing that will go wrong with AI is that a data scientist like me designs a little digital Rutger Hauer, and he runs around inside your laptop, and maybe he decides to go evil and unplug all of the wires inside my laptop and wreak havoc. But that's not the way it works. That's not what we're designing. There's also this perception that AI is some kind of monolith that is going to descend from the heavens and be all-knowing, all-powerful, omniscient, omnipresent, and is going to change society forever without our kind of understanding or control over it. But that's not the case. We are not designing colleagues that are superhuman, super powerful.
We're designing tools, and those tools are used by people, and the computer in this situation is a much broader thing than this laptop sat on the desk. It's a whole system of decision-making that involves all sorts of people, at all sorts of levels of your organization. And I think we need to be cognizant of that. I'm not claiming that algorithms are neutral. I think it's very easy to design a tool with a bloody great spike on the end of it, and we should avoid doing that, but we should also be cognizant of how these tools are being used. There's also hope in that: we are in control of these algorithms. They are tools. We can use them to great effect. We're in control, and if things go bad, there are people to blame. Cheers.

Next up, Karen Palmer. Karen Palmer is the storyteller from the future. An award-winning international artist and TED speaker, Karen creates immersive film experiences that combine storytelling, film, AI, neuroscience, and behavioral psychology, and use technologies like facial emotion detection and eye tracking to provoke discussion around larger societal questions about implicit bias and social justice. Her newest work, Perception IO (Input Output), is a reality simulator that reveals how a person's gaze and emotions influence their perception of reality. This immersive storytelling experience invites participants to evaluate the data they are calibrating in their minds, become aware of their subconscious behavior, and potentially reprogram it. Perception IO will be on view in the Process Lab upstairs as part of Face Values from this coming Friday the 20th through May 17th, 2020. Here's Karen.

I don't know how I'm going to break this to you, so I'm just going to give it to you straight. I'm here to tell you a story, and I know it's going to sound far-fetched. I'm urging you to be open-minded. You see, your very future depends upon it.
I have been sent back from the future, from the year 2033, to deliver a message by the future selves of some of the very people in this room. My name is Karen Palmer, and I'm the storyteller from the future, and I've come back to enable you to survive what is to come through the power of storytelling. But I get the sense, because museum crowds tend to be quite savvy, that maybe not everybody in this audience is taking my word for the fact that I am from the future. So I'm going to do something very, very dangerous, because of the severity of the situation. I'm going to tell you something I could not possibly know unless I was from the future. So in the time that you currently live in, your technology touches every single aspect of your society and culture; in the time where I come from, technology controls every single aspect of the world we live in. So when it comes to something like your driverless cars and your Ubers, your main concern at this time is safety. Boy, you are so naive, trust me. In the time where I come from, the main concern is actually security. So if you call for your driverless car, like an Uber, and you step into that device, and the AI scans you and the dataset determines that you have any outstanding warrants or anything minor against you, your vehicle will not take you to your designated destination. Oh, no, no, no. Your vehicle will be turned into what is known as a temporary police transportation device, and you will be taken to the local police station for processing. So hopefully now I have your attention, and I'm urging you to listen to me, because I have a very important message to deliver to you today. But first of all, what I'm going to do is start at the beginning. So the original purpose and role of storytelling was actually to empower and affect people's subconscious, in the form of myths. This is what is known as a reality construct of real life, a mandala.
Most religions or spiritualities understood the power within storytelling, how it can affect us, and its role. All that myths have to deal with is the transformation of consciousness: you're thinking in this way, and you have now to think in that way. So what happens in a world where we have lost this connection with our myths and our power of storytelling? We live in a world like this, of division, isolation, misunderstanding, separation, and that's why I created Riot. Riot is an emotionally responsive film that uses artificial intelligence and facial recognition. Riot is the film that watches you back, and as you watch it, it makes you aware of your subconscious behavior. The narrative changes in real time depending on your response, harking back to that sense of reality constructs, and empowering you. I'm not going to show a clip of Riot today, because I want to show you a world exclusive of Perception IO, so I'm going to keep moving. With the experiences that I create, I use technology to empower people. Traditionally, you would watch media, or interact with media formats, which are increasingly becoming more persuasive and influencing us. At first, social networks were designed to make us more addicted to them; now it's more about influencing our behavior and our thoughts and our politics. And with this, what I'm trying to do is to make media a feedback loop for who you are, so the media responds to you and gives you an indication. I'm very inspired by Parkour. Are you guys familiar with Parkour? Okay, museum crowds are super cool nowadays. Parkour is an urban inner-city sport. On the surface it's about moving from point A to point B, but really it's about moving through fear, and in the society that we're living in, we're being bombarded with lots of media which is really kind of influencing our weakest points of fear and misunderstanding, and I feel it's holding us hostage to that.
I previously worked with Brunel University London on RIOT to create the datasets, and I'm just going to keep moving through this. The subconscious storytelling experiences that I create use epigenetics, neuroscience, mindfulness, and behavioral psychology. I started working with storytelling and film and technology about 15 years ago, because I understood that the future was very much about making the participant part of the experience using technology. Things are reaching a next wave with AI and data, and that's why those are now very much a big part of my storytelling experiences. In a world where people are not conscious of how they're using technology, things like bias, a lack of awareness, a lack of consciousness are going to seep into it. Technologists are just storytellers, but they're using data to do it, and if they're not conscious of being neutral, or even of their own perspective, then that can influence their creations. So I just want to make you aware that there's lots of bias around us in the world, but when it seeps into technology and machine learning and datasets, it's not something which is so obvious to see. Bias has been identified as something which can be a real danger moving forward. All right, so watch this. [She plays the viral video of an automatic soap dispenser that responds to a white hand but not a Black hand.] That is kind of funny, and you can put it down to bias or just bad design, but sometimes the consequences aren't quite as humorous. There are systems that have been designed within the criminal justice system, one called the COMPAS system, which supports judges in determining the length of a criminal sentence, and it has been proven to be biased against people of color and Black people. There's another similar system in the UK which has been proven to be biased against working-class people.
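The kind of dataset bias Palmer describes can be made concrete with a standard fairness audit: comparing a model's false positive rate across demographic groups, the disparity at the heart of the published critiques of COMPAS-style risk scores. The sketch below is a hypothetical illustration, not drawn from any system mentioned in the talk; the group names and numbers are invented.

```python
# Hypothetical illustration: auditing a classifier for false-positive-rate
# disparity across groups. Predictions and labels here are invented.

def false_positive_rate(predictions, labels):
    """Share of truly negative cases the model wrongly flags as positive."""
    negatives = [(p, l) for p, l in zip(predictions, labels) if l == 0]
    if not negatives:
        return 0.0
    return sum(p for p, _ in negatives) / len(negatives)

def fpr_disparity(by_group):
    """Per-group false positive rates and the max gap between groups."""
    rates = {g: false_positive_rate(p, l) for g, (p, l) in by_group.items()}
    return rates, max(rates.values()) - min(rates.values())

# Invented example: prediction 1 = flagged "high risk",
# label 1 = actually reoffended.
groups = {
    "group_a": ([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0]),
    "group_b": ([1, 0, 0, 0, 0, 1], [1, 0, 0, 0, 1, 1]),
}
rates, gap = fpr_disparity(groups)
```

Equal accuracy overall can still hide a gap like this: here group_a's innocent members are flagged at a higher rate than group_b's, which is precisely the pattern that is hard to see without access to the algorithm and its data.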
I am an independent artist, but my partners are absolutely essential. I wouldn't even call them partners; I would call them people who have also come back in time on a mission, whom we have reconnected with. I've been doing a lot of research with a company called ThoughtWorks, and ThoughtWorks Arts, and I'm very happy to have Ellen here in the audience. We did a lot of research to really understand, so that we weren't speculating; we're coming from a position of being informed, working with technologists who are very conscious and deliberate and intentional and have a strong sense of responsibility in what we're doing, and that has been fundamental in informing my work moving forward. This is some of the work that I've been doing with ThoughtWorks. One of the missions we're on is to democratize AI. The pictures I showed you at the beginning were obviously from the future, where people are very aware of the need for AI regulation, AI transparency, and AI governance, something which isn't really very sexy in the time we're living in, but which proves catastrophic moving forward. What we did at ThoughtWorks is this: I created the RIOT installation, which has a film and an AI attached to it; we took the film off and made the AI available open source. Things like email and the internet, things you may have heard of, were with the military and the government for decades before they came to us, the people, and we cannot make that mistake now with AI. That's why we have a big mission with ThoughtWorks to democratize this software, so that we the people can find our own uses for it, for the community. The AI dataset system is called EmoPy, and that's the link to it. It actually becomes the basis for the resistance moving forward, so please be sparing with how you share this information.
Moving on to the next evolution of my work: I'm currently developing a project that's going to be shown here from Friday, called Perception IO, Perception Input Output. I'm extremely proud of this work. How quickly should I go? I only have a few minutes. I've been working with NYU, and we've been looking at how people's perceptions of reality are formulated. Much of that is based upon the data that goes into your brain; your brain is the most powerful computer in the world, and how it calibrates that data creates your perception. A lot of that is based upon your own personal experiences, and it's also influenced by your gaze, your eye tracking. So I've been working with NYU to make people aware of their eye tracking, so that they might change how they see the world, and then, in that context, see a situation and in that way change their perception of reality. I've been working with Emily Balcetis at NYU to do that. And for the Perception project that will be here, which is with NYU, I'm also very proud to say we're working with Tobii Pro, who have been lending us their extremely precise eye-gaze hardware, so that it can give participants, when they watch the experience, insight into how they see the world. I'll show you a little of it being calibrated. It's absolutely fundamental to my work to use this kind of high-level technology, but in a way that makes people more self-aware, and as I said, the companies that I work with have the same objectives: Tobii Pro is very much about making technology more accessible for people with disabilities, but also making people more self-aware, and ThoughtWorks Arts is very much about using technology to give insights into social issues.
So what happens with Perception IO is that you watch the film and the film watches you back, and you're invited to train an AI dataset for the law enforcement of the future. As you watch the film, you are the law enforcement, seeing from the perspective of the body cam, and as a person comes towards you, your emotions determine how you respond. If you feel that a character is a threat, then as a police officer the situation will escalate, and you may kill them; you shoot them if you feel they're a threat. If you feel that the person needs your assistance, you may instead call for assistance for them. To make it more interesting, I have a Black and a white character, and for each character there's one version with a mental health condition and one who is a criminal. So your emotions will determine how you perceive that person; your bias will influence the narrative of the film. Does that make sense? Oh, cool, that's really cool; it's the first time I've said that to an audience, so it's good to know it makes sense. So I'm going to show you a world premiere super quick, if you could dim the lights. This is the beginning, where you come into the room; it's going to ask you to just push the button, and then you're going to get a snippet of the action. "Welcome back, citizens, to the future of the world." So when you saw the crackle in the screen at that point, your emotions would determine the narrative, and as it changes it will be branching: there are sequences of different narratives where it escalates, and different narratives where you give the person assistance. As I said, there are consequences all around. There's a Black and a white person; one is a criminal, one has a mental health condition. If you deem that someone has a mental health condition and you shoot them, you'll be made aware of that at the end.
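The branching mechanism Palmer describes, where a detected emotion selects the next scene, can be thought of as a small state machine. The sketch below is a rough illustration of that idea only; the scene names, emotion labels, and fallback behavior are all invented, not the actual Perception IO implementation.

```python
# Rough sketch of emotion-driven narrative branching, in the spirit of
# Perception IO as described. Scene and emotion names are invented.

BRANCHES = {
    # (current scene, dominant emotion) -> next scene
    ("encounter", "fear"):  "escalation",
    ("encounter", "anger"): "escalation",
    ("encounter", "calm"):  "assistance",
    ("escalation", "fear"): "shooting_ending",
    ("escalation", "calm"): "deescalation_ending",
    ("assistance", "calm"): "assistance_ending",
}

def dominant_emotion(scores):
    """Pick the highest-scoring emotion from a classifier's output."""
    return max(scores, key=scores.get)

def next_scene(current, emotion_scores):
    """Advance the narrative based on the viewer's dominant emotion."""
    emotion = dominant_emotion(emotion_scores)
    # Fall back to the assistance path when no branch is defined.
    return BRANCHES.get((current, emotion), "assistance")

# A viewer whose face reads mostly as fear escalates the encounter.
scene = next_scene("encounter", {"fear": 0.7, "anger": 0.1, "calm": 0.2})
```

The point of the sketch is the feedback loop: the viewer never presses a button, yet their measured emotional state deterministically steers which consequences they see.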
If you deem that someone is safe and they end up being a criminal, you could be shot, so there are consequences on all sides. And you won't be physically doing anything: your emotions will determine it, and your eye tracking will also reveal what you're looking at in the scene and how that's informing your emotion. I'm going to wrap this up now. One of the main messages that I've been brought back to tell you is that the age of information has done humankind a great disservice. You feel that more and more information is going to give you greater insight and understanding; however, you couldn't be more wrong. This over-saturation of information has led to so much division and so much misunderstanding, so Perception IO is heralding the next stage for humankind, which is the age of perception, where it's all about greater understanding. Just to conclude, I'm going to touch on the future really quickly, if I have time. In 2020, when the Chinese social credit system is launched (you guys are aware of this? Chinese culture? Yeah), the world is going to change completely: the digital self will have an impact upon the real-life self, your real-life identity will carry the consequences of that digital life, and that is going to be watched by other governments and gradually implemented. Also, if you're not aware, the Chinese are buying lots of infrastructure throughout the world, from Ghana to Jamaica, and in New York and London too, so for them to integrate this facial-recognition type of technology will not be that difficult. We're moving into a world of auto-self-surveillance, weaponized tech, and biased networks. I'm just letting you know the world that we're moving into, but I want to urge you: the stories of my childhood have informed the world in which I now live. That's why it's called
programming. So I've come back to tell you: I need you to be conscious and deliberate storytellers. Whether you're storytellers or technologists, you need to be aware of the type of stories you're telling with AI and data and information in the world we're living in today, because it's really down to us, and your future needs you today. Thank you. Any questions? I know that was a lot. Yes, Ruth is coming to you with a mic.

I have a question for Ms. Leib and Ms. Palmer. I come from China, and you two both talked about the social credit system in China, and maybe that's the reason why I'm here; I'm currently studying at NYU in the museum studies program. So my question is not about AI and museums, but I want to hear some views from Western society. I argued with my friends about that system. We call it the Skynet system; it's always for the police, for the government. Some of my friends said: if I didn't do anything criminal, if I didn't do anything evil or bad, that system doesn't bother me, even if it takes my face as data, and it's for the public good, because it helps to catch those bad guys. So I'm wondering what your opinions are on that issue. Thank you.

So, who will determine whether you did something good or bad? Somebody has to document that data, right? If it were a person, if a police officer comes up to you, you could have a conversation with them; if it's a dataset and an AI, there's no conversation to be had. And as we move towards these smart cities, I have to ask a question: smart for whom? They're not designed by working-class people; they don't generally tend to be designed by people that look like me, working-class people, women, or people of color. So yeah, smart for whom, and why? I would be very concerned. Unless you feel there's no history at all of any form of oppression or suppression or, you know,
systematic racism or injustice, then cool, there are no worries, right? But if there is a history of that, and it's just going to be automated, that, I think, is a scary situation. To me it's a scary situation, personally. More questions? Just shout.

Oh, hi. This is for the data scientist. Hello, sorry, you can't see me, but you talked about text-based work. Have you been doing much with image classification or video frame classification with the collections as well?

Yes, most of the work that I've been doing has been around images. The more recent stuff has been text-based, though, and I think it's better, to be honest: I think there's more interesting stuff you can do with it, and we also have more text than we have images. I decided to demo one thing today, but there are lots of other examples I could point you to if you're interested.

You're working with images; have you moved into any timeline-based stuff with video, like tagging frames?

Not so much. Most of the material we have digitized at the moment is static images. We are starting to digitize much more video and audio, and all of the same maths that you can apply to images and text also applies to video and audio, so if I'm still around when we have enough data, then yeah, absolutely, I'll be working on it.

Thanks. This is a question for Harrison. I'm curious about how you validate the information that's linked, and also who does that. Is that part of your job, and whom do you trust when it comes to that? Because with Wikipedia, you know, it's sort of like everyone manages it together, and I don't know how that works exactly.

Yes, I have that question too. It's a thing we are actively working through. I am currently working on a system to expose this stuff to the catalogers, to verify that the annotations being added to these records are sensible. I think a lot of
that validation can be done by a person who is not necessarily an expert, so there's a chance that we could expose it to the public and ask volunteers to verify that data. There were some really interesting discussions earlier today, as part of this network, about the Wikipedia and Wikidata model of simply assuming good intent on the part of your volunteers, your annotators, and assuming that any feedback you do get is probably valid; those records are immediately added to the Wikidata database. Whether we would do that, or whether we would want multiple positive signals that something is valid, I'm not sure, but it's a set of questions that we're actively discussing at Wellcome, and also as part of this network.

Anyone else? So, we're sitting in the Cooper Hewitt design museum, and we haven't talked much about beauty, or things that are aesthetic and inspiring. If we can put down the doom and ruin for a second, could the panel offer up some things that could be beautiful from AI, things that are designed and elegant and interesting and curious about the human experience? I'd love to hear about that.

I would definitely cite the Seeing AI app as something which is incredibly beautiful and serves a very specific population. I've seen other uses, AI for Earth, for instance, which works with conservationists and citizen scientists to help track endangered species and look at biodiversity. So there certainly have been some beneficial and lovely solutions thus far that I think point to the idea that there is promise with this technology; it can certainly serve us.

I am there for the beauty. That's why I want this job that I have at the moment: I get to experience all of these things that I have no experience with so far. My background is in physics; I don't know anything about art. But through using these technologies, I can find links between things that I do know about and things that I
don't, and get further and further down these wormholes of interesting things. I think that's beautiful, being able to discover new information, and I hope that I'm doing that for other people as well by building these things. Who knows?

Just to add quickly: as you said, the technology isn't inherently good or bad, it's what you do with it, and I think it's just about making sure this technology is democratized. That, to me, is a very beautiful thing, so that we can have access to it and determine what we use it for.

Hi, I'm at the back. Excuse me, sorry, you can't see me, I'm way over here. I just wanted to ask if you had any experience of, or opinions on, the generative side of artificial intelligence. I think that across the three of you tonight, you talked more about its ability to analyze and find patterns in things, but there's been this new wave, most prominently with deepfakes, where AI is now being used to generate media. Deepfakes are this awful thing, but then there was also research done recently by a team at Samsung who used artificial intelligence, trained on datasets of facial expressions, to create three new versions of the Mona Lisa, showing what she would look like when she was speaking or smiling. So I guess this piggybacks off the previous point about beautiful applications of artificial intelligence, and I just wanted to gauge any opinions from the panel on the generative side of AI, and maybe some interesting applications for it in the world of art, if that's a question.

I think that some of the work that has been created with GANs, these generative adversarial networks, is incredible. If you look at Mario Klingemann's work, or Helena Sarin's, there's amazing stuff being produced, which really links back to that previous point about beauty. But I think that the dual use of that stuff for nefarious means
is much more obvious and pressing than some of these other things that have been discussed. I don't know what the solution to that is; it is kind of terrifying.

In response to the last two questions about generative adversarial networks, I would also add Refik Anadol, in terms of his work with architectural constructions and reconstructions, and for critical work related to video I would bring up Hito Steyerl's interesting video This Is the Future, presented at the Venice Biennale, which is certainly a critique of the predictive possibilities of all this. Generative adversarial networks are also being used by designers, in part to help produce things like book covers and illustrations, although we're not always announcing how they're being used. I think in terms of both video and image, GANs are definitely taking root, and certainly at Columbia University's Data Science Institute there are a number of ongoing projects with them. Thanks.

Okay, hello. Thank you all, such interesting talks. You've covered the uses and potential dangers of AI in terms of being at the center of an exhibition, at the center of an artwork, and in online collections, but thinking about the gallery, the visitor experience, and the engagement side: what do you see as potential applications of AI for visitors to museums, just in their experience of exploring, whether it be the building or an exhibition?

In my previous role I was working much more with visitor data. We put together predictive models of visitor behavior, not really to be used as such, but they are really useful diagnostic tools: you can use them to prise apart your data and reveal interesting things within it. So I think, from an operational standpoint, machine learning is useful as an intern, so
not necessarily something to roll out to visitors. I think there are probably many more creative people than me who could come up with interesting things to do, but that's my experience of it.

Anna? Oh, hello. Hello. On a similar note to that of democratizing access to AI: democratizing access to your data. How do you as a museum, all the museums present, think about open data, especially in the context of the Met making a huge majority of its collection open access and available for developers worldwide to work with? How are you thinking about open data?

Well, for Cooper Hewitt's collection, all of our data is already open. We have our entire collection digitized, and it's my understanding that visitors can go online and access any of that metadata. So everything already is open, and ultimately, as a Smithsonian, our collection is your collection, so enabling all of that to be open and accessible to folks is something which is close to us here. And actually, just to tack on to the previous question about potential applications for AI in the museum: I think to a certain extent the possibilities are endless. Does a robot welcome you to the museum? As Harrison was saying, perhaps there are much more operational, behind-the-scenes uses for it that would be helpful, but at the same time, how could it enhance your visit? How could it provide a little bit of information and help navigate you to things you might find interesting? How could it help you find things in our collection, or make other connections out to things? Because oftentimes when we go into museums, at least for me, I tend to get fatigued after just an hour or so, and maybe I miss that one thing that would have been revelatory. So how can AI help us
do some of those things, and even help tease out our collection, even for curators? We have over 200,000 objects in our collection, so how can it help us find things much more quickly and efficiently as we're digging through? I think there are some interesting potential applications there.

Yeah, so as a storyteller from the future, I'll let you know what happens, and it's one of the main reasons why I'm here: the role of the museum in the future is very much going to shift. The curators are going to be like gatekeepers to these very powerful collections of cultural artifacts that in some ways were taken from communities, and they're going to be inviting those communities in here. The museum is going to become a living space where people understand the value and the worth in all these different artworks. All these artworks that you said need labeling: people are going to be coming in from around the world and making you aware of the context and the power in some of this art, and it's going to come alive, because we're going to need that understanding of culture and art, the power of art, which I feel is sleeping at the moment, and as a museum you're going to resurrect that.

Just another quick note on Wellcome's approach to open data: all of our data is also open, all of our code is also open, and we write quite a lot about what we're working on. Look us up; it's all out there.

Okay, so two more questions, and people already have mics. Yes, hello, and thank you so much, this is a fantastic discussion. Clearly the theme here is concepts of values, of justice, of tolerance, of right and wrong, and I get the sense that no machine learning, no algorithms, can be taught these things. So I look to art: I think of paintings, I think of literature, and we're thinking of machine learning tagging pieces of art. Let's say Socrates and
the hemlock: was there justice there? How about Henry VIII asking a Frenchman to chop off Anne Boleyn's head with a very, very sharp blade: would that be justice? So for certain pieces of art, have you tagged them as, this is what justice looks like in the 15th century, in the 16th, the 17th, the 18th, the 19th? What is justice today, and can a machine be taught that?

I think that's much too big a question to ask a machine to deal with. These machine learning tools we're creating are very good at repeating tasks that are very easy for humans: if you could get a five-year-old to do a thing, or you could do a thing within five seconds, that is an ideal task for a machine learning system. But ideas of justice, like the COMPAS systems you mentioned earlier, are incredibly complicated questions that require so much understanding of nuance; I don't think it's sensible to hand that over to machine learning or AI. I am increasingly talking to artists and creators who want to use Wellcome Collection's assets, the works that we hold, in these contexts of asking more complicated questions about emotional justice and things like that. We can use machine learning tools, like we demoed, to make connections between artworks, and then allow those humans to ask the complicated questions themselves and come up with answers that are right for them. That's my view on a very complicated question.

Okay, and the last question. Hi, this one is for Karen. I was wondering if you could tell us a little bit more about this idea of democratizing AI, because I think that's a beautiful notion, and yet I question the ability of humans to handle it. Also, how would you do it? What is this process that you've been engaged in? I'm curious.

You question the, what did you say, the
humans' ability to use AI? But at the moment certain humans are using it: private corporations, government, and military. So are you more concerned about the people, about us, having access to it, and not so concerned about those organizations? Okay, everybody okay? Cool. So to me there's no reason why it shouldn't be democratized; why shouldn't it be? We're living in such a tech-savvy culture at the moment. If it were 30 or 40 years ago, okay, maybe, but there are so many tech-savvy people out there, and with ThoughtWorks and ThoughtWorks Arts we created EmoPy, which is open source, and we gave it to the community, and the community has been developing it for their own purposes. So I have faith in people. In terms of ethics, which is a big issue here, some people have said, well, we don't know what nefarious means people may use it for, and I'm like, well, I don't know what nefarious means it's being used for now. It's kind of like a knife: a knife can be used to kill someone, or by a surgeon to save someone's life, but you still make knives, right? So with ThoughtWorks Arts we have made it open source and accessible, and people have been using it. I had a meeting with a tech activist, a young lady, and I was telling her about the COMPAS system, how they put into that database the person's track record of criminal activity, the facts of their employment situation, and project the likelihood of them reoffending. And she said: what if we programmed in their new employment status, their new support network, their new housing, all these different support structures, and looked at the likelihood of them not reoffending? They wanted to look at the EmoPy system in that way. So I think it just starts with creating
the software and making it accessible for everyone, and then getting it out to people, people with ideas I wouldn't have thought of in a million years, so that we the people, the community, can use these things. Because the COMPAS system is made by a private corporation, and their algorithms are not transparent; there's no regulation, no one can see anything in there, and it's just spewing out this information. So I think it's essential, absolutely essential, for all of this type of technology to be democratized, and I would ask why it wouldn't be.

That is such an amazing moment to end our program for this evening. Thank you all so much. Let's have a round of applause.