So I think we're going to go ahead and get started, on the theory that people will come and go as necessary. So thanks so much for coming to our session. I'm Lisa Hinchliffe at the University of Illinois at Urbana-Champaign. I'm joined in person by Emily King from the College of Southern Nevada. And we will be joined virtually by Jason Griffey, who is at the Berkman Center at Harvard University, and by Michael Schofield. This is the problem of only ever communicating online: I'm, like, all of a sudden, not sure. Frontend Lead and Partner at LibUX. So we're really excited to present this today. It's been a very fun time putting together this issue-oriented briefing. And particularly, as an issue-oriented briefing, we thought it worth spending just a little bit of time on what our goal is with this session today. So I'm going to give a little bit of an overview of the purpose and vision of the session. Michael is going to give us some thoughts on the conversational user interface. Emily will give us a perspective on the bot as the new library patron, so, when the user is the bot. And Jason will provide some context on future developments and maybe some of the ethical concerns that might come along with using these technologies. The goal of an issues-oriented session is also to have a significant amount of time, hopefully, for questions and comments and discussion. So before we get started, we thought we'd like to hear a little bit from maybe one or two of you today about what drew you to this session and what you're hoping to gain from it, and perhaps whether your library is experimenting with any of these particular technologies. Or I shouldn't say just experimenting; maybe you have a full-blown implementation. So anyone want to speak up? Anthony, go ahead. Yeah, Anthony, [inaudible] College. You know, voice-based interfaces are kind of pervading home life, gadget life, in general.
And so I think it seems only appropriate that we kind of respond and figure out if there's a purpose and an appropriateness here. We've just started, really just scratching the surface, playing around with the Alexa interface and what it can do for something as simple as "is library branch X open today?" Open, OK, yeah. Great. Just starting. Great, wonderful. Anyone else, thinking or thoughts about this? [Inaudible] library. As a matter of fact, we did a little experiment with machine learning years ago and developed something that kind of worked. We haven't been able to really develop it further. But I'm really interested in this idea of what we can do in machine learning and artificial intelligence and what application that might have in our field. Great, great. Fantastic. One more from anyone? Yeah, go ahead. My name is Sebastian, I'm with Index Data. I'm part of a project to build a library resource microservice platform. My interest is in possible applications: what implications might this have for the architecture, the data model? Right, fantastic. So we've got a variety of perspectives already being discussed in the room here, from the idea of service development, to exposing our content, and even, as we think about building the next generation of search and discovery tools, what it means to think about these things from day one rather than in year seven, or whenever a lot of this comes along. So I want to tell a little bit of a story here. It's already been mentioned that, for a long time, in the early days of this, integrated performance support was the scholarly term for it: the idea that there would be assistance provided to somebody as they're doing a task. And this is really becoming quite pervasive already in the consumer and business marketplace.
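A skill like the one Anthony describes can start very small: at its core it is just a lookup from a branch name and a time to an open-or-closed answer, phrased as speech. Here is a minimal, framework-agnostic sketch; the branch names and hours are invented for illustration, and a real skill would pull them from the library's hours API or calendar system rather than a hard-coded table.

```python
from datetime import datetime, time

# Hypothetical hours table: branch -> {weekday (0=Monday) -> (opens, closes)}.
# A real deployment would fetch this from the library's hours system.
HOURS = {
    "main": {0: (time(8), time(22)), 5: (time(9), time(17))},
    "west": {0: (time(9), time(18))},
}

def is_open(branch: str, when: datetime) -> bool:
    """Return True if the branch is open at the given moment."""
    day_hours = HOURS.get(branch, {}).get(when.weekday())
    if day_hours is None:
        return False  # closed all day, or unknown branch
    opens, closes = day_hours
    return opens <= when.time() < closes

def handle_hours_intent(branch: str, when: datetime) -> str:
    """Turn the lookup into the sentence a voice assistant would speak."""
    status = "open" if is_open(branch, when) else "closed"
    return f"The {branch} branch is {status} right now."
```

The voice platform's job is only to recognize the intent and fill the `branch` slot; everything after that is this kind of ordinary lookup code.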
It's very common, if you call a help desk for any company, where you think you may actually be speaking with a human being, that there's actually a supportive system that allows a help desk or customer service agent to manage 10 to 15 calls simultaneously by manipulating prerecorded messages, increasing the volume of calls they can handle at once. So that's a kind of performance support, as opposed to a replacement, which is, of course, part of what the bot world is also doing. We're seeing in sales that a lot of the applications of this machine learning are helping businesses decide which leads to pursue and which sales calls people should go out on, because the algorithms are increasingly able to predict the more likely sales and the like. We're seeing in legal research the development of bots that search the literature and write legal briefs. If you haven't seen it already, its name is ROSS, R-O-S-S, and there was a big article with the headline "law firm hires its first robot attorney." Probably more a law clerk, a legal clerk. But the point being: geez, research and writing, those sound an awful lot like things we might have thought were uniquely human tasks. In journalism, increasingly, many of the stories that we read in the media were written by a computer program. They may or may not have been edited by a human being beforehand. But in many cases they are generating large amounts of text, and even if a human being does review them, the human is reviewing the computer's text; they're not just running the spellchecker and the grammar checker on a human's text anymore. And I don't know if any of you follow Digital Science on Twitter, but you might have seen that their April Fool's joke this year was that they had hired a robot as their next CEO.
And I sort of paused, because I was like, that's actually a really good one, because it's just credible enough, given the sort of learning that's happening with computers. So really the question we're here today to think a little bit about is: what about libraries, and what about higher education more generally? And I have to tell you that this talk, for me, is 18 years in the making. In 1999 (that's why the handout looks the way it does; I did find the handout, and I'm kind of proud of my librarian fu that I could find the folder) I gave a talk about virtual futures, developing new models of instruction, in which you'll see this section called integrative learning support. Now, I will tell you, we librarians are the nice group, and I was actually booed when I brought this up in 1999: the idea that something would begin to provide this kind of learning support. So I'm kind of excited to feel a little vindicated. I'd also like us to take a moment here to really recognize the amount of effort and work it took in 1999 to get a screen grab of Clippy to appear on this handout. And one of the interesting things is that the problem with Microsoft's Clippy wasn't the technology. It actually did the tasks it was supposed to do rather well. It recognized the kind of task you were doing and offered you support. What Microsoft hadn't understood was the psychology of humans accepting that kind of support. And so really a great amount of the research in the last 20 years has not just been on the algorithmic developments, the machine learning, but actually on studying human beings and how we respond to and interact with these kinds of support. So the technology has improved, but just as much of AI is about its acceptability to humans, and it's increasingly pervasive; honestly, we don't even necessarily realize it's happening when it happens in these kinds of customer service interactions. So I think it's very common.
I think everyone in the room has probably quoted Lorcan Dempsey at some point about libraries being in the flow. And this is really what this kind of supportive environment could do: put our services and resources in the flow, if we can figure this out. So I'm giving you a little bit of a timeline here. It feels amazing to me that it was 12 years ago that Lorcan really put forward this idea that we've been working with since then. In 2007, I was fortunate to be a presenter at the Ticer Summer School, and Anne Christensen from Hamburg University gave a presentation there about Stella. Stella was a reference bot that they had developed and put onto their website, which could answer an immense number and percentage of the kinds of questions we typically get asked in our virtual reference services. It was an early prototype. She was live for a number of years; she no longer is. And there are all kinds of things we can reflect upon, and that Jason will also help us with, with respect to gender and many of the ethics that come up in this kind of situation. Moving forward in time, in 2014, '15, and '17 we have the New Media Consortium's Horizon Report Library Edition. And only in 2017 has artificial intelligence emerged as one of the technologies that the library panel says is really affecting libraries. We can see that in 2015 machine learning was there, so there's obviously some relationship. I've been on that expert panel all along, and so it's been interesting to participate in the dialogue over time about whether this really is prominent. I can tell you that these concepts were on the table in 2014 and did not emerge. The other thing I think is interesting is the semifinalists: you will see that, among important developments in technology, virtual assistants were a semifinalist. So this idea that Emily will talk about, the bot as our customer, is definitely emerging as well.
Finally, Chris Bourg, who, nice to see you, Chris, after we proposed this session gave a very interesting talk with a provocative title: What happens to libraries and librarians when machines can read all the books? I highly commend it to you for reading after the session, and there are lots of things to think about in it that I think really resonate with what we're going to be talking about. So there are a lot of questions in this arena. When should we deploy bots? How do we design content when the reader of the content might not be a human being but might be a bot or an AI or a machine learning device? I don't even know if I have good vocabulary for this yet. How should our information literacy programs help users develop fluency with conversational and voice-based search and retrieval? I know they promise that you won't need any help, but I've taught a lot of library systems over the years that we were told users would just understand. Is there a fluency to querying Siri and Alexa, and should we be teaching it? What are the ethical and legal implications of deploying these technologies? We've thought a lot about ethics in the last few years with privacy and data and PII, but it only gets bigger when we get to this environment. What are the threats to existing jobs in libraries? When I was at ACRL, I heard somebody reflecting on the New Media Consortium report and describing this sort of bot development as dangerous for libraries, really talking about the displacement of library workers. And then finally, is there a threat to libraries from artificial intelligence competition? Because when you see something like ROSS that goes out and queries databases, eventually, is the database actually what a law firm purchases, or do they purchase ROSS?
And the artificially intelligent bots just come built in, with that machine learning, having read all the content, to harken back to Chris's comments. So there's a lot, I think, to think about here today. And now I have to do this, like, moment of fantastic technology switching to hear from Michael. We need to be on the same page when we talk about conversational UI. And we can start by not conflating it with voice UI, although on a Venn diagram these circles definitely overlap. What we're talking about is this conversational back and forth, this input and output, that we use to guide users through a task. And as an interface, well, an interface is that thing that connects you to whatever it is you want. It's not too dissimilar from other interfaces in the abstract sense. As soon as we could automate stuff, we designed interfaces to communicate instructions, take feedback, and receive and synthesize information using all sorts of gadgets and doohickeys: buttons, levers, cranks, and blinking red lights that, even with the power of the space shuttle in the palm of our hand, we continue to rely on. Not because our devices aren't capable of more. I mean, we embellish these buttons and switches and dials with animations and accelerometers so that our virtual applications respond to the laws of physics, but our buttons still depress and are defined by their box shadows, because these decades-old conventions are just that: conventions. They're familiar, intuitive. Push red button, blow something up. It's easy. You've heard that good UX is good business, and figuring out how to make this stuff easier is an entire discipline unto itself. And a long time ago we discovered that, against the backdrop of menu bars and sidebars and calls to action and other bullshit, conversational tone really went a long way.
We began to instruct our colleagues to write for the web, and it's here where we can begin to visualize a spectrum of conversational UI, beginning with just kind of making our print chatty, tonally appropriate. In fact, the first popular novels were epistolary, these letter forms, because you, the reader, were the intended recipient. You were participating. You weren't an abstract, unfeeling observer from afar. You were part of it. It's this connection that in part describes how the novel began to outsell just about everything else. And really, it's so easy, so natural, so obvious. But once you begin to introduce the possibility of actually instructing the application through chat, typed or spoken, you see that our interfaces, going back and back and back, have always been compensating for our inability to talk to our machines, because they couldn't talk back. Michael Mauldin has the honor of creating the first verbal robot, in 1978, named Pat, to whom he wrote a couple of messages like "I like my friend" and, later, "I like food." And Pat conflated the two, responding, "I have heard that food is your friend." How on point. Today, we have these obligatory precedents, virtual assistants in our pockets. We've got Siri, Google, Cortana. There are the ones that may already be in your house: Alexa, Google again. We have chat bots in Facebook Messenger. Quartz uses chat as a means to deliver and interact with news. And geez, every startup under the sun using something like Intercom or an alternative has some kind of proactive automated chat. What's cool is, I even saw this use case recently of a chat bot embedded periodically throughout a long-form article as a way to suss out more information on a topic or get kind of a backstory from the authors themselves. For as many flavors as these things come in, what we are finding, presuming they work, is that they work better than what came before.
We ask for directions from Siri or Google instead of tapping them in while we drive, because we don't want to die. And we may accidentally order a whole bunch of dollhouses without noticing our mistake, but that's better for Amazon. Using Quartz or something like it, it's easier to be told what's important than to have to actively find it. And all these proactive chats, even if they are kind of annoying: you're more likely to reach out for help and complete your task rather than kind of grumble and give up, and so on. And why? Well, we're literally on speaking terms. You, or the chat prompt, is breaking the ice. You're that much more forgiving, invested. What's more, we can't overlook that the interaction cost is much lower. You're doing less, in less time, and getting a better result. Your experience, even if it's kind of shitty and you're making a complaint, is still better, and that's good for you, and that's good for their business. In libraries, things are still a little nascent. We still have humans on the other end of our virtual chat. Aberystwyth University in Wales has a robot. But now we're actually starting to talk about building chat bots around easy data, like opening hours, as well as the possibility of kind of baking these into complex academic research queries, which is really fascinating. I mean, pretend that you're an English major. You need three or four references for an article you're writing on Anglo-Saxon literature, specifically from Wessex. This is actually a task that I've given to graduate students during usability testing, and it takes about 10 to 15 minutes, more if you're really being discerning. And the bulk of this time is in constructing the actual search query, using it, failing to find results, and going back to reconstruct it, fiddling with the Booleans, the limiters, and the other facets.
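That construction step, choosing Booleans and limiters from what the user actually needs, is mechanical enough that a chat front end could do it once it has collected the answers. Here is a rough sketch; the limiter syntax is invented and not any real discovery system's API, it just illustrates turning conversational answers into a query string.

```python
from typing import Optional

def build_query(topic: str, media: str = "articles",
                peer_reviewed: bool = False,
                region: Optional[str] = None) -> str:
    """Assemble a Boolean search string from answers a chatbot has collected."""
    parts = [f"({topic})"]
    if region:
        # The regional focus becomes an ANDed clause.
        parts.append(f"AND ({region})")
    query = " ".join(parts)
    # Limiters that a discovery layer might expose as facets; the
    # field names here are hypothetical.
    limiters = [f"type:{media}"]
    if peer_reviewed:
        limiters.append("peerreviewed:yes")
    return query + " " + " ".join(limiters)
```

The conversation in Michael's next example is, in effect, a friendlier way of filling in these same parameters one question at a time.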
These advanced search interfaces are prime candidates to replace with a chat. I mean, consider going into the catalogue or onto the library website and instead being prompted with something like this: "Hey, Michael, what are you looking for?" "Oh, I need three or four references for an article I'm writing on Anglo-Saxon literature, specifically in Wessex." "Hmm, let's see. Are you looking just for articles, or can we use some other media?" "Oh, just articles." "Does your professor want these articles to be peer reviewed?" "Oh, yes." "All right, well, here's everything we have that meets the most parameters." Okay, you could really have this interaction in person across a reference desk, but I bet our chatbot could return results faster. Better? Maybe not. Not yet. With my remaining minute, I kind of want to respond to a realization: I've totally glossed over the fact that there is a talking robot somewhere in Wales, and it's really cool. But robots are not the future of libraries. We're not replacing the reference librarian with a robot librarian. No, the trend of conversational UI is that first it is supplementary to buttons and sidebars and menus, then it replaces them. With voice input and maybe linked data, it obviates the need for the interface entirely. Conversational UI is not meant to conjure synthy replicant analogs, or level up a cool kiosk in the lobby, or, like in Mass Effect, give the hologram a voice. Rather, the future of libraries leans toward an indescribably thin layer between you and the services you want performed. It removes barriers. It doesn't add them. Conversational UI predicts no UI whatsoever. Thank you. Now I get to follow up on Michael and that big idea that's coming out, and what's really exciting in thinking about conversational UI is that it's really not just an update to the interface.
As Michael mentioned, we are really looking at the end of interfaces, the end of the way we've always done things, and a new informational experience for us, for our users, and for information in general, one that's dependent on smart technology in a way we just really couldn't imagine before. So I've been doing library website design for a while. I won't say how long, but it's been a while. When I started, it was very much defined by me as the designer. I defined what content was gonna exist on what pages and how people navigated through that content. They could choose what path they took, but I'm really the one, as the designer, deciding what those paths are gonna look like. What are those options gonna be? How are they gonna interact? And then we had the advent of the read-write web, Web 2.0, and that really opened things up, and we started to see that content isn't just one-way. It's gonna be created by lots of different people, and we're gonna build all of that in. And then we had the mobile revolution, and we were looking at how content isn't just gonna look one way; it's gonna adapt to the user and how they're viewing it. And now we're going into an even bigger revolution, where we have no control over the content, and it's not even necessarily being interpreted by users in the way we think of them. So, as a web designer, I've always thought about my users as people. They have emotions, they have needs, they have wants, they have things that annoy them, they have things that they want to see, expectations. And I build up personas and different things like that to get to know them. As I've been learning more about this revolution, I've realized that's not who we need to be designing for anymore. This is who our new user is. And this new user does have wants and needs, but they're very different from those of a human user.
And I'm just gonna, in the brief time, kind of talk about things that I think we should be thinking of when we're designing for bots and conversational UIs. Yeah, sorry about that. First, as library web designers we've always kind of looked at what's gonna be uninteresting. What are people gonna get bored with? Computers don't have that same limitation. They can process all the information and not get bored with it. They're good at doing that complicated logic stuff that's sometimes hard for our human users, where we have to design a lot of ways to make it easier. We're not having to do that in the same way for bots and CUIs. They're not gonna get overloaded by information in the ways we're used to that happening. I put a little asterisk there because we do still need to worry about processing time, but processing time is something very different from the cognitive overload of humans: it might take a machine a long time to get through data, but it's not gonna stop because, whoa, this is too many links for me to look at, I can't even begin. They don't get distracted. They don't watch cat videos. And they don't care if things are visually interesting. These new users are very focused on data and information, which is great news if you're an information professional. Now, there are some real big limitations, at least for now, when we're looking at this kind of interaction. Bots and CUIs are not human, and I know that's obvious, everybody knows that, but there's a lot that we assume when working with human users. There are a lot of inferences, affordances, different types of things that you can assume a human will know because they've had a similar human experience. Bots and CUIs don't have that experience. They can't guess; they can't figure things out. In the end, there has to be some programming that mimics the behavior that comes naturally to us as humans.
Another thing to keep in mind with bots and CUIs, different from how we interact with the technologies we have now, is that because it's a conversation, they're going to want to give an answer rather than a list of results. And a lot of the self-correction that we assume with human behavior comes from giving people a results list as opposed to a single answer. That human ability to discern whether something is relevant or irrelevant happens in the search results. I want to show an example, and this is something I think everybody is thinking about; I know it's come up a couple of times here. Library hours are something we all struggle with. I think about it a lot because my library is called the West Charleston Campus Library. A hundred yards down the road is another library that is the West Charleston Public Library. For a human, discerning the difference between these two libraries is pretty easy. We're physically separate buildings; if you walk into them, the libraries look really different. But the default data we were providing (we've been working on improving it) is really hard for a computer to understand. Now, if I have search results, I can look and see, okay, it got a little confused, but I can work it out in the end and get to the right library's hours and know when things are open. But when I'm working with a bot, like the Google navigation app, it's taken that decision away from me, and it says, hey, you're trying to get somewhere that's closed. Maybe it's talking about the public library; maybe it's talking about my library. We do have different hours. So it might be giving me the wrong information, and I'm not necessarily seeing the backend data that confused the bot.
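One common way to make that distinction explicit for machines is schema.org structured data, which gives a bot an unambiguous name, type, and hours for each building. Here is a sketch of what that markup might look like for the campus library; the hours shown are invented for illustration, and the public library 100 yards away would publish its own, different, record.

```python
import json

# schema.org description of one of the two similarly named libraries.
# A crawler or assistant can parse this instead of guessing from prose.
campus_library = {
    "@context": "https://schema.org",
    "@type": "Library",
    "name": "West Charleston Campus Library",
    "openingHoursSpecification": [{
        "@type": "OpeningHoursSpecification",
        "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        "opens": "08:00",   # illustrative, not the library's real hours
        "closes": "20:00",
    }],
}

# This would be embedded in the page as <script type="application/ld+json">.
print(json.dumps(campus_library, indent=2))
```

With a record like this on each library's site, the navigation bot's "you're trying to get somewhere that's closed" answer at least starts from the right building's hours.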
If we take this a step further, self-driving cars may be making decisions about where to take you based on the information they're getting from our systems. So the biggest thing about this new user group that we need to design for is that we need good data. This is something I think we've been working on for a long time, but it is really the key with these new users. It needs to be well-defined, well-structured data that gets over the ambiguities that are easy for us as humans to figure out but really hard for machines. We also need to optimize this data so it is machine readable: easy for whatever computer comes along to scan it, store it, and process it the way it needs to be processed. And we have to stop blocking bots from coming onto our sites and scraping our data. Just a couple of the challenges for now, because Jason's gonna get more into this: people treat conversational UIs as somebody to talk to, and natural language is very hard in general for computers. Our end users are gonna talk naturally, and by default the machines don't. CUIs have to, because that's the way they're built, but that's a programming step, and we need to make sure that we're doing everything on our end so that these CUIs can have a conversation that accurately reflects the data we're feeding them. Everybody, thanks for joining us for our panel on bots, conversational user interfaces, and virtual assistants. As you've seen through the talks of the others on the panel, this really boils down in many ways to questions about artificial intelligence and machine learning, big data, and how those develop over the next several years. I'm Jason Griffey, and I'm going to talk about the kind of problem space for AI, some of the things that can go wrong, some of the ethical issues that pop up when using AI.
I'm gonna talk a little bit about the things you might not expect AI to be doing, and hopefully we'll have some interesting questions come out of this. We are used to having ethical questions in our daily lives. We're not necessarily used to having ethical questions about the tools we use, and this is because tools are historically passive objects. We have a tool that does exactly what we want it to do. When we have a hammer, we use it to hammer a nail. We set the hammer down on the table. The hammer doesn't then go off by itself and do other things. That is not the case with the tools that are coming over the next several years, AIs and machine learning interfaces. These are best understood as kind of independent agents in and of themselves that can do things we don't expect or even didn't tell them to do. This becomes problematic in a couple of different ways. The first is that these systems aren't just interfaces. They aren't just sensors. When we build out this fully robust kind of internet-of-things, AI-driven world, we're gonna have a situation that Bruce Schneier calls the world robot. This is because this world doesn't just have eyes and ears, right? We're kind of getting used to the idea that things around us are listening and watching. That's an output of the internet of things. But Bruce points out that not only do they have eyes and ears, they also have hands and feet. And increasingly those hands and feet are multi-thousand-pound pieces of metal that can do great deals of harm to biological systems like ourselves. So we need to think very carefully about how much power we give these artificial intelligences, especially when it comes to ethical questions. The classic Philosophy 101 ethical dilemma is the trolley problem, where we have a trolley car barreling down the tracks. It's going to kill a crowd of people, right? Five, six, seven, eight people.
If the person in the picture pulls the lever, the trolley switches to another track and only kills one person. The question is: is that a morally correct thing to do? What is the morally and ethically right decision there? And if you replace the person with an AI, you can kind of see the sorts of problems that AIs are going to have to solve. And if you extend this (this is a very, very simple sort of ethical situation) to much more complicated ones, you can begin to see the sorts of issues we're going to have with bias in technological systems, systems that are designed by us, ultimately, and have our biases built into them. Technology is not neutral. And our systems are going to do things that we don't expect and don't want, things that are there because of our unconscious biases. Like Google, who found that their Photos app sometimes classified African-American, or just dark-skinned, people as gorillas. This is a horrifyingly racist sort of thing to see. It is not anything that was intended by the designers, obviously. This was a thing that happened as a result of the data sets that were given to an AI. People should have known that there might be negative outcomes and should have tested for these sorts of things. This is an ethical problem, and an ethical outcome that could have been avoided. Similarly, something like Microsoft and their Tay bot, a Twitter bot that was learning to talk with people as a result of what they said to it. It took about a day for this Twitter bot to become a racist, sexist, hate-filled, like, Nazi bot. It was incredibly fast, and a great representation of how our AIs and our interfaces to them can become corrupted by negative inputs. Lots of people are very scared about AI, and not just for science-fiction Terminator reasons. People like Elon Musk, the CEO of SpaceX and Tesla, have said things as strong as this: with artificial intelligence, we're summoning the demon.
You know those stories where there's the guy with the pentagram and the holy water and he's like, yeah, I can control this demon? That doesn't always work out. He really does fear that AIs could become, more or less, the Terminator situation: out-of-control, self-improving intelligences that eventually come to regard biological systems as not that important. In the short term, however, for us, for information professionals, the big problem with AI is that it's gonna put a lot of us out of work, once we get better and better at these things. Geoffrey Hinton at the University of Toronto, one of the creators of a lot of the neural net technologies that are intrinsic to these systems, says: take any old classification problem where you have a lot of data, and it's gonna be solved by deep learning. There are gonna be thousands of applications. This is true. If you have enough data about something and an AI that can ingest and crunch through it, the outputs become fairly trivial to get. You get things you might not expect from what we would think of as, you know, a machine. Things like inventive recipes: IBM's Watson AI has been fed thousands and thousands of cookbooks and now produces unique recipes for people to test. We have medical doctors being replaced by AIs in the form of diagnostics, where AIs are far better at diagnosing things like melanomas and carcinomas by visual inspection of photos of skin, because they have pattern-matching powers far better than any individual diagnostician's. We have studies showing that AIs are better by a significant amount than professionals at diagnosing these things. Unless you think that somehow the information professions are immune:
Just a couple of weeks ago, there was an announcement about a new company named Ripcord whose entire job is the ingestion of paper and the output of digital archives: fully scanned, OCR'd, categorized, metadata added, with searchable, findable interfaces. A kind of top-to-bottom AI digitization and archival system for corporations. So this is here. This is a company; they are taking orders right now. More or less anything that involves the manipulation or interpretation of data is going to be solved and done by AIs in the next 10 to 20 years, and we really need to pay attention to this, because it is going to change the way we do our jobs. This is Steve Mnuchin, the current Treasury Secretary of the United States. He said in an interview a couple of weeks ago that he didn't think AI was anything to worry about with respect to American jobs. He says it's maybe 50 or 100 years from now that we need to think about this. When he said this, effectively the entire technology sphere online went completely nutty and said he's crazy and literally doesn't know what he's talking about. Because from our perspective, Marc Andreessen was right in his 2011 Wall Street Journal article where he said that software is eating the world. If a solution to a problem can be instantiated in software, it is going to be instantiated in software. And moreover, when the software can make itself better at solving these problems, we are going to see it happen at a ridiculous rate of speed. So I think we do have to worry. Thanks, that's my very, very brief section, and here's my contact information. I look forward to your questions.
Fantastic. So: questions, comments, arguments with what we've said? Needless to say, I gave them an impossible task when I invited the three of them to present: please tell us everything you think about this topic in eight and a half minutes, because we wanted to make sure to have the kind of discussion time an issue-oriented briefing session is supposed to have. So we do have 15 minutes for you to add your thoughts and your arguments. And we'd be glad to give it a few more minutes if anyone wants to come up and chat. Otherwise, Emily and I will be around for the conference and are really excited to hear your thoughts and continue the conversation. So thanks so much for coming to the session and engaging in this conversation.