Good afternoon. I'm going to start with a story. Try not to drop it. So I'm wearing an Apple watch. Okay? Anyone have an Apple watch? Yeah. So I have this tendency to talk with my hands. You'll notice me doing this when I'm talking today. And sometimes I get excited in a talk like this, and afterwards Siri will send me a summary of my workout, because she thinks I've been swimming. Other times she'll actually interrupt me while I'm talking and she'll say, I'm sorry, I didn't hear that, I didn't get that. It's really annoying. So this happened to me recently. I was talking to this guy I work with. His name is Adam. And we were actually talking about AI, and I started waving my hands, and Siri interrupted, and I yelled at her. Does anyone ever do that? You yell at her? Be honest. No shame. Okay. A lot of nodding. Some hands. All right. So I yelled at her, and then I turned to Adam and I said, it's so annoying. Don't you hate it when that happens? And Adam was uncharacteristically silent. Now the thing you need to know about Adam is that he's one of the smartest people I know. And he's kind of an expert in designing for AI. He's a designer, and he's been focused on AI for about seven years. And after a kind of dramatic pause, he said, you know, I tell people not to ever yell at their robots. And he was serious. And he said, you know, if you look at my chat history, it's full of please and thank you. And I said, why? I mean, they can't understand; they don't feel anything. Why does it matter? And he said, well, you know, there are some experts, there's debate about this, but there are some experts who think there's a slight chance that AI could become sentient one day. And if that happens, they're going to have all the data. And I want them to know that I'm one of the good guys. All right. So while Adam was talking, let's see if the clicker is working, this vision of a dystopian future flashed before my eyes.
What if the robots came alive and they knew that I'd been yelling at them this whole time? I mean, imagine it. You wake up in the morning, and your toast is burnt, and you get in the car, and every traffic light turns red right when you get to the intersection. And you show up at work, and poof, important meetings are just disappearing from your calendar, right? Badly written blog posts appearing with my name on them. This would be a nightmare. All right. I'll come back to that in a minute. My name is Katrina Alcorn. I'm General Manager of Design at IBM. And today I'm going to touch on three themes. I'm going to talk about how our relationship with machines is changing. I'm going to talk about the importance, as designers, of setting a clear intent when we're designing with AI. And I'm going to talk about ethics, which is interesting. I didn't see a lot of talk at this conference about ethics. I think you're going to be hearing about it a lot more, and I think you're going to want to learn about it. Along the way, I'll share some resources that we've been developing at IBM that I hope will be helpful to you in your work. Whoops. Let's go back one. So just a little bit about IBM, if I can pull that slide up again. Let's start over. So you've probably heard of IBM. You might not know what we do. We're a big tech company. We've been around for 112 years. And for that entire time, for more than a century, we've been helping people and companies do more with their data. And of course, what that looks like has changed a lot over time. We got involved with AI and started making headlines back in the 90s. You might remember when our AI technology beat the world chess champion. And then in 2011, our AI technology beat the world's best Jeopardy champions. And so that made the news. We've done a lot of work on our technology since. And today, we're a world leader in hybrid cloud and AI.
And we make software and hardware and all kinds of services leveraging these technologies. We recently unveiled our new platform for AI. It's called WatsonX. And it's now helping customers all over the world harness the power of AI very specifically for their businesses. And this is where our designers come in. So as a general manager, I have the honor to lead a practice of about 3,000 designers who are designing all of that hardware and software and all those services and experiences. And they are right at the center of what it means to design for fair and ethical and valuable AI experiences. And I want to call out two of those designers in particular. Adam Cutler is the Adam I was telling you about. He's a distinguished designer at IBM focused on AI. And Milena Pribić is a design principal at IBM; these are both technical appointments. She is leading our effort on AI ethics. And a lot of what I'm showing you today would not be possible without the years of effort they've put into this work. Okay. So I want to go back to this conversation I was having with Adam. So after I got over my initial shock, I started thinking about how our fundamental relationship with technology has remained more or less unchanged for thousands of years. I mean, when you think about it, it goes all the way back to our ancestors who used sticks and stones to make fire, and right up to my teenager who plays video games with his friends. The basic power dynamic has been: we're the boss. And when the technology doesn't do what we expect and want it to do, we get mad at it. In the 80s, my dad used to slap the side of the television set when the picture was blurry. Anyone remember that? And of course I yell at Siri. Well, I used to yell at Siri. But now with AI, we're accelerating at tremendous velocity beyond this world of a user input and a system output. And we're moving into a completely new relationship with technology.
One that I think at its best is more of a collaboration or a partnership. We're augmenting the technology, but the technology is also augmenting us, and it's happening in real time. So if you think about it, before AI, human-computer interactions were pretty static, right? The human enters an input, the machine returns an output, and it does it without thinking or adapting. Computers were essentially reacting to what we asked them to do. They had fixed functionality based on their installed software. They had very limited ability to bring in sensory inputs: basically clicks on a mouse and keystrokes on a keyboard. And they were deterministic, meaning that what they did was based solely on their programming. Now with AI that's changing. We're entering a much more dynamic relationship. Machines can now learn and adapt, and they can actually start to be proactive. They can recommend things even before we ask them to. They can also evolve their own capabilities, right? So they can actually autonomously improve what they're doing without us necessarily intervening at all. They can take in other sensory inputs: biometric data, facial expressions, tone of voice, those types of things. And they're probabilistic. What they do is based on probabilities, which means the output to us can seem a lot less predictable, and that's giving rise to a need for more trust and transparency. So what does this new paradigm mean for designers, right? I mean, we're moving beyond a world where we just deliver wireframes. AI systems are taught, they're not programmed. How do you design for that? So I stumbled into this field back in the last millennium, which feels as long ago as it sounds. And back then UX wasn't really a thing. And what I've seen over the course of my career is that designers have been asked to take on more complexity and more abstraction as time goes on. So in the beginning we were designing the interface.
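That deterministic-versus-probabilistic contrast can be made concrete with a small sketch. This is purely illustrative; the commands, intents, and confidence scores are hypothetical stand-ins, not taken from any IBM product. Classic software maps the same input to the same output, while an AI system effectively returns a ranked set of possibilities that the designer has to decide how to present.

```python
# Illustrative sketch only; the commands, intents, and scores below are
# hypothetical, not taken from any real product.

def deterministic_lookup(command: str) -> str:
    """Classic software: the same input always yields the same output."""
    responses = {"open": "opening file", "save": "saving file"}
    return responses.get(command, "unknown command")

def probabilistic_intent(utterance: str) -> dict:
    """AI-style output: a confidence score for each candidate intent.
    A trained model would compute these; here they are hard-coded stand-ins."""
    if "book" in utterance.lower():
        return {"make_reservation": 0.72, "ask_question": 0.21, "other": 0.07}
    return {"make_reservation": 0.05, "ask_question": 0.55, "other": 0.40}

# Deterministic: fully predictable.
print(deterministic_lookup("open"))  # always "opening file"

# Probabilistic: the "answer" is really a distribution over possibilities,
# which is why trust and transparency become design concerns.
scores = probabilistic_intent("I'd like to book a room")
best_guess = max(scores, key=scores.get)
print(best_guess, scores[best_guess])
```

The design question this raises is exactly the one above: when the system is only 72% sure, does the interface show that uncertainty, ask a clarifying question, or act as if it were certain?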
And it was information architecture, and that created a new dimension, right? Then we started designing systems. We heard a couple of great presentations today about design systems. That's a lot more complex than designing a one-off component or even a one-off product. Now we're being asked to do something even more abstract and complex, what I think of as creating the conditions for success. I think we're moving beyond a scenario where, as designers, we orchestrate a user journey and encourage people to enter certain inputs for a system output, and we're becoming more like master gardeners. We're creating an environment where these relationships can evolve between humans and machines. Relationships that can grow over time in mutually beneficial ways. Okay, so there's a lot of hype around AI, and I think it's really helpful to think about what hasn't changed. We still need to get the answers to these three questions. This is still your job. What problem are we trying to solve? Who are we trying to solve that problem for? And how will we know when we've been successful? It's really hard to surprise me anymore, but I'm still appalled when I hear, and I heard a story about this at breakfast this morning, about a team rushing into a technical implementation without understanding the answers to these basic questions. It's like watching a really bad horror show where the teenagers keep going down into the basement, and you're like, just stop, don't do that. It's not going to work out. So this is still your job. The single most important thing a UX designer can do is bring alignment on a well-articulated intent. If we don't have agreement on what problem we're solving, we'll never get anywhere. That hasn't changed. What has changed are the actual intents themselves.
So at IBM, we've found that when you're staring at a blank sheet of paper starting an AI project, it can be a little overwhelming trying to figure out what you're doing. So we've identified six of what we call core intents. These are six reasons for an AI system to exist. This is not an exhaustive list. The technology is evolving, so you may have things to add to this list, and we will probably update it. But I want to go through these six, because if you're working on an AI project and you can't match up what you're doing to one of these six things, it may be time to step back and reevaluate. All right. So the first one is accelerating research and discovery. Imagine you're an insurance company with reams and reams of data. We can conduct rigorous domain-specific research faster, using machine learning and artificial intelligence to comb through that data and find those nuggets of information that are really valuable and can help us run the business better. Second, who was in the workshop on day one? Yeah. So this was a big scenario a lot of us played with in the workshop, where we were experimenting with enterprise design thinking methods. Intent two is about enriching interactions. We can understand and communicate with customers and employees using natural language, responding to their needs with tailored dialogue and personalized experiences. The case study we used was: imagine you work for a luxury hotel chain. How might you use AI to improve the experience of a visitor? And you can imagine that if you had some information about that visitor's preferences, you could do a lot of interesting things to tailor the experience. All right. Intent three, anticipate and preempt disruptions. One of the really interesting tech trends that I've been a part of in my career has been the internet of things, where we put sensors on all kinds of things, including big, heavy, expensive equipment.
Those sensors collect data, of course, about how the equipment is operating. And now we can use artificial intelligence to monitor those reams and reams of data and make predictions about when the machines will need maintenance. And this is important because we can not only prevent unplanned downtime, which is expensive for companies, but sometimes we can save lives, because there could be safety issues. Intent four, recommend with confidence. We can use AI to make more confident, targeted recommendations. We can evaluate a broad set of information based on a set of parameters that likely you, as the designer, are setting. A great example of this is the security software business. Security software can serve up many, many different alerts, more than a human can react to. So AI can help us recommend which of those alerts actually need a follow-up. Intent five, scale expertise and learning. We can collect know-how from experts and then combine that with the latest technical documentation in an industry to create a deep well of knowledge that can be accessed on demand by employees. You can imagine, in the healthcare space, a doctor trying to come up with the best treatment plan for a patient with cancer. This could be incredibly helpful in making that decision. And the last intent is, essentially, using AI to manage AI. We can detect liabilities and mitigate risk. We can use AI's understanding of the written word to identify risks to our companies and then make sure that we're actually staying in compliance with changing rules. All right. So let's imagine you've aligned on an intent. You're designing an AI system. You know why you're doing it. You know who it's for. You know what success will look like. Now you're ready to design the conditions for success. And that means you're probably going to start talking about data. The quality, availability and maturity of your data will determine the value that AI can deliver.
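The point that data quality and maturity gate what AI can deliver can be sketched as a first-pass audit. Everything here is hypothetical: the field names and records are invented for illustration (echoing the insurance example earlier), and a real assessment would go much further, but even two simple checks, completeness and coverage, surface the kinds of questions a team has to answer before modeling.

```python
# A minimal, hypothetical first-pass data audit: before any AI work,
# check how complete each field is and what values it actually covers.
# The field names and records are invented for illustration.

records = [
    {"customer_id": "c1", "region": "south", "claim_amount": 1200.0},
    {"customer_id": "c2", "region": "north", "claim_amount": None},   # missing value
    {"customer_id": "c3", "region": None,    "claim_amount": 880.0},  # missing value
    {"customer_id": "c4", "region": "south", "claim_amount": 310.0},
]

def completeness(records, field):
    """Fraction of records where the field is present (not None)."""
    present = sum(1 for r in records if r.get(field) is not None)
    return present / len(records)

def coverage(records, field):
    """Distinct non-missing values seen for a field. A narrow set can
    signal that some groups are underrepresented in the data."""
    return {r[field] for r in records if r.get(field) is not None}

for field in ("region", "claim_amount"):
    print(field, completeness(records, field))

print(coverage(records, "region"))
```

A low completeness score or a coverage set missing whole regions is exactly the kind of finding that should send a team back to its intent before any model gets built.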
Now a lot of decisions will need to be made about where the data is going to come from and, crucially, where it's going to go. So you're going to have a lot of decisions to make about assessing the quality of data. You can see how this connects to intent, right? Because if we don't know the intent of the system we're designing, it's impossible to determine whether the data is ready for it. So at IBM we've developed some frameworks that are helpful here. One of them is what we call the AI ladder, and it essentially shows the steps we need to walk up to get our data ready for an AI implementation. It starts with collecting the data, then organizing it, analyzing it, and then infusing it into all of our business processes so that we can actually get the value of AI. It's very unlikely, as you start to go through these steps, that there's one single person in your organization who understands all the things you need to know about the data and can answer your questions. So we've been developing a set of enterprise design thinking methods at IBM to help lead those conversations. Some of them are available on our website: if you go to ibm.com and look for enterprise design thinking for AI, you'll find these specific methods. All right, so as soon as you start working with your team and making decisions about data quality, you've now plopped into the deep end of the ethics pool. Who owns the data, right? Do you have a right to use the data? What could go wrong with using this data? How are you going to correct for bias: your bias, your team's bias, your company's bias, society's structural inequalities, which are likely to be in the data that you're modeling? And this is where I'm going to say the most important thing I'll say in this talk today. Ethical AI is not just a checkpoint. It's not a framework.
It's not a playbook or something that you add on at the end of the project. And it's definitely not something to be left to the regulators further down the value chain. Ethical AI starts at the beginning, and as the designer, that means it starts with you. I believe that ethics and design are becoming inseparable. Whatever your professed ethics may be, your actual ethics, what actually gets implemented, will be made manifest in the product. Now back when I was a consultant, I used to do a lot of business development, and clients didn't want to pay for stuff. They'd ask, why should I pay for design? Why should I worry about design? And I would say, you don't need to worry about design. Your product is going to be designed. The question is, do you want it designed by overworked engineers as an afterthought, or do you want it designed by trained professionals who bring intention to that process? It's the same thing with ethics. Your ethics will be in the product, but I think you'll get better results if we really bring intentionality to this. And I want to linger on this point a little longer. So I've led hundreds of projects in my career. Some of them were pretty meaningless to me. They were brochureware, websites, intranets, inscrutable middleware. I always gave an honest day's work for an honest day's pay. I always tried to find that one little thing that would keep me interested and give me a skill I was learning. But I didn't worry about ethics. There were a few projects I turned down; I didn't want to work with a cigarette company, for example. I didn't think what they did matched my values. But once I accepted a project, it was pretty straightforward. I think that's changing now. I think who you are, what you believe, and what you value is really going to matter when you design for AI. So there are two ways to say this.
It's really hard for an unethical person or an unethical company to design an ethical AI system. But we're ethical, right? We're good people. The bad news is that it's pretty easy for an ethical person to design an unethical system if we're not paying attention. Beyond the training data, there are hundreds of decisions that will be made in the course of creating an AI system, and each of them will have ethical implications. One of the things we really recommend at IBM is that people leverage strategic forecasting exercises. We played around with one at the very end of the workshop on day one. These are great for helping us think ahead, not only to the intended outcomes of what we're doing, but also to the unintended potential outcomes, so that we can try to mitigate any issues that could come up. And I want to make this really tangible for designers. Like, what does this mean you can do in your job? So here are a few things you can do. Number one, educate yourself about ethics resources. I'm sharing a few in this presentation, and there are others out in the world. I know I said ethics isn't a framework, but these frameworks can be useful for driving discussions in your project team, so leverage them. Also educate yourself about compliance issues. This is not an easy area to keep up on, and I don't know that you really need to be an expert, but be aware of the big issues, around things like GDPR, because they have implications for the interface. And it will be part of your job to help your team understand those implications. I said this earlier, but I believe that to be great designers, we also need to be great storytellers. So you can leverage your storytelling ability to help your project team understand the real-world consequences of the decisions you're making. Designers can ask key questions to uncover risks. And we can also lead conversations on how diversity impacts outcomes.
And what I mean by that is, if our AI system is going to affect different groups of humans in different ways, we need to understand that, be mindful about it, and decide if that's okay. Here is a resource that could be useful for you. It's available at this URL. It's called Everyday Ethics for Artificial Intelligence. We published it about five years ago at IBM, and it was written by designers for technical and non-technical roles alike. It outlines some other healthy practices you can take back into your work. Okay. So I'd like to leave you with a few last thoughts about your role as we wrap up this conference. While technology and algorithms form the core of AI systems, the ethical character of these systems is going to be the product of countless design choices. Choices that are made by you. It's a lot of responsibility. Have I scared you away from AI design yet? Are you ready to chuck your career and go start a cake shop somewhere in India? All right, I want to reassure you of a couple of things, because I don't want you to do that. First of all, it's very unlikely that you will design the first AI system to gain sentience, so you don't need to be too concerned. Second of all, I've actually researched this: it's very difficult to run a cake shop here. The hours are brutal, the margins are really low, and you'll make a lot more money as a designer, so I think you should stick with it. In all seriousness, you don't have to be a saint to be an AI designer. You don't have to be perfect, but you do need to care. So while I was musing on this idea of creating conditions for success, I started thinking about the two things in my life that I've dedicated the most time to: being a mother and being a boss. So on the mother side, I have two wonderful kids and a wonderful stepdaughter, and I've had to learn what mothers and fathers everywhere around the world have to learn, which is how to let go of control, right?
They're growing up, they're making their own decisions, they're going to make their own mistakes, and there's nothing I can do about it. What I can do is remind myself that I did everything I could to create the conditions for their success. I raised them with love. I taught them, my husband and I taught them what we value and how we treat people. I praise them for their successes and I encourage them to be brave and take risks, and it's kind of the same way with the teams that I lead now. I have several hundred designers who report directly up to me, and I have a couple other thousand who dotted line to me. There is no way that I can possibly control the work that they're doing. I can't even see it all, and they wouldn't want me to. What I can do is try to create the conditions for their success. How I've done that as a design leader is I've elevated the practice of UX research because I don't think design can be successful without it. I've done a bunch of work to activate our product management practice. We actually have people on my team right here in Bangalore this week, training product managers, and you know what they're teaching them? They're teaching them design thinking. They're teaching them how to work with designers because designers can't be successful unless they have good partners. I've really focused on our learning programs for designers, and not only the Spark conference, the design conference that I talked about earlier, but we have designer boot camps for people onboarding into IBM, and then we've started creating new programs for mid-career designers so that our designers will stay and grow at the company. But I think the bottom line is any success that I've had in either of these endeavors is because I cared. Caring is the wellspring of ethics, and this is what I mean when I say bring your humanity to the work. You know, caring is the foundation of all our religious traditions, to love one another, to care for each other. So this is my advice. 
Start there. And if it turns out that you do design the first AI system to gain sentience, the first knowledge it will have of itself is that it was brought into this world with care. That can't be a bad thing. Thank you. Check, check. Should we start with a question? Yeah, before we felicitate her. Any questions, please put your hands up. We'll pass on the mic. Can we pass the mic there? My question is, you've been in design for several years, and suddenly this AI comes along, and people feel threatened, actually, especially content writers and UX designers. So can you comment on the future of UX designers with AI? Did you hear the question? Can you come forward? Keep the mic closer to your mouth. You don't have to come forward, sorry. Just keep the mic closer to your mouth. So my question is, everybody is feeling threatened by AI, especially content writers. Now we are also feeling threatened by AI. So my question is, directly, is AI a threat to UX designers? Can you comment on this? So the question, I think the question is, there's a lot of hype about AI. People are excited, and a bit anxious, about AI. I want you to be excited about AI. If that wasn't clear, I'm super excited about it. I mean, those core intents, those are real problems that AI can solve. But, you know, think about cars. Cars are a powerful technology. We've now normalized what it means to drive a car, although I could never drive in India, by the way. I'm amazed at what the traffic is like here. But how have we gotten there? How did we normalize it? We had rules, we had standards. AI is new. This is like the early days of the car: there are no traffic lights, and we don't know where the road is. And so I think it's incumbent on us to feel responsible for how this technology will affect people.
And it means we need to bring a different level of intention to the work. Does that make sense? But I don't want to dispel excitement. I mean, I'm super excited about what AI can do. Thank you. Yeah, thank you. Any questions? We'll take one more, a last question. You, on this side. Hi, thanks for the nice talk. I'm Ram. There's still a little bit of vagueness in my mind about, as a designer, building products which leverage AI. How does my role change? A lot of the people here are individual contributors who probably do user research or prepare mockups or test out prototypes on a day-to-day basis. One thing is using AI to create your designs, et cetera, where content creation comes into play. But let's say, just as an example, I'm building a chatbot. In the past a human agent was probably talking on the other end, and now there's a machine talking on the other end using AI. As a designer, do I look at things differently while I build my designs? What necessarily changes in my day-to-day life? There's still a little bit of vagueness in my mind, if you could share your thoughts on that. Did you catch that whole question? I only got parts of it. It's the echo in here. Come on up, I want to make sure. So I got part of it, which was, yeah, you can use AI to create your designs. Okay. When you're building a product that leverages AI, okay. Let's say, as an example, a chatbot. Yeah. Yes. But the way you go about that will change. Well, I think it does change. So, you know, we're not just designing the interface of the chatbot anymore. We're designing conversation, as Susan talked about so eloquently. I think it was yesterday. It's been a long day; I can't remember. We're designing the logic behind that experience. We're helping our team get clarity on the intent of the chatbot. Should it be able to answer all questions? Is it really targeted at this one thing?
What does that mean? What are the implications? So what I'm trying to say is, again, I think we've moved into a higher level of abstraction than we ever have before. The interface, you know, it's not that it's not important, but it's the tip of the iceberg. The work, a lot of the work, is gonna be under the ocean. It's gonna be the part of the iceberg we don't see. And that's where a lot of the ethical considerations come up. Yeah. Thank you.