Nice to be here again. Bruno, I think you need a new tagline next year. UX Lisbon will fill your mind and your tummy. Bruno does a great job on the event and the food and everything else, so it's always a pleasure to be here. So a few days ago on Twitter, somebody asked me a question that I get a lot. She said, how do I help my organization to value design more? Fair question. Probably all of us have asked it once or twice or 1,000 times. I've been hearing the same question in some form or other for, I don't know, 20-plus years now. I'm sure I've asked it myself a few times. Seems like a fair question, but here's the thing. This question assumes that we are actually the important thing. Whether a company delivers great user experience or not, is it really about whether they value design? I don't think it is. I think, actually, it's much more about whether they value humans at all. We're asking the wrong question. It's about how do we help our organizations value people? Organizations that are small and don't have any design teams can deliver fantastic experiences. I know plenty of organizations with enormous design teams that will never deliver a great user experience, because they don't value people. Let me illustrate with an example. So I live in San Francisco. And that means that I fly United Airlines a whole lot. It's one of their hubs. Now, some of you might know that United is the airline that we Americans kind of love to hate. United knows this, too. And so about five years ago, they revived a 30-year-old brand campaign, Fly the Friendly Skies. And all the frequent flyers kind of laughed under our breath. And the press did the same thing. They said, oh, yeah, this is an aspirational goal, all right, because United is universally known in the US at least, as pretty bad on the customer experience front. So United, though, was sincere in their efforts. They started investing in user experience much more. They brought back snacks. 
Maybe they figured we coach passengers would be less hostile if our blood sugar wasn't crashing. So they started to spend a bit more money on feeding us. And they took their website, which used to be sort of a typographer's nightmare. And they said, let's put a better front end on this. And they upgraded it some, to the Windows 10 aesthetic, if you like. Anyway, it's a bit cleaner. It's a little nicer to use. And I know they've actually hired some terrific designers, and there are good folks on this team fighting the good fight. But as a user, I still kind of hate this website, I have to say. And there's a lot of reasons for that. One is that they think personalization means I can change which airplane I see in the background. Another is the ads. They know who I am. Welcome back, Kimberly. And yet they keep advertising the dumb credit card to me. I have the credit card, guys. I'm sold already. We're good. So they keep trying to sell me a credit card that I don't need. But the worst thing, actually, is not even the ads. It's the bugs. Oh, man, this website is the buggiest thing. It's super slow. It crashes on me. Sometimes it won't let me book a premium seat. And by golly, I'm entitled to my premium seat. I want that. And sometimes it'll suggest itineraries that don't make sense. Like your second flight leaves before your first flight arrives, or something like that. So there's some goofy stuff that happens sometimes. My friend Steve Portigal sent me the best itinerary, though. United suggested this one to him. Cargo truck. Are the snacks free in cargo class? I'm not really sure. So I feel sorry for this web team, actually. They're trying to paint over a really nasty back end. And they're dealing with legacy systems that are obviously kind of problematic. It's going to be hard for them to deliver a great experience, no matter how good it looks, because there's a lot of other stuff going on. Now, United's problems go beyond their web technologies, though. 
You probably heard about this, even in Europe. About a year ago, United dragged, literally dragged, this guy kicking and screaming off of a plane. They had oversold the flight, and they were trying to get their flight crew from one city to another. So they actually booted passengers off the plane. And he said, no, I'm sitting here in my seat on the plane. You're going to have to drag me off. And they said, OK, we'll call the cops. We'll do that. Clearly, they care, right? Recently, United has been notorious for killing pets. They stuffed somebody's puppy in the overhead bin, and it suffocated. But it's OK, because United cares about every ounce of the user experience. Copywriting fixes everything. Sorry, if you're killing people's puppies, I don't care what you print on your cups. You're not going to deliver a great user experience unless you actually value the people. So they have a lot of work to do. And this is not about the pixels, right? A user experience is not made of copywriting or pixels or CSS or any of the rest of it. A user experience consists of everything about a product. It's the reliability. It's the performance. It's your pricing and your revenue model. It's the way you handle privacy and security. All of that is user experience. It's not just that front end. A user experience is actually built of decisions. Hundreds and thousands of decisions made by almost every employee every day. There's no way that we as design teams can make all of those decisions. Lawyers are making design decisions with their usage terms. Engineers are making design decisions when they decide about performance. All of those are decisions. Customer service reps are creating user experience when they decide whether to be helpful that day or go off to lunch a little early. So how do we actually enable an organization to consistently make the decisions that deliver great user experience? Even within a design team, we're always making decisions that are trade-offs. 
Lots of those decisions are trade-offs between things that we would all agree are good. But how do you as a design team decide, well, are we going to focus on performance? Are we going to focus on, say, a really great animation that's kind of fun and playful? There's a trade-off to make there. Sometimes you're making trade-offs between a lot of different things. Net promoter, are we going to focus on that? Are we going to focus on this metric or that one? Are we going to focus on conversion or long-term retention? Which of those is more important? They're both important. How do you choose? There's always a tension. There's always a trade-off in design. So how do you make those decisions? Well, Roy Disney said something very wise once. He said, when your values are clear to you, making decisions becomes easier. When your whole team and your whole company share the same values, you have the criteria to make your decisions. You don't have to micromanage every single one of those because everybody knows what's most important. I had a colleague in grad school who used to work at Walt Disney World. And he told me about the way they trained their employees and their whole culture around customer experience. He said, when you work at Disney World, you are told that you're always on stage. In fact, you're not called an employee or a team member. You're called a cast member, because the entire Disney experience is a production. They want you to see it as a performance. You're performing customer experience. And you're not wearing a uniform. You're wearing a costume. Whether you are shaking hands as Cinderella and posing for photo ops with little girls, or you're sweeping up all the stuff people leave behind, you're wearing a costume. And all the employees know that this is the job. There's a book called The Disney Way, which is a good airplane read; you can kind of skim through it in an hour or so. 
The authors share a story in there where a bunch of business folks are at a Disney resort for some sort of conference. They check into the hotel, and then they get on the bus to go somewhere else in the park. And the bus driver says, hey, I see you're staying at one of our Disney hotels. How are your rooms? And one guy says, oh, shoot. The faucet in my room is dripping. And I forgot to tell the front desk about that. So the bus driver says, what is your room number? I'll make sure they fix it for you. The bus driver. Now how many of us take that much responsibility for a task that lives in somebody else's silo? When was the last time you offered to fix a tech support problem for somebody? Yeah, probably never. I wouldn't either, because I wouldn't know how to do it, but the bus driver is taking responsibility for the whole experience, because they've been told that that's the priority. That's what they're trying to deliver. That's the shared value among the employees. Now there's a risk here. If we say our focus is on the customer experience, what we're really saying is we value customers because they give us revenue. We value their revenue potential. And that's what I hear a lot in user experience circles. Now, earlier we heard about the importance of speaking CEO, and yes, very true. CEOs think in terms of things like revenue, and we need to speak that language. But we need to value humans as something other than revenue sources. What I see a lot is a focus on us as designers being metrics-aware and using metrics in our work. Can we use business metrics to figure out if we're doing the right things, if we're actually affecting the business? Yay, that is all good. That's important. But if we move from metrics-aware design to metrics-centered design, which I see a lot of teams doing, that's problematic. Now what happens if we spend all of our efforts focused on, say, engagement? Some unfortunate things happen when that's all we think about. 
Facebook, of course, is the easy poster child for this, because they're very focused on engagement. And they have some complex problems to solve. They have some good intentions. But their business model is inherently challenging. They're not necessarily incented to think about things other than engagement, because that's the model. Now what happens? Well, Facebook says, hey, we care about you. The copywriters fix everything. They gloss over all the bad things. So lots of products are facing issues like this. Social media is the obvious one. But even your Roomba, you buy something to vacuum your house, and it's mapping your house. And is that OK? We have some difficult issues as an industry, I think. If you are focused on metrics and you're focused on the business, are you really doing human-centered design? I would argue no. I think that if we're going to actually be human-centered, we need to focus on human needs. And as humans, we need things like food and shelter and heat. We need things like safety, physical and psychological safety. We need a sense of belonging and connection to other people. We need a sense of self-esteem. If we're spending our time on Instagram and thinking, oh, all these people's lives are better than mine, is that actually helping us live better lives? Now, that all seems big and abstract, and it's not that close to that engagement metric you're trying to move. But here's the controversial thing I'm going to say. If your user experience undermines any of these things in Maslow's hierarchy, it is a net negative for the world. That is human-centered design, centered on human needs. That's a tough story to tell in an organization sometimes. But I think it's what we as a profession need to fight for. Now, good intentions aren't enough. I think we're all out there trying to do that. And lots of companies mean well. Even Mark Zuckerberg doesn't intend to undermine democracy with his product. Nobody's trying to do bad things. 
I think part of the challenge is we're all making it up as we go along. We don't have shared standards or guidelines for what's acceptable. How do we know it's OK? Facebook did an interventional psychological study a while back, 2012, I think it was. They started actually manipulating what was in people's news feeds and measuring the impact on mood. Unconsented. Is that OK? Uber uses a lot of gamification and psychological manipulation in order to manipulate the availability of drivers. Lots of people think that's creepy. Your boss probably offers bonuses for certain behavior. Is that creepy? Where's the line? What makes it creepy or not creepy? How do we know? Apple showed me this message a little while back. They said, hey, do you want to submit this voicemail to improve our transcription? I thought, yay, Apple, you're asking. That's good. But how would the person on the other side of that voicemail feel about me sharing the content of it with Apple when they haven't consented? Is that OK? Session replay tools are another interesting one. I'm sure some of you probably use these. If you don't know, a session replay tool lets you look at an individual browsing session and actually watch what somebody's doing, potentially revealing identifiable information, even security and financial information. Is that OK? Is it like going into a store, or is it more like being watched in the dressing room? Where's the boundary? So GDPR takes effect tomorrow. This is actually going to give us a starting point for what's acceptable and what's not. But I did a very scientific poll on Twitter yesterday. I said, how much of a difference do we all think this is going to make? Most of us, I think, believe it'll make some difference, but it's not going to solve the whole problem. I want to know who that 1% is. I don't think any of us really believe that. But companies are going to have to follow some aspect of this, because the 800-pound gorilla that is the EU says they have to. 
But unless you change the values, the underlying behavior isn't really going to change. Lots of companies are going to find workarounds, and they're already doing so. It's not really going to make a difference. I don't think GDPR is going to fix everything. It's a great start. But I think there are places we can look beyond that regulation. And one of those is the Nuremberg Code. Now, anybody heard of this? Hardly anybody? So back in the 1940s, among the many other horrible things that they did, the Nazis experimented on humans. And Nazi doctors said, well, we are learning things about human health. Therefore, the ends justify the means. And in the Nuremberg trials, all of the stories of what they did came out. And the medical and scientific professions said, holy crap, we need some shared ethical guidelines to prevent this from happening again. Now, I work with a lot of medical researchers. And the Nuremberg Code is deeply culturally ingrained. It's enforced partly through regulation and partly through very strong cultural norms in the medical research world. If you do something that violates these guidelines, it might be hard for you to find work, because people are just appalled by that. So the Nuremberg Code is a set of, I think it's 10 principles, that are enforced essentially by institutional review boards. Anytime you're doing any sort of medical study, you submit your protocol to a group of people that typically includes patients. And they look at these 10 principles and decide, is this or is this not OK? And people have a pretty good shared understanding of what those things mean. Now, working with medical researchers has actually made me view what we do in a little bit of a different light. If you think about it, internet-enabled technology, the web and all the internet of things, and everything that collects data about us and tries to change our behavior, is actually the biggest interventional study ever. And it's mostly unconsented. 
Most people have no idea how much data is collected about them, and how that's combined with other data. And a lot of people have no idea the ways in which products are trying to manipulate us. And that's kind of squishy, isn't it? So what do we do? Well, if you do any sort of human subjects research, some of the things an institutional review board will ask you are these. Is this actually a benefit? Will the participants benefit? Will it benefit society in some larger way? Or is it just a minor thing that your company is trying to benefit from that isn't actually going to be good for the world? Is this the only way to accomplish this goal? Do we actually have to study human beings and do interventional things with humans to understand this? Or can we get that accomplished some other way, some lower-risk way, some lower-harm way? Is the risk proportional to the benefit? For example, are we trying to cure cancer here? Or are we just trying to improve toe fungus? There's a big difference in how much risk we should take, given the size of the benefit. What kinds of harm are possible? What are the bad things that might happen? And this is one of the issues that you see in product teams. When they don't have a diverse team, for example, they don't realize the ways in which people might be discriminated against or harassed, because they don't actually have that experience in life. So what are the risks that are possible to our users? And how can we minimize the harm? An IRB will ask you, do you have the resources and do you have a plan for mitigating that harm? Imagine if we had to do this for products. How would that be different? And of course, one of the most important aspects of that institutional review board process is consent that is truly voluntary. Now, we have consent rubber-stamped in a lot of products, right? Pages and pages of terms and conditions. I saw a great visualization the other day of terms and conditions actually rolled out on a sheet of paper. 
And they're like 20 miles long for some of these things. In medical research, consent actually has to be informed. And people have to understand what they're agreeing to. There's no default checkbox that says, oh, yeah, I have read and understood these things. You actually have to go through them with people. A researcher will walk you through the risks and the benefits of the research. So why don't we ask these questions? It's hard. It's hard to do this in our organizations, but imagine the impact if we did. Now some of you are sitting here thinking, oh, man, Kim, can you just tell me how to improve my CSS or something easy? Because this values stuff, man, it's hard. And we feel helpless. What was the phrase Jamie used? Wireframe monkey. We're code monkeys and wireframe monkeys. And how can we affect all this values stuff? And there are days when I feel pretty helpless about that, and I feel overwhelmed by it, and I think we won't make a difference. But the thing is, even if we are monkeys, even if we can't drive the whole motorcycle by ourselves, this little guy can't reach the gas pedal, but he can nudge the handlebar. He can move that motorcycle just a little bit in a different direction. Now imagine this thing swarming with monkeys, and we're all trying to move the motorcycle in a different direction. We're gonna be able to affect the direction it goes. I am confident in that. So I'd like to share with you five things that you actually can do. And they range from pretty easy to, yeah, this is gonna take you a while, and you're gonna wonder why you're doing it. The first one is super easy, you can do it tomorrow. And it's just changing how you think. Think of yourself as the Disney bus driver. Don't just drive the bus. Think about how you can take some responsibility for the whole experience. And think about the ways in which it affects other aspects of people's lives. Think about the performance. 
Think about the impact of the business model and start those conversations on your team. Think about how you can do more than just worry about the pixels and the front end of things. That's pretty easy, you can do it tomorrow. It's just deciding to do it. Now the second thing is also relatively easy. It takes a bit more thought, and that's leveraging what the organization already values. Every organization has some things that they care about. Sometimes they talk about them. Sometimes those are sort of covert things that they care about. Identify what those are and think about how you can hook what you wanna do to what people are already focused on. Now I have a client who works in the energy industry and as you can imagine, when you're dealing with big industrial processes and so on, safety is a big focus for them. Probably their number one cultural value is safety, so much so that when I visit their campus I'll get scolded for not using the handrail on the stairs. They actually start meetings with briefings about where the emergency exits are. Like, safety is a big thing here. So my client was trying to figure out, how do I get these people to invest in making their enterprise software not suck? Kind of a hard sell. And so he realized, oh, ergonomics. Repetitive stress injuries do harm. Bad software is unsafe. It sounds goofy, it sounds like a stretch to most of us, but he hitched user experience to ergonomics and safety and boom, now he's got all the budget he needs. Okay, so what works is gonna be different in your environment depending on what people already value, but identify those things and see if you can make a connection to them. Now, number three starts to get a little bit harder. And this is where you start to have some difficult conversations. Anytime we build something, we all go into it with some assumptions about what matters. And we all go into it with some requirements. And sometimes we need to help a team shift what those are. 
So one of those assumptions, monkeys again, is we need all the data. Let's collect everything. We need your location. We need your email address. You sign in for Wi-Fi at the airport. Somebody thinks they need your email address and your age and access to your photos and, why not, your shoe size just for fun. Why do we need all that data? What are we gonna do with it? So the first question I ask is, what are we gonna do with all that data? Is it a benefit? What's gonna come out of that? Well, how else would Facebook know to show me German-language ads for long-haired sheep? They have 10 years of data from me and this is the best they can do. You know, if they just asked me what I was interested in, they could probably get a little closer to meaningful advertisements. But as it is, yeah, dig into your Facebook ad tracking settings sometime. It's pretty entertaining. So we have to have this Nuremberg Code conversation with our teams. How will this benefit users if we're tracking this data? Are we prepared to manage the risks of it? For example, I've had conversations where people want to collect people's social security numbers, which is a government ID number in the US that actually unlocks a lot of identity theft issues. Do you really need that for your medical research? No, because you're not prepared to handle the risk of that information getting exposed. How are people going to consent? Can we be transparent about that? So we have to have those difficult conversations. Now, moving up the difficulty spectrum. Sometimes we have to start being really explicit about the values, because if we just say, oh, we value safety, that's a little abstract for people. That's hard to act on. At the execution level, it's hard to identify what that really means in terms of what we have to change. So we have to help teams figure out how to act on those values, how to measure for those values. 
Because if you're not aimed at optimizing the right thing, you'll swirl a lot, because we're all acting from our individual values. Maybe I value accessibility and you value performance, and there's a trade-off there for something, and so we don't know which direction we should go. So let's clarify that conversation. Now if you say, guys, I wanna spend some time on our values, they're gonna look at you like, aren't you a designer? Can you just go push some pixels or something? If instead you say, let's talk about our design principles, guess what, those are values in disguise. It's kind of like hiding the spinach inside the food that people actually like better. So let's talk about our design principles. So, PatientsLikeMe. I started working with these guys as a client some time ago. And one of the things that they said when I first started working with them was, our value is to put patients first, but we don't always execute on it. Why is that? Now, the company started when the founders, Jamie and Ben Heywood, learned that their brother Steven had Lou Gehrig's disease, ALS, which is a terrible degenerative nerve disease. Typically people lose function and stop being able to eat and breathe within about five years, and there's no cure. And so PatientsLikeMe started to help people like Steven share what they were doing, learn from it and create medical evidence from their patient experience. So the values of the company are inherently patient-centered. They're incredibly sincere in this. And yet the user experience was kind of lousy. So what happened was we identified that there were hidden values interfering with this core stated value. And we all have hidden values that affect what we do. So you might say, I value exercising, and yet I don't have time to exercise. Yeah, actually what's happening is you're spending too much time watching Eurovision and you value that more than exercise. It's not that you can't, you just aren't being clear about what your values are. 
We get this all the time when people say we can't afford the time or money to do user research. That's not a fact. That's actually a value statement. Because you spend time and money on other things, you're saying user research is less important than those other things. That's what's actually being said. So in the case of PatientsLikeMe, the hidden value was about getting the data. Because it's a medical research platform, there was lots of focus on tell us all the things. And that was actually interfering with the user experience in some cases. And so what we did was we clarified that these are two things that are in tension. And we said getting the data is an explicit goal, but not at the expense of putting patients first and helping them get better. So one of the things that we did was we dragged people out into the field and did some user research. And we then developed a bunch of aspirational principles and we said these are all things that we need to do better at. Let's use these to guide our decision-making. So for example, one of those is let me feel in control as a user. And this is important because when you're sick, nothing feels less like control. Everybody else is in charge of what's going on with your life and your health, right? And you feel helpless. So what does that mean in terms of design decisions? Well, for example, one of the things that happens is if you're walking through a questionnaire, you might be getting asked a lot of things that don't feel relevant to you. And because we're trying to track people's status over time, you could get asked those same irrelevant things every month. And you're like, okay, this doesn't matter to me, stop it. So one of the changes was every time we ask you a question, you now have the option to say stop asking me this. This puts users in control. Now, when we introduced this, the science team said, are you freaking kidding me? You're gonna give me inconsistent data, what are you thinking? 
And we said, yep, we're gonna give you inconsistent data and you're gonna have to work harder to do the science. Because if you don't act on it when it's inconvenient, it's not a value. Values are things that we stick to when it sucks to stick to them. That's what it means to have a principle. And that's hard; a lot of organizations don't have the will to do that. So unless you're actually willing to adhere to it when it costs you a bit of time or money, it's not a value and you have some work to do. So the hardest thing that we sometimes have to do is work to change those values. Because fundamentally, if the organization doesn't value humans, there's not a lot we're gonna be able to do. We can put a fine gloss on that user experience, but we're gonna be United Airlines. Yes, as long as we're killing puppies, all the nice coffee cups in the world don't matter. But changing values is really hard. It's glacially slow. If you have an executive mandate, if the CEO says, yeah, we're not good at this, we need to be different, it's gonna take you at least two or three years, assuming you have a good change process in place and you have that strong executive backing. Now how many of us actually start with that? Not too many. Most of us actually have to get executive buy-in in order to make that happen. And so change is a much slower process and it's very difficult. It is possible, I've seen it happen. I've been part of it happening. It takes a long time, it's a very slow process. So managing change in an organization is itself pretty tricky. It's worth a whole workshop by itself. I'm gonna follow Jamie's lead here and share some books with you. These things are worth a read. The John Kotter book is kind of shallow. It's an airplane read. It's a good starting point. Skim through it for an hour and you'll kind of get the gist. The ones on the right are the ones that have really influenced my work pretty strongly. They go a bit deeper. 
They're a little bit nerdier, but they're very useful. So I'll share these slides on Twitter if you haven't had a chance to photograph them. I think the key takeaway that I hope you leave with today is remembering that it's not about the metrics, even though we have to be good at those, and it's not about whether organizations value design. We call it human-centered design for a reason. Design is the least important word in human-centered design. Human is the important word. So thanks for listening, and I'll see you after lunch.