This is Spotify Discover Weekly. Once a week, Spotify updates you with a couple of dozen tracks that their algorithm thinks you'll like. And when I talk to people about Spotify, this feature is something that they tend to call out as important and delightful, as the reason they keep paying their subscription. But what about the UX design of this? Well, Spotify just reused their playlist pattern. So there's no new design. And the content? Well, the content comes from their algorithm. If you'd been the designer on this project, what would your contribution have been? Something like this? You draw a wireframe and say, well, put stuff the user wants in here. Because the cool stuff, the stuff that people love, the real source of value, all that stuff is being done by the engineer who defines the algorithm. And I don't know many people at this conference who do that. The reason that there's been a boom in UX jobs over the last 10 years is because we add so much value. But maybe that's becoming commonplace. Maybe algorithms are the next generation of value. And algorithms are changing more complex interactions than this. So take something like booking a train ticket. Over the last 10 years, the experienced designers at my firm have spent a good deal of time trying to figure out how to display the quirky, complex, often nonsensical world of UK train ticket prices. And making sense of that information, that's a huge IA task. Trying to display complex information on a small screen, that's incredibly challenging. And scheduling travel is just hard. Some people want the cheapest fare. Some people want the most flexible ticket. Some people want to arrive at a specific time. Most people want some combination of all of those things. So they end up exploring huge tables of data. They end up making mistakes or getting confused. And even when we've done a great job, users find this hard. So a lot of them just give up and go and ask a human being for assistance.
That's why I think the future looks more like this. Natural language interfaces like this will mean that if you've got a question, you just ask a bot. And you know what? Chat interfaces are really simple to understand because people use them all the time. They use them in text messaging, in instant messaging. So you don't need the interface to be redesigned. You don't need an IA to figure out how to display a table on a small screen. Everything's done here on the back end. It's developers all the way down, and you can leave Photoshop at home, along with Sketch and Axure. So maybe people don't even need us. I run a business of 50 people and most of them are designers like you and me. And I've got to wonder how many designers will people need in the future? Will we get replaced by data scientists deploying algorithms? I guess we should be asking ourselves that question. Is my job about to get disrupted? So over the past few months, I've been working with people on projects powered by algorithms and I've been talking to people like Spotify about how they do what they do. And I'd like to share with you some of what I've learned about how algorithms work, about what it's like to design services that are based on them, and where I think you've got skills that are vital to creating effective and satisfying user experiences that are driven by algorithms. So let's think about what it takes to design a service based on those and see what we can learn. I guess there are four big areas where you can think about using algorithms. So the first is ranking; that's the most obvious. Google has a bunch of links, a bunch of ads, and it tries to rank those in ways that are going to be appropriate to you as a user. Sensory input, that's the second. That's about recognizing a pattern like a face or an object or a spoken word. Google's DeepMind team are, I'm told, training an algorithm to look at a photo of a plate of food and figure out how many calories are on it. So that's incredible.
That makes inputting data much easier. Agents monitor one or more data streams and look for those patterns and then alert you or even take action on your behalf. And conversational UIs, they turn interaction design into chat. Designing with algorithms doesn't feel like normal UI design. The system makes leaps in the way that it processes data, and that often confuses teams. People end up either lacking ambition and saying, well, we'll make the user do the work, same as usual. Or they expect too much and expect magic to happen. The value here is no longer in just providing the user with some data. It's in anticipating the user's needs correctly, guessing the future and short-cutting tasks. So let's try and figure out what a service powered by an algorithm might look like and what it might be like to work on designing something like this. So imagine we're designing for a bus company. Our user's need is pretty simple, and what she wants to know is, where's my bus? And how can an algorithm help with that? Well, the first thing you've got to remember is algorithms need data to work on. And if you've got unique data, you can build a uniquely valuable service. Let's see what we can do. So the first layer of data that we've got is timetable data, right? We know where the buses are supposed to be and when. But that's public data, it's not unique. But what if we also have GPS data from the buses? That is unique data. I mean, only the bus company knows that. And now we can compare actual journeys to the timetable. So we can see when the buses were late in the past, which is interesting, but it's not very valuable. What about if we start adding other layers of data in? Data about local weather, about traffic congestion, about whether the schools were on holiday. We could look for correlations in that past data and use that to make predictions about the future.
If we get the right data, we could build an app that says, your bus is gonna be eight and a half minutes late, the day before it's even left the station. And that's pretty cool and pretty valuable. But algorithms aren't magic. Somebody needs to build them. And if you come up with an idea, you need to know enough about algorithms to have a sensible conversation with the engineer about whether and how it can be built. So here are the basics of that conversation. You can break down the engineering task like this. You've got a set of inputs, our data layers about the bus timetable, about the weather, school holidays and so on. You've got an algorithm that processes the data, and then some outputs, in our case, whether or not the bus is gonna be on time. So let's work back from there. As a designer, you're gonna need to know what kind of output is actually useful to the user. Is it enough to predict that the bus is gonna be late? Do you need to say the bus is gonna be more than five minutes late? Do you need to put an exact number on it? Your bus is gonna be five minutes, 27 seconds late. The more precise and detailed the output, the more complex the engineering task. So you've got to know before you start. Which is where things like user research and Wizard of Oz prototyping come in. As for the algorithm that gets us there, well, when you're designing the service, you start off with a raw algorithm and you train it to recognize situations and then make predictions. You do that by giving it a sample set of data, that's the weather, the road works and so on, and some known outputs, when the bus was on time, when it was late, and the engineer adjusts the algorithm so that it takes the inputs and generates outputs that fit with the known data. There are lots of different classes of algorithms and it's really the engineer's job to pick the appropriate one, but when you're thinking about data, well, there are things that you need to worry about. Quality, for instance.
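To make that inputs-algorithm-outputs loop concrete, here's a minimal sketch in Python. Everything in it is invented for illustration: the feature names, the past-day data, and the toy "decision stump" learner, which just picks whichever single yes/no data layer best matches the known outcomes. A real engineer would pick a far more capable class of algorithm; the shape of the conversation is the point.

```python
# A toy "train on known outputs" loop. All feature names and data
# values below are invented examples, not real bus-company data.

def train_stump(rows, labels):
    """Pick the single yes/no feature that best predicts lateness."""
    best_feature, best_score = None, -1
    for feature in rows[0]:
        # Score: how often "feature is true" agrees with "bus was late".
        score = sum(1 for row, late in zip(rows, labels)
                    if row[feature] == late)
        if score > best_score:
            best_feature, best_score = feature, score
    return best_feature

# Inputs: past days described by our data layers.
past_days = [
    {"raining": True,  "school_holiday": False, "heavy_traffic": True},
    {"raining": False, "school_holiday": True,  "heavy_traffic": False},
    {"raining": True,  "school_holiday": False, "heavy_traffic": True},
    {"raining": False, "school_holiday": False, "heavy_traffic": False},
]
# Known outputs: whether the bus actually ran late on those days.
was_late = [True, False, True, False]

model = train_stump(past_days, was_late)

def predict_late(day):
    """Output: will the bus be late on a new day?"""
    return day[model]

print(model)  # which data layer the training step settled on
```

Notice that the designer's decisions live entirely outside the learner: which input layers to collect, and what output (a yes/no here, rather than a minutes-late number) is actually useful.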
In our example, that GPS data should be pretty accurate, right? But sometimes training data can be inaccurate or noisy, and if the GPS doesn't work well, say, in some built-up areas, we'd have noisy data. So if your data's noisy, that can lead to overfitting. In other words, your algorithm ends up making inaccurate predictions because it has learned the errors in its training data. So your engineer's gonna want to know about quality. If you've got a problem that's complex, if you've got one that relies on lots of different data layers or you want answers that are really precise, then you're gonna need more training data. One year's worth, two years' worth. It can be hard to find training data that goes back that far. An engineer's gonna get nervous if you keep trying to add layers of data. The more factors you add in, the more layers, the greater the volume of data they need. So don't just chuck in extra layers. Try and think about which ones are actually making the difference. Do you need the layer that tells you about school vacation dates, or will the traffic data give you everything you need? If you've got a sense of what the main drivers are, you can get to the optimal solution a lot faster. If the data in each of those layers is unnecessarily complex, then an algorithm may end up being slow or unreliable. So it's a good idea to examine whether there's data that you can get rid of. Consider editing it down to be simpler. Take the data about the weather. What do you really need to train the algorithm? Rainfall is gonna be much more important than temperature. And do you need to know the precise times of the rainfall, or just whether it rained in a particular hour? Do you need to know exactly how much rain fell, or just whether the rain was heavy or light? That's gonna determine how much information about the weather is in that data layer. And sometimes simpler data actually leads to more accurate output.
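Editing a data layer down can be as simple as replacing precise readings with coarse categories. Here's a sketch of that heavy-or-light decision; the 2 mm-per-hour cut-off for "heavy" rain is an invented threshold, just to show the shape of the simplification.

```python
# Sketch of simplifying the weather layer: collapse exact hourly
# rainfall readings into three coarse categories. The 2.0 mm/hour
# "heavy" threshold is an invented example value.

def simplify_weather(hourly_mm):
    categories = []
    for mm in hourly_mm:
        if mm == 0:
            categories.append("dry")
        elif mm < 2.0:
            categories.append("light")
        else:
            categories.append("heavy")
    return categories

print(simplify_weather([0.0, 0.4, 3.1]))  # ['dry', 'light', 'heavy']
```

Three categories instead of a continuous measurement means far fewer distinct situations for the algorithm to learn, which is exactly why simpler data can need less training data.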
It's kind of like taking a grayscale picture of some text and turning up the contrast until it turns into a black and white picture, and suddenly the text becomes clear. That works for you and it works for algorithms too. Here's the thing though. Predictions will never exactly match reality, okay? No matter how hard you try, you can't be perfectly right. But you can choose how you wanna be wrong. So if you're high bias, that means you're wrong, but in a predictable way. And if you're high variance, well, that means on average you're right, but any one guess could be wildly off. You can choose which way you wanna be. So if you're building, like, a step counter, you can accept high variance. It doesn't matter if you over-count or miscount or under-count a particular step, so long as at the end of the day, you're in the right ballpark. If you're trying to guess whether a bus is gonna be late, then it's better to build an algorithm that tends towards saying the bus will be on time; that way, nobody misses their bus. So even if you can't be spookily accurate, at least you can satisfy your users. So at the end of all of that, you've got a trained algorithm that's delivering the information you want based on the data that you have. And now you can set your algorithm loose on some real data, make some real predictions. Chances are it still won't be accurate enough. So running a closed beta or having a live service with a feedback loop means your algorithm can continue to learn without the need for supervision from somebody feeding it training data all the time. And at the end of that, you've built a prediction machine. But think about it: all the way through, there's a dialogue between designer and engineer about what's possible, about how to present it to the user. And I think that's a clue to our future role. As tools and APIs proliferate, perhaps more of us will be taking on that task of training algorithms in the next few years.
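Choosing how you want to be wrong can come down to something as small as where you put a decision threshold. A sketch, with invented probabilities: a symmetric classifier would say "late" whenever the predicted chance is over 0.5, but raising that threshold biases the output towards "on time", which is the failure mode we said users can live with.

```python
# Sketch of "choosing how to be wrong": shift the decision threshold
# so the model errs towards "on time". The 0.8 threshold and the
# probabilities in the calls below are invented example values.

def classify(p_late, threshold=0.8):
    """Only say 'late' when we're very confident; otherwise 'on time'.

    threshold=0.5 would be symmetric. Raising it trades some missed
    late-buses for fewer false alarms, so nobody skips the bus stop
    because of a wrong "late" prediction.
    """
    return "late" if p_late >= threshold else "on time"

print(classify(0.6))                 # on time (deliberately cautious)
print(classify(0.9))                 # late
print(classify(0.6, threshold=0.5))  # late (the symmetric version)
```

The designer's job here is to decide which mistake is cheaper for the user; the engineer then encodes that preference.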
There are incredible toolkits from Google, Facebook, IBM, here's one from Microsoft, and there's a ton of open source material online that you can tinker with. But the real place that you add value is in defining what the inputs and outputs should be, and how they're presented to the user. And it's really easy to get that wrong. If you wrap up your recommendations in an interface that promises human-like interactions with less than human manners and capabilities, people will revolt. The reason that Clippy is so hated is because he's brash. His interruptions can't be ignored, he's patronizing. Having this thing pop up when you're trying to do your work didn't feel like fun, it felt stupid. So if there's a high chance that an interruption is gonna be unwanted, then you can take a quieter, more humble approach. So iOS Mail sees that I'm writing a message to David and Richard, and it suggests I probably wanna send it to fellow team members Verity and Paul too. And that's a pretty good guess. But in this interface, it's one that I can easily ignore, skip over. I think a lot of next-generation interface design is gonna be around the etiquette of suggestions and assistance. If you're getting started with algorithms, don't start out with grand ideas. Begin with something small. So here's a small example of efficiency and delight. This is a contact manager that I use. You type in the next action and a due date. But if I type in, call Sarah on Monday to confirm next steps, it updates the due date automatically. It sees Monday and it goes, oh, that must be next Monday, got it. Designing that kind of micro-interaction is a great way to learn about the etiquette of suggestions and interruptions. As for more complex situations, things like natural language interfaces, well, essentially they're just collections of algorithms too. So you need training data, and logs from online chat are a pretty good place to start.
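That "must be next Monday" trick is a nice first algorithm to build yourself. Here's a sketch of one way it might work, using only the standard library; the parsing is deliberately naive (it just looks for a weekday name in the note) and is not how that contact manager actually does it.

```python
# Sketch of the due-date micro-interaction: spot a weekday name in
# the typed note and set the due date to its next occurrence.
import datetime

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def due_date_from(note, today):
    for word in note.lower().replace(",", "").split():
        if word in WEEKDAYS:
            days_ahead = (WEEKDAYS.index(word) - today.weekday()) % 7
            # "Monday" typed on a Monday means *next* Monday, not today.
            return today + datetime.timedelta(days=days_ahead or 7)
    return None  # no weekday mentioned; leave the due date alone

today = datetime.date(2016, 6, 1)  # a Wednesday
print(due_date_from("Call Sarah on Monday to confirm next steps", today))
# → 2016-06-06, the following Monday
```

Even this tiny example forces the etiquette questions: what if no weekday is mentioned, and should the suggestion overwrite a date the user typed, or just offer itself?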
You can use that to understand conversational patterns. You can use that to understand language patterns, even common misspellings and slang. You can simplify that training data, for instance, by looking for the conversations that solve customers' problems in the fewest possible steps. You can look at what a good conversation looks like, either by reading through and analyzing the logs, by observing real conversations, listening to customer service reps in call centers or in the field, or by researching the psychology of conversations. And a few things kind of jump out. The first is that conversations can't start with a blank screen. You show somebody a blank screen and one of two things happens. Either they freeze and they don't know what to type, or they type in, teleport me to Venus. So start by explaining what the service does. Anchor the conversation and make sure the user is set up on the path to success. One of the complaints about bad conversational interfaces is that they're verbose. But when you look at successful, succinct human conversations, you find that they're based around common ground, around shared context. So for instance, if a chatbot knows your transaction history, then it can pull up your transaction data and that can form part of the common ground, part of the shared understanding, and you get super efficient conversations like this one. A friend of mine's been working on prototyping payments for an online bank using chat, and he says they take about a quarter of the time of normal form filling. That's why there's so much buzz about chatbots. So if you don't have the data you need to establish common ground, you're gonna end up with interactions that are long, verbose, convoluted, irritating. Here's another example of the psychology of conversations. Just because an answer is correct doesn't make it a good answer. You can give a correct answer like this. It's a complete usability fail, because I'm in a hurry.
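Simplifying the training data by keeping only short, successful conversations might look like this sketch. The log format here, a list of turns plus a resolved flag, is invented for illustration; real chat logs would need cleaning first.

```python
# Sketch of mining chat logs for training examples: keep only the
# conversations that solved the customer's problem in few turns.
# The log structure and max_turns=6 cut-off are invented examples.

def best_examples(conversations, max_turns=6):
    solved = [c for c in conversations
              if c["resolved"] and len(c["turns"]) <= max_turns]
    # Shortest successful conversations first: the model of efficiency.
    return sorted(solved, key=lambda c: len(c["turns"]))

logs = [
    {"turns": ["hi", "card lost", "blocked, new card sent"], "resolved": True},
    {"turns": ["hi"] * 12, "resolved": True},    # rambling: drop it
    {"turns": ["hi", "bye"], "resolved": False}, # unresolved: drop it
]
print(len(best_examples(logs)))  # 1
```

Filtering like this bakes an editorial judgment into the bot: you're teaching it from the conversations you'd want it to have.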
I don't need this level of detail. These cues are signaling something really important about the format of the answer that they want. People operate on the principle of least collaborative effort. In other words, both parties try to figure out how they together can get through the conversation with the least possible effort. It's part of the idea of theory of mind, where you're constantly trying to guess what the other person's thinking. So if you're designing for natural language interactions, you need to pick up on those cues. When you're in conversation, everybody tries to balance the volume of information they're giving with time pressure and the risk and consequences of error, and tries to take advantage of that shared knowledge. As a designer of natural language interactions, those are the factors you need to take into account. Give your bot a theory of mind, make it easier to deal with. You can also use that to signal to your users how they should treat your bot. So a few years ago, this text adventure game, Lost Pig, had the user telling an orc called Grunk what to do to help him find his pig. Well, because you're dealing with an orc, you know that you've got to keep your language simple, and you expect the odd dumb reply. And that's a cute trick, and it's got personality and humor, but it serves a practical engineering purpose: making sure the user doesn't bump up against the limits of the chatbot's understanding. I think you can call it ethics or good manners, but you need to give users a clear signal of what's going on behind the curtain. Are they talking to a machine or a human? So that friend of mine, Pete Trainor, the guy who's working on the chat service at the UK bank, he's designed it so it deliberately refers to itself as we, not I. And that slight conversational weirdness means that people understand all the time that they're talking to a non-human.
And of course, they can ask to be transferred to a human at any time, but as he continues to tinker and improve the service, more and more people are sticking with the bot. So he started off with about 70% of people just asking to talk to a human, and he gradually got that down to 30%. So being honest about your service being a bot isn't a problem. Making it work well is what counts. So you can see that we do have knowledge and skills that are necessary, but they're kind of shifting. I've always looked to human-to-human conversation patterns to figure out how to solve interaction design problems. Now what I'm finding is that understanding human-to-human conversation is not just an interesting metaphor. It's going to be central to what we do. Our skills of understanding people are at the heart of interfaces that are run by algorithms. What about Discover Weekly? Well, I spoke to Matthew Ogle at Spotify, and it turns out a large part of the design work here was about understanding how to package up that service. So the playlist format, well, they used that because it was familiar to people. There was nothing new to learn. Limiting the size of that playlist to a couple of dozen tracks, that was a key insight from user research. It gives the feeling of a mixtape from a friend rather than a data dump from an algorithm. Data visualization helped them understand the structure of people's music collections, so they could see that, you know, broadly you listen to a lot of indie, a little bit of folk, but over here there's a little bubble of Disney music, and that's not because you want to hear a Disney song. It's because you're playing that to your kids. So they kind of filter that out. It doesn't appear in the recommendations. Discover Weekly doesn't recommend any old track. It prefers to recommend tracks that it sees humans recommending to each other.
There's some indefinable quality about the difference between those tracks and other ones which share the same parameters. So it uses that: it pulls those tracks out and promotes those types of recommendations. All of that goes to making Discover Weekly feel like a mixtape from a friend instead of something inhuman. So engineers made the service robust, but it was up to the designers to make the service feel elegant and approachable. So you know what? I think the way that we've done things in the past is about to change, about to become disrupted and automated and perhaps even redundant. But the core skills that we've got, they continue to be vital. They're vital to build tools, to plan services, and to fix new kinds of problems. So if you understand the foundational knowledge, the structure of human conversation, the principles of etiquette and behavior; if you learn the new technology, get to grips with what algorithms can do and transfer your know-how, then you've got the opportunity to take a lot of experiences which are potentially broken and make them better. And that's where we add value. Thank you.