I was just saying that, you know, it's a real pleasure to have Nate Clinton here. He's the managing director for the San Francisco office of Cooper. I actually never met Nate before this conference, so it's a great pleasure to meet him. David Hussman, who is the program chair for today, highly recommended that we do whatever we can to get Nate to this conference. He's had really great experience working with different clients, and I think his topic is of real interest, at least to me, and I'm sure to a lot of people, because of our involvement with machines every single day and how life is going to progress. I think Nate is going to shed some interesting light on where we are headed with this. So thank you so much, and over to you.

You've heard a lot of talks today about discovering and building new products, both from the technology and the design side of things. So I want to end today by telling you a little bit about this new frontier in technology that I'm personally very excited about, which is what are being called conversational interfaces, or conversational UIs. Before I get to that, I'll just introduce myself and introduce who Cooper is; I won't assume that you all know. Cooper is a design and business strategy consultancy based in San Francisco and New York City, and I'm from San Francisco. We've been around about 25 years, which in the user experience field is pretty much the beginning of time. We do a lot of work with a lot of recognizable brands — big companies like the ones you can see on the screen here — and also a lot of smaller startups and mid-sized firms in the area and around the world. What we do is apply our own version of design thinking, broadly speaking, to a myriad of different problems: problems in lots of different industries — enterprise software, but also higher education and finance and healthcare. And we do that through a bunch of different tools, from visual design and brand strategy to interaction design, to new product development, to user and customer experience. So we help these companies basically understand and decide what to build, and then we help them understand and decide how it should work.

Cooper was founded by this guy — that's his face there. Alan Cooper is a real pioneer in the software industry, and I want to talk about him for just a second because I think it establishes a real connection between Cooper and Agile India. Alan was an early pioneer in Silicon Valley; he wrote some of the first serious business software back in the 70s. Around 1992 he stopped programming — oh, let me just mention this "father of Visual Basic" thing. He kind of invented a visual programming interface, got really into that, wrote a specification for it, and sold that to Bill Gates, who then made Visual Basic out of it. So a lot of the time Alan is known as the father of Visual Basic. But around 1992 he stopped coding, because he realized that too many engineers were writing software that people hated, and he asked: why is that? And it turned out that no one was really thinking about the users of the software. They were thinking about requirements, they were thinking about functionality, but they weren't thinking about the human beings. So he started writing books and thinking deeply about those human beings. And instead of coding, he started creating what is now the field of interaction design, or user experience design.
And he invented a lot of the core tools and techniques that the industry uses, that you may have heard of — like user personas, an insight that Alan had; he was the first to use them and write about them. He also popularized the notion of using stories to describe software, and to describe users succeeding in their goals as they use the software, which of course presaged some agile ideas around storytelling and software development. So Alan's a really great guy. And I mention all of this because Cooper, in a lot of ways, derives its culture and its ideas from Alan as our founder. One of which is that we care a lot about the things that we design becoming real in the world. We want them to be built. So we're a practical, results-oriented type of team. The other is that we have this long history of being on the edge of, you know, new ways of thinking about software design. And that's why I'm really excited about this topic, because it's a new paradigm. It's a new way of interacting with software and a new way of interacting with machines, and things are going to change.

So I'm going to say a few words about conversations with machines. And by that I mean systems that you can talk to with natural language — natural language being just the language that we use every day as we talk to other humans, and not the language of computers. So that means kind of what chatbots do: these assistant-like tasks that you might see built into Facebook Messenger or WeChat or Slack. This is the Alibaba retail chatbot on a website. Really these are things that you type to — so natural language, but typed on a keyboard, sometimes on mobile, sometimes elsewhere. Mostly it's kind of an alternative to traditional screen-based UIs. You're doing tasks like looking for something to shop for or looking for a hotel to stay at — things that you could typically do with a screen. On the other side of the equation is what I'll call voice-first devices. These are devices that you can just talk to, and they talk back in a human-sounding voice. You've got the Amazon Echo in the middle there, and it's got all these microphones on it. Whenever you say "Alexa," this magic word, she lights up, and you can ask her a question — I say her, I mean it — and it will respond to you and do things for you. Typically things in the home — that's the Google Home actually next to it, the white cylinder there — and they'll do things like play music, control the lights if you hook it up, and so on. Of course, Siri and Google Assistant are built into most modern smartphones, if you have a recent iPhone or a very recent Android smartphone. So there's actually a lot of these conversational systems out there already. This is an interesting case of a conversational user interface: it's a doll, it's Barbie, it's called Hello Barbie, and basically it's a thing that children can talk to. It's actually a really sophisticated and very cool conversational system. We'll talk about it in a second. Anyway, there's a lot of different manifestations of this new paradigm.

Here's an example of what kinds of things you might ask the Amazon Echo to do. These are all drawn from some marketing materials that Amazon sends owners of the Echo every week to try to teach you how it works. You can say, "Hey, play [station name]," if you know the station name, on some service called TuneIn. You can say, "What's the current moon phase?" That's kind of interesting.
You can ask it to play some playlist — I don't know what it means usually, but there's some playlist somewhere that I can play on Prime Music. There's a lot of music-related things. You can say, "Alexa, turn the popcorn on," if you've hooked it up to your popcorn-making machine, so it'll control things in your house.

So I think there's a bunch of current challenges with this new paradigm. It's very trendy, very interesting, but I think a lot of us in this room would acknowledge that it doesn't have a huge amount of utility yet. Like, how many people really need to know what the current moon phase is today, or to turn their popcorn on? It's not a difficult task. So maybe the value isn't quite there, but I want to convince you that despite these challenges that I'm going to go through, there are some very big opportunities available to us. So, some of the challenges — and I don't think any of these are insurmountable; I think we can get over these pretty quickly. The first one is that it sort of requires you to memorize some kind of syntax. And speaking to a bunch of engineers, that might sound easy. But for the average human being, it's actually very difficult to remember how to invoke things. And without a screen, you really have no clue whether you're doing it right or wrong, and you really have to study hard and do some research to know if you can succeed. Something interesting about this is that a lot of conversational systems rely on the traditional set of individual applications, or apps, in whatever system they're using as a sort of back end. So here's an example. It's a screenshot of the set of apps in the iOS App Store that support Siri, because you say, "Hey Siri, get me a Lyft to SFO," or "Hey Siri, I need an Uber home." These are basically identical services, where a car comes and gets you and takes you to a destination, but you have to know the name of the service in order to invoke the right one. Similarly, Venmo and Square Cash are both basically US-based personal payment systems, but you have to know the name of the service in order to send the money through one or the other. You have to say, "Pay Liz $18 for dinner using Square Cash," versus "Pay Gary $30 for tickets with Venmo." You have to know how to do this. You have to remember how to do that. This is even worse: you can look for images on Vogue Runway, Pinterest, Canva — I don't know what that is — but a lot of different services offer image search. So you have to constantly be thinking: which service am I using? Which one am I supposed to be invoking at this moment? So there's a lot of memorization that happens here, and a lot of things that have to stick in the user's mind for them to use it properly.
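To make that memorization problem concrete, here is a minimal, hypothetical sketch — not any vendor's actual API — of how this kind of naming-based routing tends to work: each service registers utterance patterns that embed its own name, so the assistant can only route a request when the user remembers to say that exact name, in roughly that exact form.

```python
import re

# Hypothetical sketch: each service registers utterance patterns that embed its
# own name, which is why users must remember "Lyft" vs. "Uber" or "Venmo" vs.
# "Square Cash" just to get the request routed to the right back end.
SKILLS = {
    "uber":        [r"(get me an|i need an) uber( to .+| home)"],
    "lyft":        [r"(get me a|i need a) lyft( to .+| home)"],
    "venmo":       [r"pay \w+ \$?\d+ .*with venmo"],
    "square_cash": [r"pay \w+ \$?\d+ .*using (square )?cash"],
}

def route(utterance: str):
    """Return the first skill whose pattern matches, or None (the dreaded 'I just don't know')."""
    text = utterance.lower()
    for skill, patterns in SKILLS.items():
        if any(re.fullmatch(p, text) for p in patterns):
            return skill
    return None

print(route("Hey, get me a Lyft to SFO"))                 # None -- the extra "hey" breaks the pattern
print(route("Get me a Lyft to SFO"))                      # lyft
print(route("Pay Liz $18 for dinner using Square Cash"))  # square_cash
print(route("Pay Gary $30 for tickets with Venmo"))       # venmo
print(route("Send Liz some money"))                       # None -- no service named at all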
I don't think that's going to be a huge problem in the long run. I think we're going to figure out ways of creating conventions that users can start to rely on. One of the problems with these systems is that they're so new — people don't know how to use them. I think the same was true back in the early days of mobile, when people didn't understand those conventions. And then they started to learn them, and now we have a sort of base set of expectations for our users. So this will be alleviated over time.

The other thing is that these systems have a limited number of domains that they know about, and you don't know which they are. So here's an example. On the iPhone, Siri will sort of tell you, here are some things you can ask me. That's always the question you have when you start up one of these things: what am I allowed to ask it, right? So you can ask it to "look up my videos taken in New York City." You can ask it to "play the hottest YouTube tracks." You can ask it to "Bing Norah Jones." So there are these things that you can do, and you have to kind of scan them and see: do any of these make sense? Do I care about any of these? But what about the thing that I really care about, which is turning my popcorn on, or whatever it is? I don't see that on this list. So it's hard to know what you're allowed to ask any of these systems at a given time. Wolfram Alpha is a system that actually serves as the back end to a lot of different Siri-style lookup services. They call it a computational knowledge engine. It's not an Apple product; it's made by Wolfram Research, which also makes a very popular software package called Mathematica. Anyway, they created this big system full of really tiny little micro-utilities. And again, it's one of these things where you're allowed to ask it certain things using semi-natural language, and it knows about lots of things. And I think this actually is the future of the whole question of limited domains: once we expand the number of domains to the point where pretty much any question you can think of is answered — think of the scale of Wikipedia in one of these systems — then the error rate starts to decline and it starts to feel more natural, not quite so machine-like and mysterious. So that's a bit of a problem, this limited domain. The other problem with limited domains is that they change all the time. Amazon's always adding new things, new tricks that these Echo devices can do. So it's a moving target, and it's a real burden on users that they have to remember what these are.

A really big problem with these systems today is that they have a limited contextual memory, so they really can't carry on a conversation that feels human, because they don't remember what you were talking about five seconds ago. They can't keep that thread and refer back to things in a way that you might feel is natural. And this is really a limitation in the systems that we have for dealing with natural language. Amazon recognizes this. They have a big $2.5 million prize for the team that can make a conversation last 20 minutes and feel pretty natural. So it's sort of like a grand challenge, like one of those self-driving car challenges from 10 or 20 years ago. That's kind of where we are with this. We're still at the early stages, but I think we're gonna make progress. The other big challenge here is that they usually fail rather catastrophically. As soon as you wander outside one of those limited domains, it just sort of throws up its hands. And this is pretty much the most common thing Amazon Alexa will ever say: "I just don't know." Sometimes it's because it can't interpret what you said properly — maybe you spoke too quickly or too softly, or you didn't enunciate — or maybe it just doesn't know that domain. It really doesn't give you that feedback. So it can be pretty frustrating to use some of these early systems.
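As an illustration of that contextual-memory limitation, here is a small, hypothetical sketch — not how any particular assistant is actually built — of the difference a little per-conversation state makes: a follow-up like "what about tomorrow?" can only be answered if the system remembered the previous turn; a stateless handler lands straight on "I just don't know."

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: a handler that carries a little state between turns can
# resolve a follow-up question, while a stateless one can only give up.
@dataclass
class Context:
    last_topic: Optional[str] = None           # e.g. "weather"
    slots: dict = field(default_factory=dict)  # e.g. {"city": "Bengaluru"}

def handle(utterance: str, ctx: Context) -> str:
    text = utterance.lower()
    if "weather" in text:
        ctx.last_topic = "weather"
        ctx.slots["city"] = "Bengaluru" if "bengaluru" in text else "here"
        return f"It's sunny in {ctx.slots['city']}."
    if "tomorrow" in text and ctx.last_topic == "weather":
        # The follow-up only makes sense because we kept the thread from the last turn.
        return f"Tomorrow in {ctx.slots['city']}: light rain."
    return "I just don't know."                # where stateless systems land constantly

ctx = Context()
print(handle("What's the weather in Bengaluru?", ctx))
print(handle("What about tomorrow?", ctx))     # works only because ctx remembered the topic
```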
Now, those are the challenges, and I think we can overcome them. I think they're being worked on — I know they're being worked on day and night by these platform vendors like Amazon and Google and many others. But I really think these are a big deal, and so I wanna convince you that they're a big deal. So why do I think these are a big deal? One is that they're pretty cool. And pretty cool is actually one of those necessary-but-not-sufficient kinds of criteria: we want people to want these, otherwise why bother, right? But there are lots of cool things that turn out not to be very fundamental or game-changing or life-changing. Google Glass didn't change the world. Nobody wears them; nobody wants to. Augmented reality is pretty cool though, you have to admit. But it's not enough to be cool. Here's another cool thing, the Nintendo Switch. I don't even know if they're selling this yet, but it looks pretty cool. It's not gonna change our lives though, so cool is not enough. Lots of cool things don't change our lives, but I really think that conversational UIs will change the way we work and change the way that we interact with our fellow human beings. In any case, it's pretty cool.

The other thing is that humans really want it. And the evidence I have that humans really want it is that humans talk about it all the time and think about it all the time, and have thought about it for millennia, not just decades. All through history, our stories are about inanimate things made animate — things that we create out of nothing that talk to us and that we have conversations with. This is a myth from ancient Finnish folklore about a blacksmith who got so desperate he made a wife out of gold. That story did not end well for him. There are lots of other stories, though. Sometimes we fear them. Sometimes we love them. Sometimes we talk about how it will shape our society. Sometimes we dream about it. But in any case, we seem to be obsessed with this notion of creating this thing that we can talk to — sort of creating new life out of nothing. I think humans really want it.

The other reason that conversational UIs are a big deal is that they're natural. And when I say natural, I mean it in the interaction design sense of natural, but also in the sort of social sense. So 10 or so years ago, when the first iPhones came out, they kind of blew everyone's mind, because you could just touch it and do stuff with it, and that was amazing. Before, you had all this stuff in between you and the software, between you and the ideas — you had styluses and keyboards and trackballs and other things. This one you could just touch. It felt really natural. That was part of what made it feel like magic. Another gauge of whether something is natural is: kids do it. We see kids use touch devices all the time — maybe a little too much. Cats can use these things. It's so natural; all animals can use a touch device. Because dealing with the world is just about manipulating it directly, right? It matches all living things' mental model of how the world works, and so it becomes easy to use because it matches that mental model. We have this kind of instinct that we want to interact with this stuff because it matches what's in our brains. That's a penguin playing a game on an iPad. So can kids use conversational UIs? You bet. This girl in Kentucky, I think, accidentally ordered a dollhouse and four pounds of cookies using her parents' Amazon Alexa.

Back to Barbie — she's super interesting. Actually, maybe one of the most compelling examples of conversational systems today. Not the most technologically advanced, and we can talk a little bit more about why: there's actually a lot of brute-force technique in this system. All of the responses that she gives are recorded by a voice actor.
There's nothing synthetic or generated by code — it's all hard-coded in these complex flow charts. But it gives this really great illusion. It knows about so many different topics. It knows about holidays, it knows about cuisine, it knows about favorite colors. It'll play games with you. It knows about unicorns, Hawaii, chores. It knows about everything — everything that a kid would care about, anyway. Whenever you start to wander outside of the domains that it knows about, it'll gently nudge you back into a domain that it wants to talk with you about. I actually bought one of these because I was curious, and as an adult using it, it's very tiring, because it just wants you to keep talking. But it does hold your attention, especially a child's. It's also very clever: it knows what time of day it is. So if it's the evening, it might ask you, "Can you see the stars? What do they look like?" If it's the morning, it'll say, "So excited to see you — I couldn't sleep last night." It's very cute. Obviously geared toward a specific demographic, but it definitely is a compelling system. So kids can use this. It's natural.
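For the technically curious, here is a tiny, purely illustrative sketch — not Hello Barbie's actual implementation — of how a hand-authored system like that can work: every reply is a pre-recorded clip, unknown topics trigger a gentle nudge back on script, and the opening line branches on the time of day.

```python
from datetime import datetime

# Hypothetical illustration of a hand-authored dialogue flow: every reply is a
# pre-recorded clip ID, unknown topics trigger a gentle nudge back on script,
# and the greeting branches on the time of day.
FLOW = {
    "holidays": "clip_holidays_01.wav",
    "cuisine":  "clip_food_02.wav",
    "unicorns": "clip_unicorns_07.wav",
    "chores":   "clip_chores_03.wav",
}
NUDGE = "clip_nudge_lets_talk_about_hawaii.wav"  # steer the child back to a known domain

def greeting(now: datetime) -> str:
    if now.hour >= 18:
        return "clip_evening_can_you_see_the_stars.wav"
    return "clip_morning_couldnt_sleep_so_excited.wav"

def reply(child_said: str) -> str:
    text = child_said.lower()
    for topic, clip in FLOW.items():
        if topic in text:
            return clip
    return NUDGE  # never a dead end -- just pull the conversation back on script

print(greeting(datetime.now()))
print(reply("I love unicorns!"))          # clip_unicorns_07.wav
print(reply("What is quantum physics?"))  # nudge back to something scripted
```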
So here's the meat of the thing, right? There are a bunch of new business opportunities opened up by this kind of interface. I'm going to talk about three different types of business opportunity that I think are really important here. One is that it enables new experiences for customers to be delighted by — by, I don't know, a brand or a company or a product. The other is that I think it opens up new ways to scale, so I'll talk about that briefly. I also think it opens new ways to prototype a new product, so that really fits with today's topic.

We'll start with new experiences. The first is an example from Amazon, which has been doing a lot of great marketing with some really interesting brands. One thing they did was develop a skill to deliver what they call whiskey education: you sit in your living room and Alexa gives you a guided tasting of Johnnie Walker whiskey. And that's fascinating. I mean, as trivial as this is, it actually is pretty interesting, because now Johnnie Walker has a way of interacting with you in your living room, and this is not an opportunity Johnnie Walker had before. So it's a new business opportunity — a new way of creating experiences that are much more personal, potentially. So I expect to see more of these in the future. Maybe you can create those.

Then there are these new ways to scale a business and scale a service. These three applications are not widely known, I think, outside of maybe the Silicon Valley bubble that we live in, but I'll just mention them quickly. One is called Fin, and it's basically a virtual assistant. It does anything you can imagine: you just type into this app and you say, remind me to do this, remember that, tell me to do this, make me an appointment at the barber, whatever you want. And basically, if it can handle it with its AI, its computer, it'll handle it. And if it can't, then a human will handle it. But the user is really not aware of whether a computer is handling it or a human is handling it. So it abstracts that away. Magic is another kind of super high-end concierge service. This is the example from their website: "I need a private helicopter to LAX in three hours." So it may be some crazy uber-rich people using Magic, but it's the same kind of concept. The whole interface is text messaging, and it's kind of a concierge, and they do stuff for you. You don't know if it's a computer responding or a human responding, and you're not supposed to care. It's just magic. The third one is x.ai, and this is really interesting. The problem they're solving is: you've got five people, and they all want to meet for lunch sometime in the next few weeks, but it's really hard to coordinate schedules. So you tell Amy, the avatar for the service, "Hey, I want these five people to have lunch together," and Amy starts firing emails to everyone saying, "Can you meet on Tuesday?" "Andrew's free on Tuesday — hey, Nate, are you free on Tuesday?" It sends emails like that in a very conversational tone. You reply to her, and she says, "Oh, OK, I see — I'll try to reschedule for Wednesday." It seems like a real person. It all happens through email, though; there's no app or other interface. So it's an interesting application of a conversational interface.

But the key thing about scaling is that humans and computers can be mixed in different ratios, and that basically allows you to scale more easily. In a service model, you've got a concierge at a hotel who serves a certain number of guests. It doesn't scale very well: if you want another guest to be served, you basically have to build a new hotel. In the example of a customer support line, you've got individual customer support agents serving inbound requests, and they can serve a certain rate of them over time. You want to serve an extra customer? You just hire one more customer service agent. So scaling becomes a lot easier. Then there's a model I'll call the air traffic control model. The central entity is just a blob, a pool of people, and you just interact with the middle, right? You call air traffic control, and the pilot doesn't need to know who answers. Maybe someone different answers every time, and it doesn't really matter, because the pool all coordinates together. But this model — the one that conversational UIs enable — allows for much easier scaling, because a bot, an automated service, can serve most of those inbound requests, and if something fails, you can fall back on a human. So you don't need that many humans, as long as the bot is very good at its job. So anyway, this, I think, is actually a really useful model of scaling that conversational interfaces are very good at enabling. It's hard to react in real time to someone who's clicking a button in the wrong way or misspelling something on a graphical user interface, but in a conversational setting, you can have someone step in at the right moment, and it feels really seamless.
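Here is a minimal, hypothetical sketch of that "air traffic control" pattern — not any particular vendor's implementation: an automated handler serves most inbound requests, and anything it can't answer confidently is quietly queued for a human operator instead of failing catastrophically.

```python
from queue import Queue
from typing import Optional

human_queue: Queue = Queue()  # requests waiting for the small pool of human operators

def bot_handle(message: str) -> Optional[str]:
    """Try to answer automatically; return None when the bot isn't confident."""
    canned = {
        "open":    "We're open 9am to 6pm.",
        "balance": "Your balance is $42.",
    }
    for keyword, answer in canned.items():
        if keyword in message.lower():
            return answer
    return None

def handle(message: str) -> str:
    reply = bot_handle(message)
    if reply is not None:
        return reply                        # served by the bot -- this is where the scale comes from
    human_queue.put(message)                # graceful fallback instead of "I just don't know"
    return "Let me check on that for you."  # a human picks it up, and the hand-off feels seamless

print(handle("When are you open?"))
print(handle("I need a private helicopter to LAX in three hours"))
print(f"Waiting for a human: {human_queue.qsize()} request(s)")
```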
Conversational interfaces also offer a new way to prototype, which is fascinating. This is a company called Digit — the URL is digit.co — and it's a financial technology startup. Basically, the whole thing they do is automate savings. I'm not a very good saver; I don't save my money. But Digit helps me by taking a dollar or so out of my checking account every day or two. It kind of seems random to me, and it always keeps track not to take too much — it just takes a little bit. And then a few months later, I look at my Digit account, I have a little money, and I can go out to dinner. So that's what it does: it automates saving by taking a little bit of money at a time. But the amazing thing about this service, which started a few years ago, is that the whole interface is just SMS. That's it. You issue commands, and it asks you little questions and you answer them. And the genius thing about this is that when Digit was first starting, they didn't need to build an app and submit it to the App Store. They didn't need to build a really complex UI and risk that users wouldn't know how to use it. They didn't need to do any of that. They just kept updating the back end of the service, and people kept using it. If something failed, they could update it quickly. So they were able to iterate very, very cheaply. In essence, it allowed them to live-prototype their service. Their service is still based on a conversational interface — it now can take many forms, and I think they just launched a Facebook chatbot to serve the same thing. But look at this company: this is all they did for a front end. They didn't hire anyone to do any Swift programming or anything. Last week, Fast Company called them the second most innovative company in finance for 2017 — and the year's not over yet. So it's a fascinating success story for such an amazingly simple user interface.

Prototyping, on the other side, is about design. As a designer, I'm used to drawing things — drawing a screen, drawing a wireframe. For voice, that's actually really hard to do, because there is no thing to see. There are new companies — one's called Sayspring — where you can prototype a voice application very quickly, and it allows you to user-test, it allows you to experiment with: how does this feel? What does the conversation feel like? No code required. It's super simple. Amazon, the platform, really allows for this. So there are interesting R&D and product business opportunities available with conversational interfaces.

The last reason I think conversational interfaces are a big deal is that we're just getting started. I think back to the beginning of iOS, and I think not many people saw how big smartphones and mobile technology would be. We just didn't know how incredibly huge, how world-changing it would be. Every industry, every country, every person has been somehow affected by that new paradigm of interacting with technology. And this is an opportunity to get in on the ground floor, so to speak. So you could go out and actually get involved early in this thing which will be game-changing. This is voice-first device sales. Last year, you can see, it's accelerating. This year it's expected that the total number will exceed 30 million, which isn't that big, but look at the growth rate. And these are only available in the US, the UK, and Germany right now. And this, by the way, is across all device types. So it's really a tiny sliver of the world that even has these available to them. So it's a big opportunity to get in on this early. These are the number of apps, or skills, available for Alexa specifically. That 10K bar — I added that because I just found that data point. This is accelerating also. The big problem here is that the quality of these skills is pretty low at this point. It turns out most people use these with just whatever default skills or options are available. This gets back to the whole thing of requiring people to memorize syntax and understand domains and remember them. It's hard to do that. So installing skills and using them over and over again is just not yet an easy task.
Amazon's working very hard at that, but it's still not very easy. Those three that I marked there are the only examples in this list that are not just built into the default device. So you can see that a lot of people aren't installing new software for these devices yet, and part of that is about convention as well — once people gain that expectation, it'll change. Here's another kind of issue that I think the platform developers are really gonna have to solve, which is retention. The two lines there, the red and the blue, are the percentage of people that still use an app after day one, day two, day three, through day 14, 30, 45, and so on. And you can see that basically on an Android or an iOS device, about 10% of people who start using an app still use it 14 days later. For an Alexa skill it's 3% — it's much lower. So retention is a problem, and I think they're gonna need to solve that somewhat.

This is all gonna happen this year — all of these things. We're gonna see new hardware from new platform entrants: at least one of those three — Apple, Microsoft, Samsung — maybe two. We're gonna see new software that enables much more social use cases from some of the big players, whether it's Facebook or Snap or whoever. Maybe it's hype. We have to see a new way to monetize what goes on in these voice apps, because right now you can't make money doing it — and of course developers will only work for free for so long, you tell me. So that's gonna happen. The other is direct person-to-person communication: right now it's really just a one-way path. You talk to the device, it talks back, but you wanna be able to make phone calls, you wanna be able to send messages and talk to people through it as a speaker. So that's gonna get added. Then there's the ability to send data the other way. This gets into sort of industrial use cases: if I'm a manager at a factory and I wanna monitor what's going on in my factory, I might be sitting in my office typing away and my Amazon Echo says, "Hey, something's wrong on the factory floor." That kind of push notification is not yet available. Productivity offerings — this is an area where Cooper is starting to explore designing and building new Amazon skills around productivity, because we want our office to work better and we think these things can help us do that, whether it's facilitating stand-ups or booking conference rooms or helping us manage our calendars and so on. These things are really gonna be useful for that. And the last is voice ID. We don't want our six-year-old ordering dollhouses every day. We also wanna make sure that these devices know who's talking to them, so they can respond to them personally and know what their preferences are. All of these things are gonna make for a great platform. This is really on the platform vendors to do, but you'll see these start to roll out soon.

One promising strategy for making a great experience, as I mentioned before, is really to fall back on human operators. We saw that in the scaling example, but it really is a thing to think about as you build out and prototype these new services: how do you prevent that catastrophic failure? It's a really frustrating experience, and it will lead to that lower retention that we see in the data today. The other interesting strategy that most applications and most services have, sort of by default, started to adopt is to personify themselves.
So each of the devices has a name, or an avatar for the thing that's talking. A lot of these services have names for the service element — the bot itself has a name. In this case, they actually made a LinkedIn page for it. So personifying the service is an interesting strategy, because it turns out that humans personify everything. I mean, anything that we talk to, we sort of imagine is a human. And there's a lot of great research about this — there's a book called The Media Equation that you really should read, about how people interact with and project humanness onto lots of inanimate things. We've looked at a lot of different examples here, and I think it's interesting the kind of diversity that these conversational systems come in. You'll see they aren't just one thing. It's not just the Amazon Echo, it's not just the Google Home — it's a lot of other kinds of applications. I think that creativity will continue and that diversity will increase.

I think I'm gonna leave you with the last big opportunity, which I think is really relevant here today. It's the biggest gap and the biggest opportunity, which is that most of these voice-first devices, especially, are only available in the US or the UK or Germany. And part of that is that the people who make them — the platform vendors in the US — don't have access to a lot of cultural understanding that can only be built here, in a local setting with a local context. So I think that's one of the biggest opportunities. That understanding can only come from local understanding, from local immersion. It can only come from people who have access to the culture: about what art and music mean, about how we negotiate meaning between ourselves and our new machine cohabitants. We need to understand how we identify ourselves as part of a group, about humor, about emotion, about the structures of interpersonal power dynamics, which vary quite a bit across cultures. We need to learn more about ourselves so that we can teach the machines. So that's our job.

All right, so what I wanted to understand here is: while all these conversational apps do communicate, how do you bring in the human aspect of connecting with customers and empathizing with them? It could be very easy for these apps to talk in that language and show words of empathy, but when it truly comes to modulating the tone and not having a very monotonous tone, so that people can really feel like this app is empathizing with them — how do you handle such challenges?

That's a great question. I think part of what makes these things feel so real and magical today, even, is that speech synthesis has improved so much lately. I don't know if you guys have Google Maps or something giving you directions — it's actually pretty good relative to the old days of speech synthesis, which sounded almost like, I don't know, a robot, essentially. So that's getting better, I think. But yes — you know who needs to understand this? Linguists and cultural theorists and people who have a grasp of media studies. These are people from the humanities; they're not necessarily engineers. And I think one of the things that has happened in the last, I don't know, 20 or 30 years, with the rise of computers, has been to separate engineering from the humanities. One thing we do at Cooper is borrow a lot from the social sciences — anthropology and psychology — as we study the people that we wanna design things for.
And I think the same strategy needs to happen here, which is bringing in experts who understand this from studying humans, and helping them translate that knowledge into these new systems. But it's a hard task.

Hello — where do you see this market 10 years down the line?

I'm sorry, what's the question?

Where do you see this market 10 years down the line?

10 years down the line, yeah. I think, you know, one of the things about the Amazon Echo is that the breakthrough wasn't that it's necessarily more sophisticated than Siri, which predated it by several years. The breakthrough was environmental, contextual. It was that it could sit on the shelf and you never had to take out a phone or fiddle with it. It was just ambient; it was always listening — which has its own sort of surveillance implications. But 10 years from now, I think it'll be even more pervasive, even more ubiquitous. I think we will have imagined use cases for conversational interfaces, or for voice-driven devices, that are not in the home and not consumer-based. I think it'll actually make a lot of inroads into verticals that we haven't imagined, like medicine, where hands-free, spoken operation could be hugely valuable. Heavy industries even, where you need reference materials on the fly, or manufacturing, where you need advice or data at a moment's notice or unexpectedly. I think you're gonna see this permeate industry. I think you'll also see it permeate business in the sense of productivity, as I said before. In a lot of our daily lives, we'll find that we rely on these machines more and more, little by little — instead of referring to our calendar every second, or maybe even instead of referring to our smartwatch to tell us where to be, we're gonna have things kind of talk to us a little bit more. So I think we'll see that amp up as these systems become more sophisticated and as they access greater pieces of our lives. Yes?

I don't know if it's the right question to ask, but given that this is going to be very prevalent, and voice is going to be the next big thing — everybody agrees on that — how are companies like yours, who specialize in intuitive user-experience design, changing their strategy?

Yes, that's the big question, right? I mean, everyone thinks of design as drawing screens, and we like to think of it as designing experiences. But it turns out that all the tools that we've made for great screen-based thinking work equally well without a screen. So tools like doing user research to really understand user motivations and user goals apply equally well here; creating user models and personas applies equally well; telling stories about those users as they succeed in their daily lives also applies. So a lot of these tools we can port over to this new context, or this new medium, and I think they will be equally successful. That said, I also think we're going to have to design new tools to deal with this stuff, because it's different. It's not quite so session-based, maybe, in the future — just having kind of an ambient assistant that's always there, but that you're not staring at or paying attention to. It's going to require a different kind of design, I think, that could be—

When the visual medium to verify what I have typed is missing, there has to be something else.

Yes, feedback, exactly. Visual feedback, auditory feedback, those kinds of things.
Right now we have some designers at Cooper who are really interested in motion design, and motion design is interesting because it can help you understand what's going on in a really compelling way, visually. You don't have that with these devices, at least not yet. There's a lot of talk about maybe adding screens to them — that's maybe the right thing or the wrong thing, I don't know. But what's the equivalent? The equivalent might be sound design, right? And I don't think that's a core competency yet in the user experience field, but maybe it ought to be. So maybe we need to hire more musicians. We should get on that.

So it was really fascinating, those ideas. Looking at the social angle, for example: nowadays, ever since the invention of Android phones, people are stuck to their phones and they don't even talk to each other. For example, in a family itself, the father is not talking with the mother, the son is not talking with the mother. That kind of scenario has developed in our world. So have you looked at the social angle of it, and how are you going to address that?

I mean, there are a couple of answers — it's a really good question. It could be that the social angle, so to speak, is kind of mediated by a third party. So maybe it's Facebook that drives all of that, and you just use this voice device as kind of a terminal into Facebook, right? It could also be that these platform vendors somehow figure out a way to standardize things. With mobile technology, it's sort of a triumph of standards, really, right? We have GSM, we have SMS — we have all these global technologies. I have my phone right now that I did nothing to, and it still works, even though I've flown around the world to be here. That's crazy, and that's amazing. The question is: will the same kind of convergence or standards-based thinking happen in voice? Maybe not. The problem with voice is that, of course, it's not yet driven by open-source software, which is one of the drivers behind that standardization globally for mobile technology. So I don't know the answer, but I hope — I hope that that's the case. Thank you.

All right, I think we're running late, so we'll try and wrap up the session, but Nate is gonna be around if you guys wanna ask him a question. Thank you so much, everyone. It's been an honor. Thanks, Nate. Thank you. Thank you.