It's nice to be here. I spend half my time in Boston and half my time here, so it's nice to be back. My name's Amber Case. Could you all just hold up your phones? I think you won — you're the fastest one. So the idea is that you're all cyborgs, but some of you are more reluctant cyborgs than others. And you don't have to have anything implanted to be a cyborg. The term cyborg actually came from a 1960 paper on space travel, and the definition is an organism to which external components have been added for the purpose of adapting to new environments. The whole idea of a cyborg was space travel: that you could adapt as a human to places humans shouldn't really go and aren't able to survive.

And so I got my start studying what it means to be a cyborg — what it means to wake up next to a phone before your loved one in the morning, to have a device that cries, and to have to pick it up and soothe it back to sleep, or have it turn against you if you don't upgrade it fast enough. I was studying cell phones in 2007 when I was writing my thesis. I wanted to study something close to home. In anthropology, you usually go to other countries and study other people — this idea of anthropological others. You say, oh, how interesting their rituals are, how curious their tools are. And the entire time you forget that you are the weird person. You're the one you might want to study, because just 15 years ago we didn't have these little clicky devices that we're dealing with all the time. So I went around San Francisco and I watched people put their dog on pause as they were walking down the street so they could answer a text message. I pretty much just looked weird with a little clipboard, writing down what people did. And a lot of people in my class said, why are you studying cell phones? It's not really a big deal. Why are you studying social media?
And I said, well, one day everybody will have one of these, and the world will have changed, and we won't really notice it as much. And along the way, I found this concept called calm technology. And I found all these different quotes. One quote I like to cite is this: 50 billion devices will be online by 2020. It depends on what you call a device and what you call online, but we're on our way to having many, many more connected devices than before. Usually this quote is given at conferences where chip manufacturers just want to make sure we put more chips in everything, without considering the consequences. It sounds exciting, but anytime somebody gives a quote like this, I like to ask whether it actually sounds good or not. Let's consider the things we already have in our lives.

When smartwatches first came out, they just replicated all the notifications you already had on your phone, so you had to go change them. It's nice if you have an IRC or Slack alert, but people were just getting panicked because they were getting double notifications. And you've probably had a phantom ring or buzz in your pocket anyway, when you think somebody's trying to get in touch with you — now you get that plus your wrist. Or there was a whole year where lots of different device manufacturers kept trying to get me to help make these smart fridges. They said, we want to make a smart fridge. And I said, can't we have smarter people, not smarter devices? Why do we need a smart fridge? And they said, well, you can have a payment plan where it'll lock you out if you've eaten too many sweets. And I said, okay, I store my sweets in the dumb cupboard, not in a smart fridge. I don't have to worry about this at all. Or the idea that it will tell you when the bananas have gone bad. The banana has a peel; it changes color. I know what it's about.
I don't need $100,000 of technology to tell me that — plus a startup in the background that's only going to have funding for two years and then go away, and then my fridge will be bricked and I won't be able to use it anymore. When you put all this together, you get what I like to call the dystopian kitchen of the future. This is what it's like to program right now, right? Everything's written in a different language. Not all of it's really documented. There's open source that's exciting, and some that's not maintained anymore. Contract employees want the new language; some people don't. People leave and don't leave anything behind, and you're expected to maintain it. Half of it's hackable. I can imagine your teapot getting hacked, and that getting your password. I think on the Samsung fridge — apologies if Samsung is a sponsor — but on one of the Samsung fridges, you could use it to get somebody's Gmail password. It's kind of a cool idea: all you have to do is walk into their house and you've got their email. There have been times when these things have failed already, and this just increases the surface area for attack. I don't know about you, but I don't want to have to be a system administrator to live in my own home. I actually just want to go home and have an enjoyable time.

For those of you who have pets, you've probably used an automated pet feeder when you leave, and the ones that attach to a wall and run on a timer are pretty reliable. But this startup called PetNet said, we'll go one step further: we'll make it so you can Skype your pets too. So you can feed them, water them, Skype them, and check in on them. So what do you think happened with this startup? Of course, the server went down, and with these connected pets, people didn't really know what was going on. They didn't know how long the server would be down for. There was a Twitter update that said, sorry, our servers are down.
So people had left their pets — the website said: you can leave your pets, you can Skype them, you can check in on them, never worry about your pets again. A couple of weeks later: oh no, everyone's ringing up neighbors and breaking into their own houses, coming back early from vacations, things like that. And well — the site said you don't have to worry. So do you really have to worry? So I went and read the terms of service, and it says: you agree that you will not rely on the services for any safety or critical purposes related to you or your pet, and the service is not intended to be 100% reliable. So where are the consequences? Then I tweeted at them and said, so the one thing you were supposed to do, you didn't do. And they said, yeah, well, we didn't implement offline support. That was in the bug-tracking system.

This is what I'm concerned about in the rush to just get everything out. And it's not usually the developers' fault — it's really the managers and bosses and the market saying: you have to make the product, you have to get it out, we don't care about code debt, we don't even know what that is, we'll just throw it out, it'll be fine. Then these things fail, and as we get closer and closer, as cyborgs, to having these functions support our day-to-day, when things fail it's catastrophic. So we have an era of interruptive technology. It's not just interruptive in that we're getting text messages all the time and robotic notifications — we're also dealing with bad battery life, disconnected networks, servers that go down. So how do we deal with that? How do we design technology for suboptimal situations, instead of the perfect situation we design for in the lab? We need the opposite of what we have right now.
We need a kind of calm technology, where the tech recedes into the background and supports us — amplifies our humanness instead of turning us into robots, as we kind of are when we yell at Siri four or five times in a row because some of us have a slight accent or a lisp and it won't understand us the first time. Maybe the only time a technology understands you the first time is in Star Trek, or in films, where they can do 40 takes and just feed it the script. And that aesthetic from film is what gave us these unrealistic expectations that technology can understand us better than we understand each other. We don't even understand ourselves, not to mention the person next to us. How could technology understand us better than that? So we have to work with the limitations of the technology we have and make better experiences.

When I was writing my thesis, I found Xerox PARC and Mark Weiser. Weiser, who died in 1999, is considered the father of ubiquitous computing, which we now kind of know as IoT. And the idea behind the research many of these people did is that they didn't just have pure engineering — they had anthropologists and artists and social scientists to balance out the pure engineering. And what they said, in this paper called The Coming Age of Calm Technology, is that in the future the scarce resource will not be technology; technology will be incredibly cheap. The scarce resource will be our attention. And how we use tech, and how we design to handle that attention, will make for great future technologies, because there's a limit to how much attention we have. I think the amount of attention and information our devices force us to deal with every day is akin to being a World War II general, dealing with all these information streams and having to coordinate them. But that's a wartime situation. We're dealing with that much information on a daily basis.
So how do we take our time back a little bit so that we can think a little bit better? One of my favorite quotes is from Mark Weiser: a good tool is an invisible tool. And by invisible, it's not some invisible interface that doesn't exist — the good tool is invisible because you can focus on the task, not the tool. If you look at somebody who's really great at woodworking, they don't focus on their tools and have to update the saw or whatever; they just use the saw, and it's an extension of their hand. It's just a tool. And a book is fantastic because the complexity is internalized; it's contained. You close the book and there's the complexity, and you store it on a shelf, and if you want the complexity, you open the book back up. It's pretty stable. You don't have to worry about updating the book — it's just there. You can throw it on your shelf, and when you read a book, the interface dissolves. You don't even notice you exist anymore, if it's a good, well-written book, because it's just there.

So how do you design a calm technology? I've taken some of Weiser's research, compiled it, and put it into a series of principles. The first one is that technology shouldn't require all of your attention — just some of it, and only when necessary. An example of this is a tea kettle. You set it, you forget it, you can go into another room, and it'll call to you when it's done. It's really simple. This kind of calm technology is just there when you need it, and not when you don't, and it lets you know. This is part of using your peripheral attention. If you think about technology empowering your peripheral attention: you have very high-resolution attention in front of you, and lower-resolution attention around the back of your head. You can hear something in the background. You can feel something by haptics, by touch.
How do you compress information into those different senses so that you stay attuned to your environment, get the same amount of information, and aren't distracted out of it? Think of a car: everything about the car attunes you to your primary focus in front of you, driving on the road. Everything else — the foot pedals, glancing back and forth at the rear-view mirror — is secondary and tertiary attention, keeping your focus on that goal and keeping all the variables, where the cars are, in your head. A silly example is the Lumo posture sensor, which just buzzes you when you're slouching.

Technology should inform and create calm. How do you compress information into a kind of ambient awareness? My old co-founder and I made this tiny Hue light bulb system. It's just a Hue light bulb attached to a weather report. So instead of having a disembodied computer voice wake you up and tell you the weather report, you just walk into the kitchen and it shows, as a color, what the temperature is going to be like for the day. I walked into the kitchen one morning — this was a few years ago — and it was yellow, and I knew it was going to be warm. I was really excited, because most of the time I wake up it's just blue. I didn't have to look at the full weather report if I didn't want to, but there's an iPad on the wall so you can see it anytime. The idea is low resolution, then high resolution: give the person a choice to go from low resolution to high resolution, so that they're not attacked by the information.

Technology should amplify the best of what tech can do and the best of what humanity can do. If you try to make a technology that acts like a human, you end up putting humans on pause — you end up making humans act like robots. I like Google, because when it first came out, people tried to scroll down the page since they didn't know how to use it. The idea of Google is that it's just a human switchboard.
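The weather-light idea described above is easy to sketch. Here's a minimal, hypothetical version in Python — the temperature range and the color mapping are my assumptions, not the original implementation, and pushing the hue to a real bulb (e.g. via a Hue library) is left as a comment:

```python
def temp_to_hue(temp_c):
    """Map a forecast temperature (Celsius) to a color hue in degrees:
    cold -> blue (240), mild -> green (~120), hot -> red (0).
    The -10..35 C comfort range is an assumption for illustration."""
    t = max(-10.0, min(35.0, temp_c))   # clamp to the assumed range
    frac = (t + 10.0) / 45.0            # 0.0 = coldest, 1.0 = hottest
    return round(240.0 * (1.0 - frac))

# Glanceable output: one color per morning, nothing to read.
for label, temp_c in [("cold", -5), ("mild", 15), ("hot", 30)]:
    print(label, temp_to_hue(temp_c))

# Sending the hue to an actual bulb would be one extra call through a
# library such as phue -- omitted so the sketch stays self-contained.
```

The point of the design is that the mapping is lossy on purpose: you get one low-resolution signal in your periphery, and the full forecast stays one tap away.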
It's just connecting you to what other people have done on the web, with a bunch of bots behind the scenes. It's doing 80, 90% of the work for you, giving you results, but you're the one choosing among the results. It's not trying to make a choice for you. In this way you're collaborating with the machine side by side. You're not having the machine choose something for you, which will almost always be wrong — or, if it's right, really terrifying, like on Amazon.

This is my friend Tom Kaufman. He found that all these biologists were scanning tissue samples by hand, and they would lose their vision over time — horrible. He said, well, I'll just take some Alienware computers and make a robot that scans the tissues, doing two-dimensional scans of the three-dimensional tissue sample, and then, with a network of doctors and a voting system, you can send a sample in, see whether it's cancerous or not, and send that back to the database. And that cybernetic feedback — not an AI, not machine learning, but a cybernetic feedback loop of humans and computers — was more accurate, and able to predict cancer further and further in advance of when it actually hit. That's much more interesting to me than a closed system that you think is perfect. You say: these things are imperfect, we can improve them over time. Let's make something that's not entirely great, but that will improve. Even if it's at 80, 90%, it's better than what looks like a perfect system but doesn't scale at all.

Technology can communicate, but it doesn't need to speak. C-3PO knows however many hundred thousand languages, but he's annoying in every single one of them. R2-D2 just has a language of cute beeps, and no matter what country you're from, no matter what background you're from, no matter what language you speak, you know exactly what R2-D2 feels, because you can hear this universal language of tones.
When we have devices that speak to us in a human language — in English — not only do we have to translate them into a bunch of different languages to ship them all over the world, but we create limitations, because we expect to be able to communicate with the device the same way it communicates with us. And that's one of the problems with things that speak at us. There's an old installation that was just a servo motor attached to a string, and it would whir around based on the server activity at Xerox PARC. And if you heard it, you knew that somebody was doing something cool, and then you'd run around the lab and try to figure out what they were doing. Everybody was wearing location-tracking badges, by the way, so you could kind of see them on a board. It was fun, it was a lot of social activity, and it was ridiculously silly. But this kind of art project — they had artists there who were just doing weird things — this fun indicator of what was happening didn't actually tell you what was happening. You had to go find out, and that created connections across different departments.

The Roomba just gives you a tone: it goes da-da-da-da when it's done, and da-da when it's stuck and you have to help it out. And it's an awful vacuum cleaner — it doesn't even clean the corners — but people find it so adorable that cats ride around on it in YouTube videos, because it doesn't try to speak to you in English when it's not shaped like a human. It's shaped like a trilobite, a filter feeder — it's a prehistoric device, and that feels more right.

Technology should consider social norms. What is a social norm? Well, everybody in this auditorium has a phone with a video camera in it, and that's normal. We aren't panicking like 15 years ago, saying privacy is dead, don't point that thing at me, what is this? Now people just use the cameras on their phones to take pictures of their food before they eat it, as a kind of pre-digestion ritual.
So whatever is normal is invisible, and you have to figure out what that norm is. Anything that restores you to that invisible norm is accepted — like a prosthetic leg or glasses; they can even be decorative. We don't look at somebody with glasses on and say, ah, that's terrifying. But we do look at somebody with Google Glass that way — the general public does. So you have to think: what was the difference between, say, Google Glass and an Apple product? Apple innovated by making a digital Walkman that could store more than a CD. You could wear that around, and eventually it led to a touch interface, and then to an app store where 14-year-olds can make a fart app where you press a button and it makes a fart sound. And that led to dinner-table discussions and friendliness and silliness, and then lots of apps being made in the store. And then you had Google Glass, where one of the only apps that came out for it that made a lot of sense was blink-to-take-a-photo, which was terrifying. When you release all of these features at the same time, the press usually focuses on the scariest one, which is persistent video recording. But you can't do that on Google Glass: it makes the display really hot, you can't even wear it for that long, and it runs the battery out. But tell that to the press — that's not as exciting as "the world is over and Terminator glasses are here." So you have to figure out the metabolism rate of the people using your technology, because people can only handle maybe one new feature per year.

When elevators first came out, people were terrified that they'd be compressed into a disk, because they had never accelerated in that direction before, so elevators were artificially slowed down. And elevator operators were the only people who could press the buttons; people were afraid to press the buttons.
And then, when the elevator operators went on strike, people realized they could press the buttons themselves and they wouldn't die — and that's why we press our own buttons on elevators now. This is how long it takes for new technologies to get into an ecosystem.

The right amount of tech is the minimum needed to solve the problem. This is the hardest thing, especially when companies are getting so much funding right now — they're all just spending money on everything they can. I did a startup a couple of years ago where we said, let's raise the least amount of money we can. We wanted to do location-based messages. We said, well, we can't afford text messages, so we'll try this new thing called push notifications. We didn't know how that worked, so we learned how it worked. The whole time it was just: what can we not spend money on? Because each new feature, every single thing you add, is one more thing you have to support, and it becomes more complex. So how can you use the least amount of tech to get the thing done? It'll last for a really long time, and that's hard to talk about when everybody's trying to build things really, really quickly.

These are my favorite technologies. The toilet occupied sign on a plane is like the only sign that doesn't change, because it's a pictogram, so anybody speaking any language will understand it. Even if you're red-green colorblind, you understand it, because it lights up whether it's occupied or not. And the streetlight is just punctuation for reality. When we created cars and urban streets and districts, we kind of figured it out: it's a light indicator. Could you imagine making a streetlight today? You'd have to Bluetooth to each intersection on your phone, then disengage and press ten buttons, and then it would send you a little alert that says the light is green. You wouldn't be able to see which light it was or what you'd done to it, and it would be awful.
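The location-based messaging idea above can be sketched with very little machinery, in the spirit of "the minimum to solve the problem": a stored message fires only when you enter its geofence. Everything here — the coordinates, radius, and the in-memory message store — is hypothetical, not the startup's actual design:

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6371 km

# Hypothetical dropped messages: (lat, lon, radius_m, text).
messages = [(45.523, -122.676, 100, "Try the coffee cart on the corner!")]

def check_location(lat, lon):
    """Return the text of every message whose geofence contains (lat, lon).
    On a phone this check would run on a location update and trigger a
    local push notification; here it just returns the matching texts."""
    return [text for mlat, mlon, r, text in messages
            if distance_m(lat, lon, mlat, mlon) <= r]

print(check_location(45.5231, -122.6761))  # inside the 100 m fence
print(check_location(45.53, -122.68))      # well outside -> []
```

The design choice matches the talk: no server round-trip is strictly required for the trigger itself, which keeps the feature count (and the support burden) minimal.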
So technology should work both near and far. This is my kind of weird graph of what I think could happen in the future. We had computing that was really far away from us, in the mainframe era, where you'd sneak out in the middle of the night in high school to use your timeshare slot from 3 a.m. to 4 a.m. Then we had desktops, where everybody kind of had their data in their own house — which was a really nice time, because you didn't need any web access; there wasn't really a web. You could dial into a BBS if you wanted. But then suddenly all this data started to move away from us, to remote, mainframe-like cloud things. I know that my Facebook photos are stored somewhere in The Dalles, but I can't just walk there and say, hey, can I have access to my photos in the server facility? They'd kick me out and report me for trying to hack into their server facility. So much of what we do now is not with us, and if you put your phone in airplane mode, you probably wouldn't be able to do anything other than take photos, write notes, and maybe use the audio recorder.

What I'd like to see happen is that we go back to a kind of desktop era, but on our devices — that we use each of our devices as a kind of personal server. So if I go to the doctor, I can say, hey, I'm going to share my personal data with you for the next 30 minutes in order to get a treatment plan, and then that data gets stored back on my own personal, on-person server. And if somebody hacks into that database, they're only getting the data that's been shared in the last 30 minutes. It's all about decreasing the surface area for attack, and the excitement of trying to get into these kinds of iceberg-style databases. Not to say that exactly this would work — there are a lot of interesting technologies that break data up and let you download pieces from other people around you.
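The 30-minute data-sharing idea above could be sketched as a signed, expiring token issued by your personal server. This is only an illustration of the concept — the key, the scope name, and the token format are my assumptions, not a real protocol:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"my-personal-server-key"  # hypothetical key held only by the owner

def grant_access(scope, ttl_seconds, now=None):
    """Issue a token that lets the holder read `scope` until it expires."""
    now = int(time.time()) if now is None else now
    payload = json.dumps({"scope": scope, "exp": now + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_access(token, scope, now=None):
    """Accept the token only if it is untampered, in scope, and unexpired."""
    now = int(time.time()) if now is None else now
    raw, _, sig = token.partition(".")
    payload = base64.urlsafe_b64decode(raw.encode())
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    return claims["scope"] == scope and claims["exp"] > now

# Grant the doctor 30 minutes of access (fixed clock for illustration).
tok = grant_access("medical-records", ttl_seconds=30 * 60, now=1000)
print(check_access(tok, "medical-records", now=1000 + 60))       # True
print(check_access(tok, "medical-records", now=1000 + 31 * 60))  # False
```

A stolen token is worthless once the window closes, which is exactly the "smaller surface area for attack" point: the breach is bounded by what was shared, and for how long.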
IPFS is interesting to follow, but it'd be nice to see a mix. Process as much as you can on the device itself, because there's a lot of power there. That way, when you're on a subway and you go out of network range, you don't have a really bad experience. I get discouraged by Sirius satellite radio: you go into a tunnel and the song stops, because they won't even buffer three or four seconds. You've downloaded most of the song anyway — why not keep more in the buffer? Why not assume the experience is going to be bad? Most people are going to have a bad phone and horrible battery life. Why not make something work even when it fails? An escalator reverts to stairs when it fails. Why can't our technology, when it fails, degrade gracefully, level after level, so that we can guarantee a good experience for people? It's harder to build, but the support requirements are far less if you do it right.

So: good design allows you to accomplish your goals in the least amount of moves, and you try to take moves away until there's nothing left to take away. And a calm technology allows you to accomplish the same goals at the least mental cost. Because, I don't know about you, but when we talk about automation — do you really want to automate hanging out with your kids, or falling in love, or getting married? I mean, some of you might. The whole idea is that we have an excess of what the Greeks called chronos time. It's a kind of industrial time, where it's 9, 10, 11, and everything is structured and anything can interrupt you. But the Greeks also had a name for the opposite: kairos time. Kairos time is undefined time, where you lose track of it, where you go into a state of flow, where you say, why was I doing that thing for six months? That was totally unnecessary. Usually it's when you're on vacation that you have that epiphany — and then you have to go back to the work schedule.
So if we make technology a little bit more like what they imagined at Xerox PARC, we can have some of that time back — we can get some kairos time back — and then we can be more intelligent. Instead of having smarter technology, we can have smarter humans, because the scarce resource is our attention. That's the thing we need to pay attention to and steward; otherwise we'll just be doing the same thing over and over again with more funding, and it will crash in catastrophic ways we've never seen before, and we'll have to clean up the mess. So I wrote a book on this called Calm Technology, and I'm writing another book about audio alerts — if you have to have them, how should you make them sound? That's a sequel that will come out later this year. And I made a website where I put together all the original research on calm technology. It's just calmtech.com, and it has the principles there. I really encourage you to read the original research papers — some of them are from 1989, 1991, and they read like they were written today; they were really ahead of their time. So thank you so much.