Hi, so I'm here today to talk to you about making superhumans. I know that sounds a little outrageous, but I'm telling you it's possible, especially with all the cool new exponential technologies that are developing. I mean, why not? We're already cybernetic. We already do most of our lives through technology. How many of you Googled me before coming to this early morning talk to decide if it was worth it? Yeah, see? And a bunch of you are meeting new friends and colleagues and trying to decide if you want to work with them, so you might look them up on Twitter or LinkedIn.

The other day I watched a YouTube video to figure out how to tie my husband's bow tie. I felt like Trinity from The Matrix. Only there was something missing, right? It wasn't as immediate as Trinity's experience. It was kind of painful, actually, because I had to watch the video and then figure out how to make it happen in real life. The two were not combined. It was a linear progression, not the instant download that Trinity got. So I would say that we're cybernetic, but with an anchor holding our brains back. It's like technology doesn't want us to catch up with it, and it's keeping us at bay.

Now, let me give you an example of what I mean. There's another cybernetic tool we use called GPS, right? I used it this morning to get here. Got a little lost because I didn't understand the map. This happened to me recently. I was in London, or England, actually, and I was stupid enough to rent a car, which was already a challenge because I'm driving on the wrong side of the road, on the wrong side of the car, and it was a stick shift, so I'm using the wrong side of my body to do the manual part of it. And on top of it, I don't have a lot of familiarity with roundabouts. In San Francisco we have some fake ones, but they're basically four-way stops.

So I get this map indication telling me that I'm about to hit a roundabout, and I'm already nervous. I'm already stressed out because I'm driving on the wrong side of the road, so I already have very limited ability to multitask, okay, because I'm trying not to die or kill anybody else. And then I get this map. And on top of it, the map is indicating an intersection that I've never seen before and don't understand at all, and I really don't understand the relationship between the map and that intersection. So what happened is that I ended up driving around this stupid intersection about four times, eventually screaming about Big Ben and Parliament, before I found the right turn, okay? And it was really stressful.

The reason that when I say we're cybernetic, it doesn't feel like, yeah, great, we're cybernetic, it feels like, ugh, we're cybernetic, is because the burden on the user is so heavy. The user is doing all of the work, right? It's very confusing, and especially in multitasking situations, this becomes a big problem.

Now, humans have working memory, just like our computers do. Oh, oh, there we go. Humans have working memory just like our computers do, and we only have so much of it, okay? It's your short-term, or task-based, memory, and it's what you hold in your mind just to get the task done that you're doing. And when that gets interrupted or overloaded, we fail. Humans are horrible at multitasking. We all think we're good at it; we're not. We're all awful at it. And the reason for that is very simple. I know all of you have had this experience.
You're sitting at home on your couch, and you're like, oh, I have to go get that thing from the other room. And you walk into the other room, and you go, why am I here? What am I doing here? And you have to go back to the original room to remember. Well, that's because your brain only holds that short-term, task-based memory in the context in which it's being used. And when I switch into a different room, suddenly I've wiped the context, and my brain says, oh, good news, we don't need this stuff anymore. And it wipes my working memory, my short-term memory, freeing it up to do new things. But of course, that causes a glitch in our memory.

Now, our UI... I'm giving up on this clicker. Our UI is a multitasking nightmare. It is completely built around constantly loading your short-term memory with contextual switches. Here I am trying to put an image into the presentation that I'm working on right now. As you can see, I'm focusing on the slide itself. And then I say, oh, I want to go find that image. So I have to do a task switch: I have to look at the interface to find the new application that I want, and then I have to go searching.

Now, every single time I touch my UI, every time I switch between the thing I'm focused on and hunting for something else, I'm creating a doorway that I have to walk through. So every interaction with my interface is basically walking through a door and wiping my memory of what I was doing before. This is called an interruption. And interruptions can cost us anywhere from three seconds to 30 minutes to get back on task. So every time you're touching your UI, or asking your users to touch their UI, you're imposing a major cost on them. So much so that in an eight-hour workday, we waste about 3.5 hours of that day messing with our interface. That's an astronomical amount of wasted time in which we could be getting stuff done and accomplishing far more interesting things than playing with a UI. And that's because, essentially, our computers are dumb. They have no understanding of the context of use. They only know what they know, which is a lot, but it's not enough to save me those 3.5 hours of my day.

So while I was at Microsoft working on HoloLens, I made my friends wear GoPros on their heads, which sounds like it would be an easy ask, but it wasn't. I had to beg, trust me. And I had them record every single thing that they saw. My computer is super hanging. Go to the next slide. My computer's frozen. I don't know what's happened. Sorry, guys. All right. Well, let's hope this can continue. Sorry. Ah, "human-computer interaction cost."

All right, so I made my friends wear GoPros, and I had them record every single thing that they did all day long. And you'd be surprised at which things they kept private and which things they felt were perfectly reasonable for me to watch them do. But even that became very predictable, because the fact of the matter is, you're boring. You are very boring people. You do the exact same things at the exact same time with the exact same people every single day. Within two or three days, I could predict very easily what you were most likely doing in a given context with specific people. It was not that difficult, based on understanding your context. And that's especially true when it comes to interacting with your technology. You're very predictable with your computers and your phones. In fact, you check your phone about 2,600 times a day.
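To put rough numbers on that, here's a back-of-the-envelope sketch using the figures from this talk: roughly 2,600 checks a day, and a recovery cost of anywhere from three seconds to 30 minutes per interruption. The five-second case is an assumption of mine, just to show how little it takes to reach that 3.5-hours-wasted figure:

```python
# Back-of-the-envelope interruption cost, using the numbers from the talk:
# ~2,600 phone checks a day, each costing 3 seconds to 30 minutes to
# recover from. The 5-second case is an assumed "typical" value, not a
# measured one.
checks_per_day = 2_600

for seconds_each in (3, 5):
    hours = checks_per_day * seconds_each / 3600
    print(f"{seconds_each} s per check -> {hours:.1f} hours a day")

# 3 s per check -> 2.2 hours a day
# 5 s per check -> 3.6 hours a day
```

Even at the bare three-second minimum, that's over two hours a day gone to refocusing.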
Can you imagine how many interruptions that introduces into your world? And every time you touch your phone, you're most likely doing Facebook, email, Twitter. Facebook, email, Twitter, or some derivation of that. I know, I'm older, so I do those things. But, you know, it's why you have quick routes to certain applications, like your most recent or your favorite applications: device manufacturers have known you're habitual users of your technology for years. It's just that we couldn't get very specific, because it was adapted to, you know, you and two million people like you.

So right now, in our human-machine interface, in our HMI, what we do is train humans to push the right buttons in the right order to get the technology to respond the way we need in the context. And as I've shown, that creates a huge cognitive burden on the user, because they're constantly having to tell the interface to be quiet. It's kind of like a petulant little child that insists on telling you every single thing that it knows, regardless of the fact that you're trying to talk to an audience of people, right? It's telling you every single thing it knows, every single thing it can do, at any given time, without ever understanding that it's inappropriate to tell me those things at that time.

For example, I love to watch British murder mysteries when I'm falling asleep. It's something I do every single night. Regardless of what time I go to sleep, that's what I'm going to do. Yet somehow, these are the recommendations I get from Netflix. Yeah. I don't want to watch any of that stuff when I'm falling asleep. Maybe when I'm sitting down to have some fun, but I can guarantee I'm never going to watch "The Do-Over." That's not going to happen. Anyway, it's telling me every single thing that it knows just in case I might want to know it, but I don't. I only want to know this one thing, and it's up to me to sift through all this stuff and find the one thing I'm looking for. And the computer is smart enough to know the one thing I'm looking for, if it were given the right context.

Now, in the early days of my design career, I worked for somebody named Fred Stafford. He was an advertising guy, and he used to train bodyguards how to prevent kidnappings. Really. Trained bodyguards, right? And he told me that if I messed up, within two to three days he'd have me in the trunk of his car, because he could figure out exactly what I was doing and why. And trust me, I think he was joking, but maybe not. Anyway, if Fred can predict my patterns, why can't supercomputers? It seems like they should be smart enough to figure things out. If only there were some way to train them on context, maybe we could do that.

The good news is (slide missing), the good news is that a bunch of exponential technologies are finally converging to make that happen. We have the portable handheld computer, a.k.a. your phone, which tracks your every single move. And if you don't believe me, check out Google Timeline. It knows everything about you. We have wearable sensors that will tell you every little detail about your physical state, beyond the amount of information you want. I mean, I have a Fitbit. It sits in a drawer at home because I don't know how to deal with all the data. It's just too much data; it doesn't do me any good. And we have the Internet of Things, which is tracking all of your interactions with your space and the objects in that space.
We have self-driving cars, which have dramatically advanced machine vision, so we now have really accurate computer vision that can understand what it's seeing. And then we have a whole bunch of augmented reality devices about to hit the marketplace that will see every single thing that you see, plus a bunch of stuff that you don't, through things like infrared sensors and other fancy tools. And then we have the rapidly developing field of artificial intelligence and machine learning. What's most exciting about that, for our conversation today, is unsupervised machine learning, which means it can take all that raw data I'm throwing at it through all these other means and determine the patterns of me specifically. Not me and two billion women like me, but me, Jodi, or you, or you, right? It will find your particular patterns. And then, just as the self-driving car takes action on our behalf before we even know it or are aware of it, the technology can start to do that for us. It can start to reduce the number of interruptions, the number of choices on my screen, the number of hoops I have to jump through to get things done.

So that is basically when tech starts speaking tech. The technology already knows how to speak tech; I don't have to learn it. The technology can learn to talk to itself. We're seeing a lot of that happening right now. But I think we can do better. And we're going to have to, if we want to make superhumans.

As Clarke said, any sufficiently advanced technology is indistinguishable from magic, right? And that's exactly what we're about to be able to do. Now, any good magician will tell you that magic is skipping the right steps. If I were a magician and I came up here on stage with a rabbit next to me, shoved it in the hat, showed you all that it was in the hat, and then showed you the false bottom in the hat, you wouldn't care about me pulling the rabbit back out of the hat. You'd be like, of course, it was in the bottom. What? Right? The magician is only effective if he skips the right steps. He goes up on stage with the rabbit already in the false bottom and pulls the rabbit out. That's the magic. Not all those steps in between. Not all that cognitive burden that we're currently experiencing.

Now, our brains do exactly the same thing. Our brains skip the right steps through innate cognitive functions. What the brain does is offload things from your short-term working memory into these innate cognitive functions that you're not even conscious of. One of those is proprioception: I know where my hand is. Most of you know where your bag is right now without having to look. You just know where it is. These aren't things in your short-term memory. These are innate cognitive abilities. And our brain is very effective at using them to reduce the burden on our short-term memory.

So just as ergonomics works to optimize physical systems for the most human productivity, we can do the same thing with cognitive ergonomics. We can start to adapt our technology to communicate with us through our innate cognitive functions, to activate a lot of technology without us having to be actively conscious of it, without it having to create an interruption.
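To make that idea concrete, here's a minimal sketch of the kind of per-person context learner I'm describing. Everything in it, the (hour, place, company) features and the example observations, is hypothetical; a real system would learn far richer features from raw sensor data. But even a plain frequency table captures how predictable we are:

```python
from collections import Counter, defaultdict

# A per-person context -> action predictor: count what THIS user does in
# each context, then predict the most frequent action for a new context.
# The features (hour, place, with_whom) are illustrative stand-ins.
class ContextPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, hour, place, with_whom, action):
        self.counts[(hour, place, with_whom)][action] += 1

    def predict(self, hour, place, with_whom):
        seen = self.counts.get((hour, place, with_whom))
        return seen.most_common(1)[0][0] if seen else None

# Two or three days of observation is often enough:
p = ContextPredictor()
p.observe(23, "bed", "alone", "british murder mystery")
p.observe(23, "bed", "alone", "british murder mystery")
p.observe(9, "office", "team", "email")

print(p.predict(23, "bed", "alone"))  # -> british murder mystery
```

A recommender with that context would know to queue up the murder mystery at 11 p.m. instead of an action movie.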
I know that sounds a little bit crazy, but let me tell you, when we do that, when tech finally learns how to speak human and communicate directly with our brains the way we've adapted to use the rest of the world, suddenly we start to get some superpowers. And that's when it gets really magical.

Let me give you an example: spatial cognition. Spatial cognition is really important to humans, and there's a reason for that. Take your two hands and put them, fingers laced, on the back of your head. That's the size of your spatial cortex. That's a huge part of your brain. Go ahead, do it. Nobody's going to laugh at you, I promise. Okay, I'm laughing at you. No, just kidding. Right, so that's about the size of your spatial cortex. That's a big part of your brain. And over 200,000 years, we as humans have evolved to use it very effectively for all kinds of really important tasks. So much so that we say spatial memory is free. Spatial memory is free, right? Because you're not really thinking about it. It just is there. We're constantly creating a spatial map of our world and our surroundings, and we use our vision just to update that map occasionally, to catch the changes in the environment.

So we use it for all kinds of things. First of all, dimension. I can tell the difference between two objects by their size and shape. I can tell the difference between that little tiny dog and that big dog, because I'm using "little" and "big." I can remember things: if I put a note to buy milk on my refrigerator next to where I keep my keys, then the next time I leave my house I'm going to remember to grab the milk, because the note is next to my keys. It's also really good for things like spatial semantics. We have timelines, we have alphabetical lists, we do whiteboard sessions and group ideas together to help us understand all the bits and pieces as a whole, right? But more importantly, we use it for multitasking. Space is incredibly beneficial to multitasking.

But if you look at the way our current UI works, it's a magic-piece-of-paper UI. That's what I call it. Magic piece of paper, because it's completely flat and it has boundaries. It has edges, right? Just like a piece of paper does. And we're basically just loading pieces of paper one on top of the other. There's no z-depth, really; there's a very narrow z-depth. That's it. That's all the space you get. The rest is linguistic, which is not innate. That's a different tool, okay? So this thing carries a lot of cognitive burden.

But we know that by adding space to technology, we improve productivity by up to 40%. So we can reduce those 3.5 hours wasted on your interface by 40%, call it 1.4 hours a day back, just by giving you multiple monitors. So when I was working on this presentation, if I'd had the luxury of three monitors, I could have had the slide I was working on, the image search I was doing, and, say, the emails with Bruno coordinating this event on the other. And rather than having to go through the doorway of my UI to move between those tasks, I could simply move my head. And that astronomically reduces the time wasted on those interruptions, just by using my head. And we've known this, by the way, since 1993. This is not new information. We've known it for a very long time.

The good news is that synthetic reality adds space to computing. That's what it's all about, right? AR, VR, mixed reality.
All of those technologies basically give us an infinite canvas for our content. Because not only do we have the Euclidean space that we live in, we also have an nth dimension where we can put all kinds of other things. And the really interesting thing is that humans create spatial memories in synthetic reality just like we do in the real world. So we can use it very effectively. Basically, what we're doing here is taking that multi-monitor situation and turning it up to 11. It's amazingly helpful.

And because AR sees everything that I see, as opposed to my current screens, which only see what I put into them, it's both an input and an output device. It can see everything I see. So not only can it spatially orient my task, it can orient my task exactly where I'm standing in relation to what's there, right? So this is a fulfillment house, and this guy is walking around filling orders. And it's giving him directions to exactly where he needs to go within that space. And it's also recording whatever he picks up, so he doesn't have to manually input the fact that he's picked up the box he needed. So it's putting the task exactly in the context of use where it's needed, while recognizing the spatial aspects.

I use this all the time. I told you about the key trick, right? This is where I put my keys. If I don't put my keys here, I will never remember where my keys are. It happens every single time: I forget to put them there, and then I'm screaming at my husband as I'm trying to leave the house because I can't find them, right? And, you know, we all have really effective abilities to use these spatial memories. For example, if I asked you right now to close your eyes and picture sitting on your bed at home, I bet you could tell me exactly where the door is in relation to where you're sitting, or what's on your nightstand, to a relative degree anyway, right?

So because these augmented and synthetic realities can understand that space, we can use the exact skill we use in the real world. Where I put stuff next to my key box to remember it, now I can put a virtual object next to my key box as well as real-world objects, so I get the benefit of my spatial memory in my virtual space. With the added benefit that if I forget to put my keys there, I can just push a button and the synthetic reality will put whatever was supposed to be there, there. Or maybe I don't even have to push a button; it just knows that something's supposed to be there and puts it there.

All right. This works because, like I said, your spatial cortex is a huge part of your brain, and the visual cortex makes up a large amount of that. It's about the size of your hand, back here. It communicates with both sides of your brain, not just one side, but both. And vision is also the majority sense in humans. It's the primary sense we take in, and it provides about 70% of our sensory input when we include sound. It's the sense we use to color all of our other senses. So when we use synthetic reality to hijack it, we're using that synthetic-reality sight in the same exact way that we use real-world vision. We're processing it exactly the same. This is why you fall over in VR: your brain can't differentiate between what's happening in the real world and what's happening on that VR screen.
Your brain believes it that much, especially the spatial information. I kind of want to laugh, but... all right, that was... So, spatial interfaces. Now we can start to build ways of interacting with our devices that use that spatial cortex.

This is the GE digital twin. I don't know if you've heard of this, but it's a really interesting thing. As you may know, GE has spent the past few years building up this Internet of Things business, where they're really starting to understand the objects themselves. They're putting sensors in them; they're doing all kinds of things to understand objects and interact with them. And in addition to that, they have all these plants where they have CAD models of the plant, but no daily insight into the wear and tear on the machinery, things like that. So what they decided to do is take all that digital data they have and, rather than looking at it in ones and zeros like somebody from The Matrix, turn it into basically a reflection of the real-world situation. So now they have virtual models of the real world that they can overlay together and really start to understand what's happening at the plant level, here in the digital space. They're creating spatial models of data so that they can navigate through that data just as they do in the real world, which makes it easier to understand the relationships between the data they're seeing and the actual physical objects they have to deal with.

Now, that's one way to do it: creating a twin of the real world. Another way is what this company here is doing. This is ProtectWise, and they're creating dimensional interfaces for things like cybersecurity. This is a big corporation's data, and rather than using ones and zeros or spreadsheets to understand it, they've created this dynamic spatial interface. Now they can tell where their data flow is coming from, because those buildings are bigger. They're using dimension. And you navigate it as if you're walking through a physical space. On top of that, people can jump quickly to the areas they want to pay attention to, because they know which neighborhoods they have to go to. It creates a whole different model of interacting with our devices, and it's far more effective for determining the health of the overall system, as well as of individual nodes on that system, because they can see it, interact with it, remember it, and process it as spatial information. So it's reducing the cognitive burden not only of interacting with the interface, but of understanding and using the data itself.

So, great. What does this have to do with making superhumans, right? Sounds interesting. Well, for starters, we just freed up about 3.5 hours of your day to do something other than interact with your devices. What are you going to do with that time? Well, first of all, you're going to live heads-up. You don't have to look at your screens anymore; you're going to be looking through your screens. Thank God, right? You're not going to be distracted all the time by pushing your devices' buttons in the right order. We can put that petulant child to bed and actually enjoy some grown-up time in the world. And in addition to that, we're not even going to have to remember where we left our keys.
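As a sketch of what that key trick looks like in software, here's a toy spatial-anchor store: pin items, real or virtual, to positions in a mapped room, then ask what's near a named anchor. The names, coordinates, and the half-meter radius are all illustrative assumptions, not any particular AR SDK:

```python
import math

# A toy spatial-anchor store: pin items (real or virtual) to 3D positions
# in a mapped room, then query what sits near a named anchor.
class AnchorStore:
    def __init__(self):
        self.items = {}  # name -> (x, y, z) position in meters

    def pin(self, name, pos):
        self.items[name] = pos

    def near(self, name, radius=0.5):
        center = self.items[name]
        return [n for n, p in self.items.items()
                if n != name and math.dist(p, center) <= radius]

room = AnchorStore()
room.pin("key box", (0.0, 1.2, 0.0))
room.pin("keys", (0.1, 1.2, 0.0))          # tracked real-world object
room.pin("buy-milk note", (0.2, 1.3, 0.0)) # virtual sticky note

print(room.near("key box"))  # -> ['keys', 'buy-milk note']
```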
There's this patent coming out of Microsoft: it's going to track even your real-world objects. It'll track your virtual objects as well as your real-world ones and be able to tell you, oh, your keys are right over there. Yeah, you forgot to put them in the box again, but they're right there. Don't miss them.

But already we're seeing that we can develop a superhuman healing factor, just like Wolverine. And I don't mean in the way that surgery or pharmaceuticals help you heal. Through these technologies, your own body becomes the thing that heals you, just like Wolverine. So this is Diplopia, developed by my friend here, James Blaha, who has what's called a lazy eye. One of his eyes is not fully functional: it doesn't focus properly, so his brain has decided it no longer wants the information from that second eye and ignores it. That means he only sees 2D data in the real world, because he's only seeing fully out of one eye. He has no stereo vision in the real world. And the doctors said, basically, after age 12 you're done. That's it. You have lazy eye for the rest of your life. There's nothing we can do for you, sir, because your brain stops creating new neural pathways. Well, he didn't buy that. So he created a VR game that sends a different signal to one eye than the other, especially the lazy eye. And by doing that, the brain started to pay attention to that eye again, and it reconnected the synapses between the brain and the eye. So not only does he have full 3D vision in VR; it also works in the real world. No medication, no surgery, just retraining the synapses to that eye. In fact, this was such a big deal that it's been shown that VR rehydrates neuroplasticity, the brain's ability to rewire itself, which we thought was something you really couldn't do once you became an adult.

Okay. Then there are things like mental rehearsal and visualization. I'm sure some of you have heard about the basketball trick. They took two groups of kids learning to play basketball. One group actually shot free throws over and over again, trying to sink them. The other group spent half of their practice time visualizing free throws, not actually doing the activity, just sitting there and visualizing making a successful basket over and over again. And what they found was that the kids who did visualization actually did much better than the kids who only practiced physically. Now, you have to have the ability to visualize things in order to use this, which leaves out a lot of the population, the people who aren't really visual thinkers and can't picture things, right? But VR can do that for everyone. It democratizes that ability.

So recently there was a group working with paraplegics. They created exosuits to help the patients retrain their leg movements, the muscles in the legs. But they thought, you know what? Let's also put a VR component on there, where the patient can actually see their legs moving. And something really magical happened. After about a year of this physical therapy, the patients, back in their beds, were able to voluntarily move their feet. Paraplegics, who we used to think, that's it, you're paraplegic, you're never going to get your legs back.
And here we've shown that with VR, with no surgical intervention, people were actually able to rehydrate the synapses between their brain and their leg muscles. And hopefully that means they'll eventually be able to walk again.

All right, but remember how I said that your visual sense colors all your other senses? Oh yeah, I'm not done with you yet. Wait, there's more. Because of that, if you're touching something hot but it looks cold, your brain says, no, that's not hot, that's cold. Your skin is burning, and your brain thinks it's cold. And the researchers at the University of Washington's HIT Lab decided to use this. Burn patients, people with burns over 60% of their body, are in excruciating pain 24 hours a day, but especially when they're getting treatments. And morphine only reduces the pain by about 20%. That's like, forget it, don't even give it to me, right? But what they found is that by giving people this VR experience, this thing here called SnowWorld, where they're throwing snowballs at snowmen and poor little penguins in a really icy world, first, they're distracting the brain, and second, they're telling it, through the visual sense, that what it's feeling is cold instead of burning. This reduced pain by 60%, without morphine. Okay, that's incredibly powerful. And all this is being done just by paying attention to cognitive ergonomics. By working directly with the brain's innate cognitive abilities, we're giving people superhuman strength to heal themselves.

So let's go ahead and create ourselves a league of superhumans, shall we? There's all kinds of stuff we can do. It's up to you. What kind of superhuman strength do you want? Do you want to be like Jean Grey, the Phoenix, and have some telekinesis? Well, we can do that. We have the MIT CityHome, a whole space that reconfigures itself with robotics. We have The Void, a mixed reality space where the physical space and the AR work together to create a whole other world that you couldn't have with either one alone. We have thermostats that pay attention to you and lights that change dynamically. So I could stand across the room from my MIT CityHome bed and pull it out, while the room turns into a nice dark space to watch the mystery movie that has already started on my computer, without me even saying so.

I can have precognition. I can use that GE digital twin network along with Daqri, an industrial-grade AR helmet. It has infrared sensors, so it gives you spidey sense, right? It gives you additional sensory abilities. And then I can understand what's happening within the system before it even happens in the system. And in addition to that, I can use VR and synthetic training tools to train my workers on how to fix those things before we even recognize there's a problem. They're training in the lab, fixing it, before they're ever deployed, which means they make their mistakes there and not in the field, right? So now I have precognition, a precognition I can train for beforehand, which is pretty cool. I can be like Trinity, for realz. For realz!

I can have super cognition. Oh, Kernel's missing up there. Kernel is pretty cool. It's a neural implant for cognitive abilities. It's kind of scary, but also kind of cool. But I don't really need something embedded in my head. This is Google Translate, which works with a phone. Basically, I just hold it up in front of something and it translates it.
I've been using it all over Lisbon. It's very helpful. There's also STRIVR, from Jeremy Bailenson and his team at Stanford. They're creating a football training program, American football, so the quarterback can practice plays over and over again against the team he's about to play before he ever gets there. He can try out all kinds of different strategies and perfect them, so that when he hits the field, he knows exactly how to defeat them. Or we have Augmenta. Augmenta is an AR tool. Here it's watching this guy trying to solve a Rubik's Cube, and rather than, like me, taking apart the Rubik's Cube and putting it back together properly, he's shown how to move it in real time, right through his AR. So you can augment your cognition already. Imagine what we can do when we streamline the interface with it.

Superhuman strength. Of course, we can already make Iron Man, right? We've got self-driving cars, right? We have the NASA-GM RoboGlove. Do you guys know about this? It's one of my favorite things ever. What they did was create an inflatable, robotic glove. So with the effort it takes me to lift, like, a pound, I can lift, like, 20 pounds, because the robotic glove augments my physical strength. Basically, it's reducing the burden on the user by adding robotics to my existing physical strength. And that's exactly what Lowe's is doing with the exosuits for their employees. It's a hardware store, and the exosuit kicks in and augments their physical strength so they can carry massive plates of whatever hardware they need to.

But I think more important is this BrainGate thing. Let me replay that, because it's that awesome. Yeah. So, BrainGate. This woman is quadriplegic, meaning she has no function below her head, right? No arms or anything. And they've put a chip on M1, the motor region of the brain that you use to move your arm. Because her brain no longer talks to her arm, she can now, with the amount of effort it takes us to do this, move that robot arm. And she was able to feed herself for the first time in 17 years by using that robot arm, just by thinking about moving her own arm. So when these types of technologies become easier and less creepy to embed into your brain, you'll be able to just think and operate a room full of objects. Imagine what that does for telekinesis.

So frankly, we are exiting the era of supercomputers. Sure, they're going to continue to get smarter, faster, smaller, cheaper, which is great for our needs, because we need them to be really tiny, something that can just kind of disappear, right? And now we are entering the era of superhumans, where we teach technology to speak human. And as a result, all kinds of things become possible. You can get started by envisioning your role in it. What is your driving need? What is the superpower that you want? Keep in mind that this new HMI is going to affect every single aspect of your life. Not just work, not just home life, but every single thing, all aspects of your life. So my question to you is: what are you going to do with your superpowers? Thank you.