The world is growing an electronic nervous system as sensors get embedded into everything and become increasingly networked. My research concentrates on how people connect to this electronic nervous system in different ways, and I think that can have a profound impact on a very broad interpretation of what learning will become. If you look at wearable technologies especially, you look at how people connect to the world of information. Taking a cue from Michelangelo back in the Renaissance, the world outside your body was the spirit world. Now it's the world of information. We still want connection to it, and our wearables really begin to blur the definition of where the human stops and where the outside world begins. What are the boundaries of the human? This is going to become an interesting question.

So I'll give you some examples. Musical instruments are a great one, because people practice long hours to learn how to play them. The instrument stays static, and they eventually develop a mapping in the brain of how to play it with some virtuosity. What if you could reverse that and have the instrument learn how the player wants it to play, so they could both learn together? Ten years ago already, we built an example of this. It's a system that measures everything you do with it: you bend it, move it, stretch it, pull it. It gives you an example sound, and you give it the kind of motion you want to use to play or express that sound. This is the beginning of an instrument that really learns how you want it to play; again, how you both can learn together. There's a video here, very simple; it's only 20 seconds, so I can't show much. Here's a sound, and he gives it a gesture he wants to control it with. Three sounds, three different gestures. And now he can just play it back. So it learns right away, with one gesture. Just dynamic time warping, very straightforward.
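The one-shot gesture learning described here can be sketched with a classic dynamic time warping (DTW) distance: record one sensor trace per sound, then match a new gesture to its nearest template. This is a minimal illustrative sketch, not the actual instrument; the 1-D toy sensor traces, the labels, and the nearest-template classifier are all assumptions.

```python
# Sketch: classify a gesture by dynamic time warping (DTW) distance
# to one recorded template per sound. Signals here are toy 1-D traces.

def dtw_distance(a, b):
    """DTW distance between two 1-D sequences (absolute-difference cost)."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(gesture, templates):
    """Return the label whose template is nearest (in DTW) to the gesture."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))

# One demonstration gesture per sound (hypothetical sensor readings).
templates = {
    "bend":    [0, 1, 2, 3, 2, 1, 0],
    "stretch": [0, 0, 1, 1, 2, 2, 3],
    "pull":    [3, 2, 1, 0, 0, 0, 0],
}

# A slightly different "bend" still matches, because DTW tolerates
# variations in timing and amplitude.
print(classify([0, 1, 3, 3, 2, 1, 0], templates))  # prints "bend"
```

The appeal of DTW for this job is exactly what the talk notes: it needs only one example per gesture, with no training phase beyond recording the template.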
People have since adapted this for general adaptive user interfaces; it has become a whole field now, but it was very early ten years ago. We also look at tools this way. My student Amit Zoran has made a robot in the hand that's the future of a tool. The robot knows what it wants to do, and it pulls the user's hand back if they violate a constraint. So here, somebody is sculpting out a figure. He doesn't know how to do it, but the tool is helping him. As you get better, the robot starts taking its hand off of you and lets you express more. So again, you're working in partnership with the machine.

Thad Starner, at Georgia Tech, made a glove with little vibrators on the fingers that pulse them in sequence, so people could learn how to play a piece of music just by going to sleep at night with the glove tapping out the pattern. In the morning, even if they had never played piano, they were shocked to find they could play the piece.

This is a set of shoes that we did with Massachusetts General Hospital about 10 years ago, instrumented for biomotion analysis. We got a lot of data that we could analyze to see how people walk. But we also used it to intervene, to help people who were injured or impaired learn to walk again, because even back then we could make interactive systems that respond to how you are walking. In the example I'm going to show, you've got somebody walking. We're working with participants and patients who start to shuffle. When that happens, the system makes the music very rhythmic, and that can help them break the shuffle and learn to walk again. We also look at tonality here: this is playing in a major key, and if you pronate, it turns into a minor key. So with tonality, we're exploring giving this kind of intervention in real time.

Going further, we've been working with the doctors for a famous Boston baseball team, one that has red shirts and probably red socks, that will go unnamed, putting sensors on the players that measure a wide range of athletic performance.
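The tonality intervention from the shoe project can be sketched as a simple mapping: each detected step triggers notes, and the measured pronation angle decides whether they are drawn from a major or a minor scale. A hedged sketch under stated assumptions; the MIDI note numbers, the pronation threshold, and the triad mapping are illustrative, not the actual clinical system.

```python
# Sketch of gait-to-music tonality: pronation flips the scale from
# major to minor. Threshold and note choices are illustrative assumptions.

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale
MINOR = [0, 2, 3, 5, 7, 8, 10]  # natural minor scale

def scale_for(pronation_deg, threshold=10.0):
    """Choose minor when the measured pronation angle exceeds a threshold."""
    return MINOR if pronation_deg > threshold else MAJOR

def step_notes(pronation_deg, root=60):
    """Map one detected step to a triad (root, third, fifth) as MIDI notes.

    root=60 is middle C; the third degree is what distinguishes
    major (64) from minor (63), so the wearer hears the change at once.
    """
    scale = scale_for(pronation_deg)
    return [root + scale[i] for i in (0, 2, 4)]

print(step_notes(5.0))   # neutral foot  -> major triad [60, 64, 67]
print(step_notes(15.0))  # pronating     -> minor triad [60, 63, 67]
```

The design idea this illustrates is that the feedback channel is musical quality rather than an alarm: the walker hears the mood of the music shift and corrects without having to watch a screen.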
And now we're working on giving real-time data to the coaches and to the players, so they can start to hear, feel, and see their motion in real time to perfect the perfect swing or the perfect hit.

Other groups around the world have done great, really intriguing work in this area. A group from Germany has made a belt with pager motors around it and a magnetic sensor; the motor facing north always vibrates, and people develop a sense of north that persists even when they take the belt off. On the right is an artist who is color-blind. He wears a camera with a prosthetic that generates sound based on color, so he can learn to hear color, basically.

This is my student Gershon Dublon, who has made a tongue display, so he can basically see through his tongue. He's fascinated with getting input to people through other channels. It looks a bit awkward, but if you're impaired, this is a viable interface. There are examples where, for instance, he can use it to interact with colleagues, as you'll see in a second: they point to certain things on their laptop, and he feels it in his tongue. He basically sees it with his tongue. It's very coarse now, but this can be expanded; there are so many nerve endings in the tongue. Here he's got a direction system giving him directional cues as he moves around, so he can basically see the map with his tongue.

A bit awkward, but speaking of awkward: 20 years ago, we were privileged at the Media Lab to have the pioneers of wearable computing as our students. They've gone on to develop Google Glass and other products that are going to change the world entirely. Back then, 20 years ago, they were already living in this world, and MIT instructors gave them special permission to go to class with their wearables on, because they said the devices were part of them. Fast forward to the present: a few weeks ago, my daughters got their first taste of Google Glass.
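The compass belt described above reduces to a small piece of logic: given the wearer's heading from the magnetometer, activate whichever motor around the waist points closest to north. A minimal sketch, assuming an evenly spaced ring of motors with motor 0 at the front; the motor count and layout are assumptions, not details from the original device.

```python
# Sketch of the vibrotactile compass belt: pick which of n_motors
# around the waist points closest to north. Layout is an assumption:
# motor 0 at the front, motors evenly spaced clockwise.

def north_motor(heading_deg, n_motors=8):
    """Index of the motor closest to geographic north.

    heading_deg: wearer's heading in degrees clockwise from north,
    as reported by the belt's magnetic sensor.
    """
    # Relative to the wearer's front, north lies at -heading.
    bearing = (-heading_deg) % 360.0
    spacing = 360.0 / n_motors
    return round(bearing / spacing) % n_motors

print(north_motor(0))    # facing north -> front motor 0
print(north_motor(90))   # facing east  -> north is to the left -> motor 6
print(north_motor(180))  # facing south -> back motor 4
```

Run continuously against the magnetometer, this keeps exactly one motor buzzing toward north as the wearer turns, which is the constant low-level signal the brain eventually internalizes as a sense of direction.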
The new generation is going to grow up with wearable computing like Glass and its progeny, the way we grew up with calculators and smartphones. So I think the question of where the human ends and where the wearable system begins is really going to be vastly redefined as we go into this future. These systems will both learn with us and help us to learn. So my question, again, is this: what is the future of education in this world of the augmented human? What will it be? Thank you.