Good morning. We're still setting up, so just a few minutes. I wanted to start by saying how excited I am to be here to share our work at Intel in the areas of sensing, autonomous machines, and intelligent devices. We'll get started as soon as the slides come up.

Okay, good morning. I think we are here as an industry at the onset of a very exciting revolution, which our late visionary leader Andy Grove might have called an inflection point. I want to share with you a particular aspect of that, which is adding human-like sensing to computers. We at Intel, along with partners worldwide, have done a lot of work to enhance computing technology, and much of the focus has been on processing: the processor, the computing technologies. Just to put things in perspective, it's been about 40 years since the introduction of the first microprocessor, and we've gone from literally a couple of thousand transistors on a chip to more than 10 billion of them. That enormous scaling of the technology has allowed us to do amazing things in terms of computing. Form factors have gone from a roomful of computing equipment to devices in your pocket. But today I'm going to talk about the other aspect: adding sensing capabilities, so that devices can see and devices can understand. I want to first look at examples of applications that are here and now, and then give you a glimpse of what's ahead of us in the future.

Here are a few examples of devices that are shipping now. I'm proud to say that all of them have Intel RealSense technology inside; I'm getting ahead of myself in telling you what the technology is, but since I have a short time here, I'm going to quickly go through these applications first and come back to the technology. I'll cover a few categories in the little time I have. First is the area of 3D-sensing computing devices, and I've picked three. Up on the top is the Razer Stargazer, a small peripheral that connects to your Windows PC. Next is a device from 3D Systems, a handheld 3D scanner that also connects to your PC and lets you create a 3D model of a physical object that you can then print on a 3D printer. And the device on the bottom is one of the many millions that have shipped over the last couple of years with depth-sensing technology built right into the PC. This one is from HP, and it allows you to scan people into 3D models you can post on Facebook, interact with the device using gestures, and log in with technologies like Microsoft Windows Hello, none of which would have been possible with just 2D cameras. The slides may flicker a little bit, but they'll still let us tell the story. The second category is the exciting area of autonomous machines. Those of us who have dedicated our work to developing machines with the intelligence and capability to navigate and understand the world have been dreaming about this for decades. I've shown a few devices there, drones and robots, and I'll talk about them shortly. The third category is the emerging yet very exciting area of augmented and virtual reality devices, and I'll tell you where those are headed.
Last, but not least, I promise, I'll show you a glimpse of what your mirrors are going to become, with devices that are already in the market. Mirrors don't have to be purely dumb devices that reflect light; we can put smarts into them, and I'll show you an example of that.

Okay, with that, let me step back and share the one slide that inspired this work. When you think of Intel, you probably think of microprocessors, but we've been working on this for almost half a decade, and it's a pretty significant new product development and business for us now. This is the slide we put together at the beginning of this project. What makes us human? We have some amazing sensors, perceptual sensors: eyes that allow us not just to collect photons from the environment, but to reconstruct the world in 3D. To me, right now, you are just light conveying information: emitted from the lights, reflected off your faces, the chairs, and the floor. My visual system is capturing that light; that's the sensor part. But then I'm doing a whole lot with that information. I'm effectively running software in my visual cortex to recognize and understand the environment: I see you as humans, not just as a collection of 3D objects. And then I'm using that data to interact: walking around without falling off this stage, saying hello to you. That's part of the interaction as well. So as engineers, we'd like to decompose that into steps that we understand and can attempt to build. On the sensor side, we focused a lot on really understanding the electro-optics and mechanics of the human 3D visual sensing subsystem: the two eyes and the visual cortex in the brain. It's really a collection of electro-optics and processing technologies optimized for the function. And there's more than vision: we also tapped into the vestibular motion sensing you have close to your inner ear. Those are mutually perpendicular canals filled with fluid, and as you move, they sense your six degrees of freedom of motion. We're really fusing the data from all of these perceptual sensors as we navigate and understand the world. And if we understand how that works, then we can potentially build devices that do the same. The work around machine learning and artificial intelligence, the developments in sensors and computing technologies combined with neural-network-based deep understanding, is what's now making this possible. So bear with me as I take you on a very rapid journey through how all of this is coming together to bring about this revolution.

Translating that into technology, we built RealSense to do a few things. One is to capture the world in 3D. You're all familiar with the color picture that I'm showing in the middle, which is a picture of a human hand captured with a RealSense imaging device. What's different about a RealSense camera versus the camera in your phone or the webcam on your PC is that it also captures a depth image corresponding to the color image. As for the depth image, for those of you who are not in 3D computer vision: although it looks like a basic grayscale picture, every pixel in this 8-bit image, with values ranging from 0 to 255, is an indication of where in space that point is.
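To make that concrete, here is a small sketch of my own (not from the talk): mapping an 8-bit depth image back to approximate distances. Real depth cameras typically deliver 16-bit values plus a scale factor; the 8-bit picture shown on stage is a normalized visualization, and the near/far range below is an invented example.

```python
# Illustrative only: recover approximate metric depth from an 8-bit
# depth visualization, assuming a known (hypothetical) clipping range.
import numpy as np

NEAR_M, FAR_M = 0.2, 1.2  # hypothetical near/far clip, in meters

def pixel_to_depth_m(depth_u8: np.ndarray) -> np.ndarray:
    """Map 8-bit depth values (0..255) to approximate meters."""
    return NEAR_M + (depth_u8.astype(np.float32) / 255.0) * (FAR_M - NEAR_M)

depth_image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
depth_m = pixel_to_depth_m(depth_image)
print(depth_m[240, 320])  # distance of the center pixel, in meters
```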
So now you not only know the color of every point you're looking at, you also know where in 3D space that point is. That's a fundamental building block of our own visual system. When I look at you in this room, obviously everything is in color; I've got color sensors, the cones in my eyes. But it's the stereoscopic vision, plus the processing in my visual cortex that performs a very low-power, efficient calculation of binocular disparity and hence the 3D coordinates of every point I see, that allows me to understand the world in 3D. I know exactly where each point is, and that lets me deconstruct the scene. The second piece is motion sensing. When I'm moving fast, it's a very critical part of the human sensing subsystem. So the goal for us, the engineering view of RealSense, is that the device needs to produce a solid reconstruction of the 3D world: I want to know the 3D structure in real time, just like humans do. And obviously we stitch the color on top; we also have the color picture, the texture map, the color of every point. So the device sees the world in 3D, in color, like humans do.

Let me show you some of these RealSense devices. We have a number of the sensors that I show on the left; three generations have shipped in the market, a few million of them. Just to give you a sense of how big, or rather how small, they are: the first one, up on the top, is the RealSense SR300 camera. We started with the F200 and replaced it with a better device, the SR300. I don't have a lot of time, but it's a beautiful technology. In short, it's based on coded-light technology: we project invisible binary codes, spatio-temporal patterns of infrared light, using the world's tiniest MEMS projector, switching patterns thousands of times per second. Then an IR camera takes pictures of that spatio-temporal code at 600 frames per second. The result? With a processor built right inside the sensor, we can capture 18 million 3D points per second. If you compare that with the other 3D visual sensors on the market, this is generations ahead in capability and form factor, about 100 times smaller than the first Kinect product that you might have at home, and a single USB cable provides power and moves the data. This is what has been built into the PCs and devices I showed you. In the middle is the RealSense LR200 camera. It has two IR sensors with a laser projector coupled with a diffuser; it works like the human visual system, capturing two images, and a very low-power stereo correlation chip right inside does the 3D point calculations. I could go on, but time is short. The one in the middle we'll put on drones; I'm going to show you examples of that. The one at the bottom also includes motion sensors and a fisheye camera for wide-field-of-view motion tracking, which makes it ideal for robotics. System makers buy these modules and build them into their products, but for developers out there who want to experiment, we also package them in small peripheral devices with a single USB cord, so you can connect one to your PC or any embedded computer board to develop applications.
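For developers curious what talking to one of these cameras looks like, here is a minimal capture sketch using pyrealsense2, the Python binding of the current librealsense SDK. Note this SDK is newer than the one referenced in the talk, so details may differ for F200/SR300-era devices.

```python
# Minimal sketch: open a RealSense camera, grab one frame pair, and
# query the metric distance of the pixel at the image center.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # 16-bit depth
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # color texture

pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Every pixel now carries a distance; here, the point at the center.
    print("center pixel is %.2f m away" % depth.get_distance(320, 240))
finally:
    pipeline.stop()
```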
On top of the hardware, we also have a bunch of software libraries built to take full advantage of the 3D data; some of them are shown on the right, on your right, my left. There are libraries, for example, for full 3D skeletal hand tracking in real time, so an interactive application knows exactly what I'm doing with my fingers; a facial feature and emotion recognition library; full body tracking for robots; video conferencing with background segmentation, so you could be at a bar but, with a 3D sensor, have your office as the background, a green-screen effect; and full 3D reconstruction: the picture in the middle is a 3D-reconstructed version of our CEO. Jumping to the bottom right, full 3D scene understanding; bear with me on that, I'm going to explain what it does later.

Now let me quickly go through a few applications where we have partnered with the industry to take full advantage of this 3D-sensing technology. Virtual reality. Has anyone here tried a virtual reality device? Very few; the rest of you have to. It's quite amazing how this is shaping up. We talk about screens, going from small screens to bigger screens to curved screens, but why bother when you can be inside the content, with your hands at arm's reach? It's quite amazing for gamers and for people who like to watch videos. What is this screenshot? Has anybody watched the movie The Matrix? Who has not? Most of you have. So here Morpheus asks: what is real? He goes on: how do you define real? We like that; we're engineers. And here is his definition: if you're talking about what you can feel, smell, taste, and see, then real is simply electrical signals interpreted by your brain. That's kind of profound for us. Because if that's what is real, if what's real to me right now is the light reflected from your faces being interpreted as electrical signals in my visual cortex, then I should be able to engineer this experience. I should be able to create sensations and signals in my cortex that fool me into believing something is real when it is not. That's really the premise of virtual reality. To prove to you that it's possible, I brought my friend; those of you who have listened to my presentations before might have seen it: the VR frog. I hope the sound is on; can you hear it? I want to make sure you hear the scream from the guy when the frog bites his finger, because I want to show you that it's a VR frog. Why is this relevant? Well, we were fooling the frog into believing it was surrounded by insects. I don't have a lot of time, but I have plenty of proof that it's not just frogs; we can fool ourselves. I left out those slides, but we can really fool your visual and auditory sensory systems into thinking something exists when it doesn't. That's what we VR engineers are tapping into. But in the short time I have, I want to show you something very specific in VR. VR is already exciting, and it's going to change a lot of things: the gaming experience, the media consumption experience; virtual tourism, putting a headset on and going to see things you could not have seen before. This is the essence of our Project Alloy: we at Intel are building a completely new merged reality experience, focused on enhancing the experience of VR in a few specific ways.
Number one, we wanted it to be fully untethered and unconnected. That means the computing is inside: a Core-class processor, graphics, and sensor accelerators on the device itself; I'm showing you a blown-up view of Project Alloy. Number two, we put RealSense technology in for inside-out tracking. Those of you who have played with VR know you need to equip your room with tracking systems. We don't like that. We want to be able to just stand up and go for a walk, and the device needs to track itself with its own sensors inside, which RealSense allows. Next, we need to be able to just lift our hands, see our hands, and manipulate things with our hands. When you have a VR headset on and you rely on heavy controllers for every sort of manipulation and control, it breaks the realism. So that's been the focus for Project Alloy. We have just shown the device, and we announced that next year we'll bring it to market. I'm going to show you just one video of the many demos that are quite awesome about how we can push the virtual reality experience toward merged reality. Those of us who have played with VR want to manipulate things with our hands, so I picked that example; it's quite powerful. Here we have the virtual world and a virtual metal plate, and that's a real hand: I'm manipulating a virtual object with my real hand, or with any real object for that matter. This combines the virtual world with the real one. And that's just one example; there are many others. The one on the lower right is a physical scan of a gigantic castle in Germany: I can put my Project Alloy headset on, walk around it, and experience that castle as if I'm visiting. It's going to be a transformative experience. But let me move forward, because there are quite a few areas I have to hit.

Autonomous robots and drones. Anyone here fly drones? You guys work too hard. Well, millions of people do, because the drone market is over a few billion dollars and growing. And the consistent feedback from consumer drone companies and from drone users is that drones are not easy to fly safely: when you're flying one, you're constantly struggling at the controller to make sure it doesn't fly into a tree or into a building, and it's very hard, when the drone is far away, to judge how far it is from the tree or the building. So you fly in open spaces, and that limits the applications. Take areas like inspection. You would like the drone to go automatically and create 3D models of bridges so you can inspect them later and see what state they're in. Or your roof: why have people walk on it and break the tiles? Sit safely in front of your screen and see what the drone, flying around, sees, without it running into the roof, of course. So we have added real-time 3D visual sensing with RealSense to drones, and I want to show you one example from our partner Yuneec, one of the largest consumer drone companies. They are now shipping the Typhoon H, and it's selling like hot cakes. It was already a very popular drone, and the Typhoon H with RealSense adds autonomous 3D reconstruction, scene understanding, and collision avoidance technology. Let me show you the video.
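The core idea behind depth-based collision avoidance can be sketched in a few lines. To be clear, this is a toy illustration of mine, not Yuneec's or Intel's actual algorithm; the safety threshold and window size are invented.

```python
# Toy sketch: given a depth frame in meters, check whether anything in
# the central field of view is closer than a safety threshold.
import numpy as np

SAFETY_DISTANCE_M = 2.0  # hypothetical safety bubble

def too_close(depth_m: np.ndarray) -> bool:
    h, w = depth_m.shape
    window = depth_m[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]  # central region
    valid = window[window > 0]  # depth 0 usually means "no reading"
    return valid.size > 0 and float(valid.min()) < SAFETY_DISTANCE_M

frame = np.full((480, 640), 5.0, dtype=np.float32)  # open space, 5 m away
frame[200:280, 300:340] = 1.5                       # simulated obstacle at 1.5 m
if too_close(frame):
    print("brake / hover: obstacle inside safety bubble")
```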
So essentially, when you put low-power 3D scene capture together with deep learning technologies that understand what the drone sees, you can transform drones from what they are today into the drones you want them to be. That's what we are doing. And there are quite a few other examples. Has anybody stayed at the Aloft hotel in Cupertino? It's one of a number of hotels now deploying Savioke's robot to help deliver things to your room. Last time I was presenting, a lady in the audience said: yes, I stayed at the Aloft hotel, I asked the front desk to deliver a hair dryer, and the robot showed up to deliver it. With RealSense, it has a complete 3D map of the hotel and can autonomously navigate, going around delivering things to your room. These are the signs of autonomous machines to come, helping you in your daily lives. Next is Zenbo, ASUS's consumer robot. They showed it off at Computex with ASUS Chairman Jonney Shih, and let me show you the video of it: "...we managed to take some time out of the show to check out Zenbo at ASUS headquarters, where Chairman Jonney Shih gave us a quick demo. Obviously Zenbo can do a lot more than just introduce himself: using simple voice commands he's able to control home appliances, cast media to the TV, take photos, and play music. Zenbo is essentially an Android tablet on wheels; it also features several sensors for obstacle avoidance, a depth camera for object recognition, and a port for future add-ons." So you get the idea: a companion is coming to your home that can autonomously navigate, communicate with you, and make your life easier. The future smart home: you're starting to see the beginning of that.

Mirrors. Why should we leave that category out? You spend time in front of one, doing a lot of things there, and you put on a suit to see how you look. What is unique about this picture here is that it's not just a bare reflection; it can actually change your clothes. We have several of these in deployment already. On the left I show MemoMi's Memory Mirror, and literally a live shot of a demo here: a shopper trying on one outfit, then quickly flipping through a digital menu of other outfits and getting a sense of how each one is going to look. Wouldn't you love that, to be able to quickly flip through hundreds of pieces of clothing and only then decide which one you really want to try on? The feedback from shoppers is amazing. This is going to transform your shopping experience.
Now I'm going to show you a demo from one of our partners in San Francisco, a company called Naked Labs. They're focused on a mirror that can scan your body into a 3D shape with precise measurements, which lets you track, day to day, how you're progressing; they were featured on TV, so let me just show you that video. You scan, you keep doing it, and you see the changes; you come back, turn around, scan again, and track your progress. Devices are coming to consumer homes that do not exist today, made possible only by devices that can see and understand.

With that, and since there are quite a few developers here, people who are leading the wave of building new types of systems, I wanted to bring you a glimpse of what's to come. For the next few minutes I'm going to show you things that are not available yet, but will be very soon, in a few weeks to a month. Here is the RealSense camera that we just announced, the next generation. We got a lot of feedback on 3D sensors: people really want long range. What you are seeing here is our next generation, with the depth clipped at 60 meters, and I'm going to try to show you how it deals with outdoor bright sunlight and high-contrast areas, from black to bright, and still gets a good depth map. You see the color picture on the left and the corresponding depth picture on the right; let me just play this video. We're taking it around, and it's creating a real-time 3D map of everything it sees. You can imagine what applications you can bring this into.

So what can you get from us? At intel.com/realsense you'll find the kits you can get for developer purposes: the cameras themselves, packaged inside easy-to-use peripheral devices. Of course, if you want high-volume modules for integration, we'll serve that to you as well, but these are developer kits. This one includes the fisheye camera and motion sensor. This one includes a computer board for running your robotics applications, with our next-generation Atom processor inside, easy to use. We also now have a drone development kit; I'll have more to say about that. And I'll have more to say about this as well, this little device which I'm holding up here. Our CEO called it a developer's dream, because it includes a RealSense camera, a wide-field-of-view fisheye camera, a motion sensor, plus an Atom processor, plus Wi-Fi and Bluetooth, all inside this little device. Take any dumb robot, program this, put it on the robot, and it becomes an autonomous machine. I'm going to show you an example of what you can do. The first of the two kits I'll highlight is the drone kit. As excited as we are about the $1,500 to $2,000 consumer drones that you can buy and the commercial drones that are going about inspection work, we also decided to make and sell a low-cost, ready-to-fly developer drone. If you're a drone maker, you might want to just buy the drone board that has the processing electronics and the vision kit, which is the RealSense camera; or, if you're an application developer and would rather not spend time building a drone, you can buy the complete, ready-to-fly drone kit from us for your experiments. It's available in December, and it's actually open for pre-order already on intel.com/realsense.

Okay, let me finish my talk where I began. I started with a grand slide claiming that we are steadily working towards human-like sensing and understanding. How far are we from that?
Here I show you what we captured in a building with a device like this. Again, remember, this has depth sensing, color sensing, motion sensing, onboard processing, and wireless connectivity, all inside. We take it around a room and look at where things are; we create a 3D map of the environment, and I'm going to show you the top view of that, which in the computer is called a 2D occupancy map, pretty much like a floor layout of your room. Then we also run deep learning on it to recognize things: I need to be able to say that's a chair, that's a pillow, that's a bathroom, that's a picture frame, that's a door, and to automatically label the room. Let me show you where we are with that technology. So we are walking through; as you can see, here on the right is the 2D occupancy map, the layout being created from the data that has been captured. I'm going to speed it up, but here you see the 3D coordinates of exactly where those recognized points are. It's creating a layout, and it's marking things like pillows, bed, table. I'm speeding it up now. As the robot, or a drone, or you with your phone with this RealSense system inside walked around your floor, you have automatically created a full 3D reconstructed model with full understanding of where everything is. And guess what: when you go to a new environment, say you just checked into your hotel room last night, this is exactly what you do. You open the door, you come in, and you create a 3D reconstructed model. You check it out and realize: that's a window, there's my bed, that's the TV set, that's the lamp, here is the switch. You are creating a 3D model of the environment. You take it for granted, but when you try to build something like that, you realize how complex it is. Now you are talking about building sensor systems and deep learning technology to recognize what you see and build a map. Why do we do that? Because that's the foundation for autonomous machines: robots, drones, 3D scanning devices that can now be built, because what we deliver is a human-like perceptual system, the brain and the sensor module of autonomous, interactive machines.

So with that, I'd like to extend an open invitation to collaborate with us. We, as Intel, are an open company; we thrive by partnering with the ecosystem. We build the technology and partner with companies that build applications and systems; we really work together broadly. All of the things I have shown here are available to you, either as developer kits for experimenting or, if you want to go into high-volume business, as low-cost modules. These are the links. Let's work together to lead the future faster. Thank you.
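For developers who want to play with the occupancy-map idea from that last demo, here is a toy sketch of my own (an illustration, not Intel's implementation): flatten 3D points from a depth camera onto a top-down grid by dropping the height axis and marking each (x, z) cell that collects enough points. The grid size, resolution, and threshold are arbitrary choices.

```python
# Toy sketch: build a 2D occupancy map from a 3D point cloud.
import numpy as np

CELL_M = 0.05  # 5 cm grid cells
GRID = 200     # 10 m x 10 m map

def occupancy_map(points_xyz: np.ndarray) -> np.ndarray:
    """points_xyz: (N, 3) array of x, y (up), z coordinates in meters."""
    grid = np.zeros((GRID, GRID), dtype=np.int32)
    ix = (points_xyz[:, 0] / CELL_M + GRID / 2).astype(int)  # left/right
    iz = (points_xyz[:, 2] / CELL_M).astype(int)             # forward
    ok = (ix >= 0) & (ix < GRID) & (iz >= 0) & (iz < GRID)
    np.add.at(grid, (iz[ok], ix[ok]), 1)                     # count hits per cell
    return grid > 5                                          # "occupied" threshold

pts = np.random.uniform([-2, 0, 0], [2, 2, 8], size=(10000, 3))  # fake point cloud
print(occupancy_map(pts).sum(), "cells marked occupied")
```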