Hello, my name is Barathe Rajagapalm, and I'm responsible for helping develop ST's market and strategy for augmented, virtual, and mixed reality. You know, we are at a really interesting time in the industry today. We are seeing a convergence of technologies, products, solutions, and services across a range of sectors: from augmented, virtual, and mixed reality, certainly, to advanced driver assistance systems and autonomous driving, to unmanned aerial vehicles and drones, as well as a range of mobile robotics spanning the industrial, commercial, and consumer sectors. What all of these products, solutions, and services have in common is a need for computer and machine vision. Wherever you have devices that move by themselves, machines that have to sense and understand their environment, machine and computer vision will be a requirement.

So today we'd like to show you a demonstration of one stage of machine vision. As you can well imagine, for a computer to be able to see, multiple technologies have to come to bear. One of the key pieces is the ability to sense and map your environment and to reconstruct it. So the demo today is all about 3D reconstruction and what's also called SLAM, which is simultaneous localization and mapping. What that really means is: if I'm a robot standing somewhere, where am I? Where am I going? What's around me? That's localization and mapping, and it's the core capability required for any kind of machine and computer vision functionality. So with that, let me go ahead and show you the demo and how it works.

What you have in front of us here is an ordinary smartphone, a Google Nexus. We're going to use the built-in RGB camera of the Nexus as well as the IMU, the inertial measurement unit, that is, the accelerometer and gyroscope that are part of the phone. On the back of it we have fixed a 3D sensor. Now, this sensor is external for ease of demonstration; with ST's time-of-flight solutions and other solutions, we're going to embed the depth-mapping capability within the phone itself, but for the sake of this demonstration we've decided to use an external device. This device is essentially an infrared laser plus an infrared detector, which allows the system to calculate the depth of all the objects in the scene, or field of view.

In this demonstration, the camera data, the IMU data, and the data from the 3D depth sensor will be merged together. With that merging, on every single frame coming from the camera, we analyze all the key features and key structures and track them in space and time. By doing that, we end up with a much smaller, reduced set of data, which the algorithm can then process in real time. And it's really important for these applications to run in real time; that's what makes them useful and usable. So we'll go ahead and kick off this demo and show you exactly what we mean by 3D reconstruction.
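To make that per-frame feature step concrete, here is a minimal sketch of what reducing each frame to a sparse set of tracked features can look like, written in Python with OpenCV. The detector choice (ORB) and the process_frame helper are illustrative assumptions, not the actual algorithm running in this demo.

    import cv2

    # Illustrative sketch only: detect sparse features in each incoming
    # frame and match them against the previous frame. The matched pairs
    # are a far smaller payload than the raw pixels, which is what makes
    # real-time processing feasible.
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    prev_kp, prev_desc = None, None

    def process_frame(gray):
        """Reduce one grayscale camera frame to feature tracks."""
        global prev_kp, prev_desc
        kp, desc = orb.detectAndCompute(gray, None)
        tracks = []
        if prev_desc is not None and desc is not None:
            matches = matcher.match(prev_desc, desc)
            # Each track pairs a feature's position in the previous
            # frame with its position in the current one.
            tracks = [(prev_kp[m.queryIdx].pt, kp[m.trainIdx].pt)
                      for m in matches]
        prev_kp, prev_desc = kp, desc
        return tracks

In a full system, tracks like these, together with the IMU and depth measurements, feed the optimization that estimates both the camera's motion and the scene's structure.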
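And on the depth side: the infrared sensor hands the system a per-pixel depth map, and turning that into 3D geometry is a back-projection through a pinhole camera model. The sketch below is again an illustration under stated assumptions; the intrinsic parameters (FX, FY, CX, CY) are placeholder values, not this sensor's real calibration.

    import numpy as np

    # Assumed camera intrinsics; a real system would use the sensor's
    # calibrated values, not these placeholders.
    FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

    def depth_to_points(depth_m):
        """Back-project an HxW depth map (meters) into an Nx3 point cloud."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        pts = np.dstack((x, y, z)).reshape(-1, 3)
        return pts[pts[:, 2] > 0]   # discard pixels with no depth reading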
On the display of the smartphone, you will see three images. The main image is where, once I start the demo, we will reconstruct the scene that's shown in front of you. On the lower left-hand side, to my left, is the image as seen by the cell phone's own camera. And on the lower right-hand side, to my right, is also a camera image, but from the IR camera of the device I mentioned earlier.

So with that, I'll go ahead and kick off the demo. What you'll see now is that I'm going to pan this assembly, with the camera and the structured-light sensor, around the scene. And I'll go ahead and stop it here. What's happened now is that we've actually captured the 3D scene and reconstructed it in real time. See that? It kind of looks like a picture, but it's really not. What we're able to do is apply very intelligent algorithms to the combined data from the camera, the 3D depth sensor, and the IMU, and with that faithfully recreate, in real time, a reconstruction of the 3D scene. That is the essence of 3D reconstruction, or what's also known as visual SLAM, simultaneous localization and mapping.

So why is this important, and what does it do for us? This is important because, as I said earlier, this is the first of many stages required to develop really compelling applications for machine and computer vision. For example, if you're wearing an AR/VR headset and you want to place virtual objects in the space in front of you, you need to know where you're touching, where you're looking, and where objects are moving. Or if I'm a robot navigating a city street to deliver something, it's the same kind of thing: I have to know where I am, where I'm going, and what's around me. So the machine has to be able to see, know where it is, know where it's going, and reconstruct the scene around it; our customers and partners can then develop applications on top of that to navigate and to identify objects and obstacles. In the case of drones, you want to avoid, say, power lines. In the case of robots, you want to avoid people or other objects. And as I said, in the case of AR and VR, you want to be able to manipulate virtual objects in physical space. That's the essence of 3D reconstruction and V-SLAM, or visual SLAM.

So with that, I want to thank you for your attention. Should you have any questions, please feel free to come chat with me. And, you know, we look forward to working with all of you to help our customers and partners deliver great products.
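One closing sketch to tie the "where am I, where am I going" language to something concrete: the localization half of SLAM comes down to estimating a small rigid motion between consecutive frames and composing those motions into a global pose. The estimate that feeds integrate() below is hypothetical shorthand for the front end described in this demo (features plus depth plus IMU); everything else is standard homogeneous-transform arithmetic.

    import numpy as np

    pose = np.eye(4)   # 4x4 homogeneous transform; start at the map origin

    def integrate(relative_motion):
        """Fold one frame-to-frame motion estimate into the global pose."""
        global pose
        pose = pose @ relative_motion    # compose rigid transforms
        position = pose[:3, 3]           # "where am I"
        orientation = pose[:3, :3]       # "which way am I facing"
        return position, orientation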