So, hi, my name is Celia. We are going to talk about how to work with 3D sensors using PyOpenNI, together with PyGame, NumPy, Matplotlib and ZeroMQ, and on the web side JavaScript and three.js.

First of all, I'm a PhD student in computer science back in Argentina. I'm also an assistant professor at the University of Patagonia, but my main work is at the Patagonia Research Center and at the machine learning lab in Bahía Blanca. Most of my work is bioinformatics stuff and serious game development. I was also an organizer of the first SciPy conference in Argentina, and this year I was involved with the second one.

So, let's talk about the hardware we are going to use. As a minimum setup, we need an infrared projector and an infrared depth sensor. Depending on the model and the brand you buy, you can get a lot more capabilities, such as a color sensor for texture information, or microphone arrays so you can also work with audio. There are also mobile 3D sensors, but I'm not sure what state they are in and whether they support OpenNI.

So, what is OpenNI anyway? It's a framework that lets you forget about the specific device you are using: you write your code once, and it works with any of the devices we just saw. And it's not only for talking to the devices; it's also the glue for middleware you can put between the driver and your final application. We are going to use NiTE, which is a bunch of computer vision algorithms that do player segmentation, motion tracking, gesture recognition and more.

Where can you run it? On the most popular platforms, such as Linux, Mac and Windows. I also got it running on a Raspberry Pi: it's pretty easy to set up and gives great results, but I haven't had the opportunity to test it on other boards.

But what can you do with it? There are a lot of areas. The most popular, of course, is gaming, but you can also use natural interfaces to control existing programs like media centers, or even your PC. On the artistic side, there is motion capture for programs like Blender, so you don't have to hand-craft every animation: you can record yourself and then map that skeleton onto whatever character you like. You can also use it in robotics: give your robot 3D vision so it can analyze the scene, avoid objects, or pursue specific objects. You can also use it as a very cheap 3D scanner; if you're not looking for high resolution, it's a pretty cheap way to get one. I think the most interesting topic is augmented reality: you can turn your home, your entire place, into a playing field, whether for games or for educational purposes, like museums and things like that.

So, how can you call OpenNI from Python? You have two options. There are the PyOpenNI bindings, which work with OpenNI 1.5; I think they are working on supporting OpenNI 2. And there are the PrimeSense bindings, the official ones, which only work with OpenNI 2.

When we start writing a Python script that talks to a 3D sensor, we find a very similar pipeline in all applications. First of all, you have to create a context. The context is a box into which you throw all the OpenNI pieces your application needs. You can initialize the context from a config file, so you can say: OK, this device has this special node I want you to use. Or maybe you don't have your 3D sensor at hand, but you have a recording: you can record any scene, moving around as you like, take that recorded file, and next time load it into your context, and you have the input of a 3D sensor.
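To make that concrete, here is a minimal sketch of the context setup with the PyOpenNI bindings (the `openni` module). The two alternative initializations are hedged as comments, since I'm assuming PyOpenNI wraps the corresponding OpenNI calls, and the file names are hypothetical:

```python
# Minimal context setup with PyOpenNI (bindings for OpenNI 1.5).
from openni import Context

ctx = Context()
ctx.init()  # plain initialization, using the default device

# The two alternatives mentioned above; assuming PyOpenNI exposes the
# corresponding OpenNI calls, they look roughly like this
# (file names are hypothetical):
# ctx.init_from_xml_file('config.xml')        # custom node configuration
# ctx.open_file_recording('living_room.oni')  # recorded session as input
```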
After you create your context, you are going to want to add some generators. There are low-level generators, like the image, depth and infrared generators, which are closest to the device, and high-level generators, like the user generator and the hand generator, to which you can attach callbacks. So if you want to do something specific with these generators, that's where you program it.

After you create and start all your generators, you go into a loop. In this loop you update the data the sensor gives you, and you get the chance to wait for a particular generator: you can say, OK, only update when the image generator has finished a frame. Next, with the data updated, you visualize it, or play with whatever positions you are tracking. After your application is done and you want to quit, it's good practice to shut down all your generators and release your context. There are several ways: you can ask each node to stop itself, or you can ask the context to stop all the generators you put into it.

So OK, let's see a demo. This is the first one. We import all the libraries we talked about at the beginning. We create our context and initialize it. Then we create the image generator and tell it: OK, live in this context. This is the function that starts generating data, and this is the function that gets the RGB data from the camera. By default you get a BGR map, so you have to load it, split the channels and swap them, so you have RGB data. Here we return a NumPy array, so we can plot it. One image is not much to see, so instead of plotting a single frame we can combine it with PyGame and build a live camera view. This is a classic PyGame loop: you initialize, create a display, and while running, the only things you do are update the data from the 3D sensor and call that capture function we wrote to grab the RGB frame. So you have a live cam, pretty simple, in just a few lines. And, like the good programmers we are, we stop the generator and shut down the context.

Then we create a new one. Don't worry, you can put a lot of generators in one context, but I want this demo to go line by line. So we create another context and a depth generator. You can set the resolution if you want, the frames per second, and a lot of other attributes you can find in the documentation. We start the generation, and we write a similar function that gets the data from the sensor as an array and reshapes it so we can see it in a nice form. Then you can plot it if you want.

For the infrared camera we do exactly the same, just with an infrared generator: we start the context, use the same capture function, and we get the infrared view. When you finish, you have to stop your context.
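Going back to the first demo, here is a rough sketch of the live RGB view, following the PyOpenNI sample API as I remember it; VGA resolution (640x480) is assumed:

```python
# A rough sketch of the live RGB cam; assumes the default 640x480 mode.
import numpy as np
import pygame
from openni import Context, ImageGenerator

ctx = Context()
ctx.init()

image = ImageGenerator()
image.create(ctx)
ctx.start_generating_all()

def capture_rgb():
    # The raw map arrives as a flat BGR byte string: reshape it,
    # reverse the channel order to get RGB, and wrap it for PyGame.
    bgr = np.frombuffer(image.get_raw_image_map_bgr(),
                        dtype=np.uint8).reshape(480, 640, 3)
    rgb = bgr[:, :, ::-1].swapaxes(0, 1).copy()  # PyGame wants (x, y, 3)
    return pygame.surfarray.make_surface(rgb)

pygame.init()
screen = pygame.display.set_mode((640, 480))
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    ctx.wait_one_update_all(image)  # block until a new camera frame
    screen.blit(capture_rgb(), (0, 0))
    pygame.display.flip()

ctx.shutdown()
```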
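The depth demo looks very similar; a sketch, again assuming the PyOpenNI API, where `get_raw_depth_map_8` gives one byte per pixel, which is enough for a quick look with Matplotlib:

```python
# A sketch of the depth demo: grab one frame and plot it.
import numpy as np
import matplotlib.pyplot as plt
from openni import Context, DepthGenerator, RES_VGA

ctx = Context()
ctx.init()

depth = DepthGenerator()
depth.create(ctx)
depth.set_resolution_preset(RES_VGA)  # 640x480
depth.fps = 30

ctx.start_generating_all()
ctx.wait_one_update_all(depth)  # block until the first depth frame

# Reshape the flat byte buffer into an image-shaped array and plot it.
frame = np.frombuffer(depth.get_raw_depth_map_8(),
                      dtype=np.uint8).reshape(480, 640)
plt.imshow(frame, cmap='gray')
plt.show()

ctx.shutdown()
```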
So let's go to skeleton basics. Inside the user generator you have capabilities for tracking the skeleton and analyzing poses. For that you can choose a profile: maybe you don't want to track the whole skeleton, just a half, and you pick which half. OpenNI gives you a set of joints; you can use all of them or just a subset. And you have functions to start and stop the calibration and the tracking.

So let's go to another demo. We import the same libraries, define the pose we are going to use for calibration, and then I make a list of the joints I want to track. If you want to do something special with a particular joint, you can use a dictionary to get the values back. We create the context again, plus depth, image and user generators, all of them associated with the same context. We ask the user generator for the skeleton and pose-detection capabilities, and then we have to write some callbacks. When there is a new user, I want you to start looking for the pose; and when the user is gone, you can say bye-bye, or remove an image, or whatever you want. For the pose capability we define a callback for 'hey, we detected the pose you passed us', and then you can request the skeleton calibration. When the calibration is complete, if it was successful you start tracking the skeleton; if not, you ask the user to make the pose again. After you write your callbacks, you have to register them, and then I ask for the full skeleton profile. Then we have a little function that returns the position values of the joints the 3D sensor reports. This is the same old RGB capture, so we can see the image. We start generating all the data, and this is the same PyGame loop; the only thing that changes is that on each iteration we get the joints, so we can draw circles on the display. You can check the NiTE documentation for how good the environment has to be for the skeleton tracking to work well. And then, as always, shut down all your context.

The last demo we are going to see about these generators is about the gesture and hand generators. For gestures you have something like five or six preloaded ones, like waving, swiping, clicking and a bunch of others, and you can start and stop the tracking whenever you want. So we import the same libraries, create our beloved context, and create depth, image and hands generators. You can see that with the gesture generator you have to add the gestures you want to track; if the basic ones are not enough, you can program your own. So we write the callbacks. These callbacks are: OK, we detected the gesture you told us to look for, and it's a wave. And for the hand generator: hey, when you create a new hand, please do these steps, and the same when you update it and when you destroy it. This demo just shows a PyGame sprite, so I tell it: on a new hand, create this class we are going to see below; on update, just move the image to your hand's position; and on destroy, remove it from the display. So you register your callbacks, as always. This is just the hands part, and the files are the images I want to show. This is a tiny class, a PyGame sprite; the only thing it does is load the image and scale it so you can track your hand position. This is the same function to see the camera, and this is the main loop; the only thing that changes here is that in PyGame you have to update your sprites. So we see something like this: when you wave, the sprite shows up and follows your hand. Then shut down your context.

So, OK, let's see the final demo. Here we mixed some technologies. From PyOpenNI we use the gesture and hand generators, but when we write our callbacks, in each callback we build a Python dictionary, encode it as JSON, and send it through ZeroMQ.
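Going back to the skeleton demo for a moment, here is a trimmed sketch following the PyOpenNI skeleton sample; 'Psi' is the classic calibration pose, and the callback signatures are as I remember them from those bindings:

```python
# A trimmed sketch of the skeleton demo, after the PyOpenNI sample.
from openni import (Context, UserGenerator, SKEL_PROFILE_ALL,
                    SKEL_HEAD, CALIBRATION_STATUS_OK)

POSE = 'Psi'  # the classic calibration pose

ctx = Context()
ctx.init()
user = UserGenerator()
user.create(ctx)

skel = user.skeleton_cap        # skeleton capability
pose = user.pose_detection_cap  # pose-detection capability

def new_user(src, uid):
    pose.start_detection(POSE, uid)      # new user: look for the pose

def lost_user(src, uid):
    print('Bye, user %d' % uid)

def pose_detected(src, detected_pose, uid):
    pose.stop_detection(uid)
    skel.request_calibration(uid, True)  # pose found: calibrate

def calibration_complete(src, uid, status):
    if status == CALIBRATION_STATUS_OK:
        skel.start_tracking(uid)         # success: track the skeleton
    else:
        pose.start_detection(POSE, uid)  # failure: ask for the pose again

user.register_user_cb(new_user, lost_user)
pose.register_pose_detected_cb(pose_detected)
skel.register_c_complete_cb(calibration_complete)
skel.set_profile(SKEL_PROFILE_ALL)       # track the full skeleton

ctx.start_generating_all()
while True:
    ctx.wait_one_update_all(user)
    for uid in user.users:
        if skel.is_tracking(uid):
            head = skel.get_joint_position(uid, SKEL_HEAD)
            print(head.point)            # (x, y, z), in millimeters
```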
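And the gesture-plus-hands part, again a sketch along the lines of the PyOpenNI hands sample; in the real demo the create/update/destroy callbacks drive the PyGame sprite:

```python
# A sketch of the gesture + hands demo; callback signatures assumed
# to match the PyOpenNI hands sample.
from openni import Context, GestureGenerator, HandsGenerator

ctx = Context()
ctx.init()

gesture = GestureGenerator()
gesture.create(ctx)
gesture.add_gesture('Wave')          # one of the built-in gestures

hands = HandsGenerator()
hands.create(ctx)

def gesture_detected(src, name, id_point, end_point):
    hands.start_tracking(end_point)  # hand off to the hand tracker

def gesture_progress(src, name, point, progress):
    pass

def hand_create(src, uid, pos, t):
    print('New hand %d at %s' % (uid, pos))  # e.g. spawn a PyGame sprite

def hand_update(src, uid, pos, t):
    pass                             # e.g. move the sprite to pos

def hand_destroy(src, uid, t):
    print('Hand %d is gone' % uid)   # e.g. remove the sprite

gesture.register_gesture_cb(gesture_detected, gesture_progress)
hands.register_hand_cb(hand_create, hand_update, hand_destroy)

ctx.start_generating_all()
while True:
    ctx.wait_any_update_all()
```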
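For that final demo, the bridge from the callbacks to the web side could look roughly like this, assuming pyzmq; the port number and the message fields are made up for illustration:

```python
# A rough sketch of the callback-to-web bridge, assuming pyzmq.
import json
import zmq

zctx = zmq.Context()
sock = zctx.socket(zmq.PUB)  # publish updates; the Flask side subscribes
sock.bind('tcp://*:5555')

def hand_create(src, uid, pos, t):
    # Each PyOpenNI callback serializes a plain dictionary; the JavaScript
    # side decodes it and decides whether to create, move or delete objects.
    sock.send_string(json.dumps({'action': 'create', 'id': uid,
                                 'pos': list(pos)}))

def hand_update(src, uid, pos, t):
    sock.send_string(json.dumps({'action': 'move', 'id': uid,
                                 'pos': list(pos)}))

def hand_destroy(src, uid, t):
    sock.send_string(json.dumps({'action': 'delete', 'id': uid}))
```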
On the other side there is a Flask application that waits for these updates and sends them to a web page. The web page, in JavaScript, takes the message, decodes it and asks: what are you? Do you want to create a new object, move it, or delete it? For the 3D stuff in WebGL we use three.js. It's an amazing library: it gives you a high-level abstraction over WebGL without losing the cool stuff, like shaders. In three.js you can create cameras, objects, materials, a bunch of things, and even import and export objects or scene models. So it's pretty easy. I'm not going to show this code here because I don't have that much time, but it's all in the repo, it's commented, and it's pretty easy to read.

So we load some monkeys from a Blender model. When you wave, your hand appears, and when you touch one of the monkeys it disappears and you gain some points. It's not really difficult: you can go away, you can get close, and the more monkeys you destroy, the more monkeys appear. The game doesn't have an end, but it's a nice demo of what you can do with three.js and a few lines of code.

We do have a little time for some questions, if anyone has any. There is a microphone for that.

Q: You made a point of being very explicit that you should always call the shutdown method at the end of every session, so I assume that if you don't, horrible things happen. Does this support context managers, the `with` statement? Because then you wouldn't need to worry about it. And what happens if you don't shut down?

A: If you don't shut down, and you spend the whole day working with 3D sensors, at some point you'll say: hey, this is not tracking my head, what is happening? And then you start looking, and you have a lot of trash contexts that you never shut down; they are still alive and cause trouble.

Q: Are there any performance issues one should be aware of? Is there anything that would be difficult to do because of performance?

A: No. I mean, if you're running on a Raspberry Pi, of course you're not going to do the visualization there, because that would kill it; but if you just want to process the data and maybe send it to a web server, it works great. I can't think of performance problems coming from the 3D sensor analysis itself; the expensive part is elsewhere, for example if you are doing 3D scanning and you have to register a lot of point clouds.