agency where I did electrical service, buying and selling homes and listing them online.

So the Kinect, for those of you who don't know what it is, is an add-on for the Xbox 360. It came out in November, and it's actually been the fastest-selling consumer electronics device ever: they sold about 8 million Kinects in its first few months. One of the interesting things about it is that Microsoft didn't actually develop the technology; they licensed it from a company called PrimeSense, who had been shopping it around for a while. I think they even pitched it to Apple, but Microsoft ended up seeing a use for it in video games. So the Kinect lets you control your video games with no controller. It's kind of like the Wii, except you just use your body: wave your arms, jump up and down, make a fool of yourself. And the Kinect can actually take pictures of you while you're playing, so you can preserve those moments in time when you're playing like a moron.

How does it work? There are two cameras on the front of the Kinect: one is an infrared camera and one is an RGB camera. There's also an infrared laser projector mounted off to the side, and by projecting micro-patterns with the IR projector and doing some math (as was mentioned yesterday), it can calculate depth on the fly using the IR camera. Both of the cameras have a maximum resolution of 640 by 480 and capture at 30 frames per second; if both of them are running, they actually saturate USB 2.0. This is an example of what the patterns look like: you can kind of see how the frame is split into different regions, with patterns projected onto the scene to gauge depth.

libfreenect is the open source driver that was created for the Kinect so you can use it from your own computer. Within about a week of the Kinect's release, somebody had created this driver that lets you talk to the Kinect and get data from it. It's released under a dual Apache/GPL license. You can download it at openkinect.org, or if you're on a Mac, you can install it through MacPorts.

So I'll quickly show you a tool that comes with libfreenect called glview. It shows you a live view of the RGB camera and the IR camera, and it uses the depth capabilities built into the Kinect to highlight areas that are closer to the camera. You can control the Kinect from this program by hitting the different number keys: you can change the LED light, and you can change the camera mode. Let's see... this one right here. Oh, here we go: this is the infrared mode. I guess it's too far away to pick anything up back there, but you can kind of see the patterns it's projecting on my hand, all the little dots. It's pretty cool. So that's libfreenect and the glview demo application that comes with it.

So Scott came along and decided to make a wrapper for libfreenect that you can use from Ruby. It supports getting the depth map from the Kinect as well as image data, and you can also control the LED and tilt the Kinect. Unfortunately, I must have screwed up my install of the libfreenect headers, so I can't build it to demo it live, but let's look at the code real quick so you can see how easy it is to use. The first example requires the freenect module, initializes a frame buffer from the Kinect, and then just loops, grabbing the depth data and the image data from the Kinect and printing them to the screen.
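To give a flavor of that first example, here's a minimal sketch of the loop just described. This assumes a hypothetical API: the Freenect module name and the depth_frame/rgb_frame methods are stand-ins for whatever the wrapper actually exposes.

```ruby
# Minimal sketch of the depth/image polling loop described above.
# NOTE: Freenect, .open, #depth_frame and #rgb_frame are hypothetical
# names standing in for the real wrapper's API.
require 'freenect'

kinect = Freenect.open            # grab the first Kinect on the bus

loop do
  depth = kinect.depth_frame      # 640x480 buffer of 11-bit depth values
  image = kinect.rgb_frame        # 640x480 buffer of RGB pixels
  puts "depth[0]=#{depth[0]}  rgb[0]=#{image[0]}"
end
```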
The next example is a little more involved: it uses OpenGL to actually render the depth frame buffer. It's the same idea, basically. He loops over the frame buffer from the Kinect, grabbing depth data repeatedly, and then uses OpenGL to render it. Okay, so that's libfreenect.

OpenNI is a driver that was released by PrimeSense, the company that created the technology behind the Kinect. They released it at the beginning of December, I think, after the libfreenect effort had already begun, and it's the official driver from PrimeSense. The NI stands for "natural interaction," the idea that you can interact with technology without having to have some sort of device that you touch. It's licensed under the LGPL. I think the biggest reason to use this over the other driver is that PrimeSense also has middleware called NITE that you can use with it, which does skeletal data capture. You can actually capture gestures that somebody in front of the Kinect is making: they have swipe gestures, wave gestures, and a click gesture. I think that's really exciting. It's futuristic. The problem I saw with the NITE middleware is that the headers are all C++, which presented a bit of a problem as far as making a Ruby library for it. There's also a free license key that you have to use in order to develop with it.

But I want to show you something else that's really cool, built on the NITE middleware, called OSCeleton. It's a binary built with the NITE middleware, and you can use it to capture skeletal data from the Kinect and broadcast it as OSC messages to another program or machine. So I want to try another demo here. Okay, so we've started the OSCeleton program, and I created a ruby-processing sketch that takes the OSCeleton data and renders it to the screen. I'll run that example in this other window; it takes a little bit of time to start up. Okay, so we have the ruby-processing sketch running here in this window, and we can see the output from OSCeleton. In order to get OSCeleton to actually recognize that I'm here, I have to make a pose and let it calibrate my skeleton. It should only take a few seconds, so I'll try to stay in position. ... That's unfortunate. Ah, there we go. So OSCeleton is just sending skeleton packets over OSC to the ruby-processing sketch I created, and I'm taking those in real time and rendering them with ruby-processing.

Let's actually look at that code real quick so you can see what it is I'm doing. The first thing I do is require a skeleton class, which is basically just a container for all the different joints, and include the osc-ruby gem, which lets us receive the OSC data being broadcast on port 7110. I set up a method that listens for joint messages, with a case statement that checks which joint is being updated; I unwrap the three-dimensional data from the message and update the joint model with it. Then there's the processing code: I set up the OpenGL renderer and set the frame rate. In the draw method, which gets called implicitly every time the sketch needs to redraw, I set some lighting, make sure we're not trying to render before we have any joints, and then loop through each joint and position it according to its three-dimensional attributes.
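Here's a condensed sketch of what that listener-plus-renderer might look like. OSCeleton does broadcast /joint messages (joint name, user id, x, y, z) over OSC on port 7110, but the sketch class, the joint bookkeeping, and the coordinate scaling below are my own simplified stand-ins, not the exact code from the talk.

```ruby
# Condensed sketch of a ruby-processing sketch fed by OSCeleton.
# Assumes OSCeleton's /joint messages on port 7110; the joint storage
# and coordinate scaling are simplified placeholders.
require 'osc-ruby'

class SkeletonSketch < Processing::App
  def setup
    size 640, 480, P3D
    frame_rate 30
    @joints = {}                        # joint name => [x, y, z]

    @server = OSC::Server.new(7110)
    @server.add_method '/joint' do |message|
      name, _user, x, y, z = message.to_a
      # Scale OSCeleton's roughly normalized coordinates to screen space
      @joints[name] = [x * width, y * height, z * -100]
    end
    Thread.new { @server.run }          # listen without blocking draw
  end

  def draw
    background 0
    lights
    return if @joints.empty?            # nothing to render until calibrated
    @joints.each_value do |x, y, z|
      push_matrix
      translate x, y, z
      sphere 10                         # draw each joint as a small sphere
      pop_matrix
    end
  end
end
```

You'd run something like this with rp5 run while OSCeleton is broadcasting, and each incoming joint message just moves the corresponding sphere.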
Alright, so I think the abstract I submitted for this talk said something about using Ruby to do motion capture, for a video game or something like that. I decided that probably wasn't going to be super useful for most people, so instead I decided to write a Ruby wrapper for NITE, the middleware that PrimeSense made, and I called it Runect. Anyone want to see some dark magic? C++, I mean. Essentially, Runect is just some poorly written wrapper code; to bind the C++ down to a Ruby module, I had to use this gem called Rice. It's basically just a starting point, but we can look at it real quick.

Okay, so we require the module and instantiate Runect, and we can get the depth data from it. The thing I was really interested in, though, was actually tying into the middleware's gestures. So I define a proc and have it do something, and then I pass that proc into a global function that I created in the library. Rice is actually a great project, but it doesn't have any support for abstracting procs, so I had to create a global function in order to be able to call Ruby procs from C++.

So if we run this, hopefully it doesn't have as hard of a time with the gesture I created. ... Great, it quit on us. It takes an awkward amount of time to actually start up. ... Okay, so it's initialized. We have the context, which is basically a frame buffer for the Kinect; I'm just printing out the class name here and the resolution of the depth map. Now I'll step back over here in front of the Kinect and wave... and there it is, saying hi to me: Ruby, by way of C++. If anyone here has experience with Rice, I'd really love to work with you if you have any interest in helping make this a better library. Let me see if there's anything else on my slides.

Oh yeah, the mocap thing. Another reason I decided not to do it is that somebody has already built a really good program for that with the Kinect, called Brekel. He does motion capture for a living and built this on the side. It's a Windows program, unfortunately, but it's open source, and you can use it to capture skeletal animation data in real time and export it to Autodesk formats, or import it into Blender if you like Blender. So I figured that was a problem that's already been solved.

Alright, thanks for having me. Anyone have any questions? Thank you very much.
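For reference, here's roughly what that Runect script boils down to. The names below (Runect.new, depth_map, register_gesture_callback) are approximations reconstructed from the talk, not a documented API.

```ruby
# Rough reconstruction of the Runect demo script; all names here are
# approximations of the wrapper's API, not documented methods.
require 'runect'

runect = Runect.new                  # stands up the OpenNI/NITE context
depth  = runect.depth_map
puts "#{depth.class}: #{depth.x_res} x #{depth.y_res}"   # class name + resolution

on_wave = Proc.new { puts 'Hi!' }    # runs when NITE recognizes a wave

# Rice has no way to hand a Ruby Proc into C++ directly, so the wrapper
# exposes a global registration function that stores the Proc and
# #call-s it from NITE's C++ gesture callback.
register_gesture_callback('Wave', on_wave)

sleep                                # keep the process alive for callbacks
```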