My name is Julius Tuomisto, this is Janne Karhu, and a third colleague is presenting with us as well, so we're three people. This talk is about markerless motion capture in Blender. We're going to give a little bit of background, then talk about the state of the art, then move on to our software, which is called NI mate, and the new release of it that we did this August. The fun part is the demos: we have two demos with us today, and we're also lucky enough to have a workshop tomorrow morning at 10 o'clock. So for those of you who are interested in 3D cameras and what you can do with them, especially with Intel RealSense cameras, you can take part in that workshop. I'll say a bit more about that later.

Like I said, we're three people at the moment, a tiny little company from Helsinki, Finland. I'm Julius, this is Janne, and our colleague will join us on stage a little later. We are specifically a company that develops applications that make use of computer vision and 3D cameras. We've been doing this for the past four or five years, and, for those interested in how to create a business out of 3D software like Blender, our revenue comes from licensing software and doing custom development for clients.

Just to give a brief background: most people in the room are probably well aware of what motion capture and performance capture are. But to summarize, today's performance capture, instead of producing a generic output and simply replacing animation in the workflow of a film or production, really tries to get to the core of the actor's own performance. Those of you who are familiar with these systems know that they are relatively expensive and time-consuming to set up.

Markerless systems are a more recent development. If you think of the classic image of motion capture, most people immediately picture people in funny suits with tiny markers, moving around in a studio environment. Markerless systems are different: they use computer vision to go for an easier, faster setup time. I'll hint at some newer systems that don't have these problems as much, but traditionally markerless systems are less accurate than marker-based solutions, which essentially means more cleaning. If you've ever worked with motion capture output, you know it usually requires cleaning, so it's definitely not always a substitute for animation.

Examples of markerless capture systems apart from our own are the Russian software iPi Soft and the German system The Captury. To be honest, from what I've seen, The Captury is probably the most developed markerless motion capture system on the market today.

Moving on to our own software, NI mate, which has now been around for about three and a half years; we released the beta in, I think, January 2012. The name NI mate comes from "natural interaction mate". In Finnish we always pronounce it like "animate", but I later realized that when we named the software, it doesn't necessarily come out that way in English.
As a piece of software, NI mate doesn't aim to compete with the professional solutions on the market; instead it tries to provide an entry-level point for people to get in touch with motion capture and use it in easy workflows. What NI mate does, essentially, is that instead of relying on RGB input, we've hooked into depth sensors from the beginning: the earliest version was essentially a Kinect version, so you could use the Kinect for Xbox 360. In the new version we've introduced a number of different new sensors into the software, each with different qualities. We're going to do a demo with the Kinect for Windows, for example, which has some fantastic qualities that the Kinect for Xbox sensor didn't have. But to be honest, we've found that all of these sensors have strong points and weak points, and when you're doing client work you tend to learn the hard way that you cannot do everything with them. Most of the sensors we work with, actually all of the ones supported by the software, are infrared sensors, so for those of you with any background in using infrared cameras, you know they can be difficult to work with.

One of the nice points about our tool, apart from being super easy to set up, is that we support a number of these different sensors, so you can mix them: if you want to use the Leap Motion with a Kinect, or a RealSense with a Kinect for Xbox, you can essentially do that.

As for use cases, instead of focusing solely on motion capture itself: markerless solutions are nice in the sense that they offer more use cases than traditional marker-based motion capture. Obviously you're not going to ask somebody who comes to a shopping mall to put on a funny frog suit with markers and then move around to play a game. One of the nice things about markerless solutions is exactly that you can employ them in environments where you wouldn't use a traditional motion capture system.

We have a range of plugins for NI mate. There's one for Blender, obviously, that's why we're here; our background is very much in Blender, but we have plugins for other software as well. For the past one and a half years we've worked a lot on the new version of NI mate, and the cool thing about this new version is that we are able to support a whole range of these sensors. So today we're announcing support for Intel RealSense. Personally I feel that RealSense is a boon for us, in that it's the first system that's actually been integrated into a range of laptops and devices. For us it's obviously an interesting market, but I think it also legitimizes the 3D camera as an input device, because you can actually buy a laptop off the shelf with one of these cameras built in. But today, basically, we're making NI mate available for anybody with a RealSense camera.
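To give a concrete idea of what the receiving end of this looks like: NI mate streams its tracking data to other applications over OSC, and the minimal standalone sketch below just listens for that data. This is not our Blender plugin, only an illustration, and it assumes the simple output style where each joint arrives as an OSC message whose address is the joint name with three float position arguments, on UDP port 7000; both the message format and the port are configurable in NI mate and worth double-checking against your own setup.

```python
# Minimal standalone listener for NI mate style OSC data (a sketch, not the
# official plugin). Assumes each skeleton joint is sent as an OSC message
# whose address is the joint name (e.g. /Head) with three float position
# arguments, on UDP port 7000. Requires: pip install python-osc
from pythonosc import dispatcher, osc_server

def on_joint(address, *args):
    # Expect an x, y, z position for one joint; ignore anything else.
    if len(args) >= 3 and all(isinstance(a, (int, float)) for a in args[:3]):
        name = address.lstrip("/")
        x, y, z = args[:3]
        print(f"{name}: ({x:.3f}, {y:.3f}, {z:.3f})")

disp = dispatcher.Dispatcher()
disp.set_default_handler(on_joint)   # catch every joint address

server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 7000), disp)
print("Listening for NI mate OSC data on UDP port 7000 ...")
server.serve_forever()
```

The actual Blender add-on does something along these lines internally, mapping the incoming joints onto objects in the scene; a listener like the one above is just a quick way to verify that data is flowing from the sensor.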
So now, because of this work, you can run your RealSense and use Blender with it. To demonstrate what you can do with that, we're first going to do a Kinect for Windows demo. This is actually client work that we did for Silja Line, a big ferry company operating between Helsinki and Stockholm, and it demonstrates an actual business use case for these sensors. It's a game, built in the Blender Game Engine, that we've been running in kiosks around Finland and now also on the ships themselves. The nice thing about this one is that we're also using smile detection: it's not only a Kinect game, we're also taking input from the face, doing some facial expression recognition and applying it in an actual client case. So, shall we do it? Janne is going to take over.

Okay, so here we have NI mate already running, connected to the Kinect for Windows sensor here, and then this game, which is all done in Blender. There's quite a lot going on, but let me show what it looks like when it's running. Let's see. Okay, now Julius is detected by the sensor. Actually, can you first move a bit away so it isn't tracking you? This normally runs on a portrait display, which is why the aspect ratio is like that. Once it detects that there's somebody there to play, it first gives some instructions, and you can see directly that Julius can move the character, and then you play half a minute of the game to see how many times you can hit the ball over the line. One of the nice things is that we also have smile detection, or emotion detection, although, I don't know, it could be the lighting, there are too many shadows. I'm doing my best to smile. Harri isn't smiling. What should happen is that the seal smiles when you smile, which gives that extra bit of nice interactivity.

So essentially this is consulting work we did for a client, just an example of the kind of markerless motion capture use case you could deliver. The fun part for us, of course, is that it's a game engine project running in an actual environment: you can play this game on the ships between Helsinki and Stockholm today. Unfortunately the smile detection is not working right now. Demo effect.

Just to keep on schedule, let's see how short we can keep this talk. But we have another demo, and this one is more fun. I can't see it properly from here, can you do something about that? So what we're now going to do is a little bit of animation using my own face. It would be better if Ton was around. Is Ton Roosendaal here? Is Ton Roosendaal in the house? No? Let's call Ton. This demonstration gives a two-way look into 3D cameras: you get two use cases in one. No Ton? So, we have this model here. It seems to be a human being, and if we enable the texture, we can see that it's Ton. If anybody remembers from last year, we were scanning people there. First of all I'm going to enable the plugin we are using with NI mate. This is actually the new plugin; it's not available yet, but we'll put it online soon.
What the plugin does is create this new menu, and from it I can start receiving data from NI mate. If I go here and first do a tiny bit of calibration... okay. Right now NI mate is detecting my face; I can turn my face, and there's this display with my facial gestures on it. And if I now go into Blender: the system we are using here is quite simple. We take these landmark points, turn them into facial action units, which are then received in Blender, and we use them to control some very simple shape keys. He's a gymnast too, Ton. Very flexible.
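As a rough illustration of that last step, the receiving end essentially has to write incoming action unit values into shape keys on the face mesh. The snippet below is only a sketch: the mesh name, shape key names and the mapping are made up for the example, and the real add-on (which isn't published yet) may work quite differently.

```python
# A rough sketch of driving shape keys in Blender from facial action unit
# values, however they arrive from NI mate. The mesh name, shape key names
# and the mapping below are hypothetical, purely for illustration.
import bpy

# Hypothetical mapping from action unit names to shape key names on the mesh.
AU_TO_SHAPE_KEY = {
    "jaw_open": "MouthOpen",
    "smile_left": "SmileL",
    "smile_right": "SmileR",
    "brow_up": "BrowsUp",
}

def apply_action_units(mesh_name, action_units):
    """Write 0..1 action unit values into the corresponding shape keys."""
    obj = bpy.data.objects[mesh_name]
    key_blocks = obj.data.shape_keys.key_blocks  # assumes shape keys exist
    for au_name, value in action_units.items():
        shape_name = AU_TO_SHAPE_KEY.get(au_name)
        if shape_name and shape_name in key_blocks:
            # Clamp to the default 0..1 shape key range.
            key_blocks[shape_name].value = max(0.0, min(1.0, value))

# Example: values that could have been received from the face tracker.
apply_action_units("FaceMesh", {"jaw_open": 0.4, "smile_left": 0.8, "smile_right": 0.7})
```

In practice something like this would run every time a new set of values arrives, for example from a modal operator or a frame change handler.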
Anyway, we'll be demonstrating this more in tomorrow's workshop, and if you're interested you can even try it there. Can we put on the last slide? We're trying to keep this one on time, so we'll run just short of 20 minutes.

Basically, you can download NI mate today to get started with Intel RealSense, or with any of your existing sensors. We support the Leap Motion, Kinect for Windows, Kinect for Xbox and all OpenNI devices. If you're taking part in the actual event, which is always awesome (this is, I think, my seventh time here), then definitely take part in tomorrow's workshop. It's at 10 o'clock, which is a terrible time, but if you can make it, take part and unleash your 3D camera. In the workshop we'll be going through these actual use cases, and we'll make sure that Harri smiles tomorrow. We'll be doing these demonstrations again and giving a more detailed background on them. The workshop is a cooperation between us and Intel, so we also have Yunus Kulberg from Intel, who will tell us a little bit about the RealSense cameras. We'll also be giving out some SDKs so you can get started; you could basically walk out with a speaking Ton tomorrow if you want. And we'll show some examples of things we haven't yet released for RealSense, things you could do as a developer in the future. For example, we'll be talking about 3D scanning: if somebody wants to get scanned tomorrow, we can do that, and then we can hook you up and animate you, so you can talk with the digital version of Ton, and so forth. So we'll be giving a more detailed talk about all the things we presented today. Hopefully you'll make it, and if not, thank you for your time.

There's a little bit of time left, so if anybody has any questions, please feel free. Yeah, I could maybe come down to the floor here. Yeah, that was pretty much the main reason why we started working on NI mate version 2: the old NI mate didn't really allow you to have multiple sensors running at the same time, but now, if your computer is powerful enough, you can basically run as many sensors as you want. But if your question is whether we do sensor fusion for the motion capture, unfortunately not. We're aiming for that somewhere in the future, but at the moment it's a simple system where all the input just comes in and you need to filter it yourself. Okay, all right, thanks.

Another question: can you record two persons at the same time, meaning two people interacting, like in a martial arts scenario or something? Yeah, it really depends on the sensor. With the Kinect for Windows you could do this, and you could definitely do it with the old Xbox sensors too; it depends on the use case. Because we rely on the input coming out of the sensor SDKs, the tracking is only as good as those SDKs, essentially. Thanks.

Any more questions? Okay, you can also catch us afterwards, just come talk to us if you're too shy today, we'll be here. Thanks.
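As a footnote to the multi-sensor question above: since NI mate currently doesn't fuse data from several sensors, any merging or filtering happens on your side. The sketch below is purely hypothetical and shows one simple way to pick, per joint, the more trustworthy reading from two skeleton streams; the data layout and the confidence field are assumptions, not something every sensor SDK provides.

```python
# Hypothetical per-joint selection between two sensors streaming the same
# skeleton. NI mate does not fuse the data for you, so something like this
# would live in your own receiving code. Joint readings are plain dicts here.

def merge_joint(joint_a, joint_b):
    """Pick the more trustworthy of two readings for the same joint."""
    if joint_a is None:
        return joint_b
    if joint_b is None:
        return joint_a
    # Prefer the reading the sensor itself was more confident about.
    if joint_a.get("confidence", 0.0) >= joint_b.get("confidence", 0.0):
        return joint_a
    return joint_b

def merge_skeletons(skel_a, skel_b):
    """Merge two {joint name: reading} dicts coming from different sensors."""
    merged = {}
    for name in set(skel_a) | set(skel_b):
        merged[name] = merge_joint(skel_a.get(name), skel_b.get(name))
    return merged

# Example with two partial skeletons from a front and a side sensor:
front = {"Head": {"pos": (0.00, 1.70, 0.10), "confidence": 0.9}}
side = {"Head": {"pos": (0.02, 1.69, 0.10), "confidence": 0.6},
        "Left_Hand": {"pos": (-0.40, 1.10, 0.20), "confidence": 0.8}}
print(merge_skeletons(front, side))
```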