Hi, so yeah, welcome. We're presenting bloop: rapid motion capturing using Blender and the Microsoft Kinect. Please be aware that our software is very sensitive, so we need silence when we do our demo. I hope that's all right.

So yeah, who are we? I'm Florian, and this handsome fellow is Nicolas, who just turned 25 today. Thank you very much. We are both students of the Digital Media program at the University of Bremen in Germany, and we just finished our bachelor theses with Blender, with those fancy topics up there that nobody actually cares about.

Now the question is: what is bloop? "Bloop is the name given to an ultra-low-frequency and extremely powerful underwater sound detected by the US National Oceanic and Atmospheric Administration in 1997", says Wikipedia. But not for us: for us it's a Blender loop station. Bloop is a Python add-on for Blender 2.59 that brings speech control, motion capturing and digital puppetry to home users, using the Microsoft Kinect.

Our motivation for this is obvious: keyframe animation takes a whole lot of time, and everyone who does animation has no time for experimenting, for just trying things out. You can't do that; it takes way too much time. Also, other artists, like actors or puppeteers, or just beginners, can't enjoy animation this way. And motion capturing is really expensive. At the university we have this motion-capture suit, and it costs like 10,000 euros a piece or something. It's just weird. So nobody can do that at home with standard hardware. Oh yeah, and if you're on your own, motion capturing is just a pain. By yourself, you have to run up to your computer, start recording, go to your tracking space, act your stuff out, run back to your computer... it just sucks.

So we thought a user should be able to fully animate a 3D character with a minimum amount of time, experience and technological knowledge required, and without breaking his or her workflow, by using different modes of interaction. The idea was to implement a loop station for animation. I don't know if you know these musicians who perform with a loop station in front of them, recording sound and layering it live; our idea was to transpose that to animation and simply swap the modal channels.
So this is how the usual music loop station works: you record sound, while physical motion, turning knobs, pressing buttons, controls the actual system. We flip that around: we use sound to control the system, and we record motion. Pretty easy, actually.

And this is how our system overview looks: we have the user, who talks to the Kinect and acts in front of the Kinect, and the Kinect just sends data to Blender.

Now we'll give you a tiny little demo. Like I said before, please be really quiet, because we don't know how fragile the speech recognition is. ... It recorded the animation, and now it's looping the animation we recorded just now. ... Changing calibration. ... Okay, so much for a quick demo.

So, our main features. We can create new mappings via gestures: the idea is that you somehow select a feature of a character, then you wave with your hand, or your foot, or whatever you want to map to it, and the system recognizes that and stores the mapping. So you create two or three mappings, whatever you like, and then you calibrate them. You saw how we calibrated: basically the system takes a snapshot of your current pose, plus some fancy stuff I don't want to go into; it's not that complicated, actually. Then you can quickly record animations, and also layer recordings onto different mappings, as you just saw. And you can record animations with more than one user acting on the same character, which is handy if you have, say, a character with six legs and three people, and every leg should move at the same time while you're recording.

This is what our system understands so far. You can tell it to go into mapping, calibration or recording mode by saying "mapping", "calibration" or "recording". There are context-sensitive commands like "start", which starts calibration or recording, and "next" and "previous", which we just used to cycle through our predefined mappings. The workflow itself looks like this: you start somewhere, most likely at mapping. You map something, go to calibration and calibrate your mappings, go to recording and record something, then you go back to mapping, map something else, calibrate it, and record again, until your animation is finalized.

Our whole system consists of three modules. One of them is the OSC client, which interacts with the Microsoft Kinect and sends the data via the OSC protocol to the receiver add-on. Like I just said, it talks to the Microsoft Kinect to get the skeleton data, as well as to the Microsoft Speech API, which recognizes the speech, and it sends both kinds of data over OSC. The OSC library we use is called Bespoke OSC, and the client is written in C#.

Hi, hello. Okay, so in Blender we have two add-ons. There's the receiver add-on, which receives the OSC data and basically stores it somewhere in the Python environment. We use pyOSC from Ryan Coiner, which we only slightly modified so it works with Python 3. It's written entirely in Python, as is the bloop add-on. The bloop add-on is the piece of software that actually manipulates the character and does all the other stuff, mapping and calibration, and it reads the data directly from the receiver add-on. It's basically just one big modal operator that runs on a timer and drives the main loop. And yeah, Python.
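To make those pieces a bit more concrete, here is a rough sketch in their spirit, not the actual bloop code: a couple of hard-coded, position-only mappings (joint names in the OSCeleton convention, bone names made up), a calibration step that snapshots the current pose, and a modal operator on a timer that polls the latest joint data and moves the mapped bones. The `bpy.bloop_latest_joints` dictionary is an assumption standing in for whatever the receiver add-on really exposes.

```python
import bpy

# Hard-coded, position-only mappings (the interactive mapping step is skipped here).
# Joint names follow the OSCeleton convention; bone names are invented for this sketch.
MAPPINGS = [
    {"joint": "r_hand", "bone": "hand_ik.R", "offset": (0.0, 0.0, 0.0)},
    {"joint": "head",   "bone": "head_ctrl", "offset": (0.0, 0.0, 0.0)},
]

def calibrate(context, joints):
    """Snapshot the current pose so later motion is applied relative to it."""
    for m in MAPPINGS:
        jx, jy, jz = joints.get(m["joint"], (0.0, 0.0, 0.0))
        bx, by, bz = context.object.pose.bones[m["bone"]].location
        m["offset"] = (bx - jx, by - jy, bz - jz)

class BloopMainLoop(bpy.types.Operator):
    """Main loop: a modal operator on a timer that polls joint data every tick."""
    bl_idname = "bloop.main_loop"
    bl_label = "Bloop Main Loop (sketch)"

    _timer = None

    def modal(self, context, event):
        if event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'CANCELLED'}
        if event.type == 'TIMER':
            # The receiver add-on would keep this dictionary up to date.
            joints = getattr(bpy, "bloop_latest_joints", {})
            for m in MAPPINGS:
                pos = joints.get(m["joint"])
                if pos is None:
                    continue
                ox, oy, oz = m["offset"]
                bone = context.object.pose.bones[m["bone"]]
                bone.location = (pos[0] + ox, pos[1] + oy, pos[2] + oz)
                # While recording, keyframes would also be inserted here,
                # e.g. bone.keyframe_insert("location").
        return {'PASS_THROUGH'}

    def execute(self, context):
        self._timer = context.window_manager.event_timer_add(1 / 30, window=context.window)
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

bpy.utils.register_class(BloopMainLoop)
```

In recording mode the real add-on additionally keyframes the driven channels on every tick, which is what makes the layered, loop-station-style recording possible.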
So yeah, good question: what's that good for? As we said in the beginning, it's meant to let novice users and other artists create 3D animations. For example for digital puppetry, which is pretty much what we just did: you map some arbitrary feature of yourself to some arbitrary feature of the virtual character, and you just move it, like taking hold of a puppet and moving it around. You can also call that performance animation. You could perhaps also use it for live performances, if it's not too loud. Another obvious one is animation prototyping: if you just want a quick impression of your scene, how it could look, what would look nice, what happens if I do this, you can act it out quickly and have your result in pretty much no time. And it's simply fun to experiment with motion capturing in Blender in general, with stuff like this.

So we encourage everyone who wants to try this thing out: extend it, play with it, use it for whatever you want. You can visit our project website at this URL, and you can actually download it from there. And maybe send us feedback. Not everything works as it should; for example, the interactive mapping system doesn't work yet, so we use predefined mappings that are hard-coded for now, but that's just a minor drawback. And I guess that's it; that was pretty quick. Thanks for listening, I hope it wasn't too confusing. Questions?

(Audience question about reducing the recorded animation data.) True, we haven't gotten into that yet. At some point we want to use data-reduction algorithms, but for now our main interest was the interaction.

You mean just creating a motion-capture library using the Kinect? Yeah, we're working on that right now. We did that using inverse kinematic handles; we pretty much put a handle on every joint of a human figure, and that worked pretty well. There's a demo video somewhere on the internet where you can see that, but it's still pretty early. We finished the day before yesterday. Right on time.

(Audience question about how the mappings are defined.) Yeah, it's fairly easy. We have an alphabet of joints, like "r_hand" or so; we actually use the OSCeleton naming convention, I don't know if you know that. There's also a file that comes with bloop containing example mappings, and it's really easy, you'll understand it when you see it (there's a small sketch of the idea a bit further down, too). It's just positions of the joints, because the Kinect doesn't provide us with rotational data at all.

Yeah, that's quite difficult, because the question is how to distinguish between commands for the system and motion you want to record; that's something you have to get around. We could also have used paddles or something, but we found it made more sense to actually separate those modal channels, because it's clearer for the user, and for the system it's simply easier for us. True, but we could extend our library of commands to pretty much whatever we want; this is just the initial set we created for today.

(Audience question about the required setup.) We used one computer. In general you should be able to use two, but we kind of forgot to add a feature where you can enter the IP address the OSC data should go to, so right now you need one computer. And you need a Kinect, the Microsoft Kinect SDK installed, and a Windows machine.
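Coming back to the mapping question for a moment: the joint data really is just named positions. As a hedged illustration, an OSCeleton-style "/joint" message carries the joint name, a user id and the x/y/z position, and the receiving side could unpack it with a callback roughly like the one below. The handler name and the `bpy.bloop_latest_joints` store are again assumptions made for this sketch; the actual bloop receiver is built on pyOSC and the data coming from our C# client.

```python
import bpy

def handle_joint(address, args):
    """Callback for one incoming OSC message; server setup and handler
    registration are left out. OSCeleton-style messages look like:
        /joint  <name:str> <user_id:int> <x:float> <y:float> <z:float>
    """
    if address != "/joint":
        return
    name, user_id, x, y, z = args
    joints = getattr(bpy, "bloop_latest_joints", None)
    if joints is None:
        joints = {}
        bpy.bloop_latest_joints = joints
    # The Kinect only gives us positions, so that is all we store.
    joints[name] = (x, y, z)
```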
(Follow-up question about operating system support.) That's the point. I don't know, we used Windows 7. I mean, no, it does not, because we use the Microsoft Kinect SDK, so you could say our requirements are those of the Microsoft Kinect SDK, if that's what you want to go by. Sorry about that. But you can still use OSCeleton if you want; I don't know whether that is Windows-only, but that's what we started with, and it works too. Then, however, you can't use speech commands at all.

Any other questions? Okay, so thanks for listening.