Thank you! Cheers! Thank you! Really, but do you have a PDF? Yeah. Oh, then maybe we can put it on my computer. Oh, it's working. Yeah, wait, I think... Oh, okay, it should be right here. All right, so let's go.

Hi, everybody. We are a group of Italian designers from Politecnico di Torino. We'd like to show our project, Blender for School, which arose from a specific point during the last two Blender Conference keynotes: to make a Blender 101 for kids. So we developed an open software for children's education. A first analysis of educational software shows that the big majority of these are focused on a single subject. Instead, thanks to Blender's tools and functions, we had the capacity to develop a software characterized by multi-disciplinarity, creativity and open-mindedness. We believe that Blender has great potential through its different features, such as modeling, simulating, exploring, setting time, counting and so on. And we have decided to translate them into a new interpretation, useful and advantageous for primary school, identifying six different thematic areas: representation, principles of physics, space, time, environment and numbers. Starting from these elements, we developed our project concept, which is divided into three principal steps. The first is the creation, where the child can create the element that will be the subject of his work. In the second step, the adaptation, he goes through the six areas of interest and makes the video. And the third one is the sharing, which is a step of openness and comparison. Now we show you a short movie that presents the functioning of this software. Starting with login, you can choose an element from the library or create it with simple geometric elements that you can scale, rotate, and bring to front and back. After the creation, you can give the object a name, for example "dog", and a category, "animal". Then you can share it.
At this moment, you are ready to start your project through the areas of interest, for example from space, where you can translate your object into 3D with extrusion and simple 3D shapes, for example a sphere. Then you can pass to another area: for example numbers, where you can count the elements; representation, with textures and colors; principles of physics, where you can compare some elements; environment; and time, for example using the seasons. When you finish your trip around the six areas of interest, you can create the final video. So if you like our project, you can contact us on Facebook or by e-mail. I hope you like it. Thanks.

Can you still wait a little bit? You have a video? Okay. Well, they're still connecting the laptop.

We took the liberty of improving on the Cycles wiki. If you've ever had trouble with a node or something, you probably went online and checked it on the Cycles wiki. And a lot of the time, there will be something like: the Is Shadow Ray output will be true if the ray is a shadow ray. So after the third time that happened to me, I thought, no, we've got to do this a little better. So I decided to actually write a book and try to write it in plain English so normal people would understand it. And I mean, there is some math stuff in there that's just, I don't know. I have studied science, so I'm not that much of a stranger to math, but still sometimes I just had to give up. So I tried to translate all that stuff into normal, regular English, and it started out as just the nodes. The original goal was just to describe the nodes. And so of course I have a chapter for each category, and it came up to about 77, no, actually exactly 77 nodes. And then I started to talk a little bit more to my friend about it, and he said, why not include some other stuff? So instead of the original roughly 80 pages on the nodes, we now also included some basic camera settings. We're comparing the Cycles camera to real-world cameras.
We're comparing how the depth of field works and telling you about those style elements that not everybody necessarily knows. And I included all the settings of the camera tab, which turned out to be quite a lot. The next thing I put in there was render performance, meaning tips on how to improve performance, how to deal with noise, how to get rid of the noise, and all that kind of stuff: branched path tracing versus regular path tracing. And the next thing I had was some miscellaneous settings. Those are just object settings like multiple importance sampling per object, and all that kind of stuff that some of you might have heard about but are not really sure what it does, so you just leave it at the default. So if you're interested in what it actually does, you can go ahead and check that out in the book. In the end, I even included a small chapter about how to use nodes with Python, meaning a few tips on how to address and link nodes using Python scripts or just single command lines.

Well, now for a short demonstration of how we thought people could learn Cycles really fast and understand everything about Cycles in depth. We did not only create a textual representation of what's happening when you're working with a node, but we also included a lot of images to tell you how things work. For example, here is an example for Volume Scatter. And you see that normally, when you're using Volume Scatter, a ray will enter the volume and bounce in a random direction. But, for example, if you set the anisotropy to a positive value, then the rays will bounce onward in the direction they entered the volume, or if you set it to a negative value, the rays will more or less bounce back. And okay, that's one part: we have a textual representation, we write in plain English what each setting of a node actually does. Then usually we also include drawings that show you how things are working internally.
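As an aside, the anisotropy behaviour described here corresponds to the Henyey-Greenstein phase function, which is what Cycles uses for the Volume Scatter node's Anisotropy value. A small plain-Python sketch (not from the book) shows how the sign of g shifts scattering forward or backward:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: probability density of a ray
    scattering at angle theta relative to its incoming direction.
    g > 0 favors forward scattering, g < 0 backward, g = 0 isotropic."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

# Compare forward (cos_theta = 1) vs backward (cos_theta = -1) scattering:
# for g = +0.8 the forward value dominates, for g = -0.8 it is reversed,
# and for g = 0 the density is the same in every direction.
for g in (-0.8, 0.0, 0.8):
    fwd = henyey_greenstein(1.0, g)
    back = henyey_greenstein(-1.0, g)
    print(f"g={g:+.1f}  forward={fwd:.3f}  backward={back:.3f}")
```

This matches the drawings in the book: positive anisotropy pushes rays onward through the volume, negative anisotropy bounces them back toward the light.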
And last but not least, we have created an immense number of preview renders. So if you want to know what a node is doing, you can always check out the preview renders, so you don't have to play with the settings yourself for a long time. For example, these preview renders: there were 25 in this case, all rendered with 1,500 samples. So if you want to do this yourself, you might have to wait a little bit. And here, for example, is also the preview render for the anisotropic effect in Cycles. It basically has a red lamp behind the object and a blue lamp in front of the object. And you see, if you set the anisotropy to a negative value, then it will scatter back, almost like the material is diffuse. Or if you set it to a very high positive value, then the rays will more or less just go through and only scatter a little bit. So this is basically how we designed each chapter.

And, let's see, if you want to go for another one, right? Okay. Yeah, we are still working on this, but we have a special offer for all of you who are attending the conference. Basically, you can get the book now, cheaply, and we will add a lot more stuff later on, because this is not finished yet; it's a work in progress. Just check out Blender Diplom. We created a shop page that is so ugly that we are sure only you will get there and buy the book. So this is a secret just for you, for the Blender Conference people. Thank you.

We can start talking. Okay. So I am currently working on the open-source project SuperTuxKart. It's basically an arcade racing game, very similar to Mario Kart. We are open source, and the characters are open-source mascots. Okay. And many complaints were about the quality of the graphics of the game. So we changed the rendering engine, and now we have our own engine. I will show you some screenshots and then a quick video. So, okay, here you have a preview of the new-generation engine. Everything is dynamic. There is no baking.
All shadows and lights are computed just in time when you render the image. Basically, it's done with Blender: Blender is used as the game editor and all assets are managed in Blender. Then we export to our custom format, and you can see here the result. The light of the sky and the environment affects the player and the objects in the scene. So you can see here the quality is vastly improved. Here is an example with a waterfall and particles on the GPU. And, also, like in reality, we have an example with a motorbike, and the idea was to be as close as possible to the physics. So if you put the specularity to the maximum value, you will obtain a mirror that reflects the environment. So here are some other screenshots. Okay. And now I have a video. Okay, so this is just a lap. And this is in real time on, let's say, a good computer. It's shown at 60 frames per second. And as you can see, everything is dynamic. There is no lag or problem with that. And as you can see in the video here, you can play with Suzanne. So if anybody is interested, I have the game on my computer and you can check it out.

Okay. So that's the list of the people coming up. Every time you see your name and you think you will be next, if you can already come down, that's fine. So next we have Baptiste Guesquiere. Guesquiere, welcome.

I'll use the 14 seconds of this countdown until I get nothing. And it's QWERTY. As you can see, I don't use a Mac or QWERTY. So I'm just going to start my talk while I struggle with logging into Facebook. Okay. I hope nobody saw my... Okay. Is this me? No, this is not me. Yeah, this is me. Okay. I'm just going to show you something I am working on. I am active in CoderDojo Belgium. Maybe some of you already know the initiative. This is maximizing? Okay, cool. CoderDojo is an initiative that started in Ireland to get kids into coding. We use Scratch from MIT. And it's a very huge success. In Belgium, we have 25 locations where, one weekend a month, kids from 7 to 18 come to code. They use Scratch. It's block-based programming. It's very nice. It's a free initiative, and there's pizza at 12:30. But we also give kids... How do I scroll? Okay. Oh, it's very modern. Wow. Cool. So as you can see, there's pizza and it's block-based programming.

And what I just want to show you: I do Blender for the 15-year-olds, because Scratch is a little kitten that you can make turn and go left and right. In a 3D world, when you're a 15-year-old, you want to shoot everything up. So how do I approach this? I'm not pro or against anything; I just use something that fits my needs, and Blender fits my needs. It's free, it's well developed, and I can do whatever I want. And what I teach these kids... I don't teach them press there, and press there, and press here. I teach them: what is a 3D model? What is a texture? How do you get textures on your model, and stuff like that? I teach them the basics. I always say: don't teach people how to work with Word, teach them how to work with a word processor. Don't teach people how to work with Excel, teach people how to work with spreadsheets.

So, if you're interested in CoderDojo, or if you want to be involved in some kind of educational program, this is a really cool initiative. We're taking this to the European level. A couple of weeks ago we had a CoderDojo session at the European Parliament, where members of the European Parliament actually got lessons from the CoderDojo kids, who taught them how to code. And it was very cool. We also have Lego Mindstorms, as you can see, the robots. We have Makey Makey, Arduino. So we're actually getting these kids into IT. As for myself, my name is Baptiste. I'm a bit of an artist, I'm more of a teacher. I sculpt, I draw, I do stuff. If you want to talk to me, I'm at the dinner. I speak like four different languages: Dutch, English, French, Gibberish.
And I was fluent in binary, but that was years ago. So thank you for listening. I hope you have a great time. And if you want to talk about education... You did log out? No? Okay. I'm going to check your mail after.

Okay, so while Janne is setting up, I'm going to give a little bit of background. My name is Julius Tuomisto. My colleague is Janne Karhu. We come from Finland; long-time Blender aficionados. We have a company called Delicode. We make a software called NI mate, which is a motion capture tool, which also has a plug... like an add-on for Blender. And this year, we've been focused on extending the support for different sensors. Unfortunately, we don't have the Kinect with us now, but we do have the Leap Motion. So we're going to showcase something with the Leap Motion, kind of giving you an overview of what would be possible with the Leap. Let's see if the screen comes up. Yes. Okay, so without further ado, I'm going to give the floor to Janne. Janne is going to show you a little bit of the type of motion tracking, or motion capture, that you can do in Blender in real time with the Leap.

Hello, I'm Janne Karhu. Some of you might know me from the Blender particle system, which I did some, I don't know, six years ago. Since then I haven't had any time for actual Blender development, but the plug-in for NI mate is pretty nice, too. So we actually have a custom version of NI mate. This is not yet released, but it's coming out hopefully soon, and it actually uses the Leap Motion device, which I have connected here. So what you basically can do is just start the NI mate plug-in and then start moving your hand. These are just the empties that you get from NI mate. Then, with basic constraints, you can... Let's see, where's my mouse? You can basically attach them to an armature. And from there, it's not a big leap to actually use it in a rig, to rig a hand. And the new tracking in the Leap SDK is pretty nice.
With the previous one, this is all you could do. But now it actually tracks pretty nicely, even if you turn your hand. So maybe just a quick takeover from Janne. And then Andy was actually asking us for one additional feature. Wait a minute. So Andy asked us for something: whether you could kind of take OSC and... Because the protocol that we're using is OSC, Open Sound Control; it's an open protocol. And Andy was asking if we could kind of use that to manipulate whatever in Blender. It's actually possible, and Janne just today did a quick hack that enables it. Yeah, let's see this... Hoping to avoid any other accidents. But let's see if we can actually get this to work.

So, basically, this is just an empty as well. It could be any object; I just chose it because it was there. It's named rightPalm. And let's see if we can actually... Where is it? If we can actually get the Python path for, for example, the scale's z-value. So it's reading right there: bpy.data.objects, and rightPalm, and the scale. Now, if we disable the actual skeleton tracking and go to the OSC control tab, I input that Python path in here. And oh, it's not the z-value, it's the x-value, since it's zero. Now if we enable the OSC controller, and it's set to the distance between two hands now, then if everything goes well, we should actually be controlling that. Let's see, the mic is in the way. You can actually control any value in Blender this way. Yeah, thanks. So we haven't released this yet, but it's going to come out in one or two months. Thanks.

Can I just ask two people, Damon, Lauren Baker and Jakub Batog, to come to me, because I didn't get you and I don't know what you're presenting? If you are here, of course. And the next one is Daniel.

Yeah, exactly, that's the one. It's going to be hard to come after such awesome presentations, but I'll try. Hi, Mom. Yeah, but if it takes too long, then... Okay, let's see if this works. Okay. Do I press play?
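The core of the OSC feature Janne demonstrated, resolving a typed Python data path like `bpy.data.objects['rightPalm'].scale.x` and writing incoming values to it, can be sketched outside Blender. In this stand-in, `Vec`, `Object` and `set_data_path` are hypothetical stand-ins for the real bpy objects and for whatever NI mate actually does internally:

```python
class Vec:
    """Stand-in for Blender's three-component scale/location vectors."""
    def __init__(self, x=1.0, y=1.0, z=1.0):
        self.x, self.y, self.z = x, y, z

class Object:
    """Stand-in for a Blender object with a name and a scale."""
    def __init__(self, name):
        self.name = name
        self.scale = Vec()

# Stand-in for bpy.data.objects; in Blender this would be the real collection.
objects = {"rightPalm": Object("rightPalm")}

def set_data_path(root, path, value):
    """Resolve a dotted data path like "objects['rightPalm'].scale.x"
    against `root` and assign `value` to the final attribute."""
    # Normalize ['name'] subscripts into plain dots, then walk the parts.
    parts = path.replace("']", "").replace("['", ".").split(".")
    *head, last = parts
    target = root
    for part in head:
        target = target[part] if isinstance(target, dict) else getattr(target, part)
    setattr(target, last, value)

# An incoming OSC value (e.g. the distance between two hands) drives the x scale:
set_data_path({"objects": objects}, "objects['rightPalm'].scale.x", 2.5)
print(objects["rightPalm"].scale.x)  # → 2.5
```

In Blender itself the root would simply be `bpy.data`, and each OSC message received would trigger one such assignment.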
Okay, so I'm going to talk about something a bit less fancy: pupillary light reflex visualization with Blender. Now that I say it, it sounds fancy. Okay, so everybody has eyes, right? Well, almost everybody. For school, I was asked to model the light reflex when you shine light into your eye. But first, a bit about me. I'm a student of technical medicine in Enschede. I've been here for a few days; it's really long for Dutch people to go by train for two and a half hours, I know. Three times. I've been a Blender superfan for about ten years; still a bit crap at Blender. And I'm from the Elysiun generation. Show of hands, who is from Elysiun? Awesome. Cool.

So the assignment that we got for school was simulating the effect of oculomotor nerve palsy on the anisocoria of the pupillary light reflex. Or: when you hit your head really hard, will that mess up your eyes? So this is the reality that we're trying to model. You see a lot of afferent and efferent neurons; basically, the neurons that go to your eye and control your muscles are what we're trying to model. So this is a model in Simulink. It looks a bit more complicated than this. Unfortunately, this is not yet possible in Blender. But fingers crossed. So this is the differential equation that we modeled in Simulink. We ran the simulation and we got the left upper corner: this is the pupil radius. And this I basically just exported into a CSV file and very uglily pasted into Python.

So let's go to Blender, if I can manage... Oh. Okay, so apologies for the very ugly interface layout; I don't know what happened. So this is my eye. This is my Python file. I'll scroll. So basically, this is the data that I imported, just pasted into it. And I scaled the data; I adjusted the time scale to the simulation. And I control two empties with the data. Is it Ctrl+Up to go full screen? Shift+Space? Awesome. Okay, thanks. So these are the eyes and basically... Oh, the Z is somewhere else.
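The export-and-scale step just described can be done a bit more cleanly than pasting the CSV into the script. This is a hedged sketch (the file contents, column layout, frame rate and scale factor are all assumptions) that converts Simulink's (time, radius) rows into (frame, scale) keyframe pairs:

```python
import csv, io

# Simulated Simulink export: time in seconds, pupil radius in mm.
# Inlined here so the sketch is self-contained; in practice you would
# open the exported .csv file instead.
CSV_DATA = """0.0,4.0
0.5,2.1
1.0,2.8
1.5,2.5
"""

def csv_to_keyframes(text, fps=24, radius_scale=0.1):
    """Turn (time, radius) rows into (frame, scale) keyframes.
    fps maps simulation seconds to scene frames; radius_scale maps
    millimetres to Blender units."""
    keys = []
    for t, r in csv.reader(io.StringIO(text)):
        keys.append((round(float(t) * fps), round(float(r) * radius_scale, 3)))
    return keys

keyframes = csv_to_keyframes(CSV_DATA)
print(keyframes)  # → [(0, 0.4), (12, 0.21), (24, 0.28), (36, 0.25)]
```

Inside Blender, each pair could then be applied to the controlling empty with something like `empty.scale = (s, s, s)` followed by `empty.keyframe_insert("scale", frame=f)` (illustrative, not the speaker's actual script).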
This empty controls the other empty, which controls the scaling of the pupil. Okay, that was all I wanted to show in Blender. No, wait, wait, wait. I have the image of the eye. Okay. So this is the final result. As you can see, the simulation is a bit too strong; it goes way too wide. But this is the result. And what's cool about this is that we can adjust the damping to make it look realistic. So basically, we use the capability of humans to see what's realistic to tune the damping. It's not very scientific, but it works. So I guess... Let me go to the slides one more time and then I'm done. Which one is it? This one, right? Yeah. Okay. Other uses would be augmented reality for surgical operations, Blender in medicine, and also training simulations, for instance laparoscopic simulations. These are things I'm very interested in and want to explore in the future. So thank you very much. And basically: if you're a noob like me, don't be afraid of Python. It's your friend.

The next one is Fabrizio. But, I mean, after... The cable. It's faster. Sorry. It's supposed to be faster. We'll get to this in a minute.

So I'm Richard Colburn, and I've been a Blender professional for about seven years and have made my income exclusively using Blender for about five years. So thank you, Blender. And I've actually done a whole lot of things that Blender was probably never really intended to be used for: aerodynamic simulation on rocket-powered motorcycles, bridge demolition simulations, artificial intelligence, visual displays. I've used just about every feature that Blender has for one thing or another. I'm with Gerber Scientific, and Gerber Scientific is a global leader in textile manufacturing. They have clients like Abercrombie & Fitch and Under Armour and Disney. So they make the machines that are used to manufacture the costumes that they have for the theme parks and things like that.
I wasn't really planning on speaking, so I'm not exactly prepared. But Ton suggested that I share with you what our company is doing, because it's relevant to Blender development. So right here you can see the avatar, and you see the garment near the avatar that is just about ready to be run by the cloth simulator. Gerber has the AccuMark software, which is a 2D CAD software that drives their great big CNC machines that cut fabric, and the software is also for designing garments. Currently, we've implemented a button in the AccuMark software; it just says "3D" on it. When you hit that button, it launches Blender, it loads the avatar with the animation files, it loads the garment positioned around the avatar, and then it actually starts this animation, which sews the garment onto the avatar. So that was all basically one button press in our software to make all of that happen.

When they asked me to design a rig for this avatar they wanted to use, they said it needs to be able to follow motion capture; it needs to be able to be posed and animated manually; and it needs to give realistic muscle, skin and body deformation in extreme poses: arms all the way over the head, bent all the way down, squatting, whatever. It also needs to be versatile, so we want to be able to hit a single button and put a new avatar on the rig. So all of those things had to apply to this rig. And I said, hey, no problem, it's Blender, you can do anything. There are about 350-plus bones in this rig, and it actually does all those things, with zero weight-paint corrections. So I'm using the CMU motion capture library that is available as an add-on in Blender: I just dropped in a marker cloud and I have my rig following it. So here you see, that was just a viewport render, and then, just for the fun of it, I did a Cycles render.
And the ultimate goal is to be able to include all of the materials, to make everything look amazing in the viewport, and then also have a mode where we can run it and render it in Cycles. So basically what happened is that Gerber decided the best way to develop their own 3D software, to go with their 2D software, was to use Blender. So I think the genius move was hiring me, because I knew Blender, not because they're geniuses. So ultimately we're going to be able to use the Blender interface for clothing design and then send that data back to AccuMark and actually drive these machines. We're beating the cloth simulator into submission to try to make it do some really weird things that it's not really made to do, like lapels and collars and belts and shoes and things like that. And we've already made some patches to Blender. We're running our own custom build, so we have multi-threaded cloth, which runs the cloth simulator two to three times faster than it normally runs. And my boss would probably kill me for saying this, but we're actually looking seriously into overhauling Blender's cloth simulator to make it CUDA-enabled, with support for AMD cards too, so that it will run basically in real time. We're also hoping to add a bunch of tools to it. We may start with a standalone physics engine and a cloth cache, but that's going to be impractical for a lot of things, so probably Blender's cloth simulator is going to get an overhaul.
Hello everybody, my name is Fabrizio Valpreta. I am an assistant professor at Politecnico di Torino, which is a very big university in the north-west of Italy, and I am a designer; I was an architect. At Politecnico di Torino we are surrounded by many engineers, especially information technology engineers, and this is very good news, because they are actually protecting us designers and architects from the very dangerous world around; they are very precious. So if there is any engineer here, be happy to be an engineer. We are working with them to organize, I am on the organizing committee of, an international conference: the 2015 edition of INTETAIN, a conference that will be held in Turin, almost there in that castle that is one of the places we work in. The topic of the conference will be interaction design for entertainment. We can imagine this topic much larger than what these two pictures are suggesting, so feel free to imagine anything that will be related to interaction design for entertainment. We work a lot with Arduino, which is very close to us. For example, what I would like to introduce into that conference is the open-source perspective within interaction design for entertainment, because I am very fond of the open-source perspective from the design and process point of view. So the question I am asking you is: if you happen to know an active and very good open-source community around here, full of smart, creative people involved in such kinds of stuff, interested in submitting papers or giving workshops, please do not hesitate to contact me. We are very interested in injecting the open-source approach into an otherwise maybe boring conference. Thank you very much.

Can I just ask Jason van Gumster to come back down? You can be next just now, because you don't talk and we change computers.

Just a comment: I hear a lot here in this conference "we're open source", and I just want
to make a comment. If you say open source, do you mean open source but also a free license, or just open source? Just think about it; just read about that at the Free Software Foundation. And I recommend saying free software, because most of the time when you say open source you mean free software, and they are somewhat different things.

I will now present a small prototype of a game that I'm working on with a friend. Yes, if I touch a Mac I will break it. So, okay, it's a text adventure made in the game engine. The setup is: you are a scientist on Europa, Jupiter's moon, and you are in a facility on the surface of Europa, the ice moon. Your two colleagues are lost, and you have to control a rover on the surface to find them. The basic mechanic, the way you interact with the game, is via text commands. You have a file system; working with files, you can save, you can edit text, you can take pictures and so on. And you control the rover via these commands too, which also adds some fun. It's based on an adaptation of Lovecraft's At the Mountains of Madness, so it talks about Cthulhu, the Mythos and so on, and it's a tragedy, actually. Here you can see just the basic console, which was painfully and poorly built in the Blender game engine, because we don't have the knowledge. We did this with Python, but every line is actually a text object, so it's not very efficient. So you are this Meadows, you are the scientist, and any time something happens in the game, the game prompts you to write a log. When you write a log, as the character, you write the story, and this way you consume the story. Let me just forward a bit. How do you forward? Yeah, Shift+Right. Just... not forward, a bit. Okay, yeah, let me collect. So you see, here you control the rover with go, left, right, stop. So you go, stop, left, stop. And at some point of the game... this is your lab, okay. I will just spoil the story, because it's very short. You go along this road to find your partners, and you find your partners slaughtered by some entity, and
then in that moment, that precise moment, you hear metal cracking sounds in your hull, in your ship or in your lab, and then the air meter kicks in: you hear the air leaking and the air meter going down. So you know you are going to die anyway, from the leak or from this alien or whatever. And then the only hope you have is to take pictures of the environment very quickly. That's why it's cool, because it's very pressing, and it's not as easy as a joystick or whatever. Then you have to collect pictures, collect logs, collect analyses, pack them in a zip file and send them to Earth, to warn them not to come to Europa, or something like this. Yeah, this is how you find something there. I don't know, we worked on this for a class, for a month or a month and some weeks. I'll just pause the game here. If you have interest in games or in this story, or you just want to collaborate with us, please just give me a poke. Thank you.

So can I ask P3D to come get ready? And I've got a talk.

Hi, I'm Jason van Gumster. A few months ago, I started a podcast called the Open Source Creative podcast. And yeah, if you want to listen to it, it's monsterjavaguns.com slash podcast. But really that's not why I'm talking here; mostly it's because I want to interview all of you for my podcast, because everybody comes to Blender for a completely different reason, and I think it's worth it to capture that. I've gotten a few of you, but if I haven't gotten you yet, tackle me very gently. All I want to know is who you are, where you're from, why you use Blender, and maybe you can use that as an opportunity to promote yourself too.

You can stop, maybe, to talk. Then Simon and Rev, come down. Can you be here?

Yeah, okay, so I jumped in. I just want to drive home two points, basically. First, I'm Simon, I'm from Austria, Vienna. I'm one of two people from Austria here at the conference, I think, maybe more. So basically the first point: because I've been asked if I'm a photographer,
so I was running around with a zoom lens, and that equals photographer. Probably I'm not; I'm just making photos, because I write for the Austrian CG mag, this one here. It's basically the Austrian CG community at large. We organize monthly meetings where we talk about the articles we write. We're also part of the interest group for computer graphics in Austria; it's like the umbrella organization that holds together the whole computer graphics industry in Austria. There's also the Pixel conference, up there in the header; that's in November. Basically I'm here to write an article about the conference, that's one part of why I'm here. If you want to get in contact with the CG community, talk to me over this channel. Also because of the photos: I'm going to publish those photos on Monday. I will upload all the raws, about 16 gigabytes, and I will drop the link into the Twitter stream, so #bcon14, keep an eye open for that if you need it. I'll release them as CC0, so you can do with them whatever you want.

And the second point I want to drive home: I'm also organizing the Viennese Blender community, which is called Viennese Blend; nothing connected with coffee or anything. Basically we run monthly meetings, or I run them, rather. If you happen to come by Austria, or want to connect to the Viennese Blend community, you can hit me up later. And one more thing: I just spent like one week before this conference getting a new site up, because I noticed a lot of user group sites were really bad, and it's also the classic problem that no one wants to make the effort; making websites is ultra boring. But I figured I should just take the effort and make the best possible Blender meetup site I can. It's a really super state-of-the-art web technology stack, and if you want to use it, you can. Just talk to me and I will explain it to you and give you the code and whatever; I will release it after the conference.

Okay, is it working? HDMI? Can we put it on my computer? You'll have to be a mystery for a little while
longer. Is there something we can do to try? I mean, I don't know why it's not working. Is there a Linux expert here in the room? I have no idea if we can solve this issue, but the announcement is still cool, and maybe we will show it later. Basically, we are part of the P3D team, the site where you can upload 3D stuff to the web. But for our future development ideas, we found that the 3D engine that we are using is limited. Oh, it didn't work. Oh, the internet is not very good. Yeah, so we found that the 3D engine we are using is very limited in certain aspects. You can go, I guess, in polygon counts, to about a million polys or a bit more, but for sculptors that's nothing. So that's one problem. And then there is portability: our engine runs on the web, which is great, but it cannot be used on mobile, and if any of you have tried WebGL directly in a mobile browser, it is not very good. So this guy came up with a great solution.

So yeah, I'm Pelle, I'm from Denmark, also part of the P3D team. Basically what we wanted to show, but the internet wouldn't let us, is that we are working on a new version of the viewer, and it's kind of in an alpha stage. One of the exciting things we are adding is direct support for the engine running in the browser to read .blend files, so you can just save a .blend and it will pick up the changes directly. And as a matter of principle, we are moving to a more cross-platform technology: we are now writing the engine in C++, which means we can run it in the browser. And the most exciting part: we are releasing this as open source under the Apache 2.0 license, and it's already available on GitHub. We are hoping this will make it possible to integrate it into other usages; for instance, it could be interesting for the Blender Cloud. If you're interested in seeing it in action and we don't get it working here, please contact one of us later; we'll be happy to show it. Have a nice evening, all of you.

Okay, so I had one person who asked not to make a presentation, but he made a
So I'm just going to play it; that's his lightning talk. Also as part of the lightning talks: is Damian, or Jakub, still here? No talking? Alright, so, next one; you're sure you can still come after this? So maybe now Ines, directly from the... thing. More full screen? Presentation.

So, hi, I'm Ines Almeida, and I'm here to present some of the work I did for my thesis, which is to translate from Portuguese to Portuguese Sign Language. The first is in text, and the other is a visual, spatial language, and I chose to represent it with an avatar using Blender. I designed this system in which the first part is about machine translation, how to translate the concepts and the structure of the sentence, and then there is the more interesting part in Blender, where you have to actually create the animation, blend it together and display it. It was a goal to make this an open implementation, so that others can build upon it as a research project.

I used Blender; this is my interface. I built it as an add-on, so the system is all coded in Python. For the natural language translation part I'm using a panel for the interface, and the animation runs directly in the viewport. I used Rigify for this, plus my own system of spatial marks, because sign language is a very complex language in which there is spatial agreement of verbs; for instance, "I can work" now, or in the future, or in the past. So there is the need to define all these spatial keys around the character, and on the character itself, for it to touch the nose or the ear. And this becomes mesh dependent: how does the character know where its nose is? So I defined a set of spots on the mesh. Then I used some base configurations for the hands, and I can now just say: do this configuration, this action, and put it in this spot. And that gives me avatar variability: one Blender avatar there is like a regular human, but this one has a very big head, and it still supports all this thanks to the spots definition.

And finally there is the actual building of the utterance of sign language. I used the NLA editor for this, all via Python, to blend each gesture into the next one, and into the rest pose if the channels of the animation are not used. Then I also needed to adjust keyframes directly, to adjust the eases in and eases out for fingerspelling and all that. So this is a quick video. Can we get out of here, please? I tried that. No? So this is again my system; I just input a sentence there which means "John eats soup", and this is "soup", and this is the fingerspelling of "John", and then it takes it.

Well, hi everybody, my name is Sybren, I'm a PhD candidate at Utrecht University, and my subject, well, for this talk my subject is the perception of collisions between virtual characters. In my PhD study itself I'm looking at the animation of virtual characters in dense crowds, and by dense I mean really dense, more or less like you're sitting there, you're in a busy bar, in a filled-up bus. There the simulation cannot avoid it, it's always colliding, so I have to do efficient collision detection and then handle that. So we're looking at crowds like this, or even denser.

To do this collision detection we created a system with bounding cylinder hierarchies. Rather than representing a single character with one cylinder, which is common in crowd simulation, we use a hierarchy. This is a top view, and from that we would create a hierarchy like this, which consists of increasingly smaller cylinders. It's fast, but it's an approximation. I'm not going to tell you all the details; this is just a pretty picture I made in Blender. The question now for us, there you have the picture again, the question now for us is: how precise do we need to be? What can you actually see when you're observing virtual characters on your computer screen? If you can't see a collision anyway, we don't need to detect it. So we started by investigating using static images, and we had various parameters describing each situation.
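The core test a bounding-cylinder hierarchy performs can be sketched roughly as follows. This is an illustrative toy, not the authors' actual implementation: two upright (z-aligned) cylinders intersect exactly when their vertical intervals overlap and the horizontal distance between their centers is less than the sum of their radii, and the hierarchy only descends into finer cylinders where the coarse ones already overlap.

```python
from dataclasses import dataclass
import math

@dataclass
class Cylinder:
    # Upright bounding cylinder: horizontal center, radius, vertical extent.
    cx: float
    cy: float
    radius: float
    z_min: float
    z_max: float

def cylinders_collide(a: Cylinder, b: Cylinder) -> bool:
    """Z-aligned cylinders intersect iff z-intervals overlap and the
    horizontal center distance is below the sum of the radii."""
    if a.z_max < b.z_min or b.z_max < a.z_min:
        return False
    return math.hypot(a.cx - b.cx, a.cy - b.cy) < a.radius + b.radius

def hierarchies_collide(node_a: dict, node_b: dict) -> bool:
    """Recursive test on two cylinder hierarchies.
    A node is {'cyl': Cylinder, 'children': [...]}; leaves have no children.
    Only descend where the coarse cylinders already overlap."""
    if not cylinders_collide(node_a['cyl'], node_b['cyl']):
        return False
    if not node_a['children'] and not node_b['children']:
        return True  # two overlapping leaf cylinders: report a collision
    for ca in (node_a['children'] or [node_a]):
        for cb in (node_b['children'] or [node_b]):
            if hierarchies_collide(ca, cb):
                return True
    return False
```

The early rejection at the coarse level is what makes the hierarchy fast; the approximation error is bounded by how tightly the leaf cylinders fit the character.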
We created about a thousand of these images at random, all of that in Python, all of them automatically generated and rendered. And then the paper got rejected, because we were doing things with static images, drawing conclusions, and then applying them to moving characters, and the reviewers said: well, you can't do that. So now we're repeating that same user study, but with moving video. This is just an example video from our user study. And to give you a demonstration, this is what we can also do: in this case the characters are walking right through each other, and by using a Boolean modifier we could actually visualize all this. We also use it to calculate the actual volume of the intersection. So my question to you is: please participate in our user study. There are 32 videos of 2.5 seconds each, so it should take about 5 or 10 minutes. Please do so, because it's all about what you can and cannot perceive, and on a small mobile screen you don't see anything. These are my cats; they also think that you should join in. That's it. Sorry, I have a cold, I have to blow my nose.

Full screen? Perfect. So, I've been working 1.5 years on an architectural visualization; it's about this hotel village. It took 1.5 years because I started already in an earlier phase of the project, when it wasn't clear how exactly they wanted the hotel village, and they needed a rough model at the very beginning, before even an architect was on the project; that's why it took so extremely long. And this is the finished animation they already used for seeking investors and for getting a license to build it. Render time for one frame is, by the way, 1.5 hours, because I made the mistake of using transparent planes for the grass; at that time particles were still not implemented. By the way, fun fact: this is where the skiers come down the hill. It is a bridge for the skiers, and it was built roughly as I had drawn it in Blender, because they considered the Bézier shape to be good. So Blender is everywhere, kind of.

And just quickly, one technical thing to show you in Blender: the background. You're probably wondering how it's done, because I think it's quite photorealistic. Let me just open it quickly in Blender. This is a terrain model, ask me later how we got it, and we made aerial photographs, which look like this one for example, and just projection-mapped them onto the terrain. A really straightforward workflow, and in the end it really looks like this. It just has the disadvantage that you can only view it from one point; as you move away, it starts looking like these strange streaks here, because it's projected from a single point. That's why you can't really use it for games or anything like that. And it gets misty back there. But yeah, that's the way we do it. On the left, on the right... yeah, thanks.

Hi everybody, I'm Valerio, I'm Italian, I work for a 3D printing company in Turin, and this is my little project. My customer asked me to start from some concepts, some beautiful concepts like this, a perfect concept; I can slide, press space, and then... okay: five concepts, characters from Marvel's Avengers. The most difficult part of this project was modeling these NURBS surfaces; I'm not a NURBS modeler, so I made them with a plugin. The plugin works very much like polygonal subdivision modeling, so I could approach it the way I would in subdivision software like Blender. I can open one of these and click, maybe use the mouse... mouse, thank you. This is the result: the plugin converts my vertices into control points of the NURBS, and it behaves like a subdivision surface. The difficult part of modeling this shape was avoiding undercuts and very small details. Okay, stop. Time? The last one, the last one is beautiful. This is simple subdivision modeling in Blender; I made a little rendering setup to control the workflow. Okay, sorry, this is a little
USB key, and in the end, I open... space, space, open, okay: this is my character, printed with a Z Corp printer. Don't slide, space, or something magic... okay, that's all, thank you.

Just Miroslav. Do you know how to use a Mac? No. Can I play them like this, one after the other? I can try to make them play.

Hi guys, my name is Miro, and I would like to show you a project I've been working on since Daniel Stokes committed a really essential feature for the game engine, which is the LOD system. Here you can see a terrain made for a game I worked on a few years ago. The point of... actually, I forgot to mention, the project is called Blenderer; I'm going to announce it on Blender Artists as well. The point of the project is to find a way to have huge, open-world terrains in the Blender game engine. Another point is to maybe catch the attention of, say, students doing Google Summer of Code, to fix some bugs related to LODs or LOD loading, the streaming feature, and also of anyone who'd like to do open worlds, or who has already started working on a game and needs a huge terrain. Basically this is grabbed from the Blender game engine: the terrain is 13 by 13 kilometers, there are over 4 million vertices, or polygons, being rendered, and the satellite imagery is about 30,000 by 30,000 pixels. I actually gave a talk two years ago here about Blender terrain tools; I'm still developing them, and part of the tool set should be, or already is, a tool which can grab the data and process it so it can be viewed in the Blender game engine. And I guess that's it, thank you.

So that would be John now, John Kirvel. Just a question: Zachary McIntyre, do you need a computer?

Okay, everybody, I'm here a little bit on a whim; I thought I would just quickly show this. About a year and a half ago I got asked to do a shot on a film, directed by Albert Pyun, which should be coming out, I think, fairly soon, and I got asked to do a very short 10-second clip for it. Now, I hadn't really planned on presenting, so I haven't got a whole lot to show you apart from the very short 10 seconds, so I'll just quickly show you that. This is just the only thing I had to go on. Basically I had three weeks to do it, and it had to be rendered in Blender Internal, so pretty much everything is the output straight out of the render engine; it still had to go to the compositor afterwards, so there were lots of passes. Essentially the city was split into three sections, a back section, a middle section, and a sort of front section, and there were lots of passes put out for each of them. Obviously I haven't seen the final result yet, because the movie isn't out; there would be heat haze and such on those engines. I'm sure you'll get to see it very soon. Lightning talk!

So, yeah, you have your stuff; you know how to use a mic? Nope, but I tried. You've seen that.

Hi, my name is Zacharias Reinhardt, from Germany. (You're a tall guy. Sorry.) Together with my brother I have a film and training company, the Brothers; we do Blender training, and we always wanted to make a big movie, and we didn't want to wait several years, so we just started this year. The movie is called Repto 108; let's get to the browser. It's a 3D animated movie, mainly using Blender for the 3D work. Basically, our goals, you see them down here: the movie will be freely available a year after it is finished, we mainly use Blender, it will be a 3D animated short movie, and the production is quite open. If we scroll down here you'll see quite a lot of blog posts; we document our production, and we work together with an international team. I'm from Germany, and this is the first project where I work with an international team. I'm showing you this project because you can support the production. First, visit the website, and if you click on "about" and "status" you'll see what kind of movie it is; under status you'll see what the production phases are
and when they will roughly be done. We are in pre-production right now; we are working on the screenplay, and we have had a concept artist hired for one month now, and he has done a lot of great work, which I can show you shortly. Okay, here, just some random images; I won't talk too much about them. This is our main character. It may look like an action movie, but it is not; it will be science fiction, a drama, a deep story. On the website you can read, for each of these images, what it means and so on. So, how can I open this? I should open it again; maybe I put it down here. Okay: you can read a little bit about the story, and we will release more about it. And you can become a sponsor, if you like the project and want to support it. We have some sponsor slots you can buy, and of course you get rewards for it; you can see them right here, and up here we have a description of all the rewards you will get. You can see 30 people have already sponsored, about 1600 euros, which is quite a lot, because I think this project is not very well known right now. So if you like it, check out the website, and if you want, support us.

Next one is Zachary. Hi everyone, I'm Zach, I'm a film and VFX enthusiast, and I've been messing around a bit with integrating live action and CG. If anyone's seen Dynamo by Ian Hubert and Scott Hampton, you'll probably know where I'm coming from with this; I blatantly ripped it off. Well, it's true. Where's my Z key? Right here we are. That's a bit oppressive; yeah, I didn't do the sound in Blender. This was all just done in the shed: I climbed up and hung a green screen over the wall, and the rest was all done in Blender. It's all super basic. On the ship I did a fair bit of work; I unwrapped it and textured it, in a guise of professionalism. The rest of it, like the city, is just a BlendSwap asset that I took, then duplicated all the lights, and so on. Compositing is all done in After Effects, because I'm not that good in Blender yet. And stuff blows up, of course. I've had some complaints about the hat, actually: it doesn't fly off, but my excuse is that there's a low-pressure area just behind the windshield. Or space glue. This is my completely accurate engineering depiction: you just sort of plug bits in and it works again, and it's all great. These are the shots I put a bit more effort into than just the static ones; I edited a couple of these slightly cooler shots into the start to trick you into thinking the whole thing was going to be of this quality. So this is a very basic test of just getting the green screen and CG working together. Yeah, thank you.

Okay, hello, I'm Mathieu Dupont de Dinsha, I'm an architect, and I founded a FabLab too, and I will show you the latest project we did, thanks to BlenderCAM, which had a presentation earlier. So where is this... do you have OpenOffice? Okay. It is a truly interactive map made with Blender; well, the interactivity for the moment is not that good. We made it for a non-profit organization that teaches children, and they wanted better pedagogical material. So I proposed a 3D map onto which we can project video, and they wanted to project water too, because they teach about water: where does the water go if it falls on one side of the mountain or the other? In French we call it "bassin versant"; I don't know the word in English, "watershed" perhaps. In France, half of the country drains to the Atlantic, and another third to the Mediterranean. Let me show you the workflow. We used a 3D point cloud from the National Geographic Institute of France, which is now free; it's not really an open source license, but you can use it. We imported it into Blender and converted it to a mesh with the Point Cloud Skinner, which works very well, plus a little bit of cleaning. Let's see the picture: this is the point cloud, millions of points; after
the Point Cloud Skinner you get the 3D mesh, and really, I had maybe 20 faces to clean, so it worked very well on this data. This is BlenderCAM, and in BlenderCAM it calculates the CNC router toolpaths, in two passes: one rough pass with a big bit, because otherwise it takes very long, and one with a small bit. I have the blend file, but I think we have no time; I have a video of this, which I will show you later. So, this is the result. We made two of them; each is 80 cm by 80 cm. It's the actual mountains, with a true z axis, so that you can feel them. And the next step is that we project anything we want onto it, and since the projector gives us a white screen, we have the left and right parts where we can project information. I even used Blender to model a 3D-printed scale figure, because scale is not easy to understand; and it works with children too. The aim is that they walk around and interact with the map, but still feel the 3D of it. I will show you the video, if I manage to use this strange computer. I think it should be this one. So, this is the finishing pass; on the right side you see the result of the rough pass, and this is the finishing pass. We used a Shapeoko, which is an open-source CNC milling router, and you can see all the small details, and then it comes out; I think the sound is not very interesting. This is the map; you see the difference between the rough and the finished one. And let's jump, because everybody wants to eat, to, I think, this one. It's not easy at all to film this thing, because of the light of the projector; and this was a small advertisement for my FabLab. So this is the result with the projection on it. The trickiest part was cutting the map images to exactly the size of the 3D model; I didn't find an automatic workflow for that, so it was done by hand, and it was very ugly. You can project a real aerial view too. For the moment the interactivity is just a very ugly click on things.
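The "which way does the water flow" lesson the map teaches can be sketched as steepest descent on a height grid: a drop repeatedly moves to its lowest neighbour until it pools or leaves the terrain. This is an illustrative toy, not anything from the actual installation.

```python
def trace_drop(heights, start):
    """Steepest-descent routing of a water drop on a height grid.
    From each cell, move to the lowest of the 8 neighbours; stop when no
    neighbour is lower (a local minimum, where water pools).
    Returns the resting cell as (row, col)."""
    rows, cols = len(heights), len(heights[0])
    r, c = start
    while True:
        best = (r, c)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and heights[nr][nc] < heights[best[0]][best[1]]):
                    best = (nr, nc)
        if best == (r, c):
            return (r, c)
        r, c = best
```

On a ridge-shaped grid, drops started on either side of the crest end up in different valleys, which is exactly the bassin versant idea: the watershed divide is the set of cells where this routing flips from one side to the other.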
The next step: we will have a cheap version, for 15 euros, with an Arduino and light sensors under the map; you put your hand over it, it casts a shadow, and the light sensor feels it. And we will make a more expensive one with a Kinect sensor; if somebody here can help me with NI mate, please do, because it's not ready yet. So, thank you very much. You see here the projection onto one part: you get the picture of that part, and things like that, and the children can interact with the map. And if they take a glass of water and pour it on the map, they see where the water flows. It's fun, but very messy, so I don't have a movie of that, because I didn't want them to spoil everything around. That's it, thank you very much.

Campbell, can you come and talk? You're here. This is impromptu. A lot of people are using add-ons for crazy stuff; I didn't think they were. The Python API is maintained by, I think, maybe a quarter of a person: it's me and Bastien, and maybe a few other people, but really not many people work on the Python API. So we can't promise we can add all sorts of crazy stuff that you want, but I've noticed that when I do add things, and change them, and break them, people complain. So people do use the stuff we add to the Python API. If you are doing something interesting, like a lot of the projects here, and you have problems, or you notice things that need to be added, I can't promise that we will, but let us know on the bf-python mailing list; maybe pain points, things that you've noticed. Just while I've been here talking to people, I've noticed things that are slow, things that I need to fix when I get back. So I'd really like to help out, make people in the Python community feel supported, and make sure that really crazy, cool add-ons like you guys have been doing stay possible.

Yeah, hello. First, hello, my name is Tristan Salzmann, I'm from Germany, and I'm a freelancer. I mainly use Blender for my work, but also
some other software, like Maya. I tried out a little Maya and a little Blender, I compared them, and I found something I missed in Blender that's available in Maya, and that's the muscle system. Perhaps you are familiar with it: it's the option to deform your character with meshes, with muscles, because, as you can imagine, your body does not only work with bones; it is also deformed by muscles. For this I made a little add-on, Muscle Tools; it's available right now on the Blender Market, if I get this right here... wait a minute... yeah, okay, here you can see it.

I want to show you a quick demonstration. Here you have just some kind of mesh I created, for example this muscle here, and the add-on allows you to turn this mesh into a muscle. All you have to do is set the group name, and you are fine. Then you rename this muscle; let's call this one "muscle". As I said, this is just a mesh, so you can use anything for this. Then you can convert it into a muscle, and as you can see, it works: it deforms just like a muscle automatically would. But that is not everything. After you've converted the muscle, you can choose some options here, for example the resolution; you can adjust the render resolution as well, and I'm pretty sure you know what that means. Then the base length: this is a static length, or a custom length, that defines where the muscle starts, as you can see. Then also a volume setting: how big this muscle gets when it's stretched. Then some very nice jiggle options: if I press Alt-A and play this back, you can see that the muscle jiggles; you can do some fancy stuff like this. But the most important thing is that you can bind this muscle to your skin, or to some kind of skin. I prepared a little example here, so let's just select both of these; perhaps you noticed that a vertex group was created.
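A common way a "volume" option like the one just described works is constant-volume bulging: if a muscle keeps its volume while its length changes, its cross-section radius scales by the square root of the length ratio. The sketch below is my own generic illustration of that idea, not Muscle Tools' actual code; the `volume_factor` parameter is a hypothetical blend control.

```python
import math

def muscle_bulge(rest_length: float, current_length: float,
                 volume_factor: float = 1.0) -> float:
    """Cross-section scale for a stretched or compressed muscle.
    With volume_factor=1 the muscle keeps constant volume: cross-section
    area ~ 1/length, so the radius scale is sqrt(rest/current).
    volume_factor=0 disables bulging entirely (scale stays 1)."""
    if current_length <= 0 or rest_length <= 0:
        raise ValueError("lengths must be positive")
    full_bulge = math.sqrt(rest_length / current_length)
    # Linearly blend between "no bulge" (1.0) and full volume preservation.
    return 1.0 + volume_factor * (full_bulge - 1.0)
```

So a muscle compressed to a quarter of its rest length doubles its radius, and one stretched to four times its rest length halves it, which matches the intuition of a flexing biceps.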
Why isn't this full screen? Here you can see that a group was created which is named exactly as you called the muscle; in this case it's "muscle". Then you can fit this muscle here, for example... wait a minute, let's see what's wrong. Selection is with the left mouse button... no, it isn't. What's wrong here? Something weird is happening. Perhaps you can fit this screen completely... but this is stretched or something; I think it's the resolution. You mean full screen in Blender? Okay, do what you want to do, because we cannot change the resolution. Yes, that's fine, sorry. No problem; I'll just show you the video, that's much easier. Okay, wait a second. So, yeah, this add-on makes some things easier. I think in the past you nearly had it... there is no footage, okay. All the steps are explained... I won't announce anything like "no video" or nothing, but just: come and see me. Thank you. And now, the Suzanne Awards.