Let's see if it works, the online thing. Okay, I'll just say hello to the presentation. And yeah, so whenever they take the microphone, the timer starts. Did it say four minutes? Four twenty, yeah. So we give four twenty, and then they need to switch. Yeah, that's fine. Okay, so no time left on the wire. Yeah, okay. I told them we should; I mean, some people will not come if they cannot.

So hello, good evening everyone, and welcome to another session of the lightning talks. I will just be organizing things, so I won't talk a lot, because we are actually kind of overbooked, like on your airplanes, and I'm not sure everyone can fit in. So we'll try to be as quick as possible. So if you have a lightning talk and you feel you've made your point and you are happy with it, then that's fine also. So thank you. We have the first speaker, from BlenderKit.

Good evening, everybody, all the amazing Blender artists and developers. I am here to present something we've been working on for many months with my friends, and it's called BlenderKit. Some would call it an add-on, but we call it a platform for creators, for Blender users, and also for developers. On the first side, what we are doing is an asset database for Blender. Okay, that's what's written here, and I'll scroll down this webpage to show you how the tool we developed intends to improve the Blender workflow. What is happening now in the video is that I'm sculpting a rock, but you can see the brushes at the top in an asset bar, or materials now, and these are actually online. By clicking on them, they start downloading, so there is some time I have to wait. This one is faster, so it doesn't show it, but actually I don't have to wait. It's not blocking, so while you are downloading, you can continue your work. Now with the models, you can see thumbnails popping up.
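The non-blocking download described here is a standard pattern: push the network fetch onto a background thread and let the UI poll a queue for finished results. A minimal sketch of that pattern, not BlenderKit's actual code; `fetch_asset` is a stand-in for the real network call:

```python
import queue
import threading

def fetch_asset(name):
    # Stand-in for the real network download; the add-on's actual
    # client code is not shown in the talk.
    return f"data for {name}"

def download_in_background(names):
    """Start each download on a worker thread and return a result queue.

    The caller (e.g. a modal operator polling on a timer) stays
    responsive and drains the queue whenever results arrive.
    """
    results = queue.Queue()

    def worker(name):
        results.put((name, fetch_asset(name)))

    for name in names:
        threading.Thread(target=worker, args=(name,), daemon=True).start()
    return results

# Usage: kick off downloads, keep working, collect results later.
q = download_in_background(["rock_brush", "moss_material"])
finished = dict(q.get() for _ in range(2))
```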
It's also a bit fast, but if you happen to be on a slow internet connection, you can enjoy watching the thumbnails longer; you can start working on your composition while things are still downloading. Okay, so that's... Yeah, thank you. I'm sorry. Okay, maybe you've seen a donut tutorial someday in the past, so now you can do it a bit faster.

Now I would like to tell you more about how accessible it will be, because we want to provide as much as we can for free, which means that right now everything is free. It's online, you can download the add-on, you can use it. And we want to have the material database and the brush database, and possibly other types of assets, free for everybody, forever, let's hope. For the models, there is a subscription-based model that we developed. We believe it's quite new. Okay. Sorry, I don't want to show pricing or anything. It's just that we started the company basically to generate cash flow, not primarily for us, but for Blender development. So even though there is quite a complex calculation of how we share the revenue with creators, it means that... Oh, sorry. Basically, from every transaction that happens there, 15% of the revenue goes to Blender development, 15% is for us, and the rest is for the creators. The idea behind this is really to generate new cash flow for development. We are currently also looking for creators who want to fill the database, and there are several options for them. For example, if you'd like to just support Blender, there's the option to upload models where 85% of what those models make goes to Blender development; otherwise it's a contract-based relationship. Yeah, that's it. It's ready. You can use it. Thank you.

So Sketchfab, you can come up. And after that, we'll have Golem Network. So Dan, if you are here, you can slowly come down. Thanks. Cool.
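The revenue split BlenderKit describes is simple arithmetic; a sketch, where the percentages come from the talk and the function name is mine:

```python
def split_revenue(amount, to_blender=0.15, to_platform=0.15):
    """Split one transaction the way the talk describes: 15% to
    Blender development, 15% to the platform, the rest to the creator."""
    blender = amount * to_blender
    platform = amount * to_platform
    creator = amount - blender - platform
    return blender, platform, creator

# A 100-unit sale under the normal split: 15 / 15 / 70.
blender_cut, platform_cut, creator_cut = split_revenue(100)

# The "support Blender" option: the creator's share is donated too,
# so Blender development receives 85% in total.
donated_total = blender_cut + creator_cut
```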
So hi, everybody. I'm from Sketchfab, and today we'll talk about the new add-on we made for Blender. Yeah. Oh, yes. Thank you. So, to present Sketchfab: it's the small flyer you had in the bag when you came here. It's a platform to share, publish, and find 3D content on the web. Today we have more than 3 million models on the platform, with 2 million users, and 170K of those models are available as a free download under a Creative Commons license. In addition to their original file formats, they are available in glTF. So the idea of this add-on is to make those models available to Blender users.

To sum up, the plugin lets you browse and import free content from Sketchfab into Blender using glTF. It uses the search API and the download API to do that. It's based on the glTF Blender add-on developed by the Khronos Group to read the glTF, and it supports PBR materials and all types of animations, as mentioned here. And the good thing is that we recently made an update to make it work with Blender 2.80 and the Eevee renderer.

So, just to present the interface quickly: first you have the part for the login, because you need to be authenticated to download content from Sketchfab. Then you have the search bar and some filters to narrow the results, for example by polycount or other features, and to search through categories. The results panel uses the widgets you already know from the matcaps and the brushes, for example; it shows the thumbnail of the model so you get an idea of its shape, and then you can click on "View on Sketchfab" to open the Sketchfab viewer and use the model inspector to break down and inspect the model: the topology, the textures, the materials, et cetera. You have some basic information in the plugin, like the name, the author, the license, and the number of animations the model has.
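The filters described (polycount, categories, animation) map onto query parameters of Sketchfab's public data API; a sketch of building such a search request. The endpoint and parameter names follow the public API as I recall it and should be treated as assumptions, not the add-on's actual code:

```python
from urllib.parse import urlencode

BASE = "https://api.sketchfab.com/v3/search"  # public data API endpoint

def build_search_url(query, max_faces=None, category=None, animated=None):
    """Build a model-search URL like the add-on's filters.

    Parameter names (max_face_count, categories, animated) are modeled
    on the public Sketchfab data API; treat this as an illustration.
    """
    params = {"type": "models", "downloadable": "true", "q": query}
    if max_faces is not None:
        params["max_face_count"] = max_faces
    if category is not None:
        params["categories"] = category
    if animated is not None:
        params["animated"] = str(animated).lower()
    return BASE + "?" + urlencode(params)

url = build_search_url("car", max_faces=50000, category="cars-vehicles")
```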
And then you have the pages that are displayed like that. So now let's say you have selected a model; the idea is to show a quick video of the plugin in action. So here it's just logging in, easy. And then there is a search, and the first model will be imported into the scene. So we'll see that. So we have a model imported: it's loading the file, adding it to the scene, and you have the Eevee rendering in PBR. So that's the first one. Let's try the other one, the car. Same thing, it just has some more details. The model is imported at the cursor, so you can choose where to import it, and the root node of the model is selected so that you can easily transform and place it right after the import. So we have the car. The other element is that all the nodes from the model are placed into a collection with the name of the model, so that it's easy to retrieve the nodes and make modifications if you need to. So... Thank you. Thank you very much.

And here you have the famous character, everybody knows him. This is just to demonstrate the animation. So it was very quick. You have all the skinning data and you can play with the skeleton. And that's it. And the last model, just for fun. Thank you. Let me make sure I can present. Just the next one. Sorry, it's taking time. So, just to go quickly through the next steps and what is not working: we're working to support single-sided materials and some transparency modes in specific channels. The next step is to make more models available, like the ones you own on Sketchfab and the ones you buy. And to finish quickly, the idea is to improve the experience, to make it easier to import and use the models. We want to track the licenses so that you have the licenses of all the models you use in your scene. And most important, to take advantage of the new Blender 2.8 features that are amazing, like the asset management. So we are looking forward to it. Thank you very much.
Do you have sound on the video? Yeah, we're just doing the sound. Mathieu, you can come down, you're next. And Golem Network.

Hi, I'm Dan from Golem Network. We are a distributed rendering platform, and I'm just going to show you a couple of videos, and I'm even going to do a magic trick for you: I'm going to narrate the video without moving my mouth. We have a table upstairs and we have some materials up there. We have stress balls if you're stressed. And we also have me, if you want to talk to somebody about how stressed you are. And if you want to talk about rendering, I can do that too, so I'd be happy to. Yeah, that's great.

After launching, I start in the network tab. To create a new task, I go to the task tab and drag and drop my blend file to begin rendering. The file is approved, so I'm applying a default resolution, choosing the frame range, format, and output location. Now I'm scrolling down to the task settings and filling them out according to the file I want to render. You can check how to best prepare your render task in Golem's documentation on our website. The cost of the job should be around 6.67 GNT for this task. I'm now starting Golem and Blender and waiting for the rendering to finish. You can see the progress bars below the screens. Golem is connecting to nodes on the network to complete the render; Blender is using only the power of my local machine. Golem is moving forward much faster. Golem has finished first; Blender is still at 9%. The total rendering time on Golem was 40 minutes, and we still have to wait for Blender. It's finished, and it took about 8 hours 11 minutes, about 12 times longer than Golem. To see my job, I'm going to the output location I provided earlier. All my frames are there, so I'm checking them one by one. And here is the final result. So, yeah, the opinions of that guy do not reflect the opinions of Golem Network.
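The timing claim in the demo is easy to check: 8 hours 11 minutes against 40 minutes is indeed about a 12x difference.

```python
# The demo's numbers: Golem finished in 40 minutes, while the local
# Blender render took 8 hours 11 minutes.
golem_minutes = 40
local_minutes = 8 * 60 + 11  # 491 minutes

# "About 12 times longer" in the narration:
speedup = local_minutes / golem_minutes
```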
But no, that's a quick demo of what our network can do for you. If you want to learn more, please come upstairs tonight or tomorrow and talk to me. We want to learn about what you're doing and we want to work with people on cool projects. So, thank you. Yeah, and Sebastian König next. So five minutes.

Hello everybody. My name is Mathieu Dupondodin-Chan. I'm an architect and a firm manager. I've used Blender for a long time for architectural renderings and a lot of 3D printing. Okay, there is a mistake here, so I'll just go fast. These are mainly the two fields where I use Blender, generally. Then I worked with a friend of mine who is an artist. He makes a lot of linocuts. Maybe I can change the slides myself. He makes a lot of linocuts, and he tried a 3D stereoscopic rendering with a linocut, so you can watch it with glasses. He's also using the default cube to make chainsaw engravings, and a lot of monumental structures, like this big head that you can enter and visit. And he asked me one day: okay, I want to build a flying saucer. He doesn't have the structural background to design the structure, so he asked me to do the structural design of it, and I worked with him on that. And of course I used Blender for it. Technically it's very basic modeling. So we started by defining the shape. He gave me those two drawings, saying: okay, people should be able to enter the flying saucer, and they should not be killed by the flying saucer. Those were the two main requirements. So the first thing was to design the shape, to check the size and the angle of the crashed flying saucer. Then, while we were drinking beers, I made a small video. So this is not Susan and Willis. Okay, it was just for fun. Where is full screen? Sorry. Okay. Well, hello, Willcox.
This is Tanya Tucker, and you're listening to K-H-I-N. I think that's a YouTube video playing at the back. So, the work in Blender was quite basic. I used Blender because it's very fast when you know it; even for structural design, I prefer to work in Blender. This is the basic frame of the shape. I designed everything in Blender. There are a lot of booleans, and with booleans you have to cross your fingers, because you put one boolean, a second, a third, a fourth, everything is fine, you're happy, and then at the twelfth boolean everything disappears. And you just have to move something 0.01 in one direction and it comes back to life, and you start again. What was interesting was working with him. I won't go to the Internet again, but I used Blend4Web, because he's living in Paris and I'm living in a small village. I sent him the file in Blend4Web so that he can move around, and the artist can check whether the structure is what he wanted, with the guy inside, so he can verify the size. And then he needed the plans. So I used FreeCAD: I imported the shapes into FreeCAD and then drew them again as very precise plans so that he could take measurements. And then the 3D part was finished for me, and he started to work. So, back to reality. This is the first test to check that it's working. And then he started to assemble it. So the flying saucer flew; it flies for 5 minutes like that. And this is the way he did it: it's only scrap wood. And then you have the cockpit inside; it's scrap, too. And it was very nice. It's destroyed now; it was only supposed to stay four months. Then he said: okay, I want to participate in another festival. It must be monumental and playful. And he said: okay, I want a giant octopus. Easy. So, why not? So I just used Blender to make the first sketches.
So, basically, it's a cube and a couple of modifiers, and we are waiting for the selection process. Thank you.

Thank you. And I will ask Andreas Fischer to come down. You can start.

Yeah, hello. My name is Sebastian König. I am from BlendFX. We are a small studio in Leipzig and... what? Okay, I'll just continue. Recently we have been working on a short film, and we are still working on it. My talk is about a VFX color workflow in Blender, but not the default one; it's a new one. Also, it's probably not the best idea to talk about color management in just five minutes, but I'm going to try, so I'm going to talk very fast. So, this is the movie. It's called Der Hauptgewinn, or The Jackpot, by Alice von Gwinner. No flying saucers or sci-fi stuff; it's just watercolored boxes on top of buildings. So it's basically set extension, and, yeah, it's not that complicated. However, the complicated thing was the color pipeline, because the movie was shot on the Arri Alexa, and the Arri Alexa records in ARRI Wide Gamut and in Log C. That will be a topic later.

So, a few words about the pipeline. First, of course, you do the filming. Then the director starts editing, with the editor, of course. Then the final cut goes to color grading. And while the grader is grading, it also goes to VFX; that would be us in this case. Ideally, once we've finished, we give our final composites to the color grading, the grader just switches his footage to ours, the grade still works, and everything is fine. Then we can deliver to cinema as DCP, DCI-P3, whatever, to web, and to DVD or Blu-ray. But that only works if the source footage is exactly the same as the VFX; with the VFX added, of course, but the color space has to be the same. In this case, that would be ARRI Wide Gamut for the source footage and ARRI Wide Gamut for the visual effects. However, in Blender, the color space is Rec. 709.
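For reference, "records in Log C" means the camera stores scene-linear light through a logarithmic transfer curve, which is why ungraded footage looks so flat. A sketch of the Log C (v3, EI 800) curve using ARRI's published constants; treat the exact numbers as an assumption to verify against ARRI's own documentation:

```python
import math

# ARRI Log C (v3, EI 800) constants from ARRI's published formula.
CUT = 0.010591
A, B, C_ = 5.555556, 0.052272, 0.247190
D, E, F = 0.385537, 5.367655, 0.092809

def logc_encode(x):
    """Scene-linear value -> Log C code value."""
    if x > CUT:
        return C_ * math.log10(A * x + B) + D
    return E * x + F

def logc_decode(t):
    """Log C code value -> scene-linear (what the linear-EXR step needs)."""
    if t > E * CUT + F:
        return (10 ** ((t - D) / C_) - B) / A
    return (t - F) / E

# 18% grey encodes to roughly 0.391, i.e. well above mid-grey on a
# display, which is exactly the flat, washed-out look the talk shows.
mid_grey = logc_encode(0.18)
```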
So that does not work; the source is not the same as the visual effects. So why does that matter? Well, the ARRI Wide Gamut is that blue triangle over the weird shape, which represents all the visible colors; that's the sciencey stuff, whatever. The point is that Rec. 709 is a little bit smaller, so there are fewer colors. In practice, that looks like this. We have the footage and it looks very flat: Log C encoding, ARRI Wide Gamut color space. And that looks horrible, of course, so we need a lookup table. You take the lookup table, a LUT, you apply it, and suddenly it doesn't look that flat anymore; it looks nice, because the colors have been transformed to look nice in Rec. 709 color space on an sRGB display, blah, blah, blah. So, can we do that? Can we use ARRI Wide Gamut footage encoded in Log C in Blender? No, we can't.

There's one guy, probably in the entire universe, who really understands how that works, and that is Troy Sobotka. He's the guy who gave us Filmic. Yay! We asked him what we could do, and he made us our own config file for Blender, the color management config file. And now we can do this: we go to DaVinci, to the input color space, and we keep that, but the input gamma is not Log C; it will be linear, and linear footage looks a little bit crappy. However, we can load that into Blender, because Blender understands EXR. So, it's a little bit better. But now we have this EXR in Blender, and Blender still assumes the EXR has been encoded in Rec. 709; oh my god. With the awesome config file, it looks like this. So, a little comparison: before, after. Looks great. Another example: we have this chicken.
With default Blender, it's a bit crappy, but with the new config, it looks great. So, the good thing is that we apply our visual effects and we save the result as an EXR, and that EXR will be in ARRI Wide Gamut, because all of Blender has been switched to ARRI Wide Gamut with Troy's color management config. So the only thing we have to do is go from Log C to linear EXR, do our visual effects, and when we deliver, we go from linear EXR back to Log C. The color space is still the same, so our color grader is very, very happy. So, OpenColorIO, yay! Also, thank you, Troy. And I want to raise awareness for color management in Blender. Thank you. You made it! Whoa!

All right, since we don't have much time, while he's setting up I'm just going to briefly introduce myself. My name is Andreas. I'm the co-founder and creative director of Studio ANF from Berlin, and I'm going to give you a little tour de force of our portfolio, commercial and mainly artistic work, over the last, I would say, five to eight years. And if you have any questions, yeah, exactly. Yeah, that's the name of my talk. Okay, no, it's the other side. Okay, yeah. So, it's a lightning talk, so we have to be quick. No more animation effects after this. So, shock and awe is the motto for today. If you have any questions afterwards, let me know. We do a lot of projects that are mainly based on my personal artistic practice, which then somehow miraculously turned itself into a more commercial practice. So we work for brands and different clients, and sometimes also in cultural fields. The core of our work is making things that make art. What do I mean by that? As opposed to creating a singular work, we create a system, by writing code, by setting something up that has a certain autonomy and a certain life of its own. I'm going to start with the fine art.
So, this is a group exhibition at Rua Red in Dublin. This, for example, is something made completely procedurally with the open-source programming language Processing, which is based on Java; it's like a multimedia wrapper for Java. So this is one of the prints. This was produced by the system, and I'm going to explain what I mean by that. You have a certain degree of randomness and autonomy: the software is running, and each time it runs, it creates one unique image over time. So this is a particle system, which is something that you're familiar with. This rendering is an in-situ rendering, which is something we sometimes do for clients. This one is of course done in Blender, just so you get an idea of the size of things. This is an example based on the same system. It's called Schwarm, or Void, kind of an extension of it. And this is for a media installation, which is also something that we do a lot: output from the same system. We also work with sculpture. Most of this was prototyped in Blender and then CNC manufactured; this is CNC-milled MDF, painted black. I'm not going to get into this, there's too little time; I have about two minutes left, so I'm going to burn through the rest. More sculpture. We did some fashion. We did some prints. We didn't do the actual fashion design, but we did the generative patterns for the prints; this is for an Italian fashion house. Here is a brief overview of the more corporate projects from the last years. This is for IFA, the trade fair. This was a collaboration with a few other design studios from Berlin, where we all contributed some of the content for this media installation for Samsung. This is a 360-degree real-time environment that we also made in Processing; some prototyping of that happened in Blender, too. That was for Diesel, for the flagship store in Rome.
And we do a lot of media facade work, so we get commissions to create animated content for large-scale displays on facades. This is the Marriott in Los Angeles; a little closer. This is in Seoul. This is an installation in a lobby, for a bank in France. And I'm just going to show one video. Okay. Thank you.

Hi guys, I'm Marius from Render Street, and as usual I'm here to give you a quick insight into what we're doing. I suppose everybody got their envelopes, so now you know what to do if you get one of these banknotes; you have a few suggestions there. You can go for a coffee, you can buy one of these... you get a few different kinds of money. So, we have the monthly Render Street One plan, and we've seen it do awesome things. As testimony to that, there are the 23 million frames rendered on that plan so far, and you've seen a couple of movies here at the Blender Conference made with it. Just upload your project by any means you want, really: FTP, Dropbox, plug-in, whatever. Just upload it, and it works with dependencies, with plug-ins, with everything. Now, moving forward: even if Ton took us all by surprise and announced Blender 2.80 today, we managed to put it up on the farm. So you can render your Blender... thank you... you can render your Blender 2.80 projects today, or tomorrow, or whenever. We'll update it as the beta comes and then as the final version comes out, of course. Now, moving forward to next year: you might remember that we had a program called Render Street for Artists, where we offered free rendering for open projects. It ran for a couple of years, and we're trying to reboot it next year. The details are not ready yet, but you can sign up at this URL and you'll get notified when it launches and how it will work. But again, it will be free rendering for selected projects. Some other things coming up: a new interface and functionality for the job management page.
It's long overdue; it will happen soon. The beta will be ready this year, and we need your input to see how it works. So if you want to beta test it (it will be a closed beta at first), just drop us a line and we'll let you know when it's ready; we'd love to hear feedback. Also, we have a lot of other surprises ready for next year. I can't talk about them now, but you'll be pleased when they launch. And we love you all, as always, for your support. Thank you. And also, if you have free space on your laptop, we have Render Street stickers. Thank you.

I will ask Pierrick Picaut to come down next. Perfect. Okay. So my name is Oliver. I'm studying computer science at TU Berlin, and I quickly want to show you a project I'm currently working on and ask for your participation in a study. I'm going to skip this one. So basically, I'm doing a project on distinguishing between computer graphics images and photos using machine learning. For that, I'm first collecting a data set, which I'm currently working on. Then I will implement a state-of-the-art neural network to distinguish between the two. And I will conduct a small study, and this is the part that comes later, and then compare the results to see how well the neural network does and how well humans distinguish between them. Basically, the questions are: how good are we at distinguishing between CG and photos? What factors play into that? How well does the machine learning approach work? And which kinds of images are hard for humans, and which kinds are hard for the neural network to decide on? So, if you want to take part in my survey, just go to this very short URL. It's a simple survey: there are two questions about your background, and then it's just images, and you say, this is a photo, or this is CG.
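The study's classifier is a state-of-the-art neural network on images; as a toy illustration of the same binary photo-vs-CG framing, here is a nearest-centroid classifier on two made-up features. Everything in it (the feature values, the feature meanings) is invented for illustration:

```python
# A toy stand-in for the study's real classifier: nearest-centroid
# classification on fabricated 2-feature vectors (e.g. sensor noise
# level, edge sharpness), just to show the binary framing.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(photos, cg):
    # One centroid per class.
    return {"photo": centroid(photos), "cg": centroid(cg)}

def classify(model, x):
    # Pick the class whose centroid is closest (squared Euclidean distance).
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: dist(model[label]))

model = train(photos=[(0.9, 0.2), (0.8, 0.3)], cg=[(0.1, 0.9), (0.2, 0.8)])
label = classify(model, (0.85, 0.25))
```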
If you already know the image, there's a button at the very top in the right corner that says "I know this image", because then it doesn't make sense: if you already know the image and know that it's CG, for example, then we can just skip it. So this would be it. If you want to support this, you can also look at my Twitter; I tweeted about this, and it is pinned. So if you can retweet it, that would help me a lot, just to reach some more people. And that's it, basically. Thank you very much.

You can come up. The next one will be Vladimir from OSM, so please come here. Hi, everyone. I'm Pierrick from P2Design. Maybe some of you already know me, because I make tutorials on YouTube. I've been freelancing for five years now, and for the last two years I've been working almost full-time on a video game. At the end of July we were like: we have to make a trailer to announce the game for the Paris Games Week, which is happening right now, in Paris. And I was hoping to finish it soon enough to present it during the Blender Conference more officially, but I finished it last week, so I didn't have the time to prepare anything. But it's here, and it's quite exclusive. And it was made in Blender Eevee, because time-wise, budget-wise, and challenge-wise, I couldn't afford anything else. So I hope you will enjoy it.

These sandy shores for a thousand years. We are the people of the dunes, adventurers of the high seas, exceptional anglers and wrestlers. We are the Crac Clan, and none dares take up arms against us. Other peoples may slaughter one another, but that is of no consequence to us. Our destiny is far, far distant from the lands of war. We have spent a millennium in relative peace, engaged in the contemplation of the stars, the blowing of glass, and the exploration of uncharted lands. I am Akuyani, king of the Crac Clan, and it fell to me to uphold this fragile peace. But now, those cowards have killed my son.

Thank you. And that's it.
I think I will pretty soon make a making-of and a breakdown of everything, because I think there is a lot to say about it. Thank you. Next on stage is Alan Plains, so if you can come down. Thanks.

Okay. Hi. Blender is certainly awesome, but I would like to say that it's even more awesome with OSM. Yeah. So OSM, OpenStreetMap, is a world map created, like Wikipedia, by thousands of people around the world. This is a screenshot of the desktop editor for OpenStreetMap. It's possible to draw a building outline or a street, and to set some attributes for a building or street. I am the developer of the OpenStreetMap importer for Blender. So, some screenshots of what the add-on does: this is New York. It also imports real-world terrain, so basically the add-on creates real-world scenery, with a couple of clicks, for nearly any point on Earth. A number of roof shapes are supported; this is the Moscow Kremlin. And recently I introduced textures for the buildings; this is a skyscraper area in Moscow. Also a late-evening setting somewhere in New York, and a late-evening setting in Paris. A few words about the future direction. This is my four-year-old project that I presented here at the Blender Conference four years ago. It's nothing special, except the fact that those buildings are generated procedurally. I'm going to combine these two projects and generate the buildings procedurally, to add more details. The grammar for how a building is generated will be defined in a simple text format, similar to CSS from web design, and eventually by a node-based editor. And to get the add-on, simply type "blender osm" into your search engine. It's available for an affordable price. Thank you for your attention; I would be glad to talk to you.

Thank you. After that, we will have Victor Bjelkholm. Hi, everyone. My name is Alan Plains. I come from Barcelona, Spain, and I'm going to show a bit of my website and my work. I've been freelancing for about five years.
So this is me and my wife, my beautiful wife, Michelle. We work together: she does 2D animation, I do 3D. And I recently had the pleasure to work on the 2018 Emmys. In association with OnePlus Tested and Oresh Kapoor Productions and Alucinari, we worked together creating the full Emmy backgrounds in Blender and Eevee, which was pretty interesting, really cool. Here are some images. This is done with Eevee materials and rendering; we did some volume effects. This is towards the end of the Emmys; it's actually really cool. This is Peter Dinklage. And these are some of the backgrounds we worked on together as a team. I'm just like, yay! Peter Dinklage. And a background. Mine. So it's really cool. And, yeah, this is some other stuff. Basically, what was great about using Eevee and Blender was the super quick feedback loop: getting feedback, working fast, doing materials and testing things really fast. It was very, very helpful for all of that. In fact, most people probably feel sometimes that using Blender professionally can work against us; everyone's like, ooh, why don't you use other programs? But in this case, Blender is actually what got me the project. And that's really exciting news for all of us: the hard work that is going into 2.8 and everything is giving us tons of possibilities. I'm just really thankful for all your hard work on that. And yeah, that's it. Thank you.

Next on stage is Thomas Radeke. Victor? Distributed rendering. Yeah, the first one. Yeah. All right. So I am going to talk about Blendist, which is an experiment I've been doing with distributed rendering using Blender and IPFS. My name is Victor Bjelkholm. I work as a software engineer. I am not a 3D artist, so if you want to help me and tell me I'm wrong about some of this stuff, please do, so I can fix it.
So I work on making the internet decentralized, basically, with Protocol Labs, which is the company I work at. We have some projects: IPFS, libp2p, IPLD, Filecoin, and other things. But in general, in my free time, I try to lower the lifetime of my CPU and GPU by pushing rendering to them. So why did I do Blendist in the first place? I wanted to learn the Blender API; I'm a developer, I like to deal with code, and Blender exposes everything through code. And I also want to research ways of making rendering more efficient and just faster.

So why IPFS? Why IPFS and Blender? IPFS is a distributed file system. It's content-addressable, it saves bandwidth and storage, you have secure transfers, and everything is a DAG, which probably no one knows what that means. I am going to explain this; it might be a bit too technical, but I will be around, so if you want to come and talk to me about it, it will be very fun. Content-addressable basically means that instead of referring to data based on its location, you refer to data based on its actual contents. When we want to borrow a book, we don't tell people to go to this library, to this shelf, and in the far top right corner or the left corner you will find this book. Rather, we have ISBN numbers that define what the book is, and then you can go to any library in the world, borrow the book, and be sure it's the book you wanted. Basically, what we're trying to do is the same with data. Instead of going to 192.168-whatever, you would just have a hash, which is the hash of the content. When the content changes, you get a new hash; the name is based on the content itself. So why is this powerful? Why is this something we would like to have? You can get files from anywhere. You don't have to go to the internet if your friend right next to you already has the file.
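The ISBN analogy can be made concrete: a content-addressed store names each block by the hash of its bytes, so any peer can serve it and the receiver can verify it. A minimal sketch of the core idea (real IPFS uses multihashes and CIDs, not bare SHA-256 hex):

```python
import hashlib

class ContentStore:
    """Toy content-addressed block store: the key *is* the hash."""

    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self._blocks[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blocks[cid]
        # Verification is free: re-hash and compare with the name,
        # so tampered data from any peer is detectable.
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = ContentStore()
cid = store.put(b"monkey.blend contents")
# Changing the content changes the name.
cid2 = store.put(b"monkey.blend contents v2")
```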
So you can securely verify that you got this file from your friend right next to you: you can just hash it again and verify that it's the right thing that you wanted. So this is very powerful, especially in rendering, where we can chop up files and save a lot of bandwidth that would otherwise be traveling around the network. So what is a DAG? It's a directed acyclic graph, and now everyone gets it. Essentially it's just a graph of things that doesn't loop. So what we could do is split up blend files into graphs, and then if you're reusing objects in a blend file, you can reuse them across many different files. So a quick example. This is a DAG; excuse my handwriting, it's not very good. I drew this by hand. So we have different nodes. We have A that links to B, and A also links to C, and then we have B linking to G and E. Then we have a second graph which is F, but F is linking to B and F is linking to G. We can see that one part of the graph is the same. So if you are sending a file to a render farm and then you just make one small addition and you send the file again, the render farm is just going to pull down the entire file again. Even though everything is the same except some parameter, or just one new object or one material, the render farm is still going to pull down the entire thing. So this would be an example of how you can structure a blend file as a DAG. You have a file, a file has many scenes, and the scenes have objects and materials, and many different scenes can share objects and materials. So the main point of this is to save bandwidth on the wire and also to allow this to happen offline. If you're not connected to the internet and you want to help each other to render, you can all go together offline on a separate Wi-Fi and render each other's things.
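The whiteboard example above can be sketched in code. This is a hedged illustration of the Merkle-DAG idea (a node's name depends on its payload and its children's names), not Blendist's or IPLD's actual encoding: because the shared subgraph under B hashes identically in both graphs, a hash-keyed store keeps it only once.

```python
import hashlib
import json

def node_hash(payload: str, children: list) -> str:
    # A node's name covers its own payload and its children's names,
    # so identical subgraphs hash identically (a Merkle DAG).
    blob = json.dumps({"payload": payload, "children": sorted(children)})
    return hashlib.sha256(blob.encode()).hexdigest()

g = node_hash("G", [])
e = node_hash("E", [])
b = node_hash("B", [g, e])
c = node_hash("C", [])
a = node_hash("A", [b, c])   # first graph:  A -> B, C;  B -> G, E
f = node_hash("F", [b, g])   # second graph: F -> B, G

graph_a = {a, b, c, e, g}    # 5 nodes
graph_f = {f, b, g}          # 3 node references
store = graph_a | graph_f

# Eight references, but B and G are shared, so only 6 blocks are stored.
assert len(store) == 6
```

The same reasoning applies to a blend file structured as file → scenes → objects/materials: a scene reused across files is one block, named once, stored once.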
A more complicated example: we have many different files, the files have scenes, and the scenes have objects and materials, same as before. Every time you upload the scene, it's going to download the entire thing even though the farm already has the files. Instead, we're going to make this a DAG, and then we can reuse things. But we need to make it happen in the background. The person who is rendering this is not going to care about this, which is fine; we can make this work without the user even having to care about it. So this is the traditional architecture of how a render farm works: you have a client, he uploads a job to a master, and the master distributes the data. But instead, what we can have is a peer-to-peer architecture. The agents would share the files with each other, and because we have content addressing, we can make sure that the agents are sharing the right content, that they are not faking the data they send around. So that's it. The project is open source under MIT. I am not providing any binaries yet because it's a prototype. It's made with Golang and Python, obviously. The source is here. You can come and talk to me. You can email me as well. And here is a nice little graph that I made on how it's currently working: how the master, the server, is passing the jobs to the agents. It's still missing some things, but that's basically it. In five minutes, right? It was good. Five minutes. Like color management, I understood everything. So, yeah. Hi, my name is Thomas Radeke. My talk will be less technical and more stuff to look at. And thanks. The regular one, yeah. And F, please. Just F. All right, I'll switch on. So, yeah, I teach 3D graphics and animation, and I also use Blender for fun and recreation. So, yeah, this is one of my recent projects that just popped up by accident. And it all just started with a rock. That rock is a 3D scanned model that I did myself in the excellent Meshroom.
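The bandwidth claim from the talk — "you changed one parameter, why re-upload the whole file?" — can be demonstrated with the same hashing sketch. This is an illustration under assumed names, not Blendist's real block layout: after tweaking one material, only the blocks on the changed path (material, scene, file root) have new names and need to travel; the untouched object is reused as-is.

```python
import hashlib
import json

def h(payload: str, children=()) -> str:
    blob = json.dumps({"p": payload, "c": sorted(children)})
    return hashlib.sha256(blob.encode()).hexdigest()

def file_dag(material: str) -> set:
    # A toy blend file: one object and one material inside one scene.
    rock = h("object:rock")
    mat = h("material:" + material)
    scene = h("scene", [rock, mat])
    root = h("file", [scene])
    return {rock, mat, scene, root}

v1 = file_dag("granite")
v2 = file_dag("granite-shinier")  # one tweaked parameter

# The farm already holds every block of v1; only new block names travel.
to_send = v2 - v1
assert len(to_send) == 3   # new material, scene, and file root
assert len(v2 & v1) == 1   # object:rock keeps its name and is not re-sent
```

With real scenes the ratio is far better: heavy meshes and textures keep their names across edits, so only small metadata blocks cross the wire.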
If you haven't used it before, go and try it. It's open source. It runs on Windows, Linux, and macOS. And it's awesome. Anyway, there's this rock here. It turned out to be a three and a half million polygon thing. And I thought, OK, I want to render this in a beautiful way and just make it look interesting. And I rendered this and I noticed I wasn't really happy about the bokeh. You can see the shapes of that bokeh. I had already tricked around a little bit with that: it has a custom aperture in it so that the bokeh circles aren't really round anymore. But still, I was unhappy and I wanted to improve that. So first, I had to understand how bokeh is made. I just wanted better bokeh. On the left side, you can see perfectly mathematical, perfectly round bokeh, the stuff that is typically generated by 3D graphics software. And on the right side, you can see an actual photo that I took myself. You can see that the shape of the bokeh is actually dependent on the position on screen and also has some irregularities, like the cut-off pieces there on the lower right. And if you look very closely, you can actually see a little structure inside the bokeh circles on the lower right as well, like a small type of fingerprint in it. All of these things are missing on the left side. So I went on to create something like that. What you see here is a simple lens made of two spheres and a simple refraction shader. And I set up a camera to look at a particle system that was made of little fireflies, basically. And I got that thing that you see on the right. This told me it's possible to make better bokeh in Cycles. And I went on to experiment a bit more. And this is one of the first results I got. Looks pretty neat. I actually already had the type of effect that I wanted, custom bokeh, but the problem was with this scene: this was like a 250 millimeter tele shot.
And so it was completely unusable with regular scenes. I experimented a bit more and got this. And that's not really what I wanted — it's kind of blurry. So I had to do some more research into how lenses can actually affect the image itself. I basically went into photo history and found this wonderful piece of software called OpticalRayTracer that helped me design lens systems modeled on classical photographic systems. It's pretty neat: you can just push around the lenses and adjust their parameters to see what kind of result you will get on the right side. And I used this to create more custom lens systems. This was one of the results. So yeah, it's even a bit better, but I'm still not happy yet. Using this kind of system introduced a couple of problems, like focusing. You couldn't just put the focus point directly on the object anymore, because you're basically focusing behind a lens system. I needed to adjust the focus point in tiny, tiny steps and needed to have a live render to actually see what I was doing. And this, yeah, while it is very tedious, it actually produces pretty neat results. So when I zoom out and disable this — yeah, it's focused all right. Okay, this is an animation of what refocusing actually looks like in the bokeh. You can see the shapes of the bokeh actually changing depending on the focus. By the way, what you can see here on the upper left and everywhere, basically, are a couple of optical aberrations, like chromatic aberration — that's when the colors actually split up like this. And I needed to correct that a bit. But here's another thing: I actually built a little synthetic aperture that I could control with a shape key. So opening it up and closing it would produce different aperture sizes and shapes. And eventually I came up with this here. As you can see, the camera's upside down, by the way, because that's the way photographic systems work. And it took me from this to that.
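Why does the focus point end up "behind" the modeled lens? The standard thin-lens relation gives a feel for it — this is textbook optics, not something from the talk, and a real multi-element system like the ones built in OpticalRayTracer would need a full ray trace, but a single thin lens already shows that the sharp image forms a little behind the focal plane, and that it shifts only tiny amounts as the subject moves, which matches the "tiny, tiny steps" of manual refocusing described above.

```python
def image_distance(focal_mm: float, subject_mm: float) -> float:
    # Thin-lens equation: 1/f = 1/d_subject + 1/d_image,
    # solved for the distance from lens to the sharp image plane.
    return 1.0 / (1.0 / focal_mm - 1.0 / subject_mm)

# A 50 mm lens focused on a subject 2 m away forms its sharp image
# just behind the 50 mm focal plane (about 51.3 mm).
d_near = image_distance(50.0, 2000.0)
# Move the subject a full metre further away...
d_far = image_distance(50.0, 3000.0)

assert 50.0 < d_far < d_near < 52.0
# ...and the image plane shifts by well under half a millimetre.
assert d_near - d_far < 0.5
```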
Okay, so I need to finish up. Have a closer look at the top left side. As you can see, there's actually a little structure in the bokeh itself. That's because I actually put a displacement map on the front lens. It's pretty cool. I did a few more experiments. I tried to simulate lens flares, but I had to give up after 65,000 transmission samples and almost eight minutes render time. It did kind of work, but I think Cycles has some problems with that. Okay, anyway, here's some more renders. As you can see, quite a lot of distortion, but maybe I can fix it in post. I don't know enough about optical design yet to actually make a camera system that corrects this, but I think this is pretty neat. That's a scene that I had rendered previously, and it's more like a dream-like thing, which was also the inspiration for the scene anyway. But yeah, that's not as cool. There was also one attempt to actually fix that distortion. It kind of worked, but yeah, I have to do a bit more experimentation. And downsides: it's pretty complicated to build and use. And the render times became a bit longer — not that much longer, though, maybe one and a half to two times longer. I think that's a kind of okay trade-off for that kind of effect. There are some things that cannot be simulated in Cycles, like chromatic aberration. And of course, optical design needs to be your thing. But the plus points: the results are awesome, and I finally got that look that I wanted. So thank you. And in case you want to see more stuff of mine, I've got an Instagram channel as well. Thanks. Thank you. I will ask a few people: Relya Trajkovic, Frederic Steinmetz, and Kevin Kueck. Sorry if I mangle the names. Hi guys. My name is Thomas Beck. I'm a Blender developer and entrepreneur. And as we heard yesterday, when Captain Disillusion had his talk, we are a cult. And so I wrote our Bible. It's... I went back in time — I traveled back in time six years ago and started writing this Bible.
Then I finished it yesterday morning. I released it yesterday morning. You can see it's all in there, everything that you maybe want to learn with Blender. The Blender game engine is not in there; that's the only thing that's not in there. If there's anything missing, write me. That's the address. Just have a look at it. And that's all, because I want you to be on stage now. Thanks. Hi, everyone. I'm Relya from Videos. Last year I stood here and I did a talk about automating motion graphics in Blender. And for the last 10 minutes of that talk, I just complained and whined about how Blender is bad for motion graphics — because it is. And I felt consumed by guilt afterwards. It really sounds bad: I was just complaining, not doing anything about it. So I talked to my colleagues at Videos. We talked to Ton. Ton told us to talk to Dalai Felinto, who was the project coordinator of Blender 2.8. And we fixed things. We fixed some of the things I was whining about. So we fixed — yeah, we fixed text. Don't get excited. It's text. No one uses text except us motion graphics artists. Yeah, so we fixed text. We fixed vertical alignment, and we fixed the way text boxes deal with extra text in them. So first video, please. This is just vertical alignment. It works correctly now; it wasn't working correctly at all earlier. So if you take bottom alignment, it seems like it's not correct until you use a J or some other character that goes underneath the baseline. And we've also fixed — so that's vertical alignment working correctly. Center is now working correctly too. Top seems like it's off, until you use a taller character. This information comes from the font itself, like which is the tallest character in the font. And there's top baseline, which you can only see if you type extra text on a second line. And now if you center the text vertically, you can spin it and rotate it and it will work correctly, not swing around because it was not centered.
Okay, the next video is about text boxes. So we introduced — this is already in Blender 2.8, and this next thing is not yet in, but it will be soon. If you have a text box and you type in some extra text, the extra text is actually going to go out of the box, which shouldn't happen — what's the purpose of the box? So now we introduced some extra options. It also benefits from the fixed vertical alignment. Now we have an option like overflow, which is the default — that's how it behaves now — and truncate, which will cut off extra text by words. And there's also a scale-to-fit option, which will scale the text down to fit the text box. So yeah, that's pretty much it. Thank you guys. Keep blending. I'll see you around. Bye. So I will ask Jonas Dichel — Jonas — to come down. Everyone, my name is Kevin Cuckoo. For a while now I've used Blender for my 3D printed products: jewelry, home decorations, from lampshades to clocks. All my designs are always quite square, and I was thinking, well, it would be good to use Blender where it really shines — maybe make a character, tell a story somehow. So a few months back I designed this character. Really basic, really simple, but yeah, I thought it could be quite nice to improve my skills on this character. So I was thinking, yeah, what can I do with it? How can I tell a story? So I just rigged up a few characters. I made some 3D prints out of them, and then — yeah, I'm a big fan of street photography as well. So yeah, I took TechnoNetman — that's his name — for a small ride in London. So my next steps will be to create more characters, improve my character a bit, and go from there. And yeah, you can follow TechnoMan at 3DCAF if you want. Thank you very much. And after that I will ask the guys from Chordata — yep, okay, you're in. So, fine. Who was it? Yeah, you can. How do I advance one image? Sorry? How do I advance one image? Just go down. Down, okay. All right, yeah, hi.
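The three text-box behaviors described above (overflow, truncate by words, scale to fit) can be sketched in plain Python. This is an illustration of the behaviors only, with invented helper names — Blender's actual implementation lives in C, exposed in 2.8+ as the `overflow` property on text objects.

```python
def truncate_by_words(text: str, max_chars: int) -> str:
    # TRUNCATE behavior: cut off extra text at a word boundary
    # once the box capacity is exceeded.
    if len(text) <= max_chars:
        return text
    kept = text[:max_chars]
    return kept.rsplit(" ", 1)[0]

def scale_to_fit(font_size: float, text_chars: int, box_capacity: int) -> float:
    # SCALE behavior: shrink the font so the text fits the box.
    if text_chars <= box_capacity:
        return font_size
    return font_size * box_capacity / text_chars

assert truncate_by_words("spin the donut faster", 12) == "spin the"
assert scale_to_fit(1.0, 200, 100) == 0.5
```

In a 2.8+ script the real switch is one line per text object, e.g. setting its `overflow` to `'NONE'`, `'TRUNCATE'`, or `'SCALE'`.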
I'm Frederic, and I just wanted to — I bought this mouse and I thought it might be worth sharing why I really like it. I think that if you want to use Blender, you can optimize your PC, but you can only get so far. In the end, often the bottleneck will be your own performance, and the tools you have — the hardware, how it fits in your hand, how comfortable you are with it, how fast you can use it — actually depend on your gear. For example, I model a lot, and I know that if you're working with hair or smoke simulation or something, you need this one, because you're gonna be stressed out at some point. So that would be the next step in what I wanted. For example, this thing has a continuous mouse wheel, which I really like. If you push the mouse wheel, it will keep spinning until you touch it again so it stops. A lot of people can't really deal with that, and they ask me what this mouse wheel is supposed to do. For me, it works really well, because you can scroll through 600 lines of code with just one finger spin. And for non-coders, that's about 15 Facebook wall post thingies. Okay, so besides the continuous mouse wheel — since this is a very well-designed mouse, there's actually more stuff. For example, this little thing means I can push the mouse wheel to the left and to the right. And I use that for navigation, like in a browser: I would push the mouse wheel left and right, and this way I have these two keys here I can reassign. So I have more buttons on the mouse. And it doesn't interfere with the middle mouse button, which I found is great. So I use this for forward, and on these I put Control and Shift. And that's actually something very important to me, because when it's late at night, I sometimes really still want to do something, but I don't have the strength to use both hands.
So what I usually do is lean on my desk like that, and you can do a lot more if you have Control and Shift on your mouse. For example, you can use the Node Wrangler with nodes — you can just hold Control and Shift and connect all your nodes. It's really handy. So in the end, I realized there's one button left. So now I have quite a few buttons and I can do quite a few things with just my right hand. And I see Alex laughing, so I'm guessing a lot of you can guess what I assigned this button to. As I said, I'm a modeler. Thank you, that's it. So who's next? Yeah, okay. All right, so I'm Jonas. I've been using Blender for eight years now, and two years ago I tried the HTC Vive for the first time, and the first thing I thought was: this would be fantastic with Blender, because you have 3D interaction with the computer. Blender is 3D, so why doesn't this exist for Blender yet? So I've been working on that ever since, and this summer I started to really get serious with it and developed — well, yeah, that's the headset, if anyone's not seen it. So I created these pie menus that you can populate with functions, so they're completely modular, and then you can create a second menu or even more and just link them to each other. Here, I'll just — and you just add it to there, and then you can switch between the menus, and you have a bunch of functions in there. For these functions, you can either use the ones that I built into the add-on, or you can add VR functionality to your own add-on with a simple API that I wrote in Python. And here's a few examples of functions that I wrote. For example, here you can select objects — and then see how long it takes me to do it. Taking my time here, sorry. Then you can use the translate function, which has pinch-to-zoom functionality and stuff. So that's pretty cool. It works with edit mode too, and I recently added grease pencil too, just last week. So here's me trying to draw.
Yeah, that's supposed to be a bear, and then there it's done. Then I added edit mode too, so you can extrude stuff and move things around with the translate function. Yeah, excuse my terrible modeling here — I was pretty tired at the time. It was like four in the morning before I left to come here. Sculpting also works. So here I'm just moving stuff around — and another bear. I guess they're fun to model. Pretty fierce bear this time. Angry for some reason. Yeah, at the bottom you can see the view from the headset itself. Yeah, that's the bear. Pretty good bear. And to demonstrate that the Python API really works, I've been working together with Chipp and masterxeon1001 — I think that's his real name; he wanted me to call him that. So yeah, I've been working together with him. He made the Hard Ops and BoxCutter add-ons, and I've worked with him to implement his new KIT OPS add-on, which lets you add kitbashing elements to existing meshes, adjusted to the angle of the surface. So yeah, here's an example of that. You basically have a bunch of folders in there with blend files that you can then add to a mesh, like that. Yeah. And yeah. Yeah, exactly. And I did not add anything to my add-on to create this functionality — I just had to add stuff to the KIT OPS add-on. So yeah, anyone can basically add stuff to VR with the API. And here's another example of the grease pencil with the color picker — there you go, there's a real pencil on it. And then there's a color picker. And yeah, pressure sensitivity is there too, because the trigger detects how much you're pressing on it. So yeah. Terrible handwriting. One more example of what you can do with the API: I tried to do something with physics. So here's bowling. Did not get a strike, but I did try again and managed to get a spare, I believe. Let's see. Will I do it? And a slow ball, but — and yeah. So I haven't published it yet, but I will soon.
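Since the add-on is unpublished, we can only guess what its "modular pie menus populated with functions" API looks like. Here is a purely hypothetical sketch — every name is invented — of the general pattern: a registry that third-party add-ons (like KIT OPS in the demo) could push their own entries into without the VR add-on knowing about them in advance.

```python
# Hypothetical sketch — the real add-on is unpublished; all names invented.
class PieMenu:
    """A modular menu whose slots are filled by other add-ons."""

    def __init__(self, name: str):
        self.name = name
        self.slots = []

    def add(self, label: str, fn):
        # Any add-on can register a callable under a label.
        self.slots.append((label, fn))
        return self  # allow chaining

    def trigger(self, index: int, *args):
        # The VR controller picks a slot; we run whatever was registered.
        label, fn = self.slots[index]
        return fn(*args)

tools = PieMenu("tools")
tools.add("select", lambda obj: f"selected {obj}")
tools.add("translate", lambda obj, delta: f"moved {obj} by {delta}")

assert tools.trigger(0, "cube") == "selected cube"
assert tools.trigger(1, "cube", (1, 0, 0)) == "moved cube by (1, 0, 0)"
```

The point of such a design is exactly what the demo showed: KIT OPS gained VR support by registering its own functions, with no changes needed inside the VR add-on itself.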
And I will be posting information on how to get it soon, probably in the next weeks or months. And a little bit of self-promotion can't hurt either, so there you go — that's where you can find me. All right. I will just call the next one to come down. Yeah, here we go. And Valerio and his friend — I forgot his name now, Michel, I think. The two Italian guys, you can come down. Thanks. I have a website. It's okay. We'll probably play the video, but you can use the website: chordata.cc. Yeah. I can use the website instead of this. chordata.cc. Okay. Hi, everyone. I'm Bruno. I want to share with you a project I've been working on for the last years. It's a motion capture framework — a hardware and software motion capture framework. It allows you to do human motion capture and also other types of motion capture. Let me just show you a quick video that might explain it better than me. It's just audio. Chordata is the motion capture system that you can build yourself. It allows you to have your own motion capture gear just by putting together some inexpensive sensors and open source software. Our goal is to make motion capture available to projects where the current costs of this type of system would otherwise be unmanageable. We also want to help those of you who, like us, believe in free and open tech. We want you to have access to a motion capture system that matches what you think tech should be. In order to build the hardware, you can buy pre-assembled kits. Each one comes with a board and some pre-soldered components. Apart from our hardware, you will need a regular microcomputer like the Raspberry Pi. All the sensors and the microcomputer get fixed to the performer's body and connected together with telephone-like wires. You can now download the software and flash an SD card with the custom Linux image containing the main program. On your computer, download Blender, the powerful 3D manipulation software available on Windows, Mac and Linux.
Within Blender, our add-on will allow you to start capturing with just a few clicks. Okay, I just found it here. This is the system. It's an open source system; I developed the hardware and the software. Even though the video says those kits are available for sale, the system is still in an alpha version. It's past the prototyping stage — everything works really fine — and we are working on making it easier to use, for example by making it so you don't have to solder it by hand. How much time do I have? A minute. Let me just show you quickly. Two minutes, sorry. Okay. How do I open it? Okay, the system is basically composed of these tiny sensors — it's really pixelated, this image — and those sensors contain what is called an inertial sensor, like the ones that are in your phones. This is one of the main hardware developments of the project. Then all this data is collected by a Raspberry Pi, a common microcomputer, an SBC, and it is processed and transmitted to a client. In this case, the client is a Blender add-on. We use Blender a lot in development because it's a really handy 3D sandbox, but it can be any client. For now, the Blender one is available. All the data is transmitted over a wireless network, and inside Blender — or inside any client — it can be recorded, processed, or retransmitted to another program. For example — I don't have time to show you, but — you can do live visual art with it. We did something with openFrameworks; I don't know if any of you know it. For example, this shot here — you cannot see it properly, but there's a projection there with live visuals. What to say? If anyone is interested, tomorrow I'll be putting the system together upstairs. If anyone wants to take a look, just come upstairs and meet me. Thank you. Okay. Thank you. That was the last lightning talk. So I know there were still people who wanted to show, and it's always hard, but we have to move on with the program. So... No, but yeah.
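The pipeline Bruno described — inertial sensors read by a Raspberry Pi, processed, then streamed over the wireless network to a client such as a Blender add-on — boils down to serializing sensor readings into network packets the client can decode. Here is a hypothetical sketch of that step; Chordata's actual wire protocol and field names may well differ, and the JSON format below is invented purely for illustration.

```python
import json

# Hypothetical packet format — Chordata's real protocol may differ.
def encode_sample(bone: str, quat: tuple) -> bytes:
    # One rotation reading from one IMU, ready to send over the network
    # (e.g. via UDP from the Raspberry Pi to the client machine).
    return json.dumps({"bone": bone, "q": list(quat)}).encode()

def decode_sample(data: bytes):
    # The client (e.g. a Blender add-on) turns the packet back into a
    # bone name and a quaternion to apply to an armature — or, as in
    # the talk, retransmits it to another program like openFrameworks.
    msg = json.loads(data.decode())
    return msg["bone"], tuple(msg["q"])

packet = encode_sample("forearm.R", (1.0, 0.0, 0.0, 0.0))
bone, q = decode_sample(packet)
assert bone == "forearm.R"
assert q == (1.0, 0.0, 0.0, 0.0)
```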
Thank you.