We run a motion graphics template marketplace, actually a subscription-based library, which allows our users to log into our platform, pick a video template they like, logo stingers, intros, outros, transitions, stuff like that, and get it rendered out with their own settings, their logo, their colors, their text, within four to five minutes. So I'm just going to show you how it looks in action. They log into our platform, pick a template they like, and upload a logo. Here I uploaded the Blender logo. Then they type in some text, pick a color to match the logo, name the video and click render, and in about four to five minutes they have a download link for an .mp4 file that they can play and put in front of their video, their YouTube clip, their show, whatever. As you might have guessed, all of these templates are Blender .blend files set up for animation with placeholder logos, text and colors, and Blender does all the heavy lifting in the background. Now, some numbers. We have 60,000 users and over 800,000 videos rendered. Actually, I met with my colleagues yesterday and they corrected me, those figures were two months old: it's 75,000 users and a million videos rendered. That's around two and a half thousand videos rendered each day, which is amazing. And we have 700-plus templates, a library that's growing every day. Template marketplaces are not a new thing. You have Envato's VideoHive with 47,000 After Effects templates and 4,000 Cinema 4D templates. You have Pond5 with a huge selection of After Effects templates. Even Blender Market has things that could be considered motion graphics templates; this one is actually a preset. But what's common to all of these is that the user needs to have access to an app, like After Effects or Blender, and needs to know how to use that app to get the final result.
Now, for Blender, owning the app is not a problem. For After Effects, it's a big problem for a lot of people. And even though these templates are supposed to be super easy to customize, some people really do need to hire a freelance animator to customize the template they already purchased. So I'm going to talk about some challenges of creating a fully automated solution. Keep in mind that anything anyone uploads needs to come out correctly without any user input. Pick a logo, any logo, and it has to work on our platform. As you can see in this image, there are a lot of different logos out there: different colors, different shapes, and most importantly, different aspect ratios. There are wide logos, tall logos, and roughly square round ones like Apple here. So let's say one day Coca-Cola wants to use our template. Here's the Coca-Cola logo in Blender, and they're super happy, and they call the NBA and say, you should use these guys. The NBA uploads their logo, and now it's completely different: it's no longer vertical, it's squished, because this plane was UV-unwrapped and created for the Coca-Cola logo, a wide logo. Now human input is required: an animator needs to go in and reshape this plane, or UV-unwrap it differently. Apple would not be happy either. We don't have two and a half thousand animators on standby to do this, so it needs to work out of the box with whatever they upload. And just by looking at the UV/Image Editor in Blender when there's nothing loaded in it, the solution presented itself: Blender really likes it when an image is square, especially 1024 or 2048. So we use an external app to conform any logo to a square texture. No matter what our users upload, it gets placed on a square transparent texture. We don't care if half of it is empty; we care about the part of the texture that's not empty, and if that part is visible, it's going to work. And we always know what we're working with. It's always square.
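The talk doesn't show the external app's code, but the conforming step it describes can be sketched in a few lines with Pillow. The function name, the 1024 default, and the centring choice are my assumptions; the talk only specifies "any logo onto a square transparent texture".

```python
from PIL import Image

def conform_to_square(logo: Image.Image, size: int = 1024) -> Image.Image:
    """Place a logo of any aspect ratio onto a square transparent canvas,
    so the UV-mapped plane in the template always receives a square texture.
    (Sketch of the approach described in the talk, not their actual tool.)"""
    logo = logo.convert("RGBA")
    # Fit the longest side into the canvas, preserving aspect ratio.
    scale = size / max(logo.size)
    new_w = max(1, round(logo.width * scale))
    new_h = max(1, round(logo.height * scale))
    logo = logo.resize((new_w, new_h), Image.LANCZOS)
    # Centre it on a fully transparent square; the empty margins keep alpha = 0.
    canvas = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    canvas.paste(logo, ((size - new_w) // 2, (size - new_h) // 2), logo)
    return canvas
```

A wide logo ends up letterboxed with transparent bands above and below; a tall one gets bands left and right; a square one fills the canvas, which is exactly the "we don't care if half of it is empty" behaviour described.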
So an animator can just put it on a single plane and do something around it. They can separate it into pieces, explode it, twist it around, or project that texture onto moving pieces, whatever. That solves the whole issue of aspect ratio. We're really happy with this solution; we never needed to look back. What about colors? When we started, we had zero customers. We didn't know who was going to be using this. So the only thing we could look at was big brands with their brand books, which are huge and all available online. This one is from McDonald's. We saw how they use their logos and colors, and a lot of these brands have what's called a logo negative. That's the last one here, on the blue square. Brands like certain colors and dislike others. McDonald's here decided that their logo looks good on red and green, and if you go into their restaurants you'll see wood and dark green and so on. But if it has to sit on blue, and sometimes they can't control what they'll need to put their logo on when they're branding something, they opt for the logo negative, which is a completely white logo, just the logo silhouette in pure white. So we figured, why not help these people out, make it look good from the start and make the logo negative really easy to use? We could just use the logo's alpha: whatever they upload, we take the alpha as a silhouette and colorize it white. So we made templates that use a pure white logo. Like this one, an 8-bit gaming template with a blue sky. What if their logo is blue? It's not going to look good, and they're going to complain. So we made it white; it's always going to be white, and we wrote that down: your logo is going to be pure white. This one as well; we just liked how it looked, a cool contrast between the white logo and the red color. Now, I'm going to stop at this template, because this was a breaking point for me.
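The "take the alpha as a silhouette and colorize it white" step is simple enough to sketch directly. This is my own minimal Pillow version of the idea; the function name is hypothetical.

```python
from PIL import Image

def logo_negative(logo: Image.Image) -> Image.Image:
    """Discard the logo's colours and keep only its alpha channel as a
    silhouette, filled pure white -- an automatic 'logo negative'."""
    logo = logo.convert("RGBA")
    alpha = logo.getchannel("A")          # the silhouette
    white = Image.new("RGBA", logo.size, (255, 255, 255, 255))
    white.putalpha(alpha)                 # white everywhere the logo was
    return white
```

Note that this is exactly the step that breaks when an upload has no transparency: if the alpha channel is fully opaque, the "silhouette" is the whole rectangle.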
Because we received a ticket from a user who complained that her image came out like this, just a square. I immediately knew what was going on: her logo didn't have an alpha channel, so it was fully opaque, nothing to see there. But before I wrote back to her, I asked for the image. I wanted to double-check. And I got something like this. Weird logo, right? This is not the actual photo, I'm not using people's personal stuff, but it turns out she was an old lady, and it was her granddaughter's graduation. She wanted to do something special for her, so she decided to explode her face into a million pieces in this template. It wasn't my job to judge the template choice for this particular event, but it was my job to figure out what our users want, what their needs are, what they're expecting to get. So I went back through all the other tickets, and a lot of the support tickets were about this same issue. People were just not uploading logos with transparency; they would upload whatever. And that really helped us understand what sort of customers our users are. These were not huge brands like Coca-Cola. These were small businesses. They didn't have brand books. They had one logo that someone made for them ages ago, they'd lost all the working files, and this was all they had. And families. Families don't have brand books either, and definitely don't have logo negatives. So we stopped the whole logo negative thing, and started testing our templates so they work with everything: black, white, all the colors, with alpha, without alpha. And we started making more templates that use photos, not just logos. So as you can see here, it's no longer just "upload your logo"; it's "upload your photo", stuff like that. OK, I'm done with images. The next point of our customization is Blender's text.
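Before moving on: the failure mode behind those tickets, an upload whose alpha channel is missing or entirely opaque, is cheap to detect up front. A hypothetical guard (my own, not part of the platform described):

```python
from PIL import Image

def has_usable_alpha(img: Image.Image) -> bool:
    """True only if the image has an alpha channel that is actually used,
    i.e. at least one non-opaque pixel forming a real silhouette."""
    if img.mode not in ("RGBA", "LA"):
        return False                       # no alpha channel at all
    lo, hi = img.getchannel("A").getextrema()
    return lo < 255                        # fully opaque alpha is no silhouette
```

A check like this lets the platform fall back to photo-style treatment (or warn the user) instead of silently rendering a white rectangle.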
Now, this image sums up the whole section. We really, really hate Blender's text, because, sorry guys, it's really bad. Why is it bad? Text can be flat, extruded, or have a little bevel on it, and that's it. For animation, there aren't many options without converting it to mesh, and once we convert it to mesh, we can no longer influence that text object; it can no longer be changed to whatever our users type in. So we're stuck with these basic Blender text options. We had some limited success with font objects. I don't know if people know about font objects: in Blender, they allow you to create an object for each character of the alphabet, and that object gets instanced in place of the character when you type in text. It's awesome, but it requires you to painstakingly recreate every letter of the alphabet, along with all the punctuation and special characters, and in our case, extra ranges like Latin Extended, and Cyrillic if possible. A huge amount of work to get one template working. Then there's alignment. Alignment is horrible in Blender. Left, right and centered are fine; those work as expected. But vertical alignment: this is top baseline, and this is top. That's not the top, it's all the way down there. And this is centered; you can see where the origin is, and that's not the center. I tried a lot of variations, and it just doesn't work. That's a huge problem for us, because we don't know what our users are going to input, and we can't manually shift it around; it needs to work right away. Text scaling is a huge issue too. If you have a template that says "your text here" and you type in something of similar length, it works fine. But some people decide to put their life story into that text field, and as you can see, there's a problem: it goes off-screen both ways. So, with our team of scientists, we created a formula that scales the text down.
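The actual formula isn't shown in the talk, only its behaviour: full size up to some length, then shrinking as the character count grows. A plausible sketch of that kind of rule (the function name and the `fit_chars` threshold are my assumptions):

```python
def text_scale(text: str, fit_chars: int = 12) -> float:
    """Scale factor for a text object: full size up to `fit_chars`
    characters, then inversely proportional to the character count.
    It counts characters only, so ten Ws and ten Is get the same factor
    even though their rendered widths differ wildly -- the known weakness
    mentioned in the talk."""
    n = max(1, len(text))
    return min(1.0, fit_chars / n)
```

This matches the described trade-off: reasonable mixed-width input scales sensibly, while a string of all-wide glyphs can still overflow.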
The more characters are entered, the smaller the text gets. Short text, longer text here, much longer text here; it just scales down. Now, it usually works, but sometimes it doesn't, because ten Ws are not the same as ten Is. The character count is the same, so the text scale is the same, but ten Ws will just go off-screen. So we're kind of counting on our users to input a reasonable mix of wide and narrow letters, also known as words, instead of just typing W, W, W, W. We tried various other solutions and failed. You can't get text to behave and scale down, because it operates in this weird, shady area between object scale and font size: you can mess with both and get the same-looking object with totally different values. I'm going to skip colors, because colors are very straightforward: you just change the color. But I am going to talk about speed. As you saw, everything is rendered on our servers, so speed is important. Slow rendering means a higher price, and we also want our users to get their renders really, really fast, four minutes tops. So how do we achieve that in Blender? If you go online and check speed-up tips for Blender, you'll see Cycles advice: reduce the number of bounces, reduce the number of samples, use clipping and so on. We all know that. But there's no way to get it as fast as we need: 20 seconds per frame on a desktop computer. The busiest frame of the animation needs to render in 20 seconds on an average desktop. There's no way to get a noiseless result at that speed in Cycles, even with a denoiser. So we don't render in Cycles. We render in Blender Internal. I know, shocking, right? But we don't render everything in it. This is a Cycles animation. It's not one of our best ones, but it's a good one to explain how we speed things up.
Some spheres go around this logo and it appears. Take a look, this is the busiest frame of the animation. There's an emissive material that illuminates the whole scene with global illumination, there's some ambient occlusion, and high-quality motion blur. This is all Cycles, right? Everyone recognizes that; you can't do that in Blender Internal. The trick is to figure out what will change between renders, because we only need to re-render the things that change. In this template, what changes is the logo and the colors. When you think colors, you'd say, OK, we need to re-render all these spheres going around. But you don't. We use heavy compositing here. We start with the background, which is a simple color blend. Then we pre-render the first material. There are two colored materials and an emissive one. We pre-render the first one as pure white, then colorize it with a Mix node set to Multiply, like this. Then we add another one and colorize that as well. It's all pre-rendered; it takes a second to composite. The emissive material, we colorize that as well. Almost there, but not quite. Now we change the scene: we set the emissive material to pure white and everything else to pure black, and we get to see where the light from that one emissive material goes in the scene, what it influences. We colorize that too and add it on top of the previous result. And here we go: it has lighting, and it looks like the emissive material is colorizing all the other stuff. Also, the logo is peeking through somewhere in between these spheres, so we render out a mask for the logo. We just render the logo with a shadeless material, which takes half a second, and use its alpha together with this mask. This is the shading on the logo. And there it is, a logo in between these spheres. Some post-processing, and this is the node tree for this compositing.
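In spirit, those colorize-and-layer steps are just per-pixel multiplies and alpha-overs. A numpy sketch of the math (not the actual node tree; function names are mine):

```python
import numpy as np

def colorize_pass(white_pass: np.ndarray, color) -> np.ndarray:
    """Recolour a pre-rendered white/grayscale pass by multiplying it with
    the user's colour -- what a Mix node set to Multiply does per pixel."""
    color = np.asarray(color, dtype=np.float32)   # RGB in 0..1
    return white_pass[..., :3] * color

def alpha_over(fg_rgb: np.ndarray, fg_alpha: np.ndarray,
               bg_rgb: np.ndarray) -> np.ndarray:
    """Layer a colourised pass over the background using its alpha,
    like chaining Alpha Over nodes for each pre-rendered element."""
    a = fg_alpha[..., None]
    return fg_rgb * a + bg_rgb * (1.0 - a)
```

The key property is that the expensive render (the white pass and its alpha) is fixed; swapping the user's colour only re-runs these cheap operations, which is why the composite takes "a second" instead of a full re-render.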
We just change these three colors. That's all we do. And this is Blender Internal now. Pretty much the same, and it renders super fast. The Cycles render: one minute 25 seconds. The Blender Internal render: three seconds. That's a huge improvement in speed. Thanks. Now, are they identical? Nope, they're not. But you wouldn't have known that if I hadn't shown you the Cycles and Blender Internal versions side by side. And I would even argue that the Blender Internal version is better for color customization, because in Cycles much more is happening and the emissive material is colorizing everything, so you can't see the individual colors. This is another example: a grayscale render of these pins moving up and down. It's pre-rendered, and then we colorize it. Then we reuse the scene, loaded in Blender Internal: the pins are already where they were in the Cycles render, and we project the logo on top of them using a shadeless material and mask transparency. So if a pin is covering the logo, it gets masked, exactly where it would be in Cycles, and we use that as the influence. With some post-processing and some lens flares on top, you get a pretty nice-looking template. Technically it's Blender Internal, but it doesn't look like it. So, very fast, super fast. Of course, we are very excited about the Eevee render engine. And ever since the last conference, we've funded the development of the Armory 3D render engine, Lubos was presenting it here, it's also a super cool real-time render engine. This one single frame illustrates its power. So we're ready to use that in our workflow as well. Now, once we were sure we knew what we were doing with this automation stuff, we started experimenting with other things too. We started making live action templates: filming live footage and combining it with CGI elements inside Blender. And it worked out pretty well.
We used Blender's motion tracking, camera tracking, compositing, masking, rotoscoping, everything inside Blender, and we were quite happy with it. You can see it here. So Blender really is a viable solution for this sort of work. OK, moving on. I've told you how we automate things, and we want Blender to be awesome, we want it to be better in the motion graphics world. So I'm going to use the privilege of standing here at the Blender Conference to offer some suggestions for things that could maybe be changed in Blender to make it really, really better for the motion graphics artists out there: a better non-destructive text object workflow, scenes as compositions, grouped animation instancing, and image effects on textures. I'll get to each of these. Non-destructive text first; I already started talking about this. Basically, look at how many modifiers there are for mesh objects versus how few there are for text objects, and even the ones on that list don't really work as expected. There's not much you can do with text. Right now, if you want to reveal text in some interesting way, this is pretty much all we can do: slide it in from the side, scale it up, maybe play with some character spacing and rotate it into view. We need to be able to do more, stuff like this. These examples were made with Animation Nodes. We can't emit particles from text, did you know that? For this, I had to convert it to mesh. And this one is maybe too much. So text needs to be non-destructive: we need to be able to work with text, do something awesome with it, and still be able to go back and change the text. This isn't just important for us and our fully automated solution; it's important for everyone. Imagine you're a motion graphics artist, you have a client, and you finalize your great video. And they say, this is it, this is 100%. Oh, just one thing.
Can you just change that text to something else? You can't. You have to go back and do everything again. Next, scenes as compositions. For motion graphics, it's very important to be able to build something very complex somewhere else, then place it in your main scene and instance it, in space and in time, and scale it up and down. You reach a lot of complexity very quickly that way. If you look at After Effects, this is how it works: this is a composition where you set up this complex high-tech ring animation, and then you just instance it in your main scene, offset it, scale it, push it into the depth of field, and offset the instances in time so they don't all play at once. And immediately you have something futuristic-looking. Can we do that in Blender right now? Not easily. But Blender has scenes, which not many people use, and scenes could be our compositions, right? It's not really possible now, but with the rise of the Eevee render engine and real-time rendering, couldn't a feed from a camera in scene 2, pointed at something awesome we made, be a texture in scene 1 that we can then offset in time and do whatever with? It would speed up the process immensely. Grouped animation instancing is a similar thing. I have a simple motion graphics element here, just a curve that loops, and I want to do something with it. I can instance it, that's not a problem: Alt-D, scale it up. And if I want to offset the instances in time, so it's not as boring as this, I need to go into the NLA Editor, reveal all this stuff, reveal the actions, and move each action around. Now they fire off in sequence. Much more interesting, right? I can also make copies of this object and offset their curves, but then they're no longer instances: if I want to go back and change my mind, there's no way to do it. And what if I have something more complex?
Not a single object, but a group of objects doing something interesting, and I want to do the same thing with them. I can instance the group, that's not a problem at all. But there's no way to offset the instances in time. This is boring, this is nothing. We need to be able to do this. Some sort of group animation that groups the actions of all the child objects would allow us to easily create rich and complex motion graphics in Blender. For this shot, I again had to convert them to separate objects, and look at the mess that made in the curves: there are now 50 objects, up to BezierCurve.050 I believe, so it's a mess if I want to change anything later on. And finally, texture effects. Wouldn't it be awesome if we could apply Compositor-style effects directly to textures, while they are textures on a plane or an object? Say we have this texture: why shouldn't we be able to put a pixelate, find edges, or blur effect on it? You could easily create something like this: the logo being formed from a drawing first, with a pixelation effect gaining resolution as it forms, stuff like that. So yeah, that's it; this presentation is nearly at its end. I keep seeing a lot of great motion graphics artists posting their amazing work on Twitter with comments like, "I'm testing Animation Nodes", "I'm testing something with spheres, what do you guys think?" Testing this, testing that, trying out this, trying out that. No one says, "I did this for a client" or "I got paid to do this". For us to grow as a community of motion graphics artists using Blender, we need professionals. What we hope to do is turn our platform into a true marketplace, where anyone with motion graphics skills in Blender can upload their template and earn a percentage of each sale on our platform. We would immediately turn a lot of hobbyists into professionals.
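The grouped animation instancing request above can be made concrete with a tiny model: one shared action, many instances that differ only in a time offset, so editing the action once updates every instance. This is my illustration of the requested behaviour, not anything that exists in Blender.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GroupInstance:
    """One instance of a shared 'group action': the animation function is
    shared (change it once, every instance updates); only the time offset
    differs per instance -- the workflow the talk asks Blender to support."""
    action: Callable[[float], float]   # shared f(time) -> animated value
    time_offset: float

    def evaluate(self, t: float) -> float:
        return self.action(t - self.time_offset)

def loop(t: float) -> float:
    """A simple looping curve over a 10-frame cycle."""
    return (t % 10.0) / 10.0

# Three instances firing off in sequence instead of all playing at once.
instances: List[GroupInstance] = [
    GroupInstance(loop, off) for off in (0.0, 2.0, 4.0)
]
values = [inst.evaluate(4.0) for inst in instances]
```

Contrast this with the current workaround of making real copies and shifting their F-Curves: there the action is duplicated per object, so a later edit has to be repeated fifty times.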
We've already developed an add-on that simplifies uploading a motion graphics template to our platform. You basically set everything up, click publish, and you get a finished zip file that you upload to our platform. If it all checks out on the platform, that's it, you're done. It's called Videos Creators, a working title, we're not married to it. So that's pretty much the end of this presentation. Feel free to drag me away from the free coffee and chocolate there in the lobby to discuss anything I just talked about. You've been a wonderful audience. Keep blending, and I'll see you around. Thank you very much.