This was meant to be a very boring talk, because I will basically only tell you what's already online and available; it has been documented by hard-working developers. But developers hate writing documentation and people hate reading documentation, right? So I'll give you a short summary of what's happening with 2.8, mostly to show what we have been doing and how things work. I think most of everyone here knows what 2.8 is about, so I'm not going to bother you with all of that, and we hope it will be finished next year. You know, it's always very busy, but by the summer I hope we have something that could be a beta, a working release. I don't know, we'll find out. In yesterday's talk I mentioned a whole lot of topics that are part of the 2.8 project. We take a break of one or two years in our release cycle: 2.79 was more like an update of all the projects that were already going, and now we have a two-year break to work on all the big things that we otherwise can never do or find time for. That is 2.8, all kinds of stuff that you really have to do once. It might break compatibility, things stop working if you load old files, but as Barnard said, we have all kinds of cool, really interesting things. I'm not going to talk about everything here; I will limit it to four topics, the ones I know most about, but also the work that has already been started. Grease Pencil, for example, had its own presentation yesterday, and the stuff that still has to be fleshed out would take more than this one slide anyway. So this is an old slide; I didn't want to have it in, but this is fine. It shows the stuff we are not going to talk about, that's why I didn't want that slide. Okay. Layers and collections. One of the new concepts in Blender is the whole idea of layers and collections.
In the old Blender, a layer was already a confusing thing. You had those little buttons in the header, twenty of them in a row, and you could press them to control visibility. Those buttons are actually bits: a byte is 8 bits, and in one byte you can store 8 values. In the days that I was writing Blender you had to be very cost-effective, you had computers with 8 megabytes of memory, so we put everything in bits; the more you could pack into bits the better. And I called those little bits layers, right? And they are some kind of layers, because you can turn visibility on or off per bit. We stuck with that whole idea for 25 years, basically. I started this in 1994, it's 2017 now, so 23 years. Jesus Christ, Blender is getting old. So we have this old concept, and we need new stuff: people want to have named layers. And the idea of a layer that people got used to, like the render layer, was a new concept added in 2.5, where a layer more or less means a picture. You have a scene with all the objects, and a layer is a selection of what you render into a picture, with all kinds of options for passes and whatever. You can have multiple layers from one scene, send them to the compositor, combine them, and you have your final image. That layer system is now, in 2.8, something completely new, and what we called the render layer is going to be the standard for viewports too.
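The old bit-based layer scheme described above can be sketched in a few lines of Python. This is just an illustration of the idea (one integer, one bit per layer), not Blender's actual code:

```python
# Old-style Blender layers: one integer, one bit per layer.
# With 20 layer buttons, layers 1..20 map to bits 0..19.

def layer_mask(*layers):
    """Build a bitmask from 1-based layer numbers."""
    mask = 0
    for n in layers:
        mask |= 1 << (n - 1)
    return mask

def is_visible(object_layers, visible_layers):
    """An object is visible if it shares at least one layer bit."""
    return (object_layers & visible_layers) != 0

cube = layer_mask(1, 5)        # the cube lives on layers 1 and 5
view = layer_mask(5, 6, 7)     # the viewport shows layers 5 to 7
assert is_visible(cube, view)  # they share layer 5
```

Twenty layers fit comfortably in a 32-bit integer, which is why the scheme was so memory-cheap, and also why layers could never have names.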
To go one step back, I want to explain something that has to be very clear. In Blender, everything is in a database, more like a flat file system. If you add a cube, or a monkey, or an object, or a curve, or anything, they're all little blocks of data, stored centrally, ordered alphabetically and by type. That's what this picture shows: that is the main database, that's where everything is. The next step is that you organize that into a scene. The scene in Blender 2.7 and before was just a flat list: you added all the objects there, and that was it. Per object you could set whether it was visible or not, and that was then called the layer. In 2.8, the organization of objects in a scene is done through collections. Everything is at first in the master collection, a special collection which always has everything in it. So if you want to work quickly and you don't mind, because you only work on one cube and one plane, you add them in Blender, you press render, and you get your picture; everything just sits in the master collection. But if you want to do more complicated things, you can use collections to their full power. Collections can be like Venn diagrams: in this example, the monkey is in every collection, but the camera is only in collection three, and the lamp is in collections three and two. This is how you construct a scene. Give them names, and you have unlimited amounts of collections where you can put your data; that's how you structure things. Furthermore, collections can also contain collections, so it's a hierarchical system. This is unlimited: you can have only one or two, or you can make twenty or fifty or a thousand. We leave that to users to decide.
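The structure described here, collections that hold objects and child collections, with one object allowed in several collections at once, can be modelled as a small Python sketch. The class and names are made up for illustration; this is not the real `bpy` API:

```python
# Toy model of 2.8 collections: a collection holds objects and
# child collections; one object may appear in several collections.
class Collection:
    def __init__(self, name):
        self.name = name
        self.objects = set()
        self.children = []

    def all_objects(self):
        """Everything in this collection, recursively."""
        objs = set(self.objects)
        for child in self.children:
            objs |= child.all_objects()
        return objs

master = Collection("Master")            # always contains everything
chars = Collection("Characters")
background = Collection("Background")
master.children += [chars, background]

chars.objects.add("Monkey")
background.objects.add("Monkey")         # same object, two collections
background.objects.add("Rocks")

assert master.all_objects() == {"Monkey", "Rocks"}
```

Note that membership is a set relation, not ownership: deleting a collection would not delete the objects, which still live in the central database.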
Now once you have collections, you put them to use in the layers. So this slide, sorry, this is one slide too soon, shows how the master collection and the other collections are stored in the scene. The scene has a master collection, and then you have collection one, collection two; the master collection refers to an object which is in the database. Then, if you want to render, the scene says: OK, I've got layers. For the sake of communication, because it's a new type of layer, we call them view layers in the documentation. The view layers are similar to the render layers, but the view layers are now also how things work in viewports. So you have a scene, and you say: I want to have a layer, and another layer, and another layer, and for every layer you can assign the collections you want to be part of that layer. It can be very simple: say I have a collection with all the characters and a collection with the background; I put all the characters in layer one, and the background in layer two. I render them separately, send them to the compositor, and I'm done. Or, for the viewport, you can make more elaborate sub-collections based on your workflow: you can have collections called all the trees, all the plants, all the rocks, and together that is the background. That's how you can organize your stuff for modeling. So the picture you have to get of layers is: they are used to render things, and they are used for display. A layer is not an animation system, for example; you have to keep that separate. In a scene you define the animation: you have characters, you have everything set up for the scene to run. The layers are then the visible parts of that scene, rendered the way you want to work on them, or the way you want to render them for final compositing, or whatever you want to do.
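The character/background split described above reduces to a very simple mapping: a view layer is nothing more than a selection of the scene's collections. A minimal sketch, with all names invented for the example:

```python
# A view layer is just a selection of collections from the scene.
scene_collections = {
    "Characters": {"Hero", "Sidekick"},
    "Trees": {"Oak", "Pine"},
    "Rocks": {"Boulder"},
}

view_layers = {
    "characters": ["Characters"],
    "background": ["Trees", "Rocks"],
}

def layer_objects(layer):
    """All objects visible in a given view layer."""
    objs = set()
    for name in view_layers[layer]:
        objs |= scene_collections[name]
    return objs

assert layer_objects("characters") == {"Hero", "Sidekick"}
assert layer_objects("background") == {"Oak", "Pine", "Boulder"}
```

The scene data itself is never duplicated; two layers that both include "Trees" refer to the same collection.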
But once you have a scene animated, you cannot say: OK, I want one layer on frame one and another layer on frame five, and this layer should have the smoke sim starting from that frame. That's not possible with layers. You have one scene, one animation system, and that's what you apply to your layers. The settings for the layers are defined in the workspace. I'll come back to that; it's a new thing in Blender, like the old screen. The settings for layers are always for the whole editor, right? So you have your settings, how things are rendered, for example global illumination, the amount of samples, Eevee presets, colors, matcaps, whatever the drawing has as an option, and you set that for all of the layers per workspace. If you want a Cycles render, you create a workspace for Cycles with the layers. The same layers can then be used in another workspace for Eevee, or for modeling, or whatever you want. What you can also do is make sure that your workspace uses the settings you render with, which was also always a problem in Blender: you have Cycles, and you have your viewport, and if you press F12 you can see what happens in the render, but then some parts in the viewport disappeared or stopped working, because the viewport uses different settings than the render engine. That was always a bit fuzzy and not really in control. Those things will now work very nicely. The last thing to understand is that the layers use the collections. The collections are not part of the layers; collections are part of the scene. But the layers are where you manage it. So the outliner will have the collection editor, and then you get a little layer editor that will say: layer 1, collections 1 and 2; layer 2, collections 3 and 4. And you can set which collections are on and off, or visible, or whatever you want them to be. Plus, the layers then support overrides.
So if you have a collection with trees, you can still say: I want the trees in this layer to only draw as white wires. And then, if the render engine supports it, it has a couple of options for overrides, and it will draw the trees in white, or blue, or whatever you set. Another new concept is the overlay engine, because we separate engines now into two types. One is Eevee, or Cycles, or whatever is coming: the workspace engine, more or less like what the old viewport was drawing. That engine will draw a complete representation of what you want to see. And people will be able to create new engines. So you can have a PBR engine, a non-photorealistic engine, engines in ways you don't know yet; you can add them, because we try to separate the functionality, like selecting, object mode, edit mode, hair editing, all the tools we have, from the engine itself. The overlay engine is what will give you the interaction. This is what some people could already see in an early demo a couple of months ago, where somebody was using Cycles rendering and there were object outlines overlaid on it. So you can render in Cycles, and you can select stuff, and you can see: ah, I'm selecting things. You rotate the view, you see all the outlines going around in real time, you stop your view, and Cycles comes back. That's what you see in this screenshot, one of the demos. There's a very nice outline system that's been developed to clearly show, in a rendering, which object you have selected or are working on. The overlay engine also handles all the manipulators and other stuff you need to be able to work with Blender. That means that once this overlay engine is functional, and it's getting there, it doesn't matter whether you render in Eevee, or in the Clay render, or an NPR render, or even don't render at all.
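The white-wire trees example at the start of this section comes down to layering a per-layer override on top of the scene's base draw settings. A small sketch under assumed names (none of these dictionaries exist in Blender; this is just the lookup order):

```python
# Per-layer collection overrides: the scene data stays untouched;
# each view layer stores its own display overrides per collection.
base_draw = {"Trees": {"mode": "textured", "color": "green"}}

layer_overrides = {
    "layout_layer": {"Trees": {"mode": "wire", "color": "white"}},
    "render_layer": {},  # no overrides: use the scene settings
}

def effective_draw(layer, collection):
    """Scene settings, patched by whatever the layer overrides."""
    settings = dict(base_draw[collection])
    settings.update(layer_overrides[layer].get(collection, {}))
    return settings

assert effective_draw("layout_layer", "Trees")["color"] == "white"
assert effective_draw("render_layer", "Trees")["color"] == "green"
```

The key property is that the override is resolved at draw time, so the same collection can look completely different in two layers without the scene being touched.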
You could have a white textured background and use the simple outlines to manipulate the scene; the sky is the limit. OK, we go back to workspaces. Is this still easy to follow? It's all new concepts, but I'm getting used to it. The workspace is basically a new name for what we had in Blender as a screen. You start Blender, you have screen layout one, a screen layout for modeling, one for video editing, and that's it. What we want to do is allow users to make multiple screen layouts and combine them into one workspace. Because often you want to switch between screen layouts but share all the properties between them. And even better, you want to have little previews of the layouts in the top bar, and quickly switch between them, your modeling layout or whatever, so you have very fast access to them. All those layouts in the workspace will share the same properties. So this diagram, and I'm not sure if it's readable from this distance, shows the new workspace concept. The workspaces use the screens, but a lot of properties from the old screens, and the viewports especially, have been moved to the workspaces. What you have to get used to, and this is already working in Blender, is that the mode, the drawing mode, and the editing mode are going to be a global thing. You set it in the top bar, instead of in the viewport, which was usually confusing. The top bar says you're in edit mode, mesh edit mode. And that mode will be more persistent. That means if you select another object, it would simply go to edit mode. You can go to another workspace which is in animation pose mode or so, and when you switch, you will always have pose mode ready for the characters; you go back, and you'll have edit mode ready. The pose mode can have a different drawing or anything, because you want to see things in the way
animators like to look at them, and edit mode can draw everything in nice clay and grey, because you want to see all the vertices and lines better. That's a bit how the workspaces are meant to work. There's a new top bar being worked on. This still needs design work, and artists to come on board to help with mockups and make sure the visuals and usability of the system work. But the top bar will basically set the scene, the workspace you are working in, and the layer you work on. There's only one layer for the whole window in Blender, the whole workspace; there's only one engine, so the whole thing will always be in one engine; and there's one mode. And those things can be switched quickly. Also very important is to separate what you do with the outliner, the data and the editing, from what you do with the layers, how the layers are working. The one is more on the viewport side, and the other is more on the scene side. In the future, workspaces will have more control. This is something that has been put a little bit at the bottom of the to-do list, but in theory, per workspace, you could also set which addons, key maps, or UI scripts are active. Because addons are also becoming a little bit of a problem in Blender: there are a lot of good addons, and suddenly you have to go to the addon manager to turn them on and off and so on. It is much easier to say: well, make a workspace for my tree software or something, and when I'm in that workspace the tree editor is on. Then you move to your normal modeling, and you don't have all the weirdness and shortcut overrides and modal operators or whatever has been coded in the addon. That's important for workflow, but it will be worked on a little bit later. The last new concept, which is very powerful, is the template. A template is, like, a blend file, or a group of blend files, that's still being discussed,
plus Python scripts; the Python scripts can combine the blend files. The template will allow you to configure your Blender from scratch, including what buttons, shortcuts, key maps, menus, and editors you want to have available. The template will also allow you to add an asset system. You can say: well, I want to have a number of preset materials and preset models, and when I start Blender I want an interface that shows me the 20 monkeys and the 20 colors and the 3D window, and that's it. I drop monkeys into the viewport, I put colors on them, I rotate the view, and I'm happy. That's how you want to use the software. That template, you could call it an application template, is a way to configure Blender for a specific task. The monkey editor is probably really fun to give to kids, to have fun with 3D. But you could also think of an editor optimal for people who work in games, a level editor, or for people doing architectural planning, 3D printing, education and training. And that's what the Blender 101 project is about. The template and how to configure Blender are very closely related to what we call Blender 101. To be clear, there is not one Blender 101; there's not going to be one version of Blender called 101. It's a concept. As a proof of concept, we're going to build a version for 3D printing first. That's what our sponsor, Aleph Objects, is paying us for. They would like to have a version of Blender which is more accessible for occasional users who want to do some 3D printing. Blender is really good for 3D printing, but Blender has 95,000 other options, and then you press the game engine button and stuff confuses you. So if you can remove all of that, and only leave the options for sculpting, a bit of modeling, importing and exporting, then you're done, and you have a 3D printing configuration. That's what I would like to test, and that's what the templates will do.
OK, new topic: the asset manager. Assets in Blender have always been a bit of a weak thing, right? The only thing you can do in Blender is load a file, and you have blend files where you can link things. And if you have too many blend files and you link too many things, you lose track, and then you think: oh my God, what am I doing here, I'm losing stuff, things cannot be found. That's what the whole asset manager project is for: to give you better tools and interfaces to find stuff, to navigate what you are doing. And to manage it, because you don't always want every asset on the planet to be included in your project. You should be able to make asset collections, groups of assets that belong together, and say: look, we're going to make a movie, we'll work on it for six months, and we're going to have a number of preset, pre-built, very well configured materials and textures which we are going to use for our movie. That's what can be called assets. To say a little bit more about assets, I'm going to invite Bastien to come to the front with the mic. You can find documentation about his work in the wiki. I tried to summarize it in a few points, but it's still way too much to cover in one or two minutes. But maybe you can give a very quick five-line summary of where you are now and what's going to happen. OK, so, using the list here: the asset management core code is mostly done. I mean, I consider it done; it probably still has issues to be fixed and everything. But right now I'm mostly working on Amber, which is an asset engine. The concept is that in the Blender core code we only create a framework that allows pretty much everybody who knows a bit of Python to create their own asset engine. The idea is to be able to integrate it in any kind of pipeline or special use case. For example, you could have an asset engine which is linked to an online repository of assets.
There are websites doing that, so they could directly link the assets into Blender. Or, and this is what Amber is aimed to be, a small asset engine on a local file system, for single artists or very small studios and so on. And there will probably also be an asset engine for the open pipeline of the Blender studio and so on. So that's the concept. But the Amber engine is part of Blender itself? Yes, the Amber engine will be released with Blender. So using Amber, studios or individuals can connect their own databases to it? If you have whatever commercial database or SQL system, Amber will make it work? Not Amber. Amber is an asset engine; you have to create your own asset engine to connect to other sources of data. Amber is an implementation of the asset engine, kind of a demonstration of what is possible to do with the API. The core code has an API, and you can see how it works. Good. And then the override, I think, the other exciting thing. Well, static override: the idea is basically to replace the current proxy system, which is kind of working for what it was designed for initially. Anyone know what a proxy is in Blender? One, two, five... 25. OK. So a proxy: when you link an object in Blender, you can't edit it, because it's coming from another blend file. You're just kind of borrowing it. So to be able to animate a character, for example, you have to create a proxy of its armature to be able to edit it locally. And right now proxy is kind of a hacky system. So the goal is to replace it with, well, I mean, it's been working for years, so it's a hack which is working. It's a really bad hack system. I did it in a few days and pushed it to master. And we did Elephants Dream, I think, with it. Big Buck Bunny. That was over ten years ago. So it survived for a long time. It survived for over ten years, so it was a successful hack.
Lots of movies have been made with it. But it's a very difficult system to handle. The override will allow you to make this possible for anything. So you can still have all your library files, all the stuff; you can have other people working on the backgrounds and the characters or materials or whatever you want. You link it in, you reference it, you don't load it, and then you can say: I want to have the color local, and the property for the amount of samples, and I want to have the bones local from this character, or not. I mean, anything you want can be part of it. Exactly, that's the idea: you can selectively override any kind of property on any kind of data-block. Fantastic, thank you. So the other big topic we work on, and we work on too many things, but this is very important: the dependency graph. It's the most popular topic as well, among developers and the Blender community, because everybody is telling me all the time: Ton, when do we get a new dependency graph? So what is it, actually? What the dependency graph does is everything in Blender. If you don't have a good dependency graph, your software is simply not going to work. Anything you do in Blender, when you change something of value, or you want to animate something, or you want to do games, or whatever, anything that is time-based or driven by changes, needs the dependency graph. Because if something changes, like the color of a light, you want to send a signal to the viewport to redraw, or you want to re-render, or you want to update buttons, and so on. Blender is complicated, and that also means the dependencies you can create are horribly complicated. Especially if you want to do animation, character animation, the dependencies inside a human rig are insane, and people want to be able to update that, manage it, and do it better.
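The lamp-color example above is the essence of a dependency graph: when a data-block changes, everything that depends on it, directly or indirectly, must be tagged for update. A minimal sketch of that propagation, with all node names invented for illustration:

```python
# Minimal dependency graph: when a data-block changes, everything
# that depends on it (directly or indirectly) is tagged for update.
from collections import defaultdict

deps = defaultdict(set)   # node -> nodes that depend on it

def add_dependency(target, depends_on):
    deps[depends_on].add(target)

def tag_update(node, tagged=None):
    """Tag a node and, transitively, all of its users."""
    if tagged is None:
        tagged = set()
    if node in tagged:
        return tagged
    tagged.add(node)
    for user in deps[node]:
        tag_update(user, tagged)
    return tagged

add_dependency("viewport", "lamp_color")
add_dependency("render", "lamp_color")
add_dependency("compositor", "render")

# Changing the lamp color re-triggers viewport, render, compositor.
assert tag_update("lamp_color") == {
    "lamp_color", "viewport", "render", "compositor"
}
```

The real graph is far larger and evaluated in parallel, but the principle is the same: track who depends on whom, and only recompute what a change actually touches.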
What you see in this diagram is a little bit of the design, the big picture: how Blender already works, and how we want it to work in the 2.8 period. We clearly separate a couple of data levels. It's important to understand that the DNA data, DNA being the Blender file format, the way we encode data, is the stuff that you save, and we try to keep that as compact as possible. Then you have scene data. The scene data is managed, basically, by the dependency graph. The dependency graph makes sure that all the data you have in your files is copied, or duplicated, or managed in a way that lets you create your animations, or your edits, and do whatever you want. You can have your physics caches coming in, or other things, and on the other side you have your images and blend files coming in. After the scene data is updated, you want render data, because basically every render engine might have a different idea of what kind of data it wants. If you have a wireframe engine, you can stick to lines. If you want Eevee, you need a number of other things, probes and such, to be able to render; or the game engine, or sculpt mode, or whatever. So the system will be able to manage several levels of render data, so that it can also be created in parallel. You have your scene, you can have multiple scenes, and for one scene you can also have multiple representations of that scene. And to make that possible, we have COW. That means copy-on-write. To talk about that, I want to invite Sergey Sharybin to come over and say a few words. So this is your most popular development project since Cycles, right? Well, yes. Sort of. So, you started working on the COW. How's the COW going? It's going according to planning, more or less. There is a wiki page with the planning, where I try to keep track of what is being worked on, what's in progress, what's done and what's not.
But basically, what people know, like Eevee, you can use it already to draw scenes and make things, but there's no animation working. Yeah, that is correct. That is being worked on. The first step is to make the dependency graph fully parallel. The biggest bottleneck is who owns the dependency graph, and that's going to change in 2.8; it's being changed right now, and that will unlock some more features, like layer overrides and such. After that, we can enable copy-on-write, because there are two conflicting systems: copy-on-write and the current ownership of the dependency graph. So after we finish the patch to change the ownership, animation will come basically for free. But can you say, in your own words, why is copy-on-write important? What do we need it for? It's needed to keep artists happy. That's as best as I can explain it. So what is the most exciting thing that will be the result of copy-on-write? Two windows, same layer, but different time, for example. Or two windows, same scene, different workspaces, with two different layers showing the same object with different overrides on top. For example, I can have the same cube be red in one window and blue in the second window. Of course, that's not that useful in terms of color, but think about simplification options: I want a simple clay engine view in the right window, just to get a quick idea of what it is, while I see the same exact cube in the left window in Eevee, with all the subdivision surfaces and whatever. That's where you need COW: to have the same data represented in multiple states. So you should be able to continue working while all the processes are working on the same data; that's why you have to copy everything. Copy-on-write means that when you change something, you make a complete copy, and you work on the copy until it's done, and then you try to merge things back, somehow.
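The copy-on-write idea just described, readers share the original data and the first write makes a private copy, can be shown in miniature. The `CowRef` class is a made-up illustration, not Blender's implementation:

```python
# Copy-on-write in miniature: readers share the original data;
# the first write makes a private copy, the original stays intact.
import copy

class CowRef:
    def __init__(self, shared):
        self._shared = shared
        self._local = None        # created on first write

    def read(self):
        return self._local if self._local is not None else self._shared

    def write(self, key, value):
        if self._local is None:
            self._local = copy.deepcopy(self._shared)  # copy on write
        self._local[key] = value

cube = {"color": "grey", "frame": 1}
win_a, win_b = CowRef(cube), CowRef(cube)

win_b.write("frame", 5)            # window B jumps to another frame
assert win_a.read()["frame"] == 1  # window A still sees the original
assert win_b.read()["frame"] == 5
assert cube["frame"] == 1          # shared data untouched
```

This is exactly the two-windows scenario: both windows reference one cube, yet each can evaluate it in its own state, and nothing is copied until a window actually diverges.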
Yeah, somehow. And that's why it's so complicated. Yeah, OK. But you think there is a light at the end of the tunnel? The end of the tunnel is around January. Oh, come on, that's way too late. So what are you going to do this month, in November, then? In November? I don't know, there is a planning. We have the planning shown here. But when can we see the first Eevee animation with characters walking around? Oh yeah, that's for sure; that should come in November. But then there will be some work spent on porting all the modifiers to the new concept of copy-on-write, and that's going to take quite some time. Maybe we'll have some help from someone to port modifiers over, and that will mean less work for me, and faster. It's really a fun job, and simple, so everyone can do it. Oh yeah, it is very fun. Yeah, I heard that. It is very, very fun. If you're ever bored, just contact me. OK, thank you. You're welcome. So that's it. I'm a little bit over time, so thank you for your attention. I hope you enjoyed the talk, and we move on to the lightning talks.