Yeah, welcome to my presentation about Eevee and the next generation of graphics for Blender 2.8. We saw so many talks in the last days, and everyone afterwards was coming to me and saying, we need Eevee, would you do a presentation, please, quick. And so I was literally preparing this presentation until the very last minute. So if there are slides that surprise me, you'll see my face looking like, what is going on? I wanted to start with a nice picture of the viewport. That is the current state of the viewport, and you'll see so many different demos and pictures in this presentation that I just don't want to bore you, so let's start right off.

Who am I? I'm Thomas Beck; that's my Twitter account and my mail, if you'd like to mail me. I'm the CEO of Plasma Solutions. We do design, we author stuff, like documentation for Blender, and we bring out books; the next book is starting to happen in, I mean, three months or so. I also create the Blender developer sneak peeks, but only when my son isn't demanding that I glue a little lantern for him, so that hasn't been that often recently.

The topics of this talk are: What is this Eevee thing? I think you know what it is, but I'll try to give you a bit more insight into it. Then: why do we need and want it? Which features do we support? Which features do we plan to support? And I'll show you some demos and screencasts.

Eevee is the Extra Easy Virtual Environment Engine, which is not extra easy to say, but that is what's written on the web page, and I don't know if any of you have read that. How many of you knew what Eevee stood for before this slide? See? That was exactly how it went when I prepared my presentation. It is developed by Clément Foucault. He's outside somewhere, so if the talk goes well, give him a big applause. And it draws the complete Blender 2.8 viewport; what that means will be explained in the presentation. Yeah. So we've got a viewport already, right? Why do we need a new one, and especially Eevee?
Because our current viewport is based on a fixed-function pipeline. How many of you know what a fixed-function pipeline is? Maybe I should explain it. In the early days, you had some color intensities, you would write them into a big array and then send it to the screen. That was easy, but it was very slow. Then graphics cards introduced optimized methods for that, and those methods were fixed into the GPU. Those methods could be configured, but they couldn't be changed. The shader system that came afterwards was intended to replace those fixed functions, which you could only configure, with shaders. And shaders can do anything, even calculating, or mining bitcoins, if you'd like to.

More and more hacks and workarounds have been introduced into Blender, especially in the post-processing panel. The post-processing panel in 2.79 has features like depth of field and some other fancy stuff, but the problem is that all those features are hacks, and if you'd like to do them better and even faster, that wouldn't be possible with our current architecture. The legacy draw code that we have in Blender is ancient, and for the most part it is a performance bottleneck. When you've got many polygons and you'd like to draw them, you'd like to stuff them into one call. That would be best, but that is not easily possible with our current architecture, and so we need a new one.

So let's first see what's possible with real-time engines. This one is Unity, and that is a new movie from their YouTube channel. I'll not play it completely. That's it. That's a pretty cool demo, and it shows pretty nicely what is possible with the current shading model that we've got in all our new GPUs. And this is rendered in real time, I think at 30 frames per second on a GTX 1080, so a consumer card can render such things in real time.
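To make the contrast concrete, here is a toy sketch in Python (my own illustration, not Blender's actual drawing code): the "big array" era, where the CPU writes every pixel with one fixed, merely configurable operation, versus the shader model, where an arbitrary programmable function decides each pixel.

```python
# Toy illustration of fixed-function drawing versus programmable shading.
# All names here are made up for the example.

WIDTH, HEIGHT = 320, 240

def draw_fixed(framebuffer, color):
    """Early days: write one configured color intensity into a big array."""
    for i in range(len(framebuffer)):
        framebuffer[i] = color  # every pixel, the same fixed operation

def draw_shaded(framebuffer, shader):
    """Shader model: an arbitrary function decides each pixel's color."""
    for y in range(HEIGHT):
        for x in range(WIDTH):
            framebuffer[y * WIDTH + x] = shader(x, y)

def gradient_shader(x, y):
    """Anything is possible per pixel, e.g. a horizontal red gradient."""
    return (int(255 * x / (WIDTH - 1)), 0, 0)

fb = [(0, 0, 0)] * (WIDTH * HEIGHT)
draw_fixed(fb, (255, 255, 255))   # fixed function: configure, don't program
draw_shaded(fb, gradient_shader)  # programmable: replace the function itself
print(fb[0], fb[WIDTH - 1])       # leftmost vs rightmost pixel of row 0
```

On a GPU both loops run massively in parallel, which is exactly why the CPU-side "big array" approach was so slow in comparison.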
Typically, a Blender user would react like that if he sees this in another piece of software, and then it's followed by this, mostly targeted at developers. And most developers react like this, and then like this, but Clément didn't. Clément was hiding for two years in a cave and programmed so much, and he came up with Eevee. And we need new software technologies to implement such things, because for his Eevee development he was forking Blender and changing all kinds of stuff, and we then had to decide how we bring all of that into our master Blender. Blender 2.8 is a perfect catch for that, because with Blender 2.8 we said we can't rely on all this old technology, we have to remove things, we have to add new stuff. So there is a new abstraction layer coming for Blender 2.8, maybe for Vulkan later, but that is on the future horizon. The abstraction layer is capable of drawing everything, and very fast. The new minimum requirement for OpenGL, which drives our graphics pipeline, is now 3.3 core, which should be supported by all graphics cards that came out after March 2010.

Then there are viewport and render engines. I don't know if you've tried to read all the documentation that is online, but I tried, and "engine" is a really overused word. Everything is an engine, and especially in Blender now everything is an engine, so I'd like to explain the term. Viewport engines and render engines are completely the same in the new Blender 2.8 viewport. So when you hit F12 later on (right now it's not possible), the exact same engine renders your final render as your current viewport, and that is a very cool feature, because we then don't have to develop two things. We just have to develop one and adapt it to the render. An engine draws everything in its own way. So we have a specific task, like, I'd like to have fancy shadows, and then I develop an engine for that.
I'd like to have an object mode or an edit mode, and I develop an engine for that. And when the viewport, for example, is drawn by an engine, you can take other engines and combine them, so you can overlay, for example, selection outlines onto your viewport. Maybe you've seen some of those demos on YouTube where there is a Cycles engine running with a selection overlay on top; all those things are possible due to those mechanics. Performance optimizations can be done very easily when you have an engine specifically tailored to a task, so that is possible as well. Every engine is a black box: you put data in and you get data out, and when you'd like to implement a new engine, that is easily possible; just build on the last one and do whatever you like.

Our plan for Blender 2.8 is to have three engines. The workbench engine will enable you to model. It should do that with nice wireframes; eventually we may implement a better system with auto-coloring of objects or something like that, but we will definitely come to a solution that will please everyone in the modeling process. The next one is the clay engine. The clay engine is designed for fast sculpting and model previews and will feature nice cavity maps, cavity passes, SSAO passes, and so on. Then Eevee, the PBR engine of Blender, is a high-quality engine. The problem with all those real-time engines is that you have to make compromises when you try to render graphics on screen as a game developer. Most of the time you'd like to have 30 FPS or 60 FPS, but we don't strive for that. We can accept a performance of, for example, 20 frames or something like that, because we don't strive for real time. We strive for high quality and very short render times, and so that is our engine. We have the metallic/roughness workflow. This shader is already in 2.79, so if you've used it, this will be no difference. And we support real-time rendering.
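The engine idea described above can be sketched in a few lines of Python (a hypothetical API for illustration, not Blender's real draw manager): each engine is a black box that turns scene data into its own draw output, and the viewport composites several engines so that, for example, selection outlines overlay a shaded render.

```python
# Toy sketch of composable viewport engines; all names are invented.

class Engine:
    """Black box: scene data goes in, a list of draw commands comes out."""
    def draw(self, scene):
        raise NotImplementedError

class ShadingEngine(Engine):
    """Draws every object in the scene (stands in for Eevee/clay/workbench)."""
    def draw(self, scene):
        return [f"shade {obj}" for obj in scene["objects"]]

class OutlineEngine(Engine):
    """Draws only selection outlines, overlaid on whatever came before."""
    def draw(self, scene):
        return [f"outline {obj}" for obj in scene["selected"]]

def draw_viewport(scene, engines):
    """Combine engines in order; later engines overlay earlier ones."""
    commands = []
    for engine in engines:
        commands.extend(engine.draw(scene))
    return commands

scene = {"objects": ["Cube", "Suzanne"], "selected": ["Suzanne"]}
out = draw_viewport(scene, [ShadingEngine(), OutlineEngine()])
print(out)
```

Because each engine only knows its own task, you can swap the shading engine for another one, or optimize one engine in isolation, without touching the overlays.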
As I said, we strive for quality and high-end features, and those features are presented by small clips that I prepared right before this presentation, so I hope they will play now.

Our completed tasks are PBR and indirect lighting. You can see it's easy to switch the colors, to switch between metallic and diffuse, everything live in the viewport, fast. You can have clearcoat layers, which are specular or with a bit of roughness, and all those things you would expect from such an engine.

Motion blur is maybe not that well visible in this clip, but it is already implemented, and we've got very nice motion blur curves that we can tailor to our needs. That is all live in the viewport. That runs on my laptop with a 1060 graphics card at, I would say, 20 frames per second.

Bloom: all those overdone effects on the Internet where you have lights that are blooming, and I don't want to show any more; it's possible, and it works really well.

Depth of field: if you've got several objects stacked together, then you can just enable depth of field, and you have a nice live view of the effect; select an object, and you've got depth of field.

This one is screen space reflections, which target a specific problem that you'll see in a second. There's a part here that is occluded by geometry, and if you'd like to have that back, then you can enable specific effects like screen space reflections, and then you see that reflection coming through. Together with the bloom effect, it's quite nice.

Then we've got transparency and refraction: just add a glass shader, and you're done. You can even have some roughness in there, and then effects like frosted glass. I don't know if I show it at the end, but let's just play it. So it's pretty easy.

Then this is volumetric lighting, all real-time in the viewport, but "real-time" is maybe a bit of an overstatement.
It's running at three frames per second currently, so it's a lot to compute, but it looks fabulous if you render it out, and one frame needed, I think, four seconds or so.

Volumetric objects were just added two days ago. A volume is currently limited to the bounding box of the object, and what you can do with it is define spots in your scene where you have your volumetric effects. Normally you would do that by enabling it for your complete scene, but that is an extremely big waste of resources, and so you can now do it locally. And there are now smoke simulations which can be rendered, just like that in the OpenGL viewport, with super nice shaders. I couldn't test it, but I trust Clément, who said it works, so maybe you should try it.

Now, what happens when you combine all those effects in Blender and open the super nice demo scene by Glenn Melenhorst? Then you've got this: volumetric lights, screen space reflections, everything lit up perfectly with many PBR shaders, and everything in real time, more or less.

So now we're coming to the development targets, and the development targets are always very interesting and at the same time very dangerous for us, because when we say that something is coming, you ask whether it's coming now or in four weeks. So I can say the first half of 2018 will be focused on this: subsurface scattering will be implemented maybe in the next week or the next two, so don't blame me if it isn't, but Clément said so.

Lazy shader compilation addresses what is currently a big problem when you load a file: when you've got many shaders in there, your file may load for 20 seconds or so, and then 40 seconds, and then you think Blender has crashed, and then it loads 20 more. The problem is that all those shaders are compiled before the user interface reacts again, and then they all get linked together, and that takes time.
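The resource-saving point about locally bounded volumes can be shown with a tiny sketch (my own illustration, not Eevee's code): volume density is only evaluated for sample points that actually fall inside the object's bounding box, so everything outside the box costs nothing, instead of the whole scene paying for the volumetric effect.

```python
# Toy sketch of a volume clipped to an axis-aligned bounding box.

def inside_aabb(point, box_min, box_max):
    """Axis-aligned bounding box test for one sample point."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def sample_volume(point, box_min, box_max, density):
    """Evaluate the (constant) volume density only inside the box."""
    return density if inside_aabb(point, box_min, box_max) else 0.0

# A small smoke box around the origin; samples outside it are free.
box_min, box_max = (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)
inside = sample_volume((0.0, 0.5, 0.0), box_min, box_max, 0.8)
outside = sample_volume((5.0, 0.0, 0.0), box_min, box_max, 0.8)
print(inside, outside)
```

A real renderer would march many samples along each view ray; the win is that rays which never enter the box can skip the volume entirely.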
With lazy shader compilation, you can just open Blender, open the file, have your user interface working again, and the shaders are compiled and linked one after another, so it's a convenience thing.

Then frustum culling, which is for faster viewport updates, because you just discard every object that you don't see. OpenSubdiv, the super fast subdivision surface library by Pixar, will be implemented by Sergey, maybe; I don't know. Instancing, do you all know what instancing is? Or don't you? Okay. And hair drawing is self-explanatory.

Then in the foreseeable future, that is, after the first half of 2018 until, well, whenever: batch render and material previews should be easy to grasp. Ground-truth specular occlusion is a super nice specular occlusion method that is implemented by Clément. Then the viewport compositor is an amazing thing, because then we could have OpenGL renderings that are fed into the compositor, and when they are fed into the compositor you've got all your nodes, all your effects that you can apply to them, and then save them out in your pipeline. So that is an amazing thing. The shaded UV view means that when you've got a model which is shaded in the viewport, you see all those shading effects in your UV/image editor, applied on top of your drawn image. Vertex shader materials are a long topic, so we should just skip that. And parallax occlusion mapping with pixel depth offset gives you a super nice depth effect that is very cost effective, because normally you would tessellate and then have all kinds of stuff happening on the GPU, but with parallax occlusion mapping and pixel depth offset you have very fine methods to circumvent that and be faster.

So, try it. There are those demo files that I showed on the web page, blender.org/2-8, and many of those Eevee features are really self-explanatory. Just open them; the render panel will contain all the options; play around with it.
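The lazy compilation idea above boils down to compile-on-first-use with a cache, which this toy sketch illustrates (hypothetical names, not Blender's implementation): opening the file only creates shader objects, so the UI stays responsive, and each shader is compiled the first time something actually needs it.

```python
# Toy sketch of lazy, cached shader compilation; all names are invented.

compile_log = []  # records which shaders were actually compiled

class LazyShader:
    def __init__(self, name, source):
        self.name = name
        self.source = source
        self._program = None  # nothing is compiled at load time

    def program(self):
        """Compile and link on first use only; afterwards hit the cache."""
        if self._program is None:
            compile_log.append(self.name)  # stands in for slow GPU work
            self._program = f"compiled({self.name})"
        return self._program

# "Opening the file" just creates the objects; no blocking compile step.
shaders = {n: LazyShader(n, "void main() {}") for n in ["metal", "glass", "skin"]}
assert compile_log == []      # UI is already usable, nothing compiled yet

shaders["glass"].program()    # first draw call that needs the glass shader
shaders["glass"].program()    # second use: cached, no recompile
print(compile_log)
```

The eager variant would compile all three shaders inside the loading loop, which is exactly the 20-plus-second UI freeze described above.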
When you've got a crash, just blame us. Thanks.

[Applause]