So for those of you who attended QtCon last year, I talked about Qt3D in Qt 5.7, which was the first stable release. The aim of this talk is basically to give a quick recap of what has happened since then. It turns out we've been fairly busy, and I expected to get more time, so that basically means no lunch break for you. All right, so that's what we are going to cover. I will run through the first few bits quickly, because that's mainly to get you to remember the basics of Qt3D so that I don't lose everyone in the room, and after that we will cover what's really new. The novelties are pretty much everywhere: we made a few changes for input, we have a big new block about animations in Qt3D, and for the quality of the rendering we have new materials, new ways of making textures, and integration with QPainter, Qt Quick, and so on. We'll see all that. All right, so what is Qt3D, just so that you remember? Internally it's really not only about 3D. It's really multipurpose, and not just for making games. At the core of it, it's really just a simulation engine, a soft real-time one. It just happens that when you want to do 3D seriously, you need such a simulation engine, and so it's been designed to be scalable, extensible, and flexible. The core is really not about 3D, and that's because we want to be able to deal with AI, logic, audio, and so on with it. It just happens that on top of it there's a 3D renderer, and you get all you need for system simulation. That could be mechanical systems, physics, and so on; and it turns out that in games, you simulate physics and mechanical systems. I will skip over the rest, because it's not very relevant for today. If there's one thing I want you to remember from last year, it's what's coming now, and that's why I'm going through it again.
The whole API of Qt3D is designed around a concept called the entity component system, so let's briefly see and remember what that is. It's an architectural pattern that's fairly popular in game engines, actually, and contrary to some bad habits we might have, it strongly favors composition over inheritance. One way to see it is that an entity is your general-purpose object. It doesn't do anything by itself; the entity gets its behavior by combining data, and that data comes from typed components, each component being a thin slice of behavior that you attach to your entity. It's the combination of those components that describes what your object in the simulation does. And the fact that we split between the actual behavior and the data, so that from the front-end API you just manipulate data and no behavior, allows us to manage the API better over time, because it's much easier for us to just add a new property on an object, or deprecate one and change the behavior, with everything kept private. So the way it looks is that an entity can have several components; and, to avoid blowing up your memory, we also allow a component to be attached to several entities for reuse. For instance, if several objects all look the same, you want to use the same material everywhere. Each component provides a bit of data, which might be relevant for physics, for placing the object somewhere in space, for its appearance, its shape, or for its animation. Then on the back-end side, the part that you don't see as a user, we have what we call aspects, which are the system part of our entity component system. Those aspects visit the data to find out what to do with each object, and that's where the actual behavior comes from. That's the part which is heavily multithreaded.
All right, so just to wrap this up, last year I showed an example to give you an idea of how that maps when you try to put something on screen. If we want to display a donut, we basically end up with one entity per object, because entities are objects, right? We put them in a tree, like any Qt object. It's generally good practice to have a root entity which encompasses everything you have in your simulation, and then one entity per object in your simulation. In this particular case we have just one donut, so we'll have one entity which is our donut. If we have just the entity, we see nothing on screen, and that's why we give some behavior to that entity by attaching some component subclasses. In the case of our donut, we only need two. We need a geometry, so a mesh, to say: OK, I have an object in my simulation, and it happens to be a torus; I have a mesh representing my torus. But with just that, I don't know how it looks yet; I just know it's a geometrical shape. So we give it a second component, a material, which controls how the surface of the torus will be rendered. In this case it's one with some Phong shading, with several textures and so on, OK? So: one entity, two components, and you get your donut on screen; that's pretty much it. All right, just a last word on that. For the API, we made the effort in Qt3D to have everything accessible from C++ or from QML, so the API really maps one-to-one between QML and C++. I will tend to show mostly QML code in this talk, just because it's more concise, but as long as you know that it maps one-to-one and you know how to convert from C++ class names to element names on the QML side and vice versa, it's fairly trivial. Per the naming convention, the C++ classes are named in the same way as other Qt classes, with the Q prefix.
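Putting the donut scene and the naming convention together, the QML might look roughly like this. This is a sketch against the Qt 5.9-era Qt3D.Extras types; the mesh dimensions and color are illustrative, and the talk's actual demo used a textured material rather than a plain Phong color:

```qml
import Qt3D.Core 2.0
import Qt3D.Extras 2.0

Entity {
    id: root                        // root entity encompassing the whole simulation

    Entity {
        id: donut                   // one entity per simulated object
        components: [
            TorusMesh {             // geometry component: "I am a torus"
                radius: 5
                minorRadius: 1
            },
            PhongMaterial {         // material component: how the surface is rendered
                diffuse: "saddlebrown"
            }
        ]
    }
}
```

Note how the QML element names (`Entity`, `TorusMesh`, `PhongMaterial`) are just the C++ class names (`QEntity`, `QTorusMesh`, `QPhongMaterial`) with the Q prefix dropped.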
And to get to the element name on the QML side, you just drop the Q in front: QEntity becomes just Entity, and so on. All right, so what's new? For input handling there are not many big changes; everything we had before is still there. We just figured out that we had missed something in the API, so we added it. If you remember, I mentioned that we had a way to specify axes. So I can create an axis with Qt3D and say: OK, I will have an axis controlling, let's say, the rotation of an object. If I have a joystick, that's actually easy, right? I have a physical axis and I can just move it. But sometimes I don't have that, and I want to control this virtual axis from the keyboard. I want that when I press one key, this virtual axis slowly goes left, for instance, and when I press another key, it slowly goes right, to simulate having a physical joystick that I move slowly in one direction or the other, OK? And that was pretty much it: we would just have a value for where my axis is. But then, to actually do the rotation of my object, I just have that plain value. So if I want my object to rotate over time, I would have to have a timer at the right period, and each time read the position of my axis and then increment the rotation of my object. That felt wrong, because you would suddenly write very imperative code: have a timer, make sure the timer is at the right frequency, and so on. So clearly some piece was missing, and that's what we fixed, OK? We still have the previous stuff: the keyboard device, mouse device, keyboard handler, object picker and so on. But we had this problem of: how do I control a value in my simulation over time, based on what's happening with my inputs, and in particular with my axes?
So obviously we use an axis for that, and we only got the axis position, which is fairly limiting: it forces us to have this imperative code everywhere, and this imperative code is executed on the main thread instead of inside the simulation. That's kind of annoying. And we need to sample over time, and we might even have to integrate, because if our axis is supposed to be about acceleration and not velocity, then you have to integrate that. Just a headache. So in 5.8 we introduced a new class, the axis accumulator, which basically does all that work for you. It's executed on the simulation side, so it runs at the right frequency based on the simulation. It will manage this value over time for you: it takes an axis as input and then does all the work. You can actually declare that the axis is supposed to represent velocity or acceleration, and the integration will be done for you by the accumulator; that's all done in the secondary threads. Code-wise it looks like this. You would still have your logical device, and you would still have your axis inside your logical device; that's the one we were previously harassing for information from a timer. To avoid that, you now have your axis accumulator, where you specify the input axis, which is the mouse Y axis in this case, and then you just say: this one controls velocity. So when I'm completely at one end of my axis, I have the maximum velocity for, say, the rotation of my object. And I can scale that, because my axis goes from minus one to one, right? So I can say: yes, it's minus one to one, but really the maximum speed I want is 50 RPM, for instance. If I'm at one, I want 50 RPM. And that's pretty much what you get: the axis accumulator has a value property, and you just have to bind to that. Once you're bound to it, you're done.
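The setup just described might look roughly like this in QML. This is a hedged sketch against the Qt 5.8-era Qt3D.Input API; the ids and the scale value are illustrative, not from the talk:

```qml
import Qt3D.Core 2.0
import Qt3D.Input 2.1

Entity {
    MouseDevice { id: mouseDevice }

    LogicalDevice {
        axes: [
            Axis {
                id: rotationAxis                 // the axis we previously polled from a timer
                inputs: [
                    AnalogAxisInput {
                        sourceDevice: mouseDevice
                        axis: MouseDevice.Y      // drive the axis from the mouse Y position
                    }
                ]
            }
        ]
    }

    AxisAccumulator {
        id: rotationAccumulator
        sourceAxis: rotationAxis                     // sample this axis on the simulation side
        sourceAxisType: AxisAccumulator.Velocity     // axis position is interpreted as a velocity
        scale: 300                                   // map full deflection (±1) to ±300 (illustrative)
    }

    // Elsewhere in the scene: bind a Transform rotation angle
    // to rotationAccumulator.value and you are done.
}
```

The accumulator integrates the value for you at the simulation's own frequency, so no timer or imperative main-thread code is needed.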
Okay, so let me show you briefly. I will have to drop the mic. Okay, that one, right. The idea there — I tried to run the old version, but for some reason it doesn't want to — is that I have Page Up and Page Down controlling the rotation of the large box we have in the center. Previously, what would happen (which I wanted to show but won't be able to, for some reason) is that when you pressed Page Up you would see it spinning, because we would get signals telling us it's slowly going somewhere and we would add that to the rotation; then at some point the axis would stop emitting signals, and we wouldn't see the cube spinning anymore. With the accumulator we don't have that kind of effect: we have the speed, and if I let go, it slowly comes back to a stop again, okay? That's all computed for you, and there's nothing more to do than the code I just showed. All right, so the big part now: the animations. Before 5.9, if you wanted animations, you could reuse the Qt Quick animations, because those are based on property bindings and so on; that's what we did previously. But the main problem is that they run on the main thread, which means you can't really do much else; they suck up some CPU time there. And the big problem is that they are not synchronized with the frame rate of the engine, so very often with that kind of animation you would see some frame skipping, because it was slowly getting out of sync. So what we did is add a new aspect, the animation aspect, that you can register on your engine, and then you can have extra animations that run inside the engine, so there's no such synchronization problem anymore.
That also reduces the communication between the main thread and the secondary threads for the properties changing due to the animations. In the API, there are mainly classes inheriting from the abstract animation clip, which contains the data about a particular animation, and then classes inheriting from the abstract clip animator. The animator is not the data of the animation; it's the one responsible for the information on how we want to play a particular clip, okay? So there's a separation there, and the animator is the actual component that you put on entities. Inheriting from the abstract animation clip we simply have animation clip, and that one represents a keyframe-based clip, which is generally a good thing for artists but not so much for developers, because it's kind of involved to actually write the code describing it. The animation clip holds animation clip data through the clipData property, and that animation clip data can only be created from C++ for now, because we haven't quite figured out a nice API for it on the QML side. The clip data has a set of channels, which describe the properties controlled by the clip, and each channel has one or more channel components for complex types. So typically, if I want to control a position, I would have a position channel with three components: X, Y, Z. That's typically how it's done. Then each component has a list of keyframes, which describe the value of that component at a given point in time and the easing curve which has to be used to get to that point, okay? I'm not getting too much into the details of that. In practice, we don't expect people to use the C++ API for this much, because it's just horrible to write: you have to figure out all the values of all the keyframes of all the components, and that's not necessarily something you want to do by hand.
So we also have the animation clip loader. The animation clip loader loads the same kind of data, but coming from a JSON file, and that JSON file is something much more accessible to artists if you have an exporter for it. We also provide a plugin for Blender, which allows artists to basically author their animation using the Blender tools, so they can nicely place their keyframes and draw their easing curves, export that to JSON, and then the developer can pick it up and get it into the scene. All right. To use it, you simply create an animation clip loader, give it a source pointing to the JSON, and then you will be able to play it. But that's just pure data: to play my clip, I still need an animator. So we have the simple clip animator (we'll see a few more in a minute), which points to a clip, in this case a clip coming from an animation clip loader. I mentioned that we have channels and channel components, and I still need to declare which channel controls which object in my simulation, okay? For that, the animator has a channel mapper, which lets me give a set of channel mappings. So for instance, I can say the channel "Location" coming from the animation will target the transform object, and the property "translation" inside that transform object. Now I'm controlling the translation of my object based on that particular channel, and the same goes for the rotation and, for instance, for the color. And now I will run that one: the clip loader. All right. So I just have a cube, and the code we had on the slide refers to that particular cube. When I click on it, I get my animation, and I see the different properties being applied to the cube while the animation plays, with fine control of the easing curves and so on, okay? If the animation looks a bit clumsy, that's because I did it myself.
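A clip loader plus animator setup like the one on the slide might be sketched like this in QML. This is a hedged sketch against the Qt 5.9 Qt3D.Animation types; the JSON file name and channel names are illustrative placeholders:

```qml
import Qt3D.Core 2.0
import Qt3D.Animation 2.9

Entity {
    id: cube

    Transform { id: cubeTransform }

    components: [
        cubeTransform,
        ClipAnimator {
            // The clip is pure data, loaded from a Blender-exported JSON file
            clip: AnimationClipLoader { source: "qrc:/cubeanimation.json" }

            // Declare which animation channel drives which object property
            channelMapper: ChannelMapper {
                mappings: [
                    ChannelMapping {
                        channelName: "Location"      // channel from the clip
                        target: cubeTransform        // object to drive
                        property: "translation"      // property on that object
                    },
                    ChannelMapping {
                        channelName: "Rotation"
                        target: cubeTransform
                        property: "rotation"
                    }
                ]
            }
            running: true
        }
    ]
}
```

The animator is the component attached to the entity; the clip itself is shared data and could be reused by several animators.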
All right, but that's a fairly simple animation. Very often you need more than one, and to be able to move from one animation to another, potentially having several playing at the same time; and you want to create a completely new animation based on simpler ones. So you need to combine them, and again, we want to do that on the engine side, right? We don't want you to do all the job of creating new animations from the JSON files on the front end, for instance. For that, we also provide blending operators, which make it possible to have new variations of a particular animation based on other ones. A typical example you might have seen in games: I have a character who is walking, and at one point he starts to run. As the people doing the animation, you prepare the animation for walking and the animation for running, and then on the program side you just say: okay, I'm 50% in between, right? And you get something else. You let the system derive it from how far along you are. So if I have a character who is walking and then gets to running, as a developer I just have a factor which starts at zero, I slowly bring it to one, and then my character is fully running. That's the idea. And you can go further, like having an animation for a jump: I can jump when I'm standing, or when I'm running, or when I'm walking, or anywhere in between, so you need to combine all these kinds of things. For that, there's the blended clip animator, where you specify a blend tree, okay? We have two operators so far: additive and lerp. With the additive clip blend we have a main clip and we add something else on top; with the lerp clip blend we basically interpolate between two different animations. And the leaves of that blend tree are animation clips, okay?
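A walk/run/jump blend tree like the one just described might be sketched like this in QML. A hedged sketch against the Qt 5.9 Qt3D.Animation blend-tree types; the clip file names and factor values are illustrative:

```qml
import Qt3D.Core 2.0
import Qt3D.Animation 2.9

Entity {
    components: [
        BlendedClipAnimator {
            running: true
            // channelMapper: declare channel-to-property mappings as with ClipAnimator

            blendTree: AdditiveClipBlend {
                additiveFactor: 0.0            // raise toward 1 to mix the jump in on top
                baseClip: LerpClipBlend {
                    blendFactor: 0.5           // 0 = fully walking, 1 = fully running
                    startClip: ClipBlendValue {
                        clip: AnimationClipLoader { source: "qrc:/walk.json" }
                    }
                    endClip: ClipBlendValue {
                        clip: AnimationClipLoader { source: "qrc:/run.json" }
                    }
                }
                additiveClip: ClipBlendValue {
                    clip: AnimationClipLoader { source: "qrc:/jump.json" }
                }
            }
        }
    ]
}
```

Animating `blendFactor` and `additiveFactor` from game logic is what produces the derived, in-between animations; the interpolation itself runs on the engine's back-end threads.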
So here, for the example I was giving, we can imagine having three different clips: one for walk, one for run, one for jump. Depending on what my character does, I'm somewhere between walking and running, so I have a lerp blend for that, and the jump comes on top of it, so I have an additive blend for it. I have a small example for that one as well. Again, not necessarily the prettiest animations in the world; I suck at Blender, really. Yes. So what we have there is a small toy plane, and my toy plane is flying, okay? That's the main clip; we have basically the same blend tree structure as what we had on the slide. So it's just flying around, slowly going that way. But if I have a hotshot piloting the plane, I could bump up the roll strength a bit, and then, on top of the regular animation, we start to do a roll with the plane; and if we want the roll fully added in, we get our plane doing this kind of thing, on top of the regular animation we had. And that's the easy one, right? It's just flying, so I could also lerp toward something like a flight with some turbulence, and we see the plane going in one direction or the other. And of course I can have those small turbulences and the roll at the same time, depending on what the pilot is trying to do in that particular situation, and I get completely new animations, all interpolated on the back-end side. New things as well: the materials. We had only Phong-based lighting in 5.7. In 5.9 we introduced PBR, physically based rendering. So we have new materials which basically give much better rendering, based on the metal/rough model: for an object you can specify whether it's metallic or not, and how rough it is.
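The metal/rough material just mentioned can be used roughly like this in QML. A sketch against the Qt 5.9-era Qt3D.Extras API; the property values are illustrative:

```qml
import Qt3D.Core 2.0
import Qt3D.Extras 2.9

Entity {
    components: [
        SphereMesh { radius: 1 },
        MetalRoughMaterial {
            baseColor: "silver"   // base surface color
            metalness: 1.0        // 1.0 = fully metallic, 0.0 = dielectric
            roughness: 0.2        // low roughness = polished, mirror-like surface
        }
    ]
}
```

Varying `metalness` and `roughness` between 0 and 1 is what produces the different PBR looks, from matte plastic to polished metal.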
And depending on that, you get a different rendering. That also allowed us to introduce additional lights. Previously we had only the directional light, spot light, and point light; now, on top of those, we have environment lighting, which is available to these materials. That's a new component: you can specify an environment light with two textures which characterize the type of lighting you have in your whole scene, and that's what you see reflecting on the object. That's what we barely see on the sphere here, but I will show more examples later on. Generally you want that environment because it provides light to your objects, but as the observer of the scene you also want to see that environment around you, so we also introduced the skybox, which allows you to actually display it. I'm supposed to be over already, so I will just mention that we got painted textures as well. For those who want to stay a bit longer, I will show you the examples if you want. Painted textures. Oh, there's a group picture as well, okay. Yes? I heard through the grapevine that...
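The environment light described above is a component carrying the two characteristic textures. A hedged sketch against the Qt 5.9 Qt3D.Render API; the texture file names are placeholders, and in practice they would be prefiltered environment maps exported for image-based lighting:

```qml
import Qt3D.Core 2.0
import Qt3D.Render 2.9

Entity {
    components: [
        EnvironmentLight {
            irradiance: TextureLoader {
                source: "qrc:/env_irradiance.dds"   // diffuse lighting contribution from the scene
            }
            specular: TextureLoader {
                source: "qrc:/env_specular.dds"     // what you see reflecting on the surfaces
            }
        }
    ]
    // The SkyboxEntity from Qt3D.Extras can display the same environment
    // around the observer, as mentioned in the talk.
}
```

The PBR materials pick this light up automatically; it is what makes metallic surfaces reflect the surrounding scene rather than a flat color.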