So Sergey, hello. I heard that Intel are pretty good friends with Blender, right?

Yeah, we're pretty good friends with Intel and with Intel's developers.

Nice. How do they help, how are they even related to Blender?

They are, of course, interested in getting Cycles to run as fast as possible on Intel CPUs. And they support us both by providing access to hardware, to make sure Cycles runs as fast as possible on it, and they also help working on the Cycles source code.

Oh, so actually contributing to the code?

First of all, they help a lot with understanding how to write good code, and with troubleshooting performance stuff. But recently they also started to contribute to Cycles directly, and land quite big, quite fascinating patches.

Nice. So in general, or just for Cycles?

Mainly it is Cycles, because that's where their expertise helps us gain the most performance. So that's the current focus.

Nice. So it's meant to work better with newer CPUs, right? Newer devices?

Yes, but if we optimize for some newer CPU, some optimizations will also be applicable to older CPUs as well, because a new microarchitecture doesn't arrive every day. So it's not like, hey, for all this to work you need a brand new CPU which just came on the market. It covers a longer range of CPUs.

Nice. So Intel gives you access to their hardware so you can test it?

Well, when something needs to be tested on the brand new stuff, we do have access to that hardware to make sure our changes are good. And when Intel is working on the patches, they also make sure it works on brand new stuff. So we know that it's something for the future as well.

Nice. So is there any area in particular in Cycles where we can already see this?

This work actually started a few years back, so many of the things are already used by many users and in many productions.
So they started with making sure that we use vectorization, like single instruction, multiple data, as much as possible. That was a long-standing to-do in Cycles, and it was disabled for a long time just because it was slower than the scalar code. So that was the first thing they did. And then they went on from there: getting test scenes, seeing where the bottlenecks are, and optimizing those parts, mainly using SIMD instructions.

So for example, they optimized hair curve segments and triangle intersections using AVX2. I think that was around a year ago, to run on CPUs which were already on the market for a long time. And that gave a measurable speedup. From there, they kept working on triangle intersection, because we wanted it to be watertight, but also very robust for cases where objects are, say, 5,000 units away from the origin, which is always a tricky issue from the precision point of view.

After that, well, I'm not sure of the exact event name, so people will probably say, hey, how can you not know this? But there is Google Summer of Code, where students participate and work on open source, and Intel had something similar last year. So there was actually a Russian student, Anton Gavrikov, who was interested in getting into all the low-level stuff, and he worked together with Maxim Dmitrychenko, the guy I work with most of the time. With the help of Victoria Jilin, I believe, they worked on BVH8 support. It's a wider BVH: every node has eight children, and all eight children are intersected with one instruction.

Oh, okay. And so for an artist, what does it mean?

For an artist, it means faster renders.

Faster renders, okay, I understand.

Yes. The tricky part is that it's mostly visible in more complicated scenes. It's not so visible in the default cube.
It can even be a bit slower there. But they did this feature, and it's actually in master now, and it will even be in the 2.80 official release. And it doesn't stop there. They also did a bucketed triangle intersection: with one intersection routine you intersect a whole bucket of triangles at once. So it also minimizes the number of CPU ticks needed to find an intersection.

So, faster renders.

Yes, that's also faster renders. The speedups I was talking about for BVH8 are actually combined with the bucketed intersection. That's also in master now, and it's testable: you can compare before and after, using the previous release and what's part of the 2.80 official release. And to my knowledge they're still working hard on other aspects of Cycles, because we also have shading, we have displacement and other stuff. There's so much room for optimization on the technical side, and to my knowledge that's what they are doing now. We'll see how it all goes.

Wow, so we can expect more improvements in Cycles soonish, in terms of speed?

In terms of speed, I hope so, yes. There is one big thing which was actually started by Tangent, and I believe they are also collaborating with Intel, to get Embree into Cycles.

Nice.

There is some technical stuff that still needs to be done, but we are going to work on this. That's one of the aspects. Another aspect is all the shading, and some threading stuff, because some parts of Cycles are still single-threaded.

Don't say that. We can cut this.

Why not? Why not? Everybody knows. But I can say that with 2.8, multi-threading that part which is single-threaded becomes somewhat easier.

Oh, okay. That's good.

I actually started looking into that a few years back, and I had the initial part done, but I ran into limitations of the 2.7x design which didn't allow multi-threading that without crashes in production files. But now all that part can be multi-threaded.
Because of all the changes with the new dependency graph in 2.8?

Yes, because now we don't need to worry about, hey, there is the viewport which needs to access this data at viewport resolution, and there is Cycles which needs to access the same data at render resolution. Now everybody has its own data, everybody's happy. Everybody gets their own candy, and we can hand out more candies at the same time now.

Wow, that's amazing. So from 2.8 into the future, we can expect Cycles to get faster, with more multi-threading in the areas where it's missing.

Yes, although I don't think we can put more into 2.80, at least.

Oh, you mean 2.80?

Yes, so it will be 2.81. Because 2.8 is more like a series, right?

Yes, yes. We're talking about like ten years of future, probably. But definitely there are more optimizations coming.

So besides Cycles, are there any other areas in Blender that could get optimized with Intel stuff?

Yes. It's not a secret anymore, for a year or two now, that Intel came up with Optane memory, which they are pushing quite strongly into the SSD segment. The benefit compared to a regular SSD is that it has faster random access. So if you need to read this chunk of memory, then this chunk, then this chunk, it happens much faster on Optane memory than with regular SSDs. And we are trying to investigate whether using it for stuff like smoke simulation, where we have all that random data being read and written here and there, can be faster on Optane memory. The particle cache could also benefit from this. This is all still at the research stage, to understand if there is any potential; there is more stuff in there.

So you said one last thing?

Well, it's not the last thing, I hope. Okay, one more thing. To my knowledge, Intel recently open-sourced their implementation of an HEVC codec, which was heavily optimized for Xeon processors, and that probably means it runs well on i7s too.
And that part is also on its way into the FFmpeg library, and Blender uses FFmpeg for video I/O. So that's another aspect of Blender which gets improved along the way.

So you're saying that Intel is helping the video sequencer.

We can PR it that way: Intel totally supports the video sequencer project.

Awesome. Okay, so writing OpenGL renders, like playblasts, everything that uses video encoding and decoding.

That's basically everything, because in Blender we output video from so many places. The sequencer, the compositor...

The motion tracker. Playblasts.

The motion tracker is only reading, but that's also...

That is already good enough, right?

Once prefetching is there, it's fine, but for the initial playback and such, that's also going to be optimized.

Awesome. That's amazing. So Cycles, smoke simulation on the hard drive, faster and such, and FFmpeg. Amazing.

Yes, it's quite great.

And multi-threaded Cycles for real now, in 2.8? That would be so awesome. That's like Christmas in November.

Well, over the coming years; it's going to take a while. But it's good to know it's being worked on.

Some of the stuff will arrive sooner rather than later. Maybe not for this Christmas, because of the 2.8 deadlines; early next year. But there are many things that are already in 2.8, so they will be part of 2.8.

Awesome. Amazing. So thank you very much. Very excited.

Yeah, you're very much welcome.

Yeah, okay. Thank you.