So welcome everyone to this year's drawing layer talk. This year I wanted to concentrate not on talking at a high level about the primitive stuff again and what happens in the background to prepare all the needed data for rendering the edit views. This time I just want to show you four concrete examples from roughly the last year which are in the product and have led to some good speedups. They describe pretty well where the problems currently are, and seeing how they were solved can help you understand how to use the current techniques to solve such problems.

So let's start with one of my favorite bugs from the last time, the squirrel bug. The big problem is an SVG import which we got from an external customer, I think it was Munich. They used an SVG graphic called "squirrel" in very small form in Writer, and the document just didn't work. The reason was really this SVG graphic: it's pretty big, half a megabyte, and the reason it works so slowly is that it contains five SVG patterns, each by itself containing 440 polygons just to describe simple points. 440, and five of those patterns in the SVG file. These patterns are used with texture repeat to fill polypolygons, and this multiplies with the 440 polygons, so in the end you had millions of simple polygons just to define the graphic once it was all folded up and prepared for rendering.
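To make the multiplication concrete, here is the rough arithmetic: the pattern counts are the ones from the talk, while the number of tile repeats is a hypothetical order of magnitude, since the actual count depends on the filled area and zoom.

```python
# Rough arithmetic for the "squirrel" SVG geometry explosion.
# patterns and polygons_per_pattern are from the talk; tile_instances
# is a hypothetical repeat count for a texture-repeat fill.
patterns = 5
polygons_per_pattern = 440

def total_polygons(tile_instances: int) -> int:
    """Every repeated tile re-emits the full pattern geometry."""
    return patterns * polygons_per_pattern * tile_instances

print(total_polygons(1))     # geometry defined once: 2200 polygons
print(total_polygons(1000))  # ~1000 tile repeats: 2,200,000 polygons
```

So even a modest number of repeats pushes the renderer into millions of individually transformed polygons.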
So before the optimization, the renderer decomposed the pattern and processed and transformed it thousands of times. There is actually some reuse: when the pattern is prepared as a primitive it is not copied or anything, it's just referenced, because a primitive is a UNO API object, so that is not even the problem. The problem is that all of these polygons get transformed to the system-dependent form and have to be rendered each on its own. So I just decided: don't do that. Pre-render one of the tiles as RGBA and render it as a bitmap. This works, but you have to do some fine-tuning, because the quality of pre-rendered bitmaps is of course not as good. You have to take care what the output target is: is the output target a bitmap, like the screen, or do you have metafile or PDF export or printing, where you should of course avoid doing that. And you should only do it up to a specified zoom level, because when you zoom deeper in you automatically don't have the problem, since only a few of the tiles are rendered, so you can just go back to the higher quality. The original approach had high quality but had speed problems when all of the stuff was shown.

This is a good example of how you can optimize stuff using the primitives: you can react inside the primitive decomposition, or you have the alternative to do it in the active renderer. In the renderer, when you get to the pattern repeat primitive and you know the concrete resolution, you can decide what to do to get this graphic out as fast as possible. And I have it here; this is what you have now. You can really zoom in. It's still not extremely fast, but you can at least scroll in it, and when you go really deeply into it you see this pattern. These patterns are not really useful in the end, they are more or less just used to make some gray tone, but of course you cannot ignore the definition of the underlying SVG, so you have to render them somehow.
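The output-target and zoom-level decision just described could be sketched like this; the names, the enum, and the zoom threshold are all hypothetical illustrations, not the actual LibreOffice code.

```python
# Sketch of the decision: pre-render one pattern tile as an RGBA bitmap
# only for screen output, and keep full-quality vector geometry for
# metafile/PDF/print targets and for deep zoom levels.
from enum import Enum, auto

class Target(Enum):
    SCREEN = auto()
    METAFILE = auto()
    PDF = auto()
    PRINT = auto()

# Hypothetical threshold: beyond this zoom only a few tiles are visible
# anyway, so full-quality geometry is affordable again.
MAX_BITMAP_ZOOM = 4.0

def use_prerendered_tile(target: Target, zoom: float) -> bool:
    if target is not Target.SCREEN:
        return False  # keep vector quality for export and print
    return zoom <= MAX_BITMAP_ZOOM

print(use_prerendered_tile(Target.SCREEN, 1.0))  # True: fast bitmap tiling
print(use_prerendered_tile(Target.PDF, 1.0))     # False: keep vector quality
print(use_prerendered_tile(Target.SCREEN, 8.0))  # False: zoomed deep in
```

The point is that this choice can live either in the primitive's decomposition or in the renderer, wherever the concrete resolution is known.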
Otherwise it would look different. So this was solved, and it's much faster now, as you see. Maybe after each of the examples, I have four of them, we make a short break if you have questions for that example. Okay, fine, let's go to the next one.

We had a problem with fat line drawing on Linux. A fat line is any line, may it be just a straight simple line with two points or may it be a Bézier curve, where the line thickness is not zero; the office, by the way, also has the one-pixel hairline, which is not provided by other office programs and is just there for historical reasons. Most of the problems come from charts, which have lots of such fat lines, and in X11 there is simply no support for filled polypolygons, which are the result when you decompose a fat line, and of course no direct fat line support either. So the problem is only on Linux, but with Cairo there is a solution now. Before the optimization, the polypolygons had to be decomposed to trapezoids in full quality every time, no buffering, and this is of course pretty slow. For the optimization I used Cairo's direct fat line rendering, because I made measurements to decide what to do and this is just the best way to do it. I also tried to buffer the trapezoidation completely, but this has a bad memory overhead, so Cairo is just the way to go here for the moment. In the long term we really need better support on Linux to draw lines with some line thickness in just one color; we don't have a good solution today. I think what Caolán was just talking about may help to get in the right direction, but what we really need is a basic renderer for the edit views which uses the stuff Caolán is now offering, and I hope we get this together, because we can make huge steps forward when we do that. Questions for this example, maybe? By the way, this example also has the problem that the geometry information has to come somehow from
the chart to the edit view at all. There's a bridge now which directly uses the primitive representation; before that, even metafiles were used for that. I don't show the previous example live because loading that chart takes a long time; maybe when we have some time left I can show it later.

Next: refactoring the 3D renderer to use multi-threading. This is one thing I wanted to do for years and never found the time to do, because the 3D visualization is a 2D primitive which decomposes the 3D primitive content to an RGBA bitmap which is then painted. Due to supporting many systems we need a fallback software renderer implementation to show all the 3D stuff, and up to today, unfortunately, the fallback software renderer is the only renderer for our 3D representations. 3D is not overused in the office, but from time to time it gets used, and it would be nice to again use the stuff Caolán was doing lately to maybe one day get a direct hardware renderer for the 3D stuff. That would be no problem: you can just implement a primitive renderer which converts the 3D primitive geometry definition completely to this RGBA bitmap, just as a replacement for this fallback renderer. But the fallback renderer will still be needed even in the future: you never know where the office is running and whether you have something like OpenGL available, so we always need a good fallback. You also need a fallback for PDF rendering or printing; there's no way to get OpenGL to render bitmaps at 1200 dpi when you want to print something in high quality. It's bitmap data. Before the refactoring, the 2D primitive already created the RGBA bitmap in its decomposition with intelligent buffering, but single-threaded. Intelligent buffering means the fallback 3D software renderer is capable of rendering just parts of the 2D scene, and it takes into account whether you zoomed in or out and which part can be reused. Even if you just zoom slightly in, up to a difference of 25 or 50 percent, I'm not
sure, I'd have to look in the source, it even avoids re-rendering and falls back to bitmap scaling and stuff like that. So this is already pretty much optimized, and the interesting thing is that this is all done inside of this one single primitive for 3D representation; it's a 2D primitive for 3D scene representation. All these optimizations, and how to react, can either be done in the renderer, if the renderer is the screen target, or directly in the decomposition which is offered by the primitive. So there's even a choice; it's dynamic.

Luckily there was already a thread pool in the office when I looked at how I could parallelize that. At first it was hard to use because it was a global thread pool, and of course it only makes sense to use the global instance of it. You could instantiate your own one, but that makes no sense when you want to share your work with all existing CPUs. Luckily this was optimized, so that you could wait exactly for the tasks you scheduled yourself in your parallelization, and not for all tasks, which could break other mechanisms in the office. After that, a pretty rough parallelized rendering did work pretty well and brought some really good success. So now, when you have eight cores, eight cores may be used, and you can use much bigger 3D objects than before. For example, when you have something like this in the software renderer: this is nothing I would have tried with older office versions, it would simply take, not minutes, but much longer to render. In the current version, you can see it uses fat lines, and even with fat lines you get a pretty decent reaction now when you are zooming or working with the graphic. So this is much better response, and this is still the software renderer, don't forget that, no hardware involved. And it's a software renderer which
even does anti-aliasing with oversampling, so that you don't get hard edges or something. So this is a big success, and you can see here, as said, the 3D renderer was really using a lot of the CPUs for short times. Questions for this optimization? It shows again what you can really do using the primitives we have, and where you can go in between and do your optimizations. And hopefully, when we get these CPUs with 32 cores or something we heard about, it will get even faster. But it's still the software fallback, so we should really find some time, or sponsoring, or someone willing to do a hardware 3D renderer.

The last example is a more intelligent handling of animated GIFs. That's a problem and a bug Caolán first found and roughly fixed by creating the GIF frames on demand, because with huge GIFs, there was an example GIF which was playing for eight minutes or something, extremely crazy big stuff, the office was breaking completely because the memory footprint did not handle this. Up to then, the office was importing the GIF completely: the GIF was rendered into the single frames as preparation, and the single frames were put in an animated switch primitive. In this case it was 800 or something of them, all pre-rendered as RGBA images and stored in this primitive. For small GIFs this works perfectly: all the images are pre-rendered, and you have a wonderful performance and no problems. The reason this was done is that, for the first time in the office history, we were able to have multiple GIFs on one screen, overlapping, with working animation. But for this big fat GIF image we just had to do something more intelligent, and afterwards there is a specialization in the animated switch primitive. Again, the solution is to directly do something inside the primitive which was hosting all the sub-level data and add some intelligence at that spot. So now
it's looking at how much data it's holding, how many frames there are, whether it allows itself to use pre-rendering, whether it buffers or not, and how much it buffers. There are memory limits set, and frames get thrown away when they have not been used for a long time, and all that dynamic buffering stuff, which works pretty well. Even the replay timing with the millisecond settings gets adapted when the first animation playthrough shows that it just cannot be promised that the GIF runs in the mode defined in the file. Again, the important point is that this can all be done dynamically inside the primitive stuff, or alternatively in the renderer if you want. So that's the fourth example. Do you have questions for the fourth example?

So maybe I still have time to show the other examples, overtime. This one, for example, is for fat line testing, just a lot of fat lines. This is now on Windows, so I cannot really show the optimization, but this works now on Linux too. And just to show you the chart test: the chart is now loading, and it will load for a long time, and at the end you get a little peak from the 3D rendering. So the 3D rendering and chart rendering are really optimized, but it's chart data, and the Calc loading is really long and not parallelized. You see, this is a really big fat 3D chart, and this makes only a small peak with the software renderer. Thank you. [Audience question, partly inaudible] Yeah, we do, yeah.
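Going back to the multi-threaded 3D renderer: the idea of scheduling your own render tasks on a shared pool and waiting only for those tasks, not for everything on the pool, can be sketched roughly like this. All names, the stripe height, and the stand-in per-pixel work are illustrative assumptions, not the drawinglayer code.

```python
# Sketch: split the target bitmap into horizontal stripes, render each
# stripe as its own task on a shared pool, and join exactly the futures
# we submitted ourselves (not every task running on the pool).
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def render_stripe(y0: int, y1: int) -> list:
    # Stand-in for the real per-pixel 3D rasterization work.
    return [x + y * WIDTH for y in range(y0, y1) for x in range(WIDTH)]

def render_parallel(workers: int = 8) -> list:
    stripes = [(y, min(y + 8, HEIGHT)) for y in range(0, HEIGHT, 8)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(render_stripe, y0, y1) for y0, y1 in stripes]
        # Wait only for our own futures, in order, and stitch the result.
        return [px for f in futures for px in f.result()]

print(render_parallel() == render_stripe(0, HEIGHT))  # True
```

Because the stripes are independent, the stitched result is identical to a single-threaded pass, which is exactly what makes this kind of parallelization safe for a software rasterizer.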
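The dynamic GIF frame buffering described above, decoding frames on demand, keeping them under a memory limit, and throwing away the least recently used ones first, can be sketched as follows. The class, the limit, and the fake decoder are hypothetical, purely to illustrate the buffering policy.

```python
# Sketch of an on-demand frame cache with a memory budget and
# least-recently-used eviction, as described for the animated
# switch primitive.
from collections import OrderedDict

class FrameCache:
    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.used = 0
        self.frames = OrderedDict()  # frame index -> decoded bytes

    def get(self, index: int, decode) -> bytes:
        if index in self.frames:
            self.frames.move_to_end(index)  # mark as recently used
            return self.frames[index]
        frame = decode(index)               # decode on demand
        self.frames[index] = frame
        self.used += len(frame)
        # Evict least recently used frames until back under budget.
        while self.used > self.max_bytes and len(self.frames) > 1:
            _, old = self.frames.popitem(last=False)
            self.used -= len(old)
        return frame

cache = FrameCache(max_bytes=3000)
decode = lambda i: bytes(1000)  # pretend every frame is 1 kB
for i in range(5):
    cache.get(i, decode)
print(sorted(cache.frames))  # only the 3 most recent frames survive
```

A real implementation would also feed back the measured decode times to adapt the replay timing, as mentioned in the talk, but the cache above captures the core memory-limit idea.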