So, welcome to the talk on Graphics Subsystems — History and Visions. I decided to talk a little bit about this because I just know a lot about it — I have been in the office development quite a while. I wanted to give an overview of where we came from, why it is like it is — maybe that is sometimes hard to understand — and how we maybe escape from where we are. So, let's go.

Where did it all start? It started with the release of StarOffice 1.0 in 1985. So our codebase is 37 years old, and a lot of that original code is still with us. If you want to know more details about the development and the different companies involved, you can just go to the wiki page. The interesting part is that the first release was Writer only — only Writer was created in the first one. It was on Windows only, so it had to use WinGDI, because there was nothing else. Windows 1.0 just allowed something like four windows, you had to run in 640K — all that stuff you cannot imagine today anymore, which is good, of course. And luckily it was decided to use C++, so we profit from that today. It would have been very bad if they had chosen something else — and C++ was far from normal at that time, and not very stable.

So, the long-term effects of using WinGDI: it was more or less directly mirrored into the VCL OutputDevice. We still have that pen/brush stuff — it is a little bit abstracted on our side, but this was done in one of the second or third releases. The problems: a single-context, single graphic target, the OutputDevice — we still have that. No transformations, only MapMode — we still have that in OutputDevice. Integer coordinates, and metafiles. We are a little bit away from integer coordinates: I have added a lot of stuff which uses double precision using basegfx, which was developed in the attempt to transition away from all that. We still have metafiles — still our own, but we try to keep them as compatible as possible to our big competitor.
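The integer-coordinate limitation can be made concrete with a tiny, purely illustrative sketch (not basegfx itself; the function names are made up): with integer coordinates every transformation rounds, while the double-precision types later added in basegfx keep the values.

```cpp
#include <cmath>

// Scale a coordinate down by 3 and back up by 3: integer coordinates
// lose information on every step, doubles keep it (up to FP precision).
int scaleIntRoundTrip(int coord) {
    int down = coord / 3;        // integer division truncates
    return down * 3;             // the lost fraction never comes back
}

double scaleDoubleRoundTrip(double coord) {
    double down = coord / 3.0;
    return down * 3.0;
}
```

Chains of such transformations are exactly what happens when metafiles are rescaled, which is why the drift accumulates visibly there.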
Transparency at that time had just four steps; it was done using pixel patterns. We had no alpha channel for the bitmaps, and no anti-aliasing — no one knew about anti-aliasing at the time. And, unfortunately, our own weird definition of gradients, which I will talk about at the end a little bit.

So, new versions came along. Marco Börries decided to do graphic programs — Draw, Impress, Calc and Chart stuff. New targets were defined. Writer stayed pretty unchanged, but VCL was extracted from it to get a graphic base for the other code bases which were to be developed. At that point they missed the chance to do something other than WinGDI. The problem was that the target systems — in that case mainly Linux — just had no GDI. So what to do? To not lose too much time and resources, it was decided to implement GDI on Linux — to emulate and reimplement it. Unfortunately, this has consequences until today: even today, we still reimplement kind-of WinGDI-1.0-like stuff when we write new backends.

For example, the single graphic context: state of pen and brush — I think most of you know that. We have somewhat stack-like stuff with push and pop, but there is still the problem with parameter calls: different parts of the office use different conventions for what you need to set up in the OutputDevice before or after the call. This clashes constantly; it has caused a lot of problems over the years. No modern graphic system uses stuff like that anymore — you have multiple render contexts, draw commands handed over together with transformations, and the targets are just handled as pixel targets, as it should be.

The other problem: MapMode transformations. MapMode is just a part of a linear transformation — it only contains translation and scale. It has no rotation or shear, not even mirroring. And it is not embeddable, because it only has this relative MapMode — especially a problem in metafiles when you have to rescale them or something.
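MapMode's translate-plus-scale is just the special case of a full 2D homogeneous matrix like the ones basegfx later introduced. A minimal illustrative sketch (not the actual basegfx API; names are invented) shows what MapMode cannot express:

```cpp
#include <array>
#include <cmath>

// Minimal 2D homogeneous matrix (last row implicitly 0 0 1).
struct HomMatrix {
    // row-major 2x3: | a b tx |
    //                | c d ty |
    double a = 1, b = 0, tx = 0;
    double c = 0, d = 1, ty = 0;

    // All a MapMode can do: scale, then translate.
    static HomMatrix scaleTranslate(double sx, double sy, double x, double y) {
        return {sx, 0, x, 0, sy, y};
    }
    // What a MapMode cannot do: rotation (likewise shear, mirroring).
    static HomMatrix rotate(double rad) {
        return {std::cos(rad), -std::sin(rad), 0,
                std::sin(rad),  std::cos(rad), 0};
    }
    // this * other: apply 'other' first, then this.
    HomMatrix operator*(const HomMatrix& o) const {
        return {a * o.a + b * o.c, a * o.b + b * o.d, a * o.tx + b * o.ty + tx,
                c * o.a + d * o.c, c * o.b + d * o.d, c * o.tx + d * o.ty + ty};
    }
    std::array<double, 2> map(double x, double y) const {
        return {a * x + b * y + tx, c * x + d * y + ty};
    }
};
```

Because everything composes by one matrix multiplication, such transformations are also embeddable — exactly what relative MapModes in metafiles are not.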
No full linear transformations: rotation and shear are handmade even today in the SdrObjects — they are still multiplied in the hard way, with an angle from which sine and cosine values are extracted. I already tried once to change that to something better using transformations, but it was lost work: the branch I did it in was never integrated, and it conflicted so much with the rest that we were not able to continue that stuff. So we still have BoundRect, SnapRect and unrotated SnapRect — where the SnapRect is more or less the model data, but the unrotated SnapRect is held to react on rotation stuff. This is all still in a strange state, unfortunately.

Metafiles have their own problems. A metafile is more or less a recording of a paint. The basic idea is of course: when you paint something and want to paint it again, maybe it is a good idea to record it. I think it is not, because it is no high-quality graphic definition, and it causes all kinds of problems because it is not transformable — mostly because it contains those state commands in the metafile itself. As a workaround we have Move and Scale, which do their best, but it never fits together. As a second workaround, today you can put the metafile into a metafile primitive and work with the primitives which get created from the decompose. That is cleanly and fully transformable today — but at the cost of a decompose and of translating everything to primitives. So metafiles are not really maintainable or expandable; in practice, as you all know, they are loaded with extra data and comment actions — Microsoft's are no better, it's the same there: some higher-level info encapsulating low-level paints. That is interesting, because it is a little bit similar to the primitive idea in that you can get a decompose. And it is integer — much old stuff you still have to care about, which you cannot avoid caring about.
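The record-and-replay idea criticized here can be sketched in a few lines. The point is that a recording mixes coordinates and state commands, so a later scale has to rewrite every recorded action instead of just prepending one transform — a toy illustration only, not the real GDIMetaFile:

```cpp
#include <functional>
#include <string>
#include <vector>

// Toy "metafile": a flat recording of paint actions.
struct Action { std::string name; double x = 0, y = 0; };

struct ToyMetaFile {
    std::vector<Action> actions;

    void record(const Action& a) { actions.push_back(a); }

    // Scaling cannot simply prepend a transform: every recorded
    // coordinate (and any recorded state) must be rewritten in place.
    void scale(double f) {
        for (Action& a : actions) { a.x *= f; a.y *= f; }
    }

    void replay(const std::function<void(const Action&)>& out) const {
        for (const Action& a : actions) out(a);
    }
};
```

A retained geometry description, by contrast, is transformed by composing one matrix in front of it and leaving the definition untouched.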
Traditionally, 32-bit integers were used. I added quite some helpers in basegfx, and calls on the OutputDevice which use them and offer to paint with, for example, a transformation — that is all newer stuff to lower the pain of using OutputDevice.

Alpha for bitmaps is also a big problem, because it was not added from the start. It just did not exist in old WinGDI; no one came up with the idea, which is understandable, because alpha blending was not known at that time. To solve it, a second bitmap was simply added, creating BitmapEx. What we are struggling with today still comes from this design decision. It is incredible — but it can really be used as a bad example of design consequences. So be aware of what you do, be aware of consequences. It is a little bit like what we heard yesterday about choosing names of functions: design decisions are even more critical.

Gradients: old WinGDI just had no gradients. So someone — and I really don't know who, because I only joined in '97; all I have told you until now I learned and heard from the guys I was working with — someone decided to implement some simple ones, and we are stuck with them until today, which is incredible. We do have SVG gradients: I implemented them when I did the SVG import, but we never managed to get them into the UI. They are fully functional, they work in all exports, we have some in the core, but we don't offer them to the user, which is really a sad story. The old gradients were painted — and are still painted when you use VCL — in pixel coordinates, by making a rectangle successively smaller, normally by something like two pixels. This explains the form our gradients have, when you think about it. The problem is that this defines a nonlinear transformation which you cannot invert, so it is hard to get a texture transformation from it. I spent quite some time when I did the primitives to solve that problem. It is a hard problem, but I succeeded, and we have this 100% compatible decomposition.
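The shrinking-rectangle painting described above can be sketched like this: a "gradient" rendered by filling successively smaller centered rectangles with stepped colors into a pixel grid. It makes clear why the result is tied to pixel steps and defines no invertible mapping (illustrative only; not the VCL code):

```cpp
#include <vector>

// Paint a "gradient" the old way: fill ever-smaller centered rectangles
// with stepped color values; later, smaller rectangles overpaint earlier
// ones. 'steps' must be >= 2.
std::vector<std::vector<int>> paintOldGradient(int size, int steps) {
    std::vector<std::vector<int>> pixels(size, std::vector<int>(size, 0));
    for (int s = 0; s < steps; ++s) {
        int inset = s * size / (2 * steps);   // shrink toward the center
        int color = 255 * s / (steps - 1);    // stepped, not continuous
        for (int y = inset; y < size - inset; ++y)
            for (int x = inset; x < size - inset; ++x)
                pixels[y][x] = color;
    }
    return pixels;
}
```

Each pixel's color is whatever the last overpainting rectangle left behind — there is no closed-form, invertible function from position to color, which is the problem with deriving a texture transformation from it.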
In primitives it is of course okay to fall back to VCL to draw them faster, but it is necessary to have such primitives for the future, and as a fallback for when we want to get away from using OutputDevice to render. It even got so far that I have a texture mapping from XY to color, which is used in the 3D renderer to handle the gradients. So it is a working proof of concept, and this is exactly the chance we can use in the future for enhanced external renderers which use primitives: to directly use a texture transformation even for the old gradients, and make them 100% compatible.

Another problem: paint invalidation. The original paints in the SdrObjects, before the changes, were split into a paint function and a calc-boundary function, which never fitted together, because it is just not as easy as it sounds at first. You have text overlays, you have hairlines — which are a problem class of their own; when we have time I can tell more later — and line ends and line joins may be mitered, so you really have to do deep calculation to get the correct result. The other problem is that paint was a function on the model, while we have a lot of view-dependent renderings. So in fact the range you have to invalidate for a repaint depends on the page you want to show: for example, there is page-number stuff on master pages, and similar stuff with expanding fields which depend on the page number or something similar. And then there is the UI part of VCL — that is not so complicated — and the old button fallbacks, which we don't really use anymore today because they are just ugly.
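The XY-to-color texture mapping mentioned above can be sketched as: instead of painting shrinking shapes, keep an invertible transform from object space into a unit gradient space and evaluate the color per pixel. This is a sketch of the idea only — hypothetical names, not the drawinglayer code:

```cpp
#include <algorithm>
#include <cmath>

// Evaluate a linear gradient as a texture: map the pixel into a unit
// gradient space via an (invertible) transform and read the blend
// factor from its position there.
struct GradientTexture {
    double invScale, invTrans;  // inverse of "scale then translate" along the axis

    // Returns blend factor: 0.0 = start color, 1.0 = end color.
    double colorAt(double x, double /*y*/) const {
        double u = x * invScale + invTrans;   // position in unit space
        return std::clamp(u, 0.0, 1.0);       // clamp outside the gradient
    }
};
```

Because the transform is explicit and invertible, any renderer — including an external one — can evaluate the exact same gradient at any resolution, which is what the shrinking-rectangle painting could never offer.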
So the question is how to avoid that in the future. My conclusion — when I stumbled into this, when I landed there and saw a drawing layer which did not even have a linear transformation, no matrix class when I arrived — was: stop painting, start defining geometry. The idea behind that: all places that paint depend on the VCL architecture, because they use OutputDevice commands, so all of them would potentially have to be changed when you want to migrate to something else. If you define geometry instead, you just need a small renderer which is more or less a translator for the defined geometry, so you never need to change the definition places anymore — and there are many more definition places than translation necessities for new targets. The definitions can also be extremely dynamic, or primitive. This all comes at the cost of stopping direct rendering and using a scene graph instead, of course. But if you want to get independent and have full freedom in future renderers, it is the only way to go, if you ask me.

I want to take a short moment to break a lance for the guys who did all that stuff. As you may know, "we stand on the shoulders of giants" is a phrase used in science, and we profit from what was done before, even when we may know better today or do the next step — but don't forget, it iterates. Philipp Lohmann once said to me, when I was in that phase many of you are in today — complaining about what we have and how it could ever have happened — he told me: "The new code of today is the old code of tomorrow." And that hits the point: just imagine that in ten years someone will complain about your stuff — so take it back a little bit, if you ask me. Everyone did the best he could, and they had the same limitations we have. Just don't forget that.

So, how to reach new shores from where we are? The original request I got was: we need anti-aliasing. After I had worked there for a few years and saw in which condition we were and what was possible, I decided not to try to slowly
move VCL into a better world, but to do something bigger. With all these ideas in the back of my head, and with what I told you about metafiles — which certainly had an influence on the decompose idea — I came up with the primitive stuff, because it would allow a step-by-step transition. There was no chance to do it in one step, of course not with just one person — they let me do it, but I had to do it alone, so I did not get very much support.

If you ask me, the consequence is: to get rid of VCL, stop using VCL. That's the point for me. So for new stuff, please try to use the existing primitive stuff, as we will otherwise never get rid of VCL. All or most of it can be done — Draw/Impress for example, and also the overlays; Writer and Calc are also using the primitives for the graphic stuff, and it just works.

Replace VCL everywhere? No, not necessary. My strategic view of the existing office is to split it between edit views and UI. My goal is to get the edit views to complete primitive rendering — that is a realistic target, because you can see it in Draw/Impress, and in Writer and Calc partially, as I said. And I would just keep VCL for the UI stuff: it handles the windowing and message-passing stuff anyway, and it is in transition to host other UI frameworks too, so it will stay for a long time for that purpose. But the edit views should be rendered with primitive renderers, from my point of view. The ideal picture — if one day we really would not use VCL anymore — would be to have a system-dependent UI framework on each target system and plug in the apps' edit views as edit views. In the last consequence this would be needed anyway — ask hardcore Mac guys, for example, what it would take to get the UI into a form they would accept.

I have a large number of slides about primitives, but you can just study them when you download the presentation. I have tried to go into the details of what we have and what we do not have; if you are interested, you can make a deep dive there — maybe it helps to understand more how
this stuff is working and what the background is — lots of interesting, but hardcore low-level information.

Back to the interesting stuff: the current state of the transition. The transition to primitives is not complete — don't forget, we were interrupted when everyone was fired; I just could not continue. I would estimate 40% is done. We have a proof of concept with working stuff in Draw/Impress. We have lots of gaps which we should fill, which still use the old stuff, often showing problems and errors. We have places where the two worlds collide — for example in Writer, where you have the normal paint containing the text and then have to jump out to paint a graphic or something, which uses primitives and is buffered for that purpose; we have stuff for that too, which does it with primitives. But as I said, with the FLOSS transition, sadly, the resources were killed; the transition was interrupted and has been pretty much on hold for 10 or 12 years now. I would love to continue it, because I think it is really necessary, but many of the needed steps have been offered as tenders for years and never really get voted, because they are no shiny new features but core reworks — so no good chances, unfortunately.

So, what do we have? You can check that when you download the slides. Let's look at the renderers, because that is an important point which many seem to miss. We have two basic primitive renderers targeting the VCL OutputDevice. These were never intended to be used for 10 years or more; they were just a proof of concept and an in-between step in the transition to primitives. The goal has always been to have system-specific renderers — which would be in reach today, and would have been in reach 10 years ago too, if the time had been there. Such renderers can do with the geometric information whatever they want and need internally — every hack is allowed inside a renderer — and they can visualize the defined geometry identically, they can implement it correctly, and they are not that hard to implement. But we just don't have
reference implementations. So a few days ago, when I prepared this talk, I started to do one, and I will try to finish it as a reference. It is in the drawinglayer project and uses Direct2D as an example. It will render the primitives without using OutputDevice, and it will be simple, because it will only support the most necessary primitives — which are four or five — plus the grouping primitives. When this is done, it can be copied and extended if someone is interested in doing a good, fast, system-dependent renderer without having to reimplement a whole GDI-style backend. I will try to finish that work.

The metafile processor is another thing: today it also goes back to OutputDevice in many places, but that can be avoided, because OutputDevice — as many of you may know — just creates metafile actions. So we can make the metafile processor independent of OutputDevice relatively easily, and keep it for compatibility.

What do we not have? We could have a primitive for PDF, for example, like we have a primitive for SVG: the SVG import just holds the SVG file, and when its decompose is called, the SVG gets parsed and primitives get created. We could have the same for PDF — why not?
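That decompose mechanism — a high-level primitive that only defines geometry and breaks itself down into a few basic primitives that a small renderer understands — can be sketched like this (hypothetical names and shapes; the real drawinglayer interfaces differ):

```cpp
#include <memory>
#include <string>
#include <vector>

// A primitive defines geometry; it never paints. Complex primitives
// decompose into simpler ones until a renderer understands them.
struct Primitive {
    virtual ~Primitive() = default;
    virtual std::vector<std::unique_ptr<Primitive>> decompose() const = 0;
    virtual std::string name() const = 0;
};

// Basic primitive: every renderer must support it directly.
struct PolygonPrimitive : Primitive {
    std::vector<std::unique_ptr<Primitive>> decompose() const override { return {}; }
    std::string name() const override { return "polygon"; }
};

// High-level primitive: defined purely via its decompose.
struct RectanglePrimitive : Primitive {
    std::vector<std::unique_ptr<Primitive>> decompose() const override {
        std::vector<std::unique_ptr<Primitive>> v;
        v.push_back(std::make_unique<PolygonPrimitive>());
        return v;
    }
    std::string name() const override { return "rectangle"; }
};

// A minimal renderer: translate known primitives, decompose unknown ones.
void render(const Primitive& p, std::vector<std::string>& out) {
    if (p.name() == "polygon") { out.push_back("draw polygon"); return; }
    for (const auto& child : p.decompose()) render(*child, out);
}
```

This is why a new renderer only needs to support the handful of basic primitives: everything else — an SVG primitive, a PDF primitive, a metafile primitive — reaches it through decompose, and the definition places never change.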
The good thing is: once you have the primitives, you also have the means to convert them to SdrObjects if you need to — so that opens the door for unified and better imports.

We also have no primitives for custom shapes. Currently the custom shapes are still created — Regina knows this, for example — I think by creating SdrObjects which are never shown and live in the background; to paint the custom shapes, the primitives are fetched from those fake SdrObjects. We could simplify this and directly create primitives for custom shapes; it would make the process much lighter and faster — when you change custom shapes, it sometimes takes noticeable time during the interaction.

What else do we not have? I said Writer and Calc should be changed completely — but read the small print: no worries, you can still do procedural stuff if you need it. I understand Writer is fast because during text layout the text is rendered directly, but you can do that in a specialized renderer — do it in the renderer, where you are on the specific graphic system you are targeting. I think that would be better for the future. The same goes for the slideshow — I have been discussing that with Thorsten for years. Chart geometry would be an idea too: we have such slow Chart stuff, so directly creating primitive geometry may be an alternative.

There is much more to talk about — I have two excursions, but the time is over: one on homogeneous transformations, some information which may be interesting because all the new stuff makes heavy use of them, and one on hairlines — why they are so complicated and why they cause so many problems in the graphic code.

So, I hope this gives a good overview. The important sentences of this presentation, from my point of view: stop painting, start defining geometry; we stand on the shoulders of giants; the new code of today is the old code of tomorrow; and: to get rid of VCL, stop using VCL.

Time for questions. [Audience] As a non-LibreOffice
developer — you mentioned in one of the slides that tender possibilities for some of this work don't get prioritized because it's not part of the core of LibreOffice. If it's not part of the core, does that mean the work can be done separately, like in a separate repository, offering something that only presents an interface as an independent library, so that it would just require switching to it?

No — I said it *is* part of the core. The problem is that all of this is core-changing work, and you never get votes for core work. It is very hard to get funding for core work, because core work is not a shiny feature; it is the stuff behind the curtain which users don't see. So no one pays for making that good and shiny — but we would need it urgently.

[Audience] Regarding PDF: you said you already started working on a PDF import that uses draw primitives? And also important is SVG export.

I agree. For PDF import, I think the best way would really be to do the same as with the SVG import — I think you took a look — so just use the libraries we have, Kenny mentioned them a few minutes ago, create primitives, and with the primitives you can do all the rest. That's the easiest way. And I agree, I forgot SVG export — we urgently need SVG export. One big advantage of the primitives until today is that I have resisted defining a file format — defining a file format was the worst thing for the metafiles, and I will resist it as long as I can. So SVG export would be an alternative for exporting primitives and getting them back in again, because we can even embed information.

Thanks, Armin!