So, good afternoon everyone. I'm Bernardo Segovia, speaking to you from Argentina. As Adam said, I usually go by the nick amyspark; I'm amyspark on GitHub. Today I'll be talking about integrating Hollywood open source with KDE applications. This talk will be a bit on the long side, so please ask any questions in the chat and I'll answer them at the end.

For a bit of background on who I am: I've been studying for a Master's in Computer Science since 2017 at the Universidad Nacional del Sur in Bahía Blanca, Argentina. My thesis is on the modeling, animation, and rendering of hair and fur in feature animation. I got my Bachelor's in Computer Science the previous year from the same university; my thesis there was on real-time model handling and deformation. I got involved in open source in 2016 with Krita, because David Revoy, the author of the webcomic Pepper & Carrot, invited me to contribute to the Krita painting suite, and it kind of snowballed from there. I participated in the Homebrew project between 2017 and 2020, maintaining mainly the Cask project. I participated in Google Summer of Code 2018, implementing the Zootopia hair shader for Blender, where it is named the Principled Hair Shader. And I participated in Season of KDE 2019, fixing Krita's support for color space operations.

So, what will this talk be about? It is meant to share with you my experiences working on my GSoC 2020 project. During the past four months, I worked on integrating an open source library from Disney Animation called SeExpr, which will let us render dynamic textures in Krita like the ones shown on the right. There will be an additional talk in the student showcase next week in which I'll share some details on the user side of things; this talk will only cover SeExpr and its technical aspects.
The key takeaway that I want you to take from this talk is that code which is production-proven, especially by a big content company like Disney, is not necessarily cross-platform in the way we would expect for usage here at KDE. Throughout this talk, I want to show you a selection of pitfalls that exist in the original SeExpr code, and how fixing them improved the version that is currently integrated with Krita. This is meant to show you where this kind of code (open source, production-proven, authored by big companies) can break, and why that can happen.

So, in the next few minutes, I'll go over four key aspects of SeExpr. The first is the assumptions it makes about the underlying platform. The second is bloat, or extra stuff. The third is internationalization efforts. And the fourth point is theming. I'll close with a summary of my efforts throughout the project.

The first subject of this talk is platform assumptions. By platform assumptions, I mean the fact that this kind of code runs on known platforms; that is, on a set of configurations that are known and reproducible. For instance, each year a committee led by the Visual Effects Society Technology Committee releases a version of what is called the VFX Reference Platform. This is a specification that pins components like compiler, library, and SDK versions. However, we at KDE expect applications to run on just about any platform you can imagine. Just as an example: the localization teams translate applications into 70 languages and counting. KDE Frameworks runs on almost all of the desktop and mobile operating systems on the face of the earth. Neon, the distribution of KDE, supports two 64-bit architectures, AMD64 and ARM64. Ubuntu, the distribution it is based on, supports not only 64 bits but also x86, ARM, PPC, et cetera. And this quick sample doesn't even cover all the possible library versions, for instance Qt5, internationalization sets, et cetera.
So, the key question for working with SeExpr is: does it work in all of these possible configurations? The answer is no. SeExpr, as it was released, is neither platform nor architecture independent.

To begin with, there is a platform.h header where they attempt to bunch together, via #defines, platform-specific classes, timers, and spinlocks. But this header assumes that running on Windows instantly means that you are using the Visual C++ compiler, and thus tries to link against the Windows SDK and Visual C++ specific libraries. This should not happen: for instance, Qt5 on Windows doesn't have to use Visual C++, it can use MinGW. So the header should be able to detect GCC and instead use the APIs which are based on Unix.

The second point is that SeExpr, if enabled, uses SSE4.1 instructions, which are an extension to the x86 instruction set. These instructions are statically compiled in, which means that they are supported by only 98% of the hardware surveyed by Steam as of July 2020. The remaining 2%, usually people with old hardware who for some reason cannot run SSE4.1, will see instant crashes. And since SSE4.1 is specified at compile time with a compiler switch, it will not compile at all on non-Intel platforms, for obvious reasons. The worst thing of all is that it is only used in a single function, which rounds floating-point numbers.

This platform pitfall was easily dealt with: it was a matter of refactoring the operating-system-specific items into separate .cpp files; I left only opaque types and includes in platform.h. As for SSE4.1, for the purposes of GSoC, I left it alone, hidden behind a CMake flag; I'll show you later that there is more to do about it.

Going back to our example, I mentioned that KDE supports more than 70 languages. However, SeExpr does not, because it is not locale independent.
The definition of a locale is given in the C++ reference, which says that a locale is a set of features that are culturally specific, which can be used by programs to be more portable internationally. In short this means, among other facets: number formatting, currency symbols, the thousands separator, and the decimal point. The decimal point is key here, because there is quite a big bug, and to explain it, we need to jump directly into what I'd call the innards of SeExpr.

SeExpr, as a work of computer programming, is actually two libraries in one. The first is a language parser, built on GNU's general-purpose parser generator, Bison, and the lexical analyzer flex. The second is a UI toolkit based on pure Qt.

Why is the locale's decimal point so important here? Because we have a question: once an expression is parsed, how does this library figure out the value of the number terminals? This is performed in two separate ways. First, the number terminals are directly parsed with a system function called atof. Second, in the UI library, comments are parsed using a function called sscanf; these comments are used by the UI to tell what range a variable has. The key point is that both of these functions rely on the current locale of the application. The C17 standard, in section 7.11.1.1, says that the default is a locale called "C". But in an application that supports different cultures, this may not be the case: for instance, if the application calls setlocale with LC_NUMERIC, or you set LC_ALL in your user profile, this will no longer be true.

Here is a very little example. You define a variable called channel, you give it the value 0.5, one half, and you say in a comment, for the UI, that the valid range is from 0 inclusive to 0.5 inclusive. So, in light of how SeExpr parses this, what may be the end result? The answer is: it depends on the operating system. On the left, you've got macOS.
When you launch an application on macOS by double-clicking it, it starts with a clean environment. Unless the developer or the user does something weird with the startup, it has a clean locale and you get the expected results. On the right, you have Linux. On Linux, since all applications inherit the locale from their parent, which in turn inherits it from the user profile, stuff happens. The example on the right was obtained with the locale set to Spanish (Spain). The key takeaway is that the parser is incorrectly locale-dependent when it parses the expression language syntax.

So, fixing this issue means making the parser locale-agnostic, which is not easy, because the C standard provides no standard way to do it yet. BSD and Windows have _l versions of these functions (atof_l, strtod_l, et cetera). Linux, and by Linux I mean both the GNU C library as well as musl, does not provide any of these functions. C++17 brought an excellent alternative to replace this, std::from_chars in the <charconv> header; however, Krita is based on C++11, not C++17. And there is a cross-platform replacement called scnlib, which came from a discussion on GitHub and works on C++11; however, its locale support is just a placeholder, with no working locale handling at present.

So, for the purposes of GSoC, I opted to replace atof with a locale-independent atof implementation, and sscanf was simply worked around by setting and resetting the locale before and after each call.

This was just the first objective of our task, so we move on to the next one, which is called bloat. What is meant by bloat in this context? I mean literally anything that a host application like Krita doesn't need to use or know about in order to embed the library. This means unused and unnecessary features, for instance code that isn't used anywhere, which appears to be there for Disney's internal tools, as well as in the build system.
By bloat I also mean enlarged headers, for instance multiple classes per header, as well as not respecting the separation between headers and .cpp code; more on that in the next slide. I also mean Qt4 support. This is not only a problem because Qt4 has been unsupported upstream since at least 2015, but because it prevents cleaning up deprecation breakage, as we'll see later, and it also drags in deprecated dependencies like QtOpenGL. There are also needed upgrades, like hand-written find scripts for Bison and Flex, where nowadays you can just use find_package; they served Disney's initial intentions but were never upgraded. There is also a big need to not blindly install everything including the kitchen sink, copying and pasting every header into /usr/include. And there are no requirements declared, so there is literally no way for you, as a developer, to know what your application would need to link against.

There is a last one, which I've called my favorite because it cost me a big headache: macOS and Windows need pre-generated parser files, because neither of them bundles Bison or Flex by default. But the toolchain, as it was shipped by Disney, needs the user to manually copy these files into the build directory, which should be done automatically.

As I said before, like the platform pitfalls that we saw in the previous section, this can be solved in conceptually simple ways. To begin with, we wall off unneeded features by putting them behind CMake flags. This was the case for the Disney-internal widgets, like the deep water and curve previews, and the entry point used by their internal tools. And the OpenGL-based widgets are put behind another flag, called ENABLE_OPENGL_DIALOGS. This, incidentally, helped us get rid of the deprecated QtOpenGL linkage, which is excellent for building with Qt5. Secondly, I addressed the enlarged headers by refactoring them, as much as possible, to split them into header and implementation files.
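A CMake wall-off like the one just described can be sketched roughly like this. The flag name follows the talk; the target and file names are hypothetical:

```cmake
option(ENABLE_OPENGL_DIALOGS "Build the OpenGL-based preview widgets" OFF)

if(ENABLE_OPENGL_DIALOGS)
    # Only when the flag is on do we require the extra, deprecated dependency.
    find_package(Qt5 REQUIRED COMPONENTS OpenGL)
    target_sources(SeExprUI PRIVATE ExprGLPreview.cpp)
    target_link_libraries(SeExprUI PRIVATE Qt5::OpenGL)
endif()
```

The design point is that a host application which never enables the flag never pays for the dependency, neither at configure time nor at link time.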
And finally, I ported the dependency detection to CMake imported targets, which means that SeExpr's features are now explicitly selectable through the walling-off flags I described earlier, and everything can now be target-linked in the host application. You no longer have to worry about which targets you will need to link against.

This is the end of the second objective; we move now to the third one, internationalization, which is one of the most important features of open source. What, formally, is internationalization? The World Wide Web Consortium calls it (and I summarize) the design and development of an application that enables easy localization for target audiences that vary in culture, region, or language. SeExpr is a user-facing library through its UI toolkit, so it should be able to support localization when reporting its results. This means, as I said, displaying error messages and UI text.

Let's recall again, from the first section, that SeExpr is constructed as two different libraries: a language parser and a UI toolkit. Localizing the UI toolkit is "easy", in quotes, because although it required a lot of work with regular expressions, we are saved by Qt and KDE's Extra CMake Modules to glue the translation files together. But this leaves out the parser. How do you localize a library that doesn't use Qt? You can achieve this with two alternatives. First, you can co-opt one of the Qt translation tools by deploying your own translation macros; with this, you can extract the messages and later apply the translations from the UI toolkit side. For example, a given macro here has no definition of its own, and you can then interpolate that little part in the UI. But if you are going down this path, you can go and do something better. What I chose to do for GSoC was to refactor the whole library into error codes with a parameter payload.
This means that whenever the parser hits an error, it emits an error code with a parameter payload, which is later taken by the UI toolkit. Since the UI toolkit is translated thanks to Qt and the Extra CMake Modules, host applications are now responsible for performing the interpolation of the message with the supplied payload. Incidentally, this internationalization effort also brings safety improvements. On the left, you can see the code verbatim from the original SeExpr parser; this is the path taken when there is a syntax error or the parser hits the end of the expression. You can see that the message is manually created and then copied using a C function called snprintf. On the right, you can have a look at how I did it in my project: it is vastly simplified, with just an error code, and the UI library then has the responsibility of interpolating it into its own translated strings.

We are finally at the last section, which deals with how SeExpr handles theming. By theming, I mean that SeExpr should respect the host application's theme, styling, and preferences, which means both the user's preferences as well as the platform's conventions. In the example below, you'll see that Disney did not follow this: the original widget was heavily compressed and did not follow the GNOME conventions. On the right, you can see an alpha version of my work, also taken on GNOME, where you'll see that the controls are now much more usable in terms of space and visibility.

As demonstrated, the controls were completely restyled to follow the host application's theming. This involved decompressing all fields and labels, so that all text and icons are shown; porting buttons and layouts to modern alternatives, in particular QFormLayout and QToolButton; improving contrast in the syntax highlighting, with a big thank you to Agata for her suggestions; and enabling color selection for vector variables.
This actually was a hidden feature, which was broken in Disney's release and was uncovered when I did the decompression work. There are now two ways to specify colors: by clicking on the variable's label, or by moving the little slider beneath each channel's input box. And finally, I fixed the help tooltips to make the help fully accessible. At this point, I'd like to publicly thank Wolthera van Hövell, Agata Cacko, and David Revoy for suggesting many of these improvements.

As you may expect from this kind of work, these changes uncovered additional mistakes on the developer side of things. I fixed two hidden crashes in SeExpr that were masked by the original styling: an over- and underflow when editing color gradients, as well as uninitialized memory accesses in the vector widgets. And as a bonus track, these theming changes enable developers to use the SeExpr widgets directly within Qt Creator.

As for future improvements: are there any bits that I would like to take on after GSoC? Yes. First, modernize the codebase to C++11, which, as I said before, brings safety improvements. I would like to fully replace sscanf and atof with STL alternatives, as well as complete the cleanup of headers into separate files. There is a plugin subsystem that I have not been able to research yet; I would like to do so, and also see if I can implement this functionality for Windows in a safe way. I want to make the CMake configuration step able to refresh the pre-generated parser files, because at this point they still assume the build folder that was used at Disney. And finally, I want to research whether this work can be merged upstream, which is complicated because of the licensing requirements of the translation strings, and to land additional fixes as suggested by users whenever we release Krita.

So, for a summary: we covered all the problems that I found during the development of this project, and how SeExpr was converted from a production-only library into a truly cross-platform alternative.
In case it wasn't obvious, this version has so many improvements that it's no longer directly compatible with the original from Disney Animation. All modifications are available at invent.kde.org/graphics/seexpr. This concludes our journey. Thank you, and I'm now open to any questions.