Okay, so for those of you who don't know it, the gEDA project is a suite of free software programs for electronic design automation. It has gschem, a schematic capture application. It has gnetlist, a command-line tool for creating netlists, which are lists of components and the connections between them. It has PCB for PCB layout, and gerbv for inspecting Gerber files before sending them to fabrication. And recently, sitting somewhere in between and connecting those, there is Xorn. This talk is about what Xorn is, why it is there, and whether maybe you want to use it.

So I got involved with EDA in 2009. I was using gschem to create some simple schematics, and every time I would add a component, I would double-click it and nothing would happen, because you have to select it and then press OK. I knew that, but it still was counter-intuitive to me. So I created a patch which would allow you to double-click a component, and I submitted it upstream. But then I wondered whether it would be a good idea to put the component selector at the side of the main window as a dock, and whether it would be a good idea to have a project browser where you could select files in the current working directory, or a property editor. And then I realized that, the way the code is organized right now, this would not be easy, and that's probably the reason why it hasn't been implemented before.

So, because the wish to have a common user interface for both PCB and gschem had been around for quite a while, I decided to work on that. And four years later, I had solved most of the user interface issues. I had the library dock, a project browser, and some other nice-to-have features which don't exist in gEDA right now. But I still hadn't connected this to the main gEDA code. And that is because I wanted to have a good scripting foundation which would allow me to not replicate any code which is not user-interface related. And this scripting foundation didn't exist yet in gEDA.
But in order to work on that, I had to answer a few questions first. What is scripting? What purpose does it serve? And what are the constraints under which scripting works?

So, in a proprietary context, an application is this big blob of code: you can't inspect it, and you can't change the way it works. But users with more experience usually want to automate the tasks done in an application. They want to combine the application's operations into more complex functionality. And it also makes sense to extend the user interface depending on the workload, maybe for a simulation, or for importing a schematic into PCB. And obviously, you want to combine those two: create more complex functionality and extend the user interface with it.

So what we do is: we embed a scripting interpreter into the application and export the functionality of the application as procedures to the scripting interpreter. And then there are hooks at various points — for example application startup, pressing a user interface button, or saving a file — where user scripts are executed. So this is basically a way to regain some of the flexibility which is lost by not being able to modify the application in the first place.

Free software users can modify the application, so there is no strict need for scripting. But there are a few reasons why it may be a good idea anyway. The high-level logic of an application is usually more complicated and error-prone to express in a low-level language, so it may make sense, both for the developers of the application and for the users, to have a high-level language in which to express this functionality. Also, if you have worked with the Web Console in Firefox, you probably know that it is really good for learning how scripts work and for debugging, and it would be a nice thing to have that in an application, too. And finally, today most users will get their binaries from the distribution's package management.
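The embed-and-export model described above — export application operations as procedures, then run user scripts at hook points — can be sketched in a few lines of Python. All of the names here (`ScriptHost`, `add_component`, `register_hook`) are invented for illustration; this is not the gEDA or Xorn API.

```python
# Minimal sketch of embedding a scripting layer: the application exposes a
# few of its operations as procedures, executes user scripts with only
# those procedures in scope, and fires hooks at well-defined points.

class ScriptHost:
    def __init__(self):
        self.hooks = {"startup": [], "file-saved": []}
        self.components = []
        # The "exported API": plain callables made visible to user scripts.
        self.api = {
            "add_component": self.add_component,
            "register_hook": self.register_hook,
        }

    def add_component(self, name):
        self.components.append(name)

    def register_hook(self, point, func):
        self.hooks[point].append(func)

    def run_script(self, source):
        # Execute a user script with only the exported procedures visible.
        exec(source, dict(self.api))

    def fire(self, point):
        # Called by the application at startup, on save, etc.
        for func in self.hooks[point]:
            func()

host = ScriptHost()
host.run_script("""
def place_decoupling_caps():
    for i in range(2):
        add_component("C%d 100n" % i)

register_hook("startup", place_decoupling_caps)
""")
host.fire("startup")
print(host.components)
```

The point is that the user script never touches the application's internals directly; it only sees the exported procedures, and its effects happen at the hook points the application chose.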
So it is very important that those users are able to modify the application without having to rebuild it.

Now you may ask: why not write the application in a high-level language in the first place? And actually, in most cases that's a good idea. But there are a few problems with it. If the application does very data-intensive work, like bit-shifting stuff, that is much easier to express in, for example, C. A solution for that is to create an extension for the high-level language which adds this functionality, implemented in, for example, C. You can then write the rest of the application in the high-level language and still not have any performance problems, because the performance-critical parts are written in C. Also, application start-up time may be an issue. Depending on the machine, a script interpreter may take something like a second to load, and that is not always acceptable as a delay before a user interface shows up. So one approach is to write the user interface in, for example, C or C++, and then embed a scripting interpreter which executes the rest of the application — the high-level code, written in a high-level language.

Another point where there is a big difference between what's useful for proprietary and for free software is: how do we deal with user-contributed code? With proprietary software, the vendors add an API and the users use it to write their scripts. But what they do with it is basically their own problem; it is not usual for user scripts to flow back into the main application. In a free software context, a model where user code is contributed back upstream is usually much better. For example, the Linux kernel is really aggressive about this and tries to get things into the repository. This has a few advantages. Users will see the code if it's in the repository, but even more importantly, the developers will see this code.
When a developer changes something — for example, the signature of a function — they can just grep the repository for lines which use it and fix them in the same commit. Scripting is often used as a way to move the responsibility for contributed code from the application vendor to the users. But this doesn't make sense in a free software context. By taking responsibility for the code, the contributors are not left alone with old, incompatible versions, and the users still have working code even if the original contributor is no longer around the project.

And finally, there is a part which concerns the developers. Because up to now, this looks like a lot of additional responsibility for the developers, with nothing in return. But they do, in fact, get something back, and it is really a good thing: they need to worry less about what they are breaking outside the repository. If you have a plug-in model where others have their own code living on their own machines, you usually define an API to keep some flexibility, because you would break everything if the add-ons relied on the internals of the program and the internals changed. If the code lives in the repository, this is not necessary. You can let the code use the internals of the application, and you can change that code along with the internals whenever there is a need to change anything.

So, in order to create a good scripting foundation, it is necessary to see what is a good idea for scripting in a free software context as opposed to a proprietary one, because the models we know from there may not be very useful for free software. The most obvious thing, probably, is that the functionality of the application should be available without the user interface. If you have ever used GIMP — for example, Script-Fu — and have tried to run it from the command line: there is an option not to show the GUI, but it is loaded anyway.
So, when you run a GIMP script from the command line, it takes a very long time for GIMP to start up, which is not usually acceptable for command-line tools. The easiest approach to this is to strictly separate the GUI code, which is not loaded at all if scripts are executed non-interactively, from the functionality, which is what the scripts use — this is, for example, how PCB handles it. But if you do this very rigorously, what you are left with is a slim application which basically embeds a scripting interpreter, exports a lot of functionality, and then runs a script. And if you have that, you could just as well have made that part of the functionality a library, and invoke a scripting interpreter which then runs the script using the library. So you can extend this idea and have the functionality of gEDA available as one or multiple libraries, which can then be used by other programs without having to spawn another process.

Another thing which would be really good to have is high-level code being able to operate on the same data, in the same process, as the low-level code. This has a number of advantages. For example, if you want to be able to switch languages according to which one is appropriate for a problem, then there should be no need to serialize the data, feed it into an external program, and deserialize it back. Also, it would make the difference between an external script and the high-level functionality of the application much smaller, because both would basically do the same thing: execute some high-level code — in one case inside the application, in the other case invoked some other way, but it's basically the same code. And it would also make an interactive console possible.
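This "same data, same process" idea is what language extension mechanisms give you. As a small illustration — not gEDA code, just the general mechanism — Python's `ctypes` can call a compiled C function on in-process data with no child process and no serialization; here I call `abs` from the C library that is already linked into the running interpreter (this relies on `CDLL(None)`, which works on Unix-like systems):

```python
# Sketch: high-level code calling compiled C code in the same process.
# ctypes.CDLL(None) opens the symbols already loaded into this process
# (on Linux that includes libc), so no external program is involved.
import ctypes

libc = ctypes.CDLL(None)  # handle to the running process's C library

# Declare the C signature of abs(int) so ctypes converts arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # the C function operates on our value directly
```

A real extension would of course expose the application's own data structures this way rather than a libc function, but the mechanics are the same: one address space, two languages.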
So, in order for this to work, what we need is a clean point where the low-level code and the high-level code can both access the same data without having to worry about each other's implementation details, private fields, or whatever. It is necessary to enforce a very strict object model with no fields private to some part of the application, so that all parts access the data in the same way. From this it follows that it is not possible to have, for example, change notifications, because that would mean every piece of code would have to run these triggers, and if any piece didn't, then something would not be updated on the screen, or not be undone and redone correctly.

So, for this to work, we need a revision-oriented data model, where we have a number of revision objects instead of one single file object. The application just keeps a pointer to one revision object; if anything changes, it can compare revision objects and see what needs to be updated, or what needs to be undone and redone. In order to implement this efficiently, it is necessary to encapsulate the storage code into its own part of the application. So, for example, the high-level code does not have to iterate over every object and check whether it has the property it is looking for; instead, it can run a high-level query like: please give me a handle for every object that has the color blue, and move all the objects behind this handle to another layer; or: read all top-level floating attributes.

So this is what I did. There is libxornstorage, which is this shared storage part of the application. It defines the revision type, which represents one version of the file. It defines the object type, which represents the identity of an object — which is basically what you would have left if you stripped away all of the object's data; it is what stays selected if you change the revision by undoing or redoing.
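The revision-oriented model can be sketched like this. This is a toy illustration of the idea, not the real libxornstorage API: revisions are immutable snapshots, undo/redo just moves a pointer along a history list, a diff between two revisions tells the UI what to redraw, and queries run against a revision instead of iterating by hand.

```python
# Toy revision-oriented data model (names invented for illustration).

class Revision:
    def __init__(self, objects=None):
        # object id -> data dict; never mutated in place
        self.objects = dict(objects or {})

    def with_change(self, ob, data):
        """Return a new revision with one object changed or added."""
        new = dict(self.objects)
        new[ob] = data
        return Revision(new)

def diff(a, b):
    """Ids of objects whose data differs between two revisions."""
    ids = set(a.objects) | set(b.objects)
    return {ob for ob in ids if a.objects.get(ob) != b.objects.get(ob)}

def objects_with_color(rev, color):
    """High-level query instead of caller-side iteration."""
    return {ob for ob, data in rev.objects.items()
            if data.get("color") == color}

# Undo/redo is just a list of revisions plus a pointer:
history = [Revision()]
history.append(history[-1].with_change("net0", {"color": "blue"}))
history.append(history[-1].with_change("net0", {"color": "red"}))

current, undone = history[2], history[1]
print(diff(current, undone))           # what the screen must update on undo
print(objects_with_color(history[1], "blue"))
```

Note that no part of the application needs to fire change notifications: whoever holds two revision pointers can compute what changed after the fact.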
Libxornstorage also defines a selection type, which is the same thing for a set of objects. In addition to this, it defines a set of data types which contain the data for an object. Here is an example: this is the data structure for a net. You might notice that some things are not in this data structure — for example, the bounding box of the object on the screen, or whether the object is selected, or what this net is connected to, or which attribute objects are attached to it. The bounding box is something which can be recalculated; there would never be a situation where the bounding box changes but the object parameters don't. And the selection is something which is only useful in an interactive context; it makes more sense to have an additional pointer to, for example, a selection object. So if a script wants to change the selection in addition to the contents of the file, it can update the selection object. And it becomes possible to either restore or keep the selection on undo and redo, by keeping track of a selection object or by ignoring it. Attribute texts are visual objects in their own right in gEDA, so it makes sense to have them as individual objects and keep them separate from the structural data; there are dedicated libxornstorage functions, for example, for looking up all attributes attached to one object.

Another important question is: what data should be inside a revision? For example, symbols in gEDA are usually taken from a library, so the user does not expect them to be undone and redone when they undo and redo. And pixmaps — there is no way to change a pixmap from inside gschem — so that is a strong indicator that they should not be part of the revision. Paths, on the other hand, are objects which represent all the geometry which cannot be represented by line or circle or arc objects — for example, the arrow on a transistor symbol.
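The "only primary data in the struct" idea from above might look roughly like this in Python. The field names here are my own invention, not the real libxornstorage net structure: the point is simply that derived state such as the bounding box is recomputed on demand, never stored alongside the object data.

```python
# Illustrative net data record (field names invented, not the real struct):
# only primary geometry and appearance are stored.
from dataclasses import dataclass

@dataclass(frozen=True)
class NetData:
    x0: float
    y0: float
    x1: float
    y1: float
    color: int
    # deliberately absent: bounding box, selection flag, connectivity,
    # attached-attribute list -- all derivable or interactive-only state

def bounding_box(net):
    # Recalculated from the primary data whenever it is needed; it can
    # never get out of sync with the object parameters.
    return (min(net.x0, net.x1), min(net.y0, net.y1),
            max(net.x0, net.x1), max(net.y0, net.y1))

net = NetData(100.0, 50.0, 0.0, 200.0, 4)
print(bounding_box(net))
```

Because the record is immutable (`frozen=True`), it can be shared safely between revisions; changing a net means creating a new record in a new revision.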
Paths are modified by moving handles inside gschem, so they should be part of the revision. So this is not a question with one true answer; it's about what makes sense in each context.

This is how Xorn is currently implemented in gEDA. There is libxornstorage, which I just talked about. Here is an example program, which I will skip for time reasons. There is xorn.storage, which are the Python bindings for libxornstorage. I decided on Python because it's a language which is relatively easy to learn, as opposed to Scheme, which is currently used in gEDA. Here is an example for that. There is xorn.geda, which is basically the parts of libgeda which are interesting to use in other applications, ported to use the new functions. There is a new file format. There is a command-line tool, xorn, which offers all the functionality you would expect to be available on the command line — for example, converting a file from one format into another, or extracting symbols from a schematic file. There is gnetlist, which has been ported and refactored, so it's now a Python package and can be used from any application. And there is support for Guile, which is important because it allows users to keep the old setup they have — with Scheme configuration files and custom Scheme code executed during netlisting — and it will still work.

So, if you use gEDA, you can use Xorn right now, for example, for writing scripts which manipulate schematics, or for writing a Python-based netlist backend. The most important restriction is that gschem does not use the new libraries yet, so you can't manipulate the objects loaded into the editor. You could, in theory, use a Python interpreter to edit a schematic file which is open in gschem, but gschem wouldn't see what you're doing.

The most obvious next step would be to use the new libraries in gschem, too. This is not trivial, because libgeda is the shared part of gschem and some other applications, and my approach so far has been to port the other applications to use the new libraries first.
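To give a feel for what a Python-based netlist backend amounts to, here is a stripped-down sketch. The data model and names are invented for illustration — the real gnetlist/xorn API differs — but the shape is the same: a plain function from components and connections to a text format.

```python
# Toy netlist backend (data model invented for illustration): given a
# component list and the nets connecting them, emit one line per component.

def netlist_backend(components, nets):
    """components: name -> value; nets: net name -> list of (component, pin)."""
    lines = []
    for name in sorted(components):
        # nets this component is attached to, in a stable order
        pins = sorted(net for net, conns in nets.items()
                      if any(comp == name for comp, _pin in conns))
        lines.append("%s %s %s" % (name, " ".join(pins), components[name]))
    return "\n".join(lines)

components = {"R1": "10k", "C1": "100n"}
nets = {
    "VCC": [("R1", 1)],
    "OUT": [("R1", 2), ("C1", 1)],
    "GND": [("C1", 2)],
}
print(netlist_backend(components, nets))
```

Because the backend is an ordinary Python function, it can be imported and reused from any application, which is exactly the benefit of gnetlist having become a Python package.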
I've done this porting for gnetlist, which is probably the most complicated part, and I've started doing so for gattrib and libgedacairo. But there are some problems — for example, gattrib uses the GtkSheet widget, which is not supported anymore. So it may make sense to duplicate the libgeda code and have two versions around for some time: one for the old tools to use, and one which can be merged back into gschem, so gschem can be updated to the new libraries. With PCB, it would obviously also be a good idea. I proposed a data structure for PCB on the mailing list, but didn't have time yet to pull parts of PCB out into a library there. And for other projects, it may be interesting to use this approach too, or maybe even the same libraries, and that may make it possible to have a common user interface which is shared by several projects and applications.

Thank you for listening.

[Q&A] Time for one quick question. One? Yes? — "The same for KiCad?" — Excuse me? — "Could you do the same for KiCad?" — Whether I could do the same for KiCad: well, in theory, yes. In practice, I would definitely not have the time for that, but I could support you if you wanted to do that for KiCad.