GEGL is a project that was started in the year 2000, lay dormant for a while, and was picked up again in 2006. The feeling was: something is happening here, I want to do something, and I think this is a good thing to work on. It's about compositing. Compositing is the combination of images to create new images. We're actually using GEGL here, so what I'm demonstrating now is compositing in GEGL. It's not very polished: I add a layer, tweak the opacity of this layer, and when I change something, everything that depends on it updates. I can change this image now; I'm not creating a new copy of the layers. And when I blur, I'm adjusting just the parameters of the blur; the operations I'm doing are non-destructive. What made this the most important thing to work on is a shortcoming of the current GIMP core: the current GIMP core is limited to 8 bits per component. That isn't a lot of precision; it's kind of borderline even for the output that is rendering this image. This element is rendered internally in floating point, so I can take that image and say I want to reduce the brightness of it, and then reduce the contrast a bit. This image has a lot of detail in the sky that you originally couldn't see; it comes forth now. That's not the right way to view the image either, but I just want to show that within the image there is a lot of headroom. That headroom is what enables what is called tone mapping: producing, for normal displays, a presentable result from these high dynamic range values. What you see on screen is actually low dynamic range; it's a low dynamic range representation of a high dynamic range image. GIMP currently lacks those kinds of transformations; it has some hackish stuff that looks very, very bad, which I should be able to improve over Christmas. But the important point is the precision. Whoops, I have moved too far ahead.
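The 8-bits-per-component limitation described above can be illustrated with a small sketch (plain Python for illustration, not GEGL code): darkening an image and brightening it back is lossless in floating point, but destroys detail once values are quantized to 8 bits in between.

```python
def to_8bit(v):
    """Quantize a 0.0-1.0 value to one of 256 levels, as an 8-bit core must."""
    return max(0, min(255, round(v * 255)))

def from_8bit(b):
    return b / 255.0

original = 0.9
darkened = original / 16        # pull the exposure down four stops

# In floating point the operation is reversible:
recovered_float = darkened * 16                      # 0.9 again, nothing lost

# Stored as 8-bit in between, nearby values collapse to the same level:
recovered_8bit = from_8bit(to_8bit(darkened)) * 16   # ~0.878, detail gone
```

This is why the sky detail in the demo survives: the graph keeps scene values in float until the very last conversion for display.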
I was thinking of actually showing that with another demo, but I'll just skip that and go forward to the GEGL API. At the core of the GEGL API is a graph. The example I'm going to look at models what we just saw: a rendering of the result. We have a loaded image; on top of it we place another loaded image whose opacity we have adjusted, and the end result we display. This is a graph, a compositing or processing graph. Some nodes are source nodes; these two are filter nodes; here is also a composite node, which takes two images and combines them in some way, so it has two input pads and one output pad. In this case the sink displays to the screen; you could also have a node that writes a PNG file. So what we're going to construct is six nodes, and the code shows how this is done. First we create the graph, which is a top-level node; I'm not doing much with it in this instance, we'll return to it later in the presentation. Then I create new children of this graph, each with some properties. The two load operations I point at PNG files. The blur takes a standard deviation rather than a radius, which is the more mathematically correct way of describing a Gaussian blur. The first load is connected to the input of the blur; the blur's output goes into the input of the over node; the second load is connected to the auxiliary pad. I'm not going to go through it in detail, but you don't need to know the internals to use this API. Finally, the buffer is provided to the display so it can do its work, and then the program actually sleeps for 10 seconds. Let's continue with the example. The property system is very similar to how GObjects in GLib work, in that you have properties that can be set by name with values. And I'm writing the documentation in the header file, using roughly the same kind of syntax that people normally use with gtk-doc, Doxygen, or JavaDoc.
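The shape of the six-node graph just described can be sketched structurally like this (a hypothetical Python stand-in purely for illustration; the real API is C, and the file names are made up):

```python
class Node:
    """Minimal stand-in for a GEGL graph node: an operation name,
    some properties, and named input pads."""
    def __init__(self, operation, **properties):
        self.operation = operation
        self.properties = properties
        self.inputs = {}

    def connect(self, pad, source_node):
        self.inputs[pad] = source_node

# Mirror the example: two loads, a blur, an over, and a display sink.
load1 = Node("load", path="background.png")
load2 = Node("load", path="layer.png")
blur = Node("gaussian-blur", std_dev_x=4.0, std_dev_y=4.0)
over = Node("over")
display = Node("display")

blur.connect("input", load1)
over.connect("input", blur)
over.connect("aux", load2)      # the second image goes to the auxiliary pad
display.connect("input", over)
```

Processing then amounts to the sink pulling data through its input pad, which recursively pulls from the nodes upstream.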
There is a single header file, and I want it to be the reference documentation. On top of the core API there is also syntactic sugar to make it more convenient to program, so properties and processing can be set up with fewer calls. There is a way to describe exactly the same graph in an XML-based serialization format. This is probably not what's going to be the next-generation file format of GIMP, but it is something similar to it. There is discussion about standardizing an open, document-based format: a hierarchy of layers, with properties describing how they should be composited together, to be able to exchange things between applications. We do think we should be able to come up with something that wouldn't be too hard for others to implement and understand, and that will also meet the needs GEGL can provide for GIMP, so the GIMP developers have it out of the core of their data model. Structurally, this XML is what's used to implement the Ruby bindings, and more bindings are likely coming soon. Since this might actually end up as public API in some instances, we've changed our minds a few times, like: hmm, I don't like the name of that method. But I think the only thing we've done lately is renaming methods without changing what was actually happening, and we're starting to be quite satisfied. Behind this public API is the GEGL core, which provides a working implementation of what I've shown. It's kind of nice and small, but it will have to change as desires for more capabilities of the framework get funnelled into GEGL. It is, however, nice and easy to write plugins for, so the plugins I wish for can keep moving along even if I change GEGL's internals. And that's also where all kinds of other third-party dependencies come in.
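As a rough idea of what such an XML serialization of the demo graph might look like — the element names here are purely illustrative, not the actual GEGL schema:

```xml
<gegl>
  <node operation="display">
    <node operation="over">
      <node operation="gaussian-blur">
        <param name="std-dev-x">4.0</param>
        <node operation="load">
          <param name="path">background.png</param>
        </node>
      </node>
      <aux>
        <node operation="load">
          <param name="path">layer.png</param>
        </node>
      </aux>
    </node>
  </node>
</gegl>
```

Nesting here stands in for the pad connections; a layer-stack document format would add layer names and compositing modes on top of the same idea.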
The plugins depend on those; the GEGL core itself depends on very, very few things. One of those few dependencies is babl. The situation with pixel formats, as is often the case, is that there are many color models: you have YCbCr, you have Lab, you have HSV, you have YPbPr, okay, you have a lot. And then you have many data types: 32-bit floating point, 64-bit floating point, 16-bit floating point, 8-bit in different variations, 16-bit in different variations, and you can probably dream up more variations of data types. And then you're allowed to mix any color model with any kind of data type, with any permutation of the order of the components. And that's what people do; when you write conversion code yourself, you just wonder: why the heck are you using that kind of pixel format? In babl, a color model is registered against a reference of 64-bit floating point RGBA: this is how I convert the components of my color model to and from 64-bit floating point RGBA, which is 64-bit floating point as well. With that, babl knows how to convert 64-bit floating point RGBA into, for example, Lab. This might be quite slow, but at least it's able to do it. On top of that it has the ability to register shortcut conversions for the ones you're actually using quite a bit — for instance, converting 32-bit floating point, which is what we're using internally, to the gamma-corrected 8-bit RGB that needs to be displayed. So you can register a shortcut. But babl refuses to use that shortcut unless it passes a regression test: random data is passed through both the reference implementation, which is kind of slow, and the optimized version. If it passes the evaluation, okay, it will use it. But that's not enough, because it's kind of boring to write all these optimized versions. So babl will also chain together these shortcuts, and regression-test the chains too. This matrix I have here is from the current version of babl; I'll just zoom out.
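The shortcut-validation idea can be sketched like this (a conceptual Python sketch, not babl's actual machinery): a candidate fast path is only accepted if random data run through it agrees with the reference conversion.

```python
import random

def reference_u8_from_float(v):
    """Slow-but-correct reference: map a 0.0-1.0 float into 8-bit."""
    return max(0, min(255, int(v * 255.0 + 0.5)))

def passes_regression(reference, candidate, samples=10000, tolerance=0):
    """Accept the shortcut only if it matches the reference on random data."""
    rng = random.Random(0)
    for _ in range(samples):
        v = rng.random()
        if abs(reference(v) - candidate(v)) > tolerance:
            return False
    return True

# A correct shortcut passes; a subtly wrong one is rejected:
good = lambda v: int(v * 255.0 + 0.5)   # same math, clamping unneeded for 0..1
broken = lambda v: int(v * 256.0)       # classic wrong-scale bug
```

Chained shortcuts would be validated the same way: compose the candidate conversions and run the composite against the reference path.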
This is the current version of babl, with the source formats going down the left, and the destination formats going across in columns. A cell with a single dot is where there is a hand-coded conversion; elsewhere there may be a chain of conversions put together, which doesn't lose any quality but is faster than the reference implementation. When programming, sometimes something looks very, very slow, and there are profiling tools in GEGL to show me where the time goes. If it says that in one of the nodes most of the time is spent in babl — okay, uh-oh. Then I run some other introspection, producing output such as this, to highlight which conversions have been used. And if there is a blue dot in one of the blank cells, it tells me that here it's using a reference implementation. So I can implement either that conversion, or one in its neighborhood, and there will be more fast versions available for babl to use. One thing that GEGL currently does is its processing in chunks. When it's going to render a large image into its pre-rendering cache, it doesn't fill the entire large buffer at once; it says, I want to render this small rectangle, then this small rectangle, and so on. You probably saw, when I was doing the non-destructive editing, that things were updating in small chunks. The operations in the nodes know what data they need: if this rectangle changes in my input, it will impact this rectangle in my output. Using that, GEGL is able to determine the smallest set of buffers that need to be computed to satisfy a given rectangle at the sink. I'm not going to go into the details of how all of that works, but I am going to show the source code of a plugin. Here is the entire source code for the brightness-contrast operation. There is stuff to hide the GObject boilerplate code away, and there are two properties, which are the values brightness and contrast.
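The "what input do I need for this output rectangle" bookkeeping can be sketched for a blur (conceptual Python, not GEGL's actual region API):

```python
def required_input_rect(output_rect, radius):
    """A blur with the given radius needs the requested output rectangle
    grown by the radius on every side of its input."""
    x, y, width, height = output_rect
    return (x - radius, y - radius, width + 2 * radius, height + 2 * radius)

# Rendering a 64x64 chunk with a radius-3 blur needs a 70x70 input region:
required_input_rect((10, 10, 64, 64), 3)
```

Walking this relation backwards through the graph, from the sink's requested rectangle to the sources, is what lets GEGL compute only the chunks that matter.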
They have minimum and maximum values, default values, and a documentation string. The file also says what the superclass is and what the processing function is, and it includes the gegl-chant.h file, which contains the macros for the magic incantations that lead to GObject actually working, inheritance and all. What that header does is include this source file something like five times, giving you both the properties and the registration of the operation in the GObject type system. Drop another such C file into one of the operation directories, type make install, and you've installed a new plugin. This is the processing function for a point filter: a filter where each output pixel depends only on the corresponding original pixel value. It gets a linear input buffer, a linear output buffer, and the number of pixels it has to process, and it loops over all the pixels, here setting the alpha of the output to what the input was. It's working in terms of RGBA floating point; I could say here that this is not what this operation actually wants, and babl would just be used to convert things back and forth. The plugins mostly work with linear buffers; that GEGL internally uses tiles and such is hidden away, so the code should be readable. If you try to study some plugins in GIMP to learn something from them and change code in them, there are a lot of special cases and conditions, and the code is optimized quite far away from readability. I tend to deny performance optimizations in GEGL right now and say, well, that probably gets fixed when the architecture changes, or something. It's still quite fast, actually, and there's a lot that can be optimized within the current architecture. Having separate 8-bit and 16-bit versions of all these operations is in my opinion nonsense; I'd rather GEGL be usable for signal-processing research and similar, and provide better results.
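A point filter of the kind just described — brightness/contrast over RGBA-float pixels, with alpha passed through untouched — might look like this in Python (the linear contrast formula here is for illustration only, not necessarily the exact GEGL formula):

```python
def brightness_contrast(pixels, brightness=0.0, contrast=0.0):
    """Each output pixel depends only on the corresponding input pixel."""
    def adjust(v):
        # Scale around mid-grey for contrast, then shift for brightness.
        return (v - 0.5) * (contrast + 1.0) + 0.5 + brightness
    return [(adjust(r), adjust(g), adjust(b), a) for r, g, b, a in pixels]
```

With the default parameters the filter is an identity, and the alpha channel is copied straight through, exactly as the plugin loop in the talk does.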
If we want those, we can add some nodes that actually do it; it shouldn't be more sophisticated than that. Another way of adding new functionality, new nodes that can be used, is to create meta-operations: operations that are built up from other operations or nodes that already exist. The example here is a drop shadow, and this is just the introspection of the GEGL graph drawing it. The order isn't quite how I would explain it, but: the input is translated, blurred, and then the original is composited over it, over the background image, before it's passed on to some kind of output. For GEGL, this machine is a dual-core machine, but only one of the cores is active. I actually suspect that once I start trying to make it multi-threaded, it won't be much of an issue, because the operations themselves are already re-entrant for other purposes. But I'm not just going to start hacking on that with full force, because I fear that it will, as such things typically go, be more work than I suspect. What gets cached is the final rendered image, all the way from the loaded files up to this point in the graph; so when this slider is dragged, it only computes things from there and forward. For an interactive application like this one, you could extend the processing with something that uses a just-in-time compiler. GEGL used to use something called Gil, the general image language, which was kind of a pre-processor for generating 8-bit, 16-bit, and other versions of all operations. But that isn't the reason I'd want a just-in-time compiler; it would be more to be able to use the acceleration our architectures offer. In babl itself, RGBA is the intermediate conversion format, and for most applications, most color models people work with are three-dimensional, tri-stimulus color models. There's one exception, and that's CMYK, which has a
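A meta-operation like the drop shadow can be sketched as pure composition of existing primitives (hypothetical Python stand-ins; node and property names are made up):

```python
def node(operation, **properties):
    """Toy graph node: an operation name plus its connections/properties."""
    return {"operation": operation, **properties}

def drop_shadow(source, offset_x=5, offset_y=5, radius=3.0):
    """No new pixel code: translate + blur the input, then composite the
    original over the blurred copy."""
    shadow = node("translate",
                  input=node("gaussian-blur", input=source, std_dev=radius),
                  x=offset_x, y=offset_y)
    return node("over", input=shadow, aux=source)

graph = drop_shadow(node("load", path="logo.png"))
```

The meta-operation exposes its own properties (offset, radius) and forwards them to the internal nodes, so from the outside it behaves like any other single operation.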
four-dimensional space, and the problem is that you will have information loss going from CMYK to RGBA and on to another high-dimensional format. I'm not quite sure if this belongs in babl or not; I haven't decided. But ideally I want babl to be able to handle spectral data, and to represent images spectrally, interacting with libraries doing color management like little CMS — or we just keep using sRGB primaries internally in GEGL and babl. Yes, I do look into it, but I can't decide just from the operations and needs within GEGL; that's kind of a side effect of babl being a separate library with its own licensing and public API. Well, it might be interesting to write a small GEGL backend that only does that kind of processing: if you have a small core set of operations, then as long as you stay within the public API, you could create a different implementation that would be faster, which would be nice for previews and such. There's one thing I didn't add to this list which is rather important, and that's that all the operations, all the plugins, need to be able to understand what it means that the data has a gamma factor included, and how that is handled. For all point operations, like compositing, that's no problem; for a Gaussian blur, it's no problem; and with unsharp masking, which is actually just a macro operation on top of a Gaussian blur, you start having almost all the things you actually need for basic photo editing. And then we have live previews that are low resolution and fast, with the full-resolution image processed against that same data; that should be possible to change without changing public API. Then there are the abstractions on top of GEGL that we'll be trying to have, like the ability to actually do animations, animated properties, etc. That doesn't belong in the core; it belongs somewhere else. This application, by the way, is written in Ruby, and I'll probably put the source online when I have time to do it; it's kind of neat, but
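Unsharp masking as a macro on top of a Gaussian blur comes down to this per-pixel step (a sketch; `amount` is an illustrative parameter name):

```python
def unsharp_mask(value, blurred, amount=0.5):
    """Sharpen by adding back the detail the blur removed:
    out = in + amount * (in - blur(in))."""
    return value + amount * (value - blurred)

unsharp_mask(0.8, 0.6)   # pushes an edge pixel further from its surroundings
```

As a meta-operation, this needs only the existing blur plus per-pixel subtract/add nodes, which is why it falls out almost for free once the blur exists.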
mostly hacked together; it's just to demonstrate GEGL. There's a set of operations that would also be nice to have. Like loading RAW photo files: I'm using dcraw for that, just asking for a large chunk of data and waiting for another process to give me the whole image in one big chunk. I want more direct access to it, and better demosaicing; the demosaicing we have at the moment sucks more than most things you'll see, because it does some weird nearest-neighbour stuff. There is some noise reduction already implemented, though it wouldn't really hold up for real work. GEGL is quite close to having a full SVG 1.1 filter set; implementing the operations so it conforms to that would be nice, and plugins are needed to do exactly that — we did a prototype test of it, and it was nice. Rather long ago, before the GEGL API was stable, there was a separate project I coded where custom preview widgets and GTK plumbing could easily be done in a few commands. This week I started hacking together the FFmpeg input and output operations. One example of using those you saw earlier; what I wanted to test with the wrapped input and output operations was what you can actually do on live video. In the operations directory there is a workshop subdirectory, and those are all plugins that are not compiled by default. My workshop contains quite a bit more things than the one in Subversion, but there you also find ff-load and ff-save. What I just showed was a small Ruby script — I think it actually made a lot of noise. This is the Ruby script I used to generate the video file, that kind of cartoon-like version of Elephants Dream: it requires GEGL and sets up a small graph using an operation that is not in Subversion yet but will be checked in soon, an edge-enhancing blur filter, which, when run for multiple iterations, ends up actually being cartoonification. So I have a
source node, feeding into three such filters, and the ff-save at the end. It connects the graph together — which is actually just a chain — and then, 240 times, it sets the property "frame" on the source node and processes the graph; each time it does that, ff-save appends a frame to the video. When it is done, there is a video. Any more questions? [Question about babl.] The babl API and ABI are very, very small — I can show you the babl documentation. The essence of using babl is: you want babl to process data, and babl processes using a babl fish. You create a babl fish by saying what the source format is and what the destination format is. You can use all the named formats, and you can also construct new formats; if you start doing that, be aware that part of the API is marked as unstable, though I don't really see it changing. The model is that a format is a color model whose components each have a data type; once registered, a format can be used in place of those strings when creating the babl fish. Here comes the list of things: those are the data types in babl, and these are the color models, including premultiplied-alpha as well as non-premultiplied-alpha variants. The reason for that is that I had code that worked by just treating them as separate color models, and I haven't changed it since it works. From a structural point of view we should actually have a special property for exactly premultiplied alpha, but as long as I keep the core set of color models small, this will still work. And this is the same as you saw earlier with the supported pixel formats; this list is generated through introspection: what are the available pixel formats, and how are they composed of components. That part is probably something that will change more than the basic usage. But if all you had to do was use these formats and convert between them, it would be a very, very small API that is easy to use and would probably not change. Well, it was listed as kind of a
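The babl usage pattern just described — make a fish from a source and a destination format, then process pixels through it — can be sketched conceptually like this (a Python stand-in; the real API is C, and this toy table holds one conversion, ignoring gamma for brevity):

```python
# Toy registry keyed by (source, destination) format names; real babl
# constructs or chains conversions instead of looking them up verbatim.
CONVERSIONS = {
    ("R'G'B'A u8", "RGBA float"): lambda px: [c / 255.0 for c in px],
}

def babl_fish(source_format, destination_format):
    """Return a conversion between two named formats."""
    return CONVERSIONS[(source_format, destination_format)]

def process(fish, pixels):
    """Run every pixel through the fish."""
    return [fish(px) for px in pixels]

fish = babl_fish("R'G'B'A u8", "RGBA float")
process(fish, [[255, 128, 0, 255]])
```

The point of the fish object is that the (possibly chained, possibly shortcut) conversion is resolved once, and the per-pixel work is then just a tight loop.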
dependency of the operations. [Question about the video4linux operation.] With the video4linux operation you say: use this device, and the image buffer you get from there — the implementation of that is a little bit unstable. It's much easier with a video file, where you say: use this video file and give me frame 150, because that doesn't change; a video4linux device actually changes over time, continuously. So the handling of the composite image isn't quite as nice with video4linux as with a video file, and I haven't quite decided how to do that. Anyway — what? No, it doesn't have anything to do with those issues; it's an issue of the source changing over time. I want to be able to cache results: as long as no parameter is changed, I don't have to redo the work. But with a video4linux source you actually have to do processing continuously, which the graph isn't really designed to do; I'm trying to find a nice way to handle it. [Question: how hard will it be for GIMP to adopt this?] The GIMP developers will be among my customers, in that respect. GIMP shouldn't need to change much to use it, although parts of GIMP currently bypass the abstractions and directly manipulate the objects, and preferably you shouldn't need to do that. It's probably not fast enough yet, but some of those things will change once people start using it. [Question: how do the operations get at the data — do they go via babl?]
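The caching rule described — results stay valid until a property changes, which a live video source breaks by changing continuously — can be sketched like this (conceptual Python, not GEGL's cache):

```python
class CachedNode:
    """Re-render only when a property has changed since the last process()."""
    def __init__(self, render):
        self.render = render
        self.properties = {}
        self.cache = None
        self.dirty = True
        self.render_count = 0

    def set_property(self, name, value):
        self.properties[name] = value
        self.dirty = True          # invalidate: the cached result is stale

    def process(self):
        if self.dirty:
            self.cache = self.render(self.properties)
            self.render_count += 1
            self.dirty = False
        return self.cache

# A v4l-style source would effectively dirty itself on every frame,
# so the cache never helps it.
node = CachedNode(lambda props: props.get("frame", 0) * 2)
node.process(); node.process()     # the second call is served from cache
```

This is why a seekable video file fits the model (frame 150 is frame 150 forever) while a live device does not.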
The operations themselves — the point operations — all work on the floating point data handed to them. But when you are coding operations from scratch, you get GeglBuffer data into a linear buffer that you are going to work on, and when you request that linear buffer, you use babl formats. So you can say: I want this as 64-bit Lab in a linear buffer; then you change the data, and commit it back, saying: write this linear buffer to the GeglBuffer as the output, and this is the format of the linear buffer I am providing. So if you want floating point, babl probably doesn't have to do any work, because the buffer has got floating point data already. Almost all of the operations at the moment work in floating point RGBA, which means the least amount of conversions, and it gives the quality for the things I want to do: high dynamic range editing, whether of HDR files, OpenEXR files, or regular RGB data promoted to high dynamic range. [Question: what kinds of file formats are supported for storing data?] At the moment the focus for file formats has been mostly on input, so there are file loaders for PNG, JPEG, and OpenEXR, OpenEXR being the only one that's high dynamic range — though 16-bit PNG files are handled too. In terms of saving, there hasn't been much focus; GEGL supports saving PNG — I think it supports 16-bit PNG natively, I'm not sure — and it supports OpenEXR. I haven't started playing with creating animations yet, but it shouldn't be hard to create some interesting things for video. So my answer there is: at the moment there are source operations, and sink operations aren't that much work either; I think someone is working on TIFF loading at least. When it comes to saving, there has been more focus on getting to the screen than to the disk. [Question: are you interested in making your own file format, to keep all the range that you have?] Of course the XML files can be loaded back in, but when it
comes to a full format, there is OpenRaster, an initiative together with Krita: that would be a file format for an entire structured, layered image. JPEG perhaps for lossy layers, but mainly PNG, since it has both 8-bit and 16-bit RGB and grayscale; and it would have to reuse EXR in addition for high dynamic range layers. The structure of how all the things are connected together, and all the properties, would probably live in a specific file, similar to other OpenDocument-style formats. [Question: do you have an operation for generating HDR images from bracketed JPEGs?] If you're asking whether there is a way to create a high dynamic range image from bracketed shots — so that on a single occasion you shoot 4 or 5 frames at different exposures — both that operation, creating a high dynamic range image from low dynamic range images, as well as proper tone mapping, are things that are missing but can easily be added within the architecture.
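The missing bracketed-merge operation would conceptually do something like this per pixel (a naive sketch for illustration, not a statement of what GEGL would implement): scale each exposure back to scene-linear light and average, trusting mid-tone samples most.

```python
def merge_exposures(samples):
    """samples: (ldr_value, relative_exposure) pairs for one pixel.
    Returns an estimate of the scene-referred (HDR) value."""
    total = 0.0
    weight_sum = 0.0
    for value, exposure in samples:
        weight = 1.0 - abs(2.0 * value - 1.0)   # clipped/black samples get ~0 weight
        total += weight * (value / exposure)
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

# Two brackets of the same scene point, one stop apart, agree on ~0.5:
merge_exposures([(0.5, 1.0), (0.25, 0.5)])
```

Tone mapping would then be a separate operation taking the merged scene-referred values back down to a displayable low dynamic range image, which is exactly the pairing the answer above describes as missing.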