Hello everyone! Today I'm going to talk about the Wayland compositor and how we are using it in the AGL project. First of all, a few words about myself. My name is Marius Vlad, I work for Collabora on the graphics side of things. I'm a regular Weston and Wayland contributor and I've been doing that for a while now. Within the AGL project I've been involved in supporting and maintaining the AGL Wayland compositor, and basically helping out with anything related to graphics.

More to the point, I'm going to talk today about what the AGL compositor is, take a look at an alternative means of doing window management using gRPC, which is Google's RPC framework, and look at how we integrated the AGL compositor with different runtimes like Flutter and Chromium, along with some of the past and current issues we've been having while doing that integration.

Now, what is the AGL compositor? It's a tiled Wayland compositor which doesn't allow users to move their windows, nor does it allow grabbing and resizing them. And why a different compositor, why not use Weston? Well, rather than taking an existing Weston shell and modifying it, or creating a new shell plugin, it is far more beneficial to be able to modify the entire stack if one ever needs to. But also because in Wayland the compositor owns the entire graphical stack and is at the same time the window manager, as well as dealing with everything that requires working with the hardware, whereas in X11, with Xorg, these components are entirely separate. At the same time this allows the compositor to be slimmer than going through some kind of intermediary API, and it allows other users to customize it as they see fit. So there is no need to use ivi-shell; instead we rely on just xdg-shell, which is far more widely used, tested and implemented in all toolkits and runtimes, so we get that basically for free.

Now, we leverage libweston to abstract anything related to backends, outputs and input processing. Libweston has dedicated, optimized paths for scanning buffers out directly onto hardware planes, or for discarding occluded surfaces which do not need to be painted, and all of that happens internally. That allows us to focus on providing window management functionality, rather than dealing with the intricacies of several software stack layers that we would otherwise need to manage.

In infotainment and IVI systems we don't interact with the environment the way we interact with desktop environments. So how do we convey to the compositor how we would like our information to be displayed? Furthermore, how do you tell the compositor which application to display at a certain time? For that we use private extensions, which are Wayland protocol extensions custom to the AGL compositor. There are two of them: one is called agl_shell and the other is called agl_shell_desktop. The first one, agl_shell, is used to manage panels and backgrounds and to perform window activation. Now, the client that implements this protocol ideally should be able to handle multiple top-level surfaces at the same time, from within a single process, within a single application. This is rather important, because I'm going to get back to it when talking about runtimes and toolkits, and we're going to see that it is a bit problematic.
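To make that concrete, here is a minimal sketch of what such a shell client can look like, assuming the C bindings that wayland-scanner generates from agl-shell.xml (agl_shell_set_background(), agl_shell_set_panel(), agl_shell_activate_app(), agl_shell_ready()); the app_id is just an illustrative placeholder, and all buffer and rendering setup is left out.

```cpp
// Minimal sketch of a shell client using the agl_shell private extension.
// Assumes the bindings generated by wayland-scanner from agl-shell.xml;
// surface buffers and rendering are omitted for brevity.
#include <cstring>
#include <wayland-client.h>
#include "agl-shell-client-protocol.h"   // generated by wayland-scanner

static struct agl_shell *shell;
static struct wl_output *output;
static struct wl_compositor *compositor;

static void global_add(void *, struct wl_registry *reg, uint32_t name,
                       const char *iface, uint32_t version)
{
        if (strcmp(iface, agl_shell_interface.name) == 0)
                shell = static_cast<agl_shell *>(
                        wl_registry_bind(reg, name, &agl_shell_interface, 1));
        else if (strcmp(iface, wl_output_interface.name) == 0)
                output = static_cast<wl_output *>(
                        wl_registry_bind(reg, name, &wl_output_interface, 1));
        else if (strcmp(iface, wl_compositor_interface.name) == 0)
                compositor = static_cast<wl_compositor *>(
                        wl_registry_bind(reg, name, &wl_compositor_interface, 1));
}

static void global_remove(void *, struct wl_registry *, uint32_t) {}

static const struct wl_registry_listener reg_listener = { global_add, global_remove };

int main()
{
        struct wl_display *display = wl_display_connect(nullptr);
        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &reg_listener, nullptr);
        wl_display_roundtrip(display);

        // One client owns the background and the panels on each output...
        struct wl_surface *background = wl_compositor_create_surface(compositor);
        struct wl_surface *top_panel = wl_compositor_create_surface(compositor);
        agl_shell_set_background(shell, background, output);
        agl_shell_set_panel(shell, top_panel, output, AGL_SHELL_EDGE_TOP);

        // ...and tells the compositor when everything is set up, after which it
        // can ask for a particular application to be activated by its app_id
        // ("org.example.navigation" is a placeholder).
        agl_shell_ready(shell);
        agl_shell_activate_app(shell, "org.example.navigation", output);

        wl_display_flush(display);
        while (wl_display_dispatch(display) != -1) { /* event loop */ }
        return 0;
}
```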
Additionally, we have another protocol called agl_shell_desktop, which is designed for clients that need to perform additional window management operations on their own surfaces or on others.

This diagram shows how the compositor stacks the different application surfaces with the help of these private extensions. On the left is the shell client with its distinct surfaces, each assigned to a different weston_layer. The weston_layer is a structure maintained by libweston; the compositor keeps several such layers, and they have different stacking positions on the z-axis. Applications, when they get activated, are all placed into the same layer, stacked one on top of the other. The protocol extension allows switching between these surfaces by removing the currently displayed one from that layer's list and replacing it with the one that should be displayed, activated, at that point in time.

Now, at the end of the day, why exactly do we need two different extensions? For one, we can't let two clients manage the same surfaces, and here I'm talking specifically about the panels and the backgrounds: allowing multiple clients to bind to the same protocol interface would mean one client could take over from the other, and we obviously don't want that. Secondly, we still want other clients to be able to perform window management functionality. For them especially, but really for all clients, we don't interact with the system the way we would in a normal desktop environment where we have input devices like mice and keyboards.

But, yeah, having these two private extensions complicates things quite a lot and has quite a few shortcomings. For quite a while there wasn't really an alternative, until we settled on a global IPC, which we started to use recently in the AGL project. Obviously, I'm talking here about Google's RPC, gRPC, and I'm going to talk about how we transitioned to it in the next part of my talk.

So, the AGL project decided to use an IPC across the entire system, and we landed on gRPC for it, alongside the Protobuf serialization format. There are a lot of pros and cons around it; it just felt the most suitable for us. Obviously that's not the case for a lot of platforms, but for this particular use case, window management, it is more than suitable.

To recap a bit of what I said previously, there are some constraints that come with these private extensions. The protocols require access to the Wayland connection, and they need to be implemented in each client; that's true for the agl_shell_desktop protocol. At the same time, the agl_shell protocol didn't gain all the new features that had been added to agl_shell_desktop, like state events for windows, and we needed to bring those over from agl_shell_desktop into agl_shell. Furthermore, having just one single protocol extension simplifies code handling quite a lot in the compositor. And because we now have gRPC, we can use it from different environments like Dart and JavaScript. So it seems we have plenty of reasons to abstract away this Wayland connection and instead expose an easier API which can be used from all kinds of language bindings. So, basically, what are the steps for doing that?
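Before getting into those steps, here is roughly what the end state looks like for an application once window management goes over gRPC: it just makes an RPC call, with no Wayland connection or protocol implementation of its own. This is a minimal C++ sketch; the service, message and field names (AglShellManagerService, ActivateRequest, app_id, output_name) are placeholders of my own, not the actual AGL .proto definitions.

```cpp
// Hypothetical illustration of an application (or launcher) activating another
// application over gRPC instead of a private Wayland extension.
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include "agl_shell_ipc.grpc.pb.h"   // generated from the (assumed) .proto

int main()
{
        // Connect to the helper client that proxies requests to the compositor;
        // the address is illustrative.
        auto channel = grpc::CreateChannel("localhost:50051",
                                           grpc::InsecureChannelCredentials());
        auto stub = agl_shell_ipc::AglShellManagerService::NewStub(channel);

        // Ask for an application to be activated by its app_id; no Wayland
        // connection is needed on this side.
        agl_shell_ipc::ActivateRequest req;
        req.set_app_id("org.example.navigation");   // placeholder app_id
        req.set_output_name("HDMI-A-1");             // placeholder output name

        agl_shell_ipc::ActivateResponse resp;
        grpc::ClientContext ctx;
        grpc::Status status = stub->ActivateApp(&ctx, req, &resp);
        return status.ok() ? 0 : 1;
}
```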
So, you need to migrate some of the functionality from agl_shell_desktop to agl_shell and then, yeah, get rid of agl_shell_desktop. We still need to keep the panels and the background surfaces managed by a single client, so we shouldn't allow multiple clients to do that. And rather than embedding a gRPC server inside the compositor and dealing with all kinds of multi-threading issues, we proxy the RPC through a helper client.

Now, migrating this functionality from agl_shell_desktop to agl_shell: I've done that already with version three of the agl_shell protocol, so we already have that in main/master. While doing that, I found an issue: we do not know reliably when an application has actually started and is ready to present content. But the compositor does know that, so it can relay that information back. With that, I would like to let the shell client control start-up, rather than having some implicit policy in the compositor, which it still does for some other runtimes; for the Qt platform this is already present in main/master.

The second part is to proxy this communication through a distinct client, rather than adding that code to the compositor itself. The way this works is that when the compositor starts, it also starts this helper client, which the compositor knows how to find. The helper does two things: on one side it implements the client side of the agl_shell protocol and binds to the compositor, and on the other side it implements the server side of the gRPC interface.

Finally, we still need just one single client managing the panels and the backgrounds. For that, I added a new interface extension. It lives in the same protocol file as agl_shell, and this additional interface acts as a token that allows the helper to keep using the agl_shell protocol (it's an interface, not a separate protocol) while making sure we don't mess up the panels and the backgrounds. Additionally, as a feature, any protocol updates that happen at some point are abstracted away from the clients that use gRPC, because they won't really care about that: they don't need to update their own Wayland protocol implementation, and we can modify the protocol much more easily without clients needing to follow along.

Here's how this looks in practice. There are a bunch of stubs: basically requests we have in the agl_shell protocol that don't really have an implementation yet; they are just empty stubs. We do have ActivateApp implemented, which activates an application, and there is a stream, a continuous flow of events from the compositor, for when window state events happen. These events are propagated to all clients listening for application status events, like termination, deactivation, hidden, and so on.

Now, the last bit I would like to talk about today is an issue slightly related to the private extensions, to agl_shell: the fact that it seems a bit problematic to handle multiple surfaces from within the same process with toolkits like Flutter and Chromium. We have more than one ecosystem, more than one platform, in AGL, and we let users choose a toolkit, which handles and abstracts the relevant API and the Wayland connection for them.
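Before moving on to the toolkits, here is the other half of that sketch: the helper client's gRPC server side, using the same placeholder .proto names as above. The Wayland side, forwarding to agl_shell_activate_app() and relaying window state events, is only hinted at in comments, and a stub queue stands in for the event source just to show the shape of the streaming RPC.

```cpp
// Rough sketch of the proxy's gRPC server side, with placeholder .proto names.
#include <memory>
#include <grpcpp/grpcpp.h>
#include "agl_shell_ipc.grpc.pb.h"   // generated from the (assumed) .proto

// Placeholder for events queued by the thread that dispatches the Wayland
// connection and listens for agl_shell window state events.
static bool pop_next_event(agl_shell_ipc::AppStateResponse *) { return false; }

class GrpcProxyService final
        : public agl_shell_ipc::AglShellManagerService::Service {
public:
        grpc::Status ActivateApp(grpc::ServerContext *,
                                 const agl_shell_ipc::ActivateRequest *req,
                                 agl_shell_ipc::ActivateResponse *) override
        {
                // Forward to the Wayland side, e.g.
                // agl_shell_activate_app(shell, req->app_id().c_str(), output);
                (void)req;
                return grpc::Status::OK;
        }

        grpc::Status AppStatusState(
                grpc::ServerContext *ctx,
                const agl_shell_ipc::AppStateRequest *,
                grpc::ServerWriter<agl_shell_ipc::AppStateResponse> *writer) override
        {
                // Stream window state changes (started, activated, deactivated,
                // terminated, ...) to every client listening for them.
                agl_shell_ipc::AppStateResponse ev;
                while (!ctx->IsCancelled() && pop_next_event(&ev)) {
                        if (!writer->Write(ev))
                                break;
                }
                return grpc::Status::OK;
        }
};

int main()
{
        GrpcProxyService service;
        grpc::ServerBuilder builder;
        builder.AddListeningPort("localhost:50051",
                                 grpc::InsecureServerCredentials());
        builder.RegisterService(&service);
        std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
        server->Wait();   // the Wayland dispatch loop would run on another thread
        return 0;
}
```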
But now the question is: how do we implement these private extensions? Where do we implement the shell client? And how do we communicate from the runtime, which is going to be written in something other than C, namely JavaScript, Dart or C++? So we have Qt, we have WAM, the Web Application Manager, together with Chromium, and we have Flutter. We could also have GTK, it's just that it's not really used in AGL, though it could be, and we can also use native Wayland code in C. I'm going to look a bit at each of these toolkits and say a few words about how exactly we gain access to the Wayland primitives.

For instance, for Qt and QtWayland, we have something called the Qt Platform Abstraction, and this is not specific to Wayland; it does the same kind of thing for other window systems like macOS or Windows. Basically, the shell client can retrieve any of the backing Wayland primitives. We are interested in the wl_surface and the wl_output, for instance, and we need those primitives in order to control them. Qt can handle multiple surfaces at the same time, and internally it takes care of how the repainting is done. And obviously the application is written in C and C++, so it's probably the ideal case for implementing the shell client; I'll show a small sketch of what this access looks like at the end of this part.

Now, the Web Application Manager and Chromium. WAM provides the application lifecycle for HTML5 applications; I would suggest watching Lorenzo's talks about WAM. Due to the way WAM is built, on top of Chromium, as an adapted version of the WebOS stack, it's not WAM that is the Wayland runtime, the one that maintains the Wayland connection, but Chromium. So there are several stack layers to go through from WAM down to the native Wayland connection, and we have no direct access to the Wayland primitives. Applications are obviously written in JavaScript, so we kind of need to plumb through the different layers of that stack until we gain access to the Wayland primitives. And, yeah, I've written here that we need a WAM instance per web page, and with all of the above we can't really manage multiple surfaces at the same time, but we do have some kind of workaround in place to handle that.

Finally, Flutter, which is our newest platform. Flutter has something called the embedder API, which is a way to let users create their own embedder for different platforms; there is an embedder for Linux, one for iOS, one for Android. So for AGL we needed to create a platform embedder, and we have that embedder. The good part is that it has native access to the Wayland primitives and it's the one that manages them. The bad part is that it cannot handle multiple surfaces at the same time, although that's obviously possible, and much easier to do than in Chromium. Applications are written in Dart, and from Dart we need to go through the embedder to gain access to the Wayland connection and then to the compositor, and obviously that requires some kind of proxy. Now, in order to run more than one application, we basically need an instance of that engine, of that runtime, per application, and that's how we end up with multiple applications in the system.

So what actually are the issues, to do a kind of recap? Basically, we don't really have direct access to the Wayland primitives, for instance for window management of top-level surfaces within the same process. This is problematic for both Flutter and Chromium.
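Here is the small sketch I promised for the Qt case: retrieving the backing Wayland primitives for a window through the Qt Platform Abstraction. The resource names ("display", "surface", "output") are what the QtWayland QPA plugin is generally expected to expose, but treat the exact names as an assumption and check them against your Qt version.

```cpp
// Minimal sketch: a Qt-based shell client asking the QPA plugin for the raw
// Wayland objects Qt created on its behalf.
#include <QGuiApplication>
#include <QWindow>
#include <QScreen>
#include <qpa/qplatformnativeinterface.h>
#include <wayland-client.h>

struct WaylandHandles {
        wl_display *display;
        wl_surface *surface;
        wl_output  *output;
};

static WaylandHandles native_handles(QWindow *window)
{
        // The QWindow must already be created so its wl_surface exists.
        QPlatformNativeInterface *native =
                QGuiApplication::platformNativeInterface();

        WaylandHandles h = {};
        h.display = static_cast<wl_display *>(
                native->nativeResourceForWindow("display", window));
        h.surface = static_cast<wl_surface *>(
                native->nativeResourceForWindow("surface", window));
        h.output = static_cast<wl_output *>(
                native->nativeResourceForScreen("output", window->screen()));

        // With these, the client can bind agl_shell from the registry and call
        // agl_shell_set_background()/set_panel() on its own surfaces.
        return h;
}
```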
And, for Flutter and Chromium, application code doesn't really have access to the Wayland primitives, and it requires plumbing through several software stack layers until we reach the Wayland connection. Now, this last part should actually be fixed by using gRPC, because applications won't need access to the Wayland connection at all; they can just use gRPC and be done with it, which is far easier. But we still have the problem of managing multiple top-level surfaces from within the same process.

Now, how are things currently running? Well, with Chromium and WAM we basically split the panel and the background into two different processes and orchestrate their start-up; they need to be started in a certain order to be able to implement the agl_shell protocol correctly. And the Flutter embedder just manages a single surface, which is the background surface.

So a possible workaround for both frontends, given that we have an engine instance per surface, would be to have multiple engines, one engine per surface basically, with three distinct drawing loops, one for each surface. Well, I've done that for Chromium as a proof of concept, but obviously it's computationally expensive and it requires a lot, lot more work than just that. And, yeah, there was a lack of interest.

So, rather than forcing square pegs into round holes and trying to adapt the toolkits and runtimes to support what would have been normal otherwise, we now have a new workaround in place: a way to designate a certain rectangular area as the activation area for applications, and have just one single surface to manage rather than multiple surfaces from the same process. Now, this picture shows it a lot better than I can explain it in words. On the left is how we would have multiple surfaces, and on the right is what I mean by that designated area. Basically, we have a single surface, and the client draws certain parts of it itself; those parts are never covered. We just tell the compositor: look, this is the background, and this area will be the activation area, where all the other applications will be placed on top.

We have this implemented for Flutter; in our demo, Scott Murray added this initially, and I've added a protocol update with a request for doing that directly. I've also suggested doing the same in Chromium, and I think we're going to try to go with that as well, to avoid having all these multiple applications each managing just one single surface. I guess that's it from my side. Yeah, if you have any questions... Thanks a lot for watching.