Let's start without further ado. My topic today is walking through the Linux-based graphics stack. There are actually other talks, one later today I think, that cover the same topic, so I decided to pick a number of, let's say, little-known subjects. So maybe it's not exactly what you expect or exactly what was in the description, but you're probably still going to learn some stuff, and that's what matters.

All right, so who am I? I'm Paul Kocialkowski. I'm an embedded Linux engineer at a company called Bootlin, where we do embedded Linux engineering. It's mostly a service company, so you can hire us if you have an issue to solve in that area. I've been contributing to free software projects, especially in the multimedia and graphics area. I worked on a number of V4L2-related drivers, things like camera sensors. I've also been working on an ISP driver for the Allwinner SoCs, and a little bit on the DRM side as well, so on the graphics side, mostly on display controllers and things like that, also on Allwinner, with the sun4i DRM driver. And I also wrote my own DRM driver, the LogiCVC DRM driver, which is a driver for a specific display IP found mostly in FPGAs. I'm based in Toulouse, in the south-west of France.

What is my topic today? Like I said, it's a big-picture overview of graphics, so kind of broad. We're going to try to touch on different topics related to graphics. I'll start with a general overview: if you're not so familiar with graphics, I'm going to give you some basic information and some hints on what this is all about. Then we're going to see a little bit about early graphics, so what happens at boot and how all of that works, and then move on to a running system, so the general, typical use case of graphics with Linux. Like I said, I'm going to shed some light on little-known aspects, things that are usually not really well known or hidden under the hood. And I'll also try to give you references to popular free software projects, things like Weston or the Linux kernel, to give you some pointers on where you can look if you want more details on a specific area that I'm mentioning. You can look at the files and the functions that I'm listing, so you can actually see how it's done in the actual code that runs.

So let's start with that big-picture overview. The first thing I wanted to mention is how pixels are stored. We have this notion of a frame buffer, which is really just a word to describe the memory area where the pixels are stored. And that memory area can actually live in different places. The first case is that we use system memory, the general DRAM of your system: we store the pixels there and then we have some hardware that's going to access that memory. But we can also have dedicated graphics memory. For example, your graphics card will have something called VRAM, video RAM, which is just memory dedicated to the purpose of graphics, usually attached to the GPU or to the graphics card. This is not handled the same way on the software side, of course, so you have to know about these differences. And there's also the fragmentation question.
So basically, your memory can be contiguous in physical memory, meaning that you have one large area of RAM that contains the frame buffer in a single block, or it can be fragmented, using paging. In the fragmented case you need a memory mapping unit to create a virtually contiguous buffer from physically fragmented memory: you have pages that are scattered here and there in your memory, and your mapping unit will present them as one contiguous virtual buffer. That can exist on the CPU side and on the device side as well, so you can have paged, fragmented memory for your GPU or for your display engine, for example. On the CPU side that requires an MMU, as you probably know, and on the device side you have the equivalent, called an IOMMU, which does the same task of translating between physical and virtual addresses.

One thing I also wanted to mention is that there is no metadata stored in your frame buffers, so you always have to know the format of the pixels, because of course there are lots of different formats to represent pixels. You also have to know something called the modifier, which is basically the order in which the pixels are stored in memory. That can be linear order, the typical top-left, line-after-line storage, but it can be something different for various reasons; you can also have compression, et cetera. So the point is that a frame buffer is really just the data, and you have to know, on the side, how that data is encoded. It's always information that you need to carry along with the pointer to your memory.

Then there's DMA. DMA is a way for your devices to directly access memory: the CPU doesn't have to fetch the data from memory and push it to the device; instead, the device can go and pull it from memory by itself. That is better for performance and latency, so it's one thing that needs to be implemented for efficient graphics memory access. There's also bus mapping, which might be an issue. For example, if you have a graphics card on PCI and it has memory on board, then your CPU needs a way to map that memory into its own physical or virtual address range to be able to access it, so you have to do those bus mapping operations. And another topic on memory is caching. For example, when your CPU writes to a frame buffer, it's possible that the write actually stays in the CPU cache. So you have to be very careful about cache synchronization points, to flush the cache at the correct time and to invalidate it before you read, at the correct time as well. These are all topics that you need to take care of when dealing with graphics memory. That just gives you an idea of the topics involved; that was quickly for the memory side.
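To make the "no metadata" point concrete, here is a minimal sketch, assuming an already-open DRM device `fd` and an existing GEM buffer object `handle` (the helper name is made up for illustration). Notice how the pixel data alone is never enough: width, height, pixel format and modifier all have to be passed to the kernel alongside the buffer itself.

```c
/* Minimal sketch: registering a frame buffer with KMS. Assumes `fd` is an
 * open DRM device and `handle` is an existing GEM buffer object handle. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_fourcc.h>

uint32_t register_framebuffer(int fd, uint32_t handle, uint32_t width,
                              uint32_t height, uint32_t pitch)
{
	uint32_t handles[4] = { handle };
	uint32_t pitches[4] = { pitch };
	uint32_t offsets[4] = { 0 };
	uint64_t modifiers[4] = { DRM_FORMAT_MOD_LINEAR };
	uint32_t fb_id = 0;

	/* DRM_FORMAT_XRGB8888: 32 bits per pixel, in a layout the kernel knows;
	 * DRM_FORMAT_MOD_LINEAR: plain top-left, line-after-line storage. */
	drmModeAddFB2WithModifiers(fd, width, height, DRM_FORMAT_XRGB8888,
				   handles, pitches, offsets, modifiers,
				   &fb_id, DRM_MODE_FB_MODIFIERS);
	return fb_id;
}
```

The four-entry arrays exist because planar formats (YUV, for instance) can spread one logical image across several memory planes, each with its own handle, pitch and offset.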
Now I'm going to spend a bit of time on display. There are a number of different components chained in a typical display pipeline. It all starts with a frame buffer: that's where we have our memory, and then we have a number of components that allow us to send that memory to an actual display.

The next one in line is called the plane. It's part of the mixing process: we have that memory and we might want to apply some operations to it. For example, we might want to rotate it or scale it, we can crop it, we can convert its format, things like that. And we can have one plane or multiple planes, so we can actually have multiple inputs to the display hardware, multiple frame buffers that will be composited together by the display hardware itself. The point is that at the end we want to have a single picture to send over to the monitor, because the monitor is a single surface; it doesn't have multiple layers. So if you have multiple layers as inputs, you need to composite them together with those planes.

Once they are composited, they go to the CRTC, which is really the timings generator. It makes sure that the data is sent to the next stage at a particular rate, following specific timings. Typically, nowadays, we send the image over to the display 60 times per second, and it's the CRTC component that is in charge of applying this refresh rate. After that we go on to the encoder, with a FIFO between those stages. The encoder actually shapes the data into the physical format of the display interface. For example, if you're using HDMI, you have an HDMI encoder, which receives the pixels and formats them for the actual physical lanes of HDMI, which use differential signaling, TMDS. So you need that encoder component. Then you might have another one, which is not always there but can exist: the bridge. This one translates between one display interface and another. For example, if you're using VGA, nowadays there's no actual VGA encoder in the display pipeline of your SoC, or whatever has a display engine; instead you'll have some output like HDMI or DisplayPort, and then a bridge that converts from that interface to VGA. That's also very common. And then finally we have our panel or monitor: the panel would be an integrated display panel in the embedded case, and the monitor is more like a full-fledged monitor with an HDMI cable, different inputs and things like that. So that's basically the chain for display.
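These stages map directly onto the objects that the kernel's KMS API (covered just below) exposes to user space. As a rough sketch, assuming an open DRM device node such as /dev/dri/card0, this is how the chain can be enumerated with libdrm:

```c
/* Sketch: listing the KMS objects that model the display chain above. */
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

void list_display_pipeline(int fd)
{
	drmModeRes *res = drmModeGetResources(fd);
	if (!res)
		return;

	printf("%d CRTCs, %d encoders, %d connectors\n",
	       res->count_crtcs, res->count_encoders, res->count_connectors);

	for (int i = 0; i < res->count_connectors; i++) {
		drmModeConnector *conn =
			drmModeGetConnector(fd, res->connectors[i]);
		if (!conn)
			continue;
		/* A connected connector reports the modes (timings) that the
		 * attached monitor or panel supports; the CRTC generates one
		 * of these. */
		if (conn->connection == DRM_MODE_CONNECTED &&
		    conn->count_modes > 0)
			printf("connector %u: preferred mode %s\n",
			       conn->connector_id, conn->modes[0].name);
		drmModeFreeConnector(conn);
	}
	drmModeFreeResources(res);
}
```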
Now let's talk a little bit about rendering. Nowadays we mostly use the GPU for rendering pretty much everything, both on your typical laptop or desktop and in embedded use cases. All the rendering operations will most likely be done by the GPU. A GPU is a very complex beast that can do 3D, but it can also be used for 2D use cases. And even when dedicated hardware exists on the side, for example for vector drawing, or pixel mixers as well, those hardware blocks tend not to be used, in favor of the GPU. That's typically because of APIs: we have great APIs for driving the GPU, but not so much for the other types of 2D rendering blocks. So people tend to always use the GPU because the APIs are easier to use. There's a downside, of course: the GPU consumes a lot of energy, it's quite complex, and it has significant latency, one could say, especially because GPUs work as programmable pipelines. A GPU is actually kind of a very specialized CPU with its own instruction set; it can even have multiple instruction sets. Along its internal rendering pipeline, it runs small programs that we call shaders, and we have different types of shaders for different things. Those programs need to be compiled from a source description that is provided by the application that wants to use the GPU. That's why it's quite complex: it's pretty much its own computer. So keep that in mind when you have to choose between a dedicated hardware block for something and using the GPU: the GPU is a huge piece that will solve your problem, but at a cost, which is power, latency and complexity.

Okay, so now we've covered display, rendering, and a little bit about memory. So let me tell you about the Linux kernel side, because that's why we're here. Let's start with the display APIs. The kernel has two display APIs. The first one is called fbdev, for frame buffer device. It's considered legacy; it's a pain in the neck; we really want to get rid of it, but we can't, for a number of reasons that I'm going to mention. Keep in mind that this is an old interface, mostly fit for hardware that came out in the 1990s; nowadays it's fully outdated and has so many issues, so please don't use it. Instead we have DRM, which is the newer kernel subsystem, the framework that has all the great features we want to use.

DRM is actually pretty big and has multiple parts. The one related to display is called KMS: that's the user space API we use to actually display things, to coordinate the frame buffers and the whole display pipeline I was describing. There's an extension of it called atomic, which allows grouping changes to the display hardware together so that they all happen at once, which wasn't possible with the previous KMS API. There's a block called GEM, which is in charge of memory management: all the things I mentioned on the previous slide about cache synchronization, bus mapping, et cetera, all of that is done in GEM. It also has user space APIs for things like zero-copy: if you want to import memory that already exists into your graphics device, you can use the PRIME API of DRM. There are also fences, which allow for explicit synchronization points between graphics devices, and we have the syncobj API as well. And finally there's the render part of DRM, which is a bit different, because it's not a unified API across all drivers: it's a common base building block on the kernel side, but on the UAPI side it's basically one API per driver, so each driver has its own API to render. That's because it mostly targets GPUs, which are highly specific, and it doesn't really make sense to have a generic interface for all of them. And there's a library called libdrm that you can use in user space to interact with all of these components, basically ioctl wrappers, so it's just a convenience library. That was for the kernel side.
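Before we leave the kernel side, here is a hedged sketch tying GEM and PRIME together: allocating a "dumb" buffer (the simple, CPU-renderable allocation that KMS drivers provide) and exporting it through PRIME as a dma-buf file descriptor that another device or process could import. The helper name is made up; error paths are minimal.

```c
/* Sketch: GEM allocation via the dumb-buffer ioctl, then PRIME export.
 * Assumes `fd` is an open DRM device with dumb-buffer support. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int export_dumb_buffer(int fd, uint32_t width, uint32_t height)
{
	struct drm_mode_create_dumb create = {
		.width = width,
		.height = height,
		.bpp = 32,		/* matches DRM_FORMAT_XRGB8888 */
	};
	int prime_fd = -1;

	if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
		return -1;

	/* GEM handles are only meaningful within this DRM file description;
	 * PRIME turns the handle into a dma-buf fd with system-wide meaning. */
	if (drmPrimeHandleToFD(fd, create.handle, DRM_CLOEXEC, &prime_fd) < 0)
		return -1;

	return prime_fd;
}
```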
Now we move on to user space, and we can talk about display server APIs. You've probably heard about the X11 and Wayland display server implementations. X11 and Wayland are actually protocols: they are a way for applications to coordinate submitting their buffers and getting input events from a central location, which is the display server, or display compositor. X11 is now considered legacy, and also has various issues; the up-to-date standard is Wayland, which supports lots of great features and actually does things the right way.

They also have associated libraries that your application can use to interact with those display servers. On the X11 side, there are Xlib and XCB: Xlib is the old one, XCB is newer and easier to use. On the Wayland side, the libraries are more like thin wrappers for the communication protocol between the clients, that is the applications, and the display server. We also have graphics toolkits like Qt, GTK or SDL, which basically wrap the display server APIs so that they're easier to use for applications. You don't have to care about the details of whether you're running on X or Wayland, or even Windows or something else. You can just use these toolkits, and they also provide, in most cases, some UI rendering abilities and input management. For example, we talk about widget-based toolkits: widgets are graphical elements that you can add to your application, which makes it much easier to build your graphical application than having to draw the pixels yourself.

Now, let's talk about 2D rendering. There's a lot to do in terms of 2D rendering, and a number of libraries exist for that, commonly used alongside the other software I mentioned. On Linux-based stacks, Cairo and Skia are there to do vector drawing, so really to draw basic shapes and things like that. Skia is the one developed by Google; for example, it's used to render the Chrome or Chromium web browser. You also have other libraries that let you manipulate pixels: if you need to do pixel format conversion, scaling, rotation or even more advanced operations, these libraries are really commonly used for that. Then there is font rendering, which is another big aspect of 2D rendering. Of course, if you want to create a user interface, you want to have fonts, or text, shown, so you need a library to do that. The two main ones are FreeType, which is the longest-standing one, and HarfBuzz, the newer one, also backed by Google. With those, you can basically give it some memory and some input text, and it will produce the pixels that correspond to that text with a given font. So that's font rendering. In terms of UI rendering, I mentioned the graphics toolkits already, like Qt and GTK, and you also have immediate-mode user interface libraries like ImGui or Nuklear. These won't deal with the display server side of things; they just produce a user interface in memory, so that's kind of a different approach to UI rendering. With all of these components, we have everything we need to actually create user interfaces. That was for 2D.
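As a tiny taste of CPU-side 2D rendering, here is a hedged sketch using Cairo to draw into an image surface in plain memory; the PNG write at the end is just an easy way to look at the result, but the same pixels could go into a frame buffer or be shared with a display server over SHM.

```c
/* Sketch: CPU-side vector drawing with Cairo into a memory surface. */
#include <cairo.h>

void draw_test_frame(const char *png_path)
{
	/* ARGB32 is Cairo's native 32-bit format, analogous to the DRM and
	 * Wayland XRGB/ARGB formats mentioned elsewhere in this talk. */
	cairo_surface_t *surface =
		cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 640, 480);
	cairo_t *cr = cairo_create(surface);

	/* Fill the background, then draw a vector shape on top. */
	cairo_set_source_rgb(cr, 0.1, 0.1, 0.4);
	cairo_paint(cr);
	cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
	cairo_rectangle(cr, 100, 100, 200, 150);
	cairo_fill(cr);

	cairo_surface_write_to_png(surface, png_path);

	cairo_destroy(cr);
	cairo_surface_destroy(surface);
}
```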
Now let's move on to 3D. We have a number of APIs that are used here; the most well-known and widely used ones are OpenGL and OpenGL ES. These are the APIs to actually do the 3D rendering, to configure everything needed to render a 3D scene. They're fairly high-level, meaning that you don't have to deal with all of the actual complexity of the rendering process: they abstract things a little and make life easier for you. You provide shaders, the little programs that run in the rendering pipeline, as source code in the GLSL format, the GL shading language; it looks a little bit like C, it's just a programming language for shaders.

Then we have EGL, which is basically in charge of the glue between the rendering side, so OpenGL, and the display server integration, so that you can directly display the buffers that you have rendered with the GPU using the display server API. And then the most recent one, well, it's not so new anymore, is Vulkan, which is a more advanced approach to 3D rendering. It's also an API that you can use in your program to generate 3D images, but it's lower-level, and it requires you to do a lot more things yourself than OpenGL does. Of course, that means it has greater potential for optimization, so it's preferred in a number of cases. And the shaders it takes are no longer in a source language; they are in a pre-compiled form called SPIR-V, which is an intermediate representation for shaders.

In terms of implementations, because these are APIs, not software implementations, we have Mesa 3D, the reference free software implementation for 3D rendering, and you can also have proprietary implementations, which are quite hardware-specific and provided by the manufacturer of the GPU, for instance. Those have various issues due to the fact that they are proprietary: security issues, maintainability issues, compatibility issues. So it's usually a big pain in the neck to deal with them, but they exist and we sometimes have to use them.

Here's a summary diagram of all of this. We have our applications, which might use the toolkits to talk to the display server; they might also use the 2D or 3D rendering libraries. The toolkits might use them too, because the toolkits are also in charge of rendering elements, and the display server might need them as well, for example to do the compositing of the different buffers from the applications; I'll get back to that a bit later if I have time. That's the user space side. On the kernel side, we find our three main APIs: DRM KMS for the display part, so the display server pushes its frame buffer to KMS; Mesa 3D uses DRM render to access the GPU rendering features; and both use DRM GEM internally to manage all the memory-related aspects. And then we have the hardware at the end: KMS drives the display, GEM manages the memory, and render drives the GPU. And of course, these components, display and GPU, do DMA to directly access the memory. So that's kind of the summary of the stack.
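Before moving on to the specific topics, here is what the EGL glue mentioned above can look like in practice, as a hedged sketch: it assumes a connected Wayland display as the native display (the same flow exists for X11 or GBM/KMS back-ends), the helper name is invented, and error handling is omitted.

```c
/* Sketch: EGL as the glue between GL rendering and the display server. */
#include <EGL/egl.h>

EGLContext setup_egl(void *wl_dpy, EGLDisplay *out_dpy)
{
	static const EGLint config_attribs[] = {
		EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
		EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
		EGL_NONE,
	};
	static const EGLint context_attribs[] = {
		EGL_CONTEXT_CLIENT_VERSION, 2,	/* OpenGL ES 2.0 */
		EGL_NONE,
	};
	EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)wl_dpy);
	EGLConfig config;
	EGLint num_configs;

	eglInitialize(dpy, NULL, NULL);
	eglChooseConfig(dpy, config_attribs, &config, 1, &num_configs);

	*out_dpy = dpy;
	/* Buffers rendered in this context are later handed to the
	 * compositor through an EGL window surface and eglSwapBuffers(). */
	return eglCreateContext(dpy, config, EGL_NO_CONTEXT, context_attribs);
}
```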
Now we're going to move on to some specific topics. I wanted to talk about the frame buffer console, which is kind of the first thing we see when we start a system. The idea is that we have a component called fbcon, which is basically a bridge between a generic TTY, or virtual terminal, interface and the frame buffer side. This is what allows you to have a terminal shown on screen. It can be useful for showing the kernel boot logs, for example, or when you have an encrypted root file system and you need a way to enter your passphrase: you need this fbcon bridge, which allows you to actually enter some input and see something on the screen. It's enabled with a kernel configuration option, and you can also display a logo there.

And in order to use that, you actually need a frame buffer device driver, so fbdev, not KMS. That's actually why the fbdev subsystem is still alive: we need it for fbcon. That frame buffer can come either from the boot software, which might already have allocated a frame buffer and configured the display, so you can basically inherit that; or you can have a dedicated driver for your hardware, which will be your fbdev driver. Of course, nowadays nobody writes new fbdev drivers, but it might be the case that you have one. And otherwise, we have a compatibility layer between DRM KMS and fbdev, which provides the fbdev API so that fbcon can use it to display something on the screen. So basically, if you have a DRM KMS driver, then you get fbcon thanks to this compatibility layer. I'm keeping some references here if you want to look a bit more at the details of how it works; I'll skip over that.

Quickly, the boot splash. The boot splash is basically an application that runs in the initramfs and just shows something on the screen, using one of the display APIs of the kernel, either fbdev or KMS. It's basically nicer for users to have some graphical feedback that the system is booting, rather than a text console, which is what fbcon gives you directly.

So, moving on to the running system. I mentioned fbcon, which is basically an in-kernel user of the frame buffer: fbcon binds to your frame buffer driver and shows its text console. But as soon as user space starts, it actually wants to display its own buffers instead, so we need a way to coordinate between fbcon and that user space client of the display device. This is done with the VT mode. Basically, fbcon is bound to a specific virtual terminal, and you can issue ioctls on the TTY to ask to detach it. There are two modes, text mode and graphics mode: in text mode, fbcon is attached to the virtual terminal, and in graphics mode it's detached, and a user space program can start using the display side.

The next thing that happens is VT switching. I mentioned that fbcon is the bridge between the virtual terminal and the graphics side, but you can actually have multiple virtual terminals, and you can switch between them using Ctrl+Alt and the function keys on your keyboard. Maybe you've seen that: at boot you start on tty1, but you have a number of them, so you can switch to tty2 with Ctrl+Alt+F2, et cetera. This is called VT switching. When you're only running fbcon, it's fine, because fbcon keeps track of which VT is currently active, so it works. But as soon as a user space program starts using graphics, when you want to switch to a different virtual terminal, you actually need cooperation from that program: you have to ask it to release the display so that fbcon can get back to being active on the new virtual terminal that you have switched to. This is done using signals. The application registers specific signals that the kernel will send to it, to ask it to release the display; and when you switch back to that application's virtual terminal, it gets notified that it can acquire the display again. So there is this whole switching mechanism, driven by signals that are registered with the VT_SETMODE ioctl.
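In code, that handshake looks roughly like this. It's a hedged sketch: it assumes the process already has the TTY open and has installed signal handlers, the helper name is invented, and the choice of SIGUSR1/SIGUSR2 for release/acquire is just a common convention, not something mandated by the kernel.

```c
/* Sketch: how a display server takes over a virtual terminal.
 * Assumes `fd` is an open TTY (e.g. /dev/tty2) and that SIGUSR1/SIGUSR2
 * handlers are installed to release/acquire the display on VT switches. */
#include <signal.h>
#include <sys/ioctl.h>
#include <linux/kd.h>
#include <linux/vt.h>

int take_over_vt(int fd)
{
	struct vt_mode mode = {
		.mode = VT_PROCESS,	/* kernel must ask us before switching */
		.relsig = SIGUSR1,	/* sent when we must release the display */
		.acqsig = SIGUSR2,	/* sent when we may acquire it again */
	};

	if (ioctl(fd, VT_SETMODE, &mode) < 0)
		return -1;

	/* Detach fbcon from this VT: no more kernel text rendering. */
	return ioctl(fd, KDSETMODE, KD_GRAPHICS);
}

/* In the SIGUSR1 handler, after releasing the device, the server
 * acknowledges the switch with: ioctl(fd, VT_RELDISP, 1); */
```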
So it means that you can basically have multiple graphics sessions running in parallel. This wasn't really the case with X, because of internal limitations, but now with Wayland you can have one Wayland session running on each of your virtual terminals, so you can have multiple ones in parallel. This is typically what happens with a login manager: the login manager pops up at boot, and then it starts a graphical session for your user on a different virtual terminal, and you can have many more like that. Here are the references again.

Now I wanted to mention systemd-logind, which is basically a way to allow display servers to run without increased privileges, because configuring the graphics pipeline actually requires root privileges. Some time ago, if you wanted a graphical session for your user, the display server needed to run as root, which was an issue for security. So now we have a component provided by systemd, called systemd-logind, which will open the file descriptor of the DRM KMS device and provide it to an application through a Unix socket: it passes the file descriptor over that Unix socket, and the application receives that privileged file descriptor without having to run as root globally. This is implemented with a D-Bus service, and one application at a time can request the DRM KMS file descriptor and then configure the display pipeline. It does the same with the VT operations: switching the mode from text to graphics is also a privileged operation, and systemd-logind will do that for you as well. You just use the D-Bus interface, and then your display server can stop running as root, which is nice for security, among other reasons. I'll skip the login manager because I'm running out of time, and we're going to move on to the display server side.

So, how do applications actually submit their pixels to the display server? Well, they don't actually transfer the pixels: we are not copying whole frame buffers from the application's memory to the display server. Instead, we want to use zero-copy buffer sharing, and most of that is done through file descriptors. Mainly, there are two approaches. The first one is shared memory, SHM. You basically allocate anonymous memory on the application side, you get a file descriptor for it, you fill in your pixels, and then you share that file descriptor with the display server, which reads from that memory. So you have one memory location, there's no duplication, there's no transfer, and then of course you have better performance, reduced latency, et cetera. SHM is the case where you draw the contents using the CPU, for example with one of the 2D rendering libraries I mentioned. But if you're using the GPU for rendering, then you're going to use a different API, and that's where EGL comes in: I mentioned that it was the glue between the rendering side and the display server, so this is typically where it comes into play. It allows you to share a reference to a buffer between your application and the display server. These ways of sharing buffers also imply that you deal with allocation through these same APIs.
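Here is what the SHM path can look like on Wayland, as a hedged sketch assuming the `wl_shm` global has already been bound from the registry (the helper name is made up). The compositor maps the same file descriptor, so the pixels written through `pixels` are never copied by the protocol itself.

```c
/* Sketch: zero-copy pixel sharing with a Wayland compositor over SHM. */
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

struct wl_buffer *create_shm_buffer(struct wl_shm *shm, int width,
				    int height, uint32_t **pixels)
{
	int stride = width * 4;		/* XRGB8888: 4 bytes per pixel */
	int size = stride * height;

	/* Anonymous memory backed by a file descriptor we can pass around. */
	int fd = memfd_create("wl-shm-buffer", MFD_CLOEXEC);
	ftruncate(fd, size);
	*pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	/* The compositor receives the fd over the Wayland socket and maps
	 * the very same pages. */
	struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
	struct wl_buffer *buffer = wl_shm_pool_create_buffer(
		pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);
	wl_shm_pool_destroy(pool);
	close(fd);

	return buffer;
}
```

Once filled, such a buffer is typically attached to a wl_surface and published with the wl_surface_damage and wl_surface_commit requests, which is exactly the mechanism described a bit further down.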
It also means that if you have memory that already exists, you'll need a special way to import that memory into your device, and then maybe share it with the display server. For example, in EGL you have an extension that uses the dma-buf mechanism, which allows you to take existing memory, described as a file descriptor, and import it; then you can share it with your display server, which will be able to read from that memory. The typical case would be, for example, a camera supported by V4L2, which is another kernel API: you do a dma-buf export on the V4L2 device, you get a file descriptor, and then you import it through this extension, through this call. What it does is import that memory into the GPU device, which can then access the memory that was used by the V4L2 camera; you don't need any copy, and you're again just addressing the same memory.

There can be some access issues, but it's typically not the case. For example, imagine your camera has an IOMMU and can deal with fragmented memory but your GPU can't — it's typically the other way around, but for this example let's put it that way. If your memory is fragmented and your GPU is not able to create a virtually contiguous view of that buffer, then there's no way it's going to be able to use that memory as an input for whatever it has to do. So that's the kind of incompatibility that can occur; but generally speaking, on well-designed systems, you'll typically find equivalent capabilities on the different devices, so what you're able to export on one side you can typically import on the other side. So that works.

That's all for sharing the memory between the application and the display server, and that's one thing you need to do; but you also need to let the display server know when something has changed on the screen, so that it can redraw that part. This is called surface damage. The application says that a given area of the buffer was updated, at some point in time after it has shared the buffer. On Wayland, the wl_surface_damage call is responsible for that. You can have multiple damage regions, because you have updated multiple parts of the buffer, and then you tell the display server that your changes are done and it can update the full view of the display with that buffer: that's the wl_surface_commit call, on Wayland as well. Here you'll find some references on how pixels are actually submitted in the different cases: the first two are the SHM case, then EGL, when you want to use the GPU, and then dma-buf EGL, when you import external memory into EGL.
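The dma-buf import path mentioned above can be sketched as follows, under stated assumptions: the fd comes from somewhere like a V4L2 VIDIOC_EXPBUF export, the format and layout are single-plane XRGB8888, and, in real code, eglCreateImageKHR has to be resolved with eglGetProcAddress() only after checking that EGL_EXT_image_dma_buf_import is advertised.

```c
/* Sketch: importing an existing dma-buf into EGL, so the GPU can sample
 * from, e.g., a camera buffer without any copy. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <drm_fourcc.h>

EGLImageKHR import_dmabuf(EGLDisplay dpy, int dmabuf_fd, EGLint width,
			  EGLint height, EGLint stride)
{
	PFNEGLCREATEIMAGEKHRPROC create_image =
		(PFNEGLCREATEIMAGEKHRPROC)
			eglGetProcAddress("eglCreateImageKHR");

	const EGLint attribs[] = {
		EGL_WIDTH, width,
		EGL_HEIGHT, height,
		EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_XRGB8888,
		EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
		EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
		EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
		EGL_NONE,
	};

	/* No client buffer is passed: the dma-buf fd in the attribute list
	 * is the buffer. */
	return create_image(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
			    NULL, attribs);
}
```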
So that's it; the next step is compositing. Like I mentioned, the display server gathers multiple buffers from multiple applications, and it is in charge of creating a single buffer that it can submit to the display, typically using KMS. Gathering and merging all of these buffers is called compositing, and it is a very demanding task; this is why we have those damage regions, to avoid redrawing the whole buffer at every frame and instead update only the minimal parts of it. And this typically has to be hardware-accelerated: it's so demanding that if you did it on the CPU, your CPU would be busy all the time just doing that, and it would barely keep up with the 60 Hz refresh rate of the display. So keep in mind that compositing is very demanding and you typically need a GPU for it, even though there are software implementations that can be enough for simple cases. References here as well.

And the final part I wanted to mention is page flipping. Page flipping is basically when we submit a finalized, composited frame buffer to the display pipeline. One effect that can typically occur is called tearing: it's when the display server updates the buffer while the display pipeline, so the hardware, is reading that buffer at the same time. You get a concurrency issue, and what you might see on screen is a half-updated buffer, which results in a visual glitch. Not great. So the solution we have to prevent that is called double buffering. We have two buffers: one that is currently active for scan-out, which is when the display pipeline is reading from the buffer, and a second buffer where we can do our composition, our rendering. When the composition is done, we flip those buffers: this is page flipping, when we exchange the roles of the two buffers. That can only happen when the display scan-out has finished sending the previous frame and has not yet started sending the next one, so this is when we page-flip; when scan-out starts reading the next buffer to send it to the display interface, it gets the buffer that was just updated and composited. This way we never get the half-updated state that causes tearing: we have one clean buffer used for display scan-out that we don't touch, we update the other buffer, and we switch them at the right time. This is done through the DRM KMS API again: we have two ioctls for that, page flip, which is a little bit legacy, and atomic, which uses the new atomic API, and we can also get an event to know when the page flip has actually occurred. So these are the code highlights.
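As a hedged sketch of the legacy page-flip path, assuming `fd` is the DRM device, `crtc_id` an active CRTC and `fb_id` the freshly composited frame buffer (helper names invented): the flip is queued and only completes on the next vertical blanking period, which is what avoids tearing.

```c
/* Sketch: double buffering with a DRM page flip and its completion event. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static void on_page_flip(int fd, unsigned int sequence, unsigned int tv_sec,
			 unsigned int tv_usec, void *user_data)
{
	/* The old buffer is no longer scanned out: safe to render into it. */
}

void flip_and_wait(int fd, uint32_t crtc_id, uint32_t fb_id)
{
	drmEventContext evctx = {
		.version = DRM_EVENT_CONTEXT_VERSION,
		.page_flip_handler = on_page_flip,
	};

	/* Queue the flip; DRM_MODE_PAGE_FLIP_EVENT requests a completion
	 * event on the DRM fd once scan-out has switched buffers. */
	drmModePageFlip(fd, crtc_id, fb_id, DRM_MODE_PAGE_FLIP_EVENT, NULL);

	/* Process the completion event, which calls on_page_flip(); real
	 * code would typically poll() the fd in its event loop first. */
	drmHandleEvent(fd, &evctx);
}
```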
And yeah, that's it for me. Sorry it was very dense; I hope you still got something out of it. So thank you everyone. Maybe we have time for one or two questions. Yeah, sure.

[Audience question, paraphrased: doesn't triple buffering reduce latency, by not needing to block, and by always having the latest completed frame moved to the display plane right before scan-out?]

Yeah, so there can be different situations for that. I was thinking of a case where your rendering is really not the bottleneck of your pipeline. If you can render very fast, rendering in advance increases latency, because the frame you produce was produced at a time that is further away from the point where it will be displayed than with double buffering. For example, in a video game, if you render three frames in advance, it means that if you move the mouse, it will take at least three frames to have an effect on the display; that's the sense in which it increases latency. In other situations, for example video decoding, it might be a good idea to decode in advance, because the frames you decode are not going to change; they come from a file or something like that. In that case, yes, if you decode in advance and decoding is your actual bottleneck, it might give you some extra room in terms of keeping up with the frame rate. If your decoding is a bit slow, then indeed you have more interest in decoding in advance and going beyond double buffering, with triple or four, five, six buffers, as many as you want, because then you can start decoding a bit before you start displaying, and then you have a kind of backlog of buffers that are ready and can be submitted directly, without having to wait for the decode to finish. So it really depends on what your bottleneck is and what your situation is. In the general case I would say that it adds latency, but from the decoding perspective it's a bit different.

Okay, I think that's a wrap. Thank you everyone.