Okay. Hello. Welcome, everyone, to my little talk on DMA-BUF Heaps. When I was planning this, I knew it would be virtual, but I was expecting something a bit more like a moderated Zoom talk. It looks like what we've got instead is questions in a question box, so what I'd like to do is this: if you'd like to ask questions as we go, I'll go back after each slide and review a couple of them, so we can keep this as interactive as we can. Okay. So again, the talk is on DMA-BUF Heaps, which is our new little Linux user-space interface for allocating buffers. We're going to go over its uses, its history, and a little bit of the why, which we'll kick right off with. It works well to start with what we're trying to solve, I think, so everyone can get on the same page, and hopefully everyone can come to the same conclusion that this is the way we should be doing things. So, the way things evolved, we have all these different frameworks: DRM, V4L2, remoteproc, the Trusted Execution Environment framework. They've all come up independently, which is fine, and they all use buffers, which is fine. But at some point you would like to share those buffers in a zero-copy fashion, and that's where DMA-BUF comes from. It allows these frameworks to export the buffers they're working on as a file descriptor, a user-space file handle, which can then be shared and imported into other frameworks. So that's what that is. One of the things we've got going on, though, is that the memory areas being exported are not necessarily part of the device doing the exporting. For instance, a graphics card would be DRM, but when you get a dumb buffer, that's just going to come from a CMA area or from normal system memory, which of course does not belong to the graphics card; that is not GPU memory. So that's one issue we've had. There are also non-standard memory locations: SRAM, and TILER memory is one I'm familiar with. We'll go over these in just a moment. To export these, they need to be shoehorned into some existing framework. The example I'm really familiar with is the TILER memory space — again, we'll get into that — and it was basically put into the DRM framework because it needed to go somewhere. So we've got these memory spaces and no good way to export them. What do we do? That's the question we're trying to solve. So I'd like to go into a little history of a couple of the existing memory allocators. There are a lot of pieces and they all come together, so it's not very linear. We're going to talk about some existing memory allocators, then go into DMA-BUF and DMA-BUF Heaps. The one I'm familiar with is the Texas Instruments allocator, CMEM. We're not the only ones to do this: NVIDIA had NVMAP, Qualcomm had PMEM. And what they do is basically exactly what ION does: they allocate memory out of carveouts. The way CMEM worked was a little hacky. You could define the start and size of your memory areas as module parameters, so when you modprobe it you can give it all the different pools and things like that. Or you could define your areas in device tree — that was tacked on later, and Rob Herring would not approve of it, for good reason. It was basically a kitchen-sink API: you could allocate from pools, from blocks, everything that was needed. We would just keep adding to it, because it's an out-of-tree module.
We can do whatever we want, so we just kept adding to make it work with whatever we needed. It was extended over the years and grew a lot of v2s — a CMEM allocate 2 because we forgot something and had to make a new API. Then 64-bit: this was all designed around early OMAP platforms, which were all 32-bit, and we were handing out physical addresses, so when 64-bit showed up those addresses had to go from 32-bit to 64-bit and all the types had to be expanded. So we just kept adding and adding and adding. The next thing we did was start transitioning to ION. We thought, well, ION is going to be the future; it's a lot cleaner API. We got to work, and that's how I started getting involved in all of this: I was tasked with converting a bunch of applications over to ION. The feedback I got from a lot of the software teams I was working with was: well, ION is also not upstream. It's in staging. It also changes. Which basically means we're fixing a problem with another problem. They said, why should we move to something that's also not stable and also not upstream? Good point. So we started working upstream: let's get ION destaged. But let's talk a little bit about ION first. ION started out as an Android thing, and we started using it for everything. It's basically a generic buffer sharing and allocation framework. It started out doing a lot of things and gradually shrank over the years as DMA-BUF showed up. So let me skip right to DMA-BUF — and like I said, ask questions as we go; I want to make this as interactive as you want. DMA-BUF. Hopefully a lot of people are familiar with this. If you're not: it's the common mechanism to share buffers across devices, which is how it's usually presented. Basically, as one driver you can export your memory, and as another driver you can import that same memory. It provides a bunch of little APIs for that, and it's really the foundation for everything DMA-BUF Heaps does, so it's important we talk about it a little. And as we see here, user-space applications have started to use this more and more. OpenCL, for instance: if you can get a DMA-BUF handle, you can import it as a memory area. EGL accepts it: you can make an EGLImage out of a DMA-BUF, which then lets you pass it down to the OpenGL or OpenVG layer as textures or render buffers. V4L2 and the DRM framework — of course, that's where it originated — can all consume these, and a lot of those frameworks can also export them. So as long as someone's giving you DMA-BUFs, you can use them in more and more frameworks. So really, the question we're trying to answer is: who should be giving out these DMA-BUFs? Because most people just want to consume buffers and write to them; they don't necessarily want to handle the back end of how to export them, the cache operations, stuff like that. So that brings us right to ION. I don't want to beat up on ION here, because ION is really what evolved into DMA-BUF Heaps, but I would like to go over a couple of the pitfalls and shortcomings we saw while working with ION, so we know why we made the decisions we did with DMA-BUF Heaps. With ION we had lots of legacy. Pre-4.12 ION was a fair bit different from post-4.12 ION, which led to a lot of projects carrying a couple of sets of headers. That's also a problem with being in staging: you don't get your UAPI headers exported along with the rest of them, so a lot of projects just had to carry a copy.
So a lot of the things ION did started to get replaced by DMA-BUF. Things like synchronization, ownership of the file handles, stuff like that, could all be done with DMA-BUF, and so more and more of the core of ION got pulled apart. Another issue we had was the flags. ION basically has a central handler for the flags you pass in, but the actual heaps you were allocating from didn't have to respect them. We had a cached flag: you could say I would like cached memory, or I would like uncached memory. But if you're allocating from an SRAM area, for instance, you're always going to get uncached memory. So the flags didn't really do what they were supposed to do, and if you didn't know that, doing a cache operation on an uncached area could break coherency on some systems. We'd like to solve that. Excuse me. There was also only one device file, /dev/ion. With SELinux, and Android doing file-based permissions, you only had that one file's granularity. For instance, you might want to give a bunch of users the ability to allocate from a CMA region, but not from system memory, which is a valid thing to want: the ability to allocate from system memory is kind of like a fork bomb — it lets you simply drain all the system memory, and then bad things happen. We don't want that, so we want finer-grained control over which heaps certain programs can allocate from. So we need more files; a file per heap is what we went with. Android ION probably did too much with one interface. It had a pseudo constraint-solving interface: you'd give it a bit field of the heaps you could accept from, and it would try to allocate from the best one, although it basically just took the first one that matched. We don't want to try to do that in common code anymore. Same thing with cache management: it should be up to the individual allocator whether to do any cache synchronization operations, and when to do them — at map time, at allocate time — and when to zero the buffer. All of that the ION core tried to handle as if every heap looked the same, but they weren't the same. So we started to get vendors who would modify the core, and if you modify the core, you can't upstream it, because it's specific to your heaps. Lastly, there was a fair amount of DMA API abuse. Basically, things would work on Arm but wouldn't work anywhere else. Sync-for-device, for example, we would do on memory that didn't actually have struct pages backing it. It works on Arm for some reason — I never figured that out — but it doesn't work anywhere else, and there's no guarantee it works. We were abusing the API. Let's see. So we saw what was happening — I say we; it's basically me and John, and there were several others; the mailing list gives the history if you want to look into it. ION was basically shrinking, and we thought it was just going to get distilled down to its core operation, which is this API with user space to allow for allocations. So I first pushed a patch that stripped everything else out of ION, and I got a lot of pushback. And it was valid: if it's still called ION, if it still looks like ION, people are going to get confused. So we basically did a rebranding, called it DMA-BUF Heaps. It's a little more greppable than ION.
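To show just how thin the resulting interface is, here's a minimal sketch of an allocation from user space. The heap name "system" is only an example — heap names vary by platform — but the ioctl and struct come straight from the upstream UAPI header:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Allocate `len` bytes from a named heap; returns a DMA-BUF fd or -1. */
int alloc_from_heap(const char *heap_name, size_t len)
{
	char path[64];
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,  /* flags for the new buffer fd */
	};
	int heap_fd, ret;

	snprintf(path, sizeof(path), "/dev/dma_heap/%s", heap_name);
	heap_fd = open(path, O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);          /* the heap fd is only needed for allocating */
	if (ret < 0)
		return -1;

	return data.fd;          /* an ordinary DMA-BUF fd: mmap it, share it */
}
```

Everything after the allocation is plain DMA-BUF: the fd can be mmap'd, passed over a Unix socket, or imported into V4L2, DRM, EGL, and so on.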
But, you know, we're still trying to figure out if we want to call it DMA Heaps or DMA-BUF Heaps. Anyway, what this gives us is basically just a really thin shim layer that takes allocation requests from user space and passes them straight through to a specific heap. All the logic goes into the heaps, not into the core. Your heaps deal with everything — the coherency, whether the buffer is contiguous or not; all of that goes into the back end. And it allows for centralized exporters: instead of having exporters scattered across the individual frameworks, you have them all in one system-level central place. We've got several types of heaps, though, so we'll go into that next. I want to give some specifics here. One I'm really familiar with, like I said, is the DMM TILER. This is an IP block available on OMAP-class devices that sits out by the external memory interface, and it lets you do all sorts of cool things like rotating images basically transparently, which is super useful for Android and the like, where you're spinning your phone around and never know what orientation the buffer is going to be in memory. But you have to access it through a window: it's basically an IOMMU that sits out past the CPU, and even the CPU uses it through these windows. So the question is, how do we expose this? Back when this was originally implemented, in the 2011 timeframe, we really only had the DRM framework, so everything was just put into the DRM framework. You could call omap_bo_new() with a tiled flag and you would get a video/graphics buffer that was actually a window into the TILER space, and then you could export it with a DMA-BUF export. But wouldn't it be better if we actually had a device that was just a TILER device? That's what DMA-BUF Heaps allows us to do. Once that's done, we can get rid of all the OMAP-specific buffer handling in libdrm. So that's one use case. SRAM: we're working on an SRAM heap. The patches have been posted upstream — link at the bottom if you want to take a look. I think we all know what SRAM is and how it differs from our regular DDR space. SRAM has a couple of challenges when working with it. It's not in the normal kernel mappings — it's usually out in device memory, if it's memory-mapped at all. It's not cached, and caching isn't needed: on our TI K3 platforms, for instance, the SRAM and the L3 cache are actually the same memory, so there's really very little reason to try to cache it; you're not going to get a huge speed improvement. The SRAM locations actually already had an existing way to be exposed to user space: you could do read and write, kind of a peek-and-poke interface, but you couldn't mmap it, and you couldn't pass it up into other drivers. The other drivers would have to specifically know about the SRAM location and do this whole of_gen_pool_get(), gen_pool_alloc(), and then a virtual-to-physical conversion. It's not standard. Whereas if we expose the SRAM area using DMA-BUF Heaps, you get a DMA-BUF handle and you can do all the normal operations on it. You can mmap it, read and write it, and do all your coherency operations, and it'll do the right thing in the back end: it will not actually perform cache operations, because those would break your system.
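As a sketch of where that back-end logic lives, here's roughly how a heap provider plugs into the DMA-BUF Heaps core. This is a hypothetical SRAM heap, with the allocate signature as the API was first merged (newer kernels changed it to return a struct dma_buf *, so check your version); the allocation body is only a placeholder:

```c
#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/module.h>

/*
 * All policy lives here: this heap would hand out uncached SRAM and
 * would simply never perform CPU cache maintenance on its buffers.
 */
static int sram_heap_allocate(struct dma_heap *heap, unsigned long len,
			      unsigned long fd_flags, unsigned long heap_flags)
{
	/*
	 * In a real heap: carve `len` bytes out of the SRAM gen_pool,
	 * wrap it in a dma_buf via dma_buf_export() with ops that do
	 * the right (no-)cache handling, and return a new fd.
	 */
	return -ENOMEM;  /* placeholder */
}

static const struct dma_heap_ops sram_heap_ops = {
	.allocate = sram_heap_allocate,
};

static int __init sram_heap_init(void)
{
	struct dma_heap_export_info exp_info = {
		.name = "sram",          /* appears as /dev/dma_heap/sram */
		.ops  = &sram_heap_ops,
	};

	return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
}
module_init(sram_heap_init);
```

The core never sees flags semantics, cache choices, or contiguity; it only routes the allocation ioctl to the named heap.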
So one pushback we got when trying to upstream this particular heap was: how do we deal with the fact that there's already a way for user space and drivers to interact with the SRAM regions, and should we make them exclusive? That discussion is also on LKML if you want to look into the reasoning. I'm just going to read the questions for a second here. I want to go through a couple more types of heaps first, to get an idea of what's out there. Secure buffers — these are an interesting one. There are really two types of firewalls I've seen. In most systems there's the static type, where at boot time a chunk of memory is locked off and becomes your firewalled region: non-secure accesses cannot be performed on it, and it has a certain set of permissions. That would be a carveout heap; there's no mapping it from the non-secure side. Let's see. And then there are dynamic firewalls, which are arguably the better firewalls, though maybe not from a safety perspective — it's hard to tell, so everyone goes one way or the other. With these you can actually allocate from normal memory and apply the firewall later, so it can be any memory, and when you attach it to a specific device you can lock it to that device, or use whatever policy you want, because in the heap you can decide the policy. I've already converted the OP-TEE xtest from ION to DMA Heaps, and I'm going to post the patches for that soon. As you can see, it's mostly deletions, basically because they carried both the new and the old ION headers in the project. So let me look at the questions really quick here. Let me throw this question out — I think you all can see it. The question I'm answering is: why would Linux need access to the SRAM regions, and why aren't they just used for things like the bootloader, like our SPL? That's a good question. The answer, at least for AM5, is basically the DSPs. The DSPs are going to perform much better out of SRAM — it's something like a guaranteed two-cycle latency to the SRAM regions. So if you're going to take a buffer and do some operation on it, like an OpenCL operation, you're going to want to allocate from an SRAM region if you can. And actually I have a slide on that, so we'll do that one in just a moment. Okay, I'm at the slide — I guess it was next. With our OpenCL implementation, what we had before were these TI-specific allocate-from-DDR and allocate-from-MSMC calls — MSMC is our name for the SRAM space. You could then turn that into an OpenCL buffer, and this allows zero copy, which is what we want. Whereas if we have some standard way to allocate from these locations ourselves, like DMA-BUF Heaps, then we can simply pass the buffer in using something like cl_arm_import_memory — a much more standard API. So that's where you would want your SRAM. On AM5 it's not that big a deal, because I think you only have about a megabyte. But on something like a K2 device, where you have eight or more megabytes, you can put whole images you're operating on right in the SRAM regions, which is going to show some real improvements. And it should be up to user space to decide when to do that, because it's a policy thing. So we've already converted everything over: OpenCL on TI platforms is using DMA-BUF Heaps as the back end on all the 64-bit platforms. There's a lot more legacy stuff on the previous platforms, so that's still in the works — I can't guarantee you'll get it if you're on an AM3 or AM5 device.
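For reference, here's a hedged sketch of that import path using the cl_arm_import_memory extension. The property names and the convention of passing a pointer to the fd follow that extension's spec; whether it's available at all depends on your OpenCL implementation:

```c
#include <CL/cl.h>
#include <CL/cl_ext.h>   /* cl_arm_import_memory definitions */

/*
 * Wrap an existing DMA-BUF fd (e.g. allocated from an SRAM heap) in an
 * OpenCL buffer with zero copy. Returns NULL on failure.
 */
cl_mem import_dmabuf(cl_context ctx, int dmabuf_fd, size_t size)
{
	cl_int err;
	const cl_import_properties_arm props[] = {
		CL_IMPORT_TYPE_ARM, CL_IMPORT_TYPE_DMA_BUF_ARM,
		0,               /* property list terminator */
	};

	/* for the DMA-BUF import type, "memory" points at the fd itself */
	return clImportMemoryARM(ctx, CL_MEM_READ_WRITE, props,
				 &dmabuf_fd, size, &err);
}
```

The point is that the allocator (the heap) and the consumer (OpenCL) are now decoupled: user space picks where the memory comes from, and the runtime just imports it.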
One issue we did run into when doing our OpenCL conversion was that DMA-BUF only allows cache operations on the entire buffer: you can sync for device or for CPU, but only on the whole buffer (the interface is sketched at the end of this section), whereas OpenCL lets you create sub-buffers out of existing buffers. So if you only wanted to flush or invalidate half a buffer, you'd have to do the whole thing, which might not be what you want, and you could lose data. The way we work around this now is that whenever you make a sub-buffer, we actually copy it into a new buffer — which is kind of hacky, but it becomes technically correct. I've seen this issue pop up with some graphics-related DMA-BUF work too, where you'd want to flush only a given window of a larger buffer. So 2D flushing and strided flushing and things like that are also being worked on, and hopefully that will solve this issue too — it's in the works. Another issue we ran into is the remoteproc subsystem. This is one where we didn't really have a good way to allocate to begin with, but we could import buffers. The idea here is that if you're loading firmware, or passing a big buffer to a remote processor, what you need is a physical address: when your remote CPU gets its "please operate on this buffer" message, it's not going to see the buffer through the CPU's MMU; it's going to see a bus address. And this is supposed to be transparent to user space — you do not want user space knowing about physical addresses, for security, and it's just not a clean way to do things. So what we're looking for is a way to take a DMA-BUF handle, pass it in a remoteproc message — an rpmsg — and have it come out on the other side as just a physical address, after Linux sets up any IOMMU mappings. We actually have this in our little TI evil vendor tree, because we have remoteproc remote procedure calls: we can basically say "perform an operation on this handle", and on the remote side it gets the right physical address. I'm not really sure how we're going to do this upstream yet, but it's in the works. And I think it's a good example of DMA-BUF Heaps doing something that could not be done before, because we didn't really have exporters for the remoteproc subsystem. I think I've got a link to this at the end, but we've also converted a couple of our IPC examples to DMA-BUF Heaps. Most of these went in a two-stage process: we converted from CMEM to ION, and then, once we realized DMA-BUF Heaps was what we were going to use, from ION to DMA-BUF Heaps. I don't know how well you can see this image, but on the bottom one it was much easier: just 18 insertions, 18 deletions. It was basically just a rename — DMA-BUF Heaps really is ION under a different name, and it's pretty similar. So if you're already using ION, you're in really good shape when converting. Which brings us to Gralloc. This is another user. Basically you give it a bit mask, and it handles the constraint solving in user space, which is where it belongs — it's really not something that's ever been worked out well in the kernel. There have been attempts, but doing it in user space seems to be the right way to go. At least for our Gralloc we always just use the DMA-BUF CMA heap, which is fine. And it was really trivial to convert our Gralloc from ION to DMA-BUF Heaps; if you want to see that, it's all on git.ti.com/android.
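Before getting into cache management proper, here's what explicit cache maintenance on a DMA-BUF looks like from user space today — a minimal sketch using the upstream DMA_BUF_IOCTL_SYNC interface. Note that the sync struct carries no offset or length: it always operates on the whole buffer, which is exactly the sub-buffer limitation described above.

```c
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

/* mmap a DMA-BUF fd and write to it with proper begin/end CPU access. */
int fill_buffer(int buf_fd, size_t len)
{
	struct dma_buf_sync sync = { 0 };
	void *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, buf_fd, 0);
	if (p == MAP_FAILED)
		return -1;

	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
	ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);   /* begin CPU access */

	memset(p, 0x5a, len);                       /* ...CPU writes... */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);   /* end CPU access */

	munmap(p, len);
	return 0;
}
```

The ioctl calls through to the exporter's begin/end CPU access hooks, so an uncached heap (like the SRAM one) can correctly make these no-ops.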
Cache management — we talked about this a little bit earlier. This is kind of one of the big gaps John's working on. At least on the Android side, a lot of the software uses this producer/consumer pattern: buffers are given out to the different pieces of software, they use them, and then they completely tear everything down — they unmap, and the buffer gets passed back to be given out to the next stage. So when doing zero copy, it's really hard to know whether a device is actually done using a buffer, or whether it's getting passed right back into another device without the CPU ever touching it. Which means we end up doing a whole bunch of extra cache operations. ION was able to get around some of that: a lot of vendor ION heaps would just hack away the cache management, and that made their systems really, really fast, because you're not doing any cache operations in between handing the buffer to the different pieces. How do we do that in a generic, upstream way? That's still being debated. What we're thinking is to add some hooks to the DMA-BUF API so the different users can tell us whether they actually wrote to the buffer, and then the exporter can choose to do the right thing, because it has that extra information: it knows whether these particular devices are coherent with each other and the CPU never touched the buffer, so no cache operation is needed, or whether one is. John actually did a really, really great write-up for LWN, so these links are a good read. What's next? The final bits and pieces we're looking to fix up before we can call it done. Heaps are accessed through device files, and you have to know the name of the heap you're going to use and what type of heap it is, so it requires a bit of pre-knowledge: I have to know that this heap is contiguous and cached and secure and has these properties, and I have to know its full name to use it. That makes an ABI out of something that really shouldn't be one. So we're hoping to get some kind of interface in front of it — a library, like libion had — that does the sorting for you: you give it the flags, you tell it what you're going to do with the buffer, and it gives you the right heap back, so you don't have to hard-code anything (see the sketch after this section). Right now you have to hard-code which heap your application needs, and you just have to know it, which makes for non-portable code. We do have a couple of users of libion left that we need to convert over before we can call this done. GStreamer is one of them. We have some binary-blob GPU drivers — for ours I went ahead and did the groundwork and converted it over for us — but there's a lot of tutorials and reference material out there that still points to ION as the way to do things, so it's a gradual process. A couple more questions, but they look generic enough that I'll just answer them at the end. So again, a couple of missing bits — kind of a continuation of the last slide. Are we really ready to get rid of ION? What bits are we missing? We've got deferred buffer freeing: this was an optimization where you would release a buffer and ION would keep it around on a list, and when the system was idle it would come back and actually do the freeing and the page zeroing — a neat little way to speed up certain operations, but whether that's actually faster is going to be specific to certain systems.
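On that discovery point: here's a sketch of what a future libdmabufheaps-style helper might start from — simply enumerating the heaps the kernel exposes under /dev/dma_heap, rather than hard-coding one name. The selection policy on top (matching requested properties to a heap) is the part that still needs a real interface:

```c
#include <dirent.h>
#include <stdio.h>

/*
 * List the heaps this system provides. Each entry in /dev/dma_heap is
 * one heap device; names like "system" or "reserved" vary by platform.
 */
void list_heaps(void)
{
	DIR *dir = opendir("/dev/dma_heap");
	struct dirent *ent;

	if (!dir)
		return;   /* no heaps, or CONFIG_DMABUF_HEAPS disabled */

	while ((ent = readdir(dir)) != NULL) {
		if (ent->d_name[0] == '.')
			continue;
		printf("heap: %s\n", ent->d_name);
	}
	closedir(dir);
}
```

The name is the whole interface today, which is why the talk calls it an ABI that shouldn't be one.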
That deferred freeing, I personally don't think, is something that needs to be decided in the kernel, but others disagree, so we're looking into it. And debugfs stats: the regular DMA-BUF API does give you some stats, but ION also had some additional ones which were neat. I think they were used by some user-space programs to show, like a bar-graph kind of thing, how much memory you're using and how much memory different programs are using. And should we report how much memory is left? Same question: how do we know how much we've used and how much we have left? For certain heaps, like the system heap, where it's just system memory, all it's going to tell you is how much system memory is left. Carveout heaps would probably find it more useful, because you have a set amount, and you could have multiple carveouts throughout your memory space and want to pick the best one based on how much memory is left, or some other heuristic. That's why we would have it. The big missing piece: your heaps. This is just a call to action — there are lots of vendor ION heaps out there; convert them and upstream them. The sooner we do that, the sooner we can start finding problems with the DMA-BUF frameworks and fixing them, and finding the missing pieces. There are missing pieces: every time I convert some user of ION over to heaps, at least internally, I find some, and it's better to get them out of the way early. So: upstream your heaps. I guess before I go to questions, just a couple of references and acknowledgments. John Stultz — he was kind of the guy who saw this as the way to go, and he really took over everything; I think he went through sixteen versions of the patches upstream to get it merged, so this is really kind of his project. Sumit, the DMA-BUF maintainer — DMA-BUF Heaps is obviously based on DMA-BUF and uses it for most of what it does — is now the maintainer of DMA-BUF Heaps as well. And Laura — again, ION basically turned into DMA-BUF Heaps, and she maintained ION for years and years and years and did all the cleanups that led us directly to DMA-BUF Heaps, which is basically ION destaged. So those are the relevant folks, and you can go to the mailing list and see all the comments and contributions from a whole bunch of others. Okay, let's see, I guess I'll look at some questions and answer them, trying to go in order here. Let's see, there's a question: are there still things missing for DSP or PRU remoteproc support upstream? There are, but they're not related to DMA-BUF Heaps — a lot of interrupt controller stuff and the like still needs to get sorted out. Next question — let me publish this: is it not a good idea that the current owner of the DMA-BUF, either producer or consumer, does the cache operations? At least personally, I think it's best that the exporter does the cache operations. It has the list of attached devices; if the devices themselves were doing the cache operations, no one would have a full system picture of who's doing what. So a central authority on cache operations is what we're going for here. All the different devices have to do is attach, and signal when they're using the buffer, based on mapping and unmapping; everything else the exporter should handle. That's why central exporters are what DMA-BUF Heaps is going for — more centralized exporters, so we can handle the cache operations more efficiently.
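That attach-and-map flow on the importing side is worth making concrete. Here's a minimal sketch of a driver importing a DMA-BUF from a user-space fd, using the standard kernel DMA-BUF API — `mydev` and `mydrv_import` are placeholders for the importing device and driver:

```c
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int mydrv_import(struct device *mydev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);               /* take a ref from the fd */
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, mydev); /* tell the exporter who we are */
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(attach);
	}

	/* map into this device's address space; direction per our usage */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return PTR_ERR(sgt);
	}

	/* ...program the device with sg_dma_address(sgt->sgl)... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);
	dma_buf_put(dmabuf);
	return 0;
}
```

Because every importer goes through attach/map like this, the exporter really does see the full picture of who is using the buffer.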
If you're talking about inter-device cache operations — GPUs and remote processors that have caches of their own — then note that everything I've said so far about caching is from the perspective of the processor running Linux. How the device caches synchronize with each other is up to them, and it's usually handled with fences and fence syncing: you put in a fence, and the devices themselves ensure their caches are in the right state and the memory is visible to the other devices before signaling that fence. Let's see what else we've got — let me publish this question. Does this mean that TI will be dropping CMEM and changing to ION? Yes, in a way — it's not called ION anymore, it's called DMA-BUF Heaps — but yes, we're dropping CMEM. We've done it for our newer platforms, AM6 and J7. For AM5 — which, like you mention, is on the BeagleBone AI — that's still a work in progress. I've seen it done, but it won't be in our next SDK; it'll probably get pushed out to the one after that. But yeah, it's going to get rid of a lot of the carried patches we had to make the CMEM allocator work. We're going to upstream all of that, and it'll be the DMA-BUF Heaps allocator, used by the DSPs and the EVE processors. More questions? Let me publish this question. So: security implications of DMA-BUF to user space. I'm not sure if you meant DMA-BUF Heaps to user space, because DMA-BUF is already a user-space thing. If you're talking about DMA-BUF Heaps, then yes, there are some security issues to worry about. The different heaps all need to behave correctly. They need to zero out their memory, so the next user can't go and look at what the previous user just wrote there. That takes a lot of time — every time you reallocate from a heap, you have to go and clear all that memory out — so some folks just don't do it, which is a big security problem. As far as reliability and security go, as I was saying before, if you can get at the DMA-BUF heaps, you can drain the system of memory and do a denial of service against whoever the system integrator intended to use that memory. For less complicated SoCs it becomes less important, because you're basically just going to be using system memory anyway, and there's a lot less buffer passing and zero copy going on. As for RISC-V cores: that really doesn't change much. Whether you use Arm or RISC-V as your main processor, it's much more dependent on the peripherals you have. If you have a camera that needs to feed a GPU that needs to feed some kind of AI accelerator and then out, you're going to want zero copy across all of those, and that's where DMA-BUF Heaps is going to be useful: no particular one of those devices has to handle the allocating. You have a third-party allocator that feeds all of those different accelerators and does the right thing as the buffer passes between them. So let's see. Could you point to the TI tree for sharing buffers? Get on Slack and ask me that, and I'll get the links all together — I didn't realize I hadn't put them here on this last slide. My bad. A question was: will the slides be available online? I think that one can be answered by our moderator; I'm pretty sure these are all going to be online somewhere.
My understanding is this will all just end up on YouTube and the slides will be linked somewhere. If not, like I said, find me on Slack and I can just send you the slides. Okay, so this question is asking about the different types of cache memory you're going to get with DMA-BUF Heaps. DMA-BUF Heaps does not define a specific type of memory; the individual heap you allocate from decides what type of memory you get. So if you want a cached buffer, allocate from a cached-memory heap. We try not to enforce that in the core. That was something ION did: you could pass in flags and it would try to give you the right memory, choosing whether it should be cached, write-back, write-through, write-combined, whatever. We try to avoid that, simply because a lot of the time you can't actually change the caching type a memory location has. If it's in the kernel's linear virtual address space, you'd have to actually go out and zap all the pages on Arm; otherwise you'll have mixed attributes in your MMU, and that can cause all sorts of problems. So for performance, if you want to do the cache maintenance yourself, allocate from a cached heap. DMA-BUF already provides an ioctl for sync begin and end, which basically calls right through to the exporter's begin/end CPU access hooks. So you'd mmap the buffer and then run the sync operations as you choose, and that takes care of it for you. Let's see if we can get this next question — let me just publish this; I think I'm doing this right. Does DMA-BUF Heaps have an equivalent user-space library, like libion? No, we do not — not yet. So far, because all we provide is a device file with a single ioctl to operate on it, we've not needed one. When it gets more complicated, I think we're going to make a libdmabufheaps, but right now it's just not been needed, because it's just the one ioctl. One benefit of having a library, though, is that you wouldn't have to include a kernel header in your code, which can change over time; the library would abstract that for you, which is something we're looking into. So I'll do that next. That looks like all the questions, so I guess I'll be hanging out in the Slack channel. Thanks for watching. Let's see... not really sure how to shut this thing down. But yep, we're still here.