Hi, we're here from Argo and Tekton to talk about pushing the boundaries of what's possible on Kubernetes.

So, who are we? Well, I'm Alex, I work at Intuit. I specialize in Kubernetes and kind of OLTP stuff, and I'm the lead engineer on Argo Workflows, Argo Events, and Argo Labs Dataflow. And I like coffee and cycling; ideally, a bike ride to a nice coffee shop is my ideal Sunday. Yeah, and I'm Jason Hall, I work at Red Hat. I've been involved with various developer tools for eight or nine years now. I helped co-found the Tekton project, and I like pizza and sitting, and ideally sitting while eating pizza.

And what do we do? Like Alex said, he works on Argo Workflows. Argo Workflows is a general-purpose workflow execution engine. Steps in Argo run sequentially, and tasks in Argo run in a DAG, or directed acyclic graph. It's built on Kubernetes, and it has a cute logo. Tekton is also a continuous-delivery-focused workflow engine. Steps run sequentially, tasks run in a DAG. It's built on Kubernetes, and we also have a cute logo; they're friends.

Why did we decide to build on Kubernetes? When building a workflow service there are two... well, there are a lot of problems. Two of the biggest are node management, just managing the resources that will be doing the work, and workload scheduling: when a user's request comes in saying "I want to do this work," putting it onto one of those nodes to do the work. Well, if you're at KubeCon and know anything about Kubernetes, Kubernetes is very good at both of these, and so by building on Kubernetes we don't ever have to... well, we do have to deal with them sometimes, but mainly we just get to offload that and make it Kubernetes's problem.

Kubernetes also has this great feature, custom resources. Custom resources let us build flexible, extensible APIs inside the Kubernetes API server and ecosystem, and we basically get RBAC for free. RBAC is another huge source of work if you don't have it already built for you, and Kubernetes built it for us, so we love it. And then there's the long tail of community stuff. That's all of you, that's everyone outside, that's everyone watching all of this later. There's a huge community around Kubernetes that provides people looking out for the security of the platform, the performance of the platform, observability of the platform, portability across different architectures and platforms, client tooling for all of these things, multi-tenancy concerns, policy enforcement, and tons and tons more. All of these are things that, if we didn't build on Kubernetes, we would have to build ourselves, which would be a massive amount of work and largely pretty wasteful. Instead, we get to sit back and work on features while Kubernetes improves underneath us every day, thanks to all of you.

However, Kubernetes was not really designed for this. It was designed more for long-running serving workloads: things like Deployments and Services and Ingress, the usual suspects. Those assume long-running pods; we have relatively short-running pods. They assume long-running containers; we have fairly short-running containers. They assume no control over the lifecycle of containers starting and stopping, they just sort of assume they run forever; we need to start and stop these things a lot. And they have no convention for passing data from one pod to another, because pods are isolated on purpose in the Kubernetes ecosystem; we need to pass data between these things a lot.
So these are the four main points we're going to talk about: container lifecycle, starting and stopping, especially with regards to how sidecars are involved; container IPC, inter-process communication, talking between containers in a pod; cross-pod communication, talking between pods in a workflow; and custom resource proliferation, which turns out to be a gigantic pain. And with that, I will hand it off to Alex to talk about container lifecycles.

Thank you, Jason. I just asked Jason if we could do something... oh, I've completely forgotten. No, it's fine. Okay, so put your hands up if you're an Argo Workflows user. Yeah, come on. And put your hands up if you're a Tekton user. Let the record reflect they had the exact same number of people. All right. Yeah, the great thing about this talk is we get to talk about things that we have in common, which is brilliant. And one thing that we have in common is the container lifecycle.

Now, both Argo and Tekton execute processes in graphs, in a directed acyclic graph, and that's not something that Kubernetes supports out of the box. And to make things more complicated, we can actually do things like modifying the graphs at runtime. So whereas normally you specify in your pod spec which containers you want to run in that pod, with our workflows and our graphs we don't necessarily even know what the containers are going to be when we start. And we have some nice simple ones, but we also have some pretty big graphs; we run 20- or 30,000-pod graphs sometimes, and this is an example from the community. I can't even make out how many there are there.

So what does Kubernetes provide out of the box for startup? Well, it kind of provides two options. The first option is init containers. It's pretty simple, but it does fulfill some simple and useful use cases: it lets you start a whole group of containers that have to run to completion before you then run your main containers. The other option is effectively to run one container per pod and then use the Kubernetes API to order the pods' execution, which is expensive, because pod creation isn't as cheap as you might imagine.

And the other thing we need for lifecycle management is ordered container shutdown. What we want is to shut down the containers in a specific, controlled order, stopping container B before stopping container A, and when they're shut down, they need to be able to do some graceful termination: clean up, flush their buffers, that kind of stuff. And that shutdown really needs to work with standard Kubernetes shutdown. The way Kubernetes shuts down a pod is: first you get a SIGTERM to the root process, then you get 30 seconds (configurable, yes, I know), followed by a SIGKILL, a hard shutdown. So you need to work nicely with SIGTERM.

So the primary way that we do this is what we have termed the commandlet pattern, which we termed Wednesday last week; that's why you've not heard of it. You probably know it better in Tekton as entrypoint rewriting, or in Argo Workflows as the emissary executor, which is very heavily influenced by Tekton. How does this work? Well, it's actually pretty simple. It's described completely by this YAML on the right-hand side, but let me walk you through it first. What we want to do is replace the user's command with our own command that forks the user's command as a subprocess.
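To make that concrete, here is a minimal sketch of such a wrapper in Go. This is an illustration, not Argo's or Tekton's actual code; the /shared/start marker file and the argument convention are hypothetical.

```go
// entrypoint.go: a minimal sketch of the "commandlet" wrapper. The start
// marker path and the argument convention are hypothetical illustrations,
// not Argo's or Tekton's actual wire format.
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		os.Exit(64) // expected the user's command as arguments
	}

	// Ordered startup: wait for a (hypothetical) marker file before
	// starting the user's process.
	for {
		if _, err := os.Stat("/shared/start"); err == nil {
			break
		}
		time.Sleep(100 * time.Millisecond)
	}

	// Fork the user's original command as a subprocess.
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout // these could be captured or teed instead
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		os.Exit(1)
	}

	// Forward SIGTERM and SIGINT to the subprocess, so graceful shutdown
	// works even though the user's process is not PID 1.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		for s := range sigs {
			_ = cmd.Process.Signal(s)
		}
	}()

	// Wait, then propagate (or remap) the exit code.
	if err := cmd.Wait(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			os.Exit(ee.ExitCode())
		}
		os.Exit(1)
	}
}
```

The wrapper becomes the container's real entrypoint, and the user's original command simply becomes its arguments.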
And the way that we do that is: we have an init container, and both the init container and the main container share an emptyDir volume. The init container just copies our binary onto that volume, and then it's available to us in the main container, so it doesn't need to be baked into the user's image (the pod wiring is sketched at the end of this part).

This turns out to be really handy; it gives us loads of things that are really useful for us. It allows us to do that ordered startup, because the entrypoint, our new command, can wait for some condition before the subprocess is started. We can use it as a way to signal the subprocess, to send SIGTERM and SIGKILL when we want to send them. We can capture the subprocess's stdin, stdout, and exit code, and you can even do things like remap the exit code to a different exit code if you want to. We can also wait for a condition before shutdown, which is pretty neat for debugging: the user's main process can finish, that subprocess exits, and the container is held open, still running, while you connect with kubectl exec to do some debugging. And the great thing, something that's normally very difficult with two containers in a pod, is access to the same file system: you can get at any files on the user's file system, because your process is actually running inside their container.

It has a couple of caveats, a couple of downsides. If you're going to delay the start of that subprocess based on some condition, then that container is still running, it's consuming memory, and it's costing you money as a result. How much is set by your resource requests. Typically this kind of commandlet doesn't need very many resources at all; it can be pretty skinny, pretty lightweight. But your main container may be doing some really heavy lifting and might have high CPU, memory, or GPU requirements, so that could be costly, and you can mitigate it by tuning your resource requests. And the other thing it doesn't allow us to do: we're still tied to pod specs, so we can't dynamically add containers to the graph.

Originally this slide's title was just "shutdown," but it's really shutdown and SIGTERM, I think. So how can you ask a process to gracefully exit? You send it a SIGTERM, and there are two ways to send that signal today. One is to delete the pod, which has the downside that the pod is deleted: you can't inspect the pod afterwards, you can't look at its status once it's deleted, and unless you've archived the logs, you lose access to the logs. The other is to use kubectl exec to run kill 1. I hope some people can see the problems with that. It sounds like a great way to kill a pod, but it has three big drawbacks. Most pods don't have a kill binary on them: they might be scratch or distroless images, because security is more important now, I think, for many people than it was a year ago, so those images are much more common; or it could be something like Debian, where kill isn't a binary, it's a shell built-in. It doesn't necessarily work particularly well with shell scripts; it can be hard to kill a shell script and have it gracefully shut down. And finally, if you're running this non-root, your root process won't be started as PID 1, so kill 1 either won't do anything at all in those containers or will, I think, just return an exit code.
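Before the mitigations, here is the startup half of the pattern pulled together: a sketch of the pod wiring using the client-go types, where the image names and paths are hypothetical placeholders.

```go
// A sketch of the pod wiring for the commandlet pattern: an init container
// copies the wrapper binary onto a shared emptyDir, and the main container's
// command is overridden to run it. Image names and paths are placeholders.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "step-pod"},
	Spec: corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name:         "tools",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		InitContainers: []corev1.Container{{
			Name:  "copy-entrypoint",
			Image: "example.com/entrypoint:latest", // hypothetical image holding the wrapper
			// Copy the statically linked wrapper onto the shared volume, so it
			// doesn't need to be baked into the user's image.
			Command:      []string{"cp", "/bin/entrypoint", "/tools/entrypoint"},
			VolumeMounts: []corev1.VolumeMount{{Name: "tools", MountPath: "/tools"}},
		}},
		Containers: []corev1.Container{{
			Name:  "step",
			Image: "example.com/user-image:latest",
			// The user's real command becomes arguments to the wrapper.
			Command:      []string{"/tools/entrypoint"},
			Args:         []string{"/bin/my-build-step", "--flag"},
			VolumeMounts: []corev1.VolumeMount{{Name: "tools", MountPath: "/tools"}},
		}},
	},
}

func main() { _ = pod } // construction only; creating it is one client-go call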
So here are a couple of mitigations for that. One is to use an init process; there are, I think, two or three hundred variations of these, and the one I know is dumb-init, by Yelp. That provides an init process which handles those signals correctly, and it can fix your shell-script forking and get those children to shut down correctly. Another: like we did for the entrypoint, you can have your init container copy a kill binary onto that shared directory, and then you can just invoke it from there, because you know the path to it. And actually, you can write kill in about 20 lines of Go; you can just write your own kill command, and there's a sketch of that a little further down. And to mitigate the PID 1 issue, you can use the pidof command to figure out which PIDs to signal, and there are various other ways to do that. Now, I can't talk about this without saying: if you want to find out more, Jason and Christie Wilson spoke about this in depth at KubeCon 2019, which feels like a very long time ago, and there's a link in the slides if you want a load more depth on it.

It's hard to talk about writing your own controller and creating pods, I think, without talking a little bit about sidecars. So, a little bit of revision for anybody who doesn't know what a sidecar is: a sidecar is just a container that runs next to your main container and provides some kind of facility, usually some kind of cross-cutting concern. This example is straight out of the Kubernetes documentation, and it shows a log-collection sidecar. Pretty standard stuff. Now, the problem with sidecars is that we don't control them, and they can become quite unruly. We don't know how they behave. We don't know if they're going to handle SIGTERM correctly. We don't even know if they have an init process. They're a bit of a black box to us.

And it gets even worse when we talk about injected sidecars, like those that come with Istio and Vault. An injected sidecar is the term we use to describe a container that's added to the pod spec after creation by a mutating webhook controller. And because it's added by that mutating webhook controller, we just don't have any information about it. We can't intercept it or change it. We can't rewrite the entrypoint. Now, Istio does provide a /quitquitquit endpoint (I think that's how you pronounce it, quit-quit-quit), but it's not really a standard. And our solution today, certainly for Workflows, is that if people have Istio or Vault running, we say: well, disable it. And that's a shame, because people want to use those technologies for really good reasons, and we'd really love to be able to support them doing that.

Container IPC, or, as I've termed it, CIPC. Again, made up last Wednesday. It was a big day. It was a productive day of making up new terms. So why would we want to do container IPC? What are our use cases for it? Well, it's typically about sharing data between two containers. For example, you want to download a file from S3 or some other kind of bucket storage and make it available to the main container. Or you might want to do some kind of remote procedure calls between the two containers. Or you might want to stream data: get messages from Kafka and provide those to the main container, among quite a few different use cases, I would say. Now, container IPC is just Unix IPC, and there are somewhere between five and nine different ways of doing Unix IPC. I'm going to talk about a couple of them that we've found work really well with our systems.
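First, though, here is that kill sketch promised above: a hypothetical, roughly twenty-line Go stand-in for a kill binary that also sidesteps the PID 1 problem by finding the target process by executable name under /proc, rather than assuming PID 1.

```go
// kill.go: a hypothetical stand-in for a kill binary, in roughly twenty
// lines of Go. It emulates pidof by scanning /proc for processes with the
// given executable name, sidestepping the PID 1 assumption entirely.
package main

import (
	"os"
	"path/filepath"
	"strconv"
	"syscall"
)

func main() {
	if len(os.Args) != 2 {
		os.Exit(64) // usage: kill <executable-name>
	}
	target := os.Args[1]
	links, _ := filepath.Glob("/proc/[0-9]*/exe")
	for _, link := range links {
		exe, err := os.Readlink(link)
		if err != nil || filepath.Base(exe) != target {
			continue // not our process (or unreadable)
		}
		pid, _ := strconv.Atoi(filepath.Base(filepath.Dir(link)))
		_ = syscall.Kill(pid, syscall.SIGTERM) // ask it to exit gracefully
	}
}
```

An init container can drop a statically linked build of this onto the shared volume, right alongside the entrypoint wrapper.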
The first mechanism, which both Argo and Tekton use, is the ubiquitous shared emptyDir volume. "Alex, how do I solve this problem?" "Well, I think it's probably solved by a shared emptyDir." Any Argo Workflows users will know that's not too far from the truth. So the shared emptyDir allows you to communicate between two processes, typically using some kind of marker file. One of the two processes writes a marker file into that directory, and the other process sits there polling for changes to that file; when the file has changed, it reads the contents and performs some kind of action. You can also, in a shared emptyDir, create a FIFO, using mkfifo or whatever your language's FIFO-creation API is, and that's quite fast if you just want to read and write bytes. But it's not particularly proven; if you look out there, there's not much documentation on it. The nice thing about shared emptyDir volumes is that they're really simple. I mean, just so simple. They're super secure and they're really robust. So they're great for what I would call slow IPC, where you don't have a lot of data going through and those messages don't change a lot.

But if you want something faster, there's another really great tool in the toolkit, and that's just HTTP. HTTP has the great benefits of being well known and easy to implement. Most programming languages come with an HTTP server and an HTTP client built in, so you don't have to worry about checking your dependency tree for problematic security issues; it's just part of the core SDK. You do need to define an API contract, but that's not particularly difficult. The nice thing is that it's actually relatively secure: you don't need HTTPS between two containers within a pod, because they share the pod's network namespace, so you can just use HTTP. And it's pretty fast. It's pretty fast especially if you're using HTTP keep-alive, so you don't pay the socket-establishment cost each time, and if you use Unix domain sockets as well, you can get a really nice performance and throughput benefit. When we rehearsed this earlier, I was telling Jason how Java 16 now supports Unix domain sockets, and it turns out I should also have been talking about Java 17, which is out recently as well. So yeah, Unix domain sockets are widely supported, and this is great for fast IPC, if you've got a lot of throughput.

Now, are there other ways of doing container IPC? These are some examples of the kind of throughput, in messages per second, you can get with some of the other technologies out there: things like pipes, POSIX and System V message queues, shared memory, and memory-mapped files. And if you just look at this graph, you can see that the throughput you can get from memory-mapped files is, what is that, 20 times? 20 times faster. So there are other ways, much, much faster than TCP. But these are kind of unproven, and in the research I've done on this topic, it's been a bit "here be dragons" on the internet. I'd love to hear from anybody who has looked at memory-mapped files between containers; I'd be fascinated to know what their results were.
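Here is a sketch of the fast-IPC option just described: HTTP over a Unix domain socket. The socket would live on a shared emptyDir, since containers share the pod's network namespace but a Unix socket needs a shared filesystem path. Both halves are shown in one program for brevity, and /ipc/app.sock is a placeholder path.

```go
// A sketch of fast container IPC: HTTP over a Unix domain socket. The socket
// lives on a shared emptyDir (containers share the pod's network namespace,
// but a Unix socket needs a shared filesystem path). Both halves are shown in
// one program for brevity; /ipc/app.sock is a placeholder path.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Server side (in a real pod, this runs in one container).
	ln, err := net.Listen("unix", "/ipc/app.sock")
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/data", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello from the sidecar")
	})
	go http.Serve(ln, nil)

	// Client side (the other container). The dialer ignores the URL's host
	// and always connects to the socket; keep-alives come with the default
	// transport settings, so repeated calls reuse the connection.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/ipc/app.sock")
		},
	}}
	resp, err := client.Get("http://ipc/data")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```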
Over to my colleague, Jason. Thank you. So, like with containers, we also need to pass data from one pod to another pod efficiently. For example, the canonical use cases for this are: if a task does a git clone of some revision of some repository, it might need to pass the commit that was actually checked out to the next task in the pipeline; or if it built a container image, it would need to pass the digest to the next task that signs it or scans it or does something else with it. We also need to expose that information up to the user looking through the API or the UI or the CLI. And these are all short-lived containers that we don't necessarily control, so we don't have a lot of options for having users request HTTP endpoints directly on those containers to get that information.

And so in Tekton, at least, we've found a fun little workaround using a little-known feature called termination messages. Not a lot of people know about this, or I don't think they do, but if you write to this magical path in your container, /dev/termination-log by default, it will magically get collected by the kubelet and written up to the pod status for that container. So this is a little way to ferry information out of your container, through the pod, up to the API server. And it's configurable with the container's terminationMessagePath. So yeah, the more you know.

The way Tekton uses this is: if a step container writes to /tekton/results/<something>, then after the step is complete, the injected entrypoint that Alex talked about before will scan all of /tekton/results and see if there's anything in there. It collects that information, stuffs it into a JSON string, and writes it to its termination message path; that gets collected by the kubelet and written up to the API server. The controller watching that pod pulls out that JSON and puts it into the TaskRun status, where it goes on to other tasks and gets shown to end users. We also use this to report the actual start time of the containers. Like Alex said, all the containers start at once, then the first step's subprocess starts, and the second step's subprocess only starts when that's done. So we lose the actual start time of each of these containers, but we write it to the termination message path as well.

There are some limits with this, though, that we have started to hit: the kubelet will only collect 4K of data per container, and only 12K of data across all the containers in the pod. This is mostly enough to get the job done if we're talking about git commit SHAs and container image digests and timestamps, relatively small bits of information, but it starts to break down if you do anything more crazy than that, and it's really only a matter of time before people come and ask if they can do something more crazy than that. To poorly paraphrase Steve Jobs: "4K should be enough for anybody" is not actually true, as it turns out. And we've talked about compressing this data, or encoding it in something better than JSON, but ultimately, if the 4K limit is there, you're going to hit it one way or another eventually.
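A sketch of that collection step, with the paths treated as illustrative: gather result files, JSON-encode them, and write the blob to the termination message path for the kubelet to pick up.

```go
// A sketch of the results collection step described above; the paths mirror
// the ones mentioned in the talk but should be treated as illustrative.
package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

func main() {
	const resultsDir = "/tekton/results" // where steps drop their result files
	results := map[string]string{}
	entries, _ := os.ReadDir(resultsDir)
	for _, e := range entries {
		if b, err := os.ReadFile(filepath.Join(resultsDir, e.Name())); err == nil {
			results[e.Name()] = string(b)
		}
	}
	blob, _ := json.Marshal(results)
	// The kubelet collects this file (up to 4K per container, 12K per pod)
	// into the pod's containerStatuses[].state.terminated.message, where the
	// controller watching the pod can read it back out.
	_ = os.WriteFile("/dev/termination-log", blob, 0o644)
}
```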
So something we've started to look into is, instead of writing to that termination message path, having that injected entrypoint contact the API server and write to a ConfigMap. The ConfigMap max size is much, much larger than 12K, and so for every TaskRun, we'd create a ConfigMap to hold its results. The entrypoint will write that data to the ConfigMap, and we can tightly, narrowly scope the RBAC on that results object so that the entrypoint is only allowed to write to it and the controller is the only one allowed to read from it, so you can't have cross-task contamination in the results. And that's basically exactly what Argo Workflows does, so it's nice to have proof that it works.

There are, however, some disadvantages. If we want to use ConfigMaps, we can use ConfigMaps; but if we wanted to use our own type that we define, we'd now have to define that type and manage, version, upgrade, and validate it. And the bigger concern is the additional load on the API server. Instead of just writing to the pod, which we already use and update all the time, we're also making frequent writes to this ConfigMap or other custom resource. And we have to create RBAC for that on every new task and every new execution, and manage it, and delete things when they're done so they don't leak; it can get difficult.

That leads me to my next issue, which we have started to hit: custom resource proliferation. As I said before, custom resources are great. Tekton wouldn't exist without them. Argo wouldn't exist without them. Plenty of other things in the ecosystem would not exist if Kubernetes didn't provide an extensible API server. But fundamentally, they're not magic. At the end of the day, CRDs are just writes to etcd, and etcd, while also really great, is not magic either. It's not the key to unlocking free, infinite, scalable storage. And if you try to treat it that way, like some people do, you will hit limits, and when you hit those limits, you will experience pain, in a few dimensions. One way you can mess up etcd is to write too many bytes. Like I said, it's not infinite storage; you will eventually hit some limit, and etcd will start to fall over. If you create too many tiny objects, however many bytes they total, etcd will start to fall over. And if you're constantly writing requests to etcd through the API server, constantly updating it, it won't like it, and it will fall over. Destabilizing etcd is really, really, really bad. The cluster just starts to act funny, things don't work, requests start to time out, pagers go off, and you get angry calls from SREs. And the worst thing is that you can't debug it, because you're using the system that's destabilized to debug it. Everything just sort of turns to mush underneath you, and it's awful.

We have discovered some mitigations for this. One really easy one: don't use Jobs when you really just want Pods. If you create a Job, it will just create a Pod for you, and now you've created double the resources and double the QPS, because when the Pod updates, it updates the Job, and then you read the Job. So that was an easy one; that's like 50% off right there. Avoid unnecessary updates to your objects if you can: in your reconcile loop, instead of making ten requests to update the status of something, batch those until the end and make one update at the end (there's a sketch of that below). And avoid duplicating the same information across a bunch of objects. Tekton actually doesn't do this well today: a TaskRun's status is copied and aggregated into the PipelineRun status for the user's convenience, but that means we have to make two updates every time anything changes.
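To make that batching mitigation concrete, here is a generic, controller-runtime-style sketch (not Tekton's or Argo's actual reconciler): apply every status mutation to a local copy during the reconcile, then issue at most one write at the end.

```go
// A generic sketch of the batching mitigation: mutate a local copy of the
// object throughout the reconcile, then do at most one status write, and
// only if something actually changed.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type reconciler struct{ client.Client }

func (r *reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var pod corev1.Pod
	if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	before := pod.DeepCopy()

	// ... apply every status change for this tick to the local copy here,
	// with no API calls in this section ...

	// One write at the end, and only if the status actually changed.
	if !equality.Semantic.DeepEqual(before.Status, pod.Status) {
		if err := r.Status().Update(ctx, &pod); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}

func main() {} // wiring the reconciler into a manager is omitted from the sketch
```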
Also avoid monolithic mega-objects like the PipelineRun, because you'll update them more often and they start to hit those size limits we talked about. At the same time, avoid lots and lots of little objects, because you'll also end up making a bunch of QPS to the API server, and you'll end up with maybe too many objects for etcd to be happy. Argo actually has a really interesting feature that I didn't know about until we were working on this talk: if the status of an Argo object gets too big, the controller will offload it to another database and just leave a pointer to it. So instead of your status, it says: go chase this pointer to the real database to get that information. That's really interesting.

Other mitigations we've had for custom resource proliferation are just resource quotas. In a namespace, you can say: this namespace is not allowed to have more than 1,000 TaskRuns, ever, and if you try to create the 1,001st, it will fail (there's a quick sketch of this below). You might also want to prune old resources. We do this in Tekton a lot today, but the question there is always: do you want to prune by age, say, only keep the last week of history, or do you want to prune by number of resources, say, only keep the last 10,000 runs, however old they are? But fundamentally, users don't want to lose this data, especially if it's security-sensitive, like "what did we deploy three months ago?" "Well, sorry, we needed the space, so we deleted all record of that deployment ever happening" is not a good answer for users.

So Tekton and Argo have also solved this in a similar way, or are planning to. In Tekton, we have the Tekton Results project, and in Argo, they have the Argo Workflow Archive, which effectively run another controller to watch for these executions. When they finish, it copies that data to another, relational database and then prunes the object from the Kubernetes API server. This also gives us an opportunity for better indexing and searching: you can search for failed TaskRuns that took more than 30 minutes in the last 20 days, which is not something, as far as I know, you can do with Kubernetes field selectors today. But unfortunately, it means we lose some of the nice ecosystem stuff: kubectl doesn't work against it, and all of these things need to be sort of custom-built to support that.
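As a quick aside on that quota mitigation: object-count quotas cover custom resources using the standard "count/<resource>.<group>" syntax. A sketch, using Tekton's TaskRuns and the 1,000 figure from the talk:

```go
// A sketch of the object-count quota mitigation: cap how many TaskRuns can
// exist in a namespace. "count/<resource>.<group>" is the standard
// object-count quota syntax; the 1,000 figure is the example from the talk.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var quota = &corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "taskrun-cap", Namespace: "ci"},
	Spec: corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{
			// The 1,001st TaskRun created in this namespace will be rejected.
			"count/taskruns.tekton.dev": resource.MustParse("1000"),
		},
	},
}

func main() { _ = quota }
```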
So, are we going to do this? Yeah, we're going to do it. So what can we do about all this? Well, we could do nothing, which is the first option I always present to everybody in my organization. What does doing nothing mean? We just keep working around it. It's kind of value-add for both Tekton and Argo, because Kubernetes doesn't support this out of the box, so we get to add that value, and if it were supported, we wouldn't get to. But we could... boop, boop, boop, boop. Caps! That was the thing. We enjoyed that. I did. You did? Okay, we won't do it again. We'll never do that again.

So, it would be great to go out and add some additional features to Kubernetes. One would be an API to start and stop containers. That would be fantastic: a container sub-resource or something along those lines, where you can say stop this container, start this container, don't start this container just yet. It would also be nice to be able to declare the DAG as a dependency tree inside the pod spec: say container two is dependent on container one. We could have that; that would be pretty neat. We've talked a bit about standardizing the commandlet pattern, maybe providing a library that is well-tested and robust that people could just use, and that commandlet would expose some kind of API that would let you use kubectl exec or curl to invoke commands on it, and it would deal with them for you. And for resource quotas: people use resource quotas to limit the number of custom resources in a namespace (they're typically used for limiting the number of pods, but you can use them for custom resources), and it would be nice if we could specify what to do when there are too many. Some kind of strategy saying, for example, delete the oldest of these custom resources and clean up afterwards. I think that would be pretty neat. I guess we're freestyling a lot of these ideas, aren't we? Yeah. Yeah, I think it's an interesting space to think around in.

So, with that in mind, do we have any questions from the audience? Yeah. So the question was: a common use case, for Tekton at least, is to clone some source from a repo and then build it, run tests, scan it, do, let's say, five other things in parallel. Currently, today, we would ask you to make a PVC, have one task write that data to the PVC, and then share that PVC read-only with the other tasks. And obviously there are downsides to that: now you have this PVC you have to clean up, or at least have around, and it can limit the schedulability of those things. There's some interesting work going on around being able to run all of a pipeline in one pod. Effectively, you'd have one big pod that combines the resources of all of those things, they'd all run on the same node, and that way you wouldn't have to write the persistent data outside the execution of that pod. If you find me afterwards, I'll send you a link to the actual proposal for that, but I think that's an exciting frontier for solving that problem.

Yeah, so, sorry, the follow-up was: instead of using PVCs, could you write to some external object store, S3, GCS, whatever? Absolutely, that's absolutely an option. It has more or less all the same problems as PVCs: you have to write this data somewhere, it costs money while it sits there, it costs money to delete it, and, you know, it's management overhead. I really think the fundamental problem is that you don't want that data to exist longer than it's being operated on, in which case, if it were just isolated to the pod, it would simply disappear when the pod disappears. A coda to this is that we actually do something like this in Argo: you can share data between the steps in your workflow using S3. It's sometimes cheaper than PVCs, sometimes faster, and sometimes more expensive; it kind of depends on what your use case is.

Okay, next question. Okay, so the question was: did you look at Kubernetes Events as a way to message between pods? I'm going to say no, from Argo's side; I don't think we did. I mean, it has all of the same API server scalability problems, right? If that's your message bus, the way you communicate between containers in a pod or pods in a workflow?
Well, you might be able to get by with it, but it's going to be plagued by all the problems we've described here: it's not built for this, it's not designed for that scale. So maybe. By all means, we can experiment and see what falls over, but I don't know if it's better than what we have today. Okay, any questions from the middle? From the back, the middle?

So the question was: with Vault and Istio, is there an alternative for Vault, using the CSI injector ahead of time? Yeah, yeah, you could do that. It's the fact that they use a mutating webhook that makes it difficult to work with. And specifically that they inject new containers that we don't know about? Yeah, yeah. If they were injecting other stuff, you know, whatever, I don't care. Volumes would be fine, yeah. Yeah.

I'll take this one. Sure. So the question was, and you'll understand why I'm laughing in a second: when you're running a DAG and you have a container in it, what happens when, believe it or not, it gets shot in the head, terminated, because Kubernetes might want the resources? And I feel like the one contractual promise Kubernetes gives you is: I will kill your pod. At some point, I'm going to kill your pod when you don't want it. And it's a really hard problem to solve, because we're trying to run really reliable, robust workloads on Kubernetes, and those two things are in diametric opposition. Yeah, they are fighting against one another. And there are mitigating actions you can take, like a pod disruption budget, for example. In Workflows, the main thing we do is allow you to just retry those steps automatically, and do things like retrying them on a different node within your cluster, changing where they get run. That's the main thing we do. And the good thing to do, and I don't know if this is the case for Tekton, is: don't have very long-running processes which take an hour to run and cost an absolute fortune, because if one gets 55 minutes in and is terminated, you have to do all that work over again, and that can happen again and again. It's better to have some kind of memoization going on in the process.

So the question is: do pre-stop hooks help? We don't use them, but yes, I think they can, yes. I mean, fundamentally, this points to another way in which Kubernetes was not designed for this. Kubernetes is designed for replicated serving workloads, where if somebody unplugs a node, it should be fine, right? We very much need that pod to finish for the next one to start and for things to work. So it's a difference of assumptions, between what Kubernetes assumes and what we assume, yeah.

Do we have... she's giving me a thumbs up. We're good. We're good, we're good. Okay, any more questions? All right. So, just one more thing. If you want to find out more... oh, has the size gone a bit strange on this? Yes, it has; never mind. I don't know why that text is so small. I'm at the Intuit booth today; I'll be on the booth from 3:30 if you want to talk about Argo-related stuff, or you can obviously go to the Argo booth to speak to some of the engineers there as well. Will you be on the Red Hat booth? I'll be around, yeah. Cool, cool. So that's where you can find out more. Yeah, thank you. Thank you very much. Thank you.